
openshift

Here are 1,464 public repositories matching this topic...

jx
mcabrito
mcabrito commented Apr 15, 2020

Summary

I’m trying to use a Chart Repository on an external Nexus, but I hit a problem in the pipeline when using the command:

jx step helm release

When this command uploads the chart file, my $CHART_REPOSITORY is changed to include the /api/charts path.

+ jx step helm release --verbose
DEBUG: Using helmBinary helm with feature flag: template-mode
DEBUG: Initialising Helm 'i
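The rewrite being reported can be illustrated with a minimal shell sketch (the Nexus URL here is hypothetical): `/api/charts` is the ChartMuseum-style upload path that gets appended, which a Nexus hosted Helm repository does not expect.

```shell
# Hypothetical repository URL; the tool appends the ChartMuseum upload path,
# so the effective upload endpoint differs from the configured repository.
CHART_REPOSITORY="https://nexus.example.com/repository/helm-hosted"
UPLOAD_URL="${CHART_REPOSITORY}/api/charts"
echo "${UPLOAD_URL}"
```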
peterdewinter
peterdewinter commented Apr 29, 2020

In the documentation, a reference is made to the OpenShift hardening guide for version 3.11 or 3.10.
Where can I find the benchmark/guide document for this? Is it available for download? (I have a Red Hat access account.)

ehmusman
ehmusman commented Apr 15, 2020

General information

I am installing Kubernetes on Ubuntu 18.04. First I installed kubectl, then VirtualBox, then Minikube. When I check the version of kubectl, it gives the following error:

  • Minishift version:
  • OS: Linux / macOS / Windows
  • Hypervisor: KVM / Hyper-V / VirtualBox / hyperkit

error: Missing or incomplete configuration info. Please point to an existing, complete
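That error usually means kubectl cannot find a kubeconfig yet. A minimal check, assuming the default kubeconfig path (running `minikube start` normally creates this file):

```shell
# Assume the default kubeconfig location unless KUBECONFIG is already set.
KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$KUBECONFIG" ]; then
  echo "kubeconfig found: $KUBECONFIG"
else
  echo "no kubeconfig; 'minikube start' creates one"
fi
```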

Mad-ness
Mad-ness commented Feb 21, 2019

Description

I'm trying to set up openshift-3.11 on CentOS-7 on VirtualBox VMs:

# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.o
strimzi-kafka-operator
carlosjgp
carlosjgp commented Apr 23, 2020

Suggestion / Problem
After enabling the KafkaConnectors CRD through the annotation, as described in the documentation,
we lost all the configuration of the existing connectors that had been configured through the REST API.

Our expectation was that both could live together.
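For context, the annotation in question is set on the KafkaConnect resource; a minimal sketch is below (the resource name and apiVersion are assumptions for illustration). Once connector resources are enabled, connectors created directly through the REST API are no longer managed that way, which matches the behavior reported above.

```yaml
# Sketch only: name and apiVersion are assumed, not taken from the report.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
```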

Documentation Link

techdragon
techdragon commented Apr 21, 2020

Since this has transitioned to using an in-repo Helm chart, the Helm chart documentation should be checked and updated. The current Helm chart in this repo seems to be built from several layers of templating, which makes it harder to understand and makes the correctness of this documentation all the more essential.

  • The last Helm Stable repo version had the appropriate section documenting the
kameshsampath
kameshsampath commented Nov 12, 2018

Currently the Jaeger variables are added as ENV in Docker file:

  ENV JAEGER_SERVICE_NAME=customer\
  JAEGER_ENDPOINT=http://jaeger-collector.istio-system.svc:14268/api/traces\
  JAEGER_PROPAGATION=b3\
  JAEGER_SAMPLER_TYPE=const\
  JAEGER_SAMPLER_PARAM=1

We could avoid this and add them to the Kubernetes Deployment YAML instead

(OR)

Except for JAEGER_SERVICE_NAME other variables
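The first alternative above can be sketched as a Deployment fragment carrying the same variables; the container name is an assumption based on JAEGER_SERVICE_NAME.

```yaml
# Sketch of moving the Jaeger variables from the Dockerfile into the
# Deployment spec (container name "customer" is assumed).
spec:
  template:
    spec:
      containers:
        - name: customer
          env:
            - name: JAEGER_SERVICE_NAME
              value: customer
            - name: JAEGER_ENDPOINT
              value: http://jaeger-collector.istio-system.svc:14268/api/traces
            - name: JAEGER_PROPAGATION
              value: b3
            - name: JAEGER_SAMPLER_TYPE
              value: const
            - name: JAEGER_SAMPLER_PARAM
              value: "1"
```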

DRAKUN
DRAKUN commented Sep 5, 2019
INFO Loading bundle: crc_libvirt_4.1.11.crcbundle ... 
INFO Extracting bundle: crc_libvirt_4.1.11.crcbundle ... 
INFO Creating VM ...                              
INFO Verifying validity of the cluster certificates ... 
ERRO Error occurred: Failed internal dns query: Temporary Error: ssh command error:
command : host -R 3 foo.apps-crc.testing
err     : exit status 1
output  : Host foo
dustymabe
dustymabe commented Feb 16, 2018

If there happens to be an image stream that already exists and I want to use it for a container that is part of my kedge.yaml file, how do I do that? I was able to bring up a container using the full name of the image to pull from a container registry, but what I'd really like is for a new deployment to be triggered when that image stream updates as well.
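What is being asked for is an ImageChange trigger; a minimal DeploymentConfig fragment is sketched below (the names myapp and myimage are hypothetical), which redeploys whenever the referenced image-stream tag is updated.

```yaml
# Sketch only: "myapp" and "myimage" are placeholder names.
spec:
  triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - myapp
        from:
          kind: ImageStreamTag
          name: myimage:latest
```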

jetersen
jetersen commented Dec 12, 2019

I am currently trying to configure buildSync.json, and I have no idea what the proper way of configuring the preinstalled buildSync plugin is.

At the moment I am getting replication errors for artifactory-build-info, and I would like to configure it properly.

djdevin
djdevin commented Apr 17, 2020

Upgrading from v1, I ran the cluster-wide steps in the deployment guide.

The new pods failed. I checked the image stream and it was still using the old v1 images. I didn't catch it at first, but there was some error about an unknown parameter --expose...something (sorry), so I figured the image was still out of date.

I deleted the image stream and re-ran all the deployment commands, and everything worked beca
