openshift
Here are 1,464 public repositories matching this topic...
Summary
I'm trying to use a chart repository hosted on an external Nexus, but I hit a problem in the pipeline when I run the command:
jx step helm release
When this command uploads the chart file, my $CHART_REPOSITORY is rewritten to include the /api/charts path.
+ jx step helm release --verbose
DEBUG: Using helmBinary helm with feature flag: template-mode
DEBUG: Initialising Helm 'i
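The rewritten URL can be illustrated with a short sketch; the repository URL below is a placeholder, and the appended /api/charts suffix is the ChartMuseum-style upload path, which a Nexus-hosted chart repository does not serve.

```shell
# Placeholder Nexus repository URL; jx assumes a ChartMuseum-style server
# and appends its upload endpoint, producing the path seen in the report.
CHART_REPOSITORY="https://nexus.example.com/repository/helm-hosted"
UPLOAD_URL="${CHART_REPOSITORY}/api/charts"
echo "$UPLOAD_URL"
```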
In the documentation, a reference is made to the OpenShift hardening guide for version 3.11 or 3.10.
Where can I find this benchmark/guide document? Is it available for download? (I have a Red Hat access account.)
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
General information
I am installing Kubernetes on Ubuntu 18.04. First I installed kubectl, then VirtualBox, then Minikube. When I check the version of kubectl it gives the following error:
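That error usually means kubectl cannot find a kubeconfig. A minimal sketch of the lookup order (the paths are the standard defaults, not taken from the report):

```shell
# kubectl reads $KUBECONFIG if set, otherwise ~/.kube/config;
# `minikube start` writes cluster credentials into that file.
KUBECONFIG_PATH="${KUBECONFIG:-$HOME/.kube/config}"
echo "resolved kubeconfig: $KUBECONFIG_PATH"
```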
- Minishift version:
- OS: Linux / macOS / Windows
- Hypervisor: KVM / Hyper-V / VirtualBox / hyperkit
error: Missing or incomplete configuration info. Please point to an existing, complete
There's a non-deterministic test in client_factory_test.go that sometimes fails. Most of the time it passes, so it hasn't caused much harm and would be low priority at first glance, but it would be nice to see if there's a better way to write it.
The incriminating code is at line 54, I think:
time.Sleep(time.Millisecond * 100)
Example of failure: https://travis-ci.org/github/kiali/ki
Description
I'm trying to set up openshift-3.11 on CentOS-7 on VirtualBox VMs:
# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.o-
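For scripts that need to branch on the host OS, the fields shown above can be read by sourcing the os-release file; the sketch below writes a trimmed local copy so it is self-contained (on a real host you would source /etc/os-release directly).

```shell
# Trimmed copy of the os-release data shown above
cat > /tmp/os-release <<'EOF'
NAME="CentOS Linux"
VERSION_ID="7"
ID="centos"
ID_LIKE="rhel fedora"
EOF

# Sourcing the file exposes each field as a shell variable
. /tmp/os-release
echo "detected: $ID $VERSION_ID (like: $ID_LIKE)"
```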
Add some fields to pods_table so that, as a result, it looks like kubectl get pod -o wide:
'NAME', 'STATUS', 'READY', 'RESTARTS', 'AGE', 'IP', 'NODE'.
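A quick sketch of the proposed column layout, rendered with printf from sample values (the pod data is made up, not live output):

```shell
# Seven columns matching `kubectl get pod -o wide`; printf reuses the
# format string for each group of seven arguments, one row per group.
printf '%-8s %-8s %-6s %-9s %-4s %-12s %s\n' \
  NAME STATUS READY RESTARTS AGE IP NODE \
  web-0 Running 1/1 0 5d 10.244.0.12 node-1
```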
You must use KubernetesDeserializer.registerCustomKind in order to successfully watch CustomResourceDefinitions, as implemented in CRDExample. Yet this type is in a package named "internal". Does this imply it shouldn't be used by library users? Sh
Suggestion / Problem
After enabling the KafkaConnectors CRD through the annotation, as described in the documentation,
we lost all the configuration of the existing connectors that had been configured through the REST API.
Our expectation was that both could live together.
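For reference, the annotation in question is presumably Strimzi's strimzi.io/use-connector-resources; the resource name below is hypothetical, and the apiVersion varies by Strimzi release.

```yaml
# Once this annotation is "true", the operator manages connectors
# declaratively from KafkaConnector resources and removes any connector
# it does not find as a resource, which matches the reported behavior.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
```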
Documentation Link
Since this has transitioned to using an in-repo Helm chart, the Helm chart documentation should be checked and updated. The current Helm chart in this repo seems to be built from several layers of templating, which makes it harder to understand and makes the correctness of this documentation all the more essential.
- The last Helm Stable repo version had the appropriate section documenting the
Add documentation comments to all of the VmOrTemplate operations methods
Compose
Hi, can you write a small example of how to use this image in docker-compose.yml?
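A minimal sketch of such a compose file; the image name and port are placeholders, since the request does not name them.

```yaml
version: "3.8"
services:
  app:
    # Replace with the image this repository actually publishes
    image: example/image:latest
    ports:
      - "8080:8080"
    restart: unless-stopped
```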
Kind of issue
Uncomment only one of these:
Bug
Feature request
Observed behavior
https://github.com/heketi/heketi/blob/master/client/cli/go/cmds/heketi_storage.go
There is a typo in the message:
"Volume %v alreay exists"
Expected/desired behavior
"Volume %v already exists"
Details on how to reproduce (minimal and precise)
[root@glusterfs-heketi-0 /]# he
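The one-character fix can be sketched as a substitution on the format string (the file path is in the report; running this against the repo requires a checkout, so the sketch operates on the string itself):

```shell
# The misspelled format string from heketi_storage.go
msg='Volume %v alreay exists'
# Apply the proposed correction
fixed=$(printf '%s' "$msg" | sed 's/alreay/already/')
echo "$fixed"
```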
Currently the Jaeger variables are added as ENV in the Dockerfile:
ENV JAEGER_SERVICE_NAME=customer \
    JAEGER_ENDPOINT=http://jaeger-collector.istio-system.svc:14268/api/traces \
    JAEGER_PROPAGATION=b3 \
    JAEGER_SAMPLER_TYPE=const \
    JAEGER_SAMPLER_PARAM=1
We could avoid this and add them to the Kubernetes Deployment YAML instead,
(OR)
except for JAEGER_SERVICE_NAME, the other variables
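As a sketch of the first option, the same settings expressed as Deployment environment entries (values copied from the Dockerfile above; the container name is a placeholder):

```yaml
spec:
  containers:
    - name: customer
      env:
        - name: JAEGER_SERVICE_NAME
          value: customer
        - name: JAEGER_ENDPOINT
          value: "http://jaeger-collector.istio-system.svc:14268/api/traces"
        - name: JAEGER_PROPAGATION
          value: b3
        - name: JAEGER_SAMPLER_TYPE
          value: const
        - name: JAEGER_SAMPLER_PARAM
          value: "1"
```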
I can't seem to find the documentation to upgrade the glusterfs pods. Might as well add a flag to gk-deploy or, maybe, just create a gk-upgrade script?
INFO Loading bundle: crc_libvirt_4.1.11.crcbundle ...
INFO Extracting bundle: crc_libvirt_4.1.11.crcbundle ...
INFO Creating VM ...
INFO Verifying validity of the cluster certificates ...
ERRO Error occurred: Failed internal dns query: Temporary Error: ssh command error:
command : host -R 3 foo.apps-crc.testing
err : exit status 1
output : Host foo
I find that there are two ways to fetch the GitHub token in the pipeline library:
withCredentials to look up cd-github, see: https://github.com/fabric8io/fabric8-pipeline-library/blob/0bff0f50a2af9a0dcc4d2c4abb28d1eafcbfdfb9/vars/mavenCI.groovy#L14
Fabric8Command.getGitHubToken, which uses a different mechanism. See: https://github.com/fabric8io/fabric8-pipeline-library/blob/0bff0f50a2af9a0
Which section(s) is the issue in?
In "Updating RHEL compute machines in your cluster"
What needs fixing?
subscription-manager repos --disabl
If there happens to be an image stream that already exists and I want to use it for a container that is part of my kedge.yaml file, how do I do that? I was able to bring up a container that used the full name of the container to pull from a container registry, but what I'd really like is for a new deployment to get triggered when that image stream updates as well.
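Outside of kedge, OpenShift can wire an existing image stream to a workload with an image-change trigger; the resource, namespace, and image stream names below are all hypothetical.

```shell
# Redeploy "myapp" whenever the myimagestream:latest tag is updated
oc set triggers deploymentconfig/myapp \
  --from-image=myproject/myimagestream:latest -c myapp
```

Whether kedge itself can emit such a trigger is a separate question; this only shows the underlying OpenShift mechanism.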
It would be nice if reshifter provided the ability to specify a prefix within the S3 bucket under which the backups should be written. Right now reshifter S3 backups just upload the zip to the S3 bucket root.
We need to devise logic for finding/detecting Kubernetes ingress controllers inside the cluster, as they are not a namespaced resource. In minikube, the default nginx-ingress-controller deployment is in the kube-system namespace, but is that a globally followed arrangement?
I am currently trying to configure buildSync.json, and I have no idea of the proper way to configure the preinstalled buildSync plugin.
At the moment I am getting replication errors for artifactory-build-info, and I would like to configure it properly.
Upgrading from v1, I ran the cluster-wide steps in the deployment guide.
The new pods failed. I checked the image stream and it was still using the old v1 images. I didn't catch it exactly, but it was some error about Unknown parameter --expose...something (sorry), so I figured the image was still out of date.
I deleted the image stream and re-ran all the deployment commands, and everything worked beca
- Cleanup if needed (careful, this deletes projects):
oc delete crd installations.istio.openshift.com
oc delete project istio-operator
oc delete project istio-system
- Create projects:
oc new-project istio-operator
oc new-project istio-system
- Add Istio CRDs:
git clone https://github.com/Maistra/istio-operator.git
cd istio-operator
oc apply -n istio-operator -f ./deploy/
oc apply
Installation method "Method