cncf
Here are 245 public repositories matching this topic...
Feature idea summary
The cgroups plugin supports only the proportional and max block I/O policies. We should support the BFQ scheduler as well. Disk stats for BFQ are exposed in the blkio.bfq.io_service_bytes and blkio.bfq.io_serviced files.
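For context, the cgroup v1 io_service_bytes files list per-device, per-operation byte counts, one "major:minor Op bytes" line per entry, ending with a Total line. A minimal Go sketch of summing the counts per operation (the function name and sample values are illustrative, not the plugin's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseIOServiceBytes sums per-operation byte counts from the
// blkio.bfq.io_service_bytes format: "<major>:<minor> <Op> <bytes>"
// lines, terminated by a "Total <bytes>" line (which has only two
// fields and is skipped here).
func parseIOServiceBytes(content string) map[string]uint64 {
	totals := make(map[string]uint64)
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 3 {
			n, err := strconv.ParseUint(fields[2], 10, 64)
			if err != nil {
				continue
			}
			totals[fields[1]] += n
		}
	}
	return totals
}

func main() {
	sample := "8:0 Read 4096\n8:0 Write 8192\n8:16 Read 1024\nTotal 13312\n"
	totals := parseIOServiceBytes(sample)
	fmt.Println(totals["Read"], totals["Write"])
}
```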
Here we confuse the user with two things, and we should have stuck with the first advice.
drewpca(pts/0):~% minikube start -p foo --memory 6GB
😄 [foo] m
Helm documentation states the following:
1) (k8s) metadata.name is restricted to a maximum length of 63 characters because of limitations to the DNS system
2) For that reason, release names are (DNS labels that are) limited to 53 characters
Statement 1) is not correct.
k8s does not impose a max length of 63 characters on resource names.
The actual max length for a resource name is 253 characters.
-
Updated
Apr 12, 2021 - C++
PR #2892 attempted to bump the otelcol dependency to 0.23, but a breaking change in that version would cause a test failure:
$ go test ./...
go: downloading go.opentelemetry.io/collector v0.23.0
# github.com/jaegertracing/jaeger/v2/storage/memory [github.com/jaegertracing/jaeger/v2/storage/memory.test]
storage/memory/exporter.go:49:4: cannot use s.pushTraceData (type func(c

One of the AWS China regions now supports Route53: https://aws.amazon.com/about-aws/whats-new/2020/05/amazon-route-53-is-now-available-in-AWS-china-region/
If someone with an AWS China account can try creating a cluster using Route53 rather than gossip k8s.local and update docs/aws-china.md with their findings, that would be much appreciated. Also identify any changes that need to be made in Kops t
When I run "vtctlclient help ApplyRoutingRules", I get information on how to use the command. But when I run "vtctlclient help GetRoutingRules", I do not get such information; instead, it looks like the routing rules themselves are displayed. Output below.
jgagne@ip-172-31-40-211:~/my-vitess-example$ vtctlclient help GetRoutingRules
{
}
jgagne@ip-172-31-40-211:~/my-vitess-example$ vtctlclient hel
Expected Behavior
We currently use the AWS Load Balancer Controller to manage pod mappings to the AWS Application Load Balancer. As part of this setup, the AWS LB Controller manages a custom condition on our pods, which looks like:
status:
conditions:
- lastProbeTime: null
lastTransitionTime: null
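A controller checking such a custom condition typically scans the pod's condition list by type. A minimal sketch with stand-in types (the condition type string is illustrative, and PodCondition here is a simplified stand-in for the k8s.io/api core type):

```go
package main

import "fmt"

// PodCondition mirrors the fields shown in the status snippet above.
type PodCondition struct {
	Type   string
	Status string
}

// findCondition returns the condition with the given type, as a
// controller would when checking a custom readiness-gate condition
// managed by an external component like the AWS LB Controller.
func findCondition(conds []PodCondition, t string) (PodCondition, bool) {
	for _, c := range conds {
		if c.Type == t {
			return c, true
		}
	}
	return PodCondition{}, false
}

func main() {
	conds := []PodCondition{
		{Type: "Ready", Status: "True"},
		{Type: "target-health.example/my-tg", Status: "True"}, // hypothetical custom condition
	}
	c, ok := findCondition(conds, "target-health.example/my-tg")
	fmt.Println(ok, c.Status)
}
```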
Prometheus v2.26.0 has the flag --enable-feature=promql-negative-offset to enable negative offset.
It works fine.
But I can't enable it with Thanos 0.19.0.
It looks like the flag is not available for Thanos Query.
So a query like my_metric_status offset -24h fails with the error "1:31: parse error: unexpected <op:-> in offset, expected duration".
Is this feature implemented?
Change '登陆' to '登录' ('登陆' means 'to land'; '登录' is the correct word for 'log in')
Describe the Bug
Change '登陆' to '登录'
/kind bug
/assign @leoendless
/priority Low
/milestone 3.1.0
/area console
Description
- add an ALLOW action that skips the remaining block rules and accepts the incoming request
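First-match rule evaluation with such an ALLOW short-circuit can be sketched as follows; the types, matcher shape, and accept-by-default behavior are assumptions for illustration, not the project's actual API:

```go
package main

import "fmt"

type Action int

const (
	Block Action = iota
	Allow
)

type Rule struct {
	Match  func(path string) bool
	Action Action
}

// evaluate applies rules first-match-wins: a matching ALLOW rule
// short-circuits, skipping the remaining block rules; if nothing
// matches, the request is accepted by default.
func evaluate(rules []Rule, path string) Action {
	for _, r := range rules {
		if r.Match(path) {
			return r.Action
		}
	}
	return Allow
}

func main() {
	rules := []Rule{
		// ALLOW rule placed before a catch-all block rule.
		{Match: func(p string) bool { return p == "/healthz" }, Action: Allow},
		{Match: func(p string) bool { return true }, Action: Block},
	}
	fmt.Println(evaluate(rules, "/healthz") == Allow, evaluate(rules, "/admin") == Block)
}
```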
Is your feature request related to a problem? Please describe.
It's very easy to see ingester and HA ring status, but distributor ring status is only indirectly viewable via Consul
Describe the solution you'd like
A status page that shows the distributor ring status similar to how we can currently view ingester ring status.
Describe alternatives you've considered
Currently I ta
What would you like to be added/modified:
To improve project stability, we need more tests to cover corner cases.
Code coverage is currently around 50%; we need to add more tests to improve it.
To improve case coverage, we may need a list of cases to track the work.
For code coverage, simply check bef
Bug Report
https://github.com/chaos-mesh/chaos-mesh/blob/master/controllers/awschaos/ec2stop/types.go#L56-L60 These lines of code make it possible to use another endpoint (like localstack) for AWS experiments. However, the Recover path of ec2stop and the other kinds of chaos does not have this code.
Surprisingly, the integration test passed even with this bug. The expected pheno
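One way to avoid this kind of divergence is a single helper that both the apply and recover paths share. A stand-in sketch (the config struct and helper are illustrative, not the chaos-mesh types):

```go
package main

import "fmt"

// clientConfig is a stand-in for the AWS client options built in
// types.go; Endpoint, when set, points the SDK at an alternative
// API endpoint such as localstack.
type clientConfig struct {
	Region   string
	Endpoint string
}

// withEndpoint applies the optional endpoint override. The bug above
// is that only the apply (stop) path did the equivalent of this;
// Recover built its config without it and always hit the real AWS
// endpoint. Centralizing the override in one helper used by both
// paths keeps them consistent.
func withEndpoint(cfg clientConfig, endpoint string) clientConfig {
	if endpoint != "" {
		cfg.Endpoint = endpoint
	}
	return cfg
}

func main() {
	base := clientConfig{Region: "us-east-1"}
	fmt.Println(withEndpoint(base, "http://localstack:4566").Endpoint)
}
```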
I am cloning this git repository into the path /root/src/github.com/
My GOPATH=/root/src/
My GOROOT=/root/
My GOBIN="/root/src/github.com/virtual-kubelet/bin"
When I enter the /root/src/github.com/virtual-kubelet directory and run the make build command, I get this error:
`which: no gobin in (/bin:/root/src/github.com//bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/loc
Support Downward API
We sanitize secrets when they arrive at the cache layer (https://github.com/projectcontour/contour/blob/main/internal/dag/secret.go#L34:6); however, that logic is duplicated in the dag package (https://github.com/projectcontour/contour/blob/main/internal/dag/accessors.go#L225).
In theory we should be able to rely on the secrets being valid once past the cache layer, but in any case we should centralize t
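A centralized validator might look like this sketch; the data keys follow the standard Kubernetes TLS secret layout, but the helper name and signature are mine, not Contour's:

```go
package main

import (
	"errors"
	"fmt"
)

// validTLSSecret is the single validation helper both the cache layer
// and the dag package could call, instead of each duplicating the
// checks. It verifies the two keys a kubernetes.io/tls secret must
// carry are present and non-empty.
func validTLSSecret(data map[string][]byte) error {
	if len(data["tls.crt"]) == 0 {
		return errors.New("missing tls.crt")
	}
	if len(data["tls.key"]) == 0 {
		return errors.New("missing tls.key")
	}
	return nil
}

func main() {
	// A secret missing its private key fails validation.
	err := validTLSSecret(map[string][]byte{"tls.crt": []byte("cert")})
	fmt.Println(err)
}
```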
Follow-up of kubernetes/kubernetes#98241 (comment). Currently we maintain a local registry for looking up the ClusterEvent obj by specifying a cluster event (string). The registry is not really necessary, as we can build the ClusterEvent obj on the fly in movePodsToActiveOrBackoffQueue().
/kind cleanup
/sig scheduling
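The proposed on-the-fly construction can be sketched with simplified types; the "Resource/Action" string encoding and both type definitions are assumptions for illustration, not the scheduler's actual representation:

```go
package main

import (
	"fmt"
	"strings"
)

// ClusterEvent is a simplified stand-in for the scheduler's event type.
type ClusterEvent struct {
	Resource   string
	ActionType string
}

// eventFromLabel builds the ClusterEvent directly from its string
// form (e.g. "Pod/Add"), removing the need for a lookup registry as
// the cleanup above proposes.
func eventFromLabel(label string) (ClusterEvent, bool) {
	parts := strings.SplitN(label, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return ClusterEvent{}, false
	}
	return ClusterEvent{Resource: parts[0], ActionType: parts[1]}, true
}

func main() {
	ev, ok := eventFromLabel("Node/Add")
	fmt.Println(ok, ev.Resource, ev.ActionType)
}
```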