context: kubernetes/kubernetes#68478
The inotify code was removed from golang.org/x/exp several years ago. Therefore
importing it from that path prevents downstream consumers from using any module
that makes use of more recent features of golang.org/x/exp.
Given that this code is by definition frozen and that the long-term path should
be to migrate to fsnotify, replacing the current code with an identical standalone
copy carries no maintenance cost, and it unblocks other work, for example in
Kubernetes.
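A minimal sketch of what the change looks like from a consumer's point of view,
assuming a placeholder path for the standalone copy: only the import path
changes, while the frozen API (NewWatcher, Watch, the Event/Error channels)
stays identical.

```go
// Sketch only; "example.com/inotify" is a placeholder for wherever the
// standalone copy of the frozen inotify package ends up living.
package main

import (
	"fmt"

	// Previously: "golang.org/x/exp/inotify"
	inotify "example.com/inotify"
)

func main() {
	watcher, err := inotify.NewWatcher()
	if err != nil {
		fmt.Println("inotify unavailable:", err)
		return
	}
	defer watcher.Close()

	if err := watcher.Watch("/tmp"); err != nil {
		fmt.Println("watch failed:", err)
		return
	}

	// Same channel-based API as the old golang.org/x/exp/inotify package.
	select {
	case ev := <-watcher.Event:
		fmt.Println("event:", ev)
	case err := <-watcher.Error:
		fmt.Println("error:", err)
	}
}
```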
We see a lot of logs in k/k CI as follows:
  Factory "mesos" was unable to handle container "/system.slice/home-kubernetes-containerized_mounter.mount"
It would be better to do a sanity check that mesos is actually running before
we try to use it.
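A hedged sketch of such a check, not cAdvisor's actual implementation: probe
the mesos agent's HTTP endpoint before registering the factory. The agent
address and the /version path used below are assumptions.

```go
// Sketch: only register the mesos container factory if an agent answers.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// mesosAgentRunning reports whether a mesos agent responds at the given address.
func mesosAgentRunning(address string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://" + address + "/version")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// 5051 is the default mesos agent port; adjust for the local setup.
	if !mesosAgentRunning("127.0.0.1:5051") {
		fmt.Println("mesos agent not detected; skipping mesos factory registration")
		return
	}
	fmt.Println("mesos agent detected; registering mesos container factory")
}
```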
Change-Id: I5f6ebcd44fdd4f8d724b85857edf1600473ef1ab
GetSpec() can be called concurrently from manager/container.go's updateSpec(),
which results in concurrent map access on the labels map because we update the
map directly inside GetSpec(). The labels map from the container handler is not
a copy of the map, just a reference to it, which is why we hit the concurrent
map access.
Fix this by moving the restartcount label update to the handler's
initialization method, which is not called concurrently.
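A minimal sketch of the race and of the fix pattern, using hypothetical handler
and field names rather than cAdvisor's actual types.

```go
// Sketch of the concurrent-map-write bug and the "set the label once at
// initialization" fix; the types below are illustrative, not cAdvisor's.
package main

import (
	"fmt"
	"strconv"
	"sync"
)

type containerHandler struct {
	labels map[string]string // reference shared with the caller, not a copy
}

// Racy variant: GetSpec is called from multiple goroutines and writes to the
// shared labels map on every call, which is a concurrent map write.
func (h *containerHandler) getSpecRacy(restartCount int) map[string]string {
	h.labels["restartcount"] = strconv.Itoa(restartCount)
	return h.labels
}

// Fixed variant: set the restartcount label once in the initialization path,
// which is never called concurrently; GetSpec then only reads the map.
func newContainerHandler(labels map[string]string, restartCount int) *containerHandler {
	labels["restartcount"] = strconv.Itoa(restartCount)
	return &containerHandler{labels: labels}
}

func (h *containerHandler) GetSpec() map[string]string {
	return h.labels
}

func main() {
	h := newContainerHandler(map[string]string{"name": "foo"}, 3)

	// Concurrent GetSpec calls are now read-only and therefore safe.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = h.GetSpec()
		}()
	}
	wg.Wait()
	fmt.Println(h.GetSpec())
}
```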
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Per-CPU stats are more expensive to transport and store, and that
level of detail is not required in many cases.
We export the overall CPU total in the same metric as the per-CPU values, so
that dashboards which previously summed over CPUs will work identically.
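An illustrative sketch of the idea, not cAdvisor's exporter code: a single
metric with a "cpu" label carrying both the per-CPU series and an aggregate
series, here labeled cpu="total", so summing over the per-CPU series and
reading the total yield the same value.

```go
// Sketch using github.com/prometheus/client_golang; label values are
// illustrative and cardinality of the per-CPU series can be dropped entirely
// when per-CPU detail is not needed.
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	cpuUsage := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "container_cpu_usage_seconds_total",
			Help: "Cumulative CPU time consumed, in seconds.",
		},
		[]string{"cpu"},
	)
	prometheus.MustRegister(cpuUsage)

	// Per-CPU series (optional, expensive at high core counts)...
	cpuUsage.WithLabelValues("cpu00").Add(1.5)
	cpuUsage.WithLabelValues("cpu01").Add(2.5)
	// ...and the aggregate exported under the same metric name.
	cpuUsage.WithLabelValues("total").Add(4.0)

	fmt.Println("registered per-CPU and total series under one metric")
}
```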
Add a timeout to docker calls, as these can otherwise block indefinitely due
to docker daemon issues.
This is to fix https://github.com/kubernetes/kubernetes/issues/53207,
where kubelet relies on cadvisor for gathering docker information as
part of its periodic node status update.
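A rough sketch of the approach, assuming the standard Docker Go client rather
than cAdvisor's own wrapper: bound every docker call with a context timeout so
a wedged daemon cannot block callers such as kubelet's node status update
indefinitely.

```go
// Sketch using github.com/docker/docker/client; timeout value is illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		fmt.Println("docker client error:", err)
		return
	}

	// Every call gets a bounded context instead of blocking forever.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	info, err := cli.Info(ctx)
	if err != nil {
		fmt.Println("docker info failed or timed out:", err)
		return
	}
	fmt.Println("docker server version:", info.ServerVersion)
}
```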
This commit includes changes to integrate the containerd
runtime with cadvisor to collect container stats.
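A rough sketch of pulling stats through the containerd Go client; the socket
path and the k8s.io namespace are assumptions, and cAdvisor's real handler
converts these metrics into its own container stats types.

```go
// Sketch using github.com/containerd/containerd: list containers in a
// namespace and read raw task metrics.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println("connect to containerd:", err)
		return
	}
	defer client.Close()

	// Kubernetes-managed containers typically live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		fmt.Println("list containers:", err)
		return
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			continue // container has no running task
		}
		metrics, err := task.Metrics(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("container %s: metrics type %s\n", c.ID(), metrics.Data.TypeUrl)
	}
}
```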
Signed-off-by: abhi <abhi@docker.com>
Test cases and minor changes
This commit includes test cases and minor fixes
for the containerd integration.
Signed-off-by: abhi <abhi@docker.com>