These calls can otherwise block indefinitely due to Docker issues.
This fixes https://github.com/kubernetes/kubernetes/issues/53207,
where the kubelet relies on cAdvisor to gather Docker information as
part of its periodic node status update.
This commit integrates the containerd runtime with cAdvisor to
collect container stats.
Signed-off-by: abhi <abhi@docker.com>
Test cases and minor changes
This commit includes test cases and minor fixes for the same change.
Signed-off-by: abhi <abhi@docker.com>
When cAdvisor starts up, it would read the `vendor` files in
`/sys/bus/pci/devices/*` to see if any NVIDIA devices (vendor ID: 0x10de) are
attached to the node. If no NVIDIA devices are found, this code path would
become dormant for the rest of cAdvisor's lifetime. If NVIDIA devices are found,
we would start a goroutine that would check for the presence of NVML by trying
to dynamically load it at regular intervals. We check repeatedly rather than
just once because cAdvisor may be started before the NVIDIA drivers and NVML
are installed. Once the NVML
dynamic loading succeeds, we would use NVML’s query methods to find out how
many devices exist on the node and create a map from their minor numbers to
their handles and cache that map. The goroutine would exit at this point.
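A rough sketch of that vendor-file scan (names and package layout are
illustrative, not cAdvisor's actual code; the NVML-polling goroutine is
omitted):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    const nvidiaVendorID = "0x10de"

    // detectNVIDIADevices reports whether any PCI device on the node has the
    // NVIDIA vendor ID (0x10de), by reading /sys/bus/pci/devices/*/vendor.
    func detectNVIDIADevices() bool {
        vendorFiles, err := filepath.Glob("/sys/bus/pci/devices/*/vendor")
        if err != nil {
            return false
        }
        for _, vf := range vendorFiles {
            content, err := os.ReadFile(vf)
            if err != nil {
                continue
            }
            if strings.TrimSpace(string(content)) == nvidiaVendorID {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println("NVIDIA devices present:", detectNVIDIADevices())
    }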
If we detected the presence of NVML in the previous step, whenever a new
container is detected by cAdvisor, cAdvisor would read the `devices.list` file
from the container's devices cgroup. The `devices.list` file lists the
major:minor numbers of all the devices that the container is allowed to access.
If we find any device with major number 195 (which is the major number assigned
to NVIDIA devices), we would cache the list of corresponding minor numbers for
that container.
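A minimal sketch of that devices.list parsing, with an illustrative function
name and a placeholder cgroup path:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    const nvidiaMajorNumber = 195

    // nvidiaMinorNumbers parses a devices.list file (lines look like
    // "c 195:0 rwm") and returns the minor numbers of entries with the
    // NVIDIA major number. Illustrative sketch, not cAdvisor's actual parser.
    func nvidiaMinorNumbers(devicesListPath string) ([]int, error) {
        f, err := os.Open(devicesListPath)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        var minors []int
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text()) // e.g. ["c", "195:0", "rwm"]
            if len(fields) != 3 {
                continue
            }
            majMin := strings.Split(fields[1], ":")
            if len(majMin) != 2 || majMin[0] != strconv.Itoa(nvidiaMajorNumber) {
                continue
            }
            minor, err := strconv.Atoi(majMin[1])
            if err != nil {
                continue // skip wildcard entries like "*"
            }
            minors = append(minors, minor)
        }
        return minors, scanner.Err()
    }

    func main() {
        minors, err := nvidiaMinorNumbers("/sys/fs/cgroup/devices/<container>/devices.list")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("NVIDIA minor numbers:", minors)
    }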
During every housekeeping operation, in addition to collecting all the existing
metrics, we will use the cached NVIDIA device minor numbers and the map from
minor numbers to device handles to get metrics for GPU devices attached to the
container.
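A tiny sketch of that lookup, using a hypothetical deviceHandle interface and
method name in place of the real NVML bindings:

    package main

    import "fmt"

    // deviceHandle stands in for an NVML device handle; the method name is an
    // assumption for illustration, not a real NVML binding.
    type deviceHandle interface {
        MemoryUsedBytes() (uint64, error)
    }

    // gpuMemoryUsed looks up each cached minor number for the container in
    // the node-wide minor->handle map and queries the corresponding device.
    func gpuMemoryUsed(containerMinors []int, handles map[int]deviceHandle) map[int]uint64 {
        used := make(map[int]uint64)
        for _, minor := range containerMinors {
            h, ok := handles[minor]
            if !ok {
                continue
            }
            if v, err := h.MemoryUsedBytes(); err == nil {
                used[minor] = v
            }
        }
        return used
    }

    type fakeHandle uint64

    func (f fakeHandle) MemoryUsedBytes() (uint64, error) { return uint64(f), nil }

    func main() {
        handles := map[int]deviceHandle{0: fakeHandle(1 << 20), 1: fakeHandle(2 << 20)}
        fmt.Println(gpuMemoryUsed([]int{0}, handles))
    }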
As of the 4.7 kernel, the cpustats field returned from libcontainer
contains values for every possible CPU (including nonexistent ones).
The extra values are all 0s.
If we assume that hotplug events won't happen, we can get a more
accurate CPU count by using runtime.NumCPU and then ignoring any values
beyond that.
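A small sketch of that trimming step, assuming the per-CPU usage slice comes
straight from the cgroup stats:

    package main

    import (
        "fmt"
        "runtime"
    )

    // trimPerCPUUsage drops the trailing all-zero entries that libcontainer
    // reports for non-existent CPUs, assuming no CPU hotplug. perCPU is the
    // per-CPU usage slice from cgroup stats (illustrative input).
    func trimPerCPUUsage(perCPU []uint64) []uint64 {
        numCPU := runtime.NumCPU()
        if len(perCPU) > numCPU {
            return perCPU[:numCPU]
        }
        return perCPU
    }

    func main() {
        // e.g. a 4-CPU machine where the kernel reports 8 possible CPUs
        usage := []uint64{100, 200, 300, 400, 0, 0, 0, 0}
        fmt.Println(trimPerCPUUsage(usage))
    }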
Cache the Docker thin pool name in the dockerFactory and pass it to each Docker container handler,
instead of calling `docker info` each time a new container handler is created, as the thin pool name
is extremely unlikely to change.
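A minimal sketch of the caching pattern, with illustrative type and field names
rather than cAdvisor's exact ones:

    package main

    import "fmt"

    // dockerFactory caches the thin pool name once (looked up via `docker info`
    // when the factory is created) instead of querying Docker for every new
    // container handler.
    type dockerFactory struct {
        thinPoolName string
    }

    type dockerContainerHandler struct {
        containerID  string
        thinPoolName string
    }

    // newContainerHandler reuses the cached thin pool name.
    func (f *dockerFactory) newContainerHandler(containerID string) *dockerContainerHandler {
        return &dockerContainerHandler{
            containerID:  containerID,
            thinPoolName: f.thinPoolName,
        }
    }

    func main() {
        f := &dockerFactory{thinPoolName: "docker-thinpool"} // fetched once at startup
        h := f.newContainerHandler("abc123")
        fmt.Println(h.thinPoolName)
    }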
PerDiskStats reported from cgroups were not being surfaced into
prometheus. In order to properly correlate the metrics, we need to
assign a device label to each metric (which is the FS or device path).
Since the blkio cgroup tracks devices, we create a synthetic device
`/dev/NAME` for the metric.
Assign a Device label to each PerDiskStat for the handlers up front, and
then surface the PerDiskStat values into the prometheus metrics. Report
two new metrics - total bytes read and total bytes written.
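A rough sketch of how such per-device series can be emitted with
client_golang; the metric and field names here are illustrative, not
necessarily the ones added by this change:

    package main

    import (
        "fmt"

        "github.com/prometheus/client_golang/prometheus"
    )

    // perDiskStat mirrors the idea of a per-device blkio stat with a synthetic
    // device label; field names are illustrative.
    type perDiskStat struct {
        Device       string // e.g. "/dev/sda", synthesized as /dev/NAME
        ReadBytes    uint64
        WrittenBytes uint64
    }

    var (
        readBytesDesc = prometheus.NewDesc(
            "container_fs_reads_bytes_total",
            "Cumulative count of bytes read",
            []string{"device"}, nil,
        )
        writeBytesDesc = prometheus.NewDesc(
            "container_fs_writes_bytes_total",
            "Cumulative count of bytes written",
            []string{"device"}, nil,
        )
    )

    // collectDiskMetrics turns per-disk stats into Prometheus metrics, one
    // series per device label.
    func collectDiskMetrics(stats []perDiskStat, ch chan<- prometheus.Metric) {
        for _, s := range stats {
            ch <- prometheus.MustNewConstMetric(readBytesDesc, prometheus.CounterValue,
                float64(s.ReadBytes), s.Device)
            ch <- prometheus.MustNewConstMetric(writeBytesDesc, prometheus.CounterValue,
                float64(s.WrittenBytes), s.Device)
        }
    }

    func main() {
        ch := make(chan prometheus.Metric, 4)
        collectDiskMetrics([]perDiskStat{{Device: "/dev/sda", ReadBytes: 4096, WrittenBytes: 8192}}, ch)
        close(ch)
        for m := range ch {
            fmt.Println(m.Desc())
        }
    }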
If CPU quota is configured (cpu.cfs_quota != -1), the CFS will provide
stats about elapsed periods and throttling in cpu.stat. This change
makes this information available as container_cpu_cfs_* metrics.
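A small sketch of reading those counters from a cgroup v1 cpu.stat file; the
metric names in the comments are assumptions about the container_cpu_cfs_*
family, not taken from this change:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // cfsStats holds the CFS bandwidth counters from the cgroup cpu.stat file.
    type cfsStats struct {
        NrPeriods     uint64 // e.g. container_cpu_cfs_periods_total
        NrThrottled   uint64 // e.g. container_cpu_cfs_throttled_periods_total
        ThrottledTime uint64 // nanoseconds
    }

    // readCFSStats parses a cgroup v1 cpu.stat file, whose lines look like
    // "nr_periods 100", "nr_throttled 3", "throttled_time 500000".
    func readCFSStats(path string) (cfsStats, error) {
        var s cfsStats
        f, err := os.Open(path)
        if err != nil {
            return s, err
        }
        defer f.Close()

        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) != 2 {
                continue
            }
            v, err := strconv.ParseUint(fields[1], 10, 64)
            if err != nil {
                continue
            }
            switch fields[0] {
            case "nr_periods":
                s.NrPeriods = v
            case "nr_throttled":
                s.NrThrottled = v
            case "throttled_time":
                s.ThrottledTime = v
            }
        }
        return s, scanner.Err()
    }

    func main() {
        stats, err := readCFSStats("/sys/fs/cgroup/cpu/cpu.stat")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Printf("%+v\n", stats)
    }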
Ensure that kernel >= 4.4.0 or RHEL/CentOS 7 kernel >= 3.10.0-366 exists before starting the thin
pool watcher. Prior versions have a bug in which reserving and releasing the metadata snapshot can
cause thin pool corruption.
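A simplified sketch of such a version gate; the parsing and the .el7 heuristic
are assumptions, not cAdvisor's exact check:

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
        "strings"
    )

    // usableKernel reports whether the running kernel is new enough to safely
    // use the thin pool watcher: >= 4.4.0 in general, or >= 3.10.0-366 for
    // RHEL/CentOS 7 kernels (release strings containing ".el7").
    func usableKernel(release string) bool {
        re := regexp.MustCompile(`^(\d+)\.(\d+)\.(\d+)(?:-(\d+))?`)
        m := re.FindStringSubmatch(release)
        if m == nil {
            return false
        }
        major, _ := strconv.Atoi(m[1])
        minor, _ := strconv.Atoi(m[2])
        patch, _ := strconv.Atoi(m[3])

        if strings.Contains(release, ".el7") {
            // RHEL/CentOS 7: need 3.10.0 with a build number >= 366.
            build := 0
            if m[4] != "" {
                build, _ = strconv.Atoi(m[4])
            }
            return (major == 3 && minor == 10 && patch == 0 && build >= 366) || major > 3
        }
        // Everything else: need at least 4.4.0.
        if major != 4 {
            return major > 4
        }
        return minor >= 4
    }

    func main() {
        for _, r := range []string{"4.4.0-97-generic", "3.10.0-693.el7.x86_64", "3.16.0"} {
            fmt.Println(r, "->", usableKernel(r))
        }
    }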