
cAdvisor

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

cAdvisor has native support for Docker containers and should support just about any other container type out of the box. We strive for support across the board so feel free to open an issue if that is not the case. cAdvisor's container abstraction is based on lmctfy's so containers are inherently nested hierarchically.


Quick Start: Running cAdvisor in a Docker Container

To quickly try out cAdvisor on your machine with Docker, we have a Docker image that includes everything you need to get started. You can run a single cAdvisor instance to monitor the whole machine. Simply run:

VERSION=v0.35.0 # use the latest release version from https://github.com/google/cadvisor/releases
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  gcr.io/google-containers/cadvisor:$VERSION

cAdvisor is now running (in the background) on http://localhost:8080. The setup mounts the directories containing the Docker state that cAdvisor needs to observe.
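Once the container is up, you can do a quick smoke test from the host (a sketch, assuming the default port mapping above; cAdvisor serves a simple health endpoint alongside its API):

```shell
# The health endpoint should respond with "ok" once cAdvisor is up:
curl http://localhost:8080/healthz

# Fetch machine-level information from the versioned REST API:
curl http://localhost:8080/api/v1.3/machine
```

If either request fails, check the container logs with `sudo docker logs cadvisor`.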

Note: If you're running on CentOS, Fedora, or RHEL (or are using LXC), take a look at our running instructions.

We have detailed instructions on running cAdvisor standalone outside of Docker. cAdvisor running options may also be interesting for advanced use cases. If you want to build your own cAdvisor Docker image, see our deployment page.

For Kubernetes users, cAdvisor can be run as a daemonset. See the instructions for how to get started, and for how to kustomize it to fit your needs.

Building and Testing

See the more detailed instructions in the build page. This includes instructions for building and deploying the cAdvisor Docker image.

Exporting stats

cAdvisor supports exporting stats to various storage plugins. See the documentation for more details and examples.
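Storage backends are selected with command-line flags. As an illustrative sketch (flag values such as the host, database name, and credentials are placeholders you would replace with your own), exporting to InfluxDB looks roughly like:

```shell
# Append storage flags to the cAdvisor invocation (or to the
# `docker run` command from the quick start above):
cadvisor \
  -storage_driver=influxdb \
  -storage_driver_host=localhost:8086 \
  -storage_driver_db=cadvisor \
  -storage_driver_user=root \
  -storage_driver_password=root
```

See the storage plugin documentation for the full list of supported drivers and their flags.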

Web UI

cAdvisor exposes a web UI at its port:

http://<hostname>:<port>/

See the documentation for more details.

Remote REST API & Clients

cAdvisor exposes its raw and processed stats via a versioned remote REST API. See the API's documentation for more information.

There is also an official Go client implementation in the client directory. See the documentation for more information.

Roadmap

cAdvisor aims to improve the resource usage and performance characteristics of running containers. Today, we gather and expose this information to users. In our roadmap:

  • Advise on the performance of a container (e.g., when it is being negatively affected by another container, or when it is not receiving the resources it requires).
  • Auto-tune the performance of the container based on previous advice.
  • Provide usage prediction to cluster schedulers and orchestration layers.

Community

Contributions, questions, and comments are all welcomed and encouraged! cAdvisor developers hang out on Slack in the #sig-node channel (get an invitation here). We also have the kubernetes-users Google Groups mailing list.