cAdvisor

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.

cAdvisor has native support for Docker containers and should support just about any other container type out of the box. We strive for support across the board so feel free to open an issue if that is not the case. cAdvisor's container abstraction is based on lmctfy's so containers are inherently nested hierarchically.

Quick Start: Running cAdvisor in a Docker Container

To quickly try out cAdvisor on your machine with Docker, we have a Docker image that includes everything you need to get started. You can run a single cAdvisor instance to monitor the whole machine. Simply run:

sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest

cAdvisor is now running (in the background) on http://localhost:8080. The setup mounts the directories containing the Docker state that cAdvisor needs to observe.

Note: If you're running on CentOS, Fedora, or RHEL (or are using LXC), take a look at our running instructions.

We have detailed instructions on running cAdvisor standalone outside of Docker. cAdvisor's running options may also be interesting for advanced use cases. If you want to build your own cAdvisor Docker image, see our deployment page.

For Kubernetes users, cAdvisor can be run as a daemonset. See the instructions for how to get started, and for how to kustomize it to fit your needs.

Building and Testing

See the more detailed instructions in the build page. This includes instructions for building and deploying the cAdvisor Docker image.

Exporting stats

cAdvisor supports exporting stats to various storage plugins. See the documentation for more details and examples.
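As an illustration, stats can be pushed to a backend such as InfluxDB by passing the storage-driver flags to cAdvisor at startup. This is a sketch only: the hostname and database name below are example values, and the exact flag set for your backend should be checked against the storage documentation.

```shell
# Sketch: run cAdvisor with stats exported to an InfluxDB backend.
# -storage_driver selects the backend; the host and db values here are
# placeholders for your own InfluxDB instance.
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest \
  -storage_driver=influxdb \
  -storage_driver_host=influxdb:8086 \
  -storage_driver_db=cadvisor
```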

Web UI

cAdvisor exposes a web UI at its port:

http://<hostname>:<port>/

See the documentation for more details.

Remote REST API & Clients

cAdvisor exposes its raw and processed stats via a versioned remote REST API. See the API's documentation for more information.
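For example, assuming a cAdvisor instance listening on localhost:8080 (as in the quick start above), the v1.3 API can be queried with curl:

```shell
# Machine-wide info (cores, memory, filesystems):
curl http://localhost:8080/api/v1.3/machine

# Stats for all containers, starting at the root of the hierarchy:
curl http://localhost:8080/api/v1.3/containers/

# Stats for a specific Docker container, by name or ID:
curl http://localhost:8080/api/v1.3/docker/cadvisor
```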

There is also an official Go client implementation in the client directory. See the documentation for more information.

Roadmap

cAdvisor aims to improve the resource usage and performance characteristics of running containers. Today, we gather and expose this information to users. In our roadmap:

  • Advise on the performance of a container (e.g., when it is being negatively affected by another, or when it is not receiving the resources it requires).
  • Auto-tune the performance of the container based on previous advice.
  • Provide usage prediction to cluster schedulers and orchestration layers.

Community

Contributions, questions, and comments are all welcomed and encouraged! cAdvisor developers hang out on Slack in the #sig-node channel (get an invitation here). We also have the kubernetes-users Google Groups mailing list.