. Remove counting of taskgroups from scheddebug.
. Move monitoring thread 500ms ahead of other containers' housekeeping.
. Rely on /proc/loadavg for root load (see the sketch below).
. Work around scheddebug atomicity issues (WIP).
. Remove counting of monitoring thread.
Getting better, but still a bit off from the ideal load :(
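A minimal sketch of the /proc/loadavg approach, assuming the usual "load1 load5 load15 runnable/total lastpid" format; the function names here are illustrative, not the actual cAdvisor code:

```go
// Sketch: read the root (whole-machine) load average from /proc/loadavg
// instead of deriving it from sched_debug.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readRootLoad returns the 1-minute load average for the whole machine.
func readRootLoad() (float64, error) {
	data, err := os.ReadFile("/proc/loadavg")
	if err != nil {
		return 0, err
	}
	// /proc/loadavg looks like: "0.42 0.37 0.30 1/512 12345"
	fields := strings.Fields(string(data))
	if len(fields) < 1 {
		return 0, fmt.Errorf("unexpected /proc/loadavg format: %q", data)
	}
	return strconv.ParseFloat(fields[0], 64)
}

func main() {
	load, err := readRootLoad()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("root load (1m): %.2f\n", load)
}
```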
Made an oomparser that extracts oom-kill logs from kernel messages.
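A hedged sketch of what such a parser could look like, assuming kernel messages of the form "... Killed process <pid> (<name>) ..."; the regex and types are illustrative and actual kernel log formats vary between versions:

```go
// Sketch: extract oom-kill events from kernel log lines.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

var oomKillRe = regexp.MustCompile(`Killed process (\d+) \(([^)]+)\)`)

// OomKill holds the fields we can recover from a single kernel log line.
type OomKill struct {
	Pid  int
	Name string
}

// parseOomKill returns the oom-kill info in line, or nil if the line
// does not describe an oom kill.
func parseOomKill(line string) *OomKill {
	m := oomKillRe.FindStringSubmatch(line)
	if m == nil {
		return nil
	}
	pid, err := strconv.Atoi(m[1])
	if err != nil {
		return nil
	}
	return &OomKill{Pid: pid, Name: m[2]}
}

func main() {
	// Feed kernel messages on stdin, e.g. `dmesg | oomparser-sketch`.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		if kill := parseOomKill(scanner.Text()); kill != nil {
			fmt.Printf("oom kill: pid=%d name=%s\n", kill.Pid, kill.Name)
		}
	}
}
```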
The type can be one of "none", "noop", "deadline", or "cfq".
For block devices that don't use a scheduler (like dm devices), the type will be "none".
We'll also report "none" for partitions when we start reporting those.
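For illustration, a sketch of how the scheduler type could be read from sysfs, assuming the usual bracketed-active-scheduler format of /sys/block/<dev>/queue/scheduler; not the exact cAdvisor code:

```go
// Sketch: report the active I/O scheduler for a block device. The active
// scheduler is listed in brackets, e.g. "noop deadline [cfq]"; devices
// without a scheduler (such as dm devices) map to "none".
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// schedulerType returns "none", "noop", "deadline", or "cfq" for dev
// (e.g. "sda"). Missing or unparsable sysfs entries map to "none".
func schedulerType(dev string) string {
	path := filepath.Join("/sys/block", dev, "queue", "scheduler")
	data, err := os.ReadFile(path)
	if err != nil {
		return "none"
	}
	for _, field := range strings.Fields(string(data)) {
		if strings.HasPrefix(field, "[") && strings.HasSuffix(field, "]") {
			return strings.Trim(field, "[]")
		}
	}
	return "none"
}

func main() {
	fmt.Println(schedulerType("sda"))
}
```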
The stats are only populated when cAdvisor is running outside network namespaces.
We'll add a different backend to retrieve the same data from within namespaces.
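A rough sketch of the host-side path: read cumulative per-interface counters from /proc/net/dev, which only shows the interfaces visible in the current network namespace (hence the empty stats when running inside one). The field positions follow the documented /proc/net/dev layout, but the helper names are assumptions:

```go
// Sketch: read per-interface rx/tx byte counters from /proc/net/dev.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// NetStats holds cumulative byte counters for one interface.
type NetStats struct {
	RxBytes, TxBytes uint64
}

func readNetDev() (map[string]NetStats, error) {
	f, err := os.Open("/proc/net/dev")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	stats := make(map[string]NetStats)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Data lines look like "  eth0: <8 rx fields> <8 tx fields>".
		parts := strings.SplitN(scanner.Text(), ":", 2)
		if len(parts) != 2 {
			continue // the two header lines have no ':' separator
		}
		fields := strings.Fields(parts[1])
		if len(fields) < 16 {
			continue
		}
		rx, _ := strconv.ParseUint(fields[0], 10, 64)
		tx, _ := strconv.ParseUint(fields[8], 10, 64)
		stats[strings.TrimSpace(parts[0])] = NetStats{RxBytes: rx, TxBytes: tx}
	}
	return stats, scanner.Err()
}

func main() {
	stats, err := readNetDev()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for name, s := range stats {
		fmt.Printf("%s: rx=%d tx=%d bytes\n", name, s.RxBytes, s.TxBytes)
	}
}
```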
sched_debug is giving us the wrong load information. The runnable tasks list
in the output includes both running and sleeping tasks. We only need to look
at nr_running for each scheduling entity to figure out load. We also
don't need per-core stats.
I am going to redo these to derive per-cgroup load from nr_running.
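A hedged sketch of the nr_running-based approach: walk the cfs_rq sections of /proc/sched_debug, remember which cgroup each section belongs to, and sum nr_running across cores. The exact sched_debug layout varies between kernel versions, so this is illustrative rather than the actual implementation:

```go
// Sketch: derive a per-cgroup "load" by summing nr_running across the
// cfs_rq sections of /proc/sched_debug. Section headers look like
// "cfs_rq[0]:/some/cgroup" followed by ".nr_running : N" lines.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func perCgroupRunning(path string) (map[string]int, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	load := make(map[string]int)
	cgroup := ""
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "cpu#"):
			cgroup = "" // rq-level stats follow; wait for the next cfs_rq header
		case strings.HasPrefix(line, "cfs_rq["):
			// Remember which cgroup the following stats belong to.
			if i := strings.Index(line, ":"); i >= 0 {
				cgroup = line[i+1:]
			}
		case strings.HasPrefix(line, ".nr_running") && cgroup != "":
			parts := strings.Split(line, ":")
			if len(parts) == 2 {
				if n, err := strconv.Atoi(strings.TrimSpace(parts[1])); err == nil {
					load[cgroup] += n // sum across cores for this cgroup
				}
			}
		}
	}
	return load, scanner.Err()
}

func main() {
	load, err := perCgroupRunning("/proc/sched_debug")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for cg, n := range load {
		fmt.Printf("%s: nr_running=%d\n", cg, n)
	}
}
```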
This is read once at the start of cAdvisor. We can use this to report
machine state as well as to return logical names for block devices in the UI.
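An illustrative sketch of reading that information once from sysfs at startup; the struct and field names are assumptions, not cAdvisor's API:

```go
// Sketch: enumerate block devices from /sys/block at startup. "dev" holds
// "major:minor" and "size" is in 512-byte sectors.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// BlockDevice is a minimal record of what we can learn from /sys/block.
type BlockDevice struct {
	Name    string // logical name, e.g. "sda"
	MajMin  string // "major:minor"
	Sectors string // size in 512-byte sectors
}

func listBlockDevices() ([]BlockDevice, error) {
	entries, err := os.ReadDir("/sys/block")
	if err != nil {
		return nil, err
	}
	var devices []BlockDevice
	for _, e := range entries {
		base := filepath.Join("/sys/block", e.Name())
		dev, _ := os.ReadFile(filepath.Join(base, "dev"))
		size, _ := os.ReadFile(filepath.Join(base, "size"))
		devices = append(devices, BlockDevice{
			Name:    e.Name(),
			MajMin:  strings.TrimSpace(string(dev)),
			Sectors: strings.TrimSpace(string(size)),
		})
	}
	return devices, nil
}

func main() {
	devices, err := listBlockDevices()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, d := range devices {
		fmt.Printf("%s dev=%s sectors=%s\n", d.Name, d.MajMin, d.Sectors)
	}
}
```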
Signed-off-by: Rohit Jnagal <jnagal@google.com> (github: rjnagal)