This patch adds support for retrieving the system UUID on Power systems.
Power systems do not expose DMI data; however, most of the relevant details
are available in /proc or /sys. On bare-metal servers, the UUID is available in
/proc/device-tree/system-id. On guests, the UUID is available in
/proc/device-tree/vm,uuid inside the guest. A guest's /proc filesystem does not
have /proc/device-tree/system-id.
Example:

On a bare-metal system:
$ cat /proc/device-tree/system-id
2122AAA

On a guest VM:
$ cat /proc/device-tree/vm,uuid
4b1a1a7e-079e-479c-8072-d8108f31050c
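
For illustration, a minimal sketch (in Go) of how such a lookup might work,
trying the bare-metal path first and falling back to the guest path. This is
not the patch itself, just an example under the assumptions above:

    // Sketch: read the system UUID on Power systems from the device tree.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // getPowerSystemUUID tries the bare-metal path first, then the guest path.
    func getPowerSystemUUID() (string, error) {
        paths := []string{
            "/proc/device-tree/system-id", // bare metal
            "/proc/device-tree/vm,uuid",   // guest VM
        }
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err == nil {
                // Device-tree string properties are NUL-terminated.
                return strings.TrimRight(string(data), "\x00\n"), nil
            }
        }
        return "", fmt.Errorf("no system UUID found in device tree")
    }

    func main() {
        uuid, err := getPowerSystemUUID()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(uuid)
    }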
Signed-off-by: Pradipta Kr. Banerjee <bpradip@in.ibm.com>
The current logic assumes that entries are added to the store in
monotonically increasing time order. That does not hold when we add
creation events for existing containers.
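
For illustration only, a minimal sketch of one way to keep an in-memory store
correct when timestamps arrive out of order. The names here (Event, EventStore)
are hypothetical, not the actual store API:

    // Hypothetical store that keeps events sorted by timestamp, so a
    // creation event with an older timestamp still lands in the right place.
    package store

    import (
        "sort"
        "time"
    )

    type Event struct {
        Timestamp time.Time
        Detail    string
    }

    type EventStore struct {
        events []Event // kept sorted by Timestamp, oldest first
    }

    // Add inserts the event at its sorted position instead of assuming
    // it is newer than everything already stored.
    func (s *EventStore) Add(e Event) {
        i := sort.Search(len(s.events), func(i int) bool {
            return s.events[i].Timestamp.After(e.Timestamp)
        })
        s.events = append(s.events, Event{})
        copy(s.events[i+1:], s.events[i:])
        s.events[i] = e
    }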
. Remove counting of taskgroups from sched_debug.
. Move the monitoring thread 500ms ahead of the other containers' housekeeping.
. Rely on /proc/loadavg for root load (see the sketch below).
. Work around sched_debug atomicity issues (WIP).
. Remove counting of the monitoring thread.
Getting better, but still a bit farther away from ideal load :(
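
As referenced above, a rough sketch of reading the root load from /proc/loadavg
(field layout per proc(5)); this is an illustration, not the actual change:

    // Read the 1-minute load average and the runnable/total task counts
    // from /proc/loadavg, e.g. "0.61 0.45 0.40 2/512 12345".
    package load

    import (
        "fmt"
        "os"
    )

    type RootLoad struct {
        Load1    float64 // 1-minute load average
        Runnable int     // currently runnable tasks
        Total    int     // total tasks
    }

    func ReadRootLoad() (RootLoad, error) {
        data, err := os.ReadFile("/proc/loadavg")
        if err != nil {
            return RootLoad{}, err
        }
        var l RootLoad
        var load5, load15 float64
        // The fourth field is "runnable/total".
        _, err = fmt.Sscanf(string(data), "%f %f %f %d/%d",
            &l.Load1, &load5, &load15, &l.Runnable, &l.Total)
        return l, err
    }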
Added an oomparser that extracts oom-kill events from kernel log messages.
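
For context, a minimal sketch of what such a parser might look like. The regex
and message format are assumptions, since the kernel's oom-kill log format
varies across versions:

    package oomparser

    import (
        "regexp"
        "strconv"
    )

    // Kernel oom-kill lines typically look like:
    //   Killed process 1234 (stress) total-vm:204800kB, anon-rss:...
    var killedRe = regexp.MustCompile(`Killed process (\d+) \(([^)]+)\)`)

    type OomKill struct {
        Pid  int
        Name string
    }

    // ParseLine returns the oom-kill details if the log line records one.
    func ParseLine(line string) (*OomKill, bool) {
        m := killedRe.FindStringSubmatch(line)
        if m == nil {
            return nil, false
        }
        pid, err := strconv.Atoi(m[1])
        if err != nil {
            return nil, false
        }
        return &OomKill{Pid: pid, Name: m[2]}, true
    }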
Type can be one of "none", "noop", "deadline", "cfq".
For block devices that don't use a scheduler (like dm), the type will be "none".
We'll also report "none" for partitions when we start reporting those.
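
As an illustration, the active scheduler can be read from sysfs, where
/sys/block/<dev>/queue/scheduler marks the active type in brackets. This is a
sketch, not necessarily the code in this change:

    // Return the active I/O scheduler for a block device, e.g. "cfq".
    // The sysfs file lists all schedulers with the active one in brackets,
    // e.g. "noop deadline [cfq]"; devices that bypass the scheduler report "none".
    package disk

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func SchedulerType(device string) string {
        path := filepath.Join("/sys/block", device, "queue", "scheduler")
        data, err := os.ReadFile(path)
        if err != nil {
            // Partitions and some devices have no scheduler file.
            return "none"
        }
        for _, field := range strings.Fields(string(data)) {
            if strings.HasPrefix(field, "[") && strings.HasSuffix(field, "]") {
                return strings.Trim(field, "[]")
            }
        }
        return "none"
    }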
The stats are only populated when cAdvisor is running outside network namespaces.
We'll add a different backend to retrieve the same data from within namespaces.
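
For illustration only, one host-side source for such stats is the per-interface
counters under /sys/class/net; whether this matches the backend used here is an
assumption:

    // Sketch: read per-interface counters from /sys/class/net/<iface>/statistics/.
    // These sysfs files reflect the host's view, which is why they are only
    // meaningful when running outside the network namespaces being measured.
    package network

    import (
        "os"
        "path/filepath"
        "strconv"
        "strings"
    )

    func readCounter(iface, name string) (uint64, error) {
        path := filepath.Join("/sys/class/net", iface, "statistics", name)
        data, err := os.ReadFile(path)
        if err != nil {
            return 0, err
        }
        return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
    }

    type InterfaceStats struct {
        RxBytes, TxBytes uint64
    }

    func ReadInterfaceStats(iface string) (InterfaceStats, error) {
        rx, err := readCounter(iface, "rx_bytes")
        if err != nil {
            return InterfaceStats{}, err
        }
        tx, err := readCounter(iface, "tx_bytes")
        if err != nil {
            return InterfaceStats{}, err
        }
        return InterfaceStats{RxBytes: rx, TxBytes: tx}, nil
    }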
sched_debug is giving us the wrong load information: the runnable-tasks list
in its output includes both running and sleeping tasks. We only need to look
at nr_running for each scheduling entity to derive the load, and we don't
need per-core stats.
I am going to redo these to derive per-cgroup load from nr_running.
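
A rough sketch of the nr_running-based approach described above; the
/proc/sched_debug parsing details are assumptions, since its format differs
between kernel versions:

    // Derive per-cgroup load by summing .nr_running over the cfs_rq sections
    // that belong to a cgroup, instead of counting entries in the runnable
    // task list (which also includes sleeping tasks).
    package loadreader

    import (
        "bufio"
        "os"
        "strconv"
        "strings"
    )

    // NrRunning returns the sum of nr_running across all cfs_rq sections
    // whose cgroup path matches the given container path.
    func NrRunning(containerPath string) (int, error) {
        f, err := os.Open("/proc/sched_debug")
        if err != nil {
            return 0, err
        }
        defer f.Close()

        total := 0
        inSection := false
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := scanner.Text()
            switch {
            case strings.HasPrefix(line, "cfs_rq["):
                // Section headers look like "cfs_rq[3]:/docker/<id>".
                inSection = strings.HasSuffix(line, ":"+containerPath)
            case inSection && strings.Contains(line, ".nr_running"):
                fields := strings.Fields(line) // ".nr_running", ":", "<n>"
                if n, err := strconv.Atoi(fields[len(fields)-1]); err == nil {
                    total += n
                }
            }
        }
        return total, scanner.Err()
    }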