Linux Scheduler Statistics
/proc/schedstat format and /proc/<pid>/stat format changes
version 9
Version 9 introduces support for sched_domains, which hit the mainline kernel
in 2.6.7. Some counters make more sense to be per-runqueue; others make more
sense to be per-domain. These are detailed below.
In version 9 of schedstat,
there is at least one level of domain statistics for each cpu listed,
and there may well be more than one domain. Domains have no particular
names in this implementation, but the
highest numbered one typically arbitrates balancing across all the cpus on
the machine, while domain0 is the most tightly focused domain, sometimes
balancing only between pairs of cpus. The first field in the domain stats
is a bit map indicating which cpus are affected by that domain.
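As a small aside, here is a sketch (not part of the patch) of decoding such a
bit map into cpu numbers. It assumes the mask is printed in hexadecimal, and
the mask value "00000003" is a made-up example describing a domain balancing
between cpus 0 and 1:

/*
 * Illustrative only: decode a domain's cpu bit map, assumed to be
 * printed in hex, into the cpu numbers it covers.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long bits = strtoul("00000003", NULL, 16);
    int cpu;

    printf("domain spans cpus:");
    for (cpu = 0; bits != 0; cpu++, bits >>= 1)
        if (bits & 1)
            printf(" %d", cpu);
    printf("\n");    /* -> "domain spans cpus: 0 1" */
    return 0;
}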
CPU statistics
cpuN 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
NOTE: In the sched_yield() statistics, the active queue is considered
empty if it has only one process in it, since obviously the process
calling sched_yield() is that process.
First four fields are sched_yield() statistics:
- # of times both the active and the expired queue were empty
- # of times just the active queue was empty
- # of times just the expired queue was empty
- # of times sched_yield() was called
Next four are schedule() statistics:
- # of times the active queue had at least one other process on it.
- # of times we switched to the expired queue and reused it
- # of times schedule() was called
- # of times schedule() left the processor idle
Next four are active_load_balance() statistics:
- # of times active_load_balance() was called
- # of times active_load_balance() caused this cpu to gain a task
- # of times active_load_balance() caused this cpu to lose a task
- # of times active_load_balance() tried to move a task and failed
Next three are try_to_wake_up() statistics:
- # of times try_to_wake_up() was called
- # of times try_to_wake_up() successfully moved the awakening task
- # of times try_to_wake_up() attempted to move the awakening task
Next two are wake_up_forked_thread() statistics:
- # of times wake_up_forked_thread() was called
- # of times wake_up_forked_thread() successfully moved the forked task
Next one is a sched_migrate_task() statistic:
- # of times sched_migrate_task() was called
Next one is a sched_balance_exec() statistic:
- # of times sched_balance_exec() was called
Next three are statistics describing scheduling latency:
- sum of all time spent running by tasks on this processor (in ms)
- sum of all time spent waiting to run by tasks on this processor (in ms)
- # of tasks (not necessarily unique) given to the processor
The last six are statistics dealing with pull_task():
- # of times pull_task() moved a task to this cpu when newly idle
- # of times pull_task() stole a task from this cpu when another cpu was newly idle
- # of times pull_task() moved a task to this cpu when idle
- # of times pull_task() stole a task from this cpu when another cpu was idle
- # of times pull_task() moved a task to this cpu when busy
- # of times pull_task() stole a task from this cpu when another cpu was busy
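Since these positions are fixed, a consumer can parse them directly. The
following is a minimal sketch, not part of the patch, that scans the cpu
lines of /proc/schedstat under the version 9 layout above and prints the
three latency fields (positions 20 through 22); the output format is
illustrative:

/*
 * A rough sketch: walk /proc/schedstat and print the three latency
 * fields for each cpu, assuming the version 9 cpu-line layout
 * described in this document.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("/proc/schedstat", "r");
    char line[1024];

    if (fp == NULL) {
        perror("/proc/schedstat");
        return 1;
    }
    while (fgets(line, sizeof(line), fp) != NULL) {
        unsigned long long f[28];
        char *tok = strtok(line, " \n");
        int cpu, n = 0;

        /* only cpu lines; skip version, timestamp and domain lines */
        if (tok == NULL || sscanf(tok, "cpu%d", &cpu) != 1)
            continue;
        while (n < 28 && (tok = strtok(NULL, " \n")) != NULL)
            f[n++] = strtoull(tok, NULL, 10);
        if (n < 28)
            continue;
        /* f[19]..f[21] are fields 20-22: run, wait, timeslice count */
        printf("cpu%d: run=%llums wait=%llums timeslices=%llu\n",
               cpu, f[19], f[20], f[21]);
    }
    fclose(fp);
    return 0;
}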
Domain statistics
domainN 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
The first field is a bit mask indicating what cpus this domain operates over.
The next fifteen are load_balance() statistics:
- # of times in this domain load_balance() was called when the cpu was idle
- # of times in this domain load_balance() tried to move one or more tasks and failed, when the cpu was idle
- sum of imbalances discovered (if any) with each call to load_balance() in this domain when the cpu was idle
- # of times in this domain load_balance() was called but did not find a busier queue while the cpu was idle
- # of times in this domain a busier queue was found while the cpu was idle but no busier group was found
- # of times in this domain load_balance() was called when the cpu was just becoming idle
- # of times in this domain load_balance() tried to move one or more tasks and failed, when the cpu was just becoming idle
- sum of imbalances discovered (if any) with each call to load_balance() in this domain when the cpu was just becoming idle
- # of times in this domain load_balance() was called but did not find a busier queue while the cpu was just becoming idle
- # of times in this domain a busier queue was found while the cpu was just becoming idle but no busier group was found
- # of times in this domain load_balance() was called when the cpu was busy
- # of times in this domain load_balance() tried to move one or more tasks and failed, when the cpu was busy
- sum of imbalances discovered (if any) with each call to load_balance() in this domain when the cpu was busy
- # of times in this domain load_balance() was called but did not find a busier queue while the cpu was busy
- # of times in this domain a busier queue was found while the cpu was busy but no busier group was found
Next two are sched_balance_exec() statistics:
- # of times in this domain sched_balance_exec() successfully pushed a task to a new cpu
- # of times in this domain sched_balance_exec() tried but failed to push a task to a new cpu
Next two are try_to_wake_up() statistics:
- # of times in this domain try_to_wake_up() tried to move a task based on affinity and cache warmth
- # of times in this domain try_to_wake_up() tried to move a task based on load balancing
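In the same illustrative spirit, here is a sketch (again, not part of the
patch) that reads the domain lines under the version 9 layout above and
reports how often idle-time load_balance() calls found no busier queue.
The hex decoding of the mask is an assumption, as noted earlier:

/*
 * A rough sketch of parsing the domain lines of /proc/schedstat.
 * Note that domain lines repeat for every cpu, so a real tool would
 * also track which cpu line preceded each of them.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("/proc/schedstat", "r");
    char line[1024];

    if (fp == NULL) {
        perror("/proc/schedstat");
        return 1;
    }
    while (fgets(line, sizeof(line), fp) != NULL) {
        unsigned long long f[19];
        unsigned long mask;
        char *tok = strtok(line, " \n");
        int dom, n = 0;

        if (tok == NULL || sscanf(tok, "domain%d", &dom) != 1)
            continue;
        tok = strtok(NULL, " \n");    /* field 1: the cpu bit mask */
        if (tok == NULL)
            continue;
        mask = strtoul(tok, NULL, 16);
        while (n < 19 && (tok = strtok(NULL, " \n")) != NULL)
            f[n++] = strtoull(tok, NULL, 10);
        if (n < 19)
            continue;
        /* f[0]: idle load_balance() calls; f[3]: found no busier queue */
        printf("domain%d (mask %08lx): %llu of %llu idle calls found no busier queue\n",
               dom, mask, f[3], f[0]);
    }
    fclose(fp);
    return 0;
}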
/proc/<pid>/stat
The patch also extends the stat output of individual processes
(obtainable from /proc/<pid>/stat) to include the same information.
There, the three latency fields described above are tacked onto the end
of the line but apply only to that process.
The program latency.c makes use of these extra fields to report on how
well a particular process is faring under the scheduler's policies. The
example below uses that program on a particular process rather than
examining runqueue statistics by sampling /proc/schedstat. The observed
program, loadtest, is a simple cpu-bound program and so uses up most of
its allocated timeslice without voluntarily pausing for I/O. Processes
such as cc and bash may well pause for I/O and other events, giving up
the cpu at times, and thus appear to have much smaller timeslices. It
is important to remember, though, that avgrun only tells us how long,
on average, we were on the cpu each time, not what our granted
timeslice was.
% latency 25611
25611 (loadtest) avgrun=60.36ms avgwait=0.00ms
25611 (loadtest) avgrun=92.56ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.96ms avgwait=0.00ms
25611 (loadtest) avgrun=99.96ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.02ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.92ms avgwait=0.02ms
25611 (loadtest) avgrun=99.96ms avgwait=0.00ms
Process 25611 (loadtest) has exited.
%
Since the above test was done on an unloaded, multiple-cpu machine,
loadtest
pretty much had a cpu to itself, was granted about a 100ms timeslice,
and used virtually all of it before giving up the cpu.
Renicing loadtest can show dramatically how the
timeslice changes with the
priority of the process. Running N+1 loadtests, where N
is the number of processors on the machine, introduces contention and
the avgwait field starts to go up significantly.
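The source of latency.c is not reproduced here, but a minimal sketch of a
monitor in its spirit, assuming the three new fields are the last three
fields of the /proc/<pid>/stat line, might look like this:

/*
 * A compact latency.c-style monitor: every two seconds, report the
 * average run and wait time per timeslice since the last sample.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    unsigned long long run0 = 0, wait0 = 0, cnt0 = 0;
    char path[64], buf[4096];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);

    for (;;) {
        unsigned long long v[3] = { 0, 0, 0 };
        FILE *fp = fopen(path, "r");
        char *tok;

        if (fp == NULL || fgets(buf, sizeof(buf), fp) == NULL) {
            printf("Process %s has exited.\n", argv[1]);
            return 0;
        }
        fclose(fp);
        /* slide a three-field window to the end of the line */
        for (tok = strtok(buf, " \n"); tok; tok = strtok(NULL, " \n")) {
            v[0] = v[1];
            v[1] = v[2];
            v[2] = strtoull(tok, NULL, 10);
        }
        /* v[0] = run time (ms), v[1] = wait time (ms), v[2] = # timeslices */
        if (v[2] > cnt0)
            printf("%s avgrun=%.2fms avgwait=%.2fms\n", argv[1],
                   (double)(v[0] - run0) / (v[2] - cnt0),
                   (double)(v[1] - wait0) / (v[2] - cnt0));
        run0 = v[0];
        wait0 = v[1];
        cnt0 = v[2];
        sleep(2);
    }
}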
Questions to ricklind@us.ibm.com