Linux Scheduler Statistics
/proc/schedstat format and /proc/<pid>/stat format changes, version 5


If you have scripts for version 4, note that several fields were deleted, causing other fields to move, and some new fields were added. Version 4 scripts should require a moderate porting effort, depending on how modular you made the field parsing.

Format for version 5 of schedstat:

tag 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
tag is either cpuN (the statistics for cpu N) or totals (the same fields summed across all cpus).

NOTE: In the sched_yield() statistics, the active queue is considered empty if it contains only one process, since that one process is necessarily the process calling sched_yield().

First four are sched_yield() statistics:

  1. # of times both the active and the expired queue were empty
  2. # of times just the active queue was empty
  3. # of times just the expired queue was empty
  4. # of times sched_yield() was called

Next three are schedule() statistics:

  5. # of times the active queue had at least one other process on it
  6. # of times we switched to the expired queue and reused it
  7. # of times schedule() was called

Next seven are statistics dealing with load_balance() (requires CONFIG_SMP):

  8. # of times load_balance() was called at an idle tick
  9. # of times load_balance() was called at a busy tick
 10. # of times load_balance() was called from schedule()
 11. # of times load_balance() was called
 12. sum of imbalances discovered (if any) with each call to load_balance()
 13. # of times load_balance() was called when we did not find a "busiest" group
 14. # of times load_balance() was called when we did not find a "busiest" queue

Next two are statistics dealing with pull_task() (requires CONFIG_SMP):

 15. # of times pull_task() moved a task to this cpu
 16. # of times pull_task() stole a task from this cpu

Next three are statistics dealing with active_load_balance() (requires CONFIG_SMP):

 17. # of times active_load_balance() was called
 18. # of times active_load_balance() caused us to gain a task
 19. # of times active_load_balance() caused us to lose a task

Next two are simply call counters for two routines:

 20. # of times sched_balance_exec() was called
 21. # of times migrate_to_cpu() was called

Last three are statistics dealing with scheduling latency:

 22. sum of all time spent running by tasks on this processor (in ms)
 23. sum of all time spent waiting by tasks for this processor (in ms)
 24. # of tasks (not necessarily unique) given to the processor

These last three fields make it possible to find the average scheduling latency on a particular runqueue, or across the system as a whole. Given two samples taken at times A and B, (23B - 23A)/(24B - 24A) gives the average time processes had to wait, after becoming runnable, before actually getting the cpu. Similarly, (22B - 22A)/(24B - 24A) gives the average time spent actually running per timeslice.
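
As an illustration of that arithmetic, here is a minimal sketch (not part of the patch, and not the latency.c program mentioned below). It assumes the version 5 layout described above, and arbitrarily watches the cpu0 line; it samples /proc/schedstat twice and reports the average wait time per timeslice on that cpu:

/*
 * avgwait.c -- sketch only: sample /proc/schedstat twice and compute
 * the average wait time on cpu0, assuming the version 5 layout above,
 * where field 23 is the wait-time sum (ms) and field 24 is the number
 * of timeslices handed out.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Fetch fields 23 and 24 from the line whose tag is "cpu0". */
static int read_cpu0(unsigned long long *wait_ms, unsigned long long *slices)
{
	char line[1024];
	FILE *fp = fopen("/proc/schedstat", "r");
	if (!fp)
		return -1;
	while (fgets(line, sizeof(line), fp)) {
		char tag[32];
		unsigned long long f[24];
		int n = sscanf(line,
		    "%31s %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu"
		    " %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
		    tag, &f[0], &f[1], &f[2], &f[3], &f[4], &f[5], &f[6], &f[7],
		    &f[8], &f[9], &f[10], &f[11], &f[12], &f[13], &f[14], &f[15],
		    &f[16], &f[17], &f[18], &f[19], &f[20], &f[21], &f[22], &f[23]);
		if (n == 25 && strcmp(tag, "cpu0") == 0) {
			*wait_ms = f[22];	/* field 23: time spent waiting (ms) */
			*slices  = f[23];	/* field 24: timeslices handed out   */
			fclose(fp);
			return 0;
		}
	}
	fclose(fp);
	return -1;
}

int main(void)
{
	unsigned long long waitA, slicesA, waitB, slicesB;

	if (read_cpu0(&waitA, &slicesA))	/* point A */
		return 1;
	sleep(5);
	if (read_cpu0(&waitB, &slicesB))	/* point B */
		return 1;

	if (slicesB > slicesA)
		printf("cpu0 avgwait = %.2f ms\n",
		       (double)(waitB - waitA) / (double)(slicesB - slicesA));
	else
		printf("cpu0: no new timeslices in the sampling interval\n");
	return 0;
}

Matching a different tag (another cpuN, or totals) gives the same figure for that runqueue or for the system as a whole.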


/proc/<pid>/stat

The patch also extends the stat output of individual processes (obtainable from /proc/<pid>/stat) to include the same information. There, the three latency fields described above are tacked onto the end, but they apply only to that process. The program latency.c, mentioned on the previous page, makes use of these extra fields to report on how well a particular process is faring under the scheduler's policies. The example below uses that program on a particular process, rather than examining runqueue statistics by sampling /proc/schedstat.

The program being observed, loadtest, is a simple cpu-bound program, so it uses up most of its allotted timeslice without voluntarily pausing for I/O. Processes such as cc and bash may well pause for I/O events and give up the cpu at times, and thus appear to have much smaller timeslices. It is important to remember, though, that avgrun only tells us how long, on average, we were on the cpu each time, not what our given timeslice was.

% latency 25611
25611 (loadtest) avgrun=60.36ms avgwait=0.00ms
25611 (loadtest) avgrun=92.56ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.96ms avgwait=0.00ms
25611 (loadtest) avgrun=99.96ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.02ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.94ms avgwait=0.00ms
25611 (loadtest) avgrun=99.92ms avgwait=0.02ms
25611 (loadtest) avgrun=99.96ms avgwait=0.00ms
Process 25611 (loadtest) has exited.
%

Since the above test was done on an unloaded, multiple-cpu machine, loadtest pretty much had a cpu to itself, was granted about a 100ms timeslice, and used virtually all of it before giving up the cpu. Renicing loadtest shows dramatically how the timeslice changes with the priority of the process. Running N+1 loadtests, where N is the number of processors on the machine, introduces contention, and the avgwait field starts to climb significantly.
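
For readers who do not have latency.c at hand, a rough approximation can be sketched along the same lines. The program below is an assumption-laden stand-in, not the actual latency.c: it presumes the three latency fields are simply the last three whitespace-separated numbers on the /proc/<pid>/stat line, and it prints per-interval averages in roughly the format shown above.

/*
 * A rough approximation of latency.c (not the actual program): it
 * assumes the patch appends the three latency fields -- run time (ms),
 * wait time (ms), and timeslice count -- as the last three fields of
 * /proc/<pid>/stat, and prints per-interval averages.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Parse the last three numbers on the process's stat line. */
static int read_latency(const char *pid, unsigned long long *run,
			unsigned long long *wait, unsigned long long *slices)
{
	char path[64], buf[4096];
	unsigned long long v[3] = { 0, 0, 0 };
	size_t len;
	char *p;
	int i;
	FILE *fp;

	snprintf(path, sizeof(path), "/proc/%s/stat", pid);
	fp = fopen(path, "r");
	if (!fp)
		return -1;
	len = fread(buf, 1, sizeof(buf) - 1, fp);
	fclose(fp);
	buf[len] = '\0';

	/* Walk backwards over the last three whitespace-separated fields. */
	p = buf + len;
	for (i = 2; i >= 0; i--) {
		while (p > buf && (*(p - 1) == ' ' || *(p - 1) == '\n'))
			p--;
		while (p > buf && *(p - 1) != ' ')
			p--;
		v[i] = strtoull(p, NULL, 10);
	}
	*run = v[0]; *wait = v[1]; *slices = v[2];
	return 0;
}

int main(int argc, char **argv)
{
	unsigned long long runA, waitA, slA, runB, waitB, slB;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	if (read_latency(argv[1], &runA, &waitA, &slA))
		return 1;
	for (;;) {
		sleep(2);
		if (read_latency(argv[1], &runB, &waitB, &slB)) {
			printf("Process %s has exited.\n", argv[1]);
			return 0;
		}
		if (slB > slA)
			printf("%s avgrun=%.2fms avgwait=%.2fms\n", argv[1],
			       (double)(runB - runA) / (double)(slB - slA),
			       (double)(waitB - waitA) / (double)(slB - slA));
		runA = runB; waitA = waitB; slA = slB;
	}
}

Since it divides by the number of timeslices completed in each sampling interval, an interval in which no timeslices completed simply produces no output line.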

Questions to ricklind@us.ibm.com