Blog

  • Debian WordPress 6.5

    Today I have updated the Debian WordPress packages to version 6.5.

    Not exactly sure what has changed, but they’re very excited over on the WordPress site about fonts and templates. I don’t think I’m selling it well, so hop over to the WordPress 6.5 Announcement for the real details.

  • Debian WordPress 6.4.1

    The Debian WordPress package was updated tonight to version 6.4.1. Version 6.4 was missed before upstream moved on to this minor update.

    The major change I can see is the introduction of a new theme called twentytwentyfour, plus some easier, or more confusing, ways of writing posts. If you want more control over how posts look, you’ll love it; if you just want to bang something out, you won’t.

  • Devices with cgroup v2

    Docker and other container systems by default restrict access to devices on the host. They used to do this with the devices controller in the cgroup v1 system; however, the second version of cgroups removed this controller, and the kernel documentation says:

    Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF.
    https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst

    That is just awesome, nothing to see here, go look at the BPF documents if you have cgroup v2.

    With cgroup v1, if you wanted to know what devices were permitted, you could just cat /sys/fs/cgroup/XX/devices.list and you were done!

    The kernel documentation is not very helpful; sure, it’s something in BPF and has something to do with cgroup BPF specifically, but what does that mean?

    There doesn’t seem to be an easy corresponding method to get the same information. So to see what restrictions a docker container has, we will have to:

    1. Find what cgroup the programs running in the container belong to
    2. Find what is the eBPF program ID that is attached to our container cgroup
    3. Dump the eBPF program to a text file
    4. Try to interpret the eBPF syntax

    The last step is by far the most difficult.

    Finding a container’s cgroup

    All containers have a short ID and a long ID. When you run the docker ps command, you get the short ID. To get the long ID you can either use the --no-trunc flag or just complete it from the short ID; I usually do the latter.

    $ docker ps 
    CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS          PORTS     NAMES
    a3c53d8aaec2   debian:minicom   "/bin/bash"   19 minutes ago   Up 19 minutes             inspiring_shannon
    

    So the short ID is a3c53d8aaec2 and the long ID is a big ugly hex string starting with that. I generally just paste the relevant part in the next step and hit tab. For this container the cgroup is /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/. Notice that the last directory is “docker-” followed by the long ID.

    If you’re not sure of the exact path: “/sys/fs/cgroup” is the cgroup v2 mount point, which can be found with mount -t cgroup2, and the rest is the actual cgroup name. If you know a process running in the container, then the cgroup column in ps will show you.

    $ ps -o pid,comm,cgroup 140064
        PID COMMAND         CGROUP
     140064 bash            0::/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope

    Either way, you will have your cgroup path.
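
    If you’d rather script this step, the same information can be read straight out of procfs; a small sketch (assumes Linux with the usual /proc layout):

```python
def cgroup_of(pid):
    """Return the cgroup path of a process, as listed in /proc/<pid>/cgroup."""
    with open(f"/proc/{pid}/cgroup") as f:
        # with cgroup v2 there is a single line of the form "0::/<cgroup path>"
        return f.readline().strip().split(":", 2)[2]

print(cgroup_of("self"))  # the cgroup of this very script
```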

    eBPF programs and cgroups

    Next we will need to get the eBPF program ID that is attached to our recently found cgroup. To do this, we will need to use bpftool. One thing that threw me for a long time is that when the tool talks about a program or a PROG ID, it means an eBPF program, not one of your processes! With that out of the way, let’s find the prog ID.

    $ sudo bpftool cgroup list /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/
    ID       AttachType      AttachFlags     Name
    90       cgroup_device   multi

    Our cgroup is attached to the eBPF prog with an ID of 90 and the type of program is cgroup_device.

    Dumping the eBPF program

    Next, we need to get the actual code that is run every time a process in the cgroup tries to access a device. The program takes some parameters and returns either 1 for “yes, you are allowed” or 0 for “permission denied”. Don’t use the file option, as it dumps the program in binary format; the text version is hard enough to understand.

    sudo bpftool prog dump xlated id 90 > myebpf.txt

    Congratulations! You now have the eBPF program in a human-readable (?) format.

    Interpreting the eBPF program

    The eBPF format as dumped is not exactly user friendly. It probably helps to first go and look at an example program to see what is going on. You’ll see that the program splits the type (lower 16 bits) and access (upper 16 bits) and then does comparisons on those values. The dumped eBPF does something similar:

       0: (61) r2 = *(u32 *)(r1 +0)
       1: (54) w2 &= 65535
       2: (61) r3 = *(u32 *)(r1 +0)
       3: (74) w3 >>= 16
       4: (61) r4 = *(u32 *)(r1 +4)
       5: (61) r5 = *(u32 *)(r1 +8)
    

    Once we get past the first few lines that unpack the given value, the comparison lines use:

    • r2 is the device type: 1 is block, 2 is character.
    • r3 is the device access, a bitmask used for comparisons (via r1 as scratch) after masking the relevant bits: mknod, read and write are bits 1, 2 and 4 respectively.
    • r4 is the major number.
    • r5 is the minor number.
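
    As a rough mental model (plain Python, not real eBPF), the setup instructions 0-5 unpack the first 32-bit word: the device type sits in the low 16 bits and the access bitmask in the high 16 bits.

```python
def unpack_ctx(access_type_word, major, minor):
    """Mimic eBPF instructions 0-5: split the packed access/type word."""
    dev_type = access_type_word & 0xFFFF  # r2: 1 = block, 2 = character
    access = access_type_word >> 16       # r3: mknod/read/write bitmask
    return dev_type, access, major, minor

# a character device (type 2) opened for read (access bit 2), device 100:42
print(unpack_ctx((2 << 16) | 2, 100, 42))
```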

    Even for a pretty simple setup, you are going to have around 60 lines of eBPF code to look at. Luckily, the lines for the command options you added will often be near the end, which makes it easier. For example:

      63: (55) if r2 != 0x2 goto pc+4
      64: (55) if r4 != 0x64 goto pc+3
      65: (55) if r5 != 0x2a goto pc+2
      66: (b4) w0 = 1
      67: (95) exit

    This is a container using the option --device-cgroup-rule='c 100:42 rwm'. It is checking if r2 (device type) is 2 (char), r4 (major device number) is 0x64 or 100, and r5 (minor device number) is 0x2a or 42. If any of those are not true, move to the next section; otherwise return 1 (permit). All access modes are permitted, so it doesn’t check the access value.

    The previous example has all permissions for our device with ID 100:42; what if we only want read access, with the option --device-cgroup-rule='c 100:42 r'? The resulting eBPF is:

      63: (55) if r2 != 0x2 goto pc+7  
      64: (bc) w1 = w3
      65: (54) w1 &= 2
      66: (5d) if r1 != r3 goto pc+4
      67: (55) if r4 != 0x64 goto pc+3
      68: (55) if r5 != 0x2a goto pc+2
      69: (b4) w0 = 1
      70: (95) exit
    

    The code is almost the same, but we are checking that w3 has only the read bit set, effectively checking X==X&2. It’s a cautious approach: requesting no access still passes, but any other bit being set will fail.
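
    A Python sketch of that read-only rule (using the access bit values described earlier; returning None means “fall through to the next section of the program”, not an outright deny):

```python
ACC_READ = 2  # the read bit in the device-access bitmask

def rule_c_100_42_r(dev_type, access, major, minor):
    """Rough equivalent of the --device-cgroup-rule='c 100:42 r' eBPF section."""
    if dev_type != 2:                    # 63: not a character device
        return None
    if access & ACC_READ != access:      # 64-66: a bit other than read is set
        return None
    if major != 0x64 or minor != 0x2a:   # 67-68: not device 100:42
        return None
    return 1                             # 69-70: w0 = 1, permit

print(rule_c_100_42_r(2, ACC_READ, 100, 42))      # read only: permitted
print(rule_c_100_42_r(2, ACC_READ | 4, 100, 42))  # read+write: falls through
```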

    The device option

    docker run allows you to specify files you want to grant access to in your containers with the --device flag. This flag actually does two things. The first is to create the device file in the container’s /dev directory, effectively doing a mknod command. The second is to adjust the eBPF program. If the device file we specified actually did have a major number of 100 and a minor of 42, the eBPF would look exactly like the above snippets.

    What about privileged?

    So we have used the direct cgroup options here, what does the --privileged flag do? This lets the container have full access to all the devices (if the user running the process is allowed). Like the --device flag, it makes the device files as well, but what does the filtering look like? We still have a cgroup but the eBPF program is greatly simplified, here it is in full:

       0: (61) r2 = *(u32 *)(r1 +0)
       1: (54) w2 &= 65535
       2: (61) r3 = *(u32 *)(r1 +0)
       3: (74) w3 >>= 16
       4: (61) r4 = *(u32 *)(r1 +4)
       5: (61) r5 = *(u32 *)(r1 +8)
       6: (b4) w0 = 1
       7: (95) exit

    There are the usual setup lines and then: return 1. Everyone is a winner, for all devices and access types!

  • Fixing iCalendar feeds

    The local government here has all the schools use an iCalendar feed for things like when school terms start and stop and when other school events occur. The department’s website also has events like public holidays. The issue is that none of them are marked as all-day events; instead each is sent as an event at midnight, or one minute past midnight.

    The events synchronise fine, though Google’s calendar is known for synchronising when it feels like it, not at any particular time you would like it to.

    Screenshot of Android Calendar showing a tiny bar at midnight which is the event.

    Even though a public holiday is all day, they are sent as appointments for midnight.

    That means on my phone all the events are these tiny bars that appear right up the top of the screen and are easily missed, especially when the focus of the calendar is during the day.

    On the phone, you can see the tiny purple bar at midnight. This is how the events appear. It’s not the calendar’s fault, as far as it knows the school events are happening at midnight.

    You can also see Lunar New Year and Australia Day appear in the all-day part of the calendar and don’t scroll away. That’s where these events should be.

    Why are all the events appearing at midnight? The reason is that the feed is incorrectly set up and includes a time. The events are sent in iCalendar format and a typical event looks like this:

    BEGIN:VEVENT
    DTSTART;TZID=Australia/Sydney:20230206T000000
    DTEND;TZID=Australia/Sydney:20230206T000000
    SUMMARY:School Term starts
    END:VEVENT

    The event starting and stopping date and time are the DTSTART and DTEND lines. Both of them have the date of 2023/02/06 or 6th February 2023 and a time of 00:00:00 or midnight. So the calendar is doing the right thing, we need to fix the feed!

    The Fix

    I wrote a quick and dirty PHP script to download the feed from the real site, change the DTSTART and DTEND lines to all-day events and leave the rest of it alone.

    <?php
    $site = $_GET['s'];
    if ($site == 'site1') {
        $REMOTE_URL='https://site1.example.net/ical_feed';
    } elseif ($site == 'site2') {
        $REMOTE_URL='https://site2.example.net/ical_feed';
    } else {
        http_response_code(400);
        die();
    }
    
    $fp = fopen($REMOTE_URL, "r");
    if (!$fp) {
        die("fopen");
    }
    header('Content-Type: text/calendar');
    while (( $line = fgets($fp, 1024)) !== false) {
        $line = preg_replace(
            '/^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00/',
            '${1};VALUE=DATE:${2}',
            $line);
        echo $line;
    }
    ?>

    It’s pretty quick and nasty but gets the job done. So what is it doing?

    • Lines 2-10: Check the given variable s and match it to either “site1” or “site2” to obtain the URL. If you only had one site to fix you could just set the REMOTE_URL variable.
    • Lines 12-15: A typical fopen() and nasty error handling.
    • Line 16: set the content type to a calendar.
    • Line 17: A while loop to read the contents of the remote site line by line.
    • Line 18-21: This is where the “magic” happens, preg_replace is a Perl regular expression replacement. The PCRE is:
      • Finding lines starting with DTSTART or DTEND and storing the keyword in capture 1
      • Skip everything up to the colon. This is the timezone information; I wasn’t sure if it was needed or how to combine it, so I took it out. None of the all-day events I saw have a time zone.
      • Find 8 numerics (this is the YYYYMMDD date) and store them in capture 2.
      • Scan the time part: a literal “T” then HHMMSS. Some sites use midnight, some use one minute past, so it covers both.
      • Replace the line with either DTSTART or DTEND (capture 1), set the value type to DATE as the default is date/time and print the date (capture 2).
    • Line 22: Print either the modified or original line.
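
    If you want to experiment with the pattern outside PHP, the same substitution can be tried with Python’s re module (just a sketch for testing, not part of the script):

```python
import re

pattern = r"^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00"
line = "DTSTART;TZID=Australia/Sydney:20230206T000000"

print(re.sub(pattern, r"\1;VALUE=DATE:\2", line))
# DTSTART;VALUE=DATE:20230206
```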

    You need to save the script on your web server somewhere, possibly with an alias command.

    The whole point of this is to change the type from a date/time to a date-only event, printing only the date part for the start and end. The resulting iCalendar event looks like this:

    BEGIN:VEVENT
    DTSTART;VALUE=DATE:20230206
    DTEND;VALUE=DATE:20230206
    SUMMARY:School Term starts
    END:VEVENT

    The calendar then shows it properly as an all-day event. I would check the script works before doing the next step. You can use things like curl or wget to download it. If you use a normal browser, it will probably just download the translated file.

    If you’re not seeing the right thing, then it’s probably the PCRE failing. You can check it online with a regex checker such as https://regex101.com. The site has saved my PCRE and a match, so you have something to start with.

    Calendar settings

    The last thing to do is to change the URL in your calendar settings. Each calendar system has a different way of doing it. For Google Calendar they provide instructions and you want to follow the section titled “Use a link to add a public Calendar”.

    The URL here is not the actual site’s URL (which you would have put into the REMOTE_URL variable before) but the URL of your script plus the “?s=site1” part. So if your script is aliased to /myical.php, the site ID is site1, and your website is www.example.com, the URL would be “https://www.example.com/myical.php?s=site1”.

    You should then see the events appear as all-day events on your calendar.

  • WordPress 6.1

    Debian will soon have WordPress version 6.1. I’m not really sure of the improvements, but there is a new 2023 theme as part of the update.

    They really weren’t mucking around when they said the 6.0.3 security release would be short-lived.

    The updates seem to be focused on content creation and making the formatting do what content creators want it to do. For me, I need headings 1 and 2, paragraphs, and preformatted text.

  • Linux Memory Statistics

    Pretty much everyone who has spent some time on a command line in Linux would have looked at the free command. This command provides some overall statistics on the memory and how it is used. Typical output looks something like this:

                 total        used        free      shared  buff/cache  available
    Mem:      32717924     3101156    26950016      143608     2666752  29011928
    Swap:      1000444           0     1000444
    

    Memory sits in the first row after the headers then we have the swap statistics. Most of the numbers are directly fetched from the procfs file /proc/meminfo which are scaled and presented to the user. A good example of a “simple” stat is total, which is just the MemTotal row located in that file. For the rest of this post, I’ll make the rows from /proc/meminfo have an amber background.

    What is Free, and what is Used?

    While you could say that the free value is also merely the MemFree row, this is where Linux memory statistics start to get odd. While that value is indeed what is found for MemFree and not a calculated field, it can be misleading.

    Most people would assume that Free means free to use, with the implication that only this amount of memory is free to use and nothing more. That would also mean the used value is really used by something and nothing else can use it.

    In the early days of free and Linux statistics in general that was how it looked. Used is a calculated field (there is no MemUsed row) and was, initially, Total - Free.

    The problem was, Used also included Buffers and Cached values. This meant that it looked like Linux was using a lot of memory for… something. If you read old messages before 2002 that are talking about excessive memory use, they quite likely are looking at the values printed by free.

    The thing was, under memory pressure the kernel could release Buffers and Cached for use. Not all of that storage, but some of it, so it wasn’t really all used. To counter this, free showed a row between Memory and Swap, with Used having Buffers and Cached removed and Free having the same values added:

                 total       used       free     shared    buffers     cached
    Mem:      32717924    6063648   26654276          0     313552    2234436
    -/+ buffers/cache:    3515660   29202264
    Swap:      1000444          0    1000444

    You might notice that this older version of free from around 2001 shows buffers and cached separately, and there’s no available column (we’ll get to Available later). Shared appears as zero because the old row was labelled MemShared and not Shmem; the name changed in Linux 2.6 and I’m running a system way past that version.

    It’s not ideal: you can say that the amount of free memory is somewhere above 26654276 and below 29202264 KiB, but nothing more accurate. buffers and cached are almost never all-used or all-unused, so the real figure is not either of those numbers but something in between.

    Cached, just not for Caches

    That appeared to be an uneasy truce within the Linux memory statistics world for a while. By 2014 we realised that there was a problem with Cached. This field used to hold the memory used to cache files read from storage. While the value still has that component, it was also being used for tmpfs storage, and the use of tmpfs went from an interesting idea to being everywhere. Cheaper memory meant large tmpfs partitions went from a luxury to something everyone was doing.

    The problem is that with large files put into a tmpfs partition, Free would decrease but Cached would increase, meaning the free column in the -/+ row would not change much and would understate the impact of files in tmpfs.

    Luckily, in Linux 2.6.32 the developers added a Shmem row, which is the amount of memory used for shmem and tmpfs. Subtracting that value from Cached gave you the “real” cached value, which we called main_cache, and very briefly this is what the cached value in free showed.

    However, this caused further problems because not all Shmem can be reclaimed and reused, and it probably swapped one set of problematic values for another. It did, however, prompt the Linux kernel community to have a look at the problem.

    Enter Available

    There was increasing awareness of the issues with working out how much memory a system has free within the kernel community. It wasn’t just the output of free or the percentage values in top, but load balancer or workload placing systems would have their own view of this value. As memory management and use within the Linux kernel evolved, what was or wasn’t free changed and all the userland programs were expected somehow to keep up.

    The kernel developers realised the best place to get an estimate of the memory not in use was in the kernel, and they created a new memory statistic called Available. That way, if how memory is used changes, or some of it becomes unreclaimable, they can adjust the estimate and userland programs simply go along with it.

    For kernels without this statistic, procps has a fallback for this value, and it’s a pretty complicated setup.

    1. Find the vm.min_free_kbytes sysctl, which is the minimum amount of free memory the kernel will keep in reserve
    2. Add 25% to this value (e.g. if it was 4000, make it 5000); this is the low watermark
    3. To find available, start with MemFree and subtract the low watermark
    4. If half the sum of Inactive(file) and Active(file) is greater than the low watermark, add the sum minus the low watermark; otherwise add half the sum
    5. If half of reclaimable Slab (SReclaimable) is greater than the low watermark, add SReclaimable minus the low watermark; otherwise add half of it
    6. If the result is less than zero, make available zero
    7. Or, just look at MemAvailable in /proc/meminfo

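    A Python sketch of that fallback calculation (all values in KiB; the min() calls cap each addition at the low watermark, and the sample numbers below are invented):

```python
def fallback_available(mem_free, inactive_file, active_file,
                       slab_reclaimable, min_free_kbytes):
    """Estimate Available when the kernel provides no MemAvailable row."""
    watermark_low = min_free_kbytes * 5 // 4          # min_free_kbytes plus 25%
    available = mem_free - watermark_low
    pagecache = inactive_file + active_file
    available += pagecache - min(pagecache // 2, watermark_low)
    available += slab_reclaimable - min(slab_reclaimable // 2, watermark_low)
    return max(available, 0)                          # never less than zero

print(fallback_available(26950016, 1000000, 1200000, 300000, 67584))
```
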
    For the free program, we added the Available value and the -/+ line was removed. The main_cache value was Cached + Slab, while Used was calculated as Total - Free - main_cache - Buffers. This was very close to what the Used column in the -/+ line used to show.

    What’s on the Slab?

    The next issue that came up was the use of slabs. At this point, main_cache was Cached + Slab, but Slab consists of reclaimable and unreclaimable components. One part of Slab can be used elsewhere if needed and the other cannot, but the procps tools treated them the same. The Used calculation should not subtract SUnreclaim from the Total, because that memory is actually being used.

    So in 2015 main_cache was changed to be Cached + SReclaimable. This meant that Used memory was calculated as Total - Free - Cached - SReclaimable - Buffers.

    Revenge of tmpfs and the return of Available

    The tmpfs impacting Cached was still an issue. If you added a 10MB file into a tmpfs partition, then Free would reduce by 10MB and Cached would increase by 10MB meaning Used stayed unchanged even though 10MB had gone somewhere.

    It was time to retire the complex calculation of Used. For procps 4.0.1 onwards, Used now means “not available”. We take the Total memory and subtract the Available memory. This is not a perfect setup but it is probably going to be the best one we have and testing is giving us much more sensible results. It’s also easier for people to understand (take the total value you see in free, then subtract the available value).

    What does that mean for main_cache which is part of the buff/cache value you see? As this value is no longer in the used memory calculation, it is less important. Should it also be reverted to simply Cached without the reclaimable Slabs?

    The calculated fields

    In summary, what this means for the calculated fields in procps at least is:

    • Used: Total - Available; if Available is not present, then Total - Free
    • Cached: Cached + Reclaimable Slabs (SReclaimable)
    • Swap/Low/High Used: corresponding Total - Free (no change here)

    Almost everything else, with the exception of some bounds checking, is what you get out of /proc/meminfo which is straight from the kernel.
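
    Putting the summary into code, a small sketch deriving those fields from a hand-made set of /proc/meminfo rows (the sample numbers are invented):

```python
def calculated_fields(mi):
    """Derive free(1)'s Used and Cached values from /proc/meminfo rows (KiB)."""
    if "MemAvailable" in mi:
        used = mi["MemTotal"] - mi["MemAvailable"]
    else:
        used = mi["MemTotal"] - mi["MemFree"]      # fallback on old kernels
    cached = mi["Cached"] + mi["SReclaimable"]     # cache part of buff/cache
    return used, cached

sample = {"MemTotal": 32717924, "MemFree": 26950016,
          "MemAvailable": 29011928, "Cached": 2400000, "SReclaimable": 150000}
print(calculated_fields(sample))  # (3705996, 2550000)
```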

  • WordPress 5.8.2 Debian packages

    After a bit of a delay, WordPress version 5.8.2 packages should be available now. This is a minor update from 5.8.1 which fixes two bugs but not the security bug.

    The security bug is due to WordPress shipping its own CA store, which is a list of certificates it trusts to sign for websites. Debian WordPress has used the system certificate store which uses /etc/ssl/certs/ca-certificates.crt for years so is not impacted by this change. That CA file is generated by update-ca-certificates and is part of the ca-certificates package.

    We have also had another go at tamping down the nagging WordPress does about updates, as you cannot use automatic updates through WordPress but must update via the usual Debian system. I see we are not fully there yet, as WordPress has a site health page that doesn’t like things being turned off.

    The two bugs fixed in 5.8.2 I’ve not personally hit, but they might help someone out there. In any case, an update is always good.

    Next stop 5.9

    The next planned release is in late January 2022. I’m sure there will be a new default theme, but they are planning on making big changes around the blocks and styles to make it easier to customise the look.

  • Fediverse Test Three

    This supposedly will go out to the fediverse if I can fix wp-cron.

  • Changing Grafana Legends

    I’m not sure if I just can’t search Google properly, or this really is just not written down much, but I have had problems with Grafana legends (I would call them the series labels). The issue is that Grafana queries Prometheus for a time series and you want to display multiple lines, but the time-series labels you get back are just not quite right.

    A simple example: you might be using the blackbox exporter to monitor an external TCP port, and you would just like to display the port number on its own. The default output would look like this:

    probe_duration_seconds{instance="example.net:5222",job="blackbox",module="xmpp_banner"} = 0.01
    probe_duration_seconds{instance="example.net:5269",job="blackbox",module="xmpp_banner"} = 0.01
    

    I can graph the number of seconds that it takes to probe the 5222 and 5269 TCP ports, but my graph legend is going to have the hostname, making it cluttered. I just want the legend to be the port numbers on Grafana.

    The answer is to use a Prometheus function called label_replace that takes an existing label, applies a regular expression, then puts the result into another label. That’s right, regular expressions, and if you get them wrong then the label just doesn’t appear.

    Perl REGEX Problems courtesy of XKCD

    The label_replace documentation is a bit terse, and in my opinion, the order of parameters is messed up, but after a few goes I had what I needed:

    label_replace(probe_duration_seconds{module="xmpp_banner"}, "port", "$1", "instance", ".*:(.*)")
    
    probe_duration_seconds{instance="example.net:5222",job="blackbox",module="xmpp_banner",port="5222"}	0.001
    probe_duration_seconds{instance="example.net:5269",job="blackbox",module="xmpp_banner",port="5269"}	0.002
    

    The response now has a new label (or field, if you like) called port. So what is this function doing to our data coming from probe_duration_seconds? The function format is:

    label_replace(value, dst_label, replacement, src_label, regex)

    So the function does the following:

    1. Evaluate value, which is generally some sort of query such as probe_duration_seconds
    2. Find the required source label src_label, which in this example is instance; here the values are example.net:5222 and example.net:5269
    3. Apply the regular expression regex, which for us is “.*:(.*)”. That says skip everything before “:”, then capture and store everything after it. The brackets mean what follows the colon is kept as the first capture group
    4. Make a new label specified in dst_label, for us this is port
    5. Whatever is in replacement goes into dst_label. For this example it is “$1”, which means the first capture group of our regular expression

    In short, the function captures everything after the colon in the instance label and puts that into a new label called port. It does this for each value that is returned into the first parameter.
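
    You can check what the regular expression will capture with any PCRE-style engine; for example, in Python (outside Prometheus, purely to test the pattern):

```python
import re

for instance in ["example.net:5222", "example.net:5269"]:
    port = re.fullmatch(r".*:(.*)", instance).group(1)
    print(port)
```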

    This means I can use {{port}} in my Grafana graph legend and it will show 5222 or 5269 respectively. I have made the legend “TCP {{port}}” to give the below result, but I could have kept {{port}} in the Grafana legend and used “TCP $1” as the replacement in the label_replace function to get the same result.

    Grafana console showing the use of the label_replace function
  • Percent CPU for processes

    The ps program gives a snapshot of the processes running on your Unix-like system. On most Linux installations, this will be the ps program from the procps project.

    While you can get a lot of information from the tool, a lot of the fields need further explanation or can give “wrong” or confusing information; or putting it another way, they provide the right information that looks wrong.

    One of these confusing fields is the %CPU or pcpu field. You can see this as the third field with the ps aux command. You only really need the u option to see it, but ps aux is a pretty common invocation.

    More than 100%?

    This post was inspired by procps issue 186, where the submitter expected that the sum of %CPU across all processes could not be more than the number of CPUs times 100%. If you have 1 CPU, then the sum of %CPU for all processes should be 100% or less; if you have 16 CPUs, then 1600% is your maximum number.

    Some people put the oddity of over 100% CPU down to some rounding thing gone wrong, and at first I did think that; except I know we get a lot of reports about the top header CPU load not lining up with the process load, and that’s because “they’re different”.

    The trick here is: ps is reporting a percentage of what? Or, perhaps to give a better clue, a percentage of when?

    PCPU Calculations

    So to get to the bottom of this, let’s look at the relevant code. In ps/output.c we have a function pr_pcpu that prints the percent CPU. The relevant lines are:

      total_time = pp->utime + pp->stime;
      if(include_dead_children)
          total_time += (pp->cutime + pp->cstime);
      seconds = cook_etime(pp);
      if (seconds)
          pcpu = (total_time * 1000ULL / Hertz) / seconds;

    OK, ignoring the include_dead_children line (you get this from the S option; it means you include the time used by this process’s waited-for children) and the scaling (process times are in jiffies, and the CPU value is held as 0 to 999 for reasons), you can reduce this down to:

    %CPU = ( Tutime + Tstime ) / Tetime

    So we find the amount of time the CPU(s) have been busy either in userland or the system, add them together, then divide the sum by the total time. The utime and stime increment like a car’s odometer. So if a process uses one Jiffy of CPU time in userland, that counter goes to 1. If it does it again a few seconds later, then that counter goes to 2.

    To give an example, if a process has run for ten seconds and within those ten seconds the CPU has been busy in userland for that process, then we get 10/10 = 100% which makes sense.
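
    Dropping the 0 to 999 scaling and expressing it as a plain percentage, the whole calculation is just (a sketch):

```python
def pcpu(utime_secs, stime_secs, etime_secs):
    """%CPU the way ps computes it: CPU seconds used over seconds alive."""
    return 100 * (utime_secs + stime_secs) / etime_secs

print(pcpu(10, 0, 10))  # busy for all of its ten seconds: 100.0
print(pcpu(10, 0, 20))  # the same work spread over twenty seconds: 50.0
```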

    Not all Start times are the same

    Let’s take another example: a process still consumes ten seconds of CPU time but has been running for twenty seconds; the answer is 10/20 or 50%. On our single-CPU example system, both of these cannot have been running flat out at the same time, otherwise we would have 150% CPU utilisation, which is not possible.

    However, let’s adjust this slightly. We have assumed uniform utilisation. But take the following scenario:

    • At time T: Process P1 starts and uses 100% CPU
    • At time T+10 seconds: Process P1 stops using CPU but still runs, perhaps waiting for I/O or sleeping.
    • Also at time T+10 seconds: Process P2 starts and uses 100% CPU
    • At time T+20 we run the ps command and look at the %CPU column

    The output for ps -o times,etimes,pcpu,comm would look something like:

        TIME ELAPSED %CPU COMMAND
          10      20   50 P1
          10      10  100 P2

    What we will see is P1 has 10/20 or 50% CPU and P2 has 10/10 or 100% CPU. Add those up, and you have 150% CPU, magic!

    The key here is the ELAPSED column. P1 has given you the CPU utilisation across 20 seconds of system time and P2 the CPU utilisation across only 10 seconds. If you directly add them together, you get the wrong answer.
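
    A tiny sketch with the P1/P2 numbers shows the trap: summing the per-process percentages mixes different elapsed times, while dividing the total CPU seconds by the shared 20-second window gives the sensible answer.

```python
procs = [(10, 20), (10, 10)]  # (cpu_seconds, elapsed_seconds) for P1 and P2

naive = sum(100 * cpu / elapsed for cpu, elapsed in procs)
window = 100 * sum(cpu for cpu, _ in procs) / 20  # both ran within the same 20 s

print(naive)   # 150.0
print(window)  # 100.0
```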

    What’s the point of %CPU?

    The %CPU column probably gives results that a lot of people are not expecting, so what’s the point of it? Don’t use it to see why the CPU is running hot; you can see above that those two processes were working the CPU hard at different times. What it is useful for is to see how “busy” a process is, but be warned: it’s an average. It’s helpful for something that starts busy, but if a process idles or hardly uses CPU for a week and then goes bananas, you won’t see it.

    The top program, because a lot of its statistics are deltas from the last refresh, is a much better program for this sort of information about what is happening right now.