Tag: Linux

  • Devices with cgroup v2

    Docker and other container systems by default restrict access to devices on the host. They used to do this with the device controller in the cgroup v1 system; however, the second version of cgroups removed this controller, and the kernel documentation says:

    Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF.
    https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst

    That is just awesome, nothing to see here, go look at the BPF documents if you have cgroup v2.

    With cgroup v1, if you wanted to know what devices were permitted, you would just cat /sys/fs/cgroup/devices/XX/devices.list and you were done!

    The kernel documentation is not very helpful: sure, it’s something in BPF and has something to do with the cgroup BPF specifically, but what does that mean?

    There doesn’t seem to be an easy corresponding method to get the same information. So to see what restrictions a docker container has, we will have to:

    1. Find what cgroup the programs running in the container belong to
    2. Find what is the eBPF program ID that is attached to our container cgroup
    3. Dump the eBPF program to a text file
    4. Try to interpret the eBPF syntax

    The last step is by far the most difficult.

    Finding a container’s cgroup

    All containers have a short ID and a long ID. When you run the docker ps command, you get the short ID. To get the long ID you can either use the --no-trunc flag or just guess from the short ID. I usually do the second.

    $ docker ps 
    CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS          PORTS     NAMES
    a3c53d8aaec2   debian:minicom   "/bin/bash"   19 minutes ago   Up 19 minutes             inspiring_shannon
    

    So the short ID is a3c53d8aaec2 and the long ID is a big ugly hex string starting with that. I generally just paste the relevant part in the next step and hit tab. For this container the cgroup is /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/ Notice that the last directory is “docker-”, then the long ID, then “.scope”.

    If you’re not sure of the exact path, “/sys/fs/cgroup” is the cgroup v2 mount point, which can be found with mount -t cgroup2, and the rest is the actual cgroup name. If you know a process running in the container, then the cgroup column in ps will show you.

    $ ps -o pid,comm,cgroup 140064
        PID COMMAND         CGROUP
     140064 bash            0::/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope

    Either way, you will have your cgroup path.

    eBPF programs and cgroups

    Next, we will need to get the eBPF program ID that is attached to our recently found cgroup. To do this, we will need to use bpftool. One thing that threw me off for a long time is that when the tool talks about a program or a PROG ID, it means the eBPF programs, not your processes! With that out of the way, let’s find the prog ID.

    $ sudo bpftool cgroup list /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/
    ID       AttachType      AttachFlags     Name
    90       cgroup_device   multi

    Our cgroup is attached to the eBPF prog with an ID of 90, and the type of program is cgroup_device.

    Dumping the eBPF program

    Next, we need to get the actual code that is run every time a process in the cgroup tries to access a device. The program takes some parameters and returns either 1 for “yes, you are allowed” or 0 for “permission denied”. Don’t use the file option, as it dumps the program in binary format; the text version is hard enough to understand.

    sudo bpftool prog dump xlated id 90 > myebpf.txt

    Congratulations! You now have the eBPF program in a human-readable (?) format.

    Interpreting the eBPF program

    The eBPF format as dumped is not exactly user friendly. It probably helps to first go and look at an example program to see what is going on. You’ll see that the program splits the type (lower 16 bits) and the access (upper 16 bits) out of the first word and then does comparisons on those values. The eBPF here does something similar:

       0: (61) r2 = *(u32 *)(r1 +0)
       1: (54) w2 &= 65535
       2: (61) r3 = *(u32 *)(r1 +0)
       3: (74) w3 >>= 16
       4: (61) r4 = *(u32 *)(r1 +4)
       5: (61) r5 = *(u32 *)(r1 +8)
    

    What we find, once we get past the first few lines that unpack the given values, is that the comparison lines use:

    • r2 is the device type: 1 is block, 2 is character.
    • r3 is the device access; it is copied and masked for the comparisons. mknod, read and write are the bits 1, 2 and 4 respectively.
    • r4 is the major number.
    • r5 is the minor number.
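The register layout can be modelled in a few lines of Python. This is purely an illustrative sketch of what the eBPF rule for --device-cgroup-rule='c 100:42 rwm' is doing, not real eBPF:

```python
# Illustrative model of the cgroup-device eBPF check (not real eBPF).
# The program gets one 32-bit word holding the access bits (upper 16 bits)
# and the device type (lower 16 bits), plus the major and minor numbers.

BLOCK, CHAR = 1, 2  # device types (this value ends up in r2)

def device_allowed(access_type_word, major, minor):
    dev_type = access_type_word & 0xFFFF  # r2: lower 16 bits
    access = access_type_word >> 16       # r3: upper 16 bits (unused by rwm)
    # Equivalent of: if r2 != 0x2 / r4 != 0x64 / r5 != 0x2a goto next rule
    if dev_type == CHAR and major == 0x64 and minor == 0x2a:
        return 1  # permit
    return 0      # fall through: permission denied

# char device 100:42 with read access (2) requested
word = (2 << 16) | CHAR
```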

    Even for a pretty simple setup, you are going to have around 60 lines of eBPF code to look at. Luckily, the lines for the command options you added will often be near the end, which makes them easier to find. For example:

      63: (55) if r2 != 0x2 goto pc+4
      64: (55) if r4 != 0x64 goto pc+3
      65: (55) if r5 != 0x2a goto pc+2
      66: (b4) w0 = 1
      67: (95) exit

    This is a container using the option --device-cgroup-rule='c 100:42 rwm'. It is checking if r2 (device type) is 2 (char), r4 (major device number) is 0x64 or 100, and r5 (minor device number) is 0x2a or 42. If any of those are not true, move to the next section; otherwise return 1 (permit). All access modes are permitted, so it doesn’t check for them.

    The previous example has all permissions for our device with ID 100:42; what if we only want read access, with the option --device-cgroup-rule='c 100:42 r'? The resulting eBPF is:

      63: (55) if r2 != 0x2 goto pc+7  
      64: (bc) w1 = w3
      65: (54) w1 &= 2
      66: (5d) if r1 != r3 goto pc+4
      67: (55) if r4 != 0x64 goto pc+3
      68: (55) if r5 != 0x2a goto pc+2
      69: (b4) w0 = 1
      70: (95) exit
    

    The code is almost the same, but we are checking that w3 has no bits set other than the second bit, which is for reading; effectively checking X == X&2. It’s a cautious approach, meaning requesting no access still passes but multiple bits set will fail.
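That mask check can be written out in Python: a request passes only if it has no access bits outside the allowed mask (read is the bit 2 here). Again, an illustrative sketch rather than the real eBPF:

```python
# Mirrors instructions 64-66 of the dump: w1 = w3; w1 &= 2;
# if r1 != r3 -> next rule. A request passes only when
# requested == requested & allowed, i.e. no extra bits are set.
def access_ok(requested, allowed=2):
    return (requested & allowed) == requested

# read (2) passes, no access (0) passes, read+write (2|4 = 6) fails
```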

    The device option

    docker run allows you to specify device files you want to grant your containers access to with the --device flag. This flag actually does two things. The first is to create the device file in the container’s /dev directory, effectively doing a mknod command. The second is to adjust the eBPF program. If the device file we specified did have a major number of 100 and a minor of 42, the eBPF would look exactly like the above snippets.

    What about privileged?

    So we have used the direct cgroup options here; what does the --privileged flag do? It lets the container have full access to all the devices (if the user running the process is allowed). Like the --device flag, it makes the device files as well, but what does the filtering look like? We still have a cgroup, but the eBPF program is greatly simplified; here it is in full:

       0: (61) r2 = *(u32 *)(r1 +0)
       1: (54) w2 &= 65535
       2: (61) r3 = *(u32 *)(r1 +0)
       3: (74) w3 >>= 16
       4: (61) r4 = *(u32 *)(r1 +4)
       5: (61) r5 = *(u32 *)(r1 +8)
       6: (b4) w0 = 1
       7: (95) exit

    There are the usual setup lines and then: return 1. Everyone is a winner, for all devices and access types!

  • Linux Memory Statistics

    Pretty much everyone who has spent some time on a command line in Linux would have looked at the free command. This command provides some overall statistics on the memory and how it is used. Typical output looks something like this:

                 total        used        free      shared  buff/cache  available
    Mem:      32717924     3101156    26950016      143608     2666752  29011928
    Swap:      1000444           0     1000444
    

    Memory sits in the first row after the headers, then we have the swap statistics. Most of the numbers are fetched directly from the procfs file /proc/meminfo and are scaled before being presented to the user. A good example of a “simple” stat is total, which is just the MemTotal row located in that file. For the rest of this post, I’ll make the rows that come from /proc/meminfo have an amber background.

    What is Free, and what is Used?

    While you could say that the free value is also merely the MemFree row, this is where Linux memory statistics start to get odd. That value is indeed what is found in MemFree and is not a calculated field, but it can be misleading.

    Most people would assume that Free means free to use, with the implication that only this amount of memory is free to use and nothing more. That would also mean the used value is really used by something and nothing else can use it.

    In the early days of free and Linux statistics in general that was how it looked. Used is a calculated field (there is no MemUsed row) and was, initially, Total - Free.

    The problem was, Used also included Buffers and Cached values. This meant that it looked like Linux was using a lot of memory for… something. If you read old messages before 2002 that are talking about excessive memory use, they quite likely are looking at the values printed by free.

    The thing was, under memory pressure the kernel could release Buffers and Cached for use; not all of that storage, but some of it, so it wasn’t all really used. To counter this, free showed a row between Memory and Swap, with Used having Buffers and Cached removed and Free having the same values added:

                 total       used       free     shared    buffers     cached
    Mem:      32717924    6063648   26654276          0     313552    2234436
    -/+ buffers/cache:    3515660   29202264
    Swap:      1000444          0    1000444

    You might notice that this older version of free, from around 2001, shows buffers and cached separately and there’s no available column (we’ll get to Available later). Shared appears as zero because the old row was labelled MemShared and not Shmem, which was changed in Linux 2.6, and I’m running a system way past that version.

    It’s not ideal: you can say that the amount of free memory is somewhere above 26654276 and below 29202264 KiB, but nothing more accurate. buffers and cached are almost never all-used or all-unused, so the real figure is not either of those numbers but something in between.

    Cached, just not for Caches

    That appeared to be an uneasy truce within the Linux memory statistics world for a while. By 2014 we realised that there was a problem with Cached. This field used to hold the memory used to cache files read from storage. While it still has that component, it was also being used for tmpfs storage, and the use of tmpfs went from an interesting idea to being everywhere. Cheaper memory meant larger tmpfs partitions went from a luxury to something everyone was doing.

    The problem was that with large files put into a tmpfs partition, Free would decrease, but so would Cached, meaning the free column in the -/+ row would not change much, understating the impact of files in tmpfs.

    Luckily, in Linux 2.6.32 the developers added a Shmem row, which is the amount of memory used for shmem and tmpfs. Subtracting that value from Cached gave the “real” cached value, which we called main_cache, and very briefly this is what the cached value in free showed.

    However, this caused further problems because not all Shmem can be reclaimed and reused, so it probably swapped one set of problematic values for another. It did, however, prompt the Linux kernel community to have a look at the problem.

    Enter Available

    There was increasing awareness within the kernel community of the issues with working out how much memory a system has free. It wasn’t just the output of free or the percentage values in top; load balancers or workload-placing systems would also have their own view of this value. As memory management and use within the Linux kernel evolved, what was or wasn’t free changed, and all the userland programs were expected somehow to keep up.

    The kernel developers realised that the best place to estimate the memory not in use was in the kernel, so they created a new memory statistic called Available. That way, if how memory is used or what is reclaimable changes, the kernel can adjust the estimate and userland programs will go along with it.

    procps has a fallback for this value and it’s a pretty complicated setup.

    1. Find the min_free_kbytes sysctl, which is the minimum amount of free memory the kernel will keep in hand
    2. Add 25% to this value (e.g. if it was 4000, make it 5000); this is the low watermark
    3. To find available, start with MemFree and subtract the low watermark
    4. Take the sum of Inactive(file) and Active(file), subtract either half of that sum or the low watermark, whichever is smaller, and add the result
    5. Do the same with SReclaimable: subtract either half of SReclaimable or the low watermark, whichever is smaller, and add the result
    6. If what you get is less than zero, make available zero
    7. Or, just look at MemAvailable in /proc/meminfo
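As a rough sketch, the fallback works out like this in Python (illustrative only; procps implements it in C, and on any recent kernel you would just read MemAvailable):

```python
# Rough sketch of the procps MemAvailable fallback (all values in KiB).
def fallback_available(mem_free, active_file, inactive_file,
                       sreclaimable, min_free_kbytes):
    low_watermark = min_free_kbytes * 5 // 4  # min_free_kbytes plus 25%
    available = mem_free - low_watermark
    # Page cache: keep up to half of it (capped at the watermark) in reserve
    pagecache = active_file + inactive_file
    available += pagecache - min(pagecache // 2, low_watermark)
    # Reclaimable slab gets the same treatment
    available += sreclaimable - min(sreclaimable // 2, low_watermark)
    return max(available, 0)  # never report a negative value
```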

    For the free program, we added the Available value and the -/+ line was removed. The main_cache value was Cached + Slab, while Used was calculated as Total - Free - main_cache - Buffers. This was very close to what the Used column in the -/+ line used to show.

    What’s on the Slab?

    The next issue that came up was the use of slabs. At this point, main_cache was Cached + Slab, but Slab consists of reclaimable and unreclaimable components. One part of Slab can be used elsewhere if needed and the other cannot, but the procps tools treated them the same. The Used calculation should not subtract SUnreclaim from the Total, because it is actually being used.

    So in 2015 main_cache was changed to be Cached + SReclaimable. This meant that Used memory was calculated as Total - Free - Cached - SReclaimable - Buffers.

    Revenge of tmpfs and the return of Available

    The tmpfs impacting Cached was still an issue. If you added a 10MB file into a tmpfs partition, then Free would reduce by 10MB and Cached would increase by 10MB meaning Used stayed unchanged even though 10MB had gone somewhere.

    It was time to retire the complex calculation of Used. For procps 4.0.1 onwards, Used now means “not available”. We take the Total memory and subtract the Available memory. This is not a perfect setup, but it is probably the best one we have, and testing is giving us much more sensible results. It’s also easier for people to understand (take the total value you see in free, then subtract the available value).

    What does that mean for main_cache which is part of the buff/cache value you see? As this value is no longer in the used memory calculation, it is less important. Should it also be reverted to simply Cached without the reclaimable Slabs?

    The calculated fields

    In summary, what this means for the calculated fields in procps at least is:

    • Used: Total - Available; if Available is not present, then it’s Total - Free
    • Cached: Cached + Reclaimable Slabs
    • Swap/Low/High Used: the corresponding Total - Free (no change here)

    Almost everything else, with the exception of some bounds checking, is what you get out of /proc/meminfo which is straight from the kernel.
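Those calculated fields can be sketched in Python (field names follow /proc/meminfo; this is an illustration, not the procps source):

```python
# Sketch of the procps calculated fields, values in KiB from /proc/meminfo.
def used(mem_total, mem_available=None, mem_free=None):
    if mem_available is not None:
        return mem_total - mem_available  # Used: Total - Available
    return mem_total - mem_free           # fallback: Total - Free

def cached(meminfo_cached, sreclaimable):
    return meminfo_cached + sreclaimable  # Cached + reclaimable slabs
```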

  • Changing Grafana Legends

    I’m not sure if I just can’t search Google properly, or this really is just not written down much, but I have had problems with Grafana legends (I would call them the series labels). The issue is that Grafana queries Prometheus for a time series and you want to display multiple lines, but the time-series labels you get are just not quite right.

    A simple example: you might be using the blackbox exporter to monitor an external TCP port and you would like to display just the port number. The default output would look like this:

    probe_duration_seconds{instance="example.net:5222",job="blackbox",module="xmpp_banner"} = 0.01
    probe_duration_seconds{instance="example.net:5269",job="blackbox",module="xmpp_banner"} = 0.01
    

    I can graph the number of seconds it takes to probe the 5222 and 5269 TCP ports, but my graph legend is going to contain the hostname, making it cluttered. I just want the legend in Grafana to show the port numbers.

    The answer is to use a Prometheus function called label_replace, which takes an existing label, applies a regular expression, and puts the result into another label. That’s right, regular expressions; and if you get them wrong, the label just doesn’t appear.

    Perl REGEX Problems courtesy of XKCD

    The label_replace documentation is a bit terse, and in my opinion, the order of parameters is messed up, but after a few goes I had what I needed:

    label_replace(probe_duration_seconds{module="xmpp_banner"}, "port", "$1", "instance", ".*:(.*)")
    
    probe_duration_seconds{instance="example.net:5222",job="blackbox",module="xmpp_banner",port="5222"}	0.001
    probe_duration_seconds{instance="example.net:5269",job="blackbox",module="xmpp_banner",port="5269"}	0.002
    

    The response now has a new label (or field, if you like) called port. So what is this function doing to our data coming from probe_duration_seconds? The function format is:

    label_replace(value, dst_label, replacement, src_label, regex)

    So the function does the following:

    1. Evaluate value, which is generally some sort of query such as probe_duration_seconds
    2. Find the required source label src_label; in this example it is instance, with the values example.net:5222 and example.net:5269
    3. Apply the regular expression regex, for us “.*:(.*)”. That says skip everything before “:”, then capture everything after it. The brackets mean the text after the colon is stored as a match group
    4. Make a new label specified in dst_label, for us this is port
    5. Whatever is in replacement goes into dst_label. For this example it is “$1”, which refers to the first match group of our regular expression

    In short, the function captures everything after the colon in the instance label and puts that into a new label called port. It does this for each value that is returned into the first parameter.
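You can try the same capture outside Prometheus with an ordinary regular expression. Here is a small Python equivalent of what label_replace does with these parameters (an illustration; Prometheus anchors the regex to the whole string, hence re.fullmatch):

```python
import re

# label_replace(..., "port", "$1", "instance", ".*:(.*)") in miniature:
# run the regex over the source label, put the capture in a new label.
def add_port_label(labels):
    match = re.fullmatch(r".*:(.*)", labels["instance"])
    if match:  # if the regex does not match, the series is left unchanged
        labels = {**labels, "port": match.group(1)}
    return labels

labels = add_port_label({"instance": "example.net:5222", "job": "blackbox"})
```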

    This means I can use {{port}} in my Grafana graph legend and it will show 5222 or 5269 respectively. I have made the legend “TCP {{port}}” to give the below result, but I could have used {{port}} in the Grafana legend and made the replacement “TCP $1” in the label_replace function to get the same result.

    Grafana console showing the use of the label_replace function
  • Percent CPU for processes

    The ps program gives a snapshot of the processes running on your Unix-like system. On most Linux installations, this will be the ps program from the procps project.

    While you can get a lot of information from the tool, a lot of the fields need further explanation or can give “wrong” or confusing information; or putting it another way, they provide the right information that looks wrong.

    One of these confusing fields is the %CPU or pcpu field. You can see it as the third field with the ps aux command. You only really need the u option to see it, but ps aux is a pretty common invocation.

    More than 100%?

    This post was inspired by procps issue 186, where the submitter said that the sum of the %CPU for all processes cannot be more than the number of CPUs times 100%. If you have 1 CPU, then the sum of %CPU for all processes should be 100% or less; with 16 CPUs, 1600% is your maximum.

    Some people put the oddity of over 100% CPU down to some rounding thing gone wrong, and at first I did think that too; except I know we get a lot of reports about the top header CPU load and the process load not lining up, and it’s because “they’re different”.

    The trick here is: ps is reporting a percentage of what? Or, perhaps to give a better clue, a percentage of when?

    PCPU Calculations

    So to get to the bottom of this, let’s look at the relevant code. In ps/output.c we have a function pr_pcpu that prints the percent CPU. The relevant lines are:

      total_time = pp->utime + pp->stime;
      if(include_dead_children)
          total_time += (pp->cutime + pp->cstime);
      seconds = cook_etime(pp);
      if (seconds)
          pcpu = (total_time * 1000ULL / Hertz) / seconds;

    OK, ignoring the include_dead_children line (you get this from the S option, which means you include the time this process waited for its child processes) and the scaling (process times are in Jiffies, we have the CPU as 0 to 999 for reasons), you can reduce this down to:

    %CPU = ( Tutime + Tstime ) / Tetime

    So we find the amount of time the CPUs have been busy on this process, either in userland or in the system, add them together, then divide the sum by the elapsed time. The utime and stime counters increment like a car’s odometer: if a process uses one Jiffy of CPU time in userland, that counter goes to 1. If it does it again a few seconds later, the counter goes to 2.

    To give an example, if a process has run for ten seconds and within those ten seconds the CPU has been busy in userland for that process, then we get 10/10 = 100% which makes sense.
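That calculation can be sketched in Python with the same scaling the C code uses (Hertz is the Jiffies-per-second value; 100 is typical, and ps queries the real one at runtime):

```python
HERTZ = 100  # Jiffies per second; a typical value, ps queries the real one

# Sketch of ps's %CPU: CPU time used over elapsed wall-clock time,
# returned in tenths of a percent like the internal value.
def pcpu(utime, stime, elapsed_seconds):
    total_time = utime + stime  # Jiffies spent on the CPU
    if elapsed_seconds == 0:
        return 0
    return (total_time * 1000 // HERTZ) // elapsed_seconds

# 1000 Jiffies (ten seconds) of CPU over ten seconds elapsed is 100.0%
```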

    Not all Start times are the same

    Let’s take another example: a process still consumes ten seconds of CPU time but has been running for twenty seconds; the answer is 10/20 or 50%. With our single-CPU example system, both of these cannot be running at the same time, otherwise we would have 150% CPU utilisation, which is not possible.

    However, let’s adjust this slightly. We have assumed uniform utilisation. But take the following scenario:

    • At time T: Process P1 starts and uses 100% CPU
    • At time T+10 seconds: Process P1 stops using CPU but still runs, perhaps waiting for I/O or sleeping.
    • Also at time T+10 seconds: Process P2 starts and uses 100% CPU
    • At time T+20 we run the ps command and look at the %CPU column

    The output for ps -o times,etimes,pcpu,comm would look something like:

        TIME ELAPSED %CPU COMMAND
          10      20   50 P1
          10      10  100 P2

    What we will see is P1 has 10/20 or 50% CPU and P2 has 10/10 or 100% CPU. Add those up, and you have 150% CPU, magic!

    The key here is the ELAPSED column. P1 has given you the CPU utilisation across 20 seconds of system time, and P2 the CPU utilisation across only 10 seconds. If you directly add them together, you get the wrong answer.

    What’s the point of %CPU?

    The %CPU column probably gives results that a lot of people are not expecting, so what’s the point of it? Don’t use it to see why the CPU is running hot right now; as you can see above, those two processes were working the CPU hard at different times. What it is useful for is to see how “busy” a process is, but be warned: it’s an average. It’s helpful for something that is consistently busy, but if a process idles or hardly uses CPU for a week and then goes bananas, you won’t see that in the number.

    The top program, because a lot of its statistics are deltas from the last refresh, is a much better program for this sort of information about what is happening right now.

  • 25 Years of Free Software

    When did I start writing Free Software, now called Open Source? That’s a tricky question. Does the clock start with the first file edited, the first time it compiles, or perhaps even with some proto-program you used to work out a concept for the real program that formed later on?

    So using the date you started writing, especially in an era before decent version control systems, is problematic. That is why I use the date of the first release of the first package as the start date. For me, that was Monday 24th July 1995.

    (more…)
  • Sending data in a signal

    The well-known kill system call has been around for decades and is used to send a signal to another process. The most common use is to terminate or kill another process by sending the KILL or TERM signal, but it can be used for a form of IPC, usually to give the other process a “kick” to do something.

    One thing that isn’t as well known is that besides sending a signal to a process, you can send some data with it. This can be either an integer or a pointer, and it uses similar semantics to the familiar kill and signal handler. I came across this when there was a merge request for procps. The main changes are using sigqueue instead of kill in the sender, and using a signal action instead of a signal handler in the receiver.

    (more…)
  • procps-ng 3.3.16

    procps-ng version 3.3.16 was released today. Besides some documentation and library updates, there were a few incremental changes.

    Zombie Hunting with pgrep

    Ever wanted to find zombies? Perhaps processes in other states? pgrep has a shiny new runstate flag to help you, which will match processes against their run state. I’m curious to see the use-cases for this flag; it certainly will get used (e.g. find my zombies), but as some processes bounce in and out of states (think Run to Sleep and back) it might add some confusion.

    Snice plays nice with PIDs

    Top Enhancements

    Top got a bunch of love again in this release. Ever wanted your processes to be shown in fuchsia? Perhaps goldenrod? With some earlier versions of top you could, by directly editing the toprc file, but now everyone can have more than the standard 8 colours!

    If you use the other filters parameter for some fancy process filtering in top, it will now save that configuration.

    Collapsed children (process names are weird) get some help. If you are in tree view, you can collapse or fold the child processes under their parent. Their CPU time is also added to the parent, so there are no “missing” CPU ticks.

    For people who use the One True Editor (which is, of course, VIM) you can use the vim navigation keys to move through the process list.

    Where to find it?

    You’ll find the latest version of procps either at our git repository or download a tarball.

  • psmisc 23.0

    I had to go check, but it has been over 3 years since the last psmisc release back in February 2014. I really didn’t think it had been that long ago. Anyhow, with no further delay, psmisc version 23.0 has been released today!

    Update: 23.1 is out now; it removed a debug line from killall and shipped two missing documents.

    (more…)

  • The sudo tty bug and procps

    There have been recent reports of a security bug in sudo (CVE-2017-1000367) where you can fool sudo about what controlling terminal it is running on, to bypass its security checks. One of the first things I thought of was: is procps vulnerable to the same bug? Sure, it wouldn’t be a security bypass, but it would be a normal sort of bug. A lot of programs in procps have a concept of a controlling terminal, or the TTY field, for either viewing or filtering; could they be fooled into thinking a process had a different controlling terminal?

    Was I going to be in the same pickle as the sudo maintainers? The meat between the stat parsing sandwich? Can I find any more puns related somehow to the XKCD comic?

    TLDR: No.

    (more…)
  • Displaying Linux Memory

    Memory management is hard, but RAM management may be even harder.

    Most people know the vague overall concept of how memory usage is displayed within Linux. You have your total memory, which is everything inside the box; then there is used and free, which is what the system is or is not using, respectively. Some people might know that not all used is really used, and some of it actually is free. It can be very confusing to understand, even for someone who maintains procps (the package that contains top and free, two programs that display memory usage).

    So, how does the memory display work?

    (more…)