Category: Uncategorized

  • Fixing iCalendar feeds

    Fixing iCalendar feeds

    The local government here has all the schools use an iCalendar feed for things like when school terms start and stop and when other school events occur. The department’s website also has events like public holidays. The issue is that none of them are sent as all-day events; instead each is an appointment starting at midnight, or one minute past midnight.

    The events synchronise fine, though Google’s calendar is known for synchronising when it feels like it, not at any particular time you would like it to.

    Screenshot of Android Calendar showing a tiny bar at midnight which is the event.

    Even though a public holiday is all day, they are sent as appointments for midnight.

    That means on my phone all the events are these tiny bars that appear right up the top of the screen and are easily missed, especially when the focus of the calendar is during the day.

    On the phone, you can see the tiny purple bar at midnight. This is how the events appear. It’s not the calendar’s fault, as far as it knows the school events are happening at midnight.

    You can also see Lunar New Year and Australia Day appear in the all-day part of the calendar and don’t scroll away. That’s where these events should be.

    Why are all the events appearing at midnight? Because the feed is set up incorrectly and includes a time. The events are sent in iCalendar format and a typical event looks like this:

    BEGIN:VEVENT
    DTSTART;TZID=Australia/Sydney:20230206T000000
    DTEND;TZID=Australia/Sydney:20230206T000000
    SUMMARY:School Term starts
    END:VEVENT

    The event’s start and end date and time are given on the DTSTART and DTEND lines. Both have a date of 20230206, or 6th February 2023, and a time of 000000, which is midnight. So the calendar is doing the right thing; we need to fix the feed!

    The Fix

    I wrote a quick and dirty PHP script to download the feed from the real site, change the DTSTART and DTEND lines to all-day events and leave the rest of it alone.

    <?php
    $site = $_GET['s'];
    if ($site == 'site1') {
        $REMOTE_URL='https://site1.example.net/ical_feed';
    } elseif ($site == 'site2') {
        $REMOTE_URL='https://site2.example.net/ical_feed';
    } else {
        http_response_code(400);
        die();
    }
    
    $fp = fopen($REMOTE_URL, "r");
    if (!$fp) {
        die("fopen");
    }
    header('Content-Type: text/calendar');
    while (( $line = fgets($fp, 1024)) !== false) {
        $line = preg_replace(
            '/^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00/',
            '${1};VALUE=DATE:${2}',
            $line);
        echo $line;
    }
    ?>

    It’s pretty quick and nasty but gets the job done. So what is it doing?

    • Lines 2-10: Check the given variable s and match it to either “site1” or “site2” to obtain the URL. If you only had one site to fix you could just set the REMOTE_URL variable.
    • Lines 12-15: A typical fopen() and nasty error handling.
    • Line 16: set the content type to a calendar.
    • Line 17: A while loop to read the contents of the remote site line by line.
    • Lines 18-21: This is where the “magic” happens. preg_replace performs a Perl-compatible regular expression replacement. The PCRE:
      • Finds lines starting with DTSTART or DTEND and stores the keyword in capture 1
      • Matches everything up to the colon; this is the timezone information. I wasn’t sure if it was needed or how to combine it, so I took it out. All the all-day events I saw don’t have a time zone.
      • Finds 8 digits (this is the YYYYMMDD date) and stores them in capture 2.
      • Matches the time part: a literal “T” then HHMMSS. Some sites use midnight, some use one minute past, so it covers both.
      • Replaces the line with DTSTART or DTEND (capture 1), sets the value type to DATE, as the default is date/time, and prints the date (capture 2).
    • Line 22: Print either the modified or original line.
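    If PHP isn’t your thing, the same transformation can be sketched in Python. This is only an illustrative rough equivalent of the script above, not part of it; the function names are mine and the feed URL would be whatever you put in REMOTE_URL.

```python
import re
from urllib.request import urlopen

# Same pattern as the PHP script: DTSTART/DTEND lines carrying a
# timezone and a midnight (or one-minute-past) time.
PATTERN = re.compile(r'^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00')

def fix_line(line: str) -> str:
    """Rewrite a midnight DTSTART/DTEND line as an all-day (VALUE=DATE) one."""
    return PATTERN.sub(r'\1;VALUE=DATE:\2', line)

def fix_feed(url: str) -> str:
    """Download an iCalendar feed and fix its midnight events."""
    with urlopen(url) as response:
        body = response.read().decode('utf-8', errors='replace')
    return '\r\n'.join(fix_line(line) for line in body.splitlines())
```

    Anything that doesn’t match the pattern passes through untouched, so SUMMARY and the rest of the feed come out unchanged, just like in the PHP version.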

    You need to save the script on your web server somewhere, possibly with an alias command.

    The whole point of this is to change the value type from date/time to date-only, and to print only the date part for the event’s start and end. The resulting iCalendar event looks like this:

    BEGIN:VEVENT
    DTSTART;VALUE=DATE:20230206
    DTEND;VALUE=DATE:20230206
    SUMMARY:School Term starts
    END:VEVENT

    The calendar then shows it properly as an all-day event. I would check the script works before doing the next step. You can use things like curl or wget to download it. If you use a normal browser, it will probably just download the translated file.

    If you’re not seeing the right thing, it’s probably the PCRE failing. You can check it online with a regex checker such as https://regex101.com. The site has saved my PCRE and match, so you have something to start with.

    Calendar settings

    The last thing to do is to change the URL in your calendar settings. Each calendar system has a different way of doing it. For Google Calendar they provide instructions and you want to follow the section titled “Use a link to add a public Calendar”.

    The URL here is not the actual site’s URL (which you would have put into the REMOTE_URL variable before) but the URL of your script plus the “?s=site1” part. So if your script is aliased to /myical.php, the site ID is site1 and your website is www.example.com, the URL would be “https://www.example.com/myical.php?s=site1”.

    You should then see the events appear as all-day events on your calendar.

  • WordPress 6.1

    Debian will soon have WordPress version 6.1. I’m not really sure of the improvements, but there is a new 2023 theme as part of the update.

    They really weren’t mucking around when they said the 6.0.3 security release would be short-lived.

    The updates seem to be focused on content creation and making the formatting do what content creators want it to do. For me, I need headings 1 and 2, paragraphs and preformatted text.

  • Linux Memory Statistics

    Pretty much everyone who has spent some time on a command line in Linux would have looked at the free command. This command provides some overall statistics on the memory and how it is used. Typical output looks something like this:

                 total        used        free      shared  buff/cache  available
    Mem:      32717924     3101156    26950016      143608     2666752  29011928
    Swap:      1000444           0     1000444
    

    Memory sits in the first row after the headers, then we have the swap statistics. Most of the numbers are fetched directly from the procfs file /proc/meminfo, scaled and presented to the user. A good example of a “simple” stat is total, which is just the MemTotal row located in that file.
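    To show how thin the layer over /proc/meminfo is, a parser for it fits in a few lines. This Python sketch (my illustration, not procps code) turns the file’s contents into a name-to-KiB mapping:

```python
def parse_meminfo(text: str) -> dict:
    """Parse /proc/meminfo-style text into {row name: value in KiB}.
    Rows whose value isn't a plain number are skipped."""
    values = {}
    for line in text.splitlines():
        name, _, rest = line.partition(':')
        fields = rest.split()
        if fields and fields[0].isdigit():
            values[name.strip()] = int(fields[0])
    return values

# Typical use on a Linux box:
# with open('/proc/meminfo') as f:
#     total = parse_meminfo(f.read())['MemTotal']
```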

    What is Free, and what is Used?

    While you could say that the free value is also merely the MemFree row, this is where Linux memory statistics start to get odd. While that value is indeed what is found for MemFree and not a calculated field, it can be misleading.

    Most people would assume that Free means free to use, with the implication that only this amount of memory is free to use and nothing more. That would also mean the used value is really used by something and nothing else can use it.

    In the early days of free and Linux statistics in general that was how it looked. Used is a calculated field (there is no MemUsed row) and was, initially, Total - Free.

    The problem was, Used also included Buffers and Cached values. This meant that it looked like Linux was using a lot of memory for… something. If you read old messages before 2002 that are talking about excessive memory use, they quite likely are looking at the values printed by free.

    The thing was, under memory pressure the kernel could release Buffers and Cached for use; not all of that storage, but some of it, so it wasn’t all really used. To counter this, free showed a row between Memory and Swap with Used having Buffers and Cached removed and Free having the same values added:

                 total       used       free     shared    buffers     cached
    Mem:      32717924    6063648   26654276          0     313552    2234436
    -/+ buffers/cache:    3515660   29202264
    Swap:      1000444          0    1000444

    You might notice that this older version of free, from around 2001, shows buffers and cached separately and there’s no available column (we’ll get to Available later). Shared appears as zero because the old code looked for a row labelled MemShared rather than Shmem, which changed in Linux 2.6, and I’m running a system way past that version.

    It’s not ideal: you can say that the amount of free memory is somewhere above 26654276 and below 29202264 KiB, but nothing more accurate. Buffers and cached are almost never all-used or all-unused, so the real figure is not either of those numbers but something in between.

    Cached, just not for Caches

    That appeared to be an uneasy truce within the Linux memory statistics world for a while. By 2014 we realised there was a problem with Cached. This field used to hold the memory used to cache files read from storage. While it still has that component, it was also being used for tmpfs storage, and tmpfs went from an interesting idea to being everywhere. Cheaper memory meant larger tmpfs partitions went from a luxury to something everyone was doing.

    The problem is that with large files put into a tmpfs partition, Free would decrease but Cached would increase by the same amount, meaning the free column in the -/+ row would barely change and would understate the impact of files in tmpfs.

    Luckily, in Linux 2.6.32 the developers added a Shmem row, which is the amount of memory used for shmem and tmpfs. Subtracting that value from Cached gave you the “real” cached value, which we call main_cache, and very briefly this is what the cached value in free showed.

    However, this caused further problems because not all Shmem can be reclaimed and reused, so it probably swapped one set of problematic values for another. It did, however, prompt the Linux kernel community to have a look at the problem.

    Enter Available

    There was increasing awareness of the issues with working out how much memory a system has free within the kernel community. It wasn’t just the output of free or the percentage values in top, but load balancer or workload placing systems would have their own view of this value. As memory management and use within the Linux kernel evolved, what was or wasn’t free changed and all the userland programs were expected somehow to keep up.

    The kernel developers realised the best place to estimate the memory not in use was in the kernel itself, so they created a new memory statistic called Available. That way, if how memory is used changes, or something becomes unreclaimable, they can adjust the estimate and userland programs go along with it.

    procps has a fallback for this value and it’s a pretty complicated setup.

    1. Find the min_free_kbytes sysctl (under /proc/sys/vm), which is the minimum amount of free memory the kernel will keep in reserve
    2. Add 25% to this value (e.g. if it was 4000, make it 5000); this is the low watermark
    3. To find available, start with MemFree and subtract the low watermark
    4. If half the sum of the Inactive(file) and Active(file) values is greater than the low watermark, add that half; otherwise add the sum of the values minus the low watermark
    5. If half of the reclaimable slab (SReclaimable) is greater than the low watermark, add that half; otherwise add the reclaimable slab minus the low watermark
    6. If what you get is less than zero, make available zero
    7. Or, just look at Available in /proc/meminfo
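    The steps above can be sketched in Python, following the description here rather than the exact procps source, with the /proc/meminfo rows passed in as a dict of KiB values:

```python
def available_fallback(mi: dict) -> int:
    """Estimate Available (KiB) the way the procps fallback does, per the
    steps described above. 'min_free_kbytes' is the sysctl value; the other
    keys are named after /proc/meminfo rows. A sketch, not procps code."""
    watermark = mi['min_free_kbytes'] * 5 // 4             # step 2: +25%
    available = mi['MemFree'] - watermark                  # step 3
    pagecache = mi['Active(file)'] + mi['Inactive(file)']  # step 4
    if pagecache // 2 > watermark:
        available += pagecache // 2
    else:
        available += pagecache - watermark
    slab = mi['SReclaimable']                              # step 5
    if slab // 2 > watermark:
        available += slab // 2
    else:
        available += slab - watermark
    return max(available, 0)                               # step 6
```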

    For the free program, we added the Available value and the -/+ line was removed. The main_cache value was Cached + Slab, while Used was calculated as Total - Free - main_cache - Buffers. This was very close to what the Used column in the -/+ line used to show.

    What’s on the Slab?

    The next issue that came up was the use of slabs. At this point main_cache was Cached + Slab, but Slab consists of reclaimable and unreclaimable components: one part of Slab can be used elsewhere if needed and the other cannot, yet the procps tools treated them the same. The Used calculation should not subtract SUnreclaim from the Total, because that memory is actually being used.

    So in 2015 main_cache was changed to be Cached + SReclaimable. This meant that Used memory was calculated as Total - Free - Cached - SReclaimable - Buffers.

    Revenge of tmpfs and the return of Available

    The tmpfs impacting Cached was still an issue. If you added a 10MB file into a tmpfs partition, then Free would reduce by 10MB and Cached would increase by 10MB meaning Used stayed unchanged even though 10MB had gone somewhere.

    It was time to retire the complex calculation of Used. For procps 4.0.1 onwards, Used now means “not available”. We take the Total memory and subtract the Available memory. This is not a perfect setup but it is probably going to be the best one we have and testing is giving us much more sensible results. It’s also easier for people to understand (take the total value you see in free, then subtract the available value).

    What does that mean for main_cache which is part of the buff/cache value you see? As this value is no longer in the used memory calculation, it is less important. Should it also be reverted to simply Cached without the reclaimable Slabs?

    The calculated fields

    In summary, what this means for the calculated fields in procps at least is:

    • Used: Total - Available, unless Available is not present, then it’s Total - Free
    • Cached: Cached + Reclaimable Slabs
    • Swap/Low/HighUsed: Corresponding Total - Free (no change here)

    Almost everything else, with the exception of some bounds checking, is what you get out of /proc/meminfo which is straight from the kernel.
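    The Used rule above is simple enough to write down directly. This is an illustrative sketch of the procps 4.0.1+ behaviour (my code, with dict keys named after the /proc/meminfo rows), not the actual procps source:

```python
def used_kib(mi: dict) -> int:
    """Used as procps 4.0.1+ computes it: Total - Available, falling back
    to Total - Free on old kernels that have no MemAvailable row."""
    if 'MemAvailable' in mi:
        return mi['MemTotal'] - mi['MemAvailable']
    return mi['MemTotal'] - mi['MemFree']
```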

  • WordPress 5.8.2 Debian packages

    After a bit of a delay, WordPress version 5.8.2 packages should be available now. This is a minor update from 5.8.1 which fixes two bugs, but not the security bug, which (as explained below) does not impact Debian.

    The security bug is due to WordPress shipping its own CA store, which is a list of certificates it trusts to sign for websites. Debian WordPress has used the system certificate store which uses /etc/ssl/certs/ca-certificates.crt for years so is not impacted by this change. That CA file is generated by update-ca-certificates and is part of the ca-certificates package.

    We have also had another go at tamping down the nagging WordPress does about updates, as you cannot use the automatic updates through WordPress but must go via the usual Debian system. I see we are not fully there yet, as WordPress has a site health page that doesn’t like things being turned off.

    The two bugs fixed in 5.8.2 I’ve not personally hit, but they might help someone out there. In any case, an update is always good.

    Next stop 5.9

    The next planned release is in late January 2022. I’m sure there will be a new default theme, but they are planning on making big changes around the blocks and styles to make it easier to customise the look.

  • Fediverse Test Three

    This supposedly will go out to the fediverse if I can fix wp-cron.

  • Changing Grafana Legends

    I’m not sure if I just can’t search Google properly, or this really is just not written down much, but I have had problems with Grafana legends (I would call them the series labels). The issue is that Grafana queries Prometheus for a time series and you want to display multiple lines, but the time-series labels you get are just not quite right.

    A simple example: you might be using the Blackbox exporter to monitor an external TCP port, and you would just like to display the port number. The default output would look like this:

    probe_duration_seconds{instance="example.net:5222",job="blackbox",module="xmpp_banner"} = 0.01
    probe_duration_seconds{instance="example.net:5269",job="blackbox",module="xmpp_banner"} = 0.01
    

    I can graph the number of seconds that it takes to probe the 5222 and 5269 TCP ports, but my graph legend is going to have the hostname, making it cluttered. I just want the legend to be the port numbers on Grafana.

    The answer is to use a Prometheus function called label_replace that takes an existing label, applies a regular expression, then puts the result into another label. That’s right, regular expressions, and if you get them wrong then the label just doesn’t appear.

    Perl REGEX Problems courtesy of XKCD

    The label_replace documentation is a bit terse, and in my opinion, the order of parameters is messed up, but after a few goes I had what I needed:

    label_replace(probe_duration_seconds{module="xmpp_banner"}, "port", "$1", "instance", ".*:(.*)")
    
    probe_duration_seconds{instance="example.net:5222",job="blackbox",module="xmpp_banner",port="5222"}	0.001
    probe_duration_seconds{instance="example.net:5269",job="blackbox",module="xmpp_banner",port="5269"}	0.002
    

    The response now has a new label (or field if you like) called port. So what is this function doing to our data coming from probe_duration_seconds? The function format is:

    label_replace(value, dst_label, replacement, src_label, regex)

    So the function does the following:

    1. Evaluate value, which is generally some sort of query such as probe_duration_seconds
    2. Find the required source label src_label, in this example instance; here its values are example.net:5222 and example.net:5269
    3. Apply the regular expression regex, for us “.*:(.*)”. That says skip everything before “:”, then capture everything after it. The brackets mean copy what is after the colon into a capture group
    4. Make a new label specified in dst_label, for us this is port
    5. Put whatever is in replacement into dst_label. For this example it is “$1”, which means the first capture group from our regular expression goes into the label called port

    In short, the function captures everything after the colon in the instance label and puts that into a new label called port. It does this for each value that is returned into the first parameter.
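    To make the semantics concrete, here is a toy Python version of what label_replace does to one series’ label set. This is an illustration only, not how Prometheus implements it, and it only handles the “$1” capture reference:

```python
import re

def label_replace(labels: dict, dst: str, repl: str,
                  src: str, regex: str) -> dict:
    """Apply label_replace semantics to a single label set: the regex must
    match the whole source label value, otherwise the series is returned
    unchanged (which is why a wrong regex means the label never appears)."""
    match = re.fullmatch(regex, labels.get(src, ''))
    if not match:
        return dict(labels)
    out = dict(labels)
    out[dst] = repl.replace('$1', match.group(1) if match.groups() else '')
    return out
```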

    This means I can use {{port}} in my Grafana graph legend and it will show 5222 or 5269 respectively. I have made the legend “TCP {{port}}” to give the result below, but I could have used just {{port}} in the Grafana legend and made the replacement “TCP $1” in the label_replace function to get the same result.

    Grafana console showing the use of the label_replace function
  • Percent CPU for processes

    The ps program gives a snapshot of the processes running on your Unix-like system. On most Linux installations, this will be the ps program from the procps project.

    While you can get a lot of information from the tool, a lot of the fields need further explanation or can give “wrong” or confusing information; or putting it another way, they provide the right information that looks wrong.

    One of these confusing fields is %CPU, or pcpu. You can see it as the third field with the ps aux command. You only really need the u option to see it, but ps aux is a pretty common invocation.

    More than 100%?

    This post was inspired by procps issue 186, where the submitter expected that the sum of %CPU across processes cannot be more than the number of CPUs times 100%. If you have 1 CPU then the sum of %CPU for all processes should be 100% or less; have 16 CPUs and 1600% is your maximum.

    Some people put the oddity of over 100% CPU down to some rounding thing gone wrong, and at first I thought that too; except I know we get a lot of reports about the top header CPU load not lining up with the process loads, and that’s because “they’re different”.

    The trick here is: ps is reporting a percentage of what? Or, perhaps a better clue, a percentage of when?

    PCPU Calculations

    So to get to the bottom of this, let’s look at the relevant code. In ps/output.c we have a function pr_pcpu that prints the percent CPU. The relevant lines are:

      total_time = pp->utime + pp->stime;
      if(include_dead_children)
          total_time += (pp->cutime + pp->cstime);
      seconds = cook_etime(pp);
      if (seconds)
          pcpu = (total_time * 1000ULL / Hertz) / seconds;

    OK, ignoring the include_dead_children line (you get this from the S option; it means you include the time this process waited for its child processes) and the scaling (process times are in Jiffies, and we keep the CPU value as 0 to 999 for reasons), you can reduce this down to:

    %CPU = ( Tutime + Tstime ) / Tetime

    So we find the amount of time the CPU(s) have been busy either in userland or the system, add them together, then divide the sum by the total time. The utime and stime increment like a car’s odometer. So if a process uses one Jiffy of CPU time in userland, that counter goes to 1. If it does it again a few seconds later, then that counter goes to 2.

    To give an example, if a process has run for ten seconds and within those ten seconds the CPU has been busy in userland for that process, then we get 10/10 = 100% which makes sense.

    Not all Start times are the same

    Let’s take another example: a process still consumes ten seconds of CPU time but has been running for twenty seconds, so the answer is 10/20 or 50%. On our single-CPU example system, both of these cannot have been running at the same time; otherwise we would have 150% CPU utilisation, which is not possible.

    However, let’s adjust this slightly. We have assumed uniform utilisation. But take the following scenario:

    • At time T: Process P1 starts and uses 100% CPU
    • At time T+10 seconds: Process P1 stops using CPU but still runs, perhaps waiting for I/O or sleeping.
    • Also at time T+10 seconds: Process P2 starts and uses 100% CPU
    • At time T+20 we run the ps command and look at the %CPU column

    The output for ps -o times,etimes,pcpu,comm would look something like:

        TIME ELAPSED %CPU COMMAND
          10      20   50 P1
          10      10  100 P2

    What we will see is P1 has 10/20 or 50% CPU and P2 has 10/10 or 100% CPU. Add those up, and you have 150% CPU, magic!

    The key here is the ELAPSED column. P1 has given you the CPU utilisation across 20 seconds of system time, while P2 gives it across only 10 seconds. If you directly add them together you get the wrong answer.
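    The arithmetic is simple enough to sketch. This is a hypothetical helper of mine, with times in seconds rather than Jiffies, not the procps code:

```python
def pcpu(cpu_seconds: float, elapsed_seconds: float) -> float:
    """%CPU the way ps computes it: CPU time used divided by the
    process's elapsed (wall clock) lifetime."""
    if not elapsed_seconds:
        return 0.0
    return 100.0 * cpu_seconds / elapsed_seconds

# P1: 10s of CPU over a 20s lifetime -> 50%.
# P2: 10s of CPU over a 10s lifetime -> 100%.
# Individually sensible, but summing them gives 150% on a 1-CPU box
# because the two percentages are over different time windows.
```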

    What’s the point of %CPU?

    The %CPU column probably gives results a lot of people are not expecting, so what’s the point of it? Don’t use it to see why the CPU is running hot; you can see above that those two processes were working the CPU hard at different times. What it is useful for is to see how “busy” a process is, but be warned it’s an average over the process’s lifetime. It’s helpful for something that is consistently busy, but if a process hardly uses CPU for a week and then goes bananas, you won’t see that here.

    The top program, because a lot of its statistics are deltas from the last refresh, is a much better program for this sort of information about what is happening right now.

  • 25 Years of Free Software

    When did I start writing Free Software, now called Open Source? That’s a tricky question. Does the clock start with the first file edited, the first time it compiles, or perhaps even with some proto-program you used to work out a concept for the real program that formed later on?

    So using the date you start writing, especially in an era before decent version control systems, is problematic. That is why I use the date of the first release of the first package as the start date. For me, that was Monday 24th July 1995.

  • Sending data in a signal

    The well-known kill system call has been around for decades and is used to send a signal to another process. The most common use is to terminate or kill another process by sending the KILL or TERM signal but it can be used for a form of IPC, usually around giving the other process a “kick” to do something.

    One thing that isn’t as well known is that besides sending a signal to a process, you can send some data to it. This can be either an integer or a pointer, and it uses similar semantics to the familiar kill and signal handler. I came across this when there was a merge request for procps. The main changes are using sigqueue instead of kill in the sender, and using a signal action rather than a signal handler in the receiver.

  • WordPress 5.4

    Debian packages for WordPress version 5.4 will be uploaded shortly. I’m just going through the install testing now.

    One problem I have noticed, at least for my setup, is an issue with network updates. WordPress asks me if I want to update the network sites, I say yes, and I get an SSL error.

    After lots of debugging, the problem is that the fsockopen option to use SNI is turned off for network updates. My sites need SNI, so without it they just bomb out with an SSL handshake error.

    I’m not sure what the real fix is, but my work-around was to temporarily set the SNI option in the fsockopen transport while doing the site updates.

    The file you want is wp-includes/Requests/Transport/fsockopen.php, and in the request method of Requests_Transport_fsockopen you’ll see something like:

                            stream_context_set_option($context, array('ssl' => $context_options));
                    } else {
                            $remote_socket = 'tcp://' . $host;
                    }
    

    Just before the stream_context_set_option call, add the line:

                            $context_options['SNI_enabled'] = true;

    Ugly, but it works.

    Update May 2020

    Looking into this more, there is a bug in the fsockopen transport. If you have verify_peer turned off (which network upgrades do) then it turns SNI off. You still need SNI even if you are going to not verify the certificate. I raised https://core.trac.wordpress.org/ticket/50288#ticket but its simply commenting out the line that disables SNI in Requests/Transport/fsockopen.php around line 444.