Blog

  • Updated: psmisc, gw6c and gjay

Time away from work and it's been either raining or hot, so I've updated and released some software. It always seems to happen that there is a lot of Free Software development during the breaks.

psmisc got a bunch of updates, including a new program called prtstat which formats the stat file in procfs for a pid in (hopefully) a nice way. No sooner had I released the latest update than a bug report came in. It seems fuser -m -k is a little too happy about killing itself. The fix is in CVS, but it's annoying that I missed it.

Next up was the Debian gw6c package. I was asked why it hadn't moved from unstable to testing. The problem is that while Linux has iproute, kfreebsd does not, so the missing dependency was stopping it transitioning. To make matters worse, the freebsd template was missing from the package. After some deb-substvars evilness to fix the dependencies and some dh_install overrides in the debian/rules file, it should all be happy when it's finished.

Finally, I miss having good random playlists. I'm too lazy to make them myself, so I use some random thing which often gives me rubbish. A program called Gjay used to be in the Debian archive but got removed, mainly because upstream stopped supporting it. I can write C (the programming language it's written in) and I wanted to use it, so I fixed it. My version is 64-bit clean, so it works on my amd64 box, and it works with audacious rather than the old xmms, which is great. More importantly, it compiles, it runs and it even works properly.
I'm just wondering whether I want to release it out to the wider world or not.

  • Changing Sites

While I originally had a blog on Advogato, I just didn't seem to use it much. I needed some place to put writing that wasn't quite worth a whole new page, with the associated heavy formatting work, but it had to go somewhere.

    So I’ll try to put those middle entries into this place.

Now all I have to work out is how to link this thing to Planet Debian and I'll be set.

  • Connecting to Internode IPv6 on Debian

    If you are running Debian and are connected to the internet by Australian ISP Internode you can connect to their tunnel broker. This page describes how to do it with a few simple steps.

For more information about what Internode is doing with IPv6, have a quick look at the Internode IPv6 page. That page will give you a basic overall view of how the system is set up. Don't use the instructions they give you; while they do work, it's a lot more complicated their way.

    ##Information to collect

    Before you start, you will need to know the following information:

* Your Internode username and password; these are the same details you put in your ADSL modem to connect to the ISP.
    * Decide if you just want the Debian computer using IPv6 in “host mode” or you want everyone on your LAN to route through this computer in “router mode”.

    ##Installation
    You will first need the gateway client program, which is found in the Debian package [gogoc](http://packages.debian.org/gogoc). If you are running in “router mode” you will also need to install
    [radvd](http://packages.debian.org/radvd). Both of these packages are in the Debian main distribution so you can download them the normal way you get your Debian packages.

Edit the gogoc configuration file */etc/gogoc/gogoc.conf* to suit your situation; the important lines are:

    userid=MY_USERNAME
    passwd=MY_PASSWORD
    server=sixgw.internode.on.net
    auth_method=any
    host_type=MY_HOST_TYPE
    

For MY_HOST_TYPE, use "host" or "router" depending on whether you want just this computer or everyone on your LAN to have IPv6, respectively.
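
If you chose "router" mode, radvd also needs a small configuration so the machines on your LAN can autoconfigure their addresses. Here is a minimal /etc/radvd.conf sketch; the interface name and the prefix are assumptions (use your LAN interface and whatever prefix Internode delegates to you, such as the 2001:44b8:42:22::/64 seen in the ifconfig output further down):

    # /etc/radvd.conf - advertise the delegated prefix on the LAN (example values)
    interface eth0
    {
        AdvSendAdvert on;
        prefix 2001:44b8:42:22::/64
        {
        };
    };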

    ##Starting gogoc for the first time
When you first start gogoc it will try to make a secured connection to the tunnel broker. The problem is that it needs to check that the key it gets is OK. This means that the first time you run it, you need to do it on the command line, like this:

server# invoke-rc.d gogoc stop
Stopping Gateway6 Client: gogoc.
server# gogoc
sixgw.internode.on.net is an unknown host, do you want to add its key? (Y/N)
server# killall gogoc
server# invoke-rc.d gogoc start
Starting Gateway6 Client: gogoc.
    

    The server key is now stored in */var/lib/gogoc/gogockeys.pub* and the program will start automatically with no further key problems.

##Checking it's working
    There are a few ways of checking your configuration is working:

* **pgrep gogoc** returns the pid of the program.
* The ifconfig output for interface tun0 (or, if you are in router mode, eth0) should show inet6 addresses starting with the 2001:44b8:: prefix, which belongs to Internode.
    * Browse to and watch the bouncing Google words.
    * ifconfig output should look something like the following:

    server$ /sbin/ifconfig  | egrep '(Link|inet6)'
    eth0    Link encap:Ethernet  HWaddr 12:34:56:78:9a:bc
         inet6 addr: 2001:44b8:42:22::1/64 Scope:Global
         inet6 addr: fe80::234:56ff:fe78:9abc/64 Scope:Link
    lo    Link encap:Local Loopback
         inet6 addr: ::1/128 Scope:Host
    tun   Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
         inet6 addr: 2001:44b8:41::43/128 Scope:Global
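
A simple end-to-end test is to ping a well-known IPv6 host; www.kame.net is a popular choice, but any IPv6-reachable host will do:

    server$ ping6 -c 2 www.kame.net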
    
  • Using Amanda for Backups

[AMANDA](http://www.amanda.org/) as the name says, is an advanced disk archiver or, more simply, a backup program. It's a really useful program, let down by something approaching the worst documentation out there. I've had about four attempts, on and off over about the same number of years, at using it and deciphering the documentation. Finally I got it to work, but it should not be that hard. Who knows, maybe in another four years I won't hate wiki documentation? Naah.

    This document shows you what files you need to create, their format and how to use some of the programs. It’s probably only a brief get you started type of document, but at least you should be able to start. I have tested this setup on a bunch of [Debian](http://www.debian.org/) GNU/Linux servers but it should be reasonably similar for other systems.

Speaking of my systems, what I'm writing here works for me with my setup. It's quite likely it will work for you, but you should verify and test everything. If something is not quite right and you lose all your valuable files because something is wrong in here, you should have tested it properly yourself. In any case, a non-tested backup system is almost as useless as a non-working one.

    #Determine your backup regime and other details
The first step is to decide what you want to back up. Do you want to do everything? Just home files? The variety is quite large, and it is just a matter of working out how to tell AMANDA to do it. AMANDA works in chunks of partitions, so if you have everything in one partition as / then that's one big chunk. All is not lost, however, as you can use some tricks to exclude some types of files or directories.

For my setup, I back up all of /, most of /var except logs and cache, and /home with the exception of some junk. Backing up all of a partition can be done differently from backing up part of it; I'm not sure, but I think it's quicker to back up all of a partition when you can.

Next, think of a name to call the backup regime. I call mine "normal" and that's what I'll use here. AMANDA calls the name "the config" in the manual pages. It's just a label for the set of configuration files.

    #Determining tapetype

    The configuration file (see next section) will need to know the tapetype. If you don’t know what drive model you have, you can often get some information about the device in the **/proc/scsi/scsi** file. My file shows my ancient Sony tape drive.

    Attached devices:
    Host: scsi0 Channel: 00 Id: 01 Lun: 00
      Vendor: SONY     Model: SDT-5000         Rev: 3.30
      Type:   Sequential-Access                ANSI SCSI revision: 02

    The tapetype is a bunch of parameters about your tape drive and they are different for
    every device. Have a look at [AMANDA tapetype list](http://amanda.sourceforge.net/fom-serve/cache/45.html) to see if yours is there.

Failing that, use the **amtapetype** program. Be warned, it will take a long time to run. When I say a long time, I'm talking about two hours or so; that's roughly how long it took for me.

    To run it, type:

    server# amtapetype /dev/nst0
    

and then wait, and wait… Eventually you'll get something that you can cut and paste into your amanda.conf. /dev/nst0 is the usual non-rewinding SCSI tape device; try that one first if you have a SCSI tape drive.

    #Main configuration file – amanda.conf
Most configuration information goes into the amanda.conf configuration file. The configuration files are kept in /etc/amanda/*configname*, which for me is /etc/amanda/normal.

    org "Example Company"   # Title of report
    mailto "root"           # recipients of report, space separated
    dumpuser "backup"       # the user to run dumps under
    inparallel 4            # maximum dumpers that will run in parallel
    netusage  600           # maximum net bandwidth for Amanda, in KB per sec
    
# a filesystem is due for a full backup once every dumpcycle days
    dumpcycle 4 weeks       # the number of days in the normal dump cycle
    tapecycle 8 tapes       # the number of tapes in rotation
    
    bumpsize 20 MB          # minimum savings (threshold) to bump level 1 > 2
    bumpdays     1          # minimum days at each level
    bumpmult     4          # threshold = bumpsize * (level-1)**bumpmult
    
    runtapes     1
    tapedev "/dev/nst0"     # Linux @ tuck, important: norewinding
    
    tapetype SDT-5000               # what kind of tape it is (see tapetypes below)
    labelstr "^MY-TAPE-[0-9][0-9]*$"        # label constraint regex: all tapes must match
    
    diskdir "/var/tmp"              # where the holding disk is
    disksize 1000 MB                        # how much space can we use on it
    infofile "/var/lib/amanda/normal/curinfo"       # database filename
    logfile  "/var/log/amanda/normal/log"   # log filename
    
    # where the index files live
    indexdir "/var/lib/amanda/normal/index"
    
    define tapetype SDT-5000 {
        comment "Sony SDT-5000"
        length 1584 mbytes
        filemark 0 kbytes
        speed 271 kps
    }
    define dumptype comp-home-tar {
        program "GNUTAR"
        comment "home partition dump with tar"
        options compress-fast, index, exclude-list "/etc/amanda/normal/home.exclude"
    priority medium
    }
    
    define dumptype comp-var-tar {
        program "GNUTAR"
        comment "var partition dump with tar"
        options compress-fast, index, exclude-list "/etc/amanda/normal/var.exclude"
        priority high
    }
    

So what do all those lines mean? You can leave most of them as they are in this example. Read the amanda(8) manual page for information about what each line does. I'll only point out some of the more significant or tricky ones here.

runtapes
I have runtapes set to 1 because it is a single tape drive and not some fancy multi-drive tape jukebox.
tapedev
This is the Unix device the tape is found at. Try /dev/nst0 first as it's a good default. Use dmesg to find the device if that doesn't work.
tapetype
This refers to a label that is defined further along in the configuration file.
labelstr
All tape labels have to match this regular expression. AMANDA won't recognize a tape otherwise.
diskdir
A directory where AMANDA can temporarily store its files before they are put onto the tape. I'm not sure if /var/tmp is a good idea or not.
disksize
The amount of space that AMANDA can use in the previously given diskdir.
infofile, logfile, indexdir
Filenames for storing information and status of your backups. All the directories for these files need to exist; see below.

The tapetype definition has been previously explained; it's either going to be a cut and paste from a website or the output of amtapetype.

    The dumptypes depart from the usual ones you see in the amanda documentation. I am using tar here because then I can select what files and directories I want in the archive. The two dumptypes are identical except for the priority and the exclude-list.

    #disklist config file
    The disklist file is found in */etc/amanda/normal/* directory and it lists what disks on what hosts are to be backed up using what backup format. Each line has three entries separated by whitespace: hostname, drive or partition and dumptype. The dumptype is one of the ones that was defined in the configuration file amanda.conf. My disklist looks like:

    # disklist for normal backups
    # Located at /etc/amanda/normal/disklist
    #
    localhost /var  comp-var-tar
    localhost /home comp-home-tar
    

Reading both this file and the amanda.conf file, you can see that /var is backed up using tar and gzip with the exclusion file var.exclude, and /home is likewise backed up using tar and gzip but with the exclusion file home.exclude.

    #The tar exclude files
I create two exclude files, one for each partition I am backing up. The contents of these exclude files are file or directory names, one per line. You may want to read the GNU tar info pages about what can go here; it's just using the --exclude-from flag. My var.exclude excludes logs, Debian packages, cached processed manual pages and temporary files. It looks like this:

    /logs/*/*.gz
    /cache/apt/archives/*.deb
    cache/man/man
    tmp
    

    #Creating directories and files
There are a fair few directories and files that AMANDA needs to have set up with the correct permissions before it will work. The easiest way to document this is to show you the commands used. Remember that my config name is "normal", and that a lot of these directories, as well as the user, are defined in the amanda.conf file.

    server# chown backup.backup /etc/amanda/normal
    server# chmod 770 /etc/amanda/normal
    server# touch /etc/amanda/normal/tapelist
    server# chown backup.backup /etc/amanda/normal/*
    server# chmod 500 /etc/amanda/normal/*
    server# touch /var/lib/amanda/amandates
    server# chown backup.backup /var/lib/amanda/amandates
    server# mkdir /var/lib/amanda/normal
    server# mkdir /var/lib/amanda/normal/index
    server# chown -R backup.backup /var/lib/amanda/normal
    server# chmod -R 770 /var/lib/amanda/normal
    server# mkdir /var/log/amanda/normal
    server# chown backup.backup /var/log/amanda/normal
    server# chmod 770 /var/log/amanda/normal
    

    #Making a new AMANDA tape
    You will now need to make a new tape and label it. Put the tape in the drive and use the command

    $ amlabel normal MY-TAPE-01

    Remember the label has to match the regular expression you have in the amanda.conf file. If all goes well, in a minute or two you will have a tape ready.

    #Checking the configuration
    AMANDA has a checking program, called amcheck, that makes sure everything is ready to go for the backup. The example output given below shows most things are ok, but I need to change a tape (and run amlabel on it) because I’ve already used this tape.

    # su backup
    $ amcheck normal
    Amanda Tape Server Host Check
    -----------------------------
    Holding disk /var/tmp: 6349516 KB disk space available, that's plenty
    ERROR: cannot overwrite active tape MY-TAPE-01
           (expecting a new tape)
    NOTE: skipping tape-writable test
    Server check took 9.027 seconds
    
    Amanda Backup Client Hosts Check
    --------------------------------
    Client check: 1 host checked in 0.032 seconds, 0 problems found
    
    (brought to you by Amanda 2.4.4)
    

    #To do the backup
After all that setup and testing, the backup itself is, well, pretty boring. To start it, as the backup user type /usr/sbin/amdump normal. You probably want this in a crontab so it is run regularly. After some time, whoever was mentioned in the mailto line of the amanda.conf file will get an email about the backup.
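
As a sketch, a crontab entry for the backup user might look like the following; the schedule is just an assumption, so pick times that suit your tape-changing routine:

    # m h dom mon dow command -- run the "normal" dump each weekday at 02:45
    45 2 * * 1-5  /usr/sbin/amdump normal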

    #Recovering a file
To test the backup, you really need to test that you can restore a file. To do this I cd to /tmp and then run the amrecover utility. The amrecover program looks a lot like an ftp client and is pretty easy to use. Once again it needs the config name, so to run it, type amrecover normal. It may not know what host you want to go to, so set the host with the sethost command.

    amrecover> sethost localhost
    200 Dump host set to localhost.
    

    Next you need to tell the recover program what disk you are trying to recover. We’ll select the /home directory here.

amrecover> setdisk /home
    Scanning /var/tmp...
    20030618: found Amanda directory.
    200 Disk set to /home.
    

We've found the disk and the host; now to wander around the filesystem and find the file. The file we are after is /home/csmall/myfile.txt. We're already at /home thanks to the setdisk command, so now it is just a matter of cd'ing into the csmall directory, adding the file myfile.txt to the recovery list and then issuing the extract command to start the recovery process.

    amrecover> cd csmall
    /home/csmall
    amrecover> add myfile.txt
    Added /csmall/myfile.txt
    amrecover> extract
    
    Extracting files using tape drive /dev/nst0 on host localhost.
    The following tapes are needed: MY-TAPE-01
    
    Restoring files into directory /var/tmp
    Continue [?/Y/n]?
    

The restore program has found the file you are after and which tapes are needed to restore it. If there were many files, multiple tapes might be required. It will now ask you to continue, and after pressing enter it will pause while you load the required tape into the drive. Pressing enter again will restore the file. It may take an hour or so to get it.

    Extracting files using tape drive /dev/nst0 on host localhost.
    Load tape MY-TAPE-01 now
    Continue [?/Y/n/s/t]?
    ./csmall/myfile.txt
    
    amrecover>
    

    The result should be now a restored csmall/myfile.txt in your current directory. If that’s the case, the restore test has succeeded.

  • Creating an APT archive

[apt](http://packages.debian.org/apt) is a very important and useful tool that is used mainly on [Debian](http://www.debian.org/) GNU/Linux computers to download and install packages. It is able to sort out the dependencies for packages and to download from multiple sites.

For various reasons, people want to run their own apt archive, separate from the rest of the Debian package distribution system. This gives a much better way of distributing binary and source packages than just a plain FTP or HTTP site.

For this document, I have used my archive hosted on Internode as the example. Apparently there is a way of doing this using the new pool method. I couldn't get it to work, so I junked it and went back to the old way of putting the packages under dists. It seems a lot cleaner, and the reasons for having /pool/ don't really apply to small archives.

    #Definitions
    To understand how apt (or Debian for that matter) sorts its files, you need to understand the various ways files are catalogued. This will help in deciding what to call the various directories.

Dist
– The distribution of Debian. It can either be a code name like woody, sid or sarge, or a type like stable, testing or unstable. For my archive I use unstable. Note that in some places DIST means the directory dists/distname, such as dists/unstable.
Section
– The section reflects the licensing state of the package, as determined by its copyright. DFSG-free packages go into the main section.
Arch
– The architecture the package is built for.

    #Directory Layout
    Apt requires a certain type of directory layout to work. The directories can either be real directories or symlinks. This is what my archive looks like:

    apt/
      +-dists/
        +-unstable/
          +-main/
            +-binary-i386/
              +-Packages
              +-Packages.gz
              +-gkrellm-wmium_1.0.8-1_i386.deb
              +-wmium_1.0.8-1_i386.deb
            +-source/
              +-wmium_1.0.8-1.diff.gz
              +-wmium_1.0.8-1.dsc
              +-wmium_1.0.8.orig.tar.gz
    

The binaries are found in the sub-directory *./apt/dists/unstable/main/binary-i386/*, while the source packages are found in *./apt/dists/unstable/main/source/*.

#apt-ftparchive configuration file
The most difficult part of the whole exercise is trying to get this configuration file right. It's badly documented and has no real examples, combined with the fact that if something doesn't work you don't know why.
I call mine archive.conf, but it doesn't really matter what it is called as long as you use the same name when you run the programs in the next steps. After much trial and error, I have the following configuration file; explanations of what the lines do follow.

    Dir {
      ArchiveDir "/home/example/myarchive/apt";
    };
    
    BinDirectory "dists/unstable" {
      Packages "dists/unstable/main/binary-i386/Packages";
      SrcPackages "dists/unstable/main/source/Sources";
    };
    
ArchiveDir
The absolute path to the top of the archive from the server's point of view. This directory will have the dists directory in it. If you are building the files on one machine but uploading them to another (like I do), then this is the directory on the building machine.
BinDirectory
This is the directory of the dist; that directory only has the main symlink in it.
Packages
The location of the Packages file, given relative to ArchiveDir.
SrcPackages
The location of the Sources file for the source packages, also relative to ArchiveDir.

    #Adding Packages
To add packages, put the .deb files into the binary-i386 directory and the .orig.tar.gz, .dsc and .diff.gz files into the source directory.

    #Running apt-ftparchive
To update or create the Packages files, you need to run apt-ftparchive. The program scans for packages and creates the right paths in the Packages file for them.

    $ apt-ftparchive generate archive.conf
     dists/unstable: 2 files 3017kB 0s
    Done Packages, Starting contents.
    Done. 3017kB in 2 archives. Took 0s
    

Notice it has found 2 files and 2 archives, which means it is working, because that was the number of packages I had in my archive. You should also have a Packages and Packages.gz in the binary-i386 directory.
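
As a quick sanity check, you can list what made it into the index; with the example layout above it should be something like:

    $ grep '^Package:' apt/dists/unstable/main/binary-i386/Packages
    Package: gkrellm-wmium
    Package: wmium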

    #Uploading the archive
If you are using the same computer for creating and serving the archive then you are done. If not, then you need to move the files onto the server. How you do this depends on what the server has available. Ideally, it has scp or rsync, which makes it very easy. My ISP only has FTP, which means I need something like [lftp](http://packages.debian.org/lftp) to do the copying.

     $ lftp -c 'open -u myusername ftp.myisp.net ; mirror -n -R apt apt'
    

    This command recursively copies files from the local apt directory to the remote apt directory on the ftp server. See the lftp manual page for details.

    #sources.list changes
Now you have a working archive, you need to change your /etc/apt/sources.list file so that apt knows to get packages from your archive. It looks just like another archive:

    deb http://users.on.net/csmall/apt unstable main
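
Since the archive carries source packages as well, a matching deb-src line (assuming apt-ftparchive also generated a compressed Sources file) lets apt-get source find them:

    deb-src http://users.on.net/csmall/apt unstable main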
    

    #My Makefile
The following is the Makefile that sits in the top directory (the same directory that the apt subdirectory sits in on the local computer) that I use to make the various files.

    instpkg:
      -mv incoming/*_i386.deb apt/dists/unstable/main/binary-i386/
      -mv incoming/*.dsc incoming/*.diff.gz incoming/*.orig.tar.gz apt/dists/unstable/main/source/
      apt-ftparchive generate archive.conf
    
    lftp:
      lftp -c 'open -u myself ftp.isp.net ; mirror -n -R apt apt'
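
With that Makefile in place, pushing new packages out of incoming/ and onto the live archive is just:

    $ make instpkg lftp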
    
  • Filtering base64 encoded spam

I hate spam, and I get an awful lot of it. About a third of my email is spam, though on a bad day the ratio can be reversed. (I used to have a nice graph showing just how much spam I get.) To get rid of it I use a lot of filters. One of these is the Postfix body checks feature, which allows you to match lines in the body of the email and reject them at the server. I use Perl Compatible Regular Expression (PCRE) matching for the lines.

Recently though, I noticed a lot of spam, usually about Viagra, that was passing through my spam traps. The emails all talked about a small set of webservers, so I thought I would just filter on the URLs. It didn't work. SpamAssassin in the end gave me the hint.

    X-Spam-Status: No, hits=0.0 required=5.0
          tests=BASE64_ENC_TEXT,EMAIL_ATTRIBUTION,HTML_60_70,
                HTML_IMAGE_ONLY_04,MIME_HTML_ONLY,PENIS_ENLARGE,REMOVE_PAGE
          version=2.53
    

    It was base64 encoded email! That’s why my simple PCRE text matches would not work. So I needed to use something else.

    This page is about how to filter on base64 text that appears in emails. I have used examples of PCRE and postfix but you can use this anywhere else, with the appropriate adjustments of where the files go and their syntax.

    How to filter

    A standard filter line in a postfix body_check file looks something like this:

    //   REJECT
    

This is the old iframe hack that some spammers use to sneak URLs into your email. They are nice and clear, and we just reject them. All we have to do now is change the stuff between the "toothpicks" // to what we want.

Take an example spam I got today, offering the usual garbage these shonks offer. Remember, if they don't advertise ethically, it is often a sign of their entire operation.

In this case, I've decided I cannot be bothered getting any emails that advertise stuff on www.sellthrunet.net; I get enough junk already and it's probably a front for spammers anyway, so I'll filter on that domain. You need to make the string reasonably long, as you are effectively cutting off parts of it.

Debian systems have a program called mimencode (some of you might have mmencode), which is part of the metamail package. This does the base64 encoding for you.

So all you need to do is take the string you want to filter on, put it through mimencode, and then put the resulting string into the postfix configuration. You need to do this three times, deleting a character from the front each time, because base64 works by cutting the string up into groups of three characters and you don't know in advance whether your string is going to start at position 1, 2 or 3 of a group.

    gonzo$ echo -n "http://www.sellthrunet.net/" | mimencode
    HR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv
    gonzo$ echo -n "ttp://www.sellthrunet.net/" | mimencode
    dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC8=
    gonzo$ echo -n "tp://www.sellthrunet.net/" | mimencode
    dHA6Ly93d3cuc2VsbHRocnVuZXQubmV0Lw==
    

Next you need to remove part of the encoded string at the end. Remember that 3 characters are encoded into 4 symbols: character one contributes to symbols 1 and 2, character two to symbols 2 and 3, and character three to symbols 3 and 4. The = means the string was not a multiple of 3 and needed padding. If the encoded string has no =, you can use it as-is; otherwise remove all = signs plus one more character from the end of the string. Remember that you are cutting off up to two characters of your regular expression from both ends, so be careful it is still meaningful. The last string, for example, only matches "tp://www.sellthrunet.ne", which still looks OK.
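
Here is a rough shell sketch of the whole encode-and-trim procedure, assuming bash and mimencode; it simply automates the three offsets and the end-trimming rule described above:

    url="http://www.sellthrunet.net/"
    for i in 0 1 2; do
        enc=$(echo -n "${url:$i}" | mimencode)   # encode at each of the three alignments
        case "$enc" in
            *==) enc="${enc%???}" ;;             # strip "==" plus one more symbol
            *=)  enc="${enc%??}" ;;              # strip "=" plus one more symbol
        esac
        echo "$enc"
    done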

Finally, you can join the strings using the regular expression "or" symbol. Also be careful to escape any strings that use special regular expression characters; base64 can contain plus '+' and slash '/', which need escaping with a backslash.

    (HR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv|dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC|dHA6Ly93
    d3cuc2VsbHRocnVuZXQubmV0L)
    

I have a bypass line in my setup so that any lines that are base64 encoded are normally bypassed; if you have the same thing, make sure this line goes before your bypass line or it will never match. We also need to tell postfix to use case sensitive matching, because it is the base64 encoding we are matching and not the real string itself; Postfix PCRE matching is case-insensitive by default, and the i flag after the last slash toggles that off. The relevant lines in the body_checks file are now:

    #
    # sellthrunet.net
    /(HR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv|dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC|dHA6Ly93d3cuc2VsbHRocnVuZXQubmV0L)/i REJECT Spamvertised website
    # don't bother checking each line of attachments
/^[0-9a-z+\/=]{60,}\s*$/                OK
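
For completeness, the map has to be wired into Postfix as well; the main.cf entry looks something like this (the file path is an assumption):

    body_checks = pcre:/etc/postfix/body_checks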
    

To test it, I use pcregrep and mimencode again on the mail file. This shows the spamming line in clear text and gives you an idea that it should work.

    $ pcregrep 'dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC' /var/mail/csmall  | mimencode -u
    http://www.sellthrunet.net/pek/m2b.php?man=ki921">&ltl;im//www.sellthrunet.net/pek/m2b.php?man=ki
    

  • Bridging firewalls for ADSL Connections

For a long time I had 56k (hah, if I was lucky) dialup. Between the modem and my local network was a nice Linux firewall; all was good. Then I changed my connection to ADSL from [Internode][], and that was good too. I soon found out that I couldn't put my firewall in as-is. That was bad.

    ##Why Bridging?
The problem is that, like a lot of other DSL networks out there, [Internode][] sees your LAN and their network device at the telephone exchange as one big Ethernet LAN. Normal firewalls expect two different blocks of IP addresses (or subnets) on their "outside" and "inside" interfaces, e.g. network number 10 on the inside and 42 on the outside. With the given setup, network 42 is both outside and inside, a real problem for a standard firewall.

A bridging firewall expects all its interfaces to be on the same network. It looks a lot like an Ethernet switch or a hub, and in fact with no firewall rules it behaves exactly like one. The tricky thing is that it has to act like a switch when passing packets but act like a router when it's deciding whether it should be passing that packet at all.

    It should be mentioned that you only need a bridging firewall when you want the computers on your local network to all have real live addresses (so no NAT) and your ISP is not expecting you to have a router there.

    ##Kernel Patches and changes
The standard Linux kernel has firewalling in it, and it also has bridging code, so we're set, right? That depends on what kernel version you have. For 2.4.x kernels you need a patch, but the newer 2.6.x kernels have ebtables (the project that swallowed up the iptables+bridge code) so no patching is needed.

In a 2.4.x kernel, the bridge code needs a modification so it goes and "asks" the firewall code if it is OK to forward a packet. Without that patch, your bridge code will happily send along any packets that come its way.

    ##Compiling 2.4.x kernels

It used to be quite easy, as there was only one source of the firewall-bridge linking code. The code used to sit with the bridge project at sourceforge but has now moved in with the ebtables project, also at sourceforge. The following table may make it easier to understand which patch you need:

    Kernel version   Patch
    2.4.18           bridge-nf-0.0.7-against-2.4.18.diff
    2.4.21           ebtables-brnf-3_vs_2.4.21.diff.gz
    2.4.22           ebtables-brnf-2_vs_2.4.22.diff.gz

The 2.4.21 kernel patch didn't apply cleanly and I needed to manually fix a few files to get it to patch and compile; the good news is the 2.4.22 kernel patch did apply cleanly to a stock 2.4.22 kernel.

* net/Makefile : Add "bridge/netfilter" to the mod-subdirs line.
* net/ipv4/ip_output.c : Add the 4 lines from the .rej file. Note that in this file the pointer handle "skb2" is now called "to" and "skb" is called "from", so make sure you make those adjustments when you do your hand-patching.
* net/bridge/br_netfilter.c : Uses old route table functions and a structure that doesn't have pmtu any more. Use the patch at .

You probably should also read the documentation for the different patches. The earlier patches have their own Bridge documentation page, while the newer patches are a poorer cousin to ebtables itself on the newer site, but you might dredge something up on the ebtables documentation page.

    For compiling, I enabled bridging, netfilter, iptables and the bridge netfilter support. The kernel compiled fine and I then installed it on the firewall.

    ##Compiling 2.6.x kernels
At the time of writing, I was unable to use the physdev feature of iptables, which means the bridging firewall was unable to use iptables where the physical interface needed to be specified; iptables gave an invalid argument every time I used -m physdev, so I rolled back to kernel 2.4.22.

As previously mentioned, the 2.6.x kernels have ebtables built in, so there is no need for patching. ebtables used to be just for filtering on layer-2 information, such as Ethernet MAC addresses, but it now allows the Linux bridge to look at the same things iptables can see. Some 2.6 kernel and iptables setups cannot handle the physdev module, so you might need ebtables anyway.

There are two ways of filtering IP packets in 2.6 kernels: you can use iptables, which can see bridged packets, or you can use ebtables, which has some limited support for IP. Unless there is a good reason, go with iptables; it has a lot more features for IP packets.

For compiling, I enabled bridging, netfilter, iptables and iptables physdev. If you want ebtables support too, enable ebtables, "ebt: filter table", "ebt: log support" and "ebt: IP filter support". These are found in the networking options submenu of the kernel configuration.

    ##Helper Programs
You will need two helper programs for your firewall. Neither needs patching, which is wonderful! The first is iptables for manipulating the firewall rules, and the second is bridge-utils, which makes the bridges. If you want to use ebtables too, get that as well.

I run the Debian distribution, so downloading the two required packages was a matter of an apt-get command and I was done. If you don't run Debian, I'm sure you'll find the programs for your distribution somewhere.

    ##Configuration
It's remarkably simple to make a bridging firewall: you make the bridge, then you add the firewall rules. I was pleasantly surprised by this; the hardest thing for me was getting a second Ethernet card working in my stupid hardware, which has flaky ISA buses and a PCI slot that makes anything in it misbehave. Luckily I had three other sensible PCI slots.

    To make a bridge, I use the following commands:

    myfirewall# brctl addbr br0
    myfirewall# brctl addif br0 eth0
    myfirewall# brctl addif br0 eth1
    

That was it, one working bridge! This meant that any packets that needed to cross the bridge were allowed through. Next I had to add some firewall rules. What to put into a firewall is explained much better elsewhere; look at the iptables reference given above.

The way the interfaces are handled changes between the kernels. For 2.4 kernels, you use the standard iptables input and output (-i and -o) flags to specify what your incoming and outgoing interfaces should be. For 2.6 kernels you need to use the physical device module, so wherever you see a rule that has -i or -o flags, replace them with -m physdev --physdev-in or -m physdev --physdev-out to specify which interface you want (this is what breaks on my system). If you use -i and -o it will mis-match, because iptables thinks the input and output interfaces are whatever you call the bridge (br0 if you use my example).

Pretty simple stuff. I hope it was helpful for you. If there is a part that doesn't make any sense, or you'd like me to explain it better, drop me a line at the address below.

##Very simple iptables rules example

Here is a very simple example of an iptables ruleset. It won't do very much except allow everyone on the inside network to connect and let the reply packets come back. It's based on Rusty's quick example. It assumes your external interface is eth0. First is the 2.4 kernel example:

iptables -F FORWARD
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -m state --state NEW -i ! eth0 -j ACCEPT
    iptables -A FORWARD -j DROP
    

    Next is the 2.6 kernel example. The only change is the line specifying what interface we accept new connections from.

iptables -F FORWARD
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -m state --state NEW -m physdev --physdev-in ! eth0 -j ACCEPT
    iptables -A FORWARD -j DROP
    

    ##NATing on a Bridging Firewall
It may seem strange: if you have a bridging firewall, why would you use NAT, and in fact how can you use it? The answer is that you may have several IP addresses but more computers than addresses. Put the servers into the DMZ with real addresses and NAT the PCs.

The setup I have puts the hosts with the real and the private addresses on the same physical network. This is generally a bad idea and is called multi-netting. If you can, put the private hosts on a third Ethernet card.

With multi-netting, you get the bizarre situation where everything revolves around a single interface and the firewall is part bridge, part router, based on what IP address it sees.

The first thing to do is give the bridge interface (br0 in the example) two IP addresses. It needs to be in both the public and the private networks to do the routing and NATing. If you are going the three-interface route, the third interface gets the private address and the bridge interface gets the public one.
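
As a sketch, reusing the 1.2.3.4 and 192.168.1.0/24 examples from the rules below (the /29 prefix length is an assumption), that might look like:

    myfirewall# ip addr add 1.2.3.4/29 dev br0
    myfirewall# ip addr add 192.168.1.1/24 dev br0
    myfirewall# ip link set br0 up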

Next, you need to add some firewall rules to do the NAT itself. This is reasonably standard. You will need to qualify the rule with the private LAN address so you don't NAT the public IP addresses too. The example assumes the external IP address is 1.2.3.4:

iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j SNAT --to 1.2.3.4
    

Finally, protect your firewall; it now unfortunately has a public IP address so that it can do NAT. You may want to make sure that your daemons, such as SSH, only listen on your private IP addresses. Some firewall rules such as the following can also help. Other than traffic already established, the firewall only accepts traffic to itself if it is from the private LAN IP range, it came in the internal interface, and it is destined for the firewall itself. It also accepts traffic on the loopback interface but drops the rest.

    iptables -F INPUT
    iptables -A INPUT -j ACCEPT -m state --state ESTABLISHED,RELATED
    iptables -A INPUT -j ACCEPT -s 192.168.1.0/24 -i eth1 -d 192.168.1.1
    iptables -A INPUT -j ACCEPT -i lo
    iptables -A INPUT -j DROP
    

    [Internode]: http://www.internode.on.net/

  • Printing using LPRng and Foomatic

For many years I have been using LPRng as my printer spooler. It is not the easiest one to use, but it has a lot of features and is used in heavy-duty situations such as the main spoolers for university student printing.

In the early days, all printouts were simple ASCII text and all printers understood simple ASCII text, so there were no problems. Now printouts can take a number of forms, such as PDF, Postscript, png, jpg and TeX, plus many others. Not only that, every printer has a different way of expressing how to print complex figures or graphics, or even just how to change the colour. A printing filter is the program that, say, converts the PDF command "now use red and write a line like this" into a language the printer itself understands. The filter also has to work out what is being sent to it: is that a PDF coming down the line or a Postscript file? Maybe it is nroff text?

    The first filter I used was magicfilter. I then tried turboprint, which is non-free and also whatever lprngtool uses. I now use the foomatic scripts, which appear to be the most successful.

This document describes how I set up my LPRng program, running on a Debian GNU/Linux system, to talk to my Epson Stylus Color 600 attached to a networked print server (some Netgear thingy). The instructions should work for other distributions, with the exception of course of a different PPD file.

You may also want to read another LPRng installation document.

    Basic Setup

    The general idea is to use the Foomatic program called foomatic-rip as the LPRng input filter. This filter will convert the incoming file into something my Epson understands correctly. Ideally, I just tell my system “print this” and it does it, without any further input.

    The steps in setting the printing up are:

    1. Getting the right packages
    2. Finding your printer PPD
    3. Checking your ghostscript works
    4. Installing and customizing the PPD
    5. Change or create printcap file
    6. Testing

    Getting the right packages

There are some packages you will need, or that are quite useful to have. I just apt-get install'ed them and they all went in fine. Some of the packages depend on what printer you have and what drivers it will be using.

lprng
The printer spooler. You could use other printer spoolers, but they are set up differently.
foomatic-filters
This holds the printer filters. Most importantly, it is the package with foomatic-rip.
gs-esp
Ghostscript comes in a variety of flavours. I needed this flavour because it had the output device I needed. Make sure you get the right one for you.
gsfonts
Fonts for Ghostscript. A handy package to have.
mpage
Converts ASCII text into postscript.
a2ps
Converts lots of things into postscript.

    Finding your printer PPD

The PPD file is a Postscript Printer Description. It describes your printer to the postscript and ghostscript programs. You need to get this first, before doing anything else, because it will determine whether your printer is supported and also what other packages you might need.

Previously you could get the PPDs from the LinuxPrinting.org website, but they have changed things around so they are no longer available directly. You have to get them out of the printer database; the problem is they are shipped as XML.

A program called foomatic-ppdfile is the magic gap-filler between the XML and the PPD. It can be used to find which PPD to use and to generate it. For example, I try to find my Epson Stylus Color 600 with:

    $ foomatic-ppdfile -P 'Epson.*Color 600'
    Epson Stylus Color 600 Id='Epson-Stylus_Color_600' Driver='gimp-print'
    CompatibleDrivers='gutenprint-ijs.5.0 gimp-print omni stc600ih.upp stc600p.upp stc600pl.upp stcolor stp'

    The Id= is used to extract the printer definition. Generally there are many drivers you can use for each printer, check the Linux printing website for details of each.

The Id= is used to extract the printer definition. Generally there are many drivers you can use for each printer; check the Linux printing website for details of each. For my printer, the default driver is called gimp-print, but I don't have that one, and foomatic-ppdfile complains:

    $ foomatic-ppdfile -p 'Epson-Stylus_Color_600' > /etc/lprng/Epson-Stylus_Color_600-gimp-print.ppd
    There is neither a custom PPD file nor the driver database entry contains sufficient data to build a PPD file.

If you get that message, try another printer driver. gutenprint is the new name for gimp-print, so we can use that:

    $ foomatic-ppdfile -d gutenprint-ijs.5.0 -p 'Epson-Stylus_Color_600' > /etc/lprng/Epson-Stylus_Color_600-gutenprint-ijs.5.0.ppd

    Checking your ghostscript works

Debian ships various ghostscript interpreters. The question is: which is the right one for you? Most printers will work with the Gimp-Print driver, but a lot of the HP printers will need the ijs driver. The trick is to look at the PPD file. For example, my file has the following line:

    *FoomaticRIPCommandLine: "gs -q -dPARANOIDSAFER -dNOPAUSE -dBATCH -sDEVICE=stp %A%Z -sOutputFile=- -"

The important part (unfortunately line-wrapped in the PPD) is -sDEVICE=stp. This is your output device, and it may or may not be supported by your version of ghostscript. Grep for it with this command:

    gonzo$ gs -h | grep stp
       uniprint xes cups ijs omni stp nullpage

You can see that we grepped for stp and the stp device shows up. If your ghostscript doesn't show the right driver for you, try one of the other ghostscripts (gs, gs-aladdin, gs-esp). Also be careful: gs is managed as an alternative, and you might have the wrong one selected in the alternatives system. To check, you can do the following:

    gonzo$ gs -h | head -2
    ESP Ghostscript 7.05.6 (2003-02-05)
    Copyright (C) 2002 artofcode LLC, Benicia, CA.  All rights reserved.
    gonzo$ ls -l /usr/bin/gs
    lrwxr-xr-x    1 root    root    20 May  2  2002 /usr/bin/gs -> /etc/alternatives/gs
    gonzo$ ls -l /etc/alternatives/gs
    lrwxrwxrwx    1 root    root    15 Aug  9 15:16 /etc/alternatives/gs -> /usr/bin/gs-esp

    Installing and customizing the PPD

It doesn't really matter where you put your PPD file; you just specify it in the printcap so that foomatic-rip can find it. I put mine in /etc/lprng, but it is really up to you.

I also needed to adjust my PPD. Like most of the world, I do not have Letter sized paper but A4. The PPD defaults to Letter, and making sure you remember to type "-Z PageSize=A4" every time you print gets old quickly.
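
For reference, this is the sort of invocation you would otherwise need every time; the queue name epson600 comes from the printcap below, and the file name is just an example:

    gonzo$ lpr -Pepson600 -Z PageSize=A4 somefile.ps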

Fortunately it is easy to fix. Find the two lines that start with *DefaultPageSize: and *DefaultPageRegion: and change them both from Letter to A4. I'm sure someone who understands Postscript (I don't) can explain why you need to change both, but the printing complains if you only change one.

Also remember to change the permissions so the printer filter program can read the file. I originally had it set up so that it couldn't, and then wondered why my filters thought they had a "Raw" printer.

    Change or create printcap file

The printcap file will need to be created or changed so that it uses foomatic-rip as the input filter (the if= clause). In turn, the filter has to be told that it is run from LPRng and the location of the PPD file. The rest of the information is the usual thing you would see for a remote printer.

    epson600|Epson Stylus Color 600:
        :force_localhost:
        :[email protected]:
        :if=/usr/bin/foomatic-rip:
        :filter_options= --lprng $Z /etc/lprng/Epson-Stylus_Color_600-gutenprint-ijs.5.0.ppd:
        :sd=/var/spool/lpd/epson600:
        :mx#0:sh:

    Testing

Foomatic has a special flag that spits out all the other flags you can use. It's a good test to see if everything is working OK. The command is just:

    gonzo$ echo x | lpr -Z docs

The file you try to print is irrelevant; just make sure it exists. You should then get a few pages of documentation showing all the flags you can use to change the printing. The -Z docs flag means to print the documentation of the driver rather than the file itself. The foomatic documentation talks about using /proc/cpuinfo as the demo file, but I get "nothing to print" messages.

If you do not get a document with the title "Documentation for (printer name) (printer driver)", then check the permissions of the PPD file and also the printcap file. If all else fails, edit the file /etc/foomatic/filter.conf and change the relevant line to filter: 1. The debug output will then be found in /tmp/foomatic-rip.log. Do not keep the debugging on all the time, as it is a security risk.

    Central print servers and multiple queues

In another installation I had an HP OfficeJet D155, which was used by several Linux client PCs. I wanted several "printers", depending on whether the user wanted draft or colour. The -Z flags seemed a little too hard for the users.

The idea is to have multiple "printers" on the central print server, each of which bounces to a single real print queue that spools off the jobs. Do not have all the "printers" going directly to the real printer, as it generally handles contention badly.

    The central printcap just adjusts what extra -Z options are appended and
    then bounces the job to the real print queue which spools all jobs through
    the filter and onto the printer.

    .common
        :sd=/var/spool/lpd/%P:sh:mx=0
        :lp=hpoj155@localhost

    hpoj155draft:tc=.common
        :append_z=PrintoutMode=Draft.Gray

    hpoj155bw:tc=.common
        :append_z=PrintoutMode=Normal.Gray

    hpoj155colour|hpoj155color:tc=.common

    hpoj155draftduplex:tc=.common
        :append_z=PrintoutMode=Draft.Gray,Duplex=DuplexNoTumble

    hpoj155bwduplex:tc=.common
        :append_z=PrintoutMode=Normal.Gray,Duplex=DuplexNoTumble

    hpoj155colourduplex|hpoj155colorduplex:tc=.common
        :append_z=Duplex=DuplexNoTumble

    hpoj155|HP OfficeJet D155xi remote printer
        :lp=printer.mynetwork%9100
        :if=/usr/bin/foomatic-rip
        :filter_options= --lprng $Z /etc/foomatic/lpd/HP-OfficeJet_D155-hpijs.ppd
        :sd=/var/spool/lpd/%P:sh:mx=0

The print queues are now set up on the main server. Next is to make it easier on the client PCs by setting up the queues and the aliases. I called my queues hpoj155* so that another printer can come along later without clashing. That makes for big and confusing printer names, so I created two lots of printer queues on the clients: one lot with the printer name and one without. The first name in the printcap is the one that is used by default.

    draftduplex|bwduplex|colourduplex|draft|bw|colour
            :client:lp=hpoj155%[email protected]:force_localhost@

    hpoj155draft|hpoj155bw|hpoj155colour|hpoj155draftduplex|hpoj155bwduplex|hpoj155colourduplex
            :client:lp=%[email protected]:force_localhost@

That way users can just print to -P colourduplex, and it is understood that the job should go to the hpoj155 queue and that the printout is in colour and duplex mode. The user doesn't need to know what magic -Z flags are required for this to happen either; they are different for different printer types.

  • LaTeX to HTML Converters

I've been using LaTeX for many years. I should say quickly, for the freaks out there, that this doesn't mean I'm into vinyl or other strangeness: LaTeX is a document processing system that creates good quality documents from text source, no hamsters or chains involved at all.

The standard processors you get with LaTeX are good at converting the source into Postscript or PDF (Acrobat) documents, and most of the time this will do. However, there are occasions when you want your document output in HTML. In that case you need a different processor.

This page is about the various LaTeX to HTML converters out there. It is not an exhaustive list, but it should help other people looking around for converters. The main problem with them all is that they are not maintained that well.

    Hyperlatex

Hyperlatex is the converter I have used the most. It does most jobs quite well and you get reasonable results from it. My major gripes with it are that it is written in Lisp, so I cannot extend it (I don't know Lisp), and that it doesn't do CSS that well.

Despite those shortcomings, Hyperlatex is a good start for document conversion. Unlike most programs on this page, it is actively maintained and keeps up with HTML standards. For example, there is work for Hyperlatex output to be in XHTML.

    TTH

    TTH has put a lot of effort into the formula conversion. Most converters make an image for the formulas while TTH generates HTML for it, giving the formulas a more consistent look in the document rather than looking like they were “pasted in” later.

TTH has a funny license in that (roughly) it is free for non-commercial use only. Depending on where you are going to use it, this may be a problem. You can buy a commercial license of TTH too.

HeVeA

    HeVeA is one converter I haven’t used, but will try out soon. It looks like it would get confused by some of my documents, especially anything with nested environments.

    The program is written in a language called Objective Caml which I know even less about than Lisp. That means no way of extending it for me.

    LaTeX2HTML

At first I thought this would be the converter for me. It looks like it converts pages rather well, and it is written in a programming language I understand (Perl).

    The main problem with this program is that it has not been maintained for years. A consequence of that is the HTML rendering is a bit old and doesn’t keep up with the latest standards.

tex4ht

    Another one I’ve not tried yet. This one does look recently maintained and I will be trying it out.

    LaTeXML

    This converter takes LaTeX as an input and instead of having an output file format of DVI makes it XML. It is written in Perl and was developed with a particular focus on the mathematical equations. To get HTML you use a post-processor.

  • Linux Distributions – Security through Unity?

Quite often there is discussion about what operating system to use and the pros and cons of each. Of course one aspect that comes up is security, which is definitely a worthwhile goal to have. However, the discussion is usually based on technical points only: operating system A has this feature, while B has another that is trying to do the same thing but doesn't do it quite as well, while C doesn't have that feature at all.

Technical points are important, but when you get down to the various flavours of Unix, it all rapidly becomes academic. Pretty much any Unix is more secure than any Microsoft Windows. This is because there is a proper concept of users/uids and process separation, not to mention a nice boundary between the application and the OS. This layering helps with security, but it also makes updating a lot easier.

Sure, this flavour of Unix may have a certain feature, but does it really do anything worthwhile, and what is the chance of some event happening where the absence of this feature means the server is hacked while other similar servers with the feature are fine? I've used Sun Solaris as the example here; let's face it, there is pretty much no ongoing new support for any other commercial Unix, and no future.

As a network engineer, security to me is just another aspect of network management. It is important, but so is keeping the service running free of faults and up to a certain level of performance. Perhaps some principles of network management could be applied to server security.

An important lesson of network management is that quite a large proportion of faults (some studies have said 50%, some said 70%; we'll never know the true number) can be attributed to a person or process failure, as opposed to a software or hardware problem as such. Whatever the percentage is, quite a large number of security breaches are due to the administrators, for whatever reason, not running their servers correctly. Therefore, anything that makes the administrator's job easier or the processes simpler makes security better.

    An example

    Perhaps an example will help. You’re in charge of setting up some servers
    and you can choose what goes on them. You’ve narrowed it down to Solaris
    or Debian GNU/Linux, what to choose?

    The first answer should be, if the current operators are far more comfortable
    with one over the other and you intend to use the same operators for the new
    systems without any additional staff, go with whatever they are used to.

    However if there is no strong preference, you then have to look at other things.
    How about security patches and testing? Is the setup you’re running going
    to be maintained and is it tested correctly?

    Running Software – Solaris Style

Sun has now included a lot more free software in their more recent versions, but it is still not a lot and, well, they just have this habit of screwing it up. I'm not sure why, but they don't seem capable of compiling something off, say, sourceforge without making a mess of it. top and bash used to crash; I had never seen bash crash before I saw it on Solaris. And I won't even mention the horror of their apache build.

What happens if you want to run an MTA like postfix? It is certainly a lot easier to run, with a lot more features, than the standard sendmail. Or you want some sort of web application that needs certain perl modules? If you're running Solaris, you download all the sources, compile, and repeat all the way through the dependencies. You can get pre-compiled programs from places scattered around the Internet, but quite often there are library version conflicts.

That hasn't even got into the problems when package A wants version 4 of library Z but package B wants version 5 of library Z. Or what happens if they both want version 4, but then you need to upgrade one of the packages, which needs the newer library?

    Running Software – Debian Style

For the Debian user, it is usually a matter of apt-get install <packagename>. There are nearly 9,000 packages in the distribution, so whatever you want is probably there. There are only rare library conflicts; the library versions are standard across each release and everyone runs the same one. The only problems are the occasional transitional glitches when one packager is on the new libraries and another is still on the old ones. Still, the occurrence of this sort of thing is greatly reduced.

All nearly 9,000 packages go through the same QA control and have their bugs tracked by the same system in the same place. If one person cannot get a problem fixed, they have the help of at least 800 fellow Debian developers. If you're having problems with your own build of a program on Solaris, you're on your own.

Upgrading is a hassle, so it doesn't happen

Now the problem is that upgrading on most systems is a real pain. The problems surrounding the slammer and blaster worms on Microsoft servers are a good example. When the worms came out, people were saying their propagation was solely due to poor system maintenance, where lazy administrators did not bother to properly patch their servers.

Even the OS itself can play up, causing strange and amusing problems to appear. My wife's Windows XP computer switches off when it goes into powersave mode. This started happening after installing a security patch. I'm not sure what the power saving code has to do with security; maybe evil hackers across the internet cannot turn my lights on and off anymore.

While there definitely was a subset of administrators that fit into the category of lazy and inept, there were also plenty that could not upgrade or fix their systems. The problem was that applying a service pack would break a great deal of things in some random way. Some people just could not be bothered, or were too scared, to upgrade.

It is generally expected that when you upgrade you will get problems, and these problems need to be risk-managed. That shouldn't be the usual expectation for a simple upgrade.

On a Debian system, rebooting after an upgrade is often a good idea but not essential; for most upgrades you just install the upgraded package and that's the task finished. There's no need to stop and start the affected services, because this is generally done for you.

The clear layering of the application and the OS, and the reasonably clear layering of the application and library code, means that if there is a problem with one of the layers, the upgrade of it will not affect the other layers. This is why when an application crashes on a Unix server you restart the application, while on a Windows server you reboot it.