Category: Software

  • Happy new RC

    It’s the late afternoon of the first day of 2010 here, though I suppose it’s still 2009 for someone for a little while.

    After a lot of waiting, JFFNMS release candidate 1 for 0.8.5 got uploaded to SourceForge. This release is mainly about fixing some of the database upgrade bugs 0.8.4 had, and they’re all caused by the fact that working with PHP and a database to release code is plain awful.

    The problem is tracking changes in your database. Say version 1 has 3 tables and 60 rows while version 2 has 4 tables and 90 rows: what changed? Everything I’ve seen so far is a bit of a hack or is really fiddly. The JFFNMS release process is both, which is why I’ll go and release several versions of C code or Debian packages before trying to crack that nut again.

    If you are wondering what JFFNMS is, it’s a Network Management System. It makes graphs and red/green icons depending on the status of your routers and servers. It is written in PHP, web based and, of course, licensed under the GPL.

  • Updated: psmisc, gw6c and gjay

    Time away from work and it’s been either raining or hot, so I’ve updated and released some software. It always seems to happen that there is a lot of Free Software development during the breaks.

    psmisc got a bunch of updates, including a new program called prtstat which formats the stat file in the procfs for a pid in (hopefully) a nice way. No sooner had I released the latest update than a bug report came in. It seems fuser -m -k is a little too happy about killing itself. The fix is in the CVS, but it’s annoying that I missed it.
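
    If you want to try prtstat, it just takes a process ID; using the shell’s own PID is an easy test:

    $ prtstat $$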

    Next up was the Debian gw6c package. I was asked why it didn’t get moved from unstable to testing. The problem is that while Linux has iproute, kfreebsd does not, so the unsatisfiable dependency was stopping it transitioning. To make matters worse, the freebsd template was missing from the package. After some deb-substvars evilness to fix the dependencies and some dh_install overrides in the debian/rules file it should all be happy when it’s finished.
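
    I don’t have the exact patch in front of me, but the general shape of the deb-substvars trick looks something like this (the substvar name is illustrative): debian/control depends on a substitution variable, and debian/rules only fills it in on architectures that actually have iproute.

    # debian/control (fragment)
    Depends: ${shlibs:Depends}, ${misc:Depends}, ${iproute:Depends}

    # debian/rules (fragment); the recipe line must start with a tab
    binary-arch: build
            if [ "$$(dpkg-architecture -qDEB_HOST_ARCH_OS)" = linux ]; then \
                    echo 'iproute:Depends=iproute' >> debian/gw6c.substvars; \
            fi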

    Finally, I miss having good random playlists. I’m too lazy to make them myself, so I use some random thing which often gives me rubbish. A program called Gjay used to be in the Debian archive but got removed, mainly because upstream stopped supporting it. I can write C (the programming language it’s written in) and I wanted to use it, so I fixed it. My version is 64-bit clean, so it works on my amd64, and it works with audacious rather than the old xmms, which is great. More importantly, it compiles, it runs and it even works properly.
    I’m just wondering if I want to release it out to the wider world or not.

  • Using Amanda for Backups

    [AMANDA](http://www.amanda.org/), as the name says, is an advanced disk archiver or, more simply, a backup program. It’s a really useful program, let down by something approaching the worst documentation out there. I’ve had about 4 attempts, on and off over about the same number of years, at using it and deciphering the documentation. Finally I got it to work, but it should not be that hard. Who knows, maybe in another 4 years I won’t hate Wiki documentation? Naah.

    This document shows you what files you need to create, their format and how to use some of the programs. It’s probably only a brief get you started type of document, but at least you should be able to start. I have tested this setup on a bunch of [Debian](http://www.debian.org/) GNU/Linux servers but it should be reasonably similar for other systems.

    Speaking of my systems, what I’m writing here works for me with my setup. It’s quite likely it will work for you, but you should verify and test everything. If something is not quite right and you lose all your valuable files because something is wrong in here, you should have tested it properly yourself. In any case, a non-tested backup system is almost as useless as a non-working one.

    #Determine your backup regime and other details
    The first step is to decide what you want to back up. Do you want to do everything? Just home files? The variety is quite large and it is just a matter of working out how to tell AMANDA to do it. AMANDA works in chunks of partitions, so if you have everything in one partition as / then that’s one big chunk. All is not lost, however, as you can use some tricks to exclude some types of files or directories.

    For my setup, I back up all of /, most of /var except logs and cache, and /home with the exception of some junk. Backing up all of a partition can be done differently from backing up part of it. I’m not sure, but I think it’s quicker to back up all of a partition when you can.

    Next, think of a name to call the backup regime. I call mine “normal” and that’s what I’ll use here. AMANDA calls the name “the config” in the manual pages. It’s just a label for the set of configuration files.

    #Determining tapetype

    The configuration file (see next section) will need to know the tapetype. If you don’t know what drive model you have, you can often get some information about the device in the **/proc/scsi/scsi** file. My file shows my ancient Sony tape drive.

    Attached devices:
    Host: scsi0 Channel: 00 Id: 01 Lun: 00
      Vendor: SONY     Model: SDT-5000         Rev: 3.30
      Type:   Sequential-Access                ANSI SCSI revision: 02

    The tapetype is a bunch of parameters about your tape drive and they are different for
    every device. Have a look at [AMANDA tapetype list](http://amanda.sourceforge.net/fom-serve/cache/45.html) to see if yours is there.

    Failing that, use the **amtapetype** program. Be warned, it will take a long time for this program to work. When I say a long time, I’m talking about 2 hours or so; that’s roughly how long it took for me.

    To run it, type:

    server# amtapetype /dev/nst0
    

    and then wait, and wait… Eventually you’ll get something that you can cut and paste into your amanda.conf. /dev/nst0 is the device the tape drive uses; try that one first if you have a SCSI tape drive.

    #Main configuration file – amanda.conf
    Most configuration information goes into the amanda.conf configuration file. The configuration files are kept in /etc/amanda/*configname*, which for me is /etc/amanda/normal.

    org "Example Company"   # Title of report
    mailto "root"           # recipients of report, space separated
    dumpuser "backup"       # the user to run dumps under
    inparallel 4            # maximum dumpers that will run in parallel
    netusage  600           # maximum net bandwidth for Amanda, in KB per sec
    
    # a filesystem is due for a full backup once every <dumpcycle> days
    dumpcycle 4 weeks       # the number of days in the normal dump cycle
    tapecycle 8 tapes       # the number of tapes in rotation
    
    bumpsize 20 MB          # minimum savings (threshold) to bump level 1 > 2
    bumpdays     1          # minimum days at each level
    bumpmult     4          # threshold = bumpsize * (level-1)**bumpmult
    
    runtapes     1
    tapedev "/dev/nst0"     # Linux @ tuck, important: norewinding
    
    tapetype SDT-5000               # what kind of tape it is (see tapetypes below)
    labelstr "^MY-TAPE-[0-9][0-9]*$"        # label constraint regex: all tapes must match
    
    diskdir "/var/tmp"              # where the holding disk is
    disksize 1000 MB                        # how much space can we use on it
    infofile "/var/lib/amanda/normal/curinfo"       # database filename
    logfile  "/var/log/amanda/normal/log"   # log filename
    
    # where the index files live
    indexdir "/var/lib/amanda/normal/index"
    
    define tapetype SDT-5000 {
        comment "Sony SDT-5000"
        length 1584 mbytes
        filemark 0 kbytes
        speed 271 kps
    }
    define dumptype comp-home-tar {
        program "GNUTAR"
        comment "home partition dump with tar"
        options compress-fast, index, exclude-list "/etc/amanda/normal/home.exclude"
        priority medium
    }
    
    define dumptype comp-var-tar {
        program "GNUTAR"
        comment "var partition dump with tar"
        options compress-fast, index, exclude-list "/etc/amanda/normal/var.exclude"
        priority high
    }
    

    So what do all those lines mean? You can leave most of them as they are in this example. Read the amanda(8) manual page for information about what each of the lines does. I’ll only point out some of the more significant or tricky ones here.

    runtapes
    I have runtapes set to 1 because I have a single tape drive and not some fancy multi-drive tape jukebox.
    tapedev
    This is the Unix device where the tape is found. Try /dev/nst0 first as it’s a good default. Use dmesg to find the device if that doesn’t work.
    tapetype
    This refers to a label that is defined further along in the configuration file.
    labelstr
    All tape labels have to match this regular expression; AMANDA won’t recognize a tape otherwise.
    diskdir
    A directory where AMANDA can temporarily store its files before they are put onto the tape. I’m not sure if /var/tmp is a good idea or not.
    disksize
    The amount of space that AMANDA can use in the previously given diskdir.
    infofile, logfile, indexdir
    Filenames for storing information and status of your backups. All the directories for these files need to exist; see below.

    The tapetype definition has been previously explained. It’s either going to be a cut and paste from a website or the output of amtapetype.

    The dumptypes depart from the usual ones you see in the amanda documentation. I am using tar here because then I can select what files and directories I want in the archive. The two dumptypes are identical except for the priority and the exclude-list.

    #disklist config file
    The disklist file is found in the */etc/amanda/normal/* directory and it lists which disks on which hosts are to be backed up using which dumptype. Each line has three entries separated by whitespace: hostname, drive or partition, and dumptype. The dumptype is one of the ones that was defined in the configuration file amanda.conf. My disklist looks like:

    # disklist for normal backups
    # Located at /etc/amanda/normal/disklist
    #
    localhost /var  comp-var-tar
    localhost /home comp-home-tar
    

    Reading both this file and the amanda.conf file, you can see that this means /var is backed up using tar and gzip with the exclusion file var.exclude, and /home is also backed up using tar and gzip but with the exclusion file home.exclude.

    #The tar exclude files
    I create two exclude files, one for each partition type I am backing up. The contents of these exclude files are a file or directory name, one per line. You may want to read the GNU tar info files about what can go here; it’s just using the --exclude-from flag. My var.exclude excludes logs, Debian packages, cached processed manual pages and temporary files. It looks like this:

    /logs/*/*.gz
    /cache/apt/archives/*.deb
    cache/man/man
    tmp
    
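    The home.exclude file uses the same format. Mine isn’t shown here, so purely as an illustrative sketch, it would name whatever junk under /home you want skipped:

    # /etc/amanda/normal/home.exclude (illustrative only)
    */tmp
    *.bak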

    #Creating directories and files
    There are a fair few directories and files that AMANDA needs to have set up with the correct permissions before it will work. The easiest way to document this is to show you the commands used. Remember that my config name is “normal”, and a lot of these directories are defined in the amanda.conf file, as is the user.

    server# chown backup.backup /etc/amanda/normal
    server# chmod 770 /etc/amanda/normal
    server# touch /etc/amanda/normal/tapelist
    server# chown backup.backup /etc/amanda/normal/*
    server# chmod 500 /etc/amanda/normal/*
    server# touch /var/lib/amanda/amandates
    server# chown backup.backup /var/lib/amanda/amandates
    server# mkdir /var/lib/amanda/normal
    server# mkdir /var/lib/amanda/normal/index
    server# chown -R backup.backup /var/lib/amanda/normal
    server# chmod -R 770 /var/lib/amanda/normal
    server# mkdir /var/log/amanda/normal
    server# chown backup.backup /var/log/amanda/normal
    server# chmod 770 /var/log/amanda/normal
    

    #Making a new AMANDA tape
    You will now need to make a new tape and label it. Put the tape in the drive and use the command

    > amlabel normal MY-TAPE-01

    Remember the label has to match the regular expression you have in the amanda.conf file. If all goes well, in a minute or two you will have a tape ready.

    #Checking the configuration
    AMANDA has a checking program, called amcheck, that makes sure everything is ready to go for the backup. The example output given below shows most things are ok, but I need to change a tape (and run amlabel on it) because I’ve already used this tape.

    # su backup
    $ amcheck normal
    Amanda Tape Server Host Check
    -----------------------------
    Holding disk /var/tmp: 6349516 KB disk space available, that's plenty
    ERROR: cannot overwrite active tape MY-TAPE-01
           (expecting a new tape)
    NOTE: skipping tape-writable test
    Server check took 9.027 seconds
    
    Amanda Backup Client Hosts Check
    --------------------------------
    Client check: 1 host checked in 0.032 seconds, 0 problems found
    
    (brought to you by Amanda 2.4.4)
    

    #To do the backup
    After all that setup and testing, the backup itself is, well, pretty boring. To start it, as the backup user type /usr/sbin/amdump normal. You probably want this in a crontab so it is done regularly; see the example below. After some time, whoever was mentioned in the mailto line in the amanda.conf file will get an email about the backup.
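
    For example, a crontab entry for the backup user along these lines (pick your own time of day) runs the dump every weeknight:

    45 0 * * 1-5 /usr/sbin/amdump normal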

    #Recovering a file
    To test the backup, you really need to test that you can restore a file. To do this I cd to /tmp and then run the amrecover utility. The amrecover program looks a lot like an ftp client and is pretty easy to use. Once again, it needs the config name, so to run it, type amrecover normal. It may not know what host you want to go to, so set the host with the sethost command.

    amrecover> sethost localhost
    200 Dump host set to localhost.
    

    Next you need to tell the recover program what disk you are trying to recover. We’ll select the /home directory here.

    amrecover> setdisk /home
    Scanning /var/tmp...
    20030618: found Amanda directory.
    200 Disk set to /home.
    

    We’ve found the disk and the host; now to wander around the filesystem and find the file. The file we are after is /home/csmall/myfile.txt. We’re already at /home thanks to the setdisk command, so now it is just a matter of cd’ing into the csmall directory, adding the file myfile.txt to the recovery list and then issuing the extract command to start the recovery process.

    amrecover> cd csmall
    /home/csmall
    amrecover> add myfile.txt
    Added /csmall/myfile.txt
    amrecover> extract
    
    Extracting files using tape drive /dev/nst0 on host localhost.
    The following tapes are needed: MY-TAPE-01
    
    Restoring files into directory /var/tmp
    Continue [?/Y/n]?
    

    The restore program has found the file you are after and what tapes are needed to restore it. If there were many files, there may be multiple tapes required. It will now ask you to continue, and after pressing enter it will pause while you load the required tape into the drive. Pressing enter again will restore the file. It may take an hour or so to get it.

    Extracting files using tape drive /dev/nst0 on host localhost.
    Load tape MY-TAPE-01 now
    Continue [?/Y/n/s/t]?
    ./csmall/myfile.txt
    
    amrecover>
    

    The result should now be a restored csmall/myfile.txt in your current directory. If that’s the case, the restore test has succeeded.

  • Creating an APT archive

    [apt](http://packages.debian.org/apt) is a very important and useful tool that is used mainly on [Debian](http://www.debian.org/) GNU/Linux computers to download and install packages. It has the ability to sort out the dependencies for packages and to download from multiple sites.

    For various reasons, people want to run their own apt archive that is separate from the rest of the Debian package distribution system. This gives a much better way of distributing binary and source packages than just a plain FTP or HTTP site.

    For this document, I have used my archive hosted on Internode as the example. Apparently there is a way of doing this using the new pool method. I couldn’t get it to work, so I junked it and went back to the old way of putting the packages under dists. It seems a lot cleaner, and the reasons for having /pool/ don’t really apply to small archives.

    #Definitions
    To understand how apt (or Debian for that matter) sorts its files, you need to understand the various ways files are catalogued. This will help in deciding what to call the various directories.

    Dist
    – The distribution of Debian. Can either be a code-word like woody, sid or sarge, or a type like stable, testing or unstable. For my archive I use unstable. Note that in some places DIST means the directory dists/distname, such as dists/unstable.
    Section
    – The section is determined by the package’s copyright and licensing; DFSG-free packages go into the main section.
    Arch
    – What architecture the package is built for.

    #Directory Layout
    Apt requires a certain type of directory layout to work. The directories can either be real directories or symlinks. This is what my archive looks like:

    apt/
      +-dists/
        +-unstable/
          +-main/
            +-binary-i386/
              +-Packages
              +-Packages.gz
              +-gkrellm-wmium_1.0.8-1_i386.deb
              +-wmium_1.0.8-1_i386.deb
            +-source/
              +-wmium_1.0.8-1.diff.gz
              +-wmium_1.0.8-1.dsc
              +-wmium_1.0.8.orig.tar.gz
    

    The binaries are found in the sub-directory *./apt/dists/unstable/main/binary-i386/* while the source packages are found in *./apt/dists/unstable/main/source/*

    #apt-ftparchive configuration file
    The most difficult part of the whole exercise is trying to get this configuration file right. It’s badly documented and has no real examples, combined with the fact that if something doesn’t work you don’t know why.
    I call mine archive.conf but it doesn’t really matter what it is called, as long as you use the same name when you run the programs in the next steps. After much trial and error, I have the following configuration file; explanations of what the lines do follow.

    Dir {
      ArchiveDir "/home/example/myarchive/apt";
    };
    
    BinDirectory "dists/unstable" {
      Packages "dists/unstable/main/binary-i386/Packages";
      SrcPackages "dists/unstable/main/source/Sources";
    };
    
    ArchiveDir
    The absolute path to the top of the archive from the server’s point of view. This directory will have the dists directory in it. If you are building the files on one machine but uploading them to another (like I do) then this is the directory on the building machine.
    BinDirectory
    This is the directory of the dist; that directory only has the main symlink in it.
    Packages
    The location of the Packages file, relative to ArchiveDir. The full path will be $ArchiveDir/$Packages.
    SrcPackages
    The location of the Sources file, relative to ArchiveDir. The full path will be $ArchiveDir/$SrcPackages.

    #Adding Packages
    To add packages, put the .deb files into the binary-i386 directory and the .orig.tar.gz, .dsc and .diff.gz files into the source directory.
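
    Using the file names from the layout above, that is just a copy (or move) into place:

    $ cp incoming/wmium_1.0.8-1_i386.deb apt/dists/unstable/main/binary-i386/
    $ cp incoming/wmium_1.0.8-1.dsc incoming/wmium_1.0.8-1.diff.gz \
         incoming/wmium_1.0.8.orig.tar.gz apt/dists/unstable/main/source/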

    #Running apt-ftparchive
    To update or create the Packages files, you need to run apt-ftparchive. The program scans for packages and creates the right paths in the Packages file for them.

    $ apt-ftparchive generate archive.conf
     dists/unstable: 2 files 3017kB 0s
    Done Packages, Starting contents.
    Done. 3017kB in 2 archives. Took 0s
    

    Notice it has found 2 files and 2 archives, which means it is working because that was the number of packages I had in my archive. You should also have a Packages and Packages.gz in the binary-i386 directory.

    #Uploading the archive
    If you are using the same computer for creating the archive then you are done. If not, you need to move the files onto the server. How you do this depends on what the server has available. Ideally, it has scp or rsync, which makes it very easy. My ISP only has FTP, which means I need something like [lftp](http://packages.debian.org/lftp) to do the copying.

     $ lftp -c 'open -u myusername ftp.myisp.net ; mirror -n -R apt apt'
    

    This command recursively copies files from the local apt directory to the remote apt directory on the ftp server. See the lftp manual page for details.

    #sources.list changes
    Now you have a working archive, you need to change your /etc/apt/sources.list file so that apt knows to get packages from your archive. It looks just like another archive:

    deb http://users.on.net/csmall/apt unstable main
    

    #My Makefile
    The following is my Makefile; it sits at the top directory (the same directory that the apt subdirectory sits in on the local computer) and I use it to make the various files. Remember that the recipe lines in a Makefile must be indented with tabs.

    instpkg:
      -mv incoming/*_i386.deb apt/dists/unstable/main/binary-i386/
      -mv incoming/*.dsc incoming/*.diff.gz incoming/*.orig.tar.gz apt/dists/unstable/main/source/
      apt-ftparchive generate archive.conf
    
    lftp:
      lftp -c 'open -u myself ftp.isp.net ; mirror -n -R apt apt'
    
  • Filtering base64 encoded spam

    I hate spam, though I get an awful lot of it. About 1/3 of my email is spam, though on a bad day the ratio can be reversed. I used to have a nice graph showing just how much spam I get. To get rid of it I use a lot of filters. One of these is the Postfix Body Checks feature, which allows you to match lines in the body of the email and reject them at the server. I use Perl Compatible Regular Expression (PCRE) matching for the lines.

    Recently though, I noticed a lot of spam, usually about Viagra, that was passing through my spam traps. The emails all mentioned a small set of webservers, so I thought I’d just filter on the URLs. It didn’t work. SpamAssassin in the end gave me the hint.

    X-Spam-Status: No, hits=0.0 required=5.0
        tests=BASE64_ENC_TEXT,EMAIL_ATTRIBUTION,HTML_60_70,
              HTML_IMAGE_ONLY_04,MIME_HTML_ONLY,PENIS_ENLARGE,REMOVE_PAGE
        version=2.53
    

    It was base64 encoded email! That’s why my simple PCRE text matches would not work. So I needed to use something else.

    This page is about how to filter on base64 text that appears in emails. I have used examples of PCRE and postfix but you can use this anywhere else, with the appropriate adjustments of where the files go and their syntax.

    How to filter

    A standard filter line in a postfix body_checks file looks something like this:

    //   REJECT
    

    This is the old iframe hack that some spammers use to sneak URLs into your email. They are nice and clear and we just reject them. All we have to do now is change the stuff between the “toothpicks” // to what we want.

    Here’s an example spam I got today, an HTML message advertising www.sellthrunet.net; it’s offering the usual garbage these shonks usually offer. Remember, if they don’t advertise ethically, it is often a sign of how their entire operation works.


    In this case, I’ve decided I cannot be bothered getting any emails that advertise stuff on www.sellthrunet.net. I get enough junk already and it’s probably a front for spammers anyway, so I’ll filter on that domain. You need to make the string reasonably long, as you are effectively cutting off parts of it.

    Debian systems have this program called mimencode (some of you might have mmencode instead), which is part of the Metamail package. This does the base64 encoding for you.

    So all you need to do is take the string you want to filter on, put it through mimencode and then put the resulting string into the postfix configuration. You need to do this three times, deleting a character at the front each time, because base64 works by cutting the string up into groups of three characters and you don’t know in advance whether your string is going to start at position 1, 2 or 3 of a group.

    gonzo$ echo -n "http://www.sellthrunet.net/" | mimencode
    aHR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv
    gonzo$ echo -n "ttp://www.sellthrunet.net/" | mimencode
    dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC8=
    gonzo$ echo -n "tp://www.sellthrunet.net/" | mimencode
    dHA6Ly93d3cuc2VsbHRocnVuZXQubmV0Lw==
    
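    If you don’t have mimencode, the base64 program from GNU coreutils produces the same encoding:

    gonzo$ echo -n "http://www.sellthrunet.net/" | base64
    aHR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv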

    Next you need to remove part of the encoded string at the end. Remember that 3 characters are encoded into 4 symbols: character one contributes to symbols 1 and 2, character two to symbols 2 and 3, and character three to symbols 3 and 4. The = means the string was not a multiple of 3 and needed padding. If the encoded string has no =, you can use it as-is; otherwise remove all the = signs plus one more character at the end of the string. Remember that you are cutting off up to two characters of your regular expression at both ends, so be careful it is still meaningful. The last string, for example, only matches “tp://www.sellthrunet.ne”, which still looks OK.

    Finally, you can join the strings using the regular expression “or” symbol. Also be careful to escape any strings that use special regular expression characters. Base64 can contain plus ‘+’ and slash ‘/’, which need escaping with a backslash ‘\’.

    (aHR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv|dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC|dHA6Ly93d3cuc2VsbHRocnVuZXQubmV0L)
    

    I have a bypass line in my setup so any lines that are base64 encoded are usually skipped; if you have the same thing, make sure this line goes before your bypass line or it will never match. We also need to tell postfix to use case sensitive matching, because it is the base64 encoding we are matching and not the real string itself, and base64 is case sensitive; postfix pattern matching is case-insensitive by default, and the i flag after the last slash toggles that off. The relevant lines in the body_checks file are now:

    #
    # sellthrunet.net
    /(aHR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv|dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC|dHA6Ly93d3cuc2VsbHRocnVuZXQubmV0L)/i REJECT Spamvertised website
    # don't bother checking each line of attachments
    /^[0-9a-z+/=]{60,}\s*$/                OK
    

    To test it, I use pcregrep and mimencode again on the mail file. This will show the spamming line in clear text and give you an idea that it should work.

    $ pcregrep 'dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC' /var/mail/csmall  | mimencode -u
    http://www.sellthrunet.net/pek/m2b.php?man=ki921"><im//www.sellthrunet.net/pek/m2b.php?man=ki
    

  • Printing using LPRng and Foomatic

    For many years I have been using LPRng as my printer spooler. It is not the easiest one to use, but has a lot of features and is used in heavy-duty situations such as the main spoolers for University student printers.

    In the early days, all printouts were simple ASCII text and all printers understood simple ASCII text, so there were no problems. Now printouts can take a number of forms, such as PDF, Postscript, PNG, JPEG or TeX, plus many others. Not only that, all printers have a different way of explaining how to print complex figures or graphics, or even just change the colour. A printing filter is the program that converts, say, the PDF command “now use red and write a line like this” into a language the printer itself understands. The filter also has to work out what is being sent to it: is that a PDF coming down the line or a Postscript file? Maybe it is nroff text?

    The first filter I used was magicfilter. I then tried turboprint, which is non-free, and also whatever lprngtool uses. I now use the foomatic scripts, which appear to be the most successful.

    This document describes how I set up my LPRng program running on a Debian GNU/Linux system to talk to my Epson Stylus Color 600, which is attached to a networked print server (some Netgear thingy). The instructions should work for other distributions, with of course the exception of a different ppd file.

    You may want to read another LPRng installation document too.

    Basic Setup

    The general idea is to use the Foomatic program called foomatic-rip as the LPRng input filter. This filter will convert the incoming file into something my Epson understands correctly. Ideally, I just tell my system “print this” and it does it, without any further input.

    The steps in setting the printing up are:

    1. Getting the right packages
    2. Finding your printer PPD
    3. Checking your ghostscript works
    4. Installing and customizing the PPD
    5. Change or create printcap file
    6. Testing

    Getting the right packages

    There are some packages you will need, or that are quite useful to have. I just apt-get install’ed them and they all went in fine. Some of the packages depend on what printer you have and what drivers it will be using.

    lprng
    The printer spooler. You could use other printer spoolers, but they are set up differently.
    foomatic-filters
    This holds the printer filters. Most importantly, it is the package with foomatic-rip.
    gs-esp
    Ghostscript comes in a variety of flavours. I needed this flavour because it had the output device I needed. Make sure you check you get the right one for you.
    gsfonts
    Fonts for Ghostscript. Handy package to have.
    mpage
    Converts ASCII text into postscript.
    a2ps
    Converts lots of things into postscript.

    Finding your printer PPD

    The PPD file is a Postscript Printer Description. It describes your printer to the postscript and ghostscript programs. You need to get this first, before doing anything else, because it will determine whether your printer is supported and also what other packages you might need.

    Previously you could get the PPD from the LinuxPrinting.org website, but they have changed things around so the files are no longer directly available. You have to get them out of the printer database; the problem is they are shipped as XML.

    A program called foomatic-ppdfile is the magic gap filler between XML and ppd. It can be used to find which PPD to use and to generate it. For example, I try to find my Epson Stylus Color 600 with:

    $ foomatic-ppdfile -P 'Epson.*Color 600'
    Epson Stylus Color 600 Id='Epson-Stylus_Color_600' Driver='gimp-print'
    CompatibleDrivers='gutenprint-ijs.5.0 gimp-print omni stc600ih.upp
    stc600p.upp stc600pl.upp stcolor stp '

    The Id= is used to extract the printer definition. Generally there are many drivers you can use for each printer; check the Linux printing website for details of each.

    For my printer, the default driver is called gimp-print, but I don’t have that one. foomatic-ppdfile complains:

    $ foomatic-ppdfile -p 'Epson-Stylus_Color_600' > /etc/lprng/Epson-Stylus_Color_600-gimp-print.ppd

    There is neither a custom PPD file nor the driver database entry contains sufficient data to build a PPD file.

    If you get that message, try another printer driver. gutenprint is the new name of gimp-print, so we can use that:

    $ foomatic-ppdfile -d gutenprint-ijs.5.0 -p 'Epson-Stylus_Color_600' > /etc/lprng/Epson-Stylus_Color_600-gutenprint-ijs.5.0.ppd

    Checking your ghostscript works

    Debian ships various ghostscript interpreters. The question is, which is the right one for you? Most printer drivers will need the Gimp-Print driver, but a lot of the HP printers will need the ijs driver. The trick is to look at the PPD file. For example, my file has the following line:

    *FoomaticRIPCommandLine: "gs -q -dPARANOIDSAFER -dNOPAUSE -dBATCH -sDE&&
    VICE=stp %A%Z -sOutputFile=- -"

    The important part is unfortunately line-wrapped (the && is a continuation marker), but it is trying to say -sDEVICE=stp. This is your output device, and it may or may not be supported by your version of ghostscript. Grep for it with the following command:

    gonzo$ gs -h | grep stp
       uniprint xes cups ijs omni stp nullpage

    You can see that we grepped for stp and there is a string showing stp. If your ghostscript doesn’t show the right driver for you, try one of the other ghostscripts (gs, gs-aladdin, gs-esp). Also be careful: gs is managed by alternatives and you might have the wrong one as the alternatives link. To check, you can do the following:

    gonzo$ gs -h | head -2
    ESP Ghostscript 7.05.6 (2003-02-05)
    Copyright (C) 2002 artofcode LLC, Benicia, CA.  All rights reserved.
    gonzo$ ls -l /usr/bin/gs
    lrwxr-xr-x    1 root    root           20 May  2  2002 /usr/bin/gs -> /etc/alternatives/gs
    gonzo$ ls -l /etc/alternatives/gs
    lrwxrwxrwx    1 root    root           15 Aug  9 15:16 /etc/alternatives/gs -> /usr/bin/gs-esp

    Installing and customizing the PPD

    It doesn’t really matter where you put your PPD file; you just specify it in the printcap so foomatic-rip can find it. I put mine in /etc/lprng but it is really up to you where to put it.

    I also needed to adjust my PPD. Like most of the world, I do not have Letter sized paper but A4. The PPD uses the default of Letter, and making sure you remember to type “-Z PageSize=A4” every time you print gets old quickly.

    Fortunately it is easy to fix. Find the two lines that start with *DefaultPageSize: and *DefaultPageRegion: and change them both from Letter to A4. I’m sure someone who understands Postscript (I don’t) can explain why you need to change both, but the printing complains if you only change one.
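
    Something like the following sed invocation does both in one go (a sketch, assuming the usual single space after the colon; check the file afterwards):

    gonzo$ sed -i -e 's/^\*DefaultPageSize: Letter/*DefaultPageSize: A4/' \
                  -e 's/^\*DefaultPageRegion: Letter/*DefaultPageRegion: A4/' \
                  /etc/lprng/Epson-Stylus_Color_600-gutenprint-ijs.5.0.ppd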

    Also remember to change the permissions so the printer filter program can read the file. I had it set up originally so it couldn’t, and then wondered why my filters thought they had a “Raw” printer.
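
    Making the PPD world-readable is enough:

    gonzo$ chmod 644 /etc/lprng/Epson-Stylus_Color_600-gutenprint-ijs.5.0.ppd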

    Change or create printcap file

    The printcap file will need to be created or changed so that it uses foomatic-rip as the input filter (the if= clause). In turn, the filter has to be told it is run from LPRng and given the location of the PPD file. The rest of the information is the usual thing you would see for a remote printer.

    epson600|Epson Stylus Color 600:
        :force_localhost:
        :[email protected]:
        :if=/usr/bin/foomatic-rip:
        :filter_options= --lprng $Z /etc/lprng/Epson-Stylus_Color_600-gutenprint-ijs.5.0.ppd:
        :sd=/var/spool/lpd/epson600:
        :mx#0:sh:

    Testing

    Foomatic has a special flag that spits out all the other flags you can use.
    It’s a good test to see if everything is working ok. The command is just

    gonzo$ echo x | lpr -Z docs

    The file you try to print is irrelevant; just make sure it exists. You should then get a few pages of documents showing all the flags you can use to change the printing. The -Z docs flag means to print the documentation of the driver rather than the file itself. The foomatic documentation talks about using the demo file /proc/cpuinfo, but I get “nothing to print” messages.

    If you do not get a document with the title “Documentation for (printer name) (printer driver)”, then check the permissions of the PPD file and also the printcap file. If all else fails, edit the file /etc/foomatic/filter.conf and change the relevant line to debug: 1. The debug output will then be found in /tmp/foomatic-rip.log. Do not keep the debugging on all the time as it is a security risk.

    Central print servers and multiple queues

    In another installation I had an HP OfficeJet D155 which was used by several Linux client PCs. I wanted several “printers” depending on whether the user wanted draft or colour. The -Z flags seemed a little too hard.

    The idea is to have multiple printers on the central print server which then bounce to a real print queue which spools off the jobs. Do not have all the “printers” going directly to the real printer, as it generally handles contention badly.

    The central printcap just adjusts what extra -Z options are appended and
    then bounces the job to the real print queue which spools all jobs through
    the filter and onto the printer.

    .common
        :sd=/var/spool/lpd/%P:sh:mx=0
        :lp=hpoj155@localhost

    hpoj155draft:tc=.common
        :append_z=PrintoutMode=Draft.Gray

    hpoj155bw:tc=.common
        :append_z=PrintoutMode=Normal.Gray

    hpoj155colour|hpoj155color:tc=.common

    hpoj155draftduplex:tc=.common
        :append_z=PrintoutMode=Draft.Gray,Duplex=DuplexNoTumble

    hpoj155bwduplex:tc=.common
        :append_z=PrintoutMode=Normal.Gray,Duplex=DuplexNoTumble

    hpoj155colourduplex|hpoj155colorduplex:tc=.common
        :append_z=Duplex=DuplexNoTumble

    hpoj155|HP OfficeJet D155xi remote printer
        :lp=printer.mynetwork%9100
        :if=/usr/bin/foomatic-rip
        :filter_options= --lprng $Z /etc/foomatic/lpd/HP-OfficeJet_D155-hpijs.ppd
        :sd=/var/spool/lpd/%P:sh:mx=0

    The print queues are now set up on the main server. Next is to make it easier on the client PCs by setting up the queues and the aliases. I called my queues hpoj155* so the names won’t clash if another printer comes along, but that makes for big and confusing printer names, so I created two lots of printer queues on the clients: one with the printer name and one without. The first name in the printcap is the one that is used by default.

    draftduplex|bwduplex|colourduplex|draft|bw|colour
            :client:lp=hpoj155%[email protected]:force_localhost@

    hpoj155draft|hpoj155bw|hpoj155colour|hpoj155draftduplex|hpoj155bwduplex|hpoj155colourduplex
            :client:lp=%[email protected]:force_localhost@

    That way users can just print with -P colourduplex and it is understood that the job should go to the hpoj155 queue and that the printout is in colour and duplex mode. The user doesn’t need to know what magic -Z flags are required for this to happen either; they are different for different printer types.

  • LaTeX to HTML Converters

    I’ve been using LaTeX for many years. I should say quickly, for the freaks out there, that it doesn’t mean I’m into vinyl or other strangeness: LaTeX is a document processing system that creates good quality documents from text source, no hamsters or chains involved at all.

    The standard processors you get with LaTeX are good at converting the source into Postscript or PDF (Acrobat) documents, and most of the time this will do. However, there are occasions when you want your document output in HTML, in which case you need a different processor.

    This page is about the various LaTeX to HTML converters out there. It is not an exhaustive list but should help other people looking around for converters. The main problem with them all is that they are not maintained that well.

    Hyperlatex

    Hyperlatex is the converter I have used the most. It does most jobs quite well and you get reasonable results from it. My major gripes with it are that it is written in Lisp, so I cannot extend it (I don’t know Lisp), and that it doesn’t do CSS that well.

    Despite those shortcomings, Hyperlatex is a good start for document conversion. Unlike most programs on this page, it is actively maintained and keeps up with HTML standards. For example, there is work for Hyperlatex output to be in XHTML.

    TTH

    TTH has put a lot of effort into formula conversion. Most converters make an image for the formulas, while TTH generates HTML for them, giving the formulas a more consistent look in the document rather than looking like they were “pasted in” later.

    TTH has a funny license in that (roughly) it is free for non-commercial use only. Depending on where you are going to use it, this may be a problem. You can buy a commercial license of TTH too.

    HeVeA

    HeVeA is one converter I haven’t used, but will try out soon. It looks like it would get confused by some of my documents, especially anything with nested environments.

    The program is written in a language called Objective Caml which I know even less about than Lisp. That means no way of extending it for me.

    LaTeX2HTML

    At first I thought this would be the converter for me. It looks like it converts pages rather well and it is written in a programming language I understand (Perl).

    The main problem with this program is that it has not been maintained for years. A consequence of that is the HTML rendering is a bit old and doesn’t keep up with the latest standards.

    tex4ht

    Another one I’ve not tried yet. This one does look recently maintained and I will be trying it out.

    LaTeXML

    This converter takes LaTeX as input and, instead of producing DVI, outputs XML. It is written in Perl and was developed with a particular focus on mathematical equations. To get HTML you use a post-processor.

  • Linux Distributions – Security through Unity?

    Quite often there is discussion about which operating system to use and the pros and cons of each. Of course one aspect that comes up is security, which is definitely a worthwhile goal. However, the discussion is usually based upon technical points only: operating system A has this feature, while B has another that tries to do the same thing but doesn’t do it quite as well, while C doesn’t have that feature at all.

    Technical points are important, but when you get down to the various flavours of Unix, it all rapidly becomes academic. Pretty much any Unix is more secure than any Microsoft Windows, because there is a proper concept of users/uids and process separation, not to mention a nice boundary between the application and the OS. This layering helps with security, but it also makes updating a lot easier.

    Sure, this flavour of Unix may have a certain feature, but does it really do anything worthwhile? What is the chance of some event happening where the absence of this feature means the server is hacked, while other similar servers with the feature are fine? I’ve used Sun Solaris as the example here; let’s face it, there is pretty much no ongoing new support for any other commercial Unix, and no future.

    As a network engineer, security to me is just another aspect of network management. It is important, but so is keeping the service running free of faults and up to a certain level of performance. Perhaps some principles of network management could be applied to server security.

    An important lesson of network management is that quite a large number of faults (some studies have said 50%, some said 70%; we’ll never know the true number) can be attributed to a person or process failure, as opposed to a software or hardware problem as such. Whatever the percentage, quite a large number of security breaches are due to the administrators, for whatever reason, not running their servers correctly. Therefore, anything that makes the administrator’s job easier or the processes simpler makes security better.

    An example

    Perhaps an example will help. You’re in charge of setting up some servers and you can choose what goes on them. You’ve narrowed it down to Solaris or Debian GNU/Linux; what to choose?

    The first answer should be, if the current operators are far more comfortable
    with one over the other and you intend to use the same operators for the new
    systems without any additional staff, go with whatever they are used to.

    However if there is no strong preference, you then have to look at other things.
    How about security patches and testing? Is the setup you’re running going
    to be maintained and is it tested correctly?

    Running Software – Solaris Style

    Sun has now included a lot more free software in their more recent versions, but it is still not a lot and, well, they just have this habit of screwing it up. I’m not sure why, but they don’t seem capable of compiling something off, say, SourceForge without making a mess of it. Top and bash used to crash; I had never seen bash crash before I saw it on Solaris. And I won’t even mention the horror of their apache build.

    What happens if you want to run an MTA like postfix? It is certainly a lot easier to run, and has a lot more features, than the standard sendmail. Or you want some sort of web application that needs certain perl modules? If you’re running Solaris, you download all the sources, compile, and repeat all through the dependencies. You can get pre-compiled programs from places scattered around the Internet, but quite often there are library version conflicts.

    That hasn’t even got into the problems when package A wants version 4 of
    library Z but package B wants version 5 of library Z. Or what happens
    if they both want version 4, but then you need to upgrade one of the
    packages which needs the newer library?

    Running Software – Debian Style

    For the Debian user, it is usually a matter of apt-get install <packagename>. There are nearly 9,000 packages in the distribution, so whatever you want is probably there. There are only rare library conflicts; the library versions are standard across each release and everyone runs the same one. The only problems are the occasional transitional glitches as one packager is on the new libraries and another is still on the old one. Still, the occurrence of this sort of thing is greatly reduced.

    All nearly 9,000 packages go through the same QA control and have their bugs tracked by the same system in the same place. If a packager cannot get a problem fixed, they have the help of at least 800 fellow Debian developers. If you’re having problems with your own build of a program on Solaris, you’re on your own.

    Upgrading is a hassle, so it doesn’t happen

    Now the problem is that upgrading on most systems is a real pain. The problems surrounding the Slammer and Blaster worms on Microsoft servers are a good example. When the worms came out, people were saying their propagation was solely due to poor system maintenance, where lazy administrators did not bother to properly patch their servers.

    Even the OS itself can play up, causing strange and amusing problems to appear. My wife’s Windows XP computer switches off when it goes into powersave mode. This started happening after installing a security patch. I’m not sure what the power saving code has to do with security; maybe evil hackers across the internet cannot turn my lights on and off anymore.

    While there definitely would be a subset of administrators that did fit into the category of lazy and inept, there were also plenty that could not upgrade or fix their systems. The problem was that applying a service pack would break a great deal of things in some random way. Some people just could not be bothered or were too scared to upgrade.

    On those systems it is generally expected that when you upgrade, you will get problems, and these problems need to be risk-managed. That shouldn’t be the usual expectation for a simple upgrade.

    While it is often a good idea, but not essential, to reboot a Debian system, for most upgrades you just install the upgraded package and that’s the task finished. There’s no need to stop and start the affected services because this is generally done for you.

    The clear layering of the application and OS, and the reasonably clear layering of the application and library code, means that if there is a problem with one of the layers, upgrading it will not affect the other layers. This is why when an application crashes on a Unix server you restart the application, while on a Windows server you reboot the whole machine.

  • Anti-Nimda/Code Red Auto-Emailer

    The Nimda and Code Red worms really annoy the hell out of me. They put about 10-15% more load on my servers, all caused by some slack half-witted fool who calls themselves a system administrator and cannot be bothered to fix their joke of an operating system…

    Now I’ve got **that** off my chest: this script will send an email to postmaster@their_domain explaining that they should fix their computer. Note that you will quite often get a lot of bounce-backs, because the idiots who run unpatched servers are the same sort of idiots that don’t have the common mailbox names spelt out in RFC 2142. The script will also send one email per worm attack, so they can quite often get lots of emails; perhaps then they will fix their server and stop being a nuisance to The Internet.

    To use this you will need apache (most likely running on something that is not Windows NT), the mod_rewrite module for apache and php.

    First you need a file. I called it nimda.php and put it at /var/www/nimda.php, but it can go anywhere. You will need to do some editing; you can change the message to whatever you like. A sketch of such a script is below.

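    The original script isn’t reproduced here, so this is only an illustrative sketch of what a nimda.php-style responder might look like (the names and the message wording are my own):

    <?php
    // Illustrative sketch, not the original nimda.php.
    $url  = isset($_GET['url']) ? $_GET['url'] : 'unknown';
    $ip   = $_SERVER['REMOTE_ADDR'];
    $host = gethostbyaddr($ip);          // reverse DNS of the infected machine

    // Guess the domain by dropping the first label of the hostname.
    $parts  = explode('.', $host);
    $domain = implode('.', array_slice($parts, 1));

    // gethostbyaddr() returns the IP unchanged if the lookup fails.
    if ($host != $ip && $domain != '') {
        $msg = "A machine on your network ($host, $ip) appears to be infected\n"
             . "with Nimda or Code Red; it just requested $url from my server.\n"
             . "Please patch or disconnect it.\n";
        mail("postmaster@$domain", "Worm-infected machine at $ip", $msg);
    }
    header('HTTP/1.0 404 Not Found');
    ?>
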
    Then edit the apache configuration file to add the following:

    RewriteEngine On
    RewriteRule ^(.*/winnt/.*) /var/www/nimda.php?url=$1
    RewriteRule ^(.*/scripts/.*) /var/www/nimda.php?url=$1
    RewriteRule ^(.*/Admin.dll) /var/www/nimda.php?url=$1
    Alias /default.ida /var/www/nimda.php?url=default.ida
    

    If that damn worm comes visiting, the script should work out the domain the worm is from and email the postmaster there. Note that you will get a fair few bounced emails, but I have had some success with this approach, with people taking notice. I’m still getting about 100 worm visits a day per computer though 🙁

  • Linux load numbers

    Many utilities, such as top in [procps](http://procps.sf.net/), display the percentages of time the cpu is busy doing things such as running userland programs, servicing system calls, or just being idle. This page describes the file /proc/stat and how programs interpret the numbers they find there.

    I am the [Debian](http://www.debian.org/) maintainer for procps, which contains top. Often I get bug reports about those numbers that appear at the top of top (called the summary area), so hopefully this will help Debian users understand them too.

    ##The /proc/stat file
    The file /proc/stat is where the cpu numbers come from. As I am typing this, my single Athlon cpu computer running Linux 2.6.15 had the first two lines of the file looking like:

    $ grep ^cpu /proc/stat
    cpu  217174 10002 105629 7692822 90422 6491 22673 0
    cpu0 217174 10002 105629 7692822 90422 6491 22673 0
    

    The first thing you can see is that I have 1 cpu, as there is only the aggregate line (starting with cpu) and then one individual cpu line (showing cpu0). Each field describes how much time the cpu has been in various states; the values are in jiffies (more about them later). From left to right, the values are:

    * Userland – running normal programs
    * Nice – running niced programs
    * System – running processes at the system level, eg the kernel
    * Idle – CPU is doing nothing (running idle task)
    * IOwait – CPU is waiting for IO to come back
    * irq – servicing a hardware interrupt
    * softirq – servicing a software interrupt
    * Steal – To do with virtual machines; time this virtual cpu spent waiting while the hypervisor serviced others

    ##Jiffies
    Quite often the kernel doesn’t count time in seconds but in a unit called jiffies. There is a value called Hz or Hertz, which is the number of jiffies in a second. Happily for us, we’re only looking at percentages, so it doesn’t really matter.
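
    It also means a rough busy percentage can be computed by sampling the cpu line twice and comparing the deltas. A quick sketch (field 5 is the idle column in the list above):

    $ (grep '^cpu ' /proc/stat; sleep 5; grep '^cpu ' /proc/stat) | awk '
        { total = 0; for (i = 2; i <= NF; i++) total += $i }
        NR == 1 { idle1 = $5; total1 = total }
        NR == 2 { printf "%.1f%% busy\n", 100 * (1 - ($5 - idle1) / (total - total1)) }'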