Filtering base64 encoded spam

I hate spam, but I get an awful lot of it. About 1/3 of my email is spam, though on a bad day the ratio can be reversed. I used to have a nice graph showing just how much spam I get. To get rid of it I use a lot of filters. One of these is the Postfix body checks feature, which allows you to match lines in the body of the email and reject them at the server. I use Perl Compatible Regular Expression (PCRE) matching for the lines.

Recently though, I noticed a lot of spam, usually about Viagra, that was passing through my spam traps. The emails all advertised a small set of webservers, so I figured I would just filter on the URLs. It didn't work.
In the end, SpamAssassin gave me the hint:

X-Spam-Status: No, hits=0.0 required=5.0

It was base64 encoded email! That’s why my simple PCRE text matches would not work. So I needed to use something else.

This page is about how to filter on base64 text that appears in emails. I have used examples with PCRE and Postfix, but you can use this anywhere else, with appropriate adjustments to where the files go and their syntax.

How to filter

A standard filter line in a postfix body_check file looks something like this:


This is the old iframe hack that some spammers use to sneak URLs into your email. They are nice and clear, and we just reject them. All we have to do now is change the stuff between the "toothpicks" // to what we want.

Here's an example spam I got today; it's offering the usual garbage these shonks usually offer. Remember, if they don't advertise ethically, it is often a sign of how their entire operation works.

In this case, I've decided I cannot be bothered getting any emails that advertise stuff on www.sellthrunet.net. I get enough junk already and it's probably a front for spammers anyway, so I'll filter on that domain. You need to make the string reasonably long, as you will effectively be cutting off parts of it.

Debian systems have a program called mimencode (on some systems it is mmencode), which is part of the Metamail package. This does the base64 encoding for you.

So all you need to do is take the string you want filtered on, put it into mimencode and then put the resulting string into the postfix configuration. You need to do this three times, deleting a character at the front each time, because base64 is done by cutting the string up into groups of three characters each, and you don't know in advance whether your string is going to start at position 1, 2 or 3 of a group.

gonzo$ echo -n "http://www.sellthrunet.net/" | mimencode
aHR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv
gonzo$ echo -n "ttp://www.sellthrunet.net/" | mimencode
dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC8=
gonzo$ echo -n "tp://www.sellthrunet.net/" | mimencode
dHA6Ly93d3cuc2VsbHRocnVuZXQubmV0Lw==

Next you need to remove part of the encoded string at the end. Remember that 3 characters are encoded into 4 symbols, so character one contributes to symbols 1 and 2, character two to symbols 2 and 3, and character three to symbols 3 and 4. The = means the string was not a multiple of 3 and needed padding. If the encoded string has no =, you can use it as-is; otherwise remove all the = signs plus one more character at the end of the string. Remember that you are cutting up to two characters off your regular expression at both ends, so be careful it is still meaningful. The last string, for example, only matches from "tp://" onwards, which still looks OK.
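You can check the trimming rule with a couple of lines of Python; its base64 module produces the same output as mimencode:

```python
import base64

# "ab" is not a multiple of 3 characters, so its encoding gets "=" padding.
enc = base64.b64encode(b"ab").decode()        # "YWI="
# Strip the "=" padding plus one more symbol, as described above.
trimmed = enc.rstrip("=")[:-1]                # "YW"
# The remaining symbols depend only on "ab", no matter what follows it.
assert base64.b64encode(b"abX").decode().startswith(trimmed)
assert base64.b64encode(b"abhello").decode().startswith(trimmed)
```

The last symbol before the padding mixed in bits of the next (unknown) character, which is why it has to go.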

Finally, you can join the strings using the regular expression "or" symbol |. Also be careful to escape any strings that use special regular expression characters: base64 output can contain plus '+' and slash '/', which need escaping with a backslash.
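The whole recipe (encode at the three offsets, trim the contaminated tail symbols, escape, join with |) can be sketched in Python. Here re.escape stands in for the hand-escaping, and the example.com URL is just a stand-in for whatever string you want to block:

```python
import base64
import re

def base64_pattern(target):
    """Build a regex that matches `target` wherever it falls in a
    base64-encoded body, whichever of the 3 phases it starts on."""
    alternatives = []
    for skip in range(3):            # drop 0, 1 or 2 leading characters
        encoded = base64.b64encode(target[skip:].encode()).decode()
        if encoded.endswith("="):
            # strip the padding plus one contaminated symbol
            encoded = encoded.rstrip("=")[:-1]
        alternatives.append(re.escape(encoded))
    return "(" + "|".join(alternatives) + ")"

pattern = base64_pattern("http://www.example.com/")
# The pattern hits the URL no matter where it sits in the encoded text:
for prefix in ("", "x", "xy"):
    body = base64.b64encode(
        (prefix + "buy at http://www.example.com/ now").encode()).decode()
    assert re.search(pattern, body)
```

The three alternatives the function produces are exactly what goes between the slashes in the body_checks line.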


I have a bypass line in my setup so that lines which are base64 encoded are normally skipped; if you have the same thing, make sure this line goes before your bypass line or it will never match. We also need to tell Postfix to use case-sensitive matching, because it is the base64 encoded text we are matching and not the real string itself, so we use the i flag after the last slash (in Postfix PCRE tables the flags toggle, and matching is case-insensitive by default). The relevant lines in the body_checks file are now:

/(HR0cDovL3d3dy5zZWxsdGhydW5ldC5uZXQv|dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC|dHA6Ly93d3cuc2VsbHRocnVuZXQubmV0L)/i REJECT Spamvertised website
# don't bother checking each line of attachments
/^[0-9a-z+\/=]{60,}\s*$/              OK

To test it, I use pcregrep and mimencode again on the mail file. This will show the spamming line in clear text and give you an idea that it should work.

$ pcregrep 'dHRwOi8vd3d3LnNlbGx0aHJ1bmV0Lm5ldC' /var/mail/csmall | mimencode -u

Printing using LPRng and Foomatic

For many years I have been using LPRng as my printer spooler. It is not the easiest one to use, but has a lot of features and is used in heavy-duty situations such as the main spoolers for University student printers.

In the early days, all printouts were simple ASCII text and all printers understood simple ASCII text, so there were no problems. Now printouts can be in a number of forms, such as PDF, PostScript, PNG, JPEG and TeX, plus many others. Not only that, all printers have a different way of being told how to print complex figures or graphics, or even just change the colour. A printing filter is the program that converts, say, the PDF command "now use red and draw a line like this" into a language the printer itself understands. The filters also have to work out what is being sent to them; is that a PDF coming down the line or a PostScript file? Maybe it is nroff text?
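The detection part usually comes down to checking the first few "magic" bytes of the job. A rough sketch of the idea in Python (the real foomatic-rip does this in considerably more detail):

```python
def sniff_job_type(data: bytes) -> str:
    """Guess what kind of print job this is from its magic bytes."""
    if data.startswith(b"%PDF-"):
        return "pdf"
    if data.startswith(b"%!"):
        return "postscript"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    return "plain text"   # fall back to treating it as ASCII text

print(sniff_job_type(b"%PDF-1.4 ..."))    # pdf
print(sniff_job_type(b"%!PS-Adobe-3.0"))  # postscript
```

Once the filter knows the type, it picks the right conversion chain to turn the job into the printer's own language.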

The first filter I used was magicfilter. I then tried turboprint, which is non-free and also whatever lprngtool uses. I now use the foomatic scripts, which appear to be the most successful.

This document describes how I set up my LPRng program running on a Debian GNU/Linux system to talk to my Epson Stylus Color 600, which is attached to a networked print server (some Netgear thingy). The instructions should work for other distributions, except of course that you may need a different PPD file.

You may also want to read another LPRng installation document.

Basic Setup

The general idea is to use the Foomatic program called foomatic-rip as the LPRng input filter. This filter will convert the incoming file into something my Epson understands correctly. Ideally, I just tell my system “print this” and it does it, without any further input.

The steps in setting the printing up are:

  1. Getting the right packages
  2. Finding your printer PPD
  3. Checking your ghostscript works
  4. Installing and customizing the PPD
  5. Change or create printcap file
  6. Testing

Getting the right packages

There are some packages you will need, or that are quite useful to have. I just apt-get installed them and they all went in fine. Some of the packages depend on what printer you have and what drivers it will be using.

lprng: The printer spooler. You could use other printer spoolers, but they are set up differently.
foomatic-filters: This holds the printer filters. Most importantly, it is the package with foomatic-rip.
gs-esp: Ghostscript comes in a variety of flavours. I needed this flavour because it had the output device I needed. Make sure you check you get the right one for you.
gsfonts: Fonts for Ghostscript. Handy package to have.
enscript: Converts ASCII text into PostScript.
a2ps: Converts lots of things into PostScript.

Finding your printer PPD

The PPD file is a Postscript Printer Description. It describes your printer to the postscript and ghostscript programs. You need to get this first before doing anything else because this will determine if your printer
is supported and also what other packages you might need.

Previously you could get the PPD from the Linux printing website, but they have changed things around and the files are no longer directly available.
You have to get them out of the printer database; the problem is they are shipped as XML.

A program called foomatic-ppdfile is the magic gap filler between the XML database and the PPD. It can be used to find which PPD to use and to generate it. For example, I try to find my Epson Stylus Color 600 with:

$ foomatic-ppdfile -P 'Epson.*Color 600'
Epson Stylus Color 600 Id='Epson-Stylus_Color_600' Driver='gimp-print' CompatibleDrivers='gutenprint-ijs.5.0 gimp-print omni stc600ih.upp stc600p.upp stc600pl.upp stcolor stp '

The Id= is used to extract the printer definition. Generally there are many drivers you can use for each printer, check the Linux printing website for details of each.

For my printer, the default driver is called gimp-print, but I don’t have that one. foomatic-ppdfile complains:

$ foomatic-ppdfile -p 'Epson-Stylus_Color_600' > /etc/lprng/Epson-Stylus_

There is neither a custom PPD file nor the driver database entry contains sufficient data to build a PPD file.

If you get that message, try another printer driver. gutenprint is the new name of gimp-print, so we can use that:

$ foomatic-ppdfile -d gutenprint-ijs.5.0 -p 'Epson-Stylus_Color_600' > /

Checking your ghostscript works

Debian ships various ghostscript interpreters. The question is: which is
the right one for you? Most printer drivers will need the Gimp-Print
driver, but a lot of the HP printers will need the ijs driver. The trick
is to look at the PPD file. For example, my file has the following lines:

*FoomaticRIPCommandLine: "gs -q -dPARANOIDSAFER -dNOPAUSE -dBATCH -sDE
VICE=stp %A%Z -sOutputFile=- -"

The important part is unfortunately line-wrapped, but it is trying to say -sDEVICE=stp. This is your output device, and it may or may not be supported by your version of ghostscript. Grep for it with the following command:

gonzo$ gs -h | grep stp
   uniprint xes cups ijs omni stp nullpage

You can see that we grepped for stp and there is a string showing
stp. If your ghostscript doesn't show the right driver for you, try one of
the other ghostscripts (gs, gs-aladdin, gs-esp). Also be careful: gs is
managed by the alternatives system and you might have the wrong one
selected. To check, you can do the following:

gonzo$ gs -h | head -2
ESP Ghostscript 7.05.6 (2003-02-05)
Copyright (C) 2002 artofcode LLC, Benicia, CA.  All rights reserved.
gonzo$ ls -l /usr/bin/gs
lrwxr-xr-x    1 root    root    20 May  2  2002 /usr/bin/gs -> /etc/alternatives/gs
gonzo$ ls -l /etc/alternatives/gs
lrwxrwxrwx    1 root    root    15 Aug  9 15:16 /etc/alternatives/gs -> /usr/bin/gs-esp

Installing and customizing the PPD

It doesn’t really matter where you put your PPD file. You just specify it
in the printcap so the foomatic-rip file can find it. I put mine in
/etc/lprng but it is really up to you where to put it.

I also needed to adjust my PPD. Like most of the world, I do not have
Letter sized paper but A4. The PPD defaults to Letter, and making
sure you remember to type "-Z PageSize=A4" every time you print gets old fast.

Fortunately it is easy to fix it. Find the two lines that start with
*DefaultPageSize: and *DefaultPageRegion: and change them both from Letter
to A4. I’m sure someone who understands Postscript (I don’t) can explain
why you need to change both but the printing complains if you only change one.
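If you would rather script the change than open an editor, something along these lines does the job (a sketch; the function name is mine, and you would point it at wherever you keep your PPD):

```python
import re

def set_default_a4(ppd_text: str) -> str:
    """Rewrite both *Default lines in a PPD from Letter to A4."""
    return re.sub(r"^(\*Default(?:PageSize|PageRegion)):\s*Letter",
                  r"\1: A4", ppd_text, flags=re.MULTILINE)

ppd = "*DefaultPageSize: Letter\n*DefaultPageRegion: Letter\n"
print(set_default_a4(ppd))
```

Both lines get rewritten in one pass, which avoids the half-changed state the printing system complains about.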

Also remember to change the permissions so the printer filter program can
read the file. I had it set up originally so it couldn't, and then wondered
why my filters thought they had a "Raw" printer.

Change or create printcap file

The printcap file will need to be created or changed so that it uses the
input filter (if= clause) of foomatic-rip. In turn the filter has to be
told it is run from LPRng and the location of the PPD file. The rest of
the information is the usual thing you would see for a remote printer.

epson600|Epson Stylus Color 600:
    :[email protected]:
    :filter_options= --lprng $Z /etc/lprng/Epson-Stylus_Color_600-gute


Foomatic has a special flag that spits out all the other flags you can use.
It’s a good test to see if everything is working ok. The command is just

gonzo$ echo x | lpr -Z docs

The file you try to print is irrelevant; just make sure it exists. You
should then get a few pages of documents showing all the flags you can use
to change the printing. The -Z docs flag means to print the documentation
of the driver rather than the file itself. The foomatic documentation talks
about using /proc/cpuinfo as the demo file, but I just get "nothing to print".

If you do not get some document with the title "Documentation for (printer
name) (printer driver)", then check the permissions of the PPD file and
also the printcap file. If all else fails, edit the file
/etc/foomatic/filter.conf and change the relevant line to debug: 1.
The debug output will then be found in /tmp/foomatic-rip.log. Do not keep
the debugging on all the time, as it is a security risk.

Central print servers and multiple queues

In another installation I had a HP OfficeJet 155 which was used by several
pc Linux clients. I wanted several “printers” depending if the user wanted
draft or colour. The -Z flags seemed a little too hard.

The idea is to have multiple printers on the central print server which then
bounces to a real print queue which spools off the jobs. Do not have all
the “printers” going directly to the real printer as it generally handles
contention badly.

The central printcap just adjusts what extra -Z options are appended and
then bounces the job to the real print queue which spools all jobs through
the filter and onto the printer.

    :[email protected]

hpoj155|HP OfficeJet D155xi remote printer:
    :filter_options= --lprng $Z /etc/foomatic/lpd/HP-OfficeJ

The print queues are now set up on the main server. Next is to make it
easier on the client PCs by setting up the queues and the aliases.
I called my queues hpoj155* so the names won't clash if another printer
comes along, but that makes for big and confusing printer names, so I created
two lots of printer queues on the clients: one with the printer name and one
without. The first name in the printcap is the one that is used by default.

        :client:lp=hpoj155%[email protected]:force_loc
[email protected]

        :client:lp=%[email protected]:[email protected]

That way users can just print with -P colourduplex and it understands that
it should go to the hpoj155 queue and that the printout is in colour and
duplex mode. The user doesn't need to know which magic -Z flags are
required for this to happen either; they are different for different
printer types.

Linux Distributions – Security through Unity?

Quite often there is discussion about what operating system to use and the pros and cons of each. Of course one aspect that comes up is security, which is definitely a worthwhile goal to have. However the discussion is usually based upon technical points only: operating system A has this feature, while B has another that tries to do the same thing but doesn't do it quite as well, while C doesn't have that feature at all.

Technical points are important, but when you get down to the various flavours of Unix, it all rapidly becomes academic. Pretty much any Unix is more secure than any Microsoft Windows, because there is a proper concept of users/UIDs and process separation, not to mention a clean boundary between the application and the OS. This layering helps with security, but it also makes updating a lot easier.

Sure, this flavour of Unix may have a certain feature, but does it really do anything worthwhile, and what is the chance of some event happening where the absence of this feature means the server is hacked, while other similar servers with the feature are fine? I've used Sun Solaris as an example here; let's face it, there is pretty much no ongoing new support for any other commercial Unix, and no future.

As a network engineer, security to me is just another aspect of network
management. It is important, but so is keeping the service running free of
faults and up to a certain level of performance. Perhaps some principles of
network management could be applied to server security.

An important lesson of network management is that quite a large number of
faults (some studies have said 50%, some said 70%, we'll never know the true
number) can be attributed to a person or process failure, as opposed to
a software or hardware problem as such. Whatever the percentage is, quite
a large proportion of security breaches are due to the administrators, for
whatever reason, not running their servers correctly. Therefore, anything
that makes the administrator's job easier or the processes simpler makes
security better.

An example

Perhaps an example will help. You’re in charge of setting up some servers
and you can choose what goes on them. You’ve narrowed it down to Solaris
or Debian GNU/Linux, what to choose?

The first answer should be, if the current operators are far more comfortable
with one over the other and you intend to use the same operators for the new
systems without any additional staff, go with whatever they are used to.

However if there is no strong preference, you then have to look at other things.
How about security patches and testing? Is the setup you’re running going
to be maintained and is it tested correctly?

Running Software – Solaris Style

Sun has now included a lot more free software in their more recent versions,
but it is still not much and, well, they just have this habit of screwing
it up. I'm not sure why, but they don't seem capable of compiling something
off, say, SourceForge without making a mess of it. top and bash used to
crash; I had never seen bash crash before I saw it on Solaris. And I won't
even mention the horror of their Apache build.

What happens if you want to run an MTA like Postfix? It is certainly a lot
easier to run and has a lot more features than the standard sendmail. Or you
want some sort of web application that needs certain Perl modules? If you're
running Solaris, you download all the sources, compile, and repeat all the
way through the dependencies. You can get pre-compiled programs from places
scattered around the Internet, but quite often there are library version
conflicts.

That hasn't even got into the problems when package A wants version 4 of
library Z but package B wants version 5. Or what happens if they both want
version 4, but then you need to upgrade one of the packages to something
that needs the newer library?

Running Software – Debian Style

For the Debian user, it is usually a matter of apt-get install <packagename>.
There are nearly 9,000 packages in the distribution, so whatever you want is
probably there. Library conflicts are rare; the library versions are standard
across each release and everyone runs the same ones. The only problems are
the occasional transitional glitches when one packager has moved to the new
libraries while another is still on the old ones. Still, the occurrence of
this sort of thing is greatly reduced.

All nearly 9,000 packages go through the same QA control and have their
bugs tracked by the same system in the same place. If a maintainer cannot
get a problem fixed, they have the help of at least 800 fellow Debian
developers. If you're having problems with your own build of a program on
Solaris, you're on your own.

Upgrading is a hassle, so it doesn't happen

Now the problem is that upgrading on most systems is a real pain. The
problems surrounding the Slammer and Blaster worms on Microsoft servers are
a good example. When the worms came out, people were saying their propagation
was solely due to poor system maintenance: lazy administrators did not bother
to properly patch their servers.

Even the OS itself can play up, causing strange and amusing problems to
appear. My wife's Windows XP computer switches off when it goes into
powersave mode. This started happening after installing a security patch.
I'm not sure what the power saving code has to do with security; maybe evil
hackers across the Internet cannot turn my lights on and off anymore.

While there definitely would be a subset of administrators that did fit
into the category of lazy and inept, there were also plenty that could not
upgrade or fix their systems. The problem was that applying a service pack
would break a great deal of things in some random way, so some people just
could not be bothered, or were too scared, to upgrade.

On such systems it is generally expected that when you upgrade, you will get
problems, and these problems need to be risk-managed. That shouldn't be the
usual expectation for a simple upgrade.

While it is sometimes a good idea, it is not essential to reboot a Debian
system; for most upgrades you just install the upgraded package and that's
the task finished. There's no need to stop and start the affected services,
because this is generally done for you.

The clear layering of the application and the OS, and the reasonably clear
layering of the application and library code, means that if there is a
problem with one of the layers, upgrading it will not affect the other
layers. This is why when an application crashes on a Unix server you restart
the application, while on a Windows server you reboot the whole machine.

LaTeX to HTML Converters

I've been using LaTeX for many years; I should say quickly, for the freaks out there, that this doesn't mean I'm into vinyl or other strangeness. LaTeX is a document processing system that creates good quality documents
from text source, no hamsters or chains involved at all.

The standard processors you get with LaTeX are good at converting the source into Postscript or PDF (Acrobat) documents and for most of the time this will do. However there are occasions when you want to have your document output in HTML. In this case you need to have a different processor.

This page is about the various types of LaTeX to HTML converters out there. It is not an exhaustive list but should help other people looking around for converters. The main problem with them all is they are not
maintained that well.


Hyperlatex is the converter I have used the most. It does most jobs quite well and you get reasonable results from it. My major gripes with it are that it is written in Lisp, so I cannot extend it (I don't know Lisp), and that it doesn't do CSS that well.

Despite those shortcomings, Hyperlatex is a good start for document conversion. Unlike most programs on this page, it is actively maintained and keeps up with HTML standards; for example, there is work underway for Hyperlatex output to be in XHTML.


TTH has put a lot of effort into formula conversion. Most converters make an image for each formula, while TTH generates HTML for it, giving the formulas a more consistent look in the document rather than looking like they were "pasted in" later.

TTH has a funny license in that (roughly) it is free for non-commercial use only. Depending on where you are going to use it, this may be a problem. You can buy a commercial license for TTH too.


HeVeA is one converter I haven’t used, but will try out soon. It looks like it would get confused by some of my documents, especially anything with nested environments.

The program is written in a language called Objective Caml which I know even less about than Lisp. That means no way of extending it for me.


At first I thought this would be the converter for me. It looks like it converts pages rather well, and it is written in a programming language I understand (Perl).

The main problem with this program is that it has not been maintained for years. A consequence of that is the HTML rendering is a bit old and doesn’t keep up with the latest standards.


Another one I’ve not tried yet. This one does look recently maintained and I will be trying it out.


This converter takes LaTeX as an input and instead of having an output file format of DVI makes it XML. It is written in Perl and was developed with a particular focus on the mathematical equations. To get HTML you use a post-processor.

Anti-Nimda/Code Red Auto-Emailer

The Nimda and Code Red worms really annoy the hell out of me. They put about 10-15% more load on my servers, and they are caused by slack half-witted fools who call themselves system administrators and cannot be bothered to fix their joke of an operating system…

Now that I have got **that** off my chest: this script will send an email to postmaster@their_domain explaining that they should fix their computer. Note that you will quite often get a lot of bounce-backs,
because the idiots who run unpatched servers are the same sort of idiots who don't have the common mailbox names spelt out in RFC 2142.
The script also sends one email per worm attack, so they can quite often get lots of emails; perhaps they will fix their server and stop being a nuisance to the Internet then.

To use this you will need Apache (most likely running on something that is not Windows NT), the mod_rewrite module for Apache, and PHP.

First you need a file. I called it nimda.php and put it at /var/www/nimda.php, but it can go anywhere. You will need to do some editing; you can change the message to whatever you like.
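My nimda.php itself is not reproduced here, but the logic is simple enough to sketch. In Python terms (the names here are illustrative, not my actual script), it boils down to: reverse-resolve the attacker's IP, keep the last two labels of the hostname as the domain, and build a complaint addressed to postmaster there:

```python
from email.message import EmailMessage

def postmaster_address(hostname: str) -> str:
    # keep only the last two labels: dsl-1-2.isp.example.com -> example.com
    return "postmaster@" + ".".join(hostname.split(".")[-2:])

def complaint_for(hostname: str, url: str) -> EmailMessage:
    """Build the complaint email for one worm hit."""
    msg = EmailMessage()
    msg["To"] = postmaster_address(hostname)
    msg["Subject"] = "Worm-infected machine in your domain"
    msg.set_content(
        f"Your host {hostname} requested {url}, which is a "
        "Nimda/Code Red worm signature. Please patch or disconnect it."
    )
    return msg

# In the real script the hostname comes from a reverse lookup of the
# attacking IP (socket.gethostbyaddr) and the finished message is
# handed to the local MTA for delivery.
```

The last-two-labels rule is a crude guess at the responsible domain, which is part of why so many of these complaints bounce.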

Then edit the Apache configuration file to add the following:

RewriteEngine On
RewriteRule ^(.*/winnt/.*) /var/www/nimda.php?url=$1
RewriteRule ^(.*/scripts/.*) /var/www/nimda.php?url=$1
RewriteRule ^(.*/Admin.dll) /var/www/nimda.php?url=$1
RewriteRule ^/default\.ida /var/www/nimda.php?url=default.ida

If that damn worm comes visiting, it should work out the domain the worm is from and email the postmaster there. Note that you will get a fair few bounced emails, but I have had some success with people taking notice of this approach. I'm still getting about 100 worm visits a day per computer though 🙁