Linux Distributions – Security through Unity?

Quite often there is discussion about which operating system to use and the pros and cons of each. One aspect that always comes up is security, which is definitely a worthwhile goal. However, the discussion is usually based on technical points only: operating system A has this feature, while B has one that tries to do the same thing but doesn't do it quite as well, and C doesn't have that feature at all.

Technical points are important, but when you get down to the various flavours of Unix, it all rapidly becomes academic. Pretty much any Unix is more secure than any Microsoft Windows. This is because Unix has a proper concept of user/uid and process separation, not to mention a clean boundary between the application and the OS. This layering helps with security, and it also makes updating a lot easier.

Sure, this flavour of Unix may have a certain feature, but does it really do anything worthwhile? What is the chance of some event happening where the absence of this feature means the server is hacked, while other similar servers with the feature are fine? I've used Sun Solaris as an example here because, let's face it, there is pretty much no ongoing new support for any other commercial Unix, and no future either.

As a network engineer, security to me is just another aspect of network management. It is important, but so is keeping the service running free of faults and up to a certain level of performance. Perhaps some principles of network management could be applied to server security.

An important lesson of network management is that quite a large number of faults (some studies have said 50%, some 70%; we'll never know the true number) can be attributed to a person or process failure, as opposed to a software or hardware problem as such. Whatever the percentage, a large proportion of security breaches are due to the administrators, for whatever reason, not running their servers correctly. Therefore, anything that makes the administrator's job easier or the processes simpler makes security better.

An example

Perhaps an example will help. You're in charge of setting up some servers and you can choose what goes on them. You've narrowed it down to Solaris or Debian GNU/Linux; which do you choose?

The first answer should be: if the current operators are far more comfortable with one than the other, and you intend to use the same operators for the new systems without any additional staff, go with whatever they are used to.

However, if there is no strong preference, you then have to look at other things. How about security patches and testing? Is the setup you're running going to be maintained, and is it tested correctly?

Running Software – Solaris Style

Sun's more recent versions include a lot more free software, but it is still not much, and they have a habit of screwing it up. I'm not sure why, but they don't seem capable of compiling something from, say, SourceForge without making a mess of it. top and bash used to crash; I had never seen bash crash before I saw it on Solaris. And I won't even mention the horror of their Apache build.

What happens if you want to run an MTA like Postfix? It is certainly a lot easier to run, and has a lot more features, than the standard sendmail. Or you want some sort of web application that needs certain Perl modules? If you're running Solaris, you download all the sources, compile, and repeat all the way through the dependencies. You can get pre-compiled programs from places scattered around the Internet, but quite often there are library version conflicts.

And that hasn't even got into the problems when package A wants version 4 of library Z but package B wants version 5. Or what happens if they both want version 4, but then you need to upgrade one of the packages to a version that needs the newer library?

Running Software – Debian Style

For the Debian user, it is usually a matter of apt-get install <packagename>.
There are nearly 9,000 packages in the distribution, so whatever you want is
probably there. There are only rare library conflicts; the library versions
are standard across each release and everyone runs the same one. The only
problems are the occasional transitional glitches as one packager is on
the new libraries and the other is still on the old one. Still the
occurrence of this sort of thing is greatly reduced.

All nearly 9,000 packages go through the same QA control and have their bugs tracked by the same system in the same place. If a packager cannot get a problem fixed, they have the help of at least 800 fellow Debian developers. If you're having problems with your own build of a program on Solaris, you're on your own.

Upgrading is a hassle, so it doesn't happen

Now the problem is that upgrading on most systems is a real pain. The problems surrounding the Slammer and Blaster worms on Microsoft servers are a good example. When the worms came out, people were saying their propagation was solely due to poor system maintenance: lazy administrators had not bothered to properly patch their servers.

Even the OS itself can play up, causing strange and amusing problems to appear. My wife's Windows XP computer switches off when it goes into powersave mode. This started happening after installing a security patch. I'm not sure what the power-saving code has to do with security; maybe evil hackers across the Internet cannot turn my lights on and off anymore.

While there definitely was a subset of administrators who fit into the lazy and inept category, there were also plenty who could not upgrade or fix their systems. The problem was that applying a service pack would break a great deal of things in some random way. Some people could not be bothered to upgrade; others were too scared to.

It has become the general expectation that when you upgrade you will get problems, and that these problems need to be risk-managed. That shouldn't be the usual expectation for a simple upgrade.

While it is often a good idea, though rarely essential, to reboot a Debian system, for most upgrades you just install the upgraded package and the task is finished. There's no need to stop and start the affected services, because this is generally done for you.

The clear layering of application and OS, and the reasonably clear layering of application and library code, means that if there is a problem with one of the layers, upgrading it will not affect the other layers. This is why when an application crashes on a Unix server you restart the application, while on a Windows server you reboot the whole machine.

LaTeX to HTML Converters

I've been using LaTeX for many years; I should quickly say, for the freaks out there, that this doesn't mean I'm into vinyl or other strangeness. LaTeX is a document processing system that creates good-quality documents from text source, no hamsters or chains involved at all.

The standard processors you get with LaTeX are good at converting the source into PostScript or PDF (Acrobat) documents, and most of the time this will do. However, there are occasions when you want your document output in HTML, and in that case you need a different processor.

This page is about the various LaTeX to HTML converters out there. It is not an exhaustive list, but it should help other people looking around for converters. The main problem with them all is that they are not maintained that well.


Hyperlatex is the converter I have used the most. It does most jobs quite well and you get reasonable results from it. My major gripes with it are that it is written in Lisp, so I cannot extend it (I don't know Lisp), and that it doesn't handle CSS that well.

Despite those shortcomings, Hyperlatex is a good start for document conversion. Unlike most programs on this page, it is actively maintained and keeps up with HTML standards. For example, there is work underway for Hyperlatex to output XHTML.


TTH has put a lot of effort into the formula conversion. Most converters make an image for the formulas while TTH generates HTML for it, giving the formulas a more consistent look in the document rather than looking like they were “pasted in” later.

TTH has a funny license in that (roughly) it is free for non-commercial use only. Depending on where you are going to use it, this may be a problem. You can buy a commercial license for TTH too.


HeVeA is one converter I haven’t used, but will try out soon. It looks like it would get confused by some of my documents, especially anything with nested environments.

The program is written in a language called Objective Caml which I know even less about than Lisp. That means no way of extending it for me.


At first I thought this would be the converter for me. It looks like it converts pages rather well, and it is written in a programming language I understand (Perl).

The main problem with this program is that it has not been maintained for years. A consequence is that the HTML it produces is a bit dated and doesn't keep up with the latest standards.


Another one I’ve not tried yet. This one does look recently maintained and I will be trying it out.


This converter takes LaTeX as input and, instead of producing DVI, outputs XML. It is written in Perl and was developed with a particular focus on mathematical equations. To get HTML you run a post-processor.

Anti-Nimda/Code Red Auto-Emailer

The Nimda and Code Red worms really annoy the hell out of me. They put about 10-15% more load on my servers, all caused by slack, half-witted fools who call themselves system administrators but cannot be bothered to fix their joke of an operating system…

Now that I have got **that** off my chest: this script will send an email to postmaster@their_domain explaining that they should fix their computer. Note that you will quite often get a lot of bounce-backs, because the idiots who run unpatched servers are the same sort of idiots who don't have the common mailbox names spelt out in RFC 2142.
The script will also send one email per worm attack, so they can quite often get lots of emails; perhaps then they will fix their server and stop being a nuisance to the Internet.

To use this you will need Apache (most likely running on something that is not Windows NT), the mod_rewrite module for Apache, and PHP.

First you need a file. I called it nimda.php and put it at /var/www/nimda.php, but it can go anywhere. You will need to do some editing; you can change the message to whatever you like.

Then edit the Apache configuration file to add the following:

RewriteEngine On
RewriteRule ^(.*/winnt/.*) /var/www/nimda.php?url=$1
RewriteRule ^(.*/scripts/.*) /var/www/nimda.php?url=$1
RewriteRule ^(.*/Admin.dll) /var/www/nimda.php?url=$1
RewriteRule ^/default\.ida /var/www/nimda.php?url=default.ida

If that damn worm comes visiting, the script should work out the domain the worm came from and email the postmaster there. You will get a fair few bounced emails, but I have had some success with people taking notice of this approach. I'm still getting about 100 worm visits a day per computer though 🙁

Linux load numbers

Many utilities, such as top in procps, display the percentage of time the CPU is busy doing things such as running userland programs, servicing system calls, or just sitting idle. This page describes the file /proc/stat and how programs interpret the numbers they find there.

I am the Debian maintainer for procps, which contains top. I often get bug reports about the numbers that appear at the top of top (called the summary area), so hopefully this page will help Debian users understand them too.

##The /proc/stat file
The file /proc/stat is where the CPU numbers come from. As I type this, my single-Athlon-CPU computer running Linux 2.6.15 has the first two lines of the file looking like:

$ grep ^cpu /proc/stat
cpu  217174 10002 105629 7692822 90422 6491 22673 0
cpu0 217174 10002 105629 7692822 90422 6491 22673 0

The first thing you can see is that I have one CPU, as there is only the aggregate line (starting with cpu) and one individual CPU line (cpu0). Each field describes how much time the CPU has been in a particular state; the values are in jiffies (more about them later). From left to right, the values are:

* Userland – running normal programs
* Nice – running niced programs
* System – running processes at the system level, eg the kernel
* Idle – CPU is doing nothing (running idle task)
* IOwait – CPU is waiting for IO to come back
* irq – servicing a hardware interrupt
* softirq – servicing a software interrupt
* Steal – to do with virtual machines: time this virtual CPU spent involuntarily waiting while the hypervisor serviced others

The kernel generally doesn't count time in seconds but in a unit called jiffies. There is a value called Hz, or Hertz, which is the number of jiffies in a second. Happily for us, we're only looking at percentages, so the exact unit doesn't really matter.
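To make that concrete, here is a minimal sketch (not the actual procps code) of how a tool like top turns those jiffy counts into percentages. The first sample is the line shown above; the second sample is made up for illustration, as if taken a moment later:

```python
# Field names in the order they appear in /proc/stat (see list above).
FIELDS = ["user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal"]

def parse_cpu_line(line):
    """Turn an aggregate 'cpu ...' line into a dict of jiffy counts."""
    parts = line.split()
    return dict(zip(FIELDS, (int(v) for v in parts[1:1 + len(FIELDS)])))

def percentages(before, after):
    """Percentage of an interval spent in each state, from two samples.

    Because we divide jiffy deltas by the total jiffy delta, the length
    of a jiffy (Hz) cancels out, which is why it doesn't matter here.
    """
    deltas = {k: after[k] - before[k] for k in FIELDS}
    total = sum(deltas.values()) or 1  # avoid division by zero
    return {k: 100.0 * d / total for k, d in deltas.items()}

sample1 = parse_cpu_line("cpu  217174 10002 105629 7692822 90422 6491 22673 0")
# A made-up later sample: 100 more user jiffies, 300 more idle jiffies.
sample2 = parse_cpu_line("cpu  217274 10002 105629 7693122 90422 6491 22673 0")

pcts = percentages(sample1, sample2)
print("user %.0f%%, idle %.0f%%" % (pcts["user"], pcts["idle"]))
# prints: user 25%, idle 75%
```

This is also why the summary percentages only make sense over an interval: a single snapshot of /proc/stat gives you totals since boot, not current load.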

Debian GNU/Linux on Compaq nx6320

Last updated: 30 December 2006

##General Hardware Specifications of Compaq nx6320:

| Hardware Component | Status under Linux | Notes |
| --- | --- | --- |
| Intel Core Duo, 2GHz | Works | No special procedure required during installation |
| 1024×768 15″ TFT display | Works | Select Generic LCD Display in the installer |
| Intel Graphics Media Accelerator 950 | Works | Used standard Xorg drivers |
| 2GB DDR2 RAM | Works | No special procedure required during installation |
| 100GB SATA hard drive | Works | Requires a recent kernel, eg 2.6.18, for the driver |
| 10/100/1000 integrated network card | Works | Installer found the Tigon driver for it fine |
| 24X max variable CD-ROM drive | Works | No special procedure required during installation |
| Internal Intel wireless networking | Works | Need to download a specific driver, see below |
| 59WHr lithium-ion battery | Works | No special procedure required during installation |
| Intel 82801G sound card | Works | Used ALSA driver snd_hda_intel |

This laptop is running under kernel version 2.6.18.

##Basic Installation of Debian:
I used Debian Etch RC3 as I wanted to test the installer and also get Linux onto a small partition of this laptop. It is only used for network testing and as a remote X server, so it doesn't have much installed.

The sarge installer won't work; its kernel is too old and it will not find the SATA drives.

##Setting up additional features for Debian
The wireless port was the trickiest part; you need to install some packages to get it going. Make sure you have contrib and non-free in your apt sources, as these drivers are not in main.

Then install ipw3945d, firmware-ipw3945 and the module package. The exact name of the module package depends on which kernel you have installed; I have the kernel from the package linux-image-2.6.18-4-686, so the module package is ipw3945-modules-2.6.18-4-686.

Nothing else needed to be done; no other module packages are required. It started off kind of weird, but after a depmod -a and a reboot I had a solid link and, once I had entered my wireless key, it connected fine.

##Unresolved issues
None really, except that I have not tried out the Bluetooth, modem or smartcard reader. With the exception of the modem, all are detected.

* [Linux on Laptops](http://www.linux-on-laptops/)
* Intel 3945ABG Driver
* Installing Linux on nx6320