Author: dropbear

  • python and rrdtool

    RRDTool is a neat utility for collecting and graphing statistics such as server loads or network traffic. There are two main modules for interfacing with RRDTool files from Python: rrdpython and pyrrd.

    rrdpython provides the basic bindings of the rrdtool library for Python. The API will be very familiar to people who program in C or use the command-line tools, which for me is both, so it works well. However, if you were expecting a “pythonic” API you will be disappointed. Since it is a direct binding, you either need a pre-compiled module or have to compile it yourself with the librrd-dev package installed. Depending on your setup this could be trivial or a real pain.
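
    As a rough sketch of what that C-style API looks like (assuming the bindings import as rrdtool; the file name and DS/RRA definitions here are only examples), the calls mirror the command-line tools almost one for one:

    import rrdtool

    # create a round-robin database sampled every 5 minutes
    rrdtool.create('load.rrd', '--step', '300',
                   'DS:load:GAUGE:600:0:U',
                   'RRA:AVERAGE:0.5:1:288')

    # feed it a value, then graph the last day, much like rrdupdate and rrdgraph
    rrdtool.update('load.rrd', 'N:0.42')
    rrdtool.graph('load.png', '--start', '-1d',
                  'DEF:l=load.rrd:load:AVERAGE',
                  'LINE2:l#0000FF:load')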

    pyrrd initially looks good as it has an object-oriented style and is (supposedly) pure Python code, so there is no trying to compile things… but!

    Well, the problem is it’s not pure Python. The hooks are there (in backend.py) for it to be implemented in Python, but it falls back to the external method, which is a bunch of Popen calls. pyrrd also does not support the full set of rrdtool commands.

    My intention is to keep using rrdpython despite the compile hassle, and possibly even to use a fancier graphing setup such as Highcharts, though their license is a problem for me.

  • What I learnt from LCA 2013

    Well I’m back from LCA 2013 and what a great week it was. I learnt plenty of things over the week, including:

    • Cloud computing will be coming to government, but it will be hard to do in a significant way
    • It wasn’t just me that found the GTK 2 to 3 API change hard going
    • OpenStack is the next cool kid on the block and there is a lot going on here
    • Try out novaprova for testing C programs
    • SELinux shouldn’t slow my computer down and yes, I’ll try it soon
    • git-annex looks very interesting, I’m just trying to find a use-case of my own for it
    • The development methods for Free Software have come a long way since a decade ago
    • Test and release processes are important on large software projects
    • Pretty much everyone I knew has changed companies at least once
    • For multi-process communication, 0mq looks interesting

    It was also great to actually meet people I’ve known for years (over 10 for some) only as emails or IRC chats. I’ve not been to Mount Stromlo for almost as long and it’s somewhere I’d like to visit again.

    Argh, the WordPress editor is still not fixed 🙁

  • Off to LCA2013

    While I’ve been involved in Linux and Debian for many (15 or more) years, I’ve only ever been to one “major Linux thing” in all that time, and that was manning a stall for Debian about 10 years ago.

    Well, let’s call it two, because next week I’m off to the Linux.conf.au 2013 conference. I can’t get too terribly excited about the location because it’s… Canberra, yep, my (now) home town and about 4 blocks from my workplace; so no exotic locales for me!

    I’m hoping to catch up with some Debian folk while the conference is on. At the very least there is a keysigning party on Wednesday, where some others from Debian will be.

    PS: A WordPress update broke the visual editing, so I hope this looks OK.

  • Bootstrap – The hidden gem in Turbogears

    I’ve been trying to tidy up the Mako templates within my Turbogears 2 project. As part of that I was looking at some of the quickstarted ones, including the About page.

    What was curious was that all this CSS work was already done for you, including icons. Digging further, I found that one of the many projects Turbogears pulls in is Bootstrap. Its website describes Bootstrap as a “Sleek, intuitive, and powerful front-end framework for faster and easier web development”, but what it means to me is that a bunch of people who understand HTML and CSS way better than me have made it easy to build a decent-looking web application.

    While the framework won’t suit everyone’s tastes, it makes a lot of the formatting decisions so much easier. The thing is, in all the Turbogears documentation I have read I’ve not seen it mentioned. Not sure why; it’s pretty awesome (not SQLAlchemy awesome, but not many things are).

  • Pre-selecting ToscaWidgets jqgrid values in TurboGears

    My Turbogears project has recently reached an important milestone; one of the back-end processes now runs pretty much continuously and the plugins it uses (or at least the ones I can see) are also working. That means I can turn to the front-end which displays the data the back-end collected.

    For some of the data I am using a ToscaWidgets (or TW2) widget called a jqGridWidget, which is a very nice jQuery widget that separates the presentation and data using a JSON query. I’ve previously mentioned how to get a jqGridWidget working but left the pre-filtering out until now. This meant that my grid showed all the items in the database, not just the ones I wanted to see, but it was a start.

    Now, this widget displays things called Attributes, which are children of another model called Hosts. Basically, Attributes are things you want to check or track about a Host. My widget used to show all Attributes, but on a Host screen I often want to see only that Host’s Attributes. So, this is how I got my widget to behave; I’m not sure this is THE CORRECT way of doing it, but it does work.
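
    For context, the relationship between those two models is roughly the following (a minimal sketch with illustrative names and columns, not my actual model):

    from sqlalchemy import Column, ForeignKey, Integer, Unicode
    from sqlalchemy.orm import relationship
    from myproject.model import DeclarativeBase   # hypothetical declarative base

    class Host(DeclarativeBase):
        __tablename__ = 'hosts'
        id = Column(Integer, primary_key=True)
        display_name = Column(Unicode(255))

    class Attribute(DeclarativeBase):
        __tablename__ = 'attributes'
        id = Column(Integer, primary_key=True)
        display_name = Column(Unicode(255))
        # every Attribute hangs off exactly one Host
        host_id = Column(Integer, ForeignKey('hosts.id'), nullable=False)
        host = relationship('Host', backref='attributes')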

    First, in the Hosts controller you need to create the widget and pass along the host_id that the controller has obtained. I was not able to use the sub-classing trick you see in some TW2 documentation, but instead had to actually make a widget instance.

    class HostsController(BaseController):
        # other stuff here
        class por2(portlets.Portlet):
            title = 'Host Attributes'
            widget = AttributeGrid()
            # host_id was obtained earlier by the controller (e.g. from the URL)
            widget.host_id = host_id

    Next, the prepare() method in the widget needs to get hold of the host_id and put it into the postData options. I needed to do it before calling super() because after that the options become one long string.

    class AttributeGrid(jqgrid.jqGridWidget):
        host_id = None   # set by the controller before the widget is displayed

        def prepare(self, **kw):
            # pass the host ID through as postData so it rides along with the JSON request
            if self.host_id is not None:
                self.options['postData'] = {'hostid': self.host_id}
            super(AttributeGrid, self).prepare()

    This means that when the jqgrid asks for its JSON data it will include the hostid parameter as well. We need the JSON method to “see” the host ID so we can filter the database access.

    Finally in the JSON method for the Attribute we pick up and filter on the hostid.

        @expose('json')
        @validate(validators={'hostid': validators.Int()})
        def jqsumdata(self, hostid=0, page=1, rows=1, *args, **kw):
            conditions = []
            if hostid > 0:
                conditions.append(model.Attribute.host_id == hostid)
            # and_() comes from sqlalchemy; with no conditions it matches everything
            attributes = model.DBSession.query(model.Attribute).filter(and_(*conditions))

    From there you run through the attributes variable and build your JSON reply.
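
    To sketch that last step (the Attribute columns here are hypothetical and the cell list simply has to line up with the grid’s colNames; math needs importing at the top of the controller):

            rows = int(rows)
            result_count = attributes.count()
            attributes = attributes.offset((int(page) - 1) * rows).limit(rows)
            records = [{'id': attr.id,
                        # illustrative columns; the order must match the grid's colNames
                        'cell': [attr.display_name, attr.value]}
                       for attr in attributes]
            total = int(math.ceil(result_count / float(rows)))
            return dict(page=int(page), total=total, records=result_count, rows=records)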

     

  • procps 3.3.6 and Mudlet 2.0

    Yesterday was a busy Debian day for me with the release of not one but two packages.

    procps

    procps version 3.3.6 was released both as an upstream and a Debian package.  While there were many bug fixes in this release, the main new feature for it was the top inspect feature.

    The top inspect feature allows you to run top in the normal way and then inspect a specific process you have selected within top for more information. Examples of this include fuser- or lsof-type output to see what files or libraries are open, but really the limit is how clever the operator is and how often you would need this information. Even scripts can be run from this feature if you have some customised or cooked output you need to see. Once the output is there, you can search it for text.
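
    The inspect entries themselves are defined in your toprc, one per line, as a tab-separated type, name and target, along the lines of the examples in the top manual page:

    pipe    Open Files    lsof -P -p %d 2>&1
    file    NUMA Info     /proc/%d/numa_maps

    Here “pipe” runs the given command (with %d replaced by the selected process’s PID) and “file” simply displays the named file.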

    There will be a small hold-up with the Debian packages as the section for the library has been moved to libs, where it should be. At least I think that’s why there is a delay.

    procps infrastructure

    procps currently has only an email list and a gitorious git repository.  While this works most of the time, there are two main problems with this setup.

    The first is that the only two ways you can report bugs are to send an email to the list or to create a git merge request. Email has problems with tracking and keeping history, while git sets the bar pretty high for reporting that something has gone wrong.

    The second problem is that there is no tarball file repository. gitorious does have the feature of downloading tarballs of the tagged checkpoints, but these come from the raw repository. When you make a tarball with “make dist” there are some generated files included that don’t appear in the gitorious ones, which means it’s not as simple for some people as you might expect.

    So I’ll be creating a project for these two features soon. I’m not sure where they will be, but I use sourceforge for both of these things for a few other projects, so that might be where they end up.

     

    Mudlet

    Mudlet, the multi-user dungeon or MUD client, has finally made it to version 2.0! It looks a lot more stable and polished, at least from the scripting side, than the previous RC versions, but has all the features that we’ve come to expect from it. The most important thing is that the developers have finally put a line in the sand and said “this is version 2.0”. The Debian package was uploaded yesterday and should be on its way to your local mirror soon.

     

  • procps-ng 3.3.5

    Are you using procps 3.3.4?  My suggestion is you don’t!

    An API change slipped through, which meant that if you tried to run procps 3.3.5 on libproc 3.3.4 it did strange things. One of those strange things is that ps ax crashes. Top looked rather interesting as well. This in turn upsets the boot sequence around mounting drives, so you get a horrible mess. The problem is the testsuite didn’t catch it because we’re not testing the API yet and the binaries use the libtool hack to use the library in the build directory. Luckily for me I had only manually updated the binaries and not the library, so I saw it early.

    procps 3.3.5 was released tonight and it has the SONAME bump, so we are now at libprocps1, not libprocps0.

  • More spam from nobistech.net

    I get a lot of spam. Most of it, thankfully, is blocked by dspam, but occasionally some gets through the filter. One that particularly caught my eye was interesting not so much for what it was advertising (I don’t read that part of the email) but for where it came from and pointed to.

    Normally there are two service providers involved in spam. The email comes from (or via) one, and the spamvertised website is on another. The interesting thing is that for this spam both of these were the same service provider. The email came from 174.34.168.85 and the spamvertised website was 70.32.40.194; both of these addresses are owned by nobistech.net. I punted the email to spamcop, which said that they’re not interested in spam reports.

    A few google queries show that these guys seem quite happy to host spam sources and destinations and have been doing it for years. They appear as either nobistech.net or Ubiquity Servers, but they are one and the same organisation, or at least related.

    I won’t bother sending anything to them; it seems this has been done many, many times by others with no results. Instead, some CIDR blocks will be put into my blacklist.

  • jqGridWidget in Turbogears

     

     

    Turbogears 2 uses Toscawidgets 2 for a series of very clever widgets and base objects that you can use in your projects. One I have been playing with is the jqGridWidget, which uses jQuery to display a grid. The idea is to create a grid or other object and then use JavaScript to call back to the web server to obtain the data, which usually comes from a database via the very excellent SQLAlchemy framework.

    I found it a bit fiddly to get going and it is unforgiving: get one thing wrong and you end up with an empty grid. But when it works, it works very well.

    from tw2.jqplugins.jqgrid import jqGridWidget
    from tw2.jqplugins.jqgrid.base import word_wrap_css

    class ItemsWidget(jqGridWidget):
        def prepare(self):
            self.resources.append(word_wrap_css)
            super(ItemsWidget, self).prepare()

        options = {
                'pager': 'item-list-pager2',
                'url': '/items/jqgrid',
                'datatype': 'json',
                'colNames': ['Date', 'Type', 'Area', 'Description'],
                # ... remaining options trimmed from this snippet ...
                }

    The code above is a snippet of the widget I use to make the grid. This is the visual side and, on its own, shows just the grid with nothing in it. You need the datatype and url lines in the options, which tell the code the URL to get the data from. I have an items controller and within it there is a jqgrid method. Note the number of columns I have; the data sent by the jqgrid method has to match this.

    Deep inside the items controller is the jqgrid method. The idea is to return some rows of data back to the grid in the exact way the grid expects it, otherwise you get, yep, more empty grids.

    @expose('json')
    def jqgrid(self, page=1, rows=30, sidx=1, sord='asc', _search='false',
               searchOper=u'', searchField=u'', searchString=u'', **kw):

        qry = DBSession.query(Item)
        qry = qry.filter()
        qry = qry.order_by()
        result_count = qry.count()
        rows = int(rows)

        offset = (int(page) - 1) * rows
        qry = qry.offset(offset).limit(rows)

        # each 'cell' list must line up with the colNames defined in the widget
        records = [{'id': rw.id,
                    'cell': [rw.created.strftime('%d %b %H:%M:%S'),
                             rw.item_type.display_name,
                             rw.display_name,
                             rw.text()]} for rw in qry]
        total = int(math.ceil(result_count / float(rows)))
        return dict(page=int(page), total=total, records=result_count, rows=records)

    The number of items in the cell list of each dictionary must equal the number of columns defined in colNames (four, in this case). I found it useful to have Firebug watching my GET requests. There is also another widget, based upon SQLAjqGridWidget, which is a lot simpler to set up but much less flexible too.

     

  • NTP control messages

    I have had an opportunity to rework some code to query NTP servers directly in Python rather than running ntpq -pn and scraping the result. The over-the-wire protocol format is reasonably straightforward and my little module is now passing all of its nosetests, which is wonderful.

    While it’s only a single table, ntpq actually makes multiple requests to get its results. NTP control messages are all mode 6, which tells the client and server that this is not about time but about queries. A control packet has a sequence number which the server echoes back, so you can line up your requests with responses. For large responses, the more, offset and count fields work together to knit up a large block of data. The first packet of the response will have an offset field of 0. The count is the size of the data payload in bytes. If the response is too big, the more bit will be set to 1 to say there is more coming. The second packet will have an offset that is the same value as the previous packet’s count, so you can stitch them together.
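
    To make that concrete, here is a rough sketch of packing the 12-byte control header and reassembling a fragmented response (the field layout follows the description above; this is illustrative rather than lifted from my module):

    import struct

    # NTP mode 6 (control) header: LI/VN/mode, R|E|M + opcode, sequence,
    # status, association ID, offset and count: 12 bytes, big-endian
    CTRL_HEADER = struct.Struct('>BBHHHHH')

    def make_request(opcode, sequence, assoc_id=0):
        # 0x16 = version 2, mode 6; requests have the response/error/more bits clear
        return CTRL_HEADER.pack(0x16, opcode, sequence, 0, assoc_id, 0, 0)

    def reassemble(packets):
        """Stitch the data payloads of a multi-packet response back together."""
        fragments = {}
        for pkt in packets:
            (_, _, seq, status, assoc,
             offset, count) = CTRL_HEADER.unpack(pkt[:CTRL_HEADER.size])
            fragments[offset] = pkt[CTRL_HEADER.size:CTRL_HEADER.size + count]
        # offsets run 0, count0, count0+count1, ... so sorting joins them in order
        return b''.join(fragments[off] for off in sorted(fragments))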

    The first query is a READSTAT, which uses opcode 1. This returns a list of 4-byte entries, with the association ID taking the first two bytes and the peer status the second two. If you look at the output of ntpq you will notice a character in the first column, which is the peer selection. The important one for us is the asterisk, which equates to 110 in the lower 3 bits of the first status byte. This is the system peer, which is the main one. There may be others, up to 2 (usually shown with a + symbol), which are compared to the system peer. ntpq doesn’t really care about the different peers and just displays the status, but I want to know which peer the server is synchronised to.
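
    As a sketch, pulling the system peer out of a reassembled READSTAT payload using that lower-3-bits rule looks something like this (again illustrative, not lifted from my module):

    import struct

    def find_system_peer(payload):
        """Return the association ID whose selection bits mark it as the system peer."""
        for i in range(0, len(payload), 4):
            assoc_id, peer_status = struct.unpack('>HH', payload[i:i + 4])
            selection = (peer_status >> 8) & 0x07   # lower 3 bits of the first status byte
            if selection == 0b110:                  # shown as '*' by ntpq
                return assoc_id
        return None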

    Next we need more details about the system peer (or about all peers, in ntpq’s case). To do this a READVAR, or opcode 2, is sent to the server, with the association ID set to the association or peer we are interested in. The response for my setup is sent over two packets which need to be reassembled, and it is all plain ASCII: a comma-separated list of key=value pairs. I use the following Python line to split it up.

    self.assoc_data.update(dict([tuple(ad.strip().split("=")) for ad in data[header_len:].split(",") if ad.find("=") > -1]))

    I’m sure people who know Python better than I do will say I am doing it wrong (it looks a bit klunky to me too), but it does work. From that I can query the various values returned by the server about its peers, including the source of its clock.
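
    For what it is worth, a slightly less cramped version of the same parse, splitting on the first '=' only so values containing '=' do not break the dict, might look like this (parse_readvar is just an illustrative helper name):

    def parse_readvar(payload, header_len=0):
        """Turn 'key=value, key=value, ...' into a dict."""
        result = {}
        for item in payload[header_len:].split(','):
            if '=' in item:
                key, value = item.strip().split('=', 1)
                result[key] = value
        return result

    # then: self.assoc_data.update(parse_readvar(data, header_len))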
