Archive for the ‘Uncategorized’ Category

Colorful Terminals: Theme Support for Tmux

Published December 22nd, 2011, updated December 26th, 2011.

Most modern terminals have 88/256 color support, but only a few applications take advantage of this. Popular software like Irssi, Midnight Commander or Aptitude still runs in 16 color mode.

To overcome this, I’ve created a patch for tmux (a screen-like terminal multiplexer). The patch adds a new “map-colour” command, which can be used to translate colors from the 16 color palette to the 256 color palette.

map-colour 7 4 208 236

The example above would translate the 16 color pair “gray on blue” to 256 color pair “dark orange on dark grey”. It matches the default Irssi status line.

reset-colours
map-colour * 4 * 236
map-colour * 6 * 238

The second example illustrates the use of the reset-colours command along with the map-colour wildcard feature. First, all existing color mappings are cleared. Then, two new mappings are added: all blue backgrounds map to a grey shade, and all cyan backgrounds map to a similar grey shade.

With these commands, one can create complete themes. I’ve put some examples alongside the source code. They can be activated using the source command, like “source /usr/share/tmux/amber.tmux.conf”.

To apply the colourmap patch, grab the current tmux-1.5 source tree, replace the patched files and run “aclocal && automake” to update the configure script. Then you can “./configure && make && make install” tmux as before.

Of course, contributions are welcome!

tmux project site
tmux colourmap patch

Remaining Anonymous

Published July 20th, 2011.

Sometimes, it’s better to remain anonymous. For this, I use the Tor anonymity network from a dedicated user account on a Linux machine. This user account is special in that it is locked down by the local firewall and cannot open any direct outgoing internet connection. The only way out is the Tor network, which ensures this user’s identity is effectively kept private. Here’s how:

# part of my bashrc
# delete a possibly existing rule first, then re-add it, so repeated shells don't stack duplicates
sudo /sbin/iptables -D OUTPUT -o eth0 -m owner --uid-owner $USER -j REJECT
sudo /sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner $USER -j REJECT
# preload torsocks so every program started from this shell goes through Tor
export LD_PRELOAD="/usr/lib/torsocks/libtorsocks.so"
# cosmetic: fake hostname and a distinct prompt as a reminder that this shell is torified
export HOSTNAME="somewhere"
PS1="\A \[\e[30;100m\] $HOSTNAME \[\e[0m\]:\w\$ "

First, I ensure that a firewall rule is in place which rejects all outgoing traffic on eth0 for this user (this requires sudo permissions). Then, the Torsocks library is preloaded into this environment, which ensures that all programs invoked from this shell are wrapped to use the Tor proxy. Torsocks wraps most programs that use TCP sockets and, in contrast to torify/socksify, it also wraps DNS requests properly. Any other UDP and ICMP traffic is effectively blocked by the local firewall.

The next thing is to figure out some special hostnames within the Tor network. For example, “elinks http://www.ip2location.com.klollely.exit/” will use the exit node klollely (which is in Russia) and “telnet towel.blinkenlights.nl.uhhhhhh.exit” will open a Telnet connection originating from Thailand. Have fun and use it for good.

Measuring Disk IO Performance

Published August 14th, 2010, updated March 17th, 2013.

Hard disk drives have become larger and larger over the years, but their rotation speeds have stayed at nearly the same level for decades. This has led to an odd trait: we have seen greatly improved transfer rates for sequential input/output, but random input/output has remained at nearly the same level as ever.

[Figure: Diagram of a computer hard disk drive, (cc) Surachit]

The reason for this is the physical construction of hard disk drives: to read (or write) some random position on the magnetic layer, a drive needs to move its heads to the given track and wait until the requested sector arrives. Typical mean access times are in the range of 5 ms to 15 ms, resulting in 50-150 random input/output operations per second (IOPS).

In practice, there are several measures to deal with this constraint: modern hard drives use native command queuing (NCQ) to optimize seek times, disk arrays (RAID) spread the I/O load across multiple spindles, various caching strategies reduce the number of input/output operations that are issued to the drive, and read-ahead/prefetching tries to load data before it is requested.

Still, the question arises: how many input/output operations can we actually perform with all these optimizations in place? Let’s benchmark this. One tool that we can use for this is iops(1), a benchmark utility that runs on Linux/FreeBSD/Mac OS X. Iops issues random read requests with increasing block sizes:

$ sudo ./iops --num_threads 1 --time 2 /dev/md1
/dev/md1,   6.00 TB, 1 threads:
 512   B blocks:   43.9 IO/s,  21.9 KiB/s (179.8 kbit/s)
   1 KiB blocks:   46.7 IO/s,  46.7 KiB/s (382.9 kbit/s)
   2 KiB blocks:   46.4 IO/s,  92.7 KiB/s (759.6 kbit/s)
   4 KiB blocks:   37.5 IO/s, 150.0 KiB/s (  1.2 Mbit/s)
   8 KiB blocks:   33.6 IO/s, 268.5 KiB/s (  2.2 Mbit/s)
  16 KiB blocks:   29.5 IO/s, 471.4 KiB/s (  3.9 Mbit/s)
  32 KiB blocks:   26.0 IO/s, 833.3 KiB/s (  6.8 Mbit/s)
  64 KiB blocks:   24.0 IO/s,   1.5 MiB/s ( 12.6 Mbit/s)
 128 KiB blocks:   24.1 IO/s,   3.0 MiB/s ( 25.3 Mbit/s)
 256 KiB blocks:   20.1 IO/s,   5.0 MiB/s ( 42.1 Mbit/s)
 512 KiB blocks:   18.5 IO/s,   9.3 MiB/s ( 77.6 Mbit/s)
   1 MiB blocks:   16.9 IO/s,  16.9 MiB/s (142.0 Mbit/s)
   2 MiB blocks:   11.7 IO/s,  23.3 MiB/s (195.7 Mbit/s)
   4 MiB blocks:    9.2 IO/s,  36.6 MiB/s (307.3 Mbit/s)
   8 MiB blocks:    5.1 IO/s,  41.0 MiB/s (343.6 Mbit/s)
  16 MiB blocks:    3.8 IO/s,  60.8 MiB/s (510.2 Mbit/s)
  32 MiB blocks:    3.1 IO/s, 100.6 MiB/s (843.7 Mbit/s)
  64 MiB blocks:    2.0 IO/s, 127.2 MiB/s (  1.1 Gbit/s)
 128 MiB blocks:    1.1 IO/s, 141.7 MiB/s (  1.2 Gbit/s)
 256 MiB blocks:    0.5 IO/s, 136.1 MiB/s (  1.1 Gbit/s)

In this example, the tested device is a Linux software RAID5 with four 2 TB, 5,400 rpm disks. We have started iops(1) with a single thread and a sampling time of two seconds for each block size. The results show that we reach about 45 IOPS for very small block sizes (or about 22 ms per IO request).

Now, let’s increase the number of threads and see how this affects overall performance:

$ sudo ./iops --num_threads 16 --time 2 /dev/md1
/dev/md1,   6.00 TB, 16 threads:
 512   B blocks:  151.4 IO/s,  75.7 KiB/s (620.3 kbit/s)
   1 KiB blocks:  123.7 IO/s, 123.7 KiB/s (  1.0 Mbit/s)
   2 KiB blocks:  117.0 IO/s, 234.1 KiB/s (  1.9 Mbit/s)
   4 KiB blocks:   97.7 IO/s, 390.6 KiB/s (  3.2 Mbit/s)
   8 KiB blocks:   78.6 IO/s, 629.1 KiB/s (  5.2 Mbit/s)
  16 KiB blocks:   60.7 IO/s, 970.7 KiB/s (  8.0 Mbit/s)
caught ctrl-c, bye.

We see that concurrent requests increase the IO limit to about 150 IOPS. This indicates that the requests are actually spread across multiple spindles or optimized by native command queuing. I guess it’s the spindles, but we could investigate further by benchmarking a single disk instead of the array. That, however, is beyond the scope of this blog post.
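
For readers curious what such a benchmark boils down to, here is a minimal Python sketch of the idea: timed random reads of a single block size against a test target. It is not the iops(1) tool itself, and the device path, block size and sampling time are just example assumptions.

import os, random, time

DEVICE = "/dev/md1"   # example test target; any block device or large file works
BLOCK_SIZE = 4096     # bytes per random read
DURATION = 2.0        # sampling time in seconds

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

count = 0
start = time.time()
while time.time() - start < DURATION:
    # seek to a random block-aligned offset and read one block
    # (page cache effects are ignored for simplicity)
    offset = random.randrange(0, size // BLOCK_SIZE) * BLOCK_SIZE
    os.lseek(fd, offset, os.SEEK_SET)
    os.read(fd, BLOCK_SIZE)
    count += 1
elapsed = time.time() - start
os.close(fd)

print("%d KiB blocks: %.1f IO/s" % (BLOCK_SIZE // 1024, count / elapsed))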

github repo

Building rsync3 on Mac OS X (Universal Binary)

Published July 26th, 2010.

Apple Mac OS X 10.4-10.6 ships with a modified version of rsync2 that has support for extended attributes and resource forks. However, it “does not perform as well as rsync 3.x, consumes more memory (especially for transfers of many files), and will copy unmodified resource forks every single time” (Mike Bombich).

Luckily, you can install rsync3 from MacPorts, use Mike’s Carbon Copy Cloner (which ships with a mutilated binary) or compile it on your own. This is a recipe for building an rsync3 universal binary that runs on Mac OS X 10.4-10.6 ppc/x86/x86_64:

# 2010-07-07, benjamin: recipe for building rsync3 universal binary for
#   mac os x 10.4+ ppc/i386/x86_64 on a build host running 10.6
#   based upon http://www.bombich.com/mactips/rsync.html

# install xcode from http://developer.apple.com/technologies/xcode.html

# get sources
curl -O http://samba.anu.edu.au/ftp/rsync/rsync-3.0.7.tar.gz
curl -O http://samba.anu.edu.au/ftp/rsync/rsync-patches-3.0.7.tar.gz

# optionally verify signatures
curl -O http://samba.anu.edu.au/ftp/rsync/rsync-3.0.7.tar.gz.asc
gpg --verify rsync-3.0.7.tar.gz.asc
curl -O http://samba.anu.edu.au/ftp/rsync/rsync-patches-3.0.7.tar.gz.asc
gpg --verify rsync-patches-3.0.7.tar.gz.asc

# apply patches relevant for preserving Mac OS X metadata
tar xvzf rsync-3.0.7.tar.gz
tar xvzf rsync-patches-3.0.7.tar.gz
cd rsync-3.0.7/
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff

# build 10.4+ ppc binary
CC="gcc-4.0" LDFLAGS="-arch ppc" CFLAGS="-arch ppc -isysroot /Developer/SDKs/MacOSX10.4u.sdk -mmacosx-version-min=10.4" ./configure
make -j4
mv rsync rsync3.ppc

# build 10.4+ x86 binary
CC="gcc-4.0" LDFLAGS="-arch i386" CFLAGS="-arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -mmacosx-version-min=10.4" ./configure
make -j4
mv rsync rsync3.i386

# build 10.5+ x86_64 binary
CC="gcc-4.2" LDFLAGS="-arch x86_64" CFLAGS="-arch x86_64 -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5" ./configure
make -j4
mv rsync rsync3.x86_64

# combine platform specific binaries into a universal binary
lipo -create rsync3.ppc rsync3.i386 rsync3.x86_64 -output rsync3

# eof.

You can find binaries, patches etc. in the download section below.

download binaries

Unix Terminals: Surviving the Encoding Hell

Published April 15th, 2010, updated March 29th, 2013.

Every now and then, I see people using misconfigured text terminals. People show up in chatrooms and post gibberish or they leave broken umlauts in text, html and source files. This is mostly the case because they (or you) have a broken terminal configuration. In this post, I will try to explain how terminal encodings work and how you can fix things.

Generally speaking, things break if you are using a different terminal encoding than your peers. When you enter text like umlauts and other international characters, it gets encoded using your local terminal encoding (something like latin1, utf-8 or cp850). If a different encoding is used to display this data, you are likely to see gibberish and other strange effects in your terminal. Thus, we need to define what encoding we want to use for a specific file, a chatroom or on a system-wide level. A good guess nowadays is utf8, but us-ascii/ascii7 is also pretty common.

First, let’s find out our actual terminal encoding. Just enter some umlauts like “äöü” and show the binary representation in hexadecimal:

$ printf "äöü" | xxd
0000000: c3a4 c3b6 c3bc                           ......

In this example, we find “c3a4 c3b6 c3bc”, which indicates that the umlauts were encoded as utf8. Other possible results would be “e4 f6 fc” for win1252 or “84 94 81” for cp850. You can look up some more encodings here. (Of course, you can also check the manual for your terminal emulator.)
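
If you prefer checking this from Python rather than with xxd, a short sketch like this (Python 3, saved as a utf-8 encoded script) prints the byte values of the umlauts for a few common encodings:

# print the byte values of "äöü" in a few common encodings
for enc in ("utf-8", "cp1252", "cp850"):
    print(enc, "äöü".encode(enc).hex())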

Now that we know our actual terminal encoding, we need to tell the system libraries and other console software about it. This is done using locale(5), a standard that is honored by almost any program that is aware of character encodings and does not just pass raw binary data through. You can list all available locales by running “locale -a” and pick an appropriate one:

$ locale -a
C
de_DE.utf8
en_US.utf8
POSIX

This list contains entries in the format language_location.encoding; additional locales can be created using tools like locale-gen(1). I use “en_US.utf-8” since my terminal uses utf8 and I prefer English program output. This locale string should be set in the $LC_ALL environment variable (or LC_CTYPE if you want to ignore the language and location). Some terminals do this automatically, but we can also do it in our ~/.profile file, which is sourced whenever a new terminal is started. For compatibility with older software, we also set $LANG to the same value:

export LC_ALL=en_US.utf-8
export LANG="$LC_ALL"

You can check the result in a new terminal by typing “locale”; if you see “C” instead of your locale string, something went wrong and the locale fell back to the default settings. Check that your locale string is in the list. When everything looks OK, you should see the utf8 line in my umlauts test file (just type “cat umlauts.bin”).
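
You can also verify from within Python that the settings are picked up; a tiny sketch using the standard locale module:

import locale

# adopt the locale from the environment ($LC_ALL, $LANG, ...)
locale.setlocale(locale.LC_ALL, "")
print(locale.getlocale())   # e.g. ('en_US', 'UTF-8') if the settings took effect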

Now that we have checked the local terminal settings, we should do the same for the hosts we ssh into. Luckily, ssh can forward our locale settings: just append “SendEnv LANG LC_ALL” to ~/.ssh/config and check that your locale is also available on the remote host. Voilà, you have a properly working terminal with defined locales.

If you still see malformed characters, it is likely that you are using software that does not know about locales at all and just passes raw data. In theory, such software should fall back to us-ascii/ascii7 and strip or replace all other characters. If this fails, you can either use another program or you are stuck with a terminal set to the same binary encoding as your peers (or avoid umlauts if you are on IRC ;-).

More Fun with the Python Class Dispatcher

Published March 26th, 2010.

In a recent post, I demonstrated how to do prototype-style method injection in Python. Today, I’ll show how you can have even more fun with the class dispatcher by changing the class of an object at run-time. But first, let me illustrate a real-world problem where the proposed solution comes in handy…

Like many others, I’ve jumped on the distributed computing hype and spent a lot of time with NoSQL databases (I prefer Mongo). Due to the document-based storage model, the actual document type is stored inside a given document. For example, imagine you have something like {‘type’: ‘post’, ‘id’: 23, …} stored inside a collection, say it represents a blog post. When you load an object from the database, you cannot know its type until you have retrieved it. If you want to represent the retrieved data as an object, you have to add a loader that fetches the raw data and decides what type of object it should create. So, it is likely that you end up with an interface like this:

db = DB()
post = Post()
id = db.save(post)
same_post = db.get(id)
db.delete(id)

This is fairly OK, but you end up splitting the interface into a db object and a post object. The db object appears reasonable because it can load the raw data and create objects of different types like Post or Comment, depending on the type variable. Still, I think we can do better. Imagine an interface like this:

post = Post()
id = post.save()
same_post = Post(id)
same_post.delete()

It feels more intuitive and reflects the way you would describe the actual task. You could keep the database code in the same object (or a parent of it) and make things more explicit. However, if you cannot determine the object type before you fetch it from the database, you cannot decide what type of object to create. So, if you invoke the constructor of a Post object but find the actual type to be “comment”, how can you change the class now? Like this:

class Generic:
    def __init__(self, class_name=None):
        if not class_name:
            return

        # look up the requested class by name in the global scope
        classes = globals()
        if class_name not in classes:
            raise Exception("%s not found in global scope" % class_name)
        _class = classes[class_name]
        if not type(_class) == type(self.__class__):
            raise Exception("%s is not a class" % class_name)

        # switch this instance over to the requested class
        self.__class__ = _class

class Specialized(Generic):
    pass

c = Generic("Specialized")
print c     # prints <Specialized>

In this example, we run the constructor of class Generic and, depending on some contextual data (class_name here), we change the class of our object after instantiation. What we get is an object of class Specialized even though we invoked the constructor of Generic. This methodology can easily be applied to our blog example, making the interface much cleaner and more expressive.
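
Applied to the blog example, a minimal sketch could look like the code below. The fetch() helper and the types registry are hypothetical stand-ins for whatever returns the raw document from the database:

def fetch(id):
    # hypothetical stand-in for the real database lookup
    return {'type': 'post', 'id': id}

class Document:
    types = {}                  # registry: stored type name -> class

    def __init__(self, id=None):
        if id is None:
            return
        raw = fetch(id)
        # switch to the class that matches the stored document type
        self.__class__ = Document.types[raw['type']]
        self.__dict__.update(raw)

class Post(Document):
    pass

Document.types['post'] = Post

same_post = Document(23)    # comes back as a Post instance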

Samsung LED TV: The Good, The Bad, The Ugly

Published March 24th, 2010, updated March 25th, 2010.

Recently, we’ve bought a shiny new Samsung LED TV. It’s a Series 6 model with a large screen and an integrated DVB-C decoder. The TV set is pretty fine, it runs a Linux-based firmware and has an integrated media player.

After reading the tech specs, I found out about the differences between the Series 6 and Series 7 models and started worrying. The hardware is almost the same: Series 7 models have a CI+ interface and an additional USB port, but that is not important for me. Both TV sets run the same firmware, but on Series 6 models the integrated media player does not play movies. Since the hardware is nearly the same, this limitation has no technical reason.

So, I investigated further… A friend suggested that we could patch the firmware and enable movie playback there. I contacted Samsung and requested a copy of the GPL-licensed source code, but their customer support never responded to my request. After this, I started to tinker with the firmware binary files, only to find that they are encrypted and digitally signed (using OpenSSL, lol).

This means that even if you get Samsung to hand over the source code, they won’t let you use it in the sense of correcting bugs on your own television. Bad karma; this is clearly not the will of the original software authors.

In spite of everything, there is still a simple solution – at least for the media player issue. The firmware contains a hidden service menu that can be entered by pressing INFO-MENU-MUTE-POWER while the TV is in standby. From there, I was able to change the model number to a Series 7 model and unlock the fully-featured media player (see here).

Improved Python Traceback Module

Published January 27th, 2010, updated March 17th, 2013.

Like any modern language, Python comes with a nice traceback module. This module gives you stack traces from the line of code where an exception is raised up to the next try-except clause. So, you can easily catch exceptions and write stack traces into a debug log. This debugging technique is pretty handy for tracking down bugs and I use it a lot in prototyping.

Using the traceback module is straightforward for evident programming mistakes. However, real bugs are context-sensitive and can hardly be reproduced without the actual data that was being processed when the exception was raised. If you can reproduce a specific bug, you can add some logging code up front and inspect the variables the next time the bug is triggered. But if a bug occurs once in a blue moon, you’d be better off logging the data the first time the exception is raised.

# -*- coding: utf-8 -*-
import tracebackturbo as traceback

def erroneous_function():
    ham = u"unicode string with umlauts äöü."
    eggs = "binary string with umlauts äöü."
    i = 23
    if i>5:
        raise Exception("it's true!")

try:
    erroneous_function()
except:
    print traceback.format_exc(with_vars=True)

Here’s my solution: an improved Python traceback module that logs the variables from the local scope next to the affected code. You can find a working copy in our Mercurial repository (see below).

Traceback (most recent call last):
  File "test.py", line 11, in <module>
    Local variables:
      __builtins__ = 
      __doc__ = None
      __file__ = "x"
      __name__ = "__main__"
      __package__ = None
      erroneous_function = 
      traceback = <module 'tracebackturbo' from '/private/tmp/python-...
    erroneous_function()
  File "test.py", line 8, in erroneous_function
    Local variables:
      eggs = "binary string with umlauts \xc3\xa4\xc3\xb6\xc3\xbc."
      ham = u"unicode string with umlauts ???."
      i = 23
    raise Exception("it's true!")
Exception: it's true!
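
Under the hood, the idea is simply to walk the traceback and print each frame’s local variables below the frame location. A minimal sketch of that idea (not the actual tracebackturbo code) could look like this:

import sys

def format_exc_with_vars():
    # minimal sketch: walk the current exception's traceback and list
    # each frame's local variables below the frame location
    exc_type, exc_value, tb = sys.exc_info()
    lines = ["Traceback (most recent call last):"]
    while tb is not None:
        frame = tb.tb_frame
        code = frame.f_code
        lines.append('  File "%s", line %d, in %s'
                     % (code.co_filename, tb.tb_lineno, code.co_name))
        for name, value in sorted(frame.f_locals.items()):
            lines.append("      %s = %r" % (name, value))
        tb = tb.tb_next
    lines.append("%s: %s" % (exc_type.__name__, exc_value))
    return "\n".join(lines)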

I am not sure if it is the “right” solution, as sensitive information might be logged. This might have security implications for some real-world scenarios where webapps report stack traces to the end user (e.g. by using cgitb in production).

Credit: this code was inspired by format_exc_plus by Bryn Keller.

2010-01-28: there’s an active discussion on python-dev.
2011-06-25: I’ve renamed the module, enabled print_vars by default and merged with upstream

github repository

Adding a Custom LDAP Schema to Open Directory on 10.5+

Published January 15th, 2010.

Open Directory is a key component of Mac OS X Server. It consists of OpenLDAP, MIT Kerberos, Password Server and a tool chain that enables GUI administration. Sadly, adding new LDAP schemas to the directory server is not documented in the advanced administration guides and you have to tinker with the command line tools. I could not find any good documentation on how to add a custom LDAP schema, so I’ll show my solution here.

Mac OS X Server 10.5 ships with OpenLDAP 2.3. This release supports run-time configuration, which means that the LDAP schemas are stored within the directory server and you cannot simply put a new schema file into /etc/openldap/schema/; you have to convert it to an LDIF file and load that into the directory itself. This can be done at run-time, but doing so breaks replication. So, instead, you have to maintain a proper old-style config and run a manual conversion to the new run-time config.

To do so, you need to place the new schema file in /etc/openldap/schema/some-new.schema. This directory is copied to new replicas when you join them, so you won’t break the Apple tool chain. Then, you need to include the new schema file from /etc/openldap/slapd.conf; this has no direct effect, but slaptest(1) uses it to re-create the run-time config. Finally, convert the old-style config to a new run-time config using slaptest(1), like “slaptest -f slapd.conf -F slapd.d”, and restart slapd:

cd /etc/openldap
# make the new schema available to future replicas
cp some-new.schema schema/
# include it from the old-style config; slaptest(1) reads this
cat >> slapd.conf <<HERE
include /etc/openldap/schema/some-new.schema
HERE
# move the old run-time config aside and regenerate it from slapd.conf
mv slapd.d slapd.d_bak
slaptest -f slapd.conf -F slapd.d
# restart slapd
launchctl unload /System/Library/LaunchDaemons/org.openldap.slapd.plist
launchctl load /System/Library/LaunchDaemons/org.openldap.slapd.plist

Beware: we are deleting the old run-time config here and creating a new one from the static config. If you have changed the run-time config without adapting the old-style config, you might lose modifications. So, check twice that all required schemas are included from slapd.conf. AFAIK, Kerio Mailserver is troublesome here as it does not add its include lines to slapd.conf. Still, this procedure is exactly what the Apple tool chain does on replication and I suggest you do it exactly this way. Good luck!

Eight Questions on Twitter

Published January 7th, 2010.

Why…

  1. is it too slow for real-time communication (which has been available on IRC since 1988)?
  2. can’t #hashtags contain unicode?
  3. is there a length limit for my nickname?
  4. can’t I sign on from multiple computers simultaneously?
  5. doesn’t their web page auto-refresh?
  6. is it always over capacity?
  7. didn’t they register appropriate country TLDs like .de?
  8. have they shut down their Jabber interface?

    Dragons Everywhere!

    Published December 27th, 2009, updated January 8th, 2010.

    It’s late December and, like every year, hackers from across Europe come together for the Chaos Communication Congress. The congress “is the annual four-day conference organized by the Chaos Computer Club (CCC). It takes place at the bcc Berliner Congress Center in Berlin, Germany. The Congress offers lectures and workshops on a multitude of topics and attracts a diverse audience of thousands of hackers, scientists, artists, and utopians from all around the world.”

    What’s new this year: there are some off-site hackcenters where people join the congress without being there in-real-life. “those unable to attend the Congress in Berlin [are invited] to celebrate their own Hack Center Experience, watch the streams, participate via twitter or chats, drink Tschunk, cook and have a good time.” This is exactly what some of us are going to do: we will meet at the UUGRN hackerspace on Monday and join the congress events.

    Recent Topics of #sickos

    Published October 30th, 2009, updated November 1st, 2010.

    Here’s a compilation of recent topics from #sickos on SickosNet.

    2010
    Nov 01 2010 10: goto 10
    Oct 22 2010 SICKOS.ORG | INTERNATIONAL CAPS LOCK DAY
    Oct 20 2010 Fortunately for computer science the supply of curly braces and angle brackets remains high.
    Oct 07 2010 OpenOffice is the new XFree86.
    Sep 19 2010 Ahoy, ye landlubbers!
    Sep 16 2010 all it needs is a bored teenager
    Aug 25 2010 "Watching Python 2.x get end-of-lifed is a bit like watching the Space Shuttle program wind down."
    Aug 17 2010 "I can type reliably up to about 2.5 mph" --Stephen Wolfram
    Aug 10 2010 we are here
    Jul 26 2010 "you can pretty much take a shit and it'll be syntactically correct Perl."
    Jul 16 2010 "Zed is literally the Kimbo Slice of the technical community."
    Jul 07 2010 "I've learned an important lesson: if they say they've solved their problem, never ask how." --xkcd
    Jun 28 2010 reloaded
    Jun 14 2010 You're Doing It Wrong
    May 18 2010 I'ms ins yours skynets, lollings aways ats yours futiles attempts ats contrllings ours internets.
    Apr 06 2010 sickos6
    Mar 25 2010 Douglas Crockford doesn't use try-catch. When Douglas codes there are no exceptions.
    Feb 18 2010 Some argue this makes sense. Some ppl also like to sniff glue.
    Feb 11 2010 dinosaurs were made up by the cia to discourage time travel
    Feb 07 2010 void sickos() {
    Feb 01 2010 <void>
    Jan 23 2010 "The fuckupability of plugs goes up with the square of their size. This is science." --jwz
    Jan 19 2010 zombie gourmet guide: php developer brains. small and chewy, but deliciously rotten. yummy!
    Jan 18 2010 rasmus lerdorf was killed by zombies while rescuing php from jason's dungeon!
    Jan 18 2010 php's dead, its's locked in my basement! *haha*
    Jan 18 2010 php is dead!
    Jan 12 2010 php must not die!
    Jan 11 2010 php must die!
    Jan 05 2010 ⚑
    
    2009
    Dec 31 2009 1262300400 looks nothing special to me
    Dec 31 2009  ...
    Dec 24 2009 "Back off, man. I'm a scientist." --Dr. Peter Venkmen
    Dec 15 2009 < !>    < !>             < !>    < !>       < !>                < !>                     < !>     < !>       < !>
    Nov 20 2009 The unicorns don't look realistic enough.
    Nov 01 2009 "Denying Physics Won't Save the Video Stars" --Cory Doctorow
    Oct 30 2009 The whole nine yards.
    Oct 27 2009 go go go!
    Oct 17 2009 Everything is OK
    Oct 05 2009 The Truth is out there.
    Sep 28 2009 arrr!
    Sep 18 2009 welcome to the encoding hell
    Sep 07 2009 knights of the infinite loop
    Aug 24 2009 world domination. fast.
    Aug 18 2009 computing sucks.
    Aug 13 2009 hacking at random
    Jul 31 2009 Happy SysAdminDay 2009!!1
    Jul 08 2009 ...
    Jun 29 2009 "The tenth sale is a thousand times easier than the second one (the first one doesn't count... beginner's luck)." --Seth Godin
    Apr 30 2009 SHA-1: Practical collisions are within resources of a well funded organisation.
    Apr 23 2009 "It's a trap!" --Admiral Ackbar
    Apr 10 2009 The road to hell is paved with good intentions.
    Mar 24 2009 "Damn it!" --Jack Bauer, CTU
    Mar 09 2009 "leadership is nature's way of removing morons from the productive flow" --Scott Adams in Dilbert
    Mar 02 2009 Some people, when confronted with a problem, think "I know, I'll quote Jamie Zawinski." Now they have two problems.
    Feb 26 2009 "The key to performance is elegance, not battalions of special cases." --Jon Bentley and Doug McIlroy
    Feb 23 2009 we like monkeys
    Feb 20 2009 cloud computing becomes fog when it goes down.
    Feb 14 2009 happy 1234567890
    Jan 26 2009 time.ctime(1234567890)
    Jan 15 2009 to /b/ or not to /b/...
    Jan 09 2009 beware of 'import skynet'.
    Jan 09 2009 @ack
    
    2008
    Dec 31 2008 happy new 1984
    Dec 27 2008 nothing to hide
    Dec 14 2008 Hail Eris. All hail Discordia.
    Dec 10 2008 rip silcnet. all hail sickosnet. > silc -c sickos.org
    Nov 19 2008 while True: pass
    Nov 06 2008 long live teh king !
    Nov 05 2008 remember remember teh 5th of november
    Sep 26 2008 for(;P("\n"),R=;P("|"))for(e=C;e=P("_"+(*u++/8)%2))P("|"+(*u/4)%2);
    Sep 19 2008 talk like a pirate day
    Sep 19 2008 Ahoy there Landlubbers!
    Sep 17 2008 0^0 := 1
    Jul 11 2008 we're the scene.
    Jul 09 2008 that's like knitting a sweater for a dead squirrel
    Jul 08 2008 computers suck and i hate them
    Jun 23 2008 osascript -e 'tell app "ARDAgent" to do shell script "whoami"'
    Jun 17 2008 Linux was fun
    May 24 2008 don't forget to bring a towel
    May 21 2008 Lost in Hyperspace
    May 03 2008 all your base are belong to us
    Apr 21 2008 look, it's making friends with the roomba!
    Apr 16 2008 The only laws on Internet are assembly and RFCs
    Apr 11 2008 inventors of the infinite loop
    Mar 19 2008 "the project suffers from lack of directions and frequent infighting between its developers" --Distrowatch.com on Gentoo Linux
    Mar 04 2008 "What you see on these screens up here is a fantasy; a computer enhanced hallucination!" --Wargames
    Feb 19 2008 a man collecting shoes... it's just not right.
    Jan 22 2008 | |  |   |     |        |             |                     |                                  |
    
    2007
    Dec 30 2007 pirates are better than ninjas
    Dec 27 2007 virtual congress
    Dec 21 2007 yankee white
    Dec 21 2007 rule 35 of the internet: if it doesn't exist on the internet, it must be created.
    Dec 18 2007 "[PHP] takes the worse-is-better approach to dazzling new depths" --Larry Wall
    Dec 16 2007 "Strong typing is for people with weak memories." --Tom Van Vleck
    Nov 30 2007 beiss mich, kratz mich, gib mir hostnamen!
    Nov 26 2007 tuttle-buttle
    Nov 23 2007 there's a buffer overflow but i won't disclouse
    Nov 22 2007 inventors of the infinite loop
    Sep 13 2007 'The Internet makes it so easy to get solutions to most of the problems that it has taken the fun out of it.' --Miguel de Icaza
    Sep 03 2007 'Sickos can wreak death and destruction from thousands of miles away!' --Arnold Yabenson, Weekly World News
    Jul 23 2007 "History is a set of lies agreed upon" --Napoleon Bonaparte
    Jul 19 2007 "Unix is a glorified video game" --Ed Post
    Jul 16 2007 "Programs should not attempt special solutions to general problems." --Pike & Kernighan
    Jul 09 2007 Hackerbande
    Jun 27 2007 Note: There are two BFGs in Hell.
    May 18 2007 0xdeadbeef
    May 02 2007 09:F9:11:02:9D:74:E3:5B:D8:41:56:C5:63:56:88:C0
    Apr 08 2007 "they seem to think that just because no one has ripped them apart means that no one can" --jf on apple security
    Mar 26 2007 "I think the fundamental mistake was this adoption of a democratic process" --Ian Murdock on Debian
    Mar 14 2007 foo
    Jan 08 2007 DEFCON 3
    Jan 07 2007 alert. blog.fefe.de is down. switching from DEFCON 3 to DEFCON 2.
    
    2006
    Dec 23 2006 Stell dir vor, es ist Congress, und keiner geht hin.
    Dec 14 2006 "Screensavers are sort of a poor man's LSD, without the bad trips." --Larry Wall
    Dec 06 2006 Ministry of Truth
    Dec 06 2006 Into the Box...
    Nov 15 2006 if the price matters, you're not a real gamer
    Nov 06 2006 The Revolution Will Not Be Televised
    Oct 10 2006 tut der router nicht mehr routen musst du booten
    Jul 28 2006 Happy System Administrator Appreciation Day!
    Jun 01 2006 We have done the impossible and that makes us mighty.
    May 31 2006 brilliant but empty
    May 17 2006 i wanna say something meaningful
    Apr 30 2006 wii
    Apr 20 2006 Bridge ahead.  Pay troll.
    Apr 09 2006 Beware! The Blob!
    Feb 13 2006 "Just because you're paranoid doesn't mean they aren't after you" --Kurt Cobain

    Today 16oo: Sickos Hack Nacht

    Published October 24th, 2009.

    We are going to have a Hack Nacht tonight. Sickos and other nerds are invited to exchange ideas, write code and get to know each other. I have some ideas about what to do and I’m looking forward to seeing you hackers tonight. I’ll update this article when the event is over, stay tuned (or join us on SickosNet).

    jQuery.postJSON()

    Published October 18th, 2009.
    /*
     * Hello Brandley, tonight I've tried to figure out how to do proper
     * JSON POSTs using "Content-type: application/json" and serialized JSON
     * data in the content portion (body) of my HTTP requests. This is
     * specified in the Twitter API for status updates and Identica JSON
     * webservices (found via Mark Pilgrim).
     * After some tests, I had to recognize that jQuery does not support JSON
     * encoding in the core distribution and, aside from $.getJSON(), there
     * is no $.postJSON().
     * Below is a proposed update. As it relies on your json plugin, I'd ask
     * you to add it to your code base so that other jQuery users can benefit
     * from it.
     */
    
    $.postJSON = function(url, data, callback) {
        return jQuery.ajax({
            'type': 'POST',
            'url': url,
            'contentType': 'application/json',
            'data': $.toJSON(data),
            'dataType': 'json',
            'success': callback
        });
    };
    

    visit project

    Mercurial Repositories Available

    Published October 17th, 2009, updated April 7th, 2010.

    Dear Googlebot, I want to tell you that you can find my latest source code over at hg.sickos.org. That’s where my friend dlat has installed a Mercurial source code management server. I’ve already migrated some projects there and would like you to index these pages. Please note that some projects have individual pages in our Wiki, just follow this category page. See you again!

    Sniffing HTTP Traffic at HAR2009

    Published August 14th, 2009, updated April 7th, 2010.

    I’m currently visiting har2009, an international IT security conference in the Netherlands. It’s an amazing event with so many nice people, fresh lectures and a wonderful environment. There is a large wired and wireless network and everybody on the campsite is wearing a laptop, a pda or some other device that can connect to the Internet. And because there are so many security people around, I think it would be funny to demonstrate some insecurity here…

    First, there is the Web Proxy Autodiscovery Protocol (WPAD), which is used by your web browser when you use “proxy autoconfiguration” – the default setting on many systems. Second, there is a DHCP server for the campsite that registers hostnames in the DNS server. I asked myself: what would happen if I could register the name wpad.visitors.har2009.net?

    Well, I have done so. And I have set up an appropriate proxy that intercepts all traffic that passes through this machine. After 24 hours, there were more than 800 different hosts using this malicious proxy server – and many of them signed in to unencrypted web services like Twitter and others. That’s quite impressive, as this is about 20 percent of the visitors! Now I’m wondering what happens if I start breaking SSL…

    Python-MagickWand or How to Work With Icons

    Published June 18th, 2009, updated March 17th, 2013.

    I’m currently working on a pet project where I want to convert favicons (in the Windows .ico format) to the web-standard .png format. I started out using the Python Imaging Library (PIL), which supports plenty of image formats and is Python’s standard way of doing image manipulation. A basic any_to_png function using PIL looks like this:

    img = Image.open(StringIO(buf))
    img = img.resize((16,16),Image.ANTIALIAS)
    out = StringIO()
    img.save(out, format='PNG', transparency=1)
    return out.getvalue()

    This is straightforward, but I had to find out that PIL’s .ico support is pretty outdated while Microsoft has updated the specs. Modern .ico files have switched from the .bmp to the .png format and added alpha masks. There is a patch available that brings you .png support, but alpha masks are still broken.

    So, I’ve searched for another image library that has proper support for icon files and stumbled upon my old friend ImageMagick. I’ve found that there are Python bindings for the MagickWand interface (the C API). Yet, those bindings are incomplete, ugly and not actively maintained. I’ve found alternate Python bindings for the MagickWand interface and those are pretty nice:

    img = Image(StringIO(buf), 'ico')
    if not img.select((16,16)):
        img.alpha(True)
        img.scale((16,16))
    return img.dump('png')

    You see, Ian Stevens’ CDLL bindings are a straightforward implementation of the MagickWand C API using the ctypes CDLL wrapper. I’ve added some missing functions and documentation and clarified the licensing issues (now available under a BSD license), and I think this is a clean and elegant solution to a long-standing problem. You can find a snapshot of Python MagickWand here and the latest source code there. Enjoy.

    visit github repository
    visit original project page

    2009-12-08: We’ve cloned python-magickwand and accept patches. You can find our latest work in our Mercurial repository. Today, we’ve committed a magickwand6.5 patch, please look at the hg changelog for details.

    2010-03-30: There’s an updated, refactored package around by Oliver Berger, see pypi. I’ve not yet looked into the changes (aside from a modified copyright statement), but I’ll merge it into our repository soon. If you fork the codebase, please drop a line; I’d like to have a common repository where we can focus our efforts.

    2010-05-20: I’ve checked Oliver’s code; aside from refactoring there are no functional changes. There’s also no update from Ian (who is the original author), so our version here is still the recommended release. Feel free to send patches and bug reports.

    2011-09-19: I’ve added some minor updates and moved the source code to Github, see https://github.com/cxcv/python-magickwand.

    Command Line XML Processing

    Published January 12th, 2009.

    XMLStarlet is a command line interface to the Gnome XML and XSLT libraries (libxml2, libxslt). It supports XPath queries and other common manipulations of XML data on the command line. I stumbled upon this tool some years ago and today I wanted to use it in practice. However, it looks like development has ceased and the last (yet incomplete) release is from 2005.

    So far, you can easily query XML data, but there is no way to append formatted XML elements (like parts of a tree) somewhere in the tree. Is anyone using XMLStarlet in real-world scenarios like shell scripts? Are there any better tools for DOM manipulations that are handy within shell scripts?

    btw: I’ve fixed the x86_64 compile errors on RHEL5 here.

    Directory Snapshots

    Published September 15th, 2008, updated March 17th, 2013.

    Creating backups of open files was a challenging endeavor in the past. The main problem is inconsistency: the original data can change while it is being read by the backup software. If this happens, it can result in missing references or references pointing at incorrect data. A typical example is any kind of database, where a lot of indices are stored alongside the raw data. If an update occurs while the backup software reads the index, the backed-up index no longer matches the data it points to.

    A simple strategy to deal with this type of problem is the creation of offline backups. They are fairly simple to implement and they ensure exclusive access for the backup software. However, this approach is inelegant as it requires downtime for every backup. To overcome this limitation, many applications have grown custom backup mechanisms that enable online backups. These mechanisms include simple dumps, log shipping, single- and multi-master replication and others. Although they enable online backups, they are proprietary and require application-specific know-how.

    A more generic approach is the use of file system snapshots. They create a static copy of the original data within milliseconds. This copy can be backed up while the system is online and the original data keeps changing. On Linux, this snapshot functionality is part of the Logical Volume Manager (LVM). It has been included in the standard kernel since version 2.4 and most modern Linux distributions activate it in their default installation.

    dsnapshot on top of lvm

    To create a file system snapshot, one basically needs the file system to be on a logical volume (LV) and some free space in the underlying volume group (VG). Then, one can tell the logical volume manager to snapshot the particular volume. The LVM holds all I/O and creates a new copy-on-write (COW) volume within milliseconds. This new snapshot volume can be mounted and backed up safely, as it does not change when data is written to the original volume.
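
    In shell terms, this boils down to lvcreate and mount for creating a snapshot, and umount and lvremove for releasing it. Here is a heavily simplified Python sketch of these steps; the volume group, volume and mount point names are made-up examples, and this is not the dsnapshot code itself:

    import subprocess

    def create_snapshot(vg="vg0", lv="srv", snap="srv-snap", size="1G",
                        mountpoint="/var/lib/dsnapshot/srv-snap"):
        # create a copy-on-write snapshot volume of /dev/vg0/srv (names are examples)
        subprocess.check_call(["lvcreate", "--snapshot", "--size", size,
                               "--name", snap, "/dev/%s/%s" % (vg, lv)])
        # mount it read-only so the backup software reads a frozen state
        subprocess.check_call(["mkdir", "-p", mountpoint])
        subprocess.check_call(["mount", "-o", "ro", "/dev/%s/%s" % (vg, snap), mountpoint])
        return mountpoint

    def remove_snapshot(vg="vg0", snap="srv-snap",
                        mountpoint="/var/lib/dsnapshot/srv-snap"):
        # unmount and drop the snapshot volume again
        subprocess.check_call(["umount", mountpoint])
        subprocess.check_call(["lvremove", "-f", "/dev/%s/%s" % (vg, snap)])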

    Of course, there are good reasons to use the backup methods recommended by a software vendor. But there are also many situations where a generic approach is preferable. Think of small databases, virtual machines, mail servers and other applications that store custom index files. I’ve seen a lot of these situations in the wild and many times I wished for an easy snapshotting facility. This led to various snapshotting scripts, consolidation, adaptions and so on. Eventually, I created dsnapshot, which I want to introduce here.

    The dsnapshot script provides a high-level interface to the Linux Logical Volume Manager. It uses LVM’s block-level snapshot support to create directory snapshots. In contrast to block-level snapshots, directory snapshots work at the file system layer: you can snapshot any directory that is on a logical volume and you don’t have to worry about the actual logical volumes, mount points and paths.

    This is the actual syntax for creating…

    $ dsnapshot --create /srv/mysql/test/
    /var/lib/dsnapshot/srv-fdf2e6dc/mysql/test/

    … and removing a directory snapshot.

    $ dsnapshot --remove /var/lib/dsnapshot/srv-fdf2e6dc/mysql/test/

    I’ve found this script very handy when you need to back up single directories instead of whole volumes.

    github repo

    301

    Published July 9th, 2008, updated September 30th, 2008.

    301 is a URI redirector. It allows you to create short links for complex web addresses. Just submit a longish URI at 301.sickos.org and you will get a short link that points to the original address. You can pass this around on Twitter, on IRC or wherever you want to avoid complex web addresses.

    301

    This service was inspired by tinyurl.com and monkey.org/sl. In contrast to those services, 301 comes with full Python source code. This gives you the freedom to run your own 301 service and adapt it to your own needs. Get the source code at benjamin-schweizer.de/files/301/.

    hint: use pedit to manage the link database

    use service
    download source code

    Htpasswd Editor

    Published June 13th, 2008, updated April 28th, 2010.

    User authentication on unix systems typically relies upon password files or directory services. Both contain logon names, user ids, passwords, the location of your home directory and other information. The choice of the right authentication backend typically depends on the number of users you have to manage and on your system environment.

    If you have decided to use simple password files, you can create different files for various services. This gives you the opportunity to separate system users from service users. Further, this enables you to delegate administrative rights to certain people.

    However, user management still requires you to twiddle with command line tools. This is fine if you are a unix lover, but if you want somebody with little command line experience to manage your users, you probably prefer a user interface that guides the inexperienced and reduces the risk of crashing the system.

    Htpasswd Editor

    This is exactly what htpasswd_editor does. It provides a text user interface for htpasswd(1) files and can easily be integrated with popular software like the Apache Web Server, VSFTP Daemon and other PAM-enabled programs (using pam_pwdfile).

    2008-09-17: there’s a new bugfix release available
    2010-04-28: there’s another bugfix for Debian #340366

    download source code

    Fun with the Python Class Dispatcher

    Published May 16th, 2008, updated March 26th, 2010.

    In object oriented programming, the class dispatcher is a built-in mechanism that is invoked whenever you access an object’s method. When this happens, the class dispatcher looks up the given method in the class of that object and its ancestors and executes it in the context of the given object. From there, it’s common practice to write code like this:

    # class-based programming style
    class Foo:
        pass
    
    class Bar(Foo):
        def bar(self):
            print "bar"
    
    bar = Bar()
    
    bar.bar() # prints "bar"

    In this example, we create a base class Foo. We subclass Foo as Bar and add the method bar() to adapt the class Bar to our needs. This methodology is straightforward if you have complete control over the source code. However, if you try to make minor changes somewhere upwards in the inheritance tree, you would have to copy lots of code and take care of all inheritors. Obviously, this is bad programming style and impractical in many cases.

    However, Python’s class dispatcher uses dynamic delegation and you can do something called method injection. This is a programming style that is typical for prototype-based (classless) programming, which is the default in JavaScript, for example. Here, you avoid sub-classing in favor of method injection. See below:

    # prototype-based programming style
    class Foo:
        pass
    
    foo = Foo()
    
    def bar(self):
        print "bar"
    
    foo.__class__.bar = bar
    
    foo.bar() # prints "bar"

    In this example, we inject the method bar into the class of object foo (which is Foo). This enables us to modify Foo and all of its inheritors after sub-classing and instantiation have taken place.

    This methodology comes in pretty handy if you write plug-ins for software that does not export proper plugin interfaces or if you cannot change the code of some inheritors. Basically, this trick works because the class dispatcher uses dynamic delegation: objects and classes are inspected at call time, so the dispatcher finds attributes even if they are added after object instantiation.

    Yahoo! Pipes

    Published May 11th, 2008, updated May 12th, 2008.

    Today, I’ve found some time to play around with Yahoo! Pipes. Basically, it is a visual programming environment that aims at dealing with XML feeds and other sources from the semantic web. You can use a visual editor (see screenshot) to add various sources and operators. You can filter, rearrange and combine various feeds and finally create a new one.

    Yahoo! Pipes Visual Editor

    Because I use various Web 2.0 services like bookmarks.sickos.org (a del.icio.us clone), Google Reader, Flickr and others, I have blog items in many different locations. I could either write custom code to integrate these items with my blog or use some standardized technology. I’ve decided on the latter, and this is where Yahoo! Pipes comes into the game. I’ve created a new pipe there and integrated the combined RSS feed into my WordPress blog.