
Category Archives: Linux

Apparently, there has been a severe security breach at Fedora. They had to rebuild their repositories and change their signing keys, and it might just be they have only rebuilt repositories for Fedora 8 and 9. Which might just explain why I have been unable to use yum to install software on a Fedora Core 5 box for several weeks now! And, yes, people, I know FC5 is no longer officially supported, but the mirrors were there and I was still using them not long ago. So, attention Fedora users! If you are using a Fedora release below 8, you should probably consider re-installing a recent release or risk staying stuck with a system with no software updates and no packages.


Tsung has a “proxy mode” which records SQL statements and produces an appropriate Tsung scenario file. What could be simpler? I shall just point my web application to speak to the Tsung proxy instead of the database and I will use it to generate “typical usage” cases.

Unfortunately, this is not an option if, say, your application uses a web framework which maintains several open connections to the database server. The Tsung proxy can only handle one connection at a time, so your application does not function properly and you cannot use it to generate the “typical usage” scenarios.

Then there is pgFouine, a promising PostgreSQL log analyzer which can produce Tsung-compatible output on demand. But pgFouine principally analyzes log files to group and rank statements according to how well they perform in the database, and this approach has spilled over into Tsung scenario file generation: the order of the SQL statements is not preserved! This, by itself, would perhaps not be a problem, but I often record multiple use-cases in one go and pgFouine mixes them up.

The best way to create our test cases, therefore, is to use the log files of an otherwise idle PostgreSQL server, after enabling the logging of all SQL statements on the server. I have written a few scripts which help with the process, but only after first changing the logging format of our PostgreSQL server to what pgFouine requires (syslog). Thus, the PostgreSQL server needs to log in this particular style:

Sep  1 16:21:19 pgtest postgres[4359]: [136-1] LOG:  statement: SELECT rolname FROM pg_roles ORDER BY 1
Sep  1 16:21:19 pgtest postgres[4359]: [137-1] LOG:  duration: 0.178 ms
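
To make the format concrete, here is a quick sketch of pulling the SQL text out of one such line with sed (sample hostname, PID and statement throughout):

```shell
# A single captured line in the syslog format shown above (sample values)
line='Sep  1 16:21:19 pgtest postgres[4359]: [136-1] LOG:  statement: SELECT rolname FROM pg_roles ORDER BY 1'

# Everything after the 'LOG:  statement: ' prefix is the SQL text
# that the scripts further down recover
sql=$(printf '%s\n' "$line" | sed 's/^.*LOG:  statement: //')
echo "$sql"
```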

To make sure this is the case, you probably need to edit your postgresql.conf file and set the following values:

log_destination = 'syslog'
redirect_stderr = off
silent_mode = on
log_min_duration_statement = 0
log_duration = off
log_statement = 'none'
log_line_prefix = 'user=%u,db=%d,host=%h '
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

Then, you need to edit /etc/syslog.conf to send the PostgreSQL facility to its own log file (the leading ‘-’ tells syslogd not to sync after every write) and exclude it from the default log file (by appending ‘;local0.none’ to the rule for /var/log/messages):

local0.*   -/home/postgres/logs/postgresql.log

For the changes to take effect, you need to restart the syslog service (/etc/init.d/syslog restart) and PostgreSQL.

You are now ready to start capturing SQL statements in the PostgreSQL log file. To make sure you will be able to filter the log file into separate use-cases, you should choose a unique string identifier (e.g. ‘complex search 001’) to throw at the database server at the beginning and end of a particular use-case. You may do this by connecting to the server via ssh and typing:

echo "SELECT 'complex search 001';" | psql -U postgres

… before using your web application (which must be configured to talk to this particular Postgresql server). At the end of this use-case (‘complex search 001’) all you need to do is repeat the line above.
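
The markers are then trivial to spot in the captured log, which is what makes the filtering step below possible. A sketch with hypothetical timestamps, PIDs and statements:

```shell
# Hypothetical captured fragment: marker statements bracket the use-case
cat > sample.log <<'EOF'
Sep  1 16:21:10 pgtest postgres[4359]: [100-1] LOG:  statement: SELECT 'complex search 001'
Sep  1 16:21:12 pgtest postgres[4359]: [101-1] LOG:  statement: SELECT * FROM products
Sep  1 16:21:19 pgtest postgres[4359]: [102-1] LOG:  statement: SELECT 'complex search 001'
EOF

# The marker appears exactly twice: once before and once after the use-case
grep -c "complex search 001" sample.log
```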

When you have finished recording all batches (use-cases) of SQL statements, you need to locate the PostgreSQL log file (e.g. /var/log/postgresql/postgresql.log) and use it as input for syslog-filter, a simple Perl script I have created, which you may run from the command line, like so:

./syslog-filter postgresql.log  'complex search 001' > complex-search-001.log

… assuming the script is executable and is located in the same directory as the postgresql.log file. This command creates complex-search-001.log, which contains only those SQL statements that belong to this use-case.

Here is the code for syslog-filter:

#!/usr/bin/perl -w
if(scalar(@ARGV) < 2) {
   print "Usage: ./syslog-filter <file> <token>\ne.g. ./syslog-filter scenario.log 'Quoted companies'\n";
   exit(1);
}
open(MYFILE, '<'.$ARGV[0]) or die "Can't open ".$ARGV[0]." for reading...\n";
my $switch = 0; my $line = "";
while($line = <MYFILE>) {
    if($line =~ /$ARGV[1]/) { &toggle_switch(); }
    print $line if $switch;
}
close(MYFILE);

sub toggle_switch { if($switch) { $switch=0; } else { $switch=1; } }
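
For what it’s worth, the same toggle logic can be expressed as an awk one-liner: seeing the token flips a switch, and lines print while the switch is on. A sketch with a made-up input file:

```shell
# Made-up log fragment: the token appears on the first and third lines
cat > scenario.log <<'EOF'
LOG:  statement: SELECT 'complex search 001'
LOG:  statement: SELECT * FROM products
LOG:  statement: SELECT 'complex search 001'
LOG:  statement: SELECT * FROM unrelated
EOF

# /token/ toggles the switch s; a bare 's' prints the line while s is true
awk '/complex search 001/{s = !s} s' scenario.log
```

As in the Perl version, the opening marker line is printed but the closing one is not, because the toggle happens before the print.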

For the next step, you may want to use the following script, syslog-to-tsung-xml:

#!/usr/bin/perl -w
use Parse::Syslog;
if(scalar(@ARGV) < 1) {
   print "Usage: ./syslog-to-tsung-xml <logfile>\ne.g. ./syslog-to-tsung-xml my-scenario.log\n";
   exit(1);
}
my $parser = Parse::Syslog->new( $ARGV[0] ); $s = 0; # $s is just a switch whether we should record/not
READINGLOOP: while(my $sl = $parser->next) {
   $line = $sl->{text}; # i don't want to write $sl->{text} all the time 🙂
   if ($line =~ /LOG:  execute/ or $line =~ /LOG:  statement/) { # we should start recording...
      # ...but if the recording switch is already on, we need to save the recorded statement
      # into @selects: in other words, a 'LOG:  execute' also means the previous recording should end
      if($s and $st ne "") { push @selects, $st; $s = 0; $st = ""; $g = undef; }
      # if this is a SELECT statement it is put in $st, $s is set to 1
      # and $g keeps the id used to filter subsequent lines
      if($line =~ /\[(.+)-.+(SELECT .+)$/) { $s = 1; $g = $1; $st = $2; }
      next READINGLOOP; # ok, let's proceed with the next line - don't execute the rest...
   }
   # recording subsequent lines of the same statement - concatenate
   if ($s and $line =~ /\[(.+)-.+\] (.+)$/ and $g == $1) { $st .= $2; }
}
# just to be sure, we save whatever is inside $st once we reach the end of the file
if($st ne "") { push @selects, $st; $s = 0; $st = ""; $g = undef; }
# now, we should scan the results for 'DETAIL:  parameters:' and perform all the described substitutions
my $array; my $hash; my $key; my $val; my $var; my $target; my $subs;
for($i=0;$i<scalar(@selects);$i++) {
   if ($selects[$i] =~ /^(.+)DETAIL:  parameters: (.+)$/) {
      # reading parameters, splitting them into key,val pairs for subsequent search and replace
      $array = (); $hash = {}; $subs = "";
      $target = $1;
      @$array = split ',' , $2;
      foreach $var (@$array) {
         ($key,$val) = split '=', $var;
         $key =~ s/^ *(.+) +$/$1/;
         $val =~ s/^ *'(.+)' *$/$1/;
         $hash->{$key} = $val;
         $subs = "\\".$key;
         $target =~ s/$subs\:\:/\'$val\'::/g;
      }
      $selects[$i] = $target;
   }
}
# and on to outputting our results...
if($ARGV[1]) { # pure SQL output if there is a second argument in the command line
   for($i=0;$i<scalar(@selects);$i++) { print $selects[$i],";\n"; }
}
else { # Tsung-compatible output
   print <<"HEAD";
    <session name="$ARGV[0]" probability="100" type="ts_pgsql">
        <transaction name="connection">
                <pgsql type="connect" database="mydatabase" username="myusername" />
                <pgsql type="authenticate" password="mypassword"/>
        </transaction>
        <thinktime value="5"/>
            <transaction name="requests"> <!-- start of requests -->
HEAD
   for($i=0;$i<scalar(@selects);$i++) {
      print "\t\t\t\t<request><pgsql type=\"sql\"><![CDATA[";
      print $selects[$i],"\n";
      print "]]></pgsql></request>\n";
   }
   print <<"TAIL";
            </transaction> <!-- end of requests -->
            <thinktime value="5"/> <!-- delay between scenario re-plays -->
        <request><pgsql type="close"></pgsql></request>
    </session>
TAIL
}

This is how you would run the above script:

./syslog-to-tsung-xml complex-search-001.log > complex-search-001.xml

This generates a partial Tsung file in the proper format. This process needs to be repeated for every use-case we would like to include. The resulting xml files may be concatenated into a single file, like so:

cat *.xml > my-tsung-scenario.xml

The resulting file (my-tsung-scenario.xml) will be completed into a full, valid Tsung scenario file in section 2.4. In order to run the above scripts, you obviously need a working Perl environment and the Parse::Syslog Perl module, which may be installed by typing (as root):

cpan Parse::Syslog

Before proceeding any further, you may want to manually edit all occurrences of

<transaction name="requests">

…in my-tsung-scenario.xml, changing the name each time to reflect the use-case which follows. E.g.

<transaction name="complexSearch1">

Another required manual edit concerns the probability factors assigned to each use-case (session). You need to adjust the probability settings of all such occurrences:

 <session name="complex-search-001.log" probability="100" type="ts_pgsql">

… to reflect the desired frequency of each use-case in the tests. Changing 100 to 25 in the above line will force 1 in 4 users during the Tsung tests to replay the ‘complex-search-001’ scenario.
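
If you have many sessions, the probability edit can also be scripted. A sketch using sed on a sample one-line file (the session name and the 100 → 25 change are just examples):

```shell
# Sample file standing in for the real my-tsung-scenario.xml
cat > my-tsung-scenario.xml <<'EOF'
<session name="complex-search-001.log" probability="100" type="ts_pgsql">
EOF

# Rewrite the probability attribute in place (GNU sed)
sed -i 's/probability="100"/probability="25"/' my-tsung-scenario.xml
grep -c 'probability="25"' my-tsung-scenario.xml
```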

To turn the series of sessions described in the file my-tsung-scenario.xml into a full, valid scenario we need to type:

echo '<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/local/share/tsung/tsung-1.0.dtd" []>
<tsung loglevel="notice">
<!-- <tsung loglevel="debug" dumptraffic="true"> --> <!-- useful sometimes -->
   <clients>
      <client host="myclient" weight="1" cpu="2"></client>
   </clients>

   <servers>
      <server host="myserver" port="5432" type="tcp"/>
   </servers>

   <monitoring>
      <monitor host="myserver" type="erlang"></monitor> <!-- postgresql server -->
      <monitor host="myclient" type="erlang"></monitor>
   </monitoring>

   <load>
      <arrivalphase phase="1" duration="1800" unit="second">
         <users interarrival="4" unit="second"></users>
      </arrivalphase>
      <arrivalphase phase="2" duration="1800" unit="second">
         <users interarrival="2" unit="second"></users>
      </arrivalphase>
   </load>

   <sessions>
' >  head-tsung-scenario.xml

… to get a head-tsung-scenario.xml file which we can then edit according to our needs. If we keep the existing settings, Tsung will attempt to load-test a server called myserver (the names need to be resolvable; please check your DNS service and/or your /etc/hosts file) from a single client, myclient, while trying to monitor hardware load on both machines. In the load section, two load phases have been defined, starting at “a new user every 4 seconds” and then doubling the rate. Each of these phases is meant to last half an hour (1800s), but once the server reaches its breaking point, user sessions do not terminate properly and the duration of the current load phase is extended, as Tsung waits for all users to finish before proceeding to the next one. Once you have changed head-tsung-scenario.xml according to your needs, you may complete the generation of a new scenario file by typing:

 cat head-tsung-scenario.xml my-tsung-scenario.xml > full-tsung-scenario.xml; echo '   </sessions>
</tsung>
' >> full-tsung-scenario.xml

This file (full-tsung-scenario.xml) is actually a full, valid scenario file which may be used for testing. But you probably want to tweak one or two things to make the testing relevant to your system, which is what we shall discuss in the next installment of this tutorial.

If you suddenly needed a cronnable Postgresql database update command for SQL text files, you would probably just type:

cat /path/to/some/dir/*.sql | psql -U postgres someDatabase

So, I am asking myself, have I created something pointless?

As it turns out:

  • pgBee keeps track of the update process. If a pgBee instance is killed, the next invocation will carry on from where the previous one has stopped. And if it finds SQL errors, it will report how far it got in the input files before quitting.
  • pgBee is actually faster than psql when executing SQL statements from a text file. psql took 112m (with one transaction for each statement), psql -1 took 97m (with one transaction for the entire file) but pgBee finished in 21m !!! (with one transaction per batch) That’s a whopping 898 operations per second. All tests were run on the same database server (localhost), pgBee was batching groups of 100 statements at a time and a real data file was used, with 1131753 SQL statements in total (511335 DELETEs and 567577 INSERTs).
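
As a quick sanity check of the quoted rate, 1131753 statements in 21 minutes does indeed work out to roughly 898 statements per second:

```shell
# 1131753 statements / (21 minutes * 60 seconds) ~= 898 statements per second
echo $(( 1131753 / (21 * 60) ))
```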

pgBee: a PostgreSQL Bulk Updater in Java

pgBee is a set of Java classes I wrote for automating bulk updates of Postgresql databases on Linux servers. It requires Java (doh!) and Ant (as a build/execute front-end), it is cronnable and performs very well, especially in multi-threaded mode, which takes full advantage of multi-core CPUs in modern servers. The source of inspiration for pgBee has been previously described.

This code is released under a GNU General Public License (GPL).

Ant sometimes refuses to run in the background, so the best way to make pgBee work as a cron job is probably to call a simple shell script from cron, like the one below:

#!/bin/sh
export JAVA_HOME=/usr
export ANT_HOME=/usr/local/ant
/usr/local/bin/ant -f /path/to/build.xml run </dev/null &
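
The wrapper script can then be scheduled with cron. A hypothetical crontab entry (made-up path and time):

```
# m h dom mon dow  command: run the pgBee wrapper every night at 02:30
30 2 * * * /path/to/pgbee-cron.sh >/dev/null 2>&1
```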

All configuration is done in the settings.xml file, but some options may be set through the command line, e.g.

ant -f /path/to/build.xml -Dlock=yes -Dthreads=8 -Dparallel=yes run

pgBee processes all files it finds in a particular (in) directory and moves them to either a done directory or a rejects directory, if there were SQL errors. You’ll need to create the right directory structure and configure pgBee settings before starting. The pgBee process catches SIGTERM, SIGHUP etc. signals and exits gracefully, ready to resume from where it stopped the next time it is run. So, it should be quite reliable, in the absence of hard resets and kill -9. Having said that, I am supplying no guarantees of fitness for any purpose of any kind 🙂 Please use at your own risk.

If you need to make sure a particular set of statements is processed in the same transaction, you only have to include all statements in the same line of an input file, separated by semicolons. There is no limit to how many SQL statements you may include in a single line. More information about input file format, usage and configuration may be found in the downloadable tarball.
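
To keep a set of statements in one transaction, then, you would join them onto a single line before dropping the file into pgBee’s in directory. A sketch with a throwaway two-statement file (made-up table and file names):

```shell
# Two related statements on separate lines...
printf 'DELETE FROM t WHERE id=1;\nINSERT INTO t VALUES (1);\n' > multi.sql

# ...joined onto one line, so they would share a single transaction
paste -sd' ' multi.sql
```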

My new work computer is a Dell Vostro 1310 laptop. I am most chuffed with this new machine, as this is my first modern, up-to-date programming notebook for a long time now (some people think it’s boxy! but all I want is a no-nonsense machine). And it runs Debian Lenny, which marks a change from my old Ubuntu and Slackware days. So, this is me showing off a new laptop and sharing some issues for anyone wanting to install Debian Linux on a Vostro 1310.

Now, Vostro laptops may be customized considerably prior to ordering, so the hardware specs vary. Mine is an Intel Core2 Duo T9300 @ 2.50GHz (6MB L2 Cache, 800MHz FSB) box with 4GB RAM, probably near the top of the range. For WiFi, it’s got the Dell 1505 miniPCI card (probably a Broadcom 4328, capable of 802.11n) and, for Bluetooth, the standard Dell Wireless 360 Bluetooth Module. It’s got a 13.3-inch WXGA screen and a 128MB NVIDIA GeForce 8400M GS (64-bit) video card.

Here’s the output from lspci:

00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
00:01.0 PCI bridge: Intel Corporation Mobile PM965/GM965/GL960 PCI Express Root Port (rev 0c)
00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 03)
00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 03)
00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 03)
00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 03)
00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 03)
00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 03)
00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 03)
00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 03)
00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 03)
00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 03)
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f3)
00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 03)
00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 03)
00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 03)
01:00.0 VGA compatible controller: nVidia Corporation GeForce 8400M GS (rev a1)
06:00.0 Network controller: Broadcom Corporation BCM4328 802.11a/b/g/n (rev 03)
07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
08:05.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
08:05.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 02)
08:05.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)

Installing Debian Lenny 64-bit (amd64)

For the record, my first attempts at installing Linux on this box were very frustrating, as both Ubuntu 8.04 and 8.10 64-bit versions wouldn’t correctly recognise the Ethernet card (Realtek 8168) – which is the last thing I’d expect not to work. Same thing happened with 64-bit Debian Sarge. I was getting frustrated by the time I tried 64-bit Debian Lenny, but things suddenly worked out of the box and installation was a breeze (using the netinst CD).

I decided to go for the easy option and install Windows drivers for the WiFi card through ndiswrapper. The process is relatively straightforward:

Well, all you need to do (as root) is:

rmmod ssb ; rmmod ndiswrapper ; modprobe ndiswrapper ; modprobe ssb

You should now have a wlan0 interface to configure for WiFi connections (you might also want to install wifi-radar). The rmmod ssb etc. stuff needs to happen every time the system boots; I have written a simple initialization script that does this.

Now, I thought I was having a Bluetooth problem, until I noticed I had switched off WiFi and Bluetooth using the little switch at the right side of the laptop, next to the DVD drive slot. As it happens, Bluetooth worked out of the box, but please have a look at this if you have Vista pre-installed:

I had opted for Windows XP pre-installed with Vista installation media, so I didn’t experience any problems. In fact, I routinely use Bluetooth to connect to my 3skypephone mobile and use it as a 3G modem. Please have a look at this, if you are interested.

I have also installed NVIDIA drivers for the video card (here’s one of many tutorials) and Compiz-Fusion, which looks quite nice! Here’s a brief video:

Screen capture (with recordmydesktop) was a bit flickery, sorry, but I was stressing the machine: I was using loads of Compiz-Fusion eye-candy and installing Vista as a virtual machine through VirtualBox at the same time.

Suspend and Hibernate work out-of-the-box. All-in-all, this laptop gives me everything I need for heavy development work – power management, connectivity, performance (and eye candy to impress co-workers). I don’t know if the fingerprint scanner works, I haven’t even thought about using it yet.

My only real complaint up to now is audio 😦 This is an interesting story, actually, because I had sound when I first installed Lenny about a month ago (well, without headphone jack sense) and then I went for a kernel update, which broke sound! The sound device now doesn’t even show up in the operating system, so it’s no use recompiling ALSA (which I have done, just in case). Now, Lenny has not been officially branded as a “stable” release yet, this is supposed to happen in 1-2 months, so here’s hoping one of these days I do a system update and suddenly everything works (again). But, as I said earlier, I am using this laptop as a development box, so lack of sound doesn’t really affect me. It’d be nice, however, to be able to listen to some mp3s while at work, which I do through my n800 (as a quick fix).

Update (2008-11-21): A prerelease version of Adobe Flash player 10 has just been released for Linux 64-bit systems. You may find it here. I installed it by extracting and copying to /usr/local/lib and updating the /etc/alternatives/ symbolic link to point to /usr/local/lib/

Update (2009-02-09): The real problem with sound on this laptop is that the operating system does not even recognise there is a soundcard in the system (there is no audio controller in the lspci output). A few days ago, I decided to update my kernel to 2.6.26-1-amd64 using apt-get, just in case it would make a difference. Well, it doesn’t 😦 I have downloaded my kernel’s headers and recompiled the latest version of ALSA (1.0.19), but the audio controller just doesn’t show up. So, I’ve bought myself a cheap (10 EUR) C-Media based USBsound card, which works fine (mic too).

Linux On Laptops
TuxMobil - Linux on Laptops, Notebooks, PDAs and Mobile Phones

Have you ever wondered how and why things are organised in Linux filesystems? Do /opt, /var, /home and /usr baffle and confuse you? Now, I have been a Linux user for several years and, having used several different distributions, I have a pretty good idea of where things usually reside. But, somehow, I hadn’t come across this before, and it’s such a useful read, especially if you are a Linux newbie! Thanks, Lance.

The pdf link at the bottom of the page is an interesting read.

So, you want to print from Java applications in Linux (through CUPS, which is the default printing system in Ubuntu and other distributions)…

Turns out it’s easy: Select a particular Page Orientation in your printer’s Job Options panel. Thank you techexplorer…