Category Archives: System Administration

The biggest hurdle I have had to overcome in order to use Tsung for load-testing Postgresql servers has been a conceptual mismatch between Tsung and what I wanted to do. Tsung’s model probably originates in the load-testing of web servers: everything is described in terms of user arrival rate, hits, pages, transactions, thinktimes. Database usage may not be readily described in these terms.

Before going any further, I should probably make clear I didn’t need Tsung to do performance testing. Performance testing may easily be done by throwing a specific set of SQL queries at the database server (in controlled conditions) and checking/timing the results (this could be a separate tutorial :-). Tsung gives you the tools to model proper user interaction and real-life usage, and what I have been trying to determine is a server’s load capacity.

In other words, how many multiples of our typical or target load could a particular server/set-up handle?

And this load had to be expressed in a Tsung-compatible xml file describing mainly:

  • alternative user sessions (with associated probabilities)
  • user arrival rate

Here’s a quick reminder of what Tsung transactions mean:

Different parts of a session may be grouped into transactions (Tsung-speak — nothing to do with your normal database transactions) for statistical monitoring of SQL groups. Transactions are characterised by their name, and names may be shared across sessions. This way, there are tremendous reporting possibilities, as all sessions may have a “connection” transaction offering global connection statistics, while transactions with unique names produce statistics on a specific use-case basis (e.g. complex data search, typical page load etc.).

For simplicity, I have opted to include only two “transactions” in each alternative user “session” (use-case):

  • a connection transaction (identified as “connection” in all “sessions”)
  • a SQL block transaction (with a unique, “session”-specific name)
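
Put together, a skeleton of such a “session” might look like the following sketch (the database, credentials and SQL are placeholders; the element names match the ts_pgsql scenario files generated later on this page):

    <session name="complex-search-001" probability="25" type="ts_pgsql">
        <transaction name="connection">
            <request><pgsql type="connect" database="mydatabase" username="myusername"/></request>
            <request><pgsql type="authenticate" password="mypassword"/></request>
        </transaction>
        <transaction name="complexSearch1">
            <request><pgsql type="sql"><![CDATA[SELECT 1;]]></pgsql></request>
        </transaction>
        <request><pgsql type="close"></pgsql></request>
    </session>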

Know your (target) usage

Here comes the obvious but important bit: you need to know your real-life or your target usage to proceed! Expressing your (target) usage in Tsung terms is the only thing that ties your experiment to real life and allows some conclusions to be drawn from the tests.

The defined “sessions” should, of course, reflect your usage profile. This boils down to including a representative variety of use-cases, with the right probability factor assigned to each case.

But you also need to express the number of new “sessions” per second Tsung initiates against your system, i.e. the Tsung user arrival rate.

Adapting the scenario file

This is a quick summary of what you should edit in your Tsung scenario file to specify the desired load:

  • allocate different probabilities to your alternative “sessions” (do they add up to 100?)
  • make sure you wrap the important bits of each session into unique “transactions”
  • define appropriate user arrival rates in your “load phases”

Load phases are defined in this section of the Tsung scenario file:

   <load>
      <arrivalphase phase="1" duration="1800" unit="second">
         <users interarrival="10" unit="second"></users>
      </arrivalphase>
      <arrivalphase phase="2" duration="1800" unit="second">
         <users interarrival="6" unit="second"></users>
      </arrivalphase>
      ... and so on...
    </load>

Analyzing the results

Assuming you have managed to run your tests, now comes the tricky part of interpreting your results. The Tsung helper perl script generates a multitude of graphs, but here’s a quick shortcut. The files which have been most useful to me are the following:

  • report.html
  • images/graphes-Transactions-max_sample.png
  • images/graphes-Transactions-mean.png
  • images/graphes-Users-simultaneous.png

When looking at these graphs, the two most important things to remember are the length (in seconds) of each load phase and what each phase represents. For example, the following graph (manually colored for convenience) may be divided into four sections, each representing a particular load phase (each phase lasted 1800 seconds, i.e. half an hour). This graph basically tells us things start to fall apart at 8x our target load.

simultaneous DB users

The reason the interpretation of this graph is easy is that we are not using any loops in each user “session”. Each Tsung “user” simply connects, sends a particular SQL block to the server, receives some results and exits. The user arrival rate stays constant throughout a particular load phase. Statistically speaking, if the server is responding properly, the number of new users in the system is always matched by the number of users exiting. Therefore, you only get simultaneous Tsung users if things start going wrong, when the server’s response times are increasing. And when you see the green and red lines splitting, things have gotten out of hand: Tsung is introducing new users which are not even able to connect!

We should always, of course, check whether the server’s performance was acceptable while it was “coping” with our load. In addition to the numbers in report.html, you could get the big picture by simply looking at images/graphes-Transactions-max_sample.png. The horizontal line for each “session” corresponds to the longest response time ever recorded for a particular use-case.

max-elapsed-time-per-transaction

Armed with this knowledge, you may start experimenting further. Does your server recover from brief spikes of activity (e.g. long 4x phase, brief 16x phase, 4x phase etc.)? What effect do particular server configuration changes have on load capacity? And so on… This could easily turn into a full-time job 🙂

With modern servers, a lot of people are migrating to 64-bit architectures. Apparently, there are performance considerations when using a 64-bit JVM. If you are using or considering installing a 64-bit JVM, you might want to read up on compressed oops (ordinary object pointers); don’t worry, we are only talking about JVM command-line options which affect performance. Please visit the links below:

http://www.lowtek.ca/roo/2008/java-performance-in-64bit-land/
http://blog.juma.me.uk/2008/10/14/32-bit-or-64-bit-jvm-how-about-a-hybrid/
http://publib.boulder.ibm.com/infocenter/javasdk/v6r0/index.jsp?topic=/com.ibm.java.doc.user.lnx.60/user/garbage_compressed_refs.html
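
If you just want to experiment, compressed oops are toggled with a single HotSpot command-line flag on 64-bit JVMs recent enough to support the feature; a minimal example (the jar name is just a placeholder) would be:

java -XX:+UseCompressedOops -jar myapp.jar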

Apparently, there has been a severe security breach at Fedora. They had to rebuild their repositories and change their signing keys, and it might just be they have only rebuilt repositories for Fedora 8 and 9. Which might just explain why I have been unable to use yum to install software on a Fedora Core 5 box for several weeks now! And, yes, people, I know FC5 is no longer officially supported, but the mirrors were there and I was still using them not long ago. So, attention Fedora users! If you are using a Fedora release below 8, you should probably consider re-installing a recent release or risk staying stuck with a system with no software updates and no packages.

Please have a look at this: http://www.redhat.com/archives/fedora-announce-list/2008-September/msg00007.html

Tsung has a “proxy mode” which records SQL statements and produces an appropriate Tsung scenario file. What could be simpler? I shall just point my web application to speak to the Tsung proxy instead of the database and I will use it to generate “typical usage” cases.

Unfortunately, this is not an option if, say, your application uses a web framework which maintains several open connections to the database server. The Tsung proxy can only handle one connection at a time, so your application does not function properly and you cannot use it to generate the “typical usage” scenarios.

Then there is pgFouine, a PostgreSQL log analyzer which shows some promise and can produce Tsung-compatible output on demand. But pgFouine principally analyzes log files to group and rank statements according to how well they perform in the database, and this approach has spilled over to Tsung scenario file generation: the order of the SQL statements is not preserved! This, by itself, would perhaps not be a problem, but I often record multiple use-cases in one go and pgFouine mixes them up.

The best way to create our test cases, therefore, is to use the log files of an otherwise idle Postgresql server, after enabling the logging of all SQL statements on that server. I have written a few scripts which help with the process, but they assume the Postgresql server has already been switched to the logging format pgFouine requires (syslog). In other words, the Postgresql server needs to log in this particular style:

Sep  1 16:21:19 pgtest postgres[4359]: [136-1] LOG:  statement: SELECT rolname FROM pg_roles ORDER BY 1
Sep  1 16:21:19 pgtest postgres[4359]: [137-1] LOG:  duration: 0.178 ms

To make sure this is the case, you probably need to edit your postgresql.conf file and set the following values:

log_destination = 'syslog'
redirect_stderr = off
silent_mode = on
log_min_duration_statement = 0
log_duration = off
log_statement = 'none'
log_line_prefix = 'user=%u,db=%d,host=%h '
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

Then, you need to edit /etc/syslog.conf to set up a PostgreSQL facility and exclude it from the default log file:

local0.*   -/home/postgres/logs/postgresql.log
*.info;mail.none;authpriv.none;cron.none;local0.none   /var/log/messages

For the changes to take effect, you need to restart the syslog service (/etc/init.d/syslog restart) and Postgresql.

You are now ready to start capturing SQL statements in the Postgresql log file. To make sure you will be able to filter the log file into separate use-cases, you should choose a unique string identifier (e.g. ‘complex search 001’) to throw at the database server at the beginning and end of each particular use-case. You may do this by connecting to the server via ssh and typing:

echo "SELECT 'complex search 001';" | psql -U postgres

… before using your web application (which must be configured to talk to this particular Postgresql server). At the end of this use-case (‘complex search 001’) all you need to do is repeat the line above.

When you have finished recording all batches (use-cases) of SQL statements, you need to locate the postgresql log file (e.g. /var/log/postgresql/postgresql.log) and use it as input for the perl script below:

I have created syslog-filter, a simple perl script you may run from the command line, like so:

./syslog-filter postgresql.log  'complex search 001' > complex-search-001.log

… assuming the script is executable and is located in the same directory as the postgresql.log file. This command creates complex-search-001.log, which contains only those SQL statements that belong to this use-case.

Here is the code for syslog-filter:

#!/usr/bin/perl -w
# syslog-filter: print only the log lines that fall between two occurrences of a given token
if(scalar(@ARGV) < 2) {
   print "Usage: ./syslog-filter <file> <token>\ne.g. ./syslog-filter scenario.log 'Quoted companies'\n"; exit(1);
}
open(MYFILE, '<'.$ARGV[0]) or die "Can't open ".$ARGV[0]." for reading...\n";
my $switch = 0; my $line = "";
while($line = <MYFILE>) {
    if($line =~ /$ARGV[1]/) { &toggle_switch(); } # each occurrence of the token toggles printing on/off
    print $line if $switch;
}
close(MYFILE);

sub toggle_switch { if($switch) { $switch=0; } else { $switch=1; } }

For the next step, you may want to use the following script, syslog-to-tsung-xml:

#!/usr/bin/perl -w
use Parse::Syslog;
if(scalar(@ARGV) < 1) {
   print "Usage: ./syslog-to-tsung-xml <logfile>\ne.g. ./syslog-to-tsung-xml my-scenario.log\n"; exit(1);
}
my $parser = Parse::Syslog->new( $ARGV[0] ); $s = 0; # $s is just a switch whether we should record/not
READINGLOOP: while(my $sl = $parser->next) {
   $line = $sl->{text}; # i don't want to write $sl->{text} all the time 🙂
   if ($line =~ /LOG:  execute/ or $line =~ /LOG:  statement/) { # a 'LOG:  execute' or 'LOG:  statement' line means we should start recording...
      # but if the recording switch is already on, we need to save recorded statement into @selects
      if($s and $st ne "") { push @selects, $st; $s = 0; $st = ""; $g = undef; }
      # in other words, a 'LOG:  execute' also means the previous recording should end
      if($line =~ /\[(.+)-.+(SELECT .+)$/) { $s = 1; $g = $1; $st = $2; } # regular expression heaven
      # if this is a SELECT statement it is put in $st, $s is set to 1, $g contains id filtering next lines
      next READINGLOOP; # ok, let's proceed with the next line - don't execute the rest...
   }
   if ($s and $line =~ /\[(.+)-.+\] (.+)$/ and $g == $1) { $st .= $2; } # recording subsequent lines - concat
}
# just to be sure, we save whatever is inside $st once we reach the end of the file - no more 'LOG:  execute's
if($st ne "") { push @selects, $st; $s = 0; $st = ""; $g = undef; }
# now, we should scan the results for 'DETAIL:  parameters:' and perform all the described substitutions
my $array; my $hash; my $key; my $val; my $var; my $target; my $subs;
for($i=0;$i<scalar(@selects);$i++) {
   if ($selects[$i] =~ /^(.+)DETAIL:  parameters: (.+)$/) {
      # reading parameters, splitting them into key,val pairs for subsequent search and replace
      $array = (); $hash = {}; $subs = "";
      $target = $1;
      @$array = split ',' , $2;
      # print "\nBefore: ----------------------------------------------------------------------------------\n";
      # print $target, "\n";
      # print "------------------------------------------------------------------------------------------\n";
      foreach $var (@$array) {
         ($key,$val) = split '=', $var;
         $key =~ s/^ *(.+) +$/$1/;
         $val =~ s/^ *'(.+)' *$/$1/;
         $hash->{$key} = $val;
         # print $key, "\t", $val, "\n";
         $subs = "\\".$key;
         $target =~ s/$subs\:\:/\'$val\'::/g;
      }
      # print "After: ----------------------------------------------------------------------------------\n";
      # print $target, "\n";
      # print "------------------------------------------------------------------------------------------\n";
      $selects[$i] = $target;
   }
}
# and on to outputting our results...
# pure sql output if there is a second argument in the command line
if($ARGV[1]) { for($i=0;$i<scalar(@selects);$i++) { print $selects[$i],";\n"; } }
else {
# tsung compatible output
print <<STARTOFSESSION;
    <session name="$ARGV[0]" probability="100" type="ts_pgsql">
        <transaction name="connection">
            <request>
                <pgsql type="connect" database="mydatabase" username="myusername" />
            </request>
            <request>
                <pgsql type="authenticate" password="mypassword"/>
            </request>
        </transaction>
        <thinktime value="5"/>
            <transaction name="requests"> <!-- start of requests -->
STARTOFSESSION
for($i=0;$i<scalar(@selects);$i++) {
   print "\t\t\t\t<request><pgsql type=\"sql\"><![CDATA["; print $selects[$i],"\n"; print "]]></pgsql></request>\n"
}
print <<ENDOFSESSION;
            </transaction> <!-- end of requests -->
            <thinktime value="5"/> <!-- delay between scenario re-play -->
        <request><pgsql type="close"></pgsql></request>
    </session>
ENDOFSESSION
}

This is how you would run the above script:

./syslog-to-tsung-xml complex-search-001.log > complex-search-001.xml

This generates a partial Tsung file in the proper format. This process needs to be repeated for every different use-case we would like to include. The resulting xml files may be concatenated into a single file, like so:

cat *.xml > my-tsung-scenario.xml

The resulting file (my-tsung-scenario.xml) will be completed into a full, valid Tsung scenario file in section 2.4. In order to run the above scripts, you obviously need a working Perl environment and the Parse::Syslog perl module, which may be installed by typing (as root):

cpan Parse::Syslog

Before proceeding any further, you may want to manually edit all occurrences of

<transaction name="requests">

…in my-tsung-scenario.xml, changing the name each time to reflect the use-case which follows. E.g.

<transaction name="complexSearch1">

Another required manual edit concerns the probability factors assigned to each use-case (session). You therefore need to adjust the probability settings of all such occurrences:

 <session name="complex-search-001.log" probability="100" type="ts_pgsql">

… to reflect the desired frequency of each use-case in the tests. Changing 100 to 25 in the above line will force 1 in 4 users during the Tsung tests to replay the ‘complex-search-001’ scenario.

To turn a series of sessions described in the file my-tsung-scenario.xml into a full, valid scenario we need to type:

echo '<!DOCTYPE tsung SYSTEM "/usr/local/share/tsung/tsung-1.0.dtd" [] >

<tsung>
<!-- <tsung loglevel="debug" dumptraffic="true"> --> <!-- useful sometimes -->
   <clients>
      <client host="myclient" weight="1" cpu="2"></client>
   </clients>

   <servers>
      <server host="myserver" port="5432" type="tcp"/>
   </servers>

   <monitoring>
      <monitor host="myserver" type="erlang"></monitor> <!- postgresql server ->
      <monitor host="myclient" type="erlang"></monitor>
   </monitoring>

   <load>
      <arrivalphase phase="1" duration="1800" unit="second">
         <users interarrival="4" unit="second"></users>
      </arrivalphase>
      <arrivalphase phase="2" duration="1800" unit="second">
         <users interarrival="2" unit="second"></users>
      </arrivalphase>
   </load>

   <sessions>

' >  head-tsung-scenario.xml

… to get a head-tsung-scenario.xml file which we can then edit according to our needs. If we keep the existing settings, Tsung will attempt to load-test a server called myserver (the names need to be resolvable, so please check your DNS service and/or your /etc/hosts file) from a single client, myclient, while trying to monitor hardware load on both machines. In the load section, two load phases have been defined, starting at “new user every 4 seconds” and then doubling the rate. Each of these phases is meant to last half an hour (1800s), but once the server reaches its breaking point, user sessions stop terminating properly and the duration of the current load phase is stretched, as Tsung waits for all users to finish before proceeding to the next phase. Once you have changed head-tsung-scenario.xml according to your needs, you may complete the generation of a new scenario file by typing:

 cat head-tsung-scenario.xml my-tsung-scenario.xml > full-tsung-scenario.xml; echo '
    </sessions>
</tsung>
' >> full-tsung-scenario.xml

This file (full-tsung-scenario.xml) is actually a full, valid scenario file which may be used for testing. But you probably want to tweak one or two things to make this testing relevant to your system, which is what we shall discuss in the next installment of this tutorial.

If you suddenly needed a cronnable Postgresql database update command for SQL text files, you would probably just type:

cat /path/to/some/dir/*.sql | psql -U postgres someDatabase

So, I am asking myself, have I created something pointless?

As it turns out:

  • pgBee keeps track of the update process. If a pgBee instance is killed, the next invocation will carry on from where the previous one stopped. And if it finds SQL errors, it will report how far it got in the input files before quitting.
  • pgBee is actually faster than psql when executing SQL statements from a text file: psql took 112m (with one transaction for each statement), psql -1 took 97m (with one transaction for the entire file), but pgBee finished in just 21m (with one transaction per batch)! That’s a whopping 898 operations per second (1131753 statements in roughly 1260 seconds). All tests were run on the same database server (localhost), pgBee was batching groups of 100 statements at a time, and a real data file was used, with 1131753 SQL statements in total (511335 DELETEs and 567577 INSERTs).

In a previous post, I promised some examples/tutorials on load-testing Postgresql servers with Tsung. Well, I have tried to develop a database performance testing methodology that may be: a. application-specific, and b. easily applied to different servers and configurations, to assess their relative performance.

Tsung is ideally suited for application-specific Postgresql testing, as it supports a “proxy mode” to record SQL sessions, which are then turned into a scenario file and replayed any number of times. It also supports including alternative sessions in the same scenario file, so that each simulated new user may send a different set of SQL statements, according to the probability assigned to each session.

Different parts of a session may be grouped into transactions (Tsung-speak — nothing to do with your normal database transactions) for statistical monitoring of SQL groups. Transactions are characterised by their name, and names may be shared across sessions. This way, there are tremendous reporting possibilities, as all sessions may have a “connection” transaction offering global connection statistics, while transactions with unique names produce statistics on a specific use-case basis (e.g. complex data search, typical page load etc.).

I’d say there are two main preparation stages for meaningful Postgresql load-testing with Tsung:

  • capturing representative SQL use-cases and turning them into a Tsung scenario file
  • expressing your real-life or target load (session probabilities and user arrival rates) in that scenario file

Each of these stages will be analyzed in its own post. It turns out capturing SQL statements and turning them into a Tsung scenario file was not as easy as I thought.

a Postgresql Bulk Updater in Java

pgBee is a set of Java classes I wrote for automating bulk updates of Postgresql databases on Linux servers. It requires Java (doh!) and Ant (as a build/execute front-end), it is cronnable and performs very well, especially in multi-threaded mode, which takes full advantage of multi-core CPUs in modern servers. The source of inspiration for pgBee has been previously described.

This code is released under a GNU General Public License (GPL).

Ant sometimes refuses to run in the background, so the best way to make pgBee work as a cron job is probably to call a simple shell script from cron, like the one below:

#!/bin/bash
export JAVA_HOME=/usr
export ANT_HOME=/usr/local/ant
/usr/local/bin/ant -f /path/to/build.xml run </dev/null &

All configuration is done in the settings.xml file, but some options may be set through the command line, e.g.

ant -f /path/to/build.xml -Dlock=yes -Dthreads=8 -Dparallel=yes run

pgBee processes all files it finds in a particular (in) directory and moves them to either a done directory or a rejects directory, if there were SQL errors. You’ll need to create the right directory structure and configure pgBee settings before starting. The pgBee process catches SIGTERM, SIGHUP etc. signals and exits gracefully, ready to resume from where it stopped the next time it is run. So, it should be quite reliable, in the absence of hard resets and kill -9. Having said that, I am supplying no guarantees of fitness for any purpose of any kind 🙂 Please use at your own risk.

If you need to make sure a particular set of statements is processed in the same transaction, you only have to include all the statements on the same line of an input file, separated by semi-colons. There’s no limit to how many SQL statements you may include in a single line. More information about input file format, usage and configuration may be found in the downloadable tarball.
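
As an illustration of the format, an (entirely made-up) input line such as the one below would be executed by pgBee as a single transaction:

DELETE FROM prices WHERE isin = 'XS0000000001'; INSERT INTO prices (isin, price) VALUES ('XS0000000001', 101.25);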

Data models are good and they are clear, if you’re the person writing the application and devising the model. Hell, sometimes, they are not clear even then! So, imagine what happens when you get someone from the street to connect to your database and read your schema in order to understand it. No chance!

Now, this is not about some poor wardriver who doesn’t know how to read the implicit relationships between tables in your model – they had it coming! But what about your legit users, working on a particular aspect of your infrastructure or application, such as developers, DBAs etc.? How on earth do they make sense of it all when they first start?

Yes, yes, in an ideal world everything’s properly documented, but when was the last time you saw that in a real life situation? Real IT people don’t write helpful comments when they create their tables, views, functions etc. Referential integrity? Don’t make me laugh! Most developers avoid database constraints, to keep the application portable between database systems and database error messages to a bare minimum. Integrity rules are usually enforced at the application level. From a DBA’s perspective, most enterprise-level databases are big collections of seemingly unrelated tables, with no business logic in the DB system itself.

But don’t despair! Help is at hand. Enter Schema Spy:

http://schemaspy.sourceforge.net/

You download the jar file, and then you run something along the lines of

java -jar schemaSpy_3.1.1.jar -t pgsql -cp /path/to/jdbc.jar \
                              -u user -p password -s schema \
                              -db dbname -host localhost:5432 \
                              -o output-dir

After a while, you have a look in output-dir, and the reports are really nice.

Schema Spy even deduces table relationships from field names and types. And it seems to support several different database systems, including Oracle and MySQL. Hurrah!

My new work computer is a Dell Vostro 1310 laptop. I am most chuffed with this new machine, as this is my first modern, up-to-date programming notebook for a long time now (some people think it’s boxy! but all I want is a no-nonsense machine). And it runs Debian Lenny, which marks a change from my old Ubuntu and Slackware days. So, this is me showing off a new laptop and sharing some issues for anyone wanting to install Debian Linux on a Vostro 1310.

Now, Vostro laptops may be customized considerably prior to order, so the hardware specs vary. Mine is an Intel Core2 Duo T9300 @ 2.50GHz (6MB L2 Cache, 800 MHz FSB), 4GB RAM box, probably near the top of the range. For WiFi, it’s got the Dell 1505 miniPCI card (that’s probably a Broadcom 4328, capable of 802.11n) and for Bluetooth the standard Dell Wireless 360 Bluetooth Module. It’s got a 13.3 inch WXGA screen and a 128MB NVIDIA GeForce 8400M GS (64 bit) video card.

Here’s the output from lspci:

00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
00:01.0 PCI bridge: Intel Corporation Mobile PM965/GM965/GL960 PCI Express Root Port (rev 0c)
00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 03)
00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 03)
00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 03)
00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 03)
00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 03)
00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 03)
00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 03)
00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 03)
00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 03)
00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 03)
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f3)
00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 03)
00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 03)
00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 03)
01:00.0 VGA compatible controller: nVidia Corporation GeForce 8400M GS (rev a1)
06:00.0 Network controller: Broadcom Corporation BCM4328 802.11a/b/g/n (rev 03)
07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
08:05.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
08:05.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 02)
08:05.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)

Installing Debian Lenny 64-bit (amd64)

For the record, my first attempts at installing Linux on this box were very frustrating, as both Ubuntu 8.04 and 8.10 64-bit versions wouldn’t correctly recognise the Ethernet card (Realtek 8168) – which is the last thing I’d expect not to work. Same thing happened with 64-bit Debian Sarge. I was getting frustrated by the time I tried 64-bit Debian Lenny, but things suddenly worked out of the box and installation was a breeze (using the netinst CD).

I decided to go for the easy option and install Windows drivers for the WiFi card through ndiswrapper. The process is relatively straightforward:

Well, all you need to do (as root) is:

rmmod ssb ; rmmod ndiswrapper ; modprobe ndiswrapper ; modprobe ssb

You should now have a wlan0 interface to configure for WiFi connections (you might also want to install wifi-radar). The rmmod ssb etc. stuff needs to happen every time the system boots. I have written a simple initialization script that does this.
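
A minimal sketch of such a boot-time script (module names as above; wiring it into the boot sequence, e.g. with update-rc.d on Debian, is left to you) might look like this:

#!/bin/sh
# unload both modules, then load ndiswrapper before ssb (see above)
rmmod ssb 2>/dev/null
rmmod ndiswrapper 2>/dev/null
modprobe ndiswrapper
modprobe ssb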

Now, I thought I was having a Bluetooth problem, until I noticed I had switched off WiFi and Bluetooth using the little switch at the right side of the laptop, next to the DVD drive slot. As it happens, Bluetooth worked out of the box, but please have a look at this if you have Vista pre-installed: http://onemansjourneyintolinux.blogspot.com/2007/10/enabling-bluetooth.html

I had opted for Windows XP pre-installed with Vista installation media, so I didn’t experience any problems. In fact, I routinely use Bluetooth to connect to my 3skypephone mobile and use it as a 3G modem. Please have a look at this, if you are interested.

I have also installed NVIDIA drivers for the video card (here’s one of many tutorials) and Compiz-Fusion, which looks quite nice! Here’s a brief video:

Screen capture (with recordmydesktop) was a bit flickery, sorry, but I was stressing the machine: I was using loads of Compiz-Fusion eye-candy and installing Vista as a virtual machine through VirtualBox at the same time.

Suspend and Hibernate work out-of-the-box. All-in-all, this laptop gives me everything I need for heavy development work – power management, connectivity, performance (and eye candy to impress co-workers). I don’t know if the fingerprint scanner works, I haven’t even thought about using it yet.

My only real complaint up to now is audio 😦 This is an interesting story, actually, because I had sound when I first installed Lenny about a month ago (well, without headphone jack sense) and then I went for a kernel update, which broke sound! The sound device now doesn’t even show up in the operating system, so it’s no use recompiling ALSA (which I have done, just in case). Now, Lenny has not yet been officially branded a “stable” release (this is supposed to happen in 1-2 months), so here’s hoping one of these days I do a system update and suddenly everything works (again). But, as I said earlier, I am using this laptop as a development box, so lack of sound doesn’t really affect me. It’d be nice, however, to be able to listen to some mp3s while at work, which I do through my n800 (as a quick fix).

Update (2008-11-21): A prerelease version of Adobe Flash player 10 has just been released for Linux 64-bit systems. You may find it here. I installed it by extracting and copying libflashplayer.so to /usr/local/lib and updating the /etc/alternatives/flash-mozilla.so symbolic link to point to /usr/local/lib/libflashplayer.so
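
In shell terms, the installation boils down to something like the following (run as root; the tarball name is just a placeholder for whatever the downloaded archive is actually called):

tar xzf libflashplayer-10-linux-x86_64.tar.gz
cp libflashplayer.so /usr/local/lib/
ln -sf /usr/local/lib/libflashplayer.so /etc/alternatives/flash-mozilla.so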

Update (2009-02-09): The real problem with sound on this laptop is that the operating system does not even recognise there is a soundcard in the system (there is no audio controller in the lspci output). A few days ago, I decided to update my kernel to 2.6.26-1-amd64 using apt-get, just in case it would make a difference. Well, it doesn’t 😦 I have downloaded my kernel’s headers and recompiled the latest version of ALSA (1.0.19), but the audio controller just doesn’t show up. So, I’ve bought myself a cheap (10 EUR) C-Media-based USB sound card, which works fine (mic too).
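
For anyone wanting to repeat the kernel-headers/ALSA recompile, the steps amount to roughly the following sketch (the alsa-driver version is the one mentioned above; the tarball comes from the ALSA project site):

apt-get install build-essential linux-headers-$(uname -r)
tar xjf alsa-driver-1.0.19.tar.bz2
cd alsa-driver-1.0.19
./configure && make && make install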


Linux On Laptops
TuxMobil - Linux on Laptops, Notebooks, PDAs and Mobile Phones

Yet another work-related post. I have been asked to write a better automatic database update system and against my natural tendencies toward Perl and Python I have opted to do it in Java. Now, previous attempts in Java had been abandoned because they were not performing very well, but I wanted to build something with potential for integration with the company’s infrastructure, so I rolled up my sleeves and decided to investigate.

A quick Google search produced some interesting discussions (please see the Interesting Links below). In summary, the official JDBC Postgresql driver does not support COPY operations and people complain that it’s slow for bulk updates. However, our update sql files are not very structured and, in fact, may contain any (as in different each time) valid SQL code, so COPY is not what I’d use anyway.

Some hope for reasonable performance appeared in the form of the driver’s batch mode. So, I wrote some Java classes which read multiple lines of sql statements from an sql text file into a String buffer of configurable size. When this size is reached, these sql statements are added to the reused Statement object with addBatch() and are executed in their own transaction (I have set auto-commit to off) through executeBatch().
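
This is not the pgBee code itself, just a stripped-down sketch of that batching approach (class name, connection details and file handling are made up, and error handling is omitted):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BatchLoader {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "myuser", "mypassword");
        conn.setAutoCommit(false);                 // one transaction per batch
        Statement stmt = conn.createStatement();
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        int batchSize = 100;                       // statements per batch/transaction
        int count = 0;
        String line;
        while ((line = in.readLine()) != null) {
            stmt.addBatch(line);                   // queue one SQL statement
            if (++count % batchSize == 0) {
                stmt.executeBatch();               // send the whole batch to the server
                conn.commit();                     // commit this batch's transaction
            }
        }
        stmt.executeBatch();                       // flush any remaining statements
        conn.commit();
        in.close();
        stmt.close();
        conn.close();
    }
}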

Now, I have tried inserting one million rows into a table using a different buffer size each time, i.e. grouping sql statements in batches of one, ten, hundred and thousand statements per transaction. The results are quite promising, don’t you think? (low spec machine, btw)

  • batches of 1 –> 49m 55s
  • batches of 10 –> 15m 04s
  • batches of 100 –> 08m 21s
  • batches of 1000 –> 33m 12s

Interesting links (References):

multi-statement JDBC updates in batch mode: http://archives.postgresql.org/pgsql-jdbc/2007-04/msg00076.php

making batch updates in JDBC applications: http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/ad/tjvbtupd.htm

no copy from postgres JDBC: http://archives.postgresql.org/pgsql-jdbc/2004-06/msg00027.php

copy for PostgreSQL 8.x JDBC Driver: http://kato.iki.fi/sw/db/postgresql/jdbc/copy/