
Tsung has a “proxy mode” which records SQL statements and produces an appropriate Tsung scenario file. What could be simpler? I shall just point my web application at the Tsung proxy instead of the database and use it to generate “typical usage” scenarios.
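For reference, and assuming a standard Tsung installation, the pgsql recorder is started from the command line roughly like this (check the documentation of your Tsung version for the exact options):

tsung-recorder -p pgsql start   # start recording through the proxy
tsung-recorder stop             # stop recording and write the session file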

Unfortunately, this is not an option if, say, your application uses a web framework which maintains several open connections to the database server. The Tsung proxy can only handle one connection at a time, so your application does not function properly and cannot be used to generate the “typical usage” scenarios.

Then there is pgFouine, a PostgreSQL log analyzer which shows some promise and can produce Tsung-compatible output on demand. But pgFouine principally analyzes log files to group and rank statements according to how well they perform in the database, and this approach has spilled over into its Tsung scenario file generation: the order of the SQL statements is not preserved! By itself this might not be a problem, but I often record multiple use-cases in one go and pgFouine mixes them up.

The best way to create our test cases, therefore, is to use the log files of an otherwise idle PostgreSQL server, after enabling the logging of all SQL statements in the server. I have written a few scripts which help with the process; they assume the PostgreSQL server logs via syslog, the format pgFouine also requires (we had already reconfigured our server this way). That is, the log lines need to look like this:

Sep  1 16:21:19 pgtest postgres[4359]: [136-1] LOG:  statement: SELECT rolname FROM pg_roles ORDER BY 1
Sep  1 16:21:19 pgtest postgres[4359]: [137-1] LOG:  duration: 0.178 ms

To make sure this is the case, you probably need to edit your postgresql.conf file and set the following values (log_min_duration_statement = 0 is what makes the server log every statement together with its duration; the rest route the output through syslog):

log_destination = 'syslog'
redirect_stderr = off
silent_mode = on
log_min_duration_statement = 0
log_duration = off
log_statement = 'none'
log_line_prefix = 'user=%u,db=%d,host=%h '
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

Then you need to edit /etc/syslog.conf to send the PostgreSQL (local0) facility to its own log file and exclude it from the default log file (/var/log/messages on most Linux distributions; adjust if yours differs):

local0.*                                                -/home/postgres/logs/postgresql.log
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages

For the changes to take effect, you need to restart the syslog service (/etc/init.d/syslog restart) and PostgreSQL.
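As a quick sanity check (not strictly part of the setup), you can send a test message to the local0 facility with the standard logger utility and confirm it lands in the right file:

logger -p local0.info "syslog routing test"
tail -n 1 /home/postgres/logs/postgresql.log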

You are now ready to start capturing SQL statements in the PostgreSQL log file. To make sure you will be able to filter the log file into separate use-cases, choose a unique string identifier (e.g. ‘complex search 001’) and throw it at the database server at the beginning and end of each use-case. You may do this by connecting to the server via ssh and typing:

echo "SELECT 'complex search 001';" | psql -U postgres

… before using your web application (which must be configured to talk to this particular PostgreSQL server). At the end of the use-case (‘complex search 001’), all you need to do is repeat the line above.
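If you record many use-cases, a small wrapper saves some typing. The following is just a sketch (mark-usecase is a hypothetical helper name; adjust the psql user to your setup): it emits the marker, waits while you drive the web application, then emits the same marker again:

#!/bin/sh
# mark-usecase: bracket a use-case recording with a marker statement
# usage: ./mark-usecase 'complex search 001'
TOKEN="$1"
echo "SELECT '$TOKEN';" | psql -U postgres
echo "Recording use-case '$TOKEN'... press Enter when finished."
read dummy
echo "SELECT '$TOKEN';" | psql -U postgres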

When you have finished recording all batches (use-cases) of SQL statements, you need to locate the PostgreSQL log file (e.g. /var/log/postgresql/postgresql.log) and use it as input for syslog-filter, a simple Perl script I have written, which you may run from the command line like so:

./syslog-filter postgresql.log  'complex search 001' > complex-search-001.log

… assuming the script is executable and located in the same directory as the postgresql.log file. This command creates complex-search-001.log, which contains only those SQL statements that belong to this use-case.
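For illustration, the filtered file should begin with the marker statement itself and contain only what was logged between the two markers. Something like this (these lines are made up, following the log format shown earlier):

Sep  1 16:30:02 pgtest postgres[4359]: [200-1] LOG:  statement: SELECT 'complex search 001'
Sep  1 16:30:07 pgtest postgres[4359]: [201-1] LOG:  statement: SELECT rolname FROM pg_roles ORDER BY 1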

Here is the code for syslog-filter:

#!/usr/bin/perl -w
use strict;

# syslog-filter: print only the log lines that fall between two
# occurrences of a marker token; the marker lines toggle recording.
if (scalar(@ARGV) < 2) {
   print "Usage: ./syslog-filter <file> <token>\ne.g. ./syslog-filter scenario.log 'Quoted companies'\n";
   exit(1);
}
open(MYFILE, '<', $ARGV[0]) or die "Can't open $ARGV[0] for reading...\n";
my $switch = 0;
while (my $line = <MYFILE>) {
   $switch = !$switch if $line =~ /\Q$ARGV[1]\E/; # a marker line flips the switch on/off
   print $line if $switch;
}
close(MYFILE);

For the next step, you may want to use the following script, syslog-to-tsung-xml:

#!/usr/bin/perl -w
use Parse::Syslog;
if(scalar(@ARGV) < 1) {
   print "Usage: ./syslog-to-tsung-xml <logfile>\ne.g. ./syslog-to-tsung-xml my-scenario.log\n"; exit(1);
}
my $parser = Parse::Syslog->new( $ARGV[0] ); $s = 0; $st = ""; # $s is a switch (record/not); $st starts empty to avoid a warning at the final flush
READINGLOOP: while(my $sl = $parser->next) {
   $line = $sl->{text}; # shorthand, so we don't have to write $sl->{text} all the time :-)
   if ($line =~ /LOG:  execute/ or $line =~ /LOG:  statement/) { # a 'LOG:  execute' or 'LOG:  statement' line means a new statement starts here...
      # but if the recording switch is already on, we need to save recorded statement into @selects
      if($s and $st ne "") { push @selects, $st; $s = 0; $st = ""; $g = undef; }
      # in other words, a 'LOG:  execute' also means the previous recording should end
      if($line =~ /\[(.+)-.+(SELECT .+)$/) { $s = 1; $g = $1; $st = $2; } # regular expression heaven
      # if this is a SELECT statement it is put in $st, $s is set to 1, $g contains id filtering next lines
      next READINGLOOP; # ok, let's proceed with the next line - don't execute the rest...
   }
   if ($s and $line =~ /\[(.+)-.+\] (.+)$/ and $g == $1) { $st .= $2; } # recording subsequent lines - concat
}
# just to be sure, we save whatever is inside $st once we reach the end of the file - no more 'LOG:  execute's
if($st ne "") { push @selects, $st; $s = 0; $st = ""; $g = undef; }
# now, we should scan the results for 'DETAIL:  parameters:' and perform all the described substitutions
my $array; my $hash; my $key; my $val; my $var; my $target; my $subs;
for($i=0;$i<scalar(@selects);$i++) {
   if ($selects[$i] =~ /^(.+)DETAIL:  parameters: (.+)$/) {
      # reading parameters, splitting them into key,val pairs for subsequent search and replace
      $array = (); $hash = {}; $subs = "";
      $target = $1;
      @$array = split ',' , $2;
      # print "\nBefore: ----------------------------------------------------------------------------------\n";
      # print $target, "\n";
      # print "------------------------------------------------------------------------------------------\n";
      foreach $var (@$array) {
         ($key,$val) = split '=', $var;
         $key =~ s/^ *(.+) +$/$1/;
         $val =~ s/^ *'(.+)' *$/$1/;
         $hash->{$key} = $val;
         # print $key, "\t", $val, "\n";
         $subs = "\\".$key;
         $target =~ s/$subs\:\:/\'$val\'::/g;
      }
      # print "After: ----------------------------------------------------------------------------------\n";
      # print $target, "\n";
      # print "------------------------------------------------------------------------------------------\n";
      $selects[$i] = $target;
   }
}
# and on to outputting our results...
# pure sql output if there is a second argument in the command line
if($ARGV[1]) { for($i=0;$i<scalar(@selects);$i++) { print $selects[$i],";\n"; } }
else {
# tsung compatible output
print <<STARTOFSESSION;
    <session name="$ARGV[0]" probability="100" type="ts_pgsql">
        <transaction name="connection">
            <request>
                <pgsql type="connect" database="mydatabase" username="myusername" />
            </request>
            <request>
                <pgsql type="authenticate" password="mypassword"/>
            </request>
        </transaction>
        <thinktime value="5"/>
            <transaction name="requests"> <!-- start of requests -->
STARTOFSESSION
for($i=0;$i<scalar(@selects);$i++) {
   print "\t\t\t\t<request><pgsql type=\"sql\"><![CDATA["; print $selects[$i],"\n"; print "]]></pgsql></request>\n"
}
print <<ENDOFSESSION;
            </transaction> <!-- end of requests -->
            <thinktime value="5"/> <!-- delay between scenario re-play -->
        <request><pgsql type="close"></pgsql></request>
    </session>
ENDOFSESSION
}

The second half of the script splices the logged bind parameters (the ‘DETAIL:  parameters:’ lines) back into their statements, so Tsung can replay everything as plain SQL. This is how you would run the script:

./syslog-to-tsung-xml complex-search-001.log > complex-search-001.xml
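Incidentally, as you can see in the script, passing any second command-line argument switches it to plain-SQL output (one statement per line, terminated by semicolons), which is handy for eyeballing what was captured:

./syslog-to-tsung-xml complex-search-001.log sql > complex-search-001.sql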

The XML invocation above generates a partial Tsung file in the proper format. The process needs to be repeated for every use-case we would like to include. The resulting XML files may then be concatenated into a single file (note that the shell expands *.xml in alphabetical order), like so:

cat *.xml > my-tsung-scenario.xml

The resulting file (my-tsung-scenario.xml) will be completed into a full, valid Tsung scenario file in section 2.4. In order to run the above scripts, you obviously need a working Perl environment and the Parse::Syslog Perl module, which may be installed by typing (as root):

cpan Parse::Syslog
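A quick way to confirm the module is installed and visible to your Perl (just a sanity check):

perl -MParse::Syslog -e 'print "Parse::Syslog is available\n"'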

Before proceeding any further, you may want to manually edit all occurrences of

<transaction name="requests">

…in my-tsung-scenario.xml, changing the name each time to reflect the use-case which follows. E.g.

<transaction name="complexSearch1">
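If you would rather not edit by hand, a one-liner per file, run before the concatenation step, does the same job (GNU sed’s -i edits the file in place; the transaction name here is just an example):

sed -i 's/<transaction name="requests">/<transaction name="complexSearch1">/' complex-search-001.xml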

Another required manual edit concerns the probability assigned to each use-case (session): you need to adjust the probability setting in every line like the following:

 <session name="complex-search-001.log" probability="100" type="ts_pgsql">

… to reflect the desired frequency of each use-case in the tests. Changing 100 to 25 in the above line will force 1 in 4 users during the Tsung tests to replay the ‘complex-search-001’ scenario.
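Keep in mind that Tsung expects the probabilities of all sessions in a scenario to add up to 100. With two use-cases weighted 1:3, for instance, the session elements (file names here are illustrative) would read:

<session name="complex-search-001.log" probability="25" type="ts_pgsql">
<session name="simple-search-002.log" probability="75" type="ts_pgsql">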

To turn a series of sessions described in the file my-tsung-scenario.xml into a full, valid scenario we need to type:

echo '<!DOCTYPE tsung SYSTEM "/usr/local/share/tsung/tsung-1.0.dtd" [] >

<tsung>
<!-- <tsung loglevel="debug" dumptraffic="true"> --> <!-- useful sometimes -->
   <clients>
      <client host="myclient" weight="1" cpu="2"></client>
   </clients>

   <servers>
      <server host="myserver" port="5432" type="tcp"/>
   </servers>

   <monitoring>
      <monitor host="myserver" type="erlang"></monitor> <!-- postgresql server -->
      <monitor host="myclient" type="erlang"></monitor>
   </monitoring>

   <load>
      <arrivalphase phase="1" duration="1800" unit="second">
         <users interarrival="4" unit="second"></users>
      </arrivalphase>
      <arrivalphase phase="2" duration="1800" unit="second">
         <users interarrival="2" unit="second"></users>
      </arrivalphase>
   </load>

   <sessions>

' >  head-tsung-scenario.xml

… to get a head-tsung-scenario.xml file which we can then edit according to our needs. If we keep the existing settings, Tsung will attempt to load-test a server called myserver from a single client, myclient, while monitoring hardware load on both machines (the names need to be resolvable; check your DNS service and/or your /etc/hosts file). In the load section, two load phases have been defined, starting at “a new user every 4 seconds” and then doubling the rate. Each phase is meant to last half an hour (1800 s), but once the server reaches its breaking point, user sessions no longer terminate properly and the current phase is extended, as Tsung waits for all users to finish before proceeding to the next one. Once you have changed head-tsung-scenario.xml according to your needs, you may complete the generation of a new scenario file by typing:

cat head-tsung-scenario.xml my-tsung-scenario.xml > full-tsung-scenario.xml; echo '
    </sessions>
</tsung>
' >> full-tsung-scenario.xml

This file (full-tsung-scenario.xml) is actually a full, valid scenario file which may be used for testing. But you probably want to tweak one or two things to make the testing relevant to your system, which is what we shall discuss in the next installment of this tutorial.
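Before launching anything, it is worth checking that the assembled file is well-formed and valid against the Tsung DTD (xmllint ships with libxml2; the DTD path comes from the DOCTYPE line above):

xmllint --noout --valid full-tsung-scenario.xml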
