Monday, 12 January 2009

WSTest, some numbers - Updated

In my last post I talked about my Java implementation of WSTest, which can compete performance-wise with the Microsoft implementation. My initial tests were performed using virtual machines and Windows 2003, but since then I've managed to get my hands on an install CD of Windows 2008 and a gigabit switch, so I could finally perform tests with some resemblance to a valid setup.


I still don't have server-class hardware to play with, but at least I can run the tests on more than one machine. The web service host server is a Thinkpad T61p (Core2 Duo T7700 @ 2.40GHz, 4GB RAM); the client machines are a Macbook (Core2 Duo @ 2.16GHz, 1GB RAM) and an HP Compaq 2510p (Core2 Duo U7600 @ 1.20GHz, 2GB RAM).

I used ApacheBench as the tool for generating load; an example invocation for the GetOrder test:

ab -c30 -n500000 -k -p post_files/getorder20.xml \
   -H 'SOAPAction: "uri:WSTestWeb-TestService/GetOrder"' \
   -T "text/xml;charset=UTF-8"

The WCF implementation tested was "WSTestSelfHost" (the numbers for "WSTest_IISHosted" are lower), running on Windows 2008 Std with all the latest updates and .NET 3.5 SP1. The Java implementation runs on Ubuntu 8.10 with the generic kernel, using the sun-java-jdk-1.6.10 JVM.


WSTest (results in tps, higher is better)

Test Name | Windows/.NET/WCF | Windows/Mina | Linux/Mina | Linux/Grizzly

Some notes:

  • The EchoSynthetic values aren't present because it's not clear to me what the "20" and the "100" are supposed to mean in this test.
  • My Java implementation has a huge drop in throughput in the GetOrder test going from 20 to 100 items; I will have to investigate the reason for this pathological behaviour. This has been fixed.
  • In some tests the Java results are more than 3 times as high!


Update: I've updated the values to include test runs with the Mina-based HTTP engine, both on Windows and on Linux. Results for Grizzly on Windows are not included because Grizzly aborts/resets connections far too frequently when running on Windows 2008.

published by luisneves at 00:45
Wednesday, 31 December 2008

A fast implementation of WSTest in Java

I've recently come across Microsoft's updated versions of the WSTest Web Services Benchmark and the .NET StockTrader Sample Application. They wasted no time bragging about the results :-)

Microsoft encourages people to download the benchmark kit and perform their own tests, so I did just that. I will ignore the StockTrader app for now because it is more complex to install and analyze; I will focus on the WSTest benchmark. The .NET/WCF results are very good, and the guys at the benchmark labs seem to really know their stuff. It's a pity that the benchmark chose to compare .NET/WCF against WebSphere, probably the most expensive, slow and cumbersome of all Java application servers.

In Java-land there are faster solutions to choose from. I decided to implement my own version of the benchmark to verify just how fast or how slow a Java implementation can be.

The test is essentially an XML serialization/deserialization benchmark, so I picked the speedy JiBX as the framework for Java/XML data binding. JiBX is only as fast as the underlying XML parser, and the fastest StAX parser that I know of is the Aalto XML Processor. We also need an HTTP layer, and for this I really like the Mina HTTP codec.

With all the ingredients in place it didn't take long to produce a benchmark implementation that doesn't suck :-). The code is available here.

And what about the results? Unfortunately I don't have a server-class machine lying around for running proper tests. However, I do have VirtualBox and two virtual machines: one with "Windows2003 Server" that runs the "Self-Hosted" WSTest application, and another with "Ubuntu 8.10 Server" that runs the Java implementation using the sun-java-jdk-1.6.10 JVM. Using soapUI as a load generator, the "linux/java" setup runs circles around "windows/.net/wcf"; in some cases the throughput numbers are more than twice as high. Of course, these results should be taken with a truckload of salt: the tests should have been performed on a proper server machine, using Windows 2008 Server in the .NET setup and with several machines running the load generators. I would love to hear from someone who has a "benchmark lab".

Update: The HTTP bits are now handled by Grizzly; the performance seems to be better.

Update: Check the follow-up post for a more detailed performance test.

published by luisneves at 01:01
Saturday, 31 May 2008

Need for speed

Sapo Broker just got a huge performance boost.

There are two reasons for it. The first was the change from H2 to Berkeley DB. H2 is a very nice database, but not adequate for a high-performance message store. I chose H2 initially because of its ease of use and my familiarity with SQL, but it just doesn't hold up under our usage patterns. I picked BDB looking only for increased stability under load, which I got; the boost in performance was a surprise. Tests in various scenarios show that BDB is rock solid, with an increase in throughput ranging from 100% to 300%!

The other source of performance improvement has to do with XML parsing. We now use Woodstox. I had heard of Woodstox, but I didn't imagine that the performance difference would be so significant compared with SJSXP, the parser that comes bundled with the JVM. Zero code changes and an extra ".jar" file: that was the cost of having nearly twice as much throughput... I couldn't believe my eyes when I saw it.
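The "zero code changes" part follows from how StAX providers are discovered. A minimal sketch (the class name and the XML snippet are made up for illustration): code written against the standard javax.xml.stream API never names a concrete parser, so XMLInputFactory.newInstance() picks whichever implementation service discovery finds first on the classpath, and dropping in the Woodstox jar silently replaces the bundled one.

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class StaxSwapDemo {
    public static void main(String[] args) throws Exception {
        // newInstance() locates a StAX implementation via the JDK's
        // service discovery mechanism; with the Woodstox jar on the
        // classpath this returns a Woodstox factory instead of SJSXP,
        // with no change to the code below.
        XMLInputFactory factory = XMLInputFactory.newInstance();
        System.out.println("Parser: " + factory.getClass().getName());

        // The pull-parsing loop is written purely against the
        // javax.xml.stream API, so it is parser-agnostic.
        XMLStreamReader reader = factory.createXMLStreamReader(
                new StringReader("<msg><body>hello</body></msg>"));
        StringBuilder text = new StringBuilder();
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.CHARACTERS) {
                text.append(reader.getText());
            }
        }
        reader.close();
        System.out.println("Payload: " + text);
    }
}
```

Printing the factory class name is a quick way to confirm which parser actually got picked up after adding the extra jar.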

Unexpected performance improvements.... I like them!

published by luisneves at 22:21
