Saturday, June 11, 2016


What is ORCA?

ORCA is a command-line tool, written in Java using Akka, for performance testing of APIs. It provides an interactive console where a user can submit or manage a performance-testing job. It uses the actor model instead of a thread model and thus scales far better than thread-based tools like JMeter. It supports clustering, so for very high load generation you can use a swarm of ORCA agents. One of the problems in performance testing of APIs is generating dynamic request bodies and request URLs: in most tools you either have to pre-generate these URLs and supply them as a CSV, or write a plugin to generate them. ORCA has built-in support for dynamic URL and request-body generation, so you can test APIs with dynamic data rather than static data. It uses mustache templates, data files and data-generator functions to support this.
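ORCA's actual template engine isn't shown here, but conceptually, rendering a dynamic URL from a mustache-style template and a data row looks something like this sketch (the class name and placeholders are mine, not ORCA's):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UrlTemplate {
    private static final Pattern VAR = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    // Replace each {{name}} placeholder with the matching value from the data map.
    public static String render(String template, Map<String, String> data) {
        Matcher m = VAR.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(data.getOrDefault(m.group(1), "")));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String url = render("/users/{{userId}}/orders?page={{page}}",
                Map.of("userId", "42", "page", "1"));
        System.out.println(url); // prints: /users/42/orders?page=1
    }
}
```

In a real run the data map would come from a data file or a generator function, one row per request.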

Load Generation

Below are some screenshots from tests I performed using ORCA. For the test setup I ran an nginx server on a 4-core, 16 GB RAM machine, and started ORCA on a separate box with the same configuration.

Number of requests: 100000, concurrency: 5000  

Above, the picture on the right shows the number of HTTP connections in the ESTABLISHED state during the test.

The pictures below show the load generated with 1000, 2000 and 4000 concurrency.

How to use and download?

You can download the latest release from my GitHub page; for complete details and features, head to the wiki section:

Comments, bugs, suggestions are welcome.

Monday, January 7, 2013

Lets Play with JMX-1

Let's just say you have written a program which prints your name every 0.5 seconds, 10^6 times. You started your program and then realized, "Enough of my name, now I want to see my friend's name." What will you do?? Set your name variable and restart your program?? What if next time you want your name 50,000 times and then your dog's name 50,000 times? Or let's just say I don't want you to kill the program: can you change the name it prints on the fly??

Welcome to JMX (Java Management Extensions).

What is JMX??

In really simple words, JMX is a technology which allows you to dynamically manage and monitor Java resources. Using it you can also manage your JVM. A nice tutorial.

How JMX works??

JMX has three layers :
a.) Instrumentation Level :
     At the instrumentation level you create an interface which defines the requirements for
     implementing a manageable resource; in other words, you expose all the methods through which
     you will manage your resource. In JMX this interface has a special name, MBean (there are
     different types of MBeans, but details about them later). After creating the MBean you need to
     register it with an MBeanServer.

b.) Agent Level :
      Similarly, the agent level defines the requirements for implementing agents. Agents control
      and expose the managed resources which are registered with them.

c.) Connectors and Protocol Adapters:
     This layer gives you connectors and different protocol adapters which allow you to connect to
     the agents and manage your resources.

Let's take an example, now!!

Let's take the scenario given at the start (though it's a really bad use case of JMX, for starters
we can use it). To solve the problem we will write one class with two fields, loops and message, both of which I want to manage, plus one method which prints the message the given number of times. So first let's write the MBean (interface).

public interface TestExampleMBean {
    public int getLoops();
    public void setLoops(int n);
    public void setMessage(String a);
    public String getMessage();
}

Now let's implement it and create a resource:

public class TestExample implements TestExampleMBean, Runnable {
    private int loops;
    private String message;

    public int getLoops() {
        return loops;
    }
    public void setLoops(int loops) {
        this.loops = loops;
    }
    public String getMessage() {
        return message;
    }
    public void setMessage(String message) {
        this.message = message;
    }
    public void run() {
        for (int i = 0; i < loops; i++) {
            try {
                System.out.println(message); // print the managed message
                Thread.sleep(500);           // every 0.5 seconds
            } catch (Exception e) {
                System.out.println("Exception caught");
            }
        }
    }
}

Now the main class :

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class Main {
    public static void main(String[] args) throws Exception {
        System.out.println("Starting Main");
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // "domain:key=value" -- this exact name is only an example, use your own
        ObjectName name = new ObjectName("com.example:type=TestExample");
        TestExample te = new TestExample();
        mbs.registerMBean(te, name);
        te.setLoops(1000000);
        te.setMessage("Shwet Shashank");
        Thread t = new Thread(te);
        t.start();
        t.join(); // keep the JVM alive while the worker runs
    }
}


As you see, first we created an interface, then we implemented it to create a resource, and then we
registered it with our MBeanServer. The important lines in the Main class are:

MBeanServer mbs = ManagementFactory.getPlatformMBeanServer(); // get the platform MBean server

ObjectName name = new ObjectName("com.example:type=TestExample");

Each object must be registered with a name. ObjectName has some outlined rules for creating one;
the basic format is "domain:<key>=<value>[,<key>=<value>...]", where domain is typically the package name and the rest is a comma-separated list of key=value string pairs.
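For illustration, here is how an ObjectName is constructed and inspected (the domain com.example.jmx and the keys are made up for this example):

```java
import javax.management.ObjectName;

public class ObjectNameDemo {
    // Build a name in the "domain:key=value[,key=value]" format described above.
    public static ObjectName buildName() throws Exception {
        return new ObjectName("com.example.jmx:type=TestExample,name=demo");
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = buildName();
        System.out.println(name.getDomain());            // prints: com.example.jmx
        System.out.println(name.getKeyProperty("type")); // prints: TestExample
    }
}
```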

Now run the above program and in a separate console start jConsole. Click on New Connection;
under the Local Process tab you will see your program running. Connect to it and click on the MBeans tab. You will see something like this:

Open the tree under your domain (it will have all the classes registered in this package), click on Message, and you will see something like this:

Now give a new name:

Click Refresh and check the console where your program was printing your name; it's printing the new name now. DONE!!!
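What jConsole does under the hood when you type a new value can also be done in code through the MBeanServer. Here is a self-contained sketch (the bean and object names are made up, not from the example above):

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxAttributeDemo {
    // Same MBean convention as before: the interface name must end in "MBean".
    public interface GreeterMBean {
        String getMessage();
        void setMessage(String m);
    }

    public static class Greeter implements GreeterMBean {
        private volatile String message = "Shwet Shashank";
        public String getMessage() { return message; }
        public void setMessage(String m) { message = m; }
    }

    public static String changeMessage(String newMessage) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example.jmx:type=Greeter"); // example name
        mbs.registerMBean(new Greeter(), name);
        // This is what jConsole does when you enter a new value for the Message attribute.
        mbs.setAttribute(name, new Attribute("Message", newMessage));
        String result = (String) mbs.getAttribute(name, "Message");
        mbs.unregisterMBean(name);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(changeMessage("New Name")); // prints: New Name
    }
}
```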

Monday, December 10, 2012

Load Balancers-3

Okay, the final post in this series; let's talk about the big daddy. The last load balancer I explored was HAProxy, and I fell in love with it because of its light weight, high reliability and awesome performance.


HAProxy is a very light, fast, highly reliable load balancer and proxy solution for TCP-based applications (it handles any TCP communication, not just HTTP). It's based on an event model and is a single-process system, which enables it to handle heavy load. It's a pure proxy; unlike Apache and nginx it doesn't serve any files, so remember, it's not a web server. One really good feature it has is a status page with all the details, like how many requests went to which server, bytes transferred, etc., which helps a lot in understanding what exactly is happening.


You can download the setup from their official download page.
On Linux you can install it with:

$> sudo apt-get install haproxy

Note : If you want SSL support, use a version >= 1.5-dev12 (you will have to compile and build it).

Configure :

In my case I needed SSL support with HAProxy (the authentication server was talking to the app over SSL), so I tried to install and configure version 1.5-dev12, but I couldn't figure out where to put the SSL certs and how to enable the SSL port, and failed to configure it. So I decided to put an SSL offloader in front of HAProxy, which offloads the SSL and then passes the request down to HAProxy. Stunnel is a popular option for this kind of scenario, but I really didn't have time to learn how to install and configure stunnel, so once again I went ahead with my beloved Apache :).

So the final setup was something like this :

Okay, enough talk; let's configure both Apache and HAProxy and start the whole system.
For this configuration, suppose HAProxy and Apache are on one machine and the apps are on separate machines.

Apache Config :

Create a virtual host which listens on the SSL port:

<IfModule mod_ssl.c>
Listen 8443
<VirtualHost *:8443>
        ProxyRequests off
        SSLEngine on
        SSLProxyEngine on
        SSLCertificateFile    /home/apache_certs/server.crt
        SSLCertificateKeyFile /home/apache_certs/server.key

        ProxyPass /           #passing it to haproxy
        ProxyPassReverse /    #passing it to haproxy
</VirtualHost>
</IfModule>

Here I am listening on port 8443, and after offloading the SSL I am sending the request to HAProxy.

Haproxy config :

On the HAProxy side I am opening two listening ports: one for direct HTTP communication and one which will listen for the requests being forwarded by Apache. HAProxy then forwards them down to one of the application servers.

global
        log   local0
        log   local1 notice
        #log loghost    local0 info
        maxconn 4096
        #chroot /usr/share/haproxy
        #user haproxy
        #group haproxy

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen ha_stats
          balance roundrobin
          mode http
          timeout client 30000ms
          stats enable
          stats uri /lb?stats

listen app_non_ssl
        mode http
        option httplog
        balance roundrobin
        option httpclose
        option redispatch
        maxconn 1000
        reqadd X-Forwarded-Proto:\ http
        server webserver1 maxconn 100 weight 100
        server webserver2 maxconn 100 weight 100

listen app_from_apache
        mode http
        option httplog
        balance roundrobin
        option httpclose
        option redispatch
        maxconn 1000
        reqadd X-Forwarded-Proto:\ https
        server webserver1  maxconn 100 weight 100
        server webserver2  maxconn 100 weight 100

A HAProxy config basically has three kinds of sections: global, defaults and listen. The global section contains the settings for the HAProxy instance, like the log server location, max connections, etc. The defaults section has the default settings for each listen port you open (let's just say, for each server instance you start). A listen block is where you mention which port you will listen on (you can have multiple listen blocks). In the listen blocks I have mentioned my backend servers, to which HAProxy forwards the requests (see the server definitions). I suggest going through the HAProxy documentation to see all the options available. Most of the options in a listen block are pretty straightforward, but I'll discuss these:

1. balance : Which algorithm HAProxy uses to distribute the load.
2. maxconn : The maximum number of connections it will accept.
3. server : A backend server it can forward requests to.

And you are done!!

This was the final setup I used for my Performance testing. :-)

Saturday, November 24, 2012

Load Balancers - 2

The last post was about using nginx as the load balancer; this post is about using the Apache HTTP server as a load balancer. Let's get started with Apache (my favorite).


Apache HTTP Server needs no introduction; it's like the backbone of the WWW. According to Wikipedia:

The Apache HTTP Server, commonly referred to as Apache (/əˈpætʃiː/ ə-PA-chee), is web server software notable for playing a key role in the initial growth of the World Wide Web.[3] In 2009 it became the first web server software to surpass the 100 million website milestone.

Intro done; now let's install and configure it.

Installation :

You can download apache http server from apache download site 
On linux you can install the package using :

sudo apt-get install apache2

Configure :

Apache provides modules to use it as a load balancer, but by default they are not enabled, so the first step is to enable the load balancer and proxy modules. Let's enable them:

1. Enable Modules

  • sudo a2enmod proxy_balancer
  • sudo a2enmod proxy_connect
  • sudo a2enmod proxy_http

2. Restart apache 

  • sudo /etc/init.d/apache2 restart

3. Now we need to configure one virtual host. Let's take the last post's example, where we had two app servers and one load balancer machine.
We will direct the load from the load balancer to the app servers. Create a new file /etc/apache2/sites-enabled/my_load_balancer and enter:

Listen 80
<VirtualHost *:80>
        ProxyRequests off
        ProxyPreserveHost On

        <Proxy balancer://my_app_servers>
                BalancerMember loadfactor=1
                BalancerMember loadfactor=2
                #Order deny,allow
                Allow from all
                ProxySet lbmethod=byrequests
        </Proxy>

        <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from all
        </Location>

        ProxyPass /balancer-manager !
        ProxyPass / balancer://my_app_servers/
</VirtualHost>

Here we are creating a virtual host which is listening on port 80.

4. Restart apache.

  • sudo /etc/init.d/apache2 restart

Note : You need to comment out the default NameVirtualHost (/etc/apache2/ports.conf) in case you are configuring your load balancer to listen on port 80.

Discuss : 

So, as you can see, we mention the backend servers using BalancerMember, and we can configure how much load should be directed to each member using loadfactor. You can also configure which algorithm to use to distribute the load with lbmethod. These settings are the bare minimum to start your load balancer.

If you want to know what all options are available with proxy module please check this as there are many options and its not possible to discuss all of them here.

Problems with apache :

The only problem with Apache is that as you increase the load its performance starts degrading; in my case I had a decent load and it performed well under 5000 requests an hour.

Thursday, November 15, 2012

Load Balancers

"Scalability." The first time I heard this word, I never thought that one day it would haunt me so much that I would have a few sleepless nights. It all started when I was asked to do horizontal-scale testing of our backend system. But don't worry, I won't lecture you on scalability testing; rather, I want to share a few new (interesting) things that I learned while doing it.

So the scenario was something like this :
Our webapp (a supply chain system) is distributed across 12 VMs (test env), and I had to see how the system behaves if we add one more similar setup and use load balancers on top to distribute the load. Can it handle more load? Can it scale?? I said let's do it; the only problem was I had no clue about which load balancer I could use, how load balancers work, or which one was best for me.

I googled and found three names, Apache web server, nginx and HAProxy, which people use as load balancers. I did some small research on these three and tried them one by one. This post is all about the pros and cons of these three software load balancers (I am not doing any benchmarking here, just sharing how to configure and use them and what problems I faced).

So let's start with the easiest to configure, and a really good load balancer: nginx.


Nginx is a web server and a reverse proxy server for the HTTP, SMTP, IMAP and POP3 protocols, plus it can work as a software load balancer. Nginx is really fast when it comes to serving static content; it can scale up to 10,000 req/sec. What makes it so fast is its event-driven architecture: it doesn't have an Apache-type process or thread model, and because of this it has a very small memory footprint.


Linux  :   sudo apt-get install nginx


Suppose you have two backend servers and you have installed nginx on a separate machine.
Create a file /etc/nginx/sites-enabled/myloadbalancer.cfg:

upstream myservers {
               server ;
}

server {
              listen 80;
              server_name localhost;
              access_log /var/log/nginx/access.log;
              location / {
                        proxy_pass http://myservers;
                        proxy_set_header Host $host;
              }
}
And you are done. One important thing: if your application needs the hostname, you will have to explicitly set the Host header (I needed it, and it took me two hours to figure out why our application suddenly started giving bad-hostname exceptions).

Problems with Nginx

After configuring the load balancer I was happy; everything looked fine, the only problem being that some particular REST calls started failing, which was unexpected. After two days of debugging I finally found that some of the headers our application was setting before making calls were missing. Nginx was stripping off all the headers starting with X_. I googled and finally found that, as a security measure, nginx strips off certain types of headers. So if your application needs headers starting with X_, or any header whose name has _ in it (it converts _ to -), then nginx is probably not a good idea for load balancing. Though there is a patch which prevents the _ (underscore) to - (dash) conversion, in my case nginx was simply stripping the headers off, so it didn't help me.
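For what it's worth, nginx does have a directive that stops it from dropping header names containing underscores; I didn't try it at the time, so treat this as a pointer rather than a tested fix:

```nginx
http {
    # By default nginx treats request header names containing "_" as
    # invalid and silently drops them; this accepts them instead.
    underscores_in_headers on;
}
```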

And my three days of work went down the gutter because I needed those headers, and it forced me to move from nginx to Apache, the next easiest one to configure. Let's configure Apache in the next post.


Monday, August 20, 2012

Singleton Class in Java

It was in my first interview that I got this question: can you design a class for which you can create only one object?? At the time it went over my head; then I searched for what this was all about and got to know about the singleton pattern. In one of my latest interviews this question was asked once again, but this time it was something like this: "Can you design a singleton class?? How will you test it?? What if I serialize and then deserialize the object, won't it create two different objects?? What if we have multiple classloaders??" Let's see.

My first Singleton Class

class Singleton {
    private static Singleton sg = null;
    private Singleton() {
    }
    public static Singleton getInstance() throws Exception {
        if (sg == null) {
            sg = new Singleton();
        }
        return sg;
    }
}

So far so good!! This example works fine in single-threaded programs. Now let's take the case of multi-threaded programs. Say two threads, t1 and t2, call the getInstance() method: t1 comes in and checks that sg is null, but before it can instantiate, the JVM suspends the thread and starts t2; t2 comes in, checks that sg is null, creates a new object and returns. The JVM resumes t1, and since t1 has already done its check, it goes ahead, creates a new object and returns it. You are doomed: your JVM has two instances of a singleton class. So we need synchronization; let's modify our class.

class Singleton {
    private static Singleton sg = null;
    private Singleton() {
    }
    public static synchronized Singleton getInstance() throws Exception {
        if (sg == null) {
            sg = new Singleton();
        }
        return sg;
    }
}

So the multiple-threads problem is solved, but is everything okay with this class?? Isn't it too expensive to synchronize the getInstance() method, given that you need the synchronization only the first time?? Let's see one more conservative way:

class Singleton {
    public static final Singleton sg = new Singleton();
    private Singleton() {
    }
}

So we have two options: either synchronize getInstance() or use the eager implementation above. More on multiple classloaders and serialization in the next post.
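For completeness, there is a third well-known variant, the initialization-on-demand holder idiom; it is lazy like the first version but thread-safe without synchronizing getInstance(), because the JVM guarantees class initialization happens exactly once (shown here as a separate class for illustration):

```java
public class HolderSingleton {
    private HolderSingleton() {
    }

    // Holder is not initialized until getInstance() first touches it,
    // and the JVM serializes class initialization across threads.
    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // prints: true
    }
}
```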

Wednesday, August 1, 2012


One of the most important kinds of testing in the software field is load testing. Everyone wants to know how much his application can scale, what the breaking point is, whether there is any memory leak or deadlock happening, and
how much CPU is being utilized. There are lots of ways to generate load and test, but we are not talking about testing methodologies or testing tools here; rather, we will talk about monitoring. How do you monitor a running Java application to determine its performance??

One very nice monitoring tool provided by the JDK is JConsole. It's a GUI tool which monitors a JVM; all you need is to start your application with the JMX management agent and connect JConsole to it. Let's have a look.

How to start JMX Management Agent on an Application

To start the JMX agent you need to set the following Java options before starting the application:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=<some port>

If you want authentication and SSL you need to set the following options as well:

-Dcom.sun.management.jmxremote.port=<some port>
-Dcom.sun.management.jmxremote.ssl=true
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.access.file=<path to access file>
-Dcom.sun.management.jmxremote.password.file=<path to password file>
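Putting the standard options together, a no-auth launch for local testing would look something like this (the port 9010 and class name Main are examples, not requirements):

```shell
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  Main
```

Only do this on a trusted network; with authenticate=false anyone who can reach the port can manage your JVM.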

Start JConsole

To start JConsole, type jconsole on the command line; make sure JAVA_HOME is set on your machine. You will see something like this on your monitor.


If you have any JVM running locally, you can see it under the Local Process tab; you can connect directly by double-clicking the process.

You can even connect to a remote JVM using a host:port combination, or using the following complete URL: service:jmx:rmi:///jndi/rmi://hostName:portNum/jmxrmi

Start Monitoring

Once you are connected to a JVM you will see four blank graphs; you are done. Now sit back, relax, and let your application run (make some dummy requests and calls to your app). After some time you will see the graphs being generated.

At the top you can find tabs (Overview, Memory, Threads, Classes, VM Summary, MBeans). These are basically the different resources you can monitor.

Memory Monitoring

The Memory tab shows you the heap utilization; you can even select the type of memory you want to monitor, i.e. Eden space, survivor space, permanent generation. If, after a long time, any memory graph's baseline keeps going up, it means your application has a memory leak and will crash after some time. Usually a memory graph goes up and down around a flat baseline. See the graph below.

Similarly you can monitor threads, CPU utilization, VM summary, etc.


JConsole is not only about monitoring; you can even manage your VM. You can pause your service, perform GC, and kill threads. For all the details, I suggest you play with it. You will enjoy it.

When you are done playing with JConsole, try playing with YourKit.