How to check performance in J2EE applications

This article provides some tips on measuring application performance and insight into what to look for while tuning J2EE-based applications. It is written predominantly for the Solaris platform.

Measuring CPU utilization:

  Sample output  
  spdim502:# mpstat 2

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0   10   0   84   111    6  298   16   52   13    0   308    0   1   0  99
  1   10   0   99    43   26  273   14   55   12    0   313    0   0   0  99
  2   10   0  101     9    1  257   12   46   10    0   316    0   0   0  99
  3   12   0  102   306  202  245    7   30   20    0   268    0   0   0  99

Let’s understand the important parameters:

csw – Voluntary context switches. If this number increases steadily and the application is not I/O bound, it may indicate mutex contention.
icsw – Involuntary context switches. When this number increases past 500, the system is under heavy load.
smtx – Spins on mutex locks (lock not acquired on the first try). If smtx increases sharply, for instance from 50 to 500, it is a sign of a system resource bottleneck (e.g., network or disk).
 
What can we control from a developer's point of view:

Do you see increasing csw? For a Java application, an increasing csw value most likely has to do with network use. A common cause of a high csw value is creating too many socket connections, either by not pooling connections or by handling new connections inefficiently.

Do you see increasing icsw? A common cause of this is preemption, most likely because of an end of time slice on the CPU. For a Java application, this could be a sign that there is room for improvement in code optimization.
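To illustrate the csw point, the sketch below reuses one socket connection for many requests instead of opening a new connection per request; the host, port, and request format are illustrative assumptions, not part of any real application.

  import java.io.BufferedReader;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.io.PrintWriter;
  import java.net.Socket;

  // Sketch: one connection reused for many requests, instead of a new
  // Socket per request. Host, port and the request format are illustrative.
  public class ReusedConnectionClient {
      public static void main(String[] args) throws IOException {
          Socket socket = new Socket("backend.example.com", 9090);
          try {
              PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
              BufferedReader in = new BufferedReader(
                      new InputStreamReader(socket.getInputStream()));
              for (int i = 0; i < 100; i++) {        // 100 requests, 1 connection
                  out.println("PING " + i);           // send a request
                  System.out.println(in.readLine());  // read the reply
              }
          } finally {
              socket.close();                         // close once, at the end
          }
      }
  }

Opening the socket once and closing it only when the client is done avoids the connect/teardown churn that shows up as a rising csw value.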
  

Measuring I/O operations:


  
   spdim502:# iostat -xn 10
    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
    0.1    2.0    1.0    8.7  0.0  0.0    0.0    9.6   0   1 c1t0d0
    0.2    1.8    1.6    7.6  0.0  0.0    0.0    9.5   0   1 c1t1d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.1   0   0 spdim502:vold(pid391)

Let’s understand the important parameters:

%b – Percentage of time the disk is busy (transactions in progress).
%w – Percentage of time there are transactions waiting for service (queue non-empty).
asvc_t – Reports on average response time of active transactions, in milliseconds.
What can we control from a developer's point of view:

For a Java application, disk bottlenecks can often be addressed by using software caches. An example of a software cache would be a JDBC result set cache, or a generated pages cache. Disk reads and writes are slow; therefore, limiting disk access is a sure way to improve performance.
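As an illustration, a minimal read-through cache sketch is shown below; the class and the loadFromDatabase() placeholder are assumptions for the example, not an existing API.

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  // Sketch of a read-through software cache: repeated lookups for the same
  // key are served from memory instead of going back to disk or the database.
  public class ResultCache {
      private final Map<String, String> cache = new ConcurrentHashMap<String, String>();

      public String lookup(String key) {
          String value = cache.get(key);
          if (value == null) {                   // cache miss: take the slow path once
              value = loadFromDatabase(key);     // expensive disk/database access
              cache.put(key, value);
          }
          return value;                          // later calls avoid disk I/O entirely
      }

      private String loadFromDatabase(String key) {
          // Placeholder for a JDBC query or file read.
          return "value-for-" + key;
      }
  }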

Measuring network statistics:


 
      spdim502:# netstat -sP tcp
      TCP     tcpRtoAlgorithm     =     4     tcpRtoMin           =   400
        tcpRtoMax           = 60000     tcpMaxConn          =    -1
        tcpActiveOpens      =847072     tcpPassiveOpens     = 29000
        tcpAttemptFails     =671984     tcpEstabResets      =  1457
        tcpCurrEstab        =    63     tcpOutSegs          =18652415
        tcpOutDataSegs      =13951961   tcpOutDataBytes     =2132917090
        tcpRetransSegs      =206754     tcpRetransBytes     =561128
        tcpOutAck           =4691540    tcpOutAckDelayed    =1493199
        tcpOutUrg           =     0     tcpOutWinUpdate     =  9876
        tcpOutWinProbe      =  2360     tcpOutControl       =1514896
        tcpOutRsts          =237987     tcpOutFastRetrans   =   320
        tcpInSegs           =20992759
        tcpInAckSegs        =13066738   tcpInAckBytes       =2132951249
        tcpInDupAck         =448174     tcpInAckUnsent      =     0
        tcpInInorderSegs    =16777336   tcpInInorderBytes   =3624221608
        tcpInUnorderSegs    =  8896     tcpInUnorderBytes   =8928844
        tcpInDupSegs        = 46180     tcpInDupBytes       =704531
        tcpInPartDupSegs    =   533     tcpInPartDupBytes   =254518
        tcpInPastWinSegs    =   263     tcpInPastWinBytes   =403130
        tcpInWinProbe       =     0     tcpInWinUpdate      =  2359
        tcpInClosed         =   174     tcpRttNoUpdate      =  1057
        tcpRttUpdate        =12872694   tcpTimRetrans       =  8949
        tcpTimRetransDrop   = 31804     tcpTimKeepalive     =149621
        tcpTimKeepaliveProbe= 45556     tcpTimKeepaliveDrop =     0
        tcpListenDrop       =     0     tcpListenDropQ0     =     0
        tcpHalfOpenDrop     =     0     tcpOutSackRetrans   =     2
 
 

Let’s understand the important parameters:

tcpListenDrop – If tcpListenDrop keeps increasing across several samples of the command output, it could indicate that the listen queue is too small.

What can we control from a developer's point of view:

Increase the Java application's thread count if tcpListenDrop remains high for a long time.
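For example, the sketch below uses a larger listen backlog and a pool of worker threads so the accept loop drains the listen queue quickly; the port, backlog, and thread count are illustrative values.

  import java.io.IOException;
  import java.net.ServerSocket;
  import java.net.Socket;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  // Sketch: a bigger listen backlog plus enough worker threads keeps the
  // accept loop from falling behind, which is what drives tcpListenDrop up.
  // Port, backlog and worker count are illustrative values.
  public class PooledServer {
      public static void main(String[] args) throws IOException {
          int port = 8080, backlog = 128, workers = 32;
          ExecutorService pool = Executors.newFixedThreadPool(workers);
          ServerSocket server = new ServerSocket(port, backlog);
          while (true) {
              final Socket client = server.accept();  // pull connections off the queue quickly
              pool.execute(new Runnable() {           // hand the slow work to a worker thread
                  public void run() {
                      handle(client);
                  }
              });
          }
      }

      private static void handle(Socket client) {
          try {
              // Placeholder for the real request handling.
              client.close();
          } catch (IOException ignored) {
          }
      }
  }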

Measuring socket connections:

 

 spdim502:# netstat -a | grep spdim502 | wc -l
        73

Let’s understand the output:

This gives the number of socket connections currently open on the host.

What can we control from a developer's point of view:

For a Java application, a common cause of too many sockets is inefficient use of sockets. It is common practice in Java applications to create a socket connection each time a request is made. Creating and destroying socket connections is not only expensive, but can also cause unnecessary system overhead by creating too many sockets. Creating a connection pool may be a good solution to investigate.
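A minimal sketch of such a pool is shown below: a fixed set of connections is created once and reused through a blocking queue. The connection type, host, port, and pool size are illustrative; a real pool would also handle validation, timeouts, and broken connections.

  import java.net.Socket;
  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.BlockingQueue;

  // Connection-pool sketch: at most `size` sockets ever exist, and each one
  // is reused instead of being created and destroyed per request.
  public class SocketPool {
      private final BlockingQueue<Socket> idle;

      public SocketPool(String host, int port, int size) throws Exception {
          idle = new ArrayBlockingQueue<Socket>(size);
          for (int i = 0; i < size; i++) {
              idle.put(new Socket(host, port));   // open the connections up front
          }
      }

      public Socket borrow() throws InterruptedException {
          return idle.take();                     // blocks until a connection is free
      }

      public void release(Socket socket) throws InterruptedException {
          idle.put(socket);                       // return the connection for reuse
      }
  }

Callers would pair borrow() and release() in a try/finally block so connections always return to the pool.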
 

Java Application Tuning Parameters

Brief suggestions for basic Java server applications are listed below.

Number of Execution Threads

A general rule for thread count is to use as few threads as possible. The JVM performs best with the fewest busy threads. A good starting point for thread count can be found with the following equations.

(Number of Java Execution Threads) = (Number of Transactions) / (Time in seconds)

or, equivalently,

(Number of Execution Threads) = Throughput (transactions/sec)

It is important to remember that these equations give a good starting point for thread count tuning, not the optimal value for your application. The number of execution threads can greatly influence performance; therefore, proper sizing of this value is very important.
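As a worked example, suppose measurement shows roughly 3,000 transactions completing in a 60-second window (illustrative numbers). The rule of thumb gives 3000 / 60 = 50 execution threads as a starting point, which can then be used to size the worker pool:

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  // Starting-point sizing from the rule of thumb above.
  // The measured figures are illustrative assumptions.
  public class ThreadCountEstimate {
      public static void main(String[] args) {
          long transactions = 3000;                      // transactions observed
          long seconds = 60;                             // over this measurement window
          int threads = (int) (transactions / seconds);  // 3000 / 60 = 50 tx/sec
          System.out.println("Starting thread count: " + threads);

          ExecutorService workers = Executors.newFixedThreadPool(threads);
          // ... submit request-handling tasks to the pool, measure, then re-tune ...
          workers.shutdown();
      }
  }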

Number of Database Connections

The number of database connections, commonly known as a connection or resource pool, is closely tied to the number of execution threads. A rule of thumb is to match the number of database connections to the number of execution threads. This is a good starting point for finding the correct number of database connections. Over-configuring this value could cause unnecessary overhead on the database, while under-configuring could tie up all execution threads waiting on database I/O.

(Number of Database Connections) = (Number of Execution Threads)
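For instance, assuming Apache Commons DBCP 1.x is available, a pool sized to the execution thread count might be configured as sketched below; the driver class, URL, and credentials are placeholders.

  import org.apache.commons.dbcp.BasicDataSource;

  // Sketch (assumes Commons DBCP 1.x on the classpath): the pool's maximum
  // size is set equal to the number of execution threads. Driver, URL and
  // credentials are illustrative placeholders.
  public class DataSourceConfig {
      public static BasicDataSource create(int executionThreads) {
          BasicDataSource ds = new BasicDataSource();
          ds.setDriverClassName("oracle.jdbc.OracleDriver");
          ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL");
          ds.setUsername("appuser");
          ds.setPassword("secret");
          ds.setInitialSize(executionThreads);   // open all connections up front
          ds.setMaxActive(executionThreads);     // never exceed the thread count
          return ds;
      }
  }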

Software Caches

Many server-side Java applications implement some type of software cache, commonly for JDBC result sets or frequently generated dynamic pages. Software caches are the part of an application most likely to cause unnecessary garbage collection overhead, driven by the cache's architecture and its replacement policy.

Most middle-tier applications will have some sort of caching. These caches should be studied with garbage collection in mind to see whether they increase GC activity; choose the architecture and replacement strategy that generates the least garbage. Careful implementation of caches with garbage collection in mind greatly improves performance simply by limiting garbage.
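One simple way to bound both the memory footprint and the garbage a cache generates is a size-limited LRU cache; a minimal sketch using java.util.LinkedHashMap is shown below, with an illustrative capacity chosen by the caller.

  import java.util.LinkedHashMap;
  import java.util.Map;

  // Sketch of a size-bounded LRU cache. Keeping the cache bounded limits the
  // amount of garbage the cache itself creates and retains.
  public class LruCache<K, V> extends LinkedHashMap<K, V> {
      private final int capacity;

      public LruCache(int capacity) {
          super(16, 0.75f, true);      // access-order: the eldest entry is the LRU one
          this.capacity = capacity;
      }

      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
          return size() > capacity;    // evict once the bound is exceeded
      }
  }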
  

What to explore?

The JVMPI (Java Virtual Machine Profiler Interface) is a two-way function call interface between the Java virtual machine and an in-process profiler agent.

On one hand, the virtual machine notifies the profiler agent of various events, corresponding to, for example, heap allocation, thread start, etc. On the other hand, the profiler agent issues controls and requests for more information through the JVMPI. For example, the profiler agent can turn on/off a specific event notification, based on the needs of the profiler front-end.
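As a practical starting point, the hprof agent bundled with JVMPI-era JDKs is one such profiler agent; CPU sampling can be enabled from the command line, for example (the class name and output file name below are placeholders):

     spdim502:# java -Xrunhprof:cpu=samples,depth=8,file=app.hprof.txt MyServerApp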