Tuesday 9 April 2013

Per thread CPU usage on Linux


To investigate per-thread CPU usage on Linux, use the ‘top’ command with the -H option, which adds per-thread information that the default ‘top’ output does not show.
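For example, to limit the output to the threads of a single Java process, the -H option can be combined with -p (replace <java pid> with the process ID):

top -H -p <java pid>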

The output of ‘top -H’ on Linux shows the breakdown of the CPU usage on the machine by individual threads. The top output has the following sections of interest:

top - 16:15:45 up 21 days,  2:27,  3 users,  load average: 17.94, 12.30, 5.52
Tasks: 150 total,  26 running, 124 sleeping,   0 stopped,   0 zombie
Cpu(s): 87.3% us,  1.2% sy,  0.0% ni, 27.6% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   4039848k total,  3999776k used,   40072k free,    92824k buffers
Swap:  2097144k total,      224k used, 2096920k free,  1131652k cached


The Cpu(s) row in this header section shows the CPU usage in terms of the following fields:
us - Percentage of CPU time spent in user space.
sy - Percentage of CPU time spent in kernel space.
ni - Percentage of CPU time spent on low priority processes.
id - Percentage of CPU time spent idle.
wa - Percentage of CPU time spent in wait (on disk).
hi - Percentage of CPU time spent handling hardware interrupts.
si - Percentage of CPU time spent handling software interrupts.

The "us", "sy" and "id" values are useful as the user, system (kernel) and idle CPU time respectively.

The next section shows the per-thread breakdown of the CPU usage.

PID USER    PR  NI  VIRT  RES  SHR S %CPU   %MEM    TIME+  COMMAND   
 31253 user1   16   0 2112m 2.1g 1764 R 37.0   53.2   0:39.89 java   
 31249 user1   16   0 2112m 2.1g 1764 R 15.5   53.2   0:38.29 java   
 31244 user1   16   0 2112m 2.1g 1764 R 13.6   53.2   0:40.05 java   
 31250 user1   16   0 2112m 2.1g 1764 R 13.6   53.2   0:41.23 java   
 31242 user1   16   0 2112m 2.1g 1764 R 12.9   53.2   0:40.56 java   
 31238 user1   16   0 2112m 2.1g 1764 S 12.6   53.2   1:22.21 java   
 31246 user1   16   0 2112m 2.1g 1764 R 12.6   53.2   0:39.62 java   
 31248 user1   16   0 2112m 2.1g 1764 R 12.6   53.2   0:39.40 java   
 31258 user1   16   0 2112m 2.1g 1764 R 12.6   53.2   0:39.98 java   
 31264 user1   17   0 2112m 2.1g 1764 R 12.6   53.2   0:39.54 java   
 31243 user1   16   0 2112m 2.1g 1764 R 12.2   53.2   0:37.43 java   
 31245 user1   16   0 2112m 2.1g 1764 R 12.2   53.2   0:37.53 java   
 ...


This provides the following information on a per-thread basis:
PID - The thread ID. This can be converted to hexadecimal and used to correlate with the "native ID" (nid) in a Java thread dump file; see the conversion example further below.
USER - The user ID of the user that started the process.
PR - The priority of the thread.
NI - The "nice" value for the process.
VIRT - The virtual memory (allocated) usage of the process.
RES - The resident memory (committed) usage of the process.
SHR - The shared memory usage of the process.
S - The state of the thread. This can be one of the following:
R - Running
S - Sleeping
D - Uninterruptible sleep
T - Traced
Z - Zombie
%CPU - The percentage of a single CPU usage by the thread.
%MEM - The percentage of the memory used by the process.
TIME+ - The amount of CPU time used by the thread.
COMMAND - The name of the process executable.

Note that the "Cpu(s)" line in the header of the output shows the percentage usage across all of the available CPUs, whereas the %CPU column above represents the percentage usage of a single CPU. Thus, for example, on a four-CPU machine the Cpu(s) row will total 100% while the %CPU column can total up to 400%. To see per-CPU usage in the header section, press 1 while top is running.
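If the sysstat package is installed, mpstat offers another way to see a rolling per-CPU breakdown, which can be useful for cross-checking the figures in the top header:

mpstat -P ALL 1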

What to look for in the top -H output?

In the per-thread breakdown of the CPU usage shown above, the Java process is taking approximately 75% of the CPU usage. This value is found by totaling the %CPU column for all the Java threads (not all threads are shown above) and dividing by the number of CPUs. The Java process is not limited by other processes, because approximately 25% of the CPU is still idle.
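As a rough way to compute that total from the command line (this assumes top's default column layout, in which %CPU is the ninth field), you can sum the %CPU values for the java threads from a single batch-mode snapshot and then divide by the number of CPUs yourself:

top -b -H -n 1 | grep java | awk '{ sum += $9 } END { print sum }'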

You can also see that the CPU usage of the Java process is spread reasonably evenly over all of the threads in the Java process. This spread implies that no one thread has a particular problem. Although the application is allowed to use most of the available CPU, approximately 25% of the total CPU is idle, which suggests there are points of contention or delay in the Java process that could be identified and addressed.

A report indicating that active processes are using a small percentage of CPU, even though the machine appears idle, means that the performance of the application is probably limited by points of contention or process delay, preventing the application from scaling to use all of the available CPU.

If a deadlock is present, the reported CPU usage for the Java process is low or zero.

If threads are looping, the Java CPU usage approaches 100%, but a small number of the threads account for all of that CPU time.

Whenever you have threads of interest, note their PID values, convert them to hexadecimal and look them up in the thread dump file to discover the names of the application threads. Then look at each thread's stack trace to understand the kind of work it is doing.
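For example, to convert thread ID 31253 from the output above to hexadecimal and locate it in a thread dump (the dump file name here is just an example):

printf '%x\n' 31253         # prints 7a15
grep -i 'nid=0x7a15' threaddump.txt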

How to generate a thread dump for a Java process?

The thread dump for a Java process can be generated using the command:
/bin/kill -3 <pid>
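If the JDK's jstack tool is available on the machine, it can also be used to capture the same information directly to a file (the output file name is just an example):

jstack <pid> > threaddump.txt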

Wednesday 27 March 2013

“Too many open files” error in highly multithreaded application


Error:
Caused by: java.io.FileNotFoundException: errors.txt (Too many open files)
    at java.io.FileOutputStream.openAppend(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:192)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:116)
    at hydra.FileUtil.appendToFile(FileUtil.java:528)
    ... 5 more


If you see the “Too many open files” error in a highly multithreaded application, it means the process has reached the maximum number of open file handles (file descriptors).



To see where you stand with file handles on Linux, run the command:

sysctl fs.file-nr


This command prints three numbers: the number of allocated file handles, the number of allocated but unused file handles, and the system-wide maximum number of file handles.
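To see how many file descriptors a particular process currently holds open, and what its own limits are, the following commands can help (replace <pid> with the Java process ID):

ls /proc/<pid>/fd | wc -l
cat /proc/<pid>/limits | grep 'open files'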




To raise this limit, add something similar to the following to /etc/sysctl.conf:


fs.file-max = 204708



There is also a per-user file handle limit, which is what the "ulimit -n" command reports. If you are running into this limit, you will have to add entries like the following to the /etc/security/limits.conf file for the user:

@users hard nofile 81920
@users soft nofile 8192
@users hard nproc unlimited
@users soft nproc 501408
@users soft core unlimited
@users hard core unlimited
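The limits.conf entries take effect on the user's next login session. To verify what limits a session actually picked up, check the soft and hard open-file limits with ulimit:

ulimit -n
ulimit -Hn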


Apply the /etc/sysctl.conf changes using:

sysctl -p

Wednesday 27 February 2013

OutOfMemoryError because of default stack size

Error String:
       java.lang.OutOfMemoryError: unable to create new native thread
              at java.lang.Thread.start0(Native Method)
              at java.lang.Thread.start(Thread.java:597)

The application dies with “java.lang.OutOfMemoryError: unable to create new native thread”. The solution is to reduce the thread stack size. The JVM has an interesting implementation, the design of which I don’t completely understand, but the implication is that the more memory is allocated for the heap (not necessarily used by the heap), the less memory is available for thread stacks. Since threads are created from stack memory, in practice more “memory” in the heap sense (which is usually what people talk about) results in fewer threads being able to run concurrently.

Each thread by default gets a 1MB stack. In a highly multithreaded system (~1500 threads), it is possible to run out of contiguous blocks of memory on the machine. The limitation is normally stack space, which must be allocated in contiguous blocks, and since every thread's stack is scattered around the address space, you rapidly run out of contiguous blocks. Reference: http://stackoverflow.com/questions/481900/whats-the-maximum-number-of-threads-in-windows-server-2003
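As a rough illustration of the arithmetic: 1500 threads at the default 1MB per stack need about 1.5GB of address space for thread stacks alone, on top of whatever is reserved for the heap, so on a 32-bit JVM or a memory-constrained machine thread creation can fail long before the heap itself is exhausted.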

To reduce the stack size, add “-Xss64k” to the JVM options. Start with 64k, try the application, then if it doesn’t work (it will fail with a java.lang.StackOverflowError), increase the stack to 128k, then 256k, and so on. The default stack size on most Linux distributions (as reported by ulimit -s) is 8192k, so there’s a wide range to test.
Also, on Linux, you’ll need to set the Linux thread stack size to the same value as the JVM stack size to get full benefits. To do that, use “ulimit -s <size in kb>”. Note that the stack size applies per user, so you have to modify the init script, or edit /etc/security/limits.conf (on Debian/Ubuntu at least).
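As a minimal sketch of trying a 256k stack (the jar name is just a placeholder), the OS and JVM limits can be set together like this; note that ulimit -s takes the size in kilobytes and applies to the current shell and the processes it starts:

ulimit -s 256
java -Xss256k -jar myapp.jar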
 

GemFire is a highly multithreaded system, and at any given point in time there are multiple thread pools and threads in use. The default stack size for a thread in Java is 1MB. Stack space has to be allocated in contiguous blocks, and if the machine is being used actively and there are many threads running in the system (Task Manager shows the number of active threads), you may encounter “OutOfMemoryError: unable to create new native thread” even though your process has enough available heap. If this happens, consider reducing the stack size requirement for threads on the cache server. The stack size for cache servers can be set using the -Xss VM argument; 384k or 512k is a reasonable value in such cases.
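For example, with the GemFire cacheserver script, JVM arguments such as -Xss can typically be passed through with a -J prefix (the exact launcher syntax may vary by GemFire version, so treat this as a sketch and check the documentation for your release):

cacheserver start -J-Xss384k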