I'm having some issues with a Java process and NRPE checks. We have some processes that sometimes use 1000% CPU on a 32-core system. The system is pretty responsive until you do a
ps aux
or try to do anything in /proc/pid#, like
[root@flume07.domain.com /proc/18679]# ls
it just hangs.
A strace of ps aux shows:
stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2819, ...}) = 0
stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2819, ...}) = 0
stat("/dev/pts1", 0x7fffb8526f00) = -1 ENOENT (No such file or directory)
stat("/dev/pts", {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
readlink("/proc/15693/fd/2", "/dev/pts/1", 127) = 10
stat("/dev/pts/1", {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0
write(1, "root 15693 15692 0 06:25 pt"..., 55root 15693 15692 0 06:25 pts/1 00:00:00 ps -Af
) = 55
stat("/proc/18679", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
open("/proc/18679/stat", O_RDONLY) = 5
read(5, "18679 (java) S 1 18662 3738 3481"..., 1023) = 264
close(5) = 0
open("/proc/18679/status", O_RDONLY) = 5
read(5, "Name:\tjava\nState:\tS (sleeping)\nT"..., 1023) = 889
close(5) = 0
open("/proc/18679/cmdline", O_RDONLY) = 5
read(5,
That read of /proc/18679/cmdline is where the strace sits. The Java process itself is working and will complete just fine, but the issue is that it makes our monitoring go nuts thinking processes are down, because the checks time out waiting for a ps aux to complete.
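To confirm that quickly from a shell (rather than waiting out a hung ps), a small diagnostic sketch like the following can be used, reusing PID 18679 from the strace above. Note that if a read gets stuck in uninterruptible sleep the timeout won't be able to kill it, but the last name printed still identifies which entry is hanging:
for f in stat status cmdline; do
    # print the file name before reading it so we can see where it stops
    echo -n "/proc/18679/${f}: "
    timeout 5 cat "/proc/18679/${f}" > /dev/null && echo ok || echo "hung or failed"
done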
I've tried doing something like
nice -19 ionice -c1 /usr/lib64/nagios/plugins/check_procs -w 1:1 -c 1:1 -a 'diamond' -u root -t 30
with no luck.
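Since the strace shows the reads of /proc/18679/stat and /proc/18679/status coming back fine and only the cmdline read blocking, and check_procs -a has to read cmdline to match arguments, one possible workaround is a wrapper that counts by process name only. This is a rough sketch, assuming the process actually shows up with comm 'diamond' (the script name and thresholds are just illustrative):
#!/bin/bash
# check_diamond_procs.sh - illustrative only, not a drop-in check_procs replacement.
# pgrep without -f matches on the comm field from /proc/<pid>/stat, so it never
# touches the /proc/<pid>/cmdline files that are hanging ps.
count=$(pgrep -x -u root diamond | wc -l)
if [ "$count" -eq 1 ]; then
    echo "PROCS OK: ${count} process named diamond"
    exit 0
else
    echo "PROCS CRITICAL: ${count} processes named diamond (expected 1)"
    exit 2
fi
Pointing the NRPE command definition at something like this instead of check_procs would at least keep the monitoring from timing out while the cmdline reads hang.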
EDIT
System specs:
- 32-core Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
- 128 GB of RAM
- 12 x 4 TB 7200 RPM drives
- CentOS 6.5
- I'm not sure of the model, but the vendor is SuperMicro
The 1-minute load average when this happens is around 90-160.
The odd part is that I can go into any other /proc/pid# and it works just fine. The system is responsive when I SSH in; even when we get alerted about high load, I can SSH right in just fine.
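Since the box stays responsive over SSH, it may also be worth capturing where a hung ps is actually blocked in the kernel while it's stuck. A rough sketch, assuming the newest ps on the box is the hung one and that /proc/<pid>/stack is available on this kernel:
# run from the SSH session while ps aux is hung elsewhere
hung=$(pgrep -n -x ps)
cat /proc/${hung}/wchan; echo
cat /proc/${hung}/stack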
Another edit
I've been using the deadline scheduler:
[root@dn07.domain.com ~]# for i in {a..m}; do cat /sys/block/sd${i}/queue/scheduler; done
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
noop anticipatory [deadline] cfq
Mount looks like this:
[root@dn07.manage.com ~]# mount
/dev/sda3 on / type ext4 (rw,noatime,barrier=0)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext2 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sdb1 on /disk1 type xfs (rw,nobarrier)
/dev/sdc1 on /disk2 type xfs (rw,nobarrier)
/dev/sdd1 on /disk3 type xfs (rw,nobarrier)
/dev/sde1 on /disk4 type xfs (rw,nobarrier)
/dev/sdf1 on /disk5 type xfs (rw,nobarrier)
/dev/sdg1 on /disk6 type xfs (rw,nobarrier)
/dev/sdh1 on /disk7 type xfs (rw,nobarrier)
/dev/sdi1 on /disk8 type xfs (rw,nobarrier)
/dev/sdj1 on /disk9 type xfs (rw,nobarrier)
/dev/sdk1 on /disk10 type xfs (rw,nobarrier)
/dev/sdl1 on /disk11 type xfs (rw,nobarrier)
/dev/sdm1 on /disk12 type xfs (rw,nobarrier)
OK, I tried installing tuned and setting it to the throughput-performance profile.
[root@dn07.domain.com ~]# tuned-adm profile throughput-performance
Switching to profile 'throughput-performance'
Applying deadline elevator: sda sdb sdc sdd sde sdf sdg sdh[ OK ] sdk sdl sdm
Applying ktune sysctl settings:
/etc/ktune.d/tunedadm.conf: [ OK ]
Calling '/etc/ktune.d/tunedadm.sh start': [ OK ]
Applying sysctl settings from /etc/sysctl.d/99-chef-attributes.conf
Applying sysctl settings from /etc/sysctl.conf
Starting tuned: [ OK ]
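For what it's worth, a quick way to confirm the profile actually stuck (a sketch, using the same paths as above):
tuned-adm active
# the elevator should still read [deadline] after the switch
cat /sys/block/sda/queue/scheduler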