
I have encountered a very interesting problem: some physical memory seems to disappear quietly. I am very puzzled, so if anyone could give some help, it would be much appreciated.

Here is what top shows (sorted by memory usage):

Cpu(s):  0.8%us,  1.0%sy,  0.0%ni, 81.1%id, 14.2%wa,  0.0%hi,  2.9%si,  0.0%st
Mem:   4041160k total,  3947524k used,    93636k free,      736k buffers
Swap:  4096536k total,  2064148k used,  2032388k free,    41348k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
15168 root      20   0 3127m 290m 1908 S 108.2  7.4  43376:10 STServer-1
18303 root      20   0 99.7m  12m  912 S  0.0  0.3   0:00.86 sshd
 7129 root      20   0 17160 7800  520 S  0.5  0.2   5:37.52 thttpd
 2583 root      10 -10  4536 2488 1672 S  0.0  0.1   1:19.33 iscsid
 4360 root      20   0 15660 2308  464 S  0.0  0.1  15:42.71 lbtcpd.out
 4361 root      20   0  186m 1976  964 S  0.5  0.0  82:00.36 lbsvr.out
 3932 root      20   0  100m 1948  836 S  0.0  0.0  30:31.38 snmpd
18604 root      20   0 66212 1184  820 S  0.0  0.0   0:00.06 bash
18305 root      20   0 66112 1136  764 S  0.0  0.0   0:00.03 bash
18428 root      20   0 12924 1076  708 R  1.0  0.0   0:21.10 top
15318 root      20   0 99.7m 1020  996 S  0.0  0.0   0:01.15 sshd
15320 root      20   0 66228  996  788 S  0.0  0.0   0:00.80 bash
 1719 root      20   0 90216  980  884 S  0.0  0.0   0:02.29 sshd
15492 root      20   0 66216  972  780 S  0.0  0.0   0:00.20 bash
15382 root      20   0 90300  964  892 S  0.0  0.0   0:00.57 sshd
 1688 root      20   0 90068  960  852 S  0.0  0.0   0:00.57 sshd
 2345 root      20   0 90068  928  852 S  0.0  0.0   0:00.50 sshd
16175 root      20   0 90216  924  884 S  0.0  0.0   0:00.64 sshd
 2377 root      20   0 90068  908  852 S  0.0  0.0   0:00.44 sshd
 2725 root      20   0 90216  896  884 S  0.0  0.0   0:05.27 sshd
 3929 root      20   0  182m  896  816 S  0.0  0.0   0:43.61 systemInfoSubAg
15986 root      20   0 66216  884  772 S  0.0  0.0   0:00.03 bash

And here is what free shows:

[root@ric ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          3946       3846        100          0          0         48
-/+ buffers/cache:       3796        149
Swap:         4000       2037       1963

Here is what iostat shows:

[root@ric ~]# iostat -x -d -m 2
Linux 2.6.37 (ric)         08/16/2011

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda              93.24   222.57 95.44 64.40     4.10     1.12    66.96     1.37   25.46   2.78  44.44
sda1              0.00     0.00  0.00  0.00     0.00     0.00    40.80     0.00    4.00   3.10   0.00
sda2              0.00     0.00  0.00  0.00     0.00     0.00    22.35     0.00   22.52  14.80   0.00
sda4              0.00     0.00  0.00  0.00     0.00     0.00     2.00     0.00   33.00  33.00   0.00
sda5             92.73     7.49 53.39 45.79     0.57     0.21    16.08     0.72   34.67   3.19  31.67
sda6              0.50   215.08 42.06 18.61     3.53     0.91   150.14     0.65   55.27   6.36  38.58

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda             596.02   139.30 248.26 153.73     3.38     1.14    23.02   147.54  482.67   2.49  99.90
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda5            596.02   129.35 244.28 150.25     3.30     1.09    22.79   146.51  488.14   2.53  99.90  this is swap partition
sda6              0.00     9.95  3.98  3.48     0.08     0.05    35.20     1.03  193.60  75.20  56.12
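
As an aside, the sda5 annotation is easy to verify; /proc/swaps lists the active swap devices with their size and current usage:

# confirm which partition backs swap
cat /proc/swaps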

Some numbers from /proc/meminfo:

MemTotal:        4041160 kB
MemFree:          130288 kB
Buffers:             820 kB
Cached:            40940 kB
SwapCached:        82632 kB
SwapTotal:       4096536 kB
SwapFree:        2005408 kB

uname -a shows: Linux ric 2.6.37 #4 SMP Fri Jan 14 10:23:46 CST 2011 x86_64 x86_64 x86_64 GNU/Linux

We can see that swap is heavily used, and it consumes a lot of I/O bandwidth. But when we take the RES column in top into account, the sum of all processes' resident memory is not very large.
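
To double-check that, resident memory can be summed over every process (a rough sketch; pages shared between processes are counted once per process, so this over-counts if anything):

# sum the RSS column (kB) of all processes
ps -eo rss= | awk '{sum += $1} END {printf "total RSS: %.0f MB\n", sum/1024}'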

So my question is: is this a kernel-level leak, or is there something wrong with the STServer-1 process? (STServer uses a memory pool to cache file data; that data was swapped out after going unused for a few days.)

Any comment is welcome. Thanks!

Update 1: slabtop shows

 Active / Total Objects (% used)    : 487002 / 537888 (90.5%)
 Active / Total Slabs (% used)      : 39828 / 39873 (99.9%)
 Active / Total Caches (% used)     : 102 / 168 (60.7%)
 Active / Total Size (% used)       : 145605.37K / 154169.46K (94.4%)
 Minimum / Average / Maximum Object : 0.02K / 0.29K / 4096.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
133920 133862  99%    0.02K    930      144      3720K avtab_node
 98896  94881  95%    0.03K    883      112      3532K size-32
 74052  73528  99%    1.00K  18513        4     74052K size-1024
 72112  70917  98%    0.44K   9014        8     36056K skbuff_fclone_cache
...
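
The slab totals above (~150 MB) can be cross-checked against the kernel's own accounting:

# total slab memory as the kernel accounts for it; should roughly match slabtop
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo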

Update 2: pmap -x 15168 (STServer-1) results

0000000000881000   45116   17872   17272 rw---    [ anon ]
00000000403a1000       4       0       0 -----    [ anon ]
00000000403a2000    8192       8       8 rw---    [ anon ]
...
00000000510aa000       4       0       0 -----    [ anon ]
00000000510ab000    8192       0       0 rw---    [ anon ]
... (this pair repeats, up to 32 of the 8192 mappings)

00007f8f2c000000    9832    4004    3964 rw---    [ anon ]
00007f8f2c99a000   55704       0       0 -----    [ anon ]
00007f8f34000000   11992    5068    5032 rw---    [ anon ]
00007f8f34bb6000   53544       0       0 -----    [ anon ]
00007f8f38000000    9768    4208    4164 rw---    [ anon ]
00007f8f3898a000   55768       0       0 -----    [ anon ]
00007f8f3c000000   13064    4080    4024 rw---    [ anon ]
00007f8f3ccc2000   52472       0       0 -----    [ anon ]
00007f8f40000000   11244    3700    3688 rw---    [ anon ]
00007f8f40afb000   54292       0       0 -----    [ anon ]
00007f8f44000000   11824    7884    7808 rw---    [ anon ]
00007f8f44b8c000   53712       0       0 -----    [ anon ]
00007f8f4c000000   19500    6848    6764 rw---    [ anon ]
00007f8f4d30b000   46036       0       0 -----    [ anon ]
00007f8f54000000   18344    6660    6576 rw---    [ anon ]
00007f8f551ea000   47192       0       0 -----    [ anon ]
00007f8f58774000 1434160       0       0 rw---    [ anon ] memory pool
00007f8fb0000000   64628   32532   30692 rw---    [ anon ]
00007f8fb7dfe000    1028    1016    1016 rw---    [ anon ]
00007f8fb8000000  131072   69512   65300 rw---    [ anon ]
00007f8fc0000000   65536   52952   50220 rw---    [ anon ]
00007f8fc40a8000    3328    1024    1024 rw---    [ anon ]
00007f8fc4aa5000    1028    1028    1028 rw---    [ anon ]
00007f8fc4d12000    1028    1020    1020 rw---    [ anon ]
00007f8fc4f15000    2640     988     936 rw---    [ anon ]
00007f8fc53b6000    2816     924     848 rw---    [ anon ]
00007f8fc5bf6000  102440       0       0 rw---    [ anon ]

total kB         3202160  348944  327480

It seems the kernel has swapped the old memory (not used for a few days) out to the swap partition, but the private memory is not that large. If this program leaks memory, then where is the leak? In swap? In RSS?
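
If the swapped-out pages really belong to this process, they should be visible in its smaps (a suggestion; the per-mapping Swap: field should be present on a 2.6.37 kernel):

# sum the swap usage (kB) attributed to STServer-1's mappings
awk '/^Swap:/ {s += $2} END {print s " kB in swap"}' /proc/15168/smaps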

Update 3: killing STServer-1. I killed the STServer-1 process and then used free -m to check physical memory, but there still isn't much left: only about 400 MB, with no cache and no buffers. I wrote a small program to allocate memory; it can only get about 400 MB of physical memory before swap is heavily used again.
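
(For reference, a minimal shell sketch that exercises memory the same way; this is an illustration, not the actual test program, and it assumes GNU coreutils for the head -c size suffix:)

# hold ~400 MB resident in a shell variable, then pause so free/top can be checked
x=$(head -c 400M /dev/zero | tr '\0' 'x')
read -p "holding ~400 MB, press Enter to release... "
unset x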

So should I say that there is a kernel memory leak?

Update 4: it happened again! Here is what grep ^VmPea /proc/*/status | sort -n -k+2 | tail shows:

/proc/3841/status:VmPeak:         155176 kB
/proc/3166/status:VmPeak:         156408 kB
/proc/3821/status:VmPeak:         169172 kB
/proc/3794/status:VmPeak:         181380 kB
/proc/3168/status:VmPeak:         210880 kB
/proc/3504/status:VmPeak:         242268 kB
/proc/332/status:VmPeak:          254184 kB
/proc/5055/status:VmPeak:         258064 kB
/proc/3350/status:VmPeak:         336932 kB
/proc/28352/status:VmPeak:       2712956 kB 

top shows:

Tasks: 225 total,   1 running, 224 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.9%us,  1.3%sy,  0.0%ni, 51.9%id, 43.6%wa,  0.0%hi,  1.3%si,  0.0%st
Mem:   4041160k total,  3951284k used,    89876k free,     1132k buffers
Swap:  4096536k total,   645624k used,  3450912k free,   382088k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
28352 root      20   0 2585m 1.6g 2320 D 52.2 42.7 267:37.28 STServer-1
 3821 snort     20   0  165m 8320 3476 S 10.2  0.2   1797:20 snort
21043 root      20   0 17160 7924  520 S  0.0  0.2   1:50.55 thttpd
 2586 root      10 -10  4536 2488 1672 S  0.0  0.1   0:28.59 iscsid

iostat shows:

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda              72.50     0.00 351.00  2.50    12.25     0.01    71.02   174.22  213.93   2.83 100.20
sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda4              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda5             64.00     0.00 50.00  0.00     0.43     0.00    17.76    76.06   59.44  20.04 100.20  swap partition
sda6              8.50     0.00 301.00  2.50    11.81     0.01    79.79    98.16  239.39   3.30 100.20
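
While the thrashing is live, the swap traffic itself can be watched directly (si/so are the swap-in/swap-out rates):

# five one-second samples; watch the si, so and wa columns
vmstat 1 5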

Any ideas?

ric
  • What is STServer-1 and why is it there? Isn't it strange that the process which should help you with "the memory problem" is the one that causes the memory problem?! – mailq Aug 16 '11 at 10:50
  • STServer-1 is our app daemon server. If it is the culprit, then why is its RSS only 290m? That's very strange. – ric Aug 16 '11 at 15:09
  • Yeah, STServer-1 is doing some serious work on this server. It's not just hogging the memory, it's doing some serious calculations. Can you log this process? Check your configuration. – Silverfire Aug 17 '11 at 02:10
  • Yes, it is a working process that handles some data distribution. We are currently testing its stability with hundreds of concurrent TCP connections, which is why it consumes so much CPU time. The question is: since its RSS is small, where did the rest of the physical memory go? (Having little physical memory left is what causes the heavy swapping.) – ric Aug 17 '11 at 03:08
  • Could the memory be allocated for the high network usage? So it's basically the kernel that is the culprit? – artifex Aug 17 '11 at 07:05
  • Are there any tools that show the memory held by the kernel? – ric Aug 17 '11 at 13:37
  • Post the output of slabtop if that memory leak happens again. – Janne Pikkarainen Aug 25 '11 at 10:14
  • Might be off topic: I found it necessary to free the cached memory every now and then: echo 1 > /proc/sys/vm/drop_caches – that guy from over there Sep 09 '13 at 08:56
  • Please provide the output of slabtop sorted by cache size (press 'c' in slabtop) while this is happening. Also, can you check the size/usage of any tmpfs filesystems you have mounted? – Matthew Ife Oct 11 '13 at 21:13

2 Answers


Check the VmPeak out of /proc:

$ grep ^VmPea /proc/*/status | sort -n -k+2 | tail
/proc/32253/status:VmPeak:         86104 kB
/proc/5425/status:VmPeak:          86104 kB
/proc/9830/status:VmPeak:          86200 kB
/proc/8729/status:VmPeak:          86248 kB
/proc/399/status:VmPeak:           86472 kB
/proc/19084/status:VmPeak:         87148 kB
/proc/13092/status:VmPeak:         88272 kB
/proc/3065/status:VmPeak:         387968 kB
/proc/26432/status:VmPeak:        483480 kB
/proc/31679/status:VmPeak:        611780 kB

This shows which PIDs have tried to consume the most virtual memory and should point at the source of the usage. If you don't see the missing memory in this list, then you need to look at the rest of the numbers in /proc/meminfo.
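
For the meminfo side, the fields most likely to account for memory that no process owns can be pulled out explicitly (a suggestion beyond the answer as written):

# kernel-side consumers that top/ps will not attribute to any process
grep -E '^(Slab|SReclaimable|SUnreclaim|PageTables|VmallocUsed|Mlocked)' /proc/meminfo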

polynomial

top doesn't show kernel memory, and you may be using too much of it if you haven't tuned the network buffers for your use case.
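
A quick way to check both the configured limits and the live usage (a sketch using the standard Linux TCP knobs; tcp_mem is measured in pages, rmem/wmem in bytes):

# configured TCP memory limits
sysctl net.ipv4.tcp_mem net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# live socket memory: the "mem" counters are in pages
cat /proc/net/sockstat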

Ochoto
  • Yes, but after I killed the app process, there was still no physical memory available. I can only allocate around 400M before the system touches swap. – ric Aug 18 '11 at 09:10