
I have a continuous, unresolved OOM-and-panic situation. I am not sure the system fills up all of its RAM (36GB). Why did this system trigger the OOM situation? I suspect it is related to the lowmem zone on 32-bit Linux systems. How can I analyze the logs from the kernel panic and the oom-killer?

Best Regards,

Kernel 3.10.24

Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0
Dec 27 09:19:05 2013 kernel: : [277622.359069] squid cpuset=/ mems_allowed=0
Dec 27 09:19:05 2013 kernel: : [277622.359074] CPU: 9 PID: 15533 Comm: squid Not tainted 3.10.24-1.lsg #1
Dec 27 09:19:05 2013 kernel: : [277622.359076] Hardware name: Intel Thurley/Greencity, BIOS 080016  10/05/2011
Dec 27 09:19:05 2013 kernel: : [277622.359078]  00000003 e377b280 e03c3c38 c06472d6 e03c3c98 c04d2d96 c0a68f84 e377b580
Dec 27 09:19:05 2013 kernel: : [277622.359089]  000042d0 00000003 00000000 e03c3c64 c04abbda e42bd318 00000000 e03c3cf4
Dec 27 09:19:05 2013 kernel: : [277622.359096]  000042d0 00000001 00000247 00000000 e03c3c94 c04d3d5f 00000001 00000042
Dec 27 09:19:05 2013 kernel: : [277622.359105] Call Trace:
Dec 27 09:19:05 2013 kernel: : [277622.359116]  [<c06472d6>] dump_stack+0x16/0x20
Dec 27 09:19:05 2013 kernel: : [277622.359121]  [<c04d2d96>] dump_header+0x66/0x1c0
Dec 27 09:19:05 2013 kernel: : [277622.359127]  [<c04abbda>] ? __delayacct_freepages_end+0x3a/0x40
Dec 27 09:19:05 2013 kernel: : [277622.359131]  [<c04d3d5f>] ? zone_watermark_ok+0x2f/0x40
Dec 27 09:19:05 2013 kernel: : [277622.359135]  [<c04d2f27>] check_panic_on_oom+0x37/0x60
Dec 27 09:19:05 2013 kernel: : [277622.359138]  [<c04d36d2>] out_of_memory+0x92/0x250
Dec 27 09:19:05 2013 kernel: : [277622.359144]  [<c04dd1fa>] ? wakeup_kswapd+0xda/0x120
Dec 27 09:19:05 2013 kernel: : [277622.359148]  [<c04d6cee>] __alloc_pages_nodemask+0x68e/0x6a0
Dec 27 09:19:05 2013 kernel: : [277622.359155]  [<c0801c1e>] sk_page_frag_refill+0x7e/0x120
Dec 27 09:19:05 2013 kernel: : [277622.359160]  [<c084b8c7>] tcp_sendmsg+0x387/0xbf0
Dec 27 09:19:05 2013 kernel: : [277622.359166]  [<c0469a2f>] ? put_prev_task_fair+0x1f/0x350
Dec 27 09:19:05 2013 kernel: : [277622.359173]  [<c0ba7d8b>] ? longrun_init+0x2b/0x30
Dec 27 09:19:05 2013 kernel: : [277622.359177]  [<c084b540>] ? tcp_tso_segment+0x380/0x380
Dec 27 09:19:05 2013 kernel: : [277622.359182]  [<c086d0da>] inet_sendmsg+0x4a/0xa0
Dec 27 09:19:05 2013 kernel: : [277622.359186]  [<c07ff3a6>] sock_aio_write+0x116/0x130
Dec 27 09:19:05 2013 kernel: : [277622.359191]  [<c0457acc>] ? hrtimer_try_to_cancel+0x3c/0xb0
Dec 27 09:19:05 2013 kernel: : [277622.359197]  [<c050b208>] do_sync_write+0x68/0xa0
Dec 27 09:19:05 2013 kernel: : [277622.359202]  [<c050caa0>] vfs_write+0x190/0x1b0
Dec 27 09:19:05 2013 kernel: : [277622.359206]  [<c050cbb3>] SyS_write+0x53/0x80
Dec 27 09:19:05 2013 kernel: : [277622.359211]  [<c08f72ba>] sysenter_do_call+0x12/0x22
Dec 27 09:19:05 2013 kernel: : [277622.359213] Mem-Info:
Dec 27 09:19:05 2013 kernel: : [277622.359215] DMA per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359218] CPU    0: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359220] CPU    1: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359222] CPU    2: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359224] CPU    3: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359226] CPU    4: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359228] CPU    5: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359230] CPU    6: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359232] CPU    7: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359234] CPU    8: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359236] CPU    9: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359238] CPU   10: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359240] CPU   11: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359242] CPU   12: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359244] CPU   13: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359246] CPU   14: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359248] CPU   15: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359250] CPU   16: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359253] CPU   17: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359255] CPU   18: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359258] CPU   19: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359260] CPU   20: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359262] CPU   21: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359264] CPU   22: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359266] CPU   23: hi:    0, btch:   1 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359268] Normal per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359270] CPU    0: hi:  186, btch:  31 usd:  34
Dec 27 09:19:05 2013 kernel: : [277622.359272] CPU    1: hi:  186, btch:  31 usd:  72
Dec 27 09:19:05 2013 kernel: : [277622.359274] CPU    2: hi:  186, btch:  31 usd:  40
Dec 27 09:19:05 2013 kernel: : [277622.359276] CPU    3: hi:  186, btch:  31 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359279] CPU    4: hi:  186, btch:  31 usd:  39
Dec 27 09:19:05 2013 kernel: : [277622.359281] CPU    5: hi:  186, btch:  31 usd:  49
Dec 27 09:19:05 2013 kernel: : [277622.359283] CPU    6: hi:  186, btch:  31 usd:  50
Dec 27 09:19:05 2013 kernel: : [277622.359285] CPU    7: hi:  186, btch:  31 usd:  25
Dec 27 09:19:05 2013 kernel: : [277622.359286] CPU    8: hi:  186, btch:  31 usd:  42
Dec 27 09:19:05 2013 kernel: : [277622.359289] CPU    9: hi:  186, btch:  31 usd:  39
Dec 27 09:19:05 2013 kernel: : [277622.359290] CPU   10: hi:  186, btch:  31 usd: 155
Dec 27 09:19:05 2013 kernel: : [277622.359293] CPU   11: hi:  186, btch:  31 usd:  56
Dec 27 09:19:05 2013 kernel: : [277622.359295] CPU   12: hi:  186, btch:  31 usd:   2
Dec 27 09:19:05 2013 kernel: : [277622.359297] CPU   13: hi:  186, btch:  31 usd: 162
Dec 27 09:19:05 2013 kernel: : [277622.359299] CPU   14: hi:  186, btch:  31 usd:  67
Dec 27 09:19:05 2013 kernel: : [277622.359301] CPU   15: hi:  186, btch:  31 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359303] CPU   16: hi:  186, btch:  31 usd:  68
Dec 27 09:19:05 2013 kernel: : [277622.359305] CPU   17: hi:  186, btch:  31 usd:  38
Dec 27 09:19:05 2013 kernel: : [277622.359307] CPU   18: hi:  186, btch:  31 usd:  56
Dec 27 09:19:05 2013 kernel: : [277622.359308] CPU   19: hi:  186, btch:  31 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359310] CPU   20: hi:  186, btch:  31 usd:  54
Dec 27 09:19:05 2013 kernel: : [277622.359312] CPU   21: hi:  186, btch:  31 usd:  35
Dec 27 09:19:05 2013 kernel: : [277622.359314] CPU   22: hi:  186, btch:  31 usd:   2
Dec 27 09:19:05 2013 kernel: : [277622.359316] CPU   23: hi:  186, btch:  31 usd:  60
Dec 27 09:19:05 2013 kernel: : [277622.359318] HighMem per-cpu:
Dec 27 09:19:05 2013 kernel: : [277622.359320] CPU    0: hi:  186, btch:  31 usd:  32
Dec 27 09:19:05 2013 kernel: : [277622.359322] CPU    1: hi:  186, btch:  31 usd:  52
Dec 27 09:19:05 2013 kernel: : [277622.359324] CPU    2: hi:  186, btch:  31 usd:   9
Dec 27 09:19:05 2013 kernel: : [277622.359326] CPU    3: hi:  186, btch:  31 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359328] CPU    4: hi:  186, btch:  31 usd: 125
Dec 27 09:19:05 2013 kernel: : [277622.359330] CPU    5: hi:  186, btch:  31 usd: 116
Dec 27 09:19:05 2013 kernel: : [277622.359332] CPU    6: hi:  186, btch:  31 usd: 126
Dec 27 09:19:05 2013 kernel: : [277622.359333] CPU    7: hi:  186, btch:  31 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359336] CPU    8: hi:  186, btch:  31 usd:  79
Dec 27 09:19:05 2013 kernel: : [277622.359338] CPU    9: hi:  186, btch:  31 usd:  34
Dec 27 09:19:05 2013 kernel: : [277622.359340] CPU   10: hi:  186, btch:  31 usd: 111
Dec 27 09:19:05 2013 kernel: : [277622.359341] CPU   11: hi:  186, btch:  31 usd: 144
Dec 27 09:19:05 2013 kernel: : [277622.359343] CPU   12: hi:  186, btch:  31 usd:  15
Dec 27 09:19:05 2013 kernel: : [277622.359345] CPU   13: hi:  186, btch:  31 usd: 166
Dec 27 09:19:05 2013 kernel: : [277622.359347] CPU   14: hi:  186, btch:  31 usd: 185
Dec 27 09:19:05 2013 kernel: : [277622.359349] CPU   15: hi:  186, btch:  31 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359351] CPU   16: hi:  186, btch:  31 usd:  58
Dec 27 09:19:05 2013 kernel: : [277622.359353] CPU   17: hi:  186, btch:  31 usd: 122
Dec 27 09:19:05 2013 kernel: : [277622.359356] CPU   18: hi:  186, btch:  31 usd: 170
Dec 27 09:19:05 2013 kernel: : [277622.359358] CPU   19: hi:  186, btch:  31 usd:   0
Dec 27 09:19:05 2013 kernel: : [277622.359360] CPU   20: hi:  186, btch:  31 usd:  30
Dec 27 09:19:05 2013 kernel: : [277622.359362] CPU   21: hi:  186, btch:  31 usd:  33
Dec 27 09:19:05 2013 kernel: : [277622.359364] CPU   22: hi:  186, btch:  31 usd:  28
Dec 27 09:19:05 2013 kernel: : [277622.359366] CPU   23: hi:  186, btch:  31 usd:  44
Dec 27 09:19:05 2013 kernel: : [277622.359371] active_anon:658515 inactive_anon:54399 isolated_anon:0
Dec 27 09:19:05 2013 kernel: : [277622.359371]  active_file:1172176 inactive_file:323606 isolated_file:0
Dec 27 09:19:05 2013 kernel: : [277622.359371]  unevictable:0 dirty:0 writeback:0 unstable:0
Dec 27 09:19:05 2013 kernel: : [277622.359371]  free:6911872 slab_reclaimable:29430 slab_unreclaimable:34726
Dec 27 09:19:05 2013 kernel: : [277622.359371]  mapped:45784 shmem:9850 pagetables:107714 bounce:0
Dec 27 09:19:05 2013 kernel: : [277622.359371]  free_cma:0
Dec 27 09:19:05 2013 kernel: : [277622.359382] DMA free:2332kB min:36kB low:44kB high:52kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15968kB managed:6960kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:288kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359384] lowmem_reserve[]: 0 573 36539 36539
Dec 27 09:19:05 2013 kernel: : [277622.359393] Normal free:114488kB min:3044kB low:3804kB high:4564kB active_anon:0kB inactive_anon:0kB active_file:252kB inactive_file:256kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:894968kB managed:587540kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:117712kB slab_unreclaimable:138616kB kernel_stack:11976kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:982 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359395] lowmem_reserve[]: 0 0 287725 287725
Dec 27 09:19:05 2013 kernel: : [277622.359404] HighMem free:27530668kB min:512kB low:48272kB high:96036kB active_anon:2634060kB inactive_anon:217596kB active_file:4688452kB inactive_file:1294168kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:36828872kB managed:36828872kB mlocked:0kB dirty:0kB writeback:0kB mapped:183132kB shmem:39400kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:430856kB unstable:0kB bounce:367564104kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Dec 27 09:19:05 2013 kernel: : [277622.359406] lowmem_reserve[]: 0 0 0 0
Dec 27 09:19:05 2013 kernel: : [277622.359410] DMA: 3*4kB (U) 2*8kB (U) 4*16kB (U) 5*32kB (U) 2*64kB (U) 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB (R) 0*4096kB = 2428kB
Dec 27 09:19:05 2013 kernel: : [277622.359422] Normal: 5360*4kB (UEM) 3667*8kB (UEM) 3964*16kB (UEMR) 13*32kB (MR) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 115000kB
Dec 27 09:19:05 2013 kernel: : [277622.359435] HighMem: 6672*4kB (M) 74585*8kB (UM) 40828*16kB (UM) 17275*32kB (UM) 3314*64kB (UM) 1126*128kB (UM) 992*256kB (UM) 585*512kB (UM) 225*1024kB (UM) 78*2048kB (UMR) 5957*4096kB (UM) = 27529128kB
Dec 27 09:19:05 2013 kernel: : [277622.359452] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Dec 27 09:19:05 2013 kernel: : [277622.359454] 1505509 total pagecache pages
Dec 27 09:19:05 2013 kernel: : [277622.359457] 4 pages in swap cache
Dec 27 09:19:05 2013 kernel: : [277622.359459] Swap cache stats: add 13, delete 9, find 0/0
Dec 27 09:19:05 2013 kernel: : [277622.359460] Free swap  = 35318812kB
Dec 27 09:19:05 2013 kernel: : [277622.359462] Total swap = 35318864kB
Dec 27 09:19:05 2013 kernel: : [277622.450529] 9699327 pages RAM
Dec 27 09:19:05 2013 kernel: : [277622.450532] 9471490 pages HighMem
Dec 27 09:19:05 2013 kernel: : [277622.450533] 342749 pages reserved
Dec 27 09:19:05 2013 kernel: : [277622.450534] 2864256 pages shared
Dec 27 09:19:05 2013 kernel: : [277622.450535] 1501243 pages non-shared
Dec 27 09:19:05 2013 kernel: : [277622.450538] Kernel panic - not syncing: Out of memory: system-wide panic_on_oom is enabled


and

# cat /proc/meminfo
MemTotal:       37426312 kB
MemFree:        28328992 kB
Buffers:           94728 kB
Cached:          6216068 kB
SwapCached:            0 kB
Active:          6958572 kB
Inactive:        1815380 kB
Active(anon):    2329152 kB
Inactive(anon):   170252 kB
Active(file):    4629420 kB
Inactive(file):  1645128 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:      36828872 kB
HighFree:       28076144 kB
LowTotal:         597440 kB
LowFree:          252848 kB
SwapTotal:      35318864 kB
SwapFree:       35318860 kB
Dirty:                 0 kB
Writeback:             8 kB
AnonPages:       2463512 kB
Mapped:           162296 kB
Shmem:             36332 kB
Slab:             208676 kB
SReclaimable:     120872 kB
SUnreclaim:        87804 kB
KernelStack:        6320 kB
PageTables:        42280 kB
NFS_Unstable:          0 kB
Bounce:              124 kB
WritebackTmp:          0 kB
CommitLimit:    54032020 kB
Committed_AS:    3191916 kB
VmallocTotal:     122880 kB
VmallocUsed:       27088 kB
VmallocChunk:      29312 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10232 kB
DirectMap2M:      901120 kB

sysctl:

vm.oom_dump_tasks = 0
vm.oom_kill_allocating_task = 1
vm.panic_on_oom = 1

vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 0
vm.highmem_is_dirtyable = 0
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256   32      32
vm.max_map_count = 65530
vm.min_free_kbytes = 3084
vm.mmap_min_addr = 4096
vm.nr_hugepages = 0
vm.nr_overcommit_hugepages = 0
vm.nr_pdflush_threads = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.percpu_pagelist_fraction = 0
vm.scan_unevictable_pages = 0
vm.stat_interval = 1
vm.swappiness = 30
vm.user_reserve_kbytes = 131072
vm.vdso_enabled = 1
vm.vfs_cache_pressure = 100

and

# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 292370
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 36728
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 292370
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Mikko Rantalainen
seaquest

4 Answers


A 'sledgehammer' approach would be to upgrade to a 64-bit OS (yours is 32-bit), because the zones are laid out differently there.

OK, so here I will attempt to explain why you experienced an OOM. There are a number of factors at play here:

  • The order size of the request and how the kernel treats certain order sizes.
  • The zone being selected.
  • The watermarks this zone uses.
  • Fragmentation in the zone.

If you look at the OOM report itself, there is clearly lots of free memory available, yet the OOM-killer was invoked. Why?


The order size of the request and how the kernel treats certain order sizes

The kernel allocates memory in orders. An 'order' N allocation is a request for 2^N physically contiguous pages, i.e. a region of contiguous RAM which must be satisfied for the request to work. With 4kB pages, the size is 2^(ORDER + 12) bytes. So order 0 is 4096, order 1 is 8192, order 2 is 16384, and so on.
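The order-to-size mapping can be checked with plain shell arithmetic:

```shell
# Order N = 2^N contiguous 4kB pages = 2^(N+12) bytes
for n in 0 1 2 3 4; do
  echo "order $n = $(( 4096 << n )) bytes"
done
```

Order 3, the one in your log, therefore asks for 32768 bytes (32kB) of contiguous RAM.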

The kernel has a hard-coded value of what it considers a 'high order' (> PAGE_ALLOC_COSTLY_ORDER). This is order 4 and above (64kB or above is a high order).

High-order page allocations are satisfied differently from low-order ones. If a high-order allocation fails to grab the memory, on modern kernels it will:

  • Try to run the memory compaction routine to defragment the memory.
  • Never invoke the OOM-killer to satisfy the request.

Your order size is listed here

Dec 27 09:19:05 2013 kernel: : [277622.359064] squid invoked oom-killer: gfp_mask=0x42d0, order=3, oom_score_adj=0

Order 3 is the highest of the low-order requests and (as you see) invokes the OOM-killer in an attempt to satisfy it.

Note that most userspace allocations don't use high-order requests. Typically it's the kernel that requires contiguous regions of memory. An exception to this may be when userspace uses hugepages, but that isn't the case here.

In your case, the order-3 allocation is made by the kernel wanting to queue a packet into the network stack, requiring a 32kB contiguous allocation to do so.

The zone being selected.

The kernel divides your memory regions into zones. This chopping up is done because on x86 certain regions of memory are only addressable by certain hardware. Older hardware may only be able to address memory in the 'DMA' zone for example. When we want to allocate some memory, first a zone is chosen and only the free memory from this zone is accounted for when making an allocation decision.

Whilst I'm not completely up to date on the zone selection algorithm, the typical behaviour is never to allocate from DMA, but usually to select the lowest addressable zone that can satisfy the request.

Lots of zone information is spat out during an OOM, and it can also be gleaned from /proc/zoneinfo.

Dec 27 09:19:05 2013 kernel: : [277622.359382] DMA free:2332kB min:36kB low:44kB high:52kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15968kB managed:6960kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:288kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359393] Normal free:114488kB min:3044kB low:3804kB high:4564kB active_anon:0kB inactive_anon:0kB active_file:252kB inactive_file:256kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:894968kB managed:587540kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:117712kB slab_unreclaimable:138616kB kernel_stack:11976kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:982 all_unreclaimable? yes
Dec 27 09:19:05 2013 kernel: : [277622.359404] HighMem free:27530668kB min:512kB low:48272kB high:96036kB active_anon:2634060kB inactive_anon:217596kB active_file:4688452kB inactive_file:1294168kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:36828872kB managed:36828872kB mlocked:0kB dirty:0kB writeback:0kB mapped:183132kB shmem:39400kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:430856kB unstable:0kB bounce:367564104kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no

The zones you have, DMA, Normal and HighMem, indicate a 32-bit platform, because the HighMem zone is non-existent on 64-bit. On 64-bit systems, Normal spans 4GB and beyond, whereas on 32-bit it maps up to 896MB (although in your case the kernel reports managing a smaller portion than this: managed:587540kB).

It's possible to tell where this allocation came from by looking at the first line again: gfp_mask=0x42d0 tells us what type of allocation was done. The low zone-modifier bits (the trailing hex 0) tell us that this is an allocation from the Normal zone. The gfp flag meanings are located in include/linux/gfp.h.
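As a quick sanity check, you can mask out the zone-modifier bits yourself. The bit values below are assumptions based on 3.10-era include/linux/gfp.h (__GFP_DMA=0x01, __GFP_HIGHMEM=0x02, __GFP_DMA32=0x04, __GFP_MOVABLE=0x08); verify against your own kernel tree:

```shell
gfp_mask=0x42d0
# The low four bits of the gfp_mask are the zone modifiers;
# 0 means no modifier, i.e. ZONE_NORMAL on a 32-bit kernel.
printf 'zone modifier bits: 0x%x\n' $(( gfp_mask & 0xf ))
```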

The watermarks this zone uses.

When memory is low, actions to reclaim it are driven by the zone watermarks. They show up here: min:3044kB low:3804kB high:4564kB. If free memory drops to 'low', reclaim (including swapping) occurs until free memory rises back above the 'high' threshold. If free memory reaches 'min', we need to kill stuff in order to free up memory, via the OOM-killer.

Fragmentation in the zone.

In order to see whether a request for a specific order of memory can be satisfied, the kernel accounts for how many free pages are available at each order. This is readable in /proc/buddyinfo. OOM-killer reports additionally include the buddy information, as seen here:

Normal: 5360*4kB (UEM) 3667*8kB (UEM) 3964*16kB (UEMR) 13*32kB (MR) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 115000kB

For a memory allocation to be satisfied, there must be free memory available at the requested order size or higher. Having lots and lots of free pages in the low orders and none in the higher orders means your memory is fragmented. If you make a very high order allocation, it is possible (even with lots of free memory) for it not to be satisfied, because no high-order blocks are available. The kernel can defragment memory (this is called memory compaction) by moving low-order pages around so they don't leave gaps in the addressable RAM space.

OOM-killer was invoked? Why?

So, if we take these things into account, we can say the following;

  • A 32kB contiguous allocation was attempted, from the Normal zone.
  • There was enough free memory in the zone selected.
  • There was order 3, 5 and 6 memory available: 13*32kB (MR), 1*128kB (R), 1*256kB (R)

So, if there was free memory and other orders could satisfy the request, what happened?

Well, there is more to allocating from an order than just checking the amount of free memory available at that order or higher. The kernel effectively subtracts the memory held in all lower orders from the total free count and then performs the min watermark check on what is left.

In your case, to check the usable free memory for this zone and order, we must compute:

115000 - (5360*4) - (3667*8) - (3964*16) = 800

This amount of free memory is checked against the min watermark, which is 3044kB. Thus, technically speaking, you have no free memory left to do the allocation you requested. And this is why the OOM-killer was invoked.
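That subtraction can be reproduced with a quick shell check. This is a simplification of the __zone_watermark_ok logic in mm/page_alloc.c; the real code also factors in lowmem_reserve and allocation flags:

```shell
# Subtract the memory held at orders 0-2 (counts taken from the Normal
# buddy line above, block sizes 4/8/16 kB) from the zone's free total,
# then compare what remains against the min watermark.
free_kb=$(( 115000 - 5360*4 - 3667*8 - 3964*16 ))
echo "usable for an order-3 allocation: ${free_kb}kB (min watermark: 3044kB)"
```

800kB usable versus a 3044kB min watermark: the check fails despite 115000kB nominally free.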


Fixing

There are two fixes. Upgrading to 64-bit changes your zone partitioning such that 'Normal' spans from 4GB up to 36GB, so you won't end up 'defaulting' your memory allocations into a zone which can get so heavily fragmented. It's not that having more addressable memory fixes this problem (you're using PAE already), merely that the zone you select from has more addressable memory.

The second way (which I have never tested) is to try to get the kernel to more aggressively compact your memory.

If you change the value of vm.extfrag_threshold from 500 to 100, the kernel is more likely to compact memory in an attempt to honour a high-order allocation. Although I have never messed with this value before, its effect will also depend on your fragmentation index, which is available in /sys/kernel/debug/extfrag/extfrag_index. I don't have a box at the moment with a new enough kernel to see what that shows, to offer more than this.

Alternatively you could run some sort of cron job (this is horribly, horribly ugly) to manually compact memory yourself by writing into /proc/sys/vm/compact_memory.
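If you go that route, a hypothetical cron entry could look like the following (the file name and schedule are made up; tune to taste, and note compact_memory exists on kernels 2.6.35 and later):

```
# /etc/cron.d/compact-memory (hypothetical): force compaction every 15 minutes
*/15 * * * * root /bin/sh -c 'echo 1 > /proc/sys/vm/compact_memory'
```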

In all honesty, though, I don't think there really is a way to tune the system to avoid this problem; it's the nature of the memory allocator to work this way. Changing the architecture of the platform you use is probably the only fundamental fix.

Matthew Ife
  • Going to 64-bit is impossible at the moment. I have to tune the system to not get the error. – seaquest Dec 30 '13 at 10:38
  • This is an awesome answer. Have an upvote :) – wzzrd Dec 30 '13 at 12:14
  • Hi Mlfe, excellent answer. I am curious about this part of your post: "The kernel effectively subtracts memory from all lower orders from the total free line and then performs the min watermark check on what is left." Do you have the relevant source code that I can go through? I have dealt with a lot of OOM issues but I haven't seen this logic in the source. Perhaps I missed it. Where are you seeing it anyway? oom_kill.c? Or somewhere else? – Soham Chakraborty Dec 30 '13 at 15:56
  • The code is in mm/page_alloc.c and done in the function __zone_watermark_ok – Matthew Ife Dec 30 '13 at 16:05
  • @SohamChakraborty If you're interested in this stuff I should also point out you can figure out what causes an OOM in the kernel by following the stack trace in OOM-killer output supplied, as is the case here. – Matthew Ife Dec 30 '13 at 16:16
  • Gotcha. Initially I was a bit puzzled by the trace. I have read tons of traces like this, but obviously I didn't pay much attention here :p Now I see it: sk_page_frag_refill() is only entered for the Normal zone. When I saw the TSO generation part in the trace, I was inclined to think of it as a DMA zone problem, and with that I couldn't match the fragmentation/buddy allocator chunks. Yeah, I need to pay more attention :) Thanks buddy. – Soham Chakraborty Dec 30 '13 at 18:51
  • @seaquest This system should have been _installed_ as 64-bit. Fire whoever originally set it up. – Michael Hampton Dec 30 '13 at 19:56
  • @seaquest 36 GB of RAM on 32-bit? Uh... – Nathan C Dec 30 '13 at 20:12
  • Very nice response. Does anyone have any insights as to why the kernel does what Soham questioned? – waffleman Sep 03 '14 at 18:44

Off the start: you should really go for a 64-bit operating system. Do you have a good reason to stay at 32-bit here?

It is hard to diagnose this problem without taking a look at the system more closely, preferably around the time it fails, so my (quick) post is more or less generically aimed at memory issues on 32-bit systems. Did I mention going 64-bit would make this all go away?

Your problem is threefold.

First of all, even on a PAE kernel, the per-process address space is limited to 4GiB [1]. This means that your squid instance will never be able to eat more than 4GiB of RAM per process. I'm not that familiar with squid, but if this is your main proxy server, that might not be enough anyway.

Second, on a 32-bit system with vast amounts of RAM, a lot of memory in what is called 'ZONE_NORMAL' is used to store the data structures needed to use memory in ZONE_HIGHMEM. These data structures cannot be moved into ZONE_HIGHMEM themselves, because the memory the kernel uses for its own purposes must always be in ZONE_NORMAL (i.e. in the first 1GiB-ish) [2]. The more memory you have in ZONE_HIGHMEM (a lot, in your case), the more this becomes a problem, because the kernel then needs more and more ZONE_NORMAL memory to manage ZONE_HIGHMEM. As the amount of free memory in ZONE_NORMAL dries up, your system may fail at some tasks, because ZONE_NORMAL is where a lot of stuff happens on a 32-bit system. All the kernel-related memory operations, for example ;)
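To get a feel for the scale, here is a back-of-the-envelope estimate. It assumes roughly 32 bytes per struct page on a 32-bit 3.10-era kernel (an assumption; the exact size varies with config options):

```shell
# Number of 4kB page frames the kernel must track for 36GiB of RAM,
# and the rough size of the mem_map array that describes them.
pages=$(( 36 * 1024 * 1024 / 4 ))
echo "mem_map overhead: $(( pages * 32 / 1024 / 1024 )) MiB of ZONE_NORMAL"
```

Hundreds of MiB of bookkeeping out of a lowmem zone that is under 1GiB to begin with, before slab, kernel stacks and everything else.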

Third, even if there is some memory left in ZONE_NORMAL (I haven't gone through your logs in detail), some memory operations require unfragmented memory. For example, if all your memory is fragmented into really small pieces, operations that need more than that will fail [3]. A brief look at your logs does show a fairly significant amount of fragmentation in ZONE_DMA and ZONE_NORMAL.

Edit: Mlfe's answer above has an excellent explanation of how this works in detail.

Again: on a 64-bit system, all memory is in ZONE_NORMAL. There is no HIGHMEM zone on 64-bit systems. Problem solved.

Edit: You could take a look here [4] to see if you can tell oom-killer to leave your important processes alone. That will not solve everything (if anything at all), but it might be worth a try.

[1] http://en.wikipedia.org/wiki/Physical_address_extension#Design

[2] http://www.redhat.com/archives/rhelv5-list/2008-September/msg00237.html and https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/sect-Oracle_9i_and_10g_Tuning_Guide-Hardware_Architectures_and_Linux_Kernels-a32_bit_Architecture_and_the_hugemem_Kernel.html

[3] http://bl0rg.krunch.be/oom-frag.html

[4] http://lwn.net/Articles/317814/

wzzrd
  • The memory is being allocated from the normal zone (see gfp_mask), albeit at first glance zone Normal has enough free memory to commit to this allocation. I do have an actual explanation for this, but it requires quite a long edit to my post. – Matthew Ife Dec 30 '13 at 10:45

@MIfe already provided an excellent write-up of how memory allocations are handled in the kernel, along with proper solutions, like switching to a 64-bit OS, and nasty hacks, like manual memory compaction via /proc/sys/vm/compact_memory in cron.

My 2 cents would be another workaround that may help you:
I've noticed that you have tcp_tso_segment in your kernel backtrace, so doing:

# ethtool -K ethX tso off gso off lro off

can decrease pressure on mm by forcing it to use lower orders.

PS. The list of all offloads can be obtained via # ethtool -k ethX

SaveTheRbtz

The panic occurs because the sysctl vm.panic_on_oom = 1 is set; the idea is that rebooting the system returns it to a sane state. You can change this in sysctl.conf.
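For example, to let the OOM-killer act instead of panicking the box (a sketch; whether that suits you depends on your recovery policy):

```
# /etc/sysctl.conf
vm.panic_on_oom = 0
```

Apply it without a reboot via sysctl -p.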

Right at the top we read squid invoked oom killer. You might check your squid configuration and its maximum memory usage (or just move to a 64-bit OS).

/proc/meminfo shows the high memory zone in use, so you are running a 32-bit kernel with 36GB of RAM. You can also see that, in the Normal zone, in order to meet squid's demand for memory, the kernel scanned 982 pages without success:

pages_scanned:982 all_unreclaimable? yes
ramruma