I'm seeing the error message "No buffer space available" when processes call "connect" on a Linux virtual machine. I'm having trouble tracking down the cause - hopefully someone can help!
I've checked the following:
(1) File handles:
cat /proc/sys/fs/file-nr
4672 0 810707
I'm reading this as (allocated, allocated-but-unused, maximum), so this looks OK - we're nowhere near the ceiling.
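For completeness, these are the related ceilings I know how to check (the PID in the second command is just a placeholder for one of the affected processes):
cat /proc/sys/fs/file-max
grep 'Max open files' /proc/<pid>/limits
ulimit -n
file-max should match the third field of file-nr, and the per-process limit would only bite per process rather than system-wide.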
(2) Sockets or TCP memory:
cat /proc/sys/net/ipv4/tcp_mem
191889 255854 383778
cat /proc/net/sockstat
sockets: used 579
TCP: inuse 169 orphan 0 tw 245 alloc 187 mem 5
UDP: inuse 31 mem 4
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
I read this as only 579 sockets in use in total, with TCP memory ("mem 5", counted in pages) way below the 383778-page maximum in tcp_mem.
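For reference, these are the other socket-memory ceilings I know of and how to read them (I'm not sure which, if any, is the one being exhausted):
cat /proc/sys/net/ipv4/udp_mem
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
ss -s
udp_mem is the UDP equivalent of tcp_mem (in pages), rmem_max/wmem_max are per-socket buffer ceilings in bytes, and ss -s gives roughly the same summary as /proc/net/sockstat.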
There are lots of random TCP tweaks suggested on Google; what I'm hoping for in an answer is (1) the resource I'm actually running out of, (2) how to determine its current value, and (3) how to raise the ceiling. Most of the pages I've found cover only (3)!
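To make (2) and (3) concrete, this is the shape of answer I'm after, using file handles as a stand-in (the number in the last line is only a placeholder, not a recommendation):
cat /proc/sys/fs/file-nr
cat /proc/sys/fs/file-max
sysctl -w fs.file-max=500000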
** Update #1 **
On Flup's suggestion I ran strace when it happens (using ping):
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 4
connect(4, {sa_family=AF_INET, sin_port=htons(1025), sin_addr=inet_addr("10.140.0.65")}, 16) = -1 ENOBUFS (No buffer space available)
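For the record, the trace was roughly this (trace=network just filters the output down to socket-related calls):
strace -e trace=network ping -c 1 10.140.0.65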
** Update #2 **
I don't know much about the Linux kernel source, but I had a dig around, and the only place in the connect() path where I can see ENOBUFS being returned is here: http://lxr.free-electrons.com/source/net/ipv4/af_inet.c?v=3.11#L353
This looks like it is allocating things in the kernel, though, with kmem_cache_alloc and security_sk_alloc ...?
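For anyone who wants to retrace that, I searched roughly like this from the top of a kernel source checkout (the paths are just where I happened to look, not an exhaustive list):
grep -rn ENOBUFS net/ipv4 net/core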