
What's the difference between net.core.rmem_max and the third value of net.ipv4.tcp_rmem? Which takes priority for TCP connections?

For the two examples below, what is the maximum buffer for TCP connections?

Case 1:
sysctl -w net.core.rmem_max=7388608
sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'

Case 2:
sysctl -w net.core.rmem_max=8388608
sysctl -w net.ipv4.tcp_rmem='4096 87380 7388608'
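For reference, both values can also be read back programmatically from /proc/sys (a sketch, assuming a Linux host; `read_sysctl` is a throwaway helper for illustration, not a standard API):

```python
# Read sysctl values directly from the /proc/sys tree (Linux only).
def read_sysctl(name):
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().split()

print(read_sysctl("net.core.rmem_max"))    # single value, e.g. ['8388608']
print(read_sysctl("net.ipv4.tcp_rmem"))    # min/default/max, e.g. ['4096', '87380', '7388608']
```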
bydsky

1 Answer


net.core.rmem_max is the overall maximum receive buffer, while the tcp setting applies to just that protocol.

As for the priority question: it seems that the tcp setting takes precedence over the common max setting, which is a bit confusing. Changing the max has no effect on the current tcp setting (just tested on CentOS 5).

A more accurate name would have been default_max - but that was probably too long.
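One way to probe which limit actually caps a socket (a sketch, assuming a Linux host): request a deliberately oversized buffer via SO_RCVBUF and read back what the kernel granted. Buffers set explicitly with setsockopt are clamped to net.core.rmem_max (and the kernel doubles the stored value for bookkeeping overhead), whereas TCP autotuning uses tcp_rmem's max - so the granted value below reflects rmem_max, not tcp_rmem:

```python
import socket

# Request a 16 MiB receive buffer, then read back what the kernel granted.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
requested = 16 * 1024 * 1024  # larger than either limit in the question
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("requested:", requested, "granted:", granted)
s.close()
```

Comparing the printed `granted` value against roughly twice net.core.rmem_max versus twice tcp_rmem's max shows which sysctl is doing the clamping on a given system.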

Nils
  • Your explanation makes sense, but this conflicts with what `man tcp` says about `tcp_rmem`'s max value: `the maximum size of the receive buffer used by each TCP socket. This value does not override the global net.core.rmem_max` - see also http://stackoverflow.com/questions/31546835/tcp-receiving-window-size-higher-than-net-core-rmem-max. Is `man tcp` wrong? – nh2 Feb 16 '16 at 16:27
  • @nh2 That would not be the first time where a man page is wrong. – Nils Feb 20 '16 at 21:46
  • How exactly did you test it? – Wildcard May 14 '17 at 15:09
  • @Wildcard I set the one value and read the other value after setting the first. – Nils May 16 '17 at 18:44
  • @Nils, simply reading the values won't tell you if one overrides another -- you have to actually try to get a TCP buffer which exceeds the net.core.[wmem/rmem]_max buffer in order to test out such overriding. – Jordan Pilat Sep 20 '17 at 21:28
  • I've reported the apparent man page bug here: https://bugzilla.kernel.org/show_bug.cgi?id=209327 – nh2 Sep 19 '20 at 01:17