28

The default nofile limit for OS X user accounts seems to be about 256 file descriptors these days. I'm trying to test some software that needs a lot more connections than that open at once.

On a typical Debian box running the pam limits module, I'd edit /etc/security/limits.conf to set higher limits for the user that will be running the software, but I'm mystified where to set these limits in OS X.

Is there a GUI somewhere for it? Is there a config file somewhere for it? What's the tidiest way to change the default ulimits on OS X?

kenorb
archaelus

7 Answers

28

Under Leopard the initial process is launchd, and the default ulimits of every process are inherited from it. For reference, the default (compiled-in) limits are:

$ sudo launchctl limit
    cpu         unlimited      unlimited      
    filesize    unlimited      unlimited      
    data        6291456        unlimited      
    stack       8388608        67104768       
    core        0              unlimited      
    rss         unlimited      unlimited      
    memlock     unlimited      unlimited      
    maxproc     266            532            
    maxfiles    256            unlimited

To change any of these limits, add a line (you may need to create the file first) to /etc/launchd.conf; the arguments are the same as those passed to the launchctl command. For example:

echo "limit maxfiles 1024 unlimited" | sudo tee -a /etc/launchd.conf

However, launchd has already started your login shell, so the simplest way to make these changes take effect is to restart your machine. (Note that tee -a appends to /etc/launchd.conf rather than overwriting it.)
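
After the reboot, a quick sanity check from a fresh shell (the values reported should reflect whatever you added to /etc/launchd.conf):

launchctl limit maxfiles   # launchd's new default
ulimit -n                  # the soft limit a new shell inherits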

hvrauhal
Dave Cheney
6

Shell limits

Resources available to the shell and the processes it starts can be changed with the ulimit command, which can be added to startup scripts such as ~/.bashrc or ~/.bash_profile for individual users, or to /etc/bashrc for all users. Example line to add:

ulimit -Sn 4096 && ulimit -Sl unlimited
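
For example, to make this persistent for a single user (a minimal sketch; adjust the startup file to the shell you actually use):

echo 'ulimit -Sn 4096' >> ~/.bash_profile
source ~/.bash_profile
ulimit -Sn    # should now report 4096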

See: help ulimit and man bash for more information.

System limits

In general, system limits are controlled by the launchd framework and can be changed with the launchctl command, e.g.

launchctl limit maxfiles 10240 unlimited

To make the changes persistent, you need to create a property list file in one of the launchd-compliant startup folders; it then acts as a startup agent.

Here is an example command that creates such a startup file:

sudo /usr/libexec/PlistBuddy /Library/LaunchAgents/com.launchd.maxfiles.plist \
  -c "add Label string com.launchd.maxfiles" \
  -c "add ProgramArguments array" \
  -c "add ProgramArguments: string launchctl" \
  -c "add ProgramArguments: string limit" \
  -c "add ProgramArguments: string maxfiles" \
  -c "add ProgramArguments: string 10240" \
  -c "add ProgramArguments: string unlimited" \
  -c "add RunAtLoad bool true"
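
For reference, the resulting file should look roughly like this; the heredoc below is a hand-written equivalent (a sketch) if you prefer not to use PlistBuddy:

# Equivalent hand-written version of the file created by the PlistBuddy command above (sketch)
sudo tee /Library/LaunchAgents/com.launchd.maxfiles.plist > /dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.launchd.maxfiles</string>
  <key>ProgramArguments</key>
  <array>
    <string>launchctl</string>
    <string>limit</string>
    <string>maxfiles</string>
    <string>10240</string>
    <string>unlimited</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
EOF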

The file will be loaded at system launch; to load it manually, run:

sudo launchctl load /Library/LaunchAgents/com.launchd.maxfiles.plist

To verify the current limits, run: launchctl limit.

See: Creating Launch Daemons and Agents.

Kernel limits

  • Kernel limits are controlled by the sysctl command.
  • To see the current kernel limits, run: sysctl -a | grep ^kern.max.
  • To change the maximum number of files allowed to be open, run: sudo sysctl -w kern.maxfiles=20480.
  • To make the changes persistent, use a method similar to the one above to create a property list file in a system startup folder (see the sketch below).
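
Following the same pattern as above, a startup file for the kernel limit could be created like this (a sketch; the label com.launchd.sysctl is made up for this example):

# Hypothetical startup file that re-applies kern.maxfiles at boot (sketch)
sudo /usr/libexec/PlistBuddy /Library/LaunchDaemons/com.launchd.sysctl.plist \
  -c "add Label string com.launchd.sysctl" \
  -c "add ProgramArguments array" \
  -c "add ProgramArguments: string sysctl" \
  -c "add ProgramArguments: string -w" \
  -c "add ProgramArguments: string kern.maxfiles=20480" \
  -c "add RunAtLoad bool true"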

Deprecated methods

In earlier versions of macOS you could set these limits system-wide in /etc/sysctl.conf, as you normally would on Unix; however, this no longer appears to be supported.

Using ~/.launchd.conf or /etc/launchd.conf also appears to be unsupported in any current version of macOS.

The same goes for the /etc/rc.local startup file; it is not supported on macOS.

kenorb
4
sudo echo "limit maxfiles 1024 unlimited" >> /etc/launchd.conf

does not work because the >> redirection is performed by your (non-root) shell before sudo runs, so the write to /etc/launchd.conf is not privileged. Try this instead:

echo 'limit maxfiles 10000 unlimited' | sudo tee -a /etc/launchd.conf
andi
3

On OS X, if you are trying to modify the soft limits for a daemon, process, or task, the right way to change them is not to alter the default launchd configuration for all processes, but to set them for the specific process you are trying to run.

This is accomplished in your launchd .plist file for your process.

If you have a daemon or process that needs more open files, create a plist file for it and add these parameters to it:

    <key>SoftResourceLimits</key>
    <dict>
        <key>NumberOfFiles</key>
        <integer>1024</integer>
    </dict>

An example, using MongoDB: I create a .plist file called org.mongo.mongodb.plist and save it to /Library/LaunchDaemons/org.mongo.mongodb.plist. The file looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Disabled</key>
  <false/>
  <key>Label</key>
  <string>org.mongo.mongod</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/lib/mongodb/bin/mongod</string>
    <string>--dbpath</string>
    <string>/Users/Shared/mongodata/</string>
    <string>--logpath</string>
    <string>/var/log/mongodb.log</string>
  </array>
  <key>QueueDirectories</key>
  <array/>
  <key>RunAtLoad</key>
  <true/>
  <key>UserName</key>
  <string>daemon</string>
  <key>SoftResourceLimits</key>
  <dict>
    <key>NumberOfFiles</key>
    <integer>1024</integer>
    <key>NumberOfProcesses</key>
    <integer>512</integer>
  </dict>
</dict>
</plist>

Now your process has the resources it needs, without mucking with the global configuration for the system. This will automatically be set up on restart. Or, if you don't want to restart, you can run

sudo launchctl load /Library/LaunchDaemons/org.mongo.mongodb.plist

If your process or task is more of an agent than a daemon, you can put the .plist in /Library/LaunchAgents instead. Different rules apply for how launchd will control your process in either case. LaunchDaemons seems to be reserved for processes that launchd will try to keep running at all times.
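
To check that launchd actually picked up the job, a quick sketch (the label matches the one in the plist above):

sudo launchctl list | grep org.mongo.mongod   # the job should appear in launchd's list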

apotek
2
% ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) 6144
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 2560
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 266
virtual memory          (kbytes, -v) unlimited
%

Now I have to find out why there are two different means of checking/setting limits...


Okay, it seems like ulimit and sysctl give a false-positive sense that they actually do something, but instead they appear to be useless. Could someone verify that?


Okay, I'm beginning to understand. As of v10.4 there is no init process anymore; it has been replaced by launchd, which also runs with PID 1.

% ps -fu root
  UID   PID  PPID   C     STIME TTY           TIME CMD
    0     1     0   0   0:30.72 ??         0:46.72 /sbin/launchd

And of course it is worth mentioning that ulimit is a shell built-in, while launchctl is a shell-independent program.
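
In other words, the two operate at different scopes; a rough sketch of the difference:

ulimit -n 2560                                 # only affects the current shell and the processes it spawns
sudo launchctl limit maxfiles 2560 unlimited   # changes the defaults that new processes inherit from launchd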

Xerxes
1

The following should cover most situations (listed in order of their hierarchy):

echo 'kern.maxfiles=20480' | sudo tee -a /etc/sysctl.conf
echo -e 'limit maxfiles 8192 20480\nlimit maxproc 1000 2000' | sudo tee -a /etc/launchd.conf
echo 'ulimit -n 4096' | sudo tee -a /etc/profile

Notes:

  1. You will need to restart for these changes to take effect.
  2. AFAIK you can no longer set limits to 'unlimited' under OS X
  3. launchctl maxfiles are bounded by sysctl maxfiles, and therefore cannot exceed them
  4. sysctl seems to inherit kern.maxfilesperproc from launchctl maxfiles
  5. ulimit seems to inherit its 'open files' value from launchctl by default
  6. you can set a custom ulimit within /etc/profile or ~/.profile; while this isn't required, I've provided an example
  7. Be cautious when setting any of these values much higher than their defaults; the limits exist for stability/security reasons. The example numbers here are ones I believe to be reasonable, taken from other websites.
  8. When launchctl limits are lower than the sysctl ones, there have been reports that the relevant sysctl ones will be bumped up automatically to meet the requirements.
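
After rebooting, a quick way to check that each layer picked up its value (a sketch; the numbers should match what you put in the files above):

sysctl kern.maxfiles kern.maxfilesperproc   # kernel limits
launchctl limit maxfiles                    # launchd's default
ulimit -n                                   # what this shell inherited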
errant.info
1

My experience is that my high-process-count task only succeeded with:

kern.maxproc=2500        # This is as big as I could set it.
kern.maxprocperuid=2048
ulimit -u 2048

The first two can go into /etc/sysctl.conf and the ulimit value into launchd.conf, for reliable setting.

Since TCP/IP was part of what I was doing, I also needed to bump up

kern.ipc.somaxconn=8192

from its default 128.
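
To try these values in a live session before making them persistent, a sketch (the sysctl writes need root, and these keys were settable on the OS versions mentioned here; newer releases may refuse some of them):

sudo sysctl -w kern.maxproc=2500
sudo sysctl -w kern.maxprocperuid=2048
sudo sysctl -w kern.ipc.somaxconn=8192
ulimit -u 2048    # per-shell process limit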

Before I increased the process limits, I was getting "fork" failures from insufficient resources. Before I increased kern.ipc.somaxconn, I was getting "broken pipe" errors.

This was while running a fair number (500-4000) of detached processes on my monster Mac, OS 10.5.7, then 10.5.8, now 10.6.1. Under Linux on my bosses' computer it just worked.

I thought the number of processes would be closer to 1000 but it seems that every process I started included its own copy of the shell in addition to the actual item doing the actual work. Very festive.

I wrote a display toy that went something like:

#!/bin/sh
# Print counts of open sockets and running processes every half second.
while [ 1 ]
do
    n=$(netstat -an | wc -l)
    nw=$(netstat -an | grep WAIT | wc -l)
    p=$(ps -ef | wc -l)
    psh=$(ps -ef | fgrep sh | wc -l)
    echo "netstat: $n   wait: $nw      ps: $p   sh: $psh"
    sleep 0.5
done

and watched the maximum number of processes in ps -ef and the connections hanging around in netstat waiting for TIME_WAIT to expire... With the limits raised, I saw 3500+ TIME_WAIT entries at peak.

Before I raised the limits I could 'sneak' up on the failure threshold, which started out below 1K but rose to a high of 1190; every time it was pushed into failure it could take a little more the next time, probably because of something cached that expanded to its limit every time it failed.

Although my test case had a "wait" as its final statement, there were still PLENTY of detached processes hanging around after it exited.

I got most of the info I used from postings on the internet, but not all of it was accurate. Your mileage may vary.

kenorb