Does GNU/Linux count processes and threads together when I limit their number?

11

3

I want to limit the number of processes per user on my machine, with /etc/security/limits.conf and the nproc value.

I have read here that Linux doesn't distinguish between processes and threads. Is that correct?

My current nproc limit per user is 1024, but if this also includes threads, it is too low in my opinion. The man page of limits.conf only mentions "process" for nproc and nothing else.
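For reference, this is how the limit is set in /etc/security/limits.conf (the username and values here are only illustrative):

```
# /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
peter        soft     nproc    1024    # soft limit; the user may raise it up to the hard limit
peter        hard     nproc    2048    # hard ceiling
```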

// edit // sample code in C++ with Boost // g++ -o boost_thread boost_thread.cpp -lboost_thread

#include <unistd.h>
#include <iostream>
#include <boost/thread.hpp>
using namespace std;

int counter;  // shared and unsynchronized: updates and output may interleave

void print_thread(int i) {
    counter++;
    cout << "thread(" << i << ") counter " << counter << "\n";
    sleep(5);  // keep each thread alive so they accumulate against the limit
    counter--;
}

int main() {
    int max = 1000000;

    for (int i = 0; i < max; i++) {
        // detach the temporary so its destructor doesn't abort the program
        boost::thread(print_thread, i).detach();
    }

    return 0;
}

test (removed some lines):

$ ulimit -u
1024
$ ./thread 
...
...
...
thread(828) counter 828
thread(829) counter 829
thread(830) counter 830
thread(831) counter 831
thread(832) counter 832
thread(610) counter thread(833833) counter 834

thread(834) counter 835
thread(835) counter 836
thread(836) counter 837
thread(837) counter 838
thread(838) counter 839
thread(839) counter 840
thread(840) counter 841
thread(841) counter 842
thread(842) counter 843
thread(843) counter 844
thread(844) counter 845
thread(845) counter 846
thread(846) counter 847
thread(847) counter 848
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::thread_resource_error> >'
  what():  boost::thread_resource_error
Aborted (core dumped)

My laptop uses ~130 processes while idle. So nproc, or Linux in a wider view, doesn't distinguish between processes and threads. That seems reasonable to me, because threads can exhaust resources just as processes can.

Peter Weber

Posted 2012-01-09T18:41:05.377

Reputation: 237

Answers

14

The nproc limit you are talking about applies to runnable entities: it limits threads (and therefore the processes containing them). Every process has at least one thread (the primary thread), and only threads can actually be scheduled to run. Strictly speaking, processes themselves are not "runnable".

This answer explains the real difference between threads and processes in Linux.

I tested the code in daya's answer (with a sleep(1); added in the thread function) and, unlike him (?!), I hit the limit when too many threads were created: pthread_create() was returning EAGAIN. The pthread_create(3) documentation says the following about this error:

EAGAIN

Insufficient resources to create another thread, or a system-imposed limit on the number of threads was encountered. The latter case may occur in two ways: the RLIMIT_NPROC soft resource limit (set via setrlimit(2)), which limits the number of processes for a real user ID, was reached; or the kernel's system-wide limit on the number of threads, /proc/sys/kernel/threads-max, was reached.

I see no mention of a specific per-thread limit in the kernel source; I see only RLIMIT_NPROC there, which is the limit you can change in limits.conf (with nproc), with ulimit -u, or with setrlimit(2).

Totor

Posted 2012-01-09T18:41:05.377

Reputation: 1 100

0

ulimit limits the number of processes only. Therefore a value set using

ulimit -u 1024

will limit the number of processes.

eg.

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *test(void *ptr) {
    return 0;
}

int main()
{
    pthread_t thread[50];
    int i = 0;

    for (i = 0; i < 50; i++) {
        if (!pthread_create(&thread[i], NULL, test, NULL))
            printf("%d ", i);
    }

    for (i = 0; i < 50; i++)
        pthread_join(thread[i], NULL);
    return 0;
}

set ulimit and check

lab@x:/tmp$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
lab@x:/tmp$ 
lab@x:/tmp$ 
lab@x:~$ cd /home/x
lab@x:/home/x$ ./thread 
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 lab@x:/home/x$ 
lab@x:/home/x$ 
lab@x:/home/x$ ulimit -u 10
lab@x:/home/x$ 

The process limit is now set to 10:

lab@x:/home/x$ ./thread 
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 lab@x:/home/x$ 
lab@x:/home/x$ 

Here, 50 threads can still be created.

daya

Posted 2012-01-09T18:41:05.377

Reputation: 2 445

3 – At first glance your code and rationale look right, but I'm afraid they are wrong. Your threads return immediately; with a sleep(5) or something else time-consuming in test(), your code should fail. – Peter Weber – 2012-01-15T19:45:34.073

Well, I did add a while(1){} in test() and I still get the same result as above. – daya – 2012-01-15T20:08:24.630

I've edited my question. You could test my code as well. Your first answer, "Yes, Linux systems count POSIX threads and processes together", looks perfectly correct. – Peter Weber – 2012-01-15T20:28:04.893

Yes, that's what I thought until I tried it in a program. – daya – 2012-01-15T20:43:39.143

I tried your program with max=10000 and ulimit -u 10; it's working fine (no errors). – daya – 2012-01-15T21:07:21.537

Very funny. I'm running Arch Linux and Fedora; both act the same way and throw an exception. – Peter Weber – 2012-01-15T21:21:54.453

Is it possible that some sort of compiler voodoo is tricking us here? – Peter Weber – 2012-01-15T21:44:52.333

I tested your source as well; with ulimit -u 50 it stops immediately with "bash: fork: retry: No child processes". – Peter Weber – 2012-01-16T17:01:06.973

This is really strange; we will have to check on a third machine :) – daya – 2012-01-16T17:11:33.557

2 – I don't agree with your conclusion. When I tried your program, I hit the limit when too many threads were created. The Linux limit *does* apply to threads. See my answer. – Totor – 2013-04-16T21:55:00.477