
We have an Ubuntu Server (16.04) running RStudio Server where we do statistical simulations. Those simulations are sometimes heavy on RAM and CPU, so I would like to know how memory and CPU are allocated by the kernel if, e.g., two users are logged in and each of them runs an individual R session where they "compete" for memory and CPU.

Since none of us is a server administrator, we do not really want to apply manual changes; however, we are interested in whether RAM and CPU allocation is more or less equal across users.

Note: The RStudio Server Pro version makes it quite easy to allocate a given amount of memory to individual users, but since we do not have the Pro version we cannot change those settings.

joaoal

4 Answers


RAM is first-come-first-serve. If userA runs 9 processes that allocate 10% of memory each, and then userB logs on, userB will see only 10% of memory left. In the event that memory is exhausted, Linux will start killing processes. The OOM killer is not tuned for multi-user, as far as I know, so it may be unfair in this scenario.
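For what it's worth, you can inspect each process's current OOM "badness" score under `/proc`; when memory runs out, the kernel tends to kill the highest-scoring process first:

```shell
# Read this shell's OOM-killer score (higher = killed sooner when RAM runs out).
cat /proc/self/oom_score
# oom_score_adj (-1000..1000) lets you bias the choice; the default is 0.
cat /proc/self/oom_score_adj
```

On a stock Ubuntu install nothing adjusts these scores per user, which is part of why the OOM killer can be unfair in a multi-user scenario.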

CPU time is generally allocated on a per-process basis, not per-user (but see below).

Any process that is ready to run (i.e., not sleeping, waiting on I/O, etc.) is considered for scheduling; processes that are not ready to run are ignored and so "don't count". (This is a slight oversimplification, but close enough.)

In the simplest model, if two users are running one process each, they each get roughly half of available CPU time. But if userA is running 10 processes, and userB is running 1 process, then userA gets 90% of CPU and userB gets 10% of CPU (all other things being equal).

However, the Linux scheduler can refine this by grouping processes together, and then allocating CPU time between those groupings.

Further, Linux has the capability to automatically group processes based on the session ID (typically associated with terminals, terminal windows, and/or X login sessions). This is called "autogrouping". The goal is that a single user running a heavy background task in one window, and an interactive task in another window, will still see responsive interactive performance.

Both of these capabilities are enabled by default on Ubuntu, as far as I can determine.
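One way to check the autogrouping setting yourself (assuming the standard sysctl path; the file is absent if the kernel was built without CONFIG_SCHED_AUTOGROUP):

```shell
# Report whether scheduler autogrouping is on (1) or off (0).
f=/proc/sys/kernel/sched_autogroup_enabled
if [ -r "$f" ]; then
    echo "autogrouping: $(cat "$f")"
else
    echo "autogrouping: not available on this kernel"
fi
```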

I cannot find information on how task groups and/or autogrouping behave in a multi-user workload. In theory, if the scheduler put each user in a separate task group, then users would always get balanced access to the CPU (50/50 for two users). However, I don't find anything that says this will happen automatically.


Ben Scott
  • "RAM is first-come-first-serve." - unless the defaults have been changed by Ubuntu, that is not exactly true. Linux will always promise to allocate some memory (assuming there is VA space). If a program actually needs the memory (as opposed to just allocating it) AND Linux cannot page memory out to a backing file (or to swap, for anonymous mappings), the [OOM killer](https://linux-mm.org/OOM_Killer) will be invoked. It takes multiple things into account, but it is likely that a process using 90% of memory will be killed. – Maciej Piechotka Dec 01 '17 at 22:44
  • @MaciejPiechotka - Well, RAM is still first-come-first-serve, but you're right that I should mention how Linux handles an out-of-memory condition. It's not exactly what I'd call graceful, but it should be mentioned. I'll revise. Thanks. – Ben Scott Dec 05 '17 at 16:08
  • This answer is not complete. The 'ulimit' values and the 'nice' values are the biggest determinants of who gets the CPU. Also heavily involved are the 'size' of the application and the ratio between I/O time and CPU time of each application. Linux also has an execution/dispatch counter for each process, so even a very low priority process will get some CPU time. When Linux is running more processes than can fit into memory, the 'extra' processes are paged out to 'virtual' memory, not necessarily the whole process. – user3629249 Dec 07 '17 at 08:58
  • cont: then each process has its total pages of memory allocated, but (in crowded conditions) pages that are not currently being executed are 'paged out' to 'virtual' memory. This leaves a 'working set' of pages of the allocated memory that Linux will try to keep in memory. When a page is about to be executed, read from, or written to, and the desired page is not in memory, a 'page fault' event occurs: the least recently used memory page is written to 'virtual' memory and the needed page is read into memory and executed. If things get really crowded, ... – user3629249 Dec 07 '17 at 09:04
  • cont: then a 'memory threshing' condition can occur, where all that is being accomplished is 'page swapping' – user3629249 Dec 07 '17 at 09:05
  • @user3629249: OP stipulated defaults, so nice and ulimit do not apply. "size of the application" is vague. I/O time is not CPU time. Again, priority is equal per OP. "Virtual memory" refers to memory as presented by the MMU in protected/long mode. What you are referring to is called simply "swapping". "thrashing" (not "threshing") is the usual term for when the system spends so much time swapping that performance is significantly impacted. Memory available to a process consists of RAM and swap space; there was no need to address that explicitly. – Ben Scott Dec 07 '17 at 12:22

If you need to limit memory usage on the same server, your best bet will be to either

  1. Use two Virtual Machines, ideally KVM so that you can use the existing Ubuntu server to host the VMs. However, this will prevent you from easily sharing unused memory from one user with another.
  2. Use cgroups to limit resource usage:
     - https://askubuntu.com/questions/510913/how-to-set-a-memory-limit-for-a-specific-process
     - http://www.fernandoalmeida.net/blog/how-to-limit-cpu-and-memory-usage-with-cgroups-on-debian-ubuntu/
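As a rough sketch of option 2 (untested; requires root, and both the group name `rsession` and the 4 GiB figure are made-up illustrations), on a cgroup-v1 system such as Ubuntu 16.04 you could create a memory cgroup and launch R inside it:

```shell
# Sketch only: create a memory cgroup and run R under a ~4 GiB cap.
sudo apt-get install cgroup-tools    # provides cgcreate/cgexec
sudo cgcreate -g memory:/rsession    # 'rsession' is a made-up group name
echo $((4 * 1024 * 1024 * 1024)) | \
  sudo tee /sys/fs/cgroup/memory/rsession/memory.limit_in_bytes
sudo cgexec -g memory:/rsession R    # R now cannot exceed the cap
```

A process in the group that exceeds the cap gets pushed to swap or OOM-killed within the group, rather than taking down other users' sessions.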
zymhan
  • +1 for redirecting to 'cgroups'. If we run into trouble some day I'll definitely give it a try. – joaoal Dec 01 '17 at 15:54

By default, users are unlimited memory-wise in Ubuntu, and in this case it's "first come, first serve". In other words, User A could use up all the memory and leave nothing for a second user.

Note though: if you configure limits, they will always be the same and not dependent on the number of current users, so you will restrict your users even when they are alone on the machine.
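For example, a per-process address-space cap set with the shell's `ulimit` builtin is one fixed number no matter how many users are logged in (the 8 GiB value here is purely illustrative):

```shell
# Run in a subshell so the cap does not stick to the login shell.
(
  ulimit -v $((8 * 1024 * 1024))  # -v takes KiB, so this is ~8 GiB
  ulimit -v                       # prints the effective limit: 8388608
)
```

A lone user on the machine would still be held to the same 8 GiB, which is the drawback described above.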

For the CPU, things are a bit better: the kernel scheduler will distribute CPU time between processes (not users!).

Sven
  • Memory can be over-committed so even if the first user has used up all the memory it is still possible that the second user will get some. There are three things which can happen when applications try to use too much memory. The kernel may eventually refuse to allocate any more. The system may slow down. The kernel may decide to kill processes to free up memory. – kasperd Dec 01 '17 at 15:09

This will depend on what background processes are doing, which versions of the software were installed with Ubuntu, whether cron jobs are running, etc. The only real way to find out is to check RAM usage in all the scenarios you are interested in; no one will be able to tell you, since even things like the number of processors and network cards will affect RAM usage.
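A simple way to take those measurements is to snapshot `/proc/meminfo` before and during a simulation run and compare the numbers (`MemAvailable` requires kernel 3.14+, which Ubuntu 16.04 has):

```shell
# Print total and currently available memory; values are in kB.
awk '/^MemTotal:|^MemAvailable:/ {print $1, $2, $3}' /proc/meminfo
```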

You should be able to use control groups to limit the RAM usage, but I don't know if you need root permissions for that. Ideally you would create multiple virtual machines and allocate resources that way.

user
  • Would it also depend on what foreground processes are doing? – Danila Ladner Dec 01 '17 at 14:24
  • @DanilaLadner Absolutely. Every process on your system that is running will require system resources (CPU, RAM, I/O). Most of the processes running on an Ubuntu server with nothing else happening are quite minimal, however. Unless you are also using this computer to serve files or host a database, then the only thing you need to focus on is the foreground processes. The answer given above is not very accurate, and is not going to lead you on a productive path. – zymhan Dec 01 '17 at 14:27