
I am trying to learn how cgroups v2 work. I did this:

mount -t cgroup2 none /mnt/cgroup2

That gave me a list of objects in /mnt/cgroup2

root@ubuntu-s-1vcpu-1gb-lon1-01:~# ls -la /mnt/cgroup2/
total 4
dr-xr-xr-x  5 root root    0 Sep  2 16:04 .
drwxr-xr-x  3 root root 4096 Sep  2 16:05 ..
-r--r--r--  1 root root    0 Sep  2 16:04 cgroup.controllers
-rw-r--r--  1 root root    0 Sep  2 16:04 cgroup.max.depth
-rw-r--r--  1 root root    0 Sep  2 16:04 cgroup.max.descendants
-rw-r--r--  1 root root    0 Sep  2 16:04 cgroup.procs
-r--r--r--  1 root root    0 Sep  2 16:04 cgroup.stat
-rw-r--r--  1 root root    0 Sep  2 16:07 cgroup.subtree_control
-rw-r--r--  1 root root    0 Sep  2 16:04 cgroup.threads
drwxr-xr-x  2 root root    0 Sep  2 16:04 init.scope
drwxr-xr-x 59 root root    0 Sep  2 16:00 system.slice
drwxr-xr-x  3 root root    0 Sep  2 15:59 user.slice

However, the file /mnt/cgroup2/cgroup.controllers is empty. I thought it should contain the list of controllers; is that not correct? I am reading the docs here: http://man7.org/linux/man-pages/man7/cgroups.7.html

Thomas

1 Answer


cgroup controllers can be mounted in only one hierarchy (v1 or v2). If a controller is already mounted on a legacy v1 hierarchy, it will not show up in the cgroup2 hierarchy. This limitation is documented in the kernel's cgroup-v2 documentation (the "Mounting" section) as well as in the cgroups(7) manual page:

It is not possible to mount the same controller against multiple cgroup hierarchies. For example, it is not possible to mount both the cpu and cpuacct controllers against one hierarchy, and to mount the cpu controller alone against another hierarchy. It is possible to create multiple mount points with exactly the same set of comounted controllers. However, in this case all that results is multiple mount points providing a view of the same hierarchy.
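
A quick way to see which controllers are currently tied to a v1 hierarchy is the hierarchy column of /proc/cgroups; this is only a sketch, but a nonzero value there means the controller is attached to a v1 hierarchy and will therefore not appear in the v2 cgroup.controllers file:

# List every controller the kernel knows about, together with the ID of the
# v1 hierarchy it is attached to (0 = not attached to any v1 hierarchy):
cat /proc/cgroups

# Only the controllers that are still held by a v1 hierarchy:
awk 'NR > 1 && $2 != 0 {print $1}' /proc/cgroups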

Note that on many systems, the v1 controllers are automatically mounted under /sys/fs/cgroup; in particular, systemd(1) automatically creates such mount points.
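
To check what is actually mounted on a given machine, something like this should do (findmnt is part of util-linux; mount | grep cgroup works as well):

# Show all mounted cgroup (v1) and cgroup2 filesystems:
findmnt -t cgroup,cgroup2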

To avoid this legacy behavior, boot with the systemd.unified_cgroup_hierarchy=1 kernel command-line option. According to the NEWS entry for systemd v233, this option might become the default in the future.
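
On a GRUB-based system the option can be added roughly as follows; the exact file and update command depend on the distribution, so treat this as a sketch:

# Add the option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="... systemd.unified_cgroup_hierarchy=1"

# Then regenerate the GRUB configuration and reboot:
update-grub                               # Debian/Ubuntu
# grub-mkconfig -o /boot/grub/grub.cfg    # most other distributions
reboot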

To illustrate, these cgroup filesystems are mounted on an Arch Linux system using systemd 239:

tmpfs    on  /sys/fs/cgroup                   type  tmpfs    (ro,nosuid,nodev,noexec,mode=755)
cgroup2  on  /sys/fs/cgroup/unified           type  cgroup2  (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup   on  /sys/fs/cgroup/systemd           type  cgroup   (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup   on  /sys/fs/cgroup/cpu,cpuacct       type  cgroup   (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup   on  /sys/fs/cgroup/cpuset            type  cgroup   (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup   on  /sys/fs/cgroup/net_cls,net_prio  type  cgroup   (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup   on  /sys/fs/cgroup/memory            type  cgroup   (rw,nosuid,nodev,noexec,relatime,memory)
cgroup   on  /sys/fs/cgroup/pids              type  cgroup   (rw,nosuid,nodev,noexec,relatime,pids)
cgroup   on  /sys/fs/cgroup/blkio             type  cgroup   (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup   on  /sys/fs/cgroup/rdma              type  cgroup   (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup   on  /sys/fs/cgroup/freezer           type  cgroup   (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup   on  /sys/fs/cgroup/perf_event        type  cgroup   (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup   on  /sys/fs/cgroup/devices           type  cgroup   (rw,nosuid,nodev,noexec,relatime,devices)
cgroup   on  /sys/fs/cgroup/hugetlb           type  cgroup   (rw,nosuid,nodev,noexec,relatime,hugetlb)

The read-only /sys/fs/cgroup/unified/cgroup.controllers file is initially empty. After unmounting the v1 cpu,cpuacct hierarchy, the cpu controller becomes available in it. Unfortunately, some controllers (such as memory) do not become available even after unmounting all v1 cgroup filesystems, presumably because the old hierarchies still contain cgroups created by systemd and so cannot release their controllers. The controllers that do become available are:

cpu io rdma
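
The unmount/re-check step looks roughly like this, assuming the mount points shown above (systemd may still keep cgroups in the old v1 hierarchies, so not every controller will be freed this way):

# Release the v1 cpu,cpuacct hierarchy ...
umount /sys/fs/cgroup/cpu,cpuacct

# ... and see which controllers the v2 hierarchy now offers:
cat /sys/fs/cgroup/unified/cgroup.controllers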

When booting with systemd.unified_cgroup_hierarchy=1, no cgroup v1 filesystems are mounted at all:

cgroup2  on  /sys/fs/cgroup           type  cgroup2  (rw,nosuid,nodev,noexec,relatime,nsdelegate)

And now some more controllers become available:

cpu io memory pids rdma
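
Once the controllers show up, they still have to be enabled for child cgroups through cgroup.subtree_control. A minimal sketch, assuming cgroup2 is mounted at /sys/fs/cgroup and using an arbitrary cgroup name "demo" (on a systemd-managed system it is cleaner to do this inside a delegated subtree):

# Enable the memory and pids controllers for children of the root cgroup:
echo "+memory +pids" > /sys/fs/cgroup/cgroup.subtree_control

# Create a child cgroup; its own cgroup.controllers now lists memory and pids:
mkdir /sys/fs/cgroup/demo
cat /sys/fs/cgroup/demo/cgroup.controllers

# Apply a memory limit and move the current shell into the new cgroup:
echo 100M > /sys/fs/cgroup/demo/memory.max
echo $$ > /sys/fs/cgroup/demo/cgroup.procs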
Lekensteyn
  • Hi @Lekensteyn, in your example is there a way to make the `memory` controller available without rebooting the node? – user2279952 Jul 22 '22 at 04:26