
On Ubuntu 14.04, I have created an unprivileged container that I can manually start and stop.

But I would like this to start and stop along with the system.

I have added the following to the container's configuration:

lxc.start.auto = 1
lxc.start.delay = 5

However, the system scripts don't seem to pick up unprivileged containers.

There is a thread related to this on linuxcontainers.org, but the solution seems to be restricted to root user.

Is there a clean way to do this for a non-root user (with root user's consent)?

HRJ

9 Answers


I think I have found a better solution than the ones currently presented here. In part because as far as I can tell cgmanager is dead, in part because my solution doesn't feel like a hacky workaround, but mostly because this discussion still shows up when searching for a solution to the problem. It's pretty simple actually: use systemd user mode.

Granted, if you don't use systemd this solution isn't going to help. In that case I'd advise you to figure out whether your init system has some way of allowing unprivileged users to run services at boot, and use that as a starting point.

Using systemd user mode to autostart unprivileged lxc containers

I'm assuming you have unprivileged lxc containers working properly and that running lxc-autostart as the container's user works. If so, do the following:

  1. Create the file ~/.config/systemd/user/lxc-autostart.service in the home of whatever user has the lxc containers:
[Unit]
Description="Lxc-autostart for lxc user"

[Service]
Type=oneshot
ExecStart=/usr/bin/lxc-autostart
ExecStop=/usr/bin/lxc-autostart -s
RemainAfterExit=1

[Install]
WantedBy=default.target
  2. Then, as that user, run:
systemctl --user enable lxc-autostart

(Note: the --user option tells systemctl you're using it in user mode. All of the things I normally do with systemctl (start, stop, status, enable, etc.) work with --user.)

  3. Then run the following, where $user is the name of the user that has the lxc containers:
sudo loginctl enable-linger $user

This is necessary for systemd to start a systemd user instance for $user at boot. Otherwise it would only start one at the moment $user logs in.

For more information I'd recommend the archlinux wiki systemd/timer page and the systemd man pages.

Accessing a user's systemd instance as root

You can actually start/stop/whatever a user's systemd service as root; however, this requires you to set the XDG_RUNTIME_DIR environment variable. Assume $user is the user whose instance you want to access and $uid is its uid; then this is how you'd start the lxc-autostart.service defined above:

sudo -u $user XDG_RUNTIME_DIR=/run/user/$uid systemctl --user start lxc-autostart
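If you only know the username, the uid (and hence the runtime directory) can be derived with id. A minimal sketch, using root/uid 0 purely as a stand-in for the container owner:

```shell
# Derive XDG_RUNTIME_DIR for a given user ("root" is a stand-in here;
# substitute the name of the user that owns the containers).
user=root
uid=$(id -u "$user")
XDG_RUNTIME_DIR="/run/user/$uid"
echo "$XDG_RUNTIME_DIR"
# prints /run/user/0 for root
```

This makes the sudo invocations above scriptable without hard-coding the uid.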

You can even use systemd-run to run arbitrary commands as that user in a way that doesn't break lxc. I'm using the following commands to stop/start my containers before/after backup, where $name is the name of the lxc container that's being backed up:

sudo -u $user XDG_RUNTIME_DIR=/run/user/$uid systemd-run --user --wait lxc-stop -n $name
sudo -u $user XDG_RUNTIME_DIR=/run/user/$uid systemd-run --user --scope lxc-start -n $name

(Note that without --wait systemd-run doesn't block until the container is stopped.)

Wieke

I'd recommend using the handy @reboot alias in Ubuntu's cron to run lxc-autostart.

As the user that owns the unprivileged container, run crontab -e and add the following line:

@reboot lxc-autostart

encoded
  • This sounds great. However, there doesn't seem to be a way to run a command on shutdown (via cron). Any ideas? – HRJ Jan 30 '15 at 01:48
  • I'm not aware of any simple ways to run a job at shutdown. You'd probably have to, as root, add an [upstart job](http://upstart.ubuntu.com/cookbook/#session-job) to shutdown containers for each user that owns them. You could look at `/etc/init/lxc.conf` for pointers. It's the upstart job that starts the privileged containers. It shouldn't be too hard to copy it and modify it to shutdown unprivileged containers as well. – encoded Jan 30 '15 at 04:29
  • 1
    It occurs to me that, since each process in the container is visible from the host, the container likely doesn't need anything special to shut it down, each process should receive the TERM signal from the host. Chances are you don't need to do anything special on shutdown. If you want to run some scripts or other such things on shutdown, that's different, but most of the processes should have a chance to shutdown normally. – encoded Jan 30 '15 at 19:30
  • Does the crontab approach work? On Ubuntu 14.04, I get the "call to cgmanager_move_pid_sync failed: invalid request" error which occurs because PAM, namely libpam-systemd is not involved in the user changing process. You can see in `/proc/self/cgroup` that it contains sequences like `/user/0.user/1.session` instead of `/user/1000.user/1.session` – Daniel Alder May 23 '15 at 12:20

In case anyone stumbles onto this Q&A looking for the answer to autostarting unprivileged LXC containers (I certainly check back here a lot), here is a solution that works well and which I followed to get it working on my server:

http://blog.lifebloodnetworks.com/?p=2118 by Nicholas J Ingrassellino.

In a nutshell, it involves creating two scripts, and they work together at startup to allow LXC to start the unprivileged containers of each listed user without having to actually log in to the user account; in other words, executing the command as the user with all the CGroups magic intact. In keeping with SO best practice, I'll quote the bones of it here but it's worth reading his original article.

Allow our user account to use the bridge…

echo "$USER veth lxcbr0 1024" | sudo tee -a /etc/lxc/lxc-usernet

Create Upstart script… In /etc/init/lxc-unprivileged.conf add…

description "LXC Unprivileged Containers"
author "Mike Bernson <mike@mlb.org>"

start on started lxc

script
    USERS="[user]"

    for u in $USERS; do
        cgm create all lxc$u
        cgm chown all lxc$u $(id -u $u) $(id -g $u)
        lxc-autostart -L -P /home/$u/.local/share/lxc | while read line;
        do
            set -- $line
            /usr/local/bin/startunprivlxc lxc$u $u $1
            sleep $2
        done
    done
end script

Make sure to replace [user] with your user account.

Create the container start script… In /usr/local/bin/startunprivlxc add…

#!/bin/sh

cgm movepid all $1 $$
sudo -iH -u $2 -- lxc-start -n $3 -d

…and make it executable…

sudo chmod +x /usr/local/bin/startunprivlxc

I'd just like to emphasise that it does appear to work safely and correctly, and doesn't require root to SSH into the other user accounts.

There is also more on the subject (touching on related gotchas) here: https://gist.github.com/julianlam/4e2bd91d8dedee21ca6f which can be useful in understanding why this is the way it is.


I've written a small script to work around the issue; just follow the commented instructions.

Thiago Padilha

There is a way to start an unprivileged container that is not owned by root without enable-linger. System-wide configuration files must be adjusted, however. The solution involves systemd and was tested on Ubuntu 20.04 and Ubuntu 18.04.

The crucial part is the PAM configuration, which assigns the proper subuid range to the process. It seems cron's PAM configuration is suitable for this purpose, but reusing it directly could lead to confusing logs, so let's create a dedicated name (it is used in the .service file as well). The /etc/pam.d/lxc-unpriv file:

@include cron

Then a .service file in /etc/systemd/system/:

[Unit]
Description=LXC unprivileged container CONTAINERNAME
Wants=lxc-net.service
After=lxc-net.service
Wants=lxcfs.service
After=lxcfs.service

[Service]
User=CONTAINERUSER
Group=CONTAINERGROUP
PAMName=lxc-unpriv
Type=simple
KillMode=mixed
ExecStart=/usr/bin/lxc-start --name CONTAINERNAME --foreground
ExecStop=/usr/bin/lxc-stop --name CONTAINERNAME

If the container is created for some network service, it is even possible to add a .socket file and postpone container startup until the first request.
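As a sketch of that socket-activation idea (the port and unit names below are hypothetical, not from the answer above): a .socket unit could listen on the host and trigger a matching service on the first connection. Note that lxc-start itself does not consume the activated socket, so the triggered service would need to forward the traffic into the container, e.g. via systemd-socket-proxyd.

```
# /etc/systemd/system/CONTAINERNAME-proxy.socket (hypothetical)
[Unit]
Description=Socket activation for container CONTAINERNAME

[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target
```

Enabling this socket instead of the container service means the container only starts when someone actually connects to port 8080.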


SORRY: answered too soon. It didn't work even though lxc-ls shows "AUTOSTART" as "YES".

Here is a link with a lot more useful info, and maybe someone can make use of it: http://www.geeklee.co.uk/unprivileged-privileged-containers-ubuntu-14-04-lxc/

I landed on this page because I had the same issue. After reading this thread, I realized that lxc-create cannot write to the usual "/var/lib/lxc/" directory if it is not run with sudo.

I looked around and located the rootfs for my unprivileged container in "~/.local/share/lxc", and put the two lines from the question into the config file in that directory.

I looked at the template I used, "lxc-download", for clues, but I think that path was passed in when "lxc-download" was invoked. I have not looked at how the system looks for unprivileged containers during boot.

lxc n00b

I am running each unprivileged container as a same-named user for better isolation, and this is how I do it:

#!/bin/bash

LXC_CONTAINERS="container1 container2"

for LXC_CONTAINER in $LXC_CONTAINERS; do
 su - $LXC_CONTAINER -c "lxc-start -n $LXC_CONTAINER --logfile /home/$LXC_CONTAINER/.local/share/lxc/lxc-$LXC_CONTAINER.log --logpriority DEBUG"
done
sam

Assuming (and assumptions are the mother of all ways to screw things up) that you log in as the user that "owns" the unprivileged LXC container, the following command should address what you are looking for:

$ echo "lxc-start -n LXC-CONTAINER-NAME -d" >> ~/.bashrc

This will simply run the above command when you log in via bash. This also assumes that bash is the login shell. Please replace the name LXC-CONTAINER-NAME with the name of the LXC container you'd like to start.

Glueon

I have used a different approach, and it is working.

1º Add the following entries to the container's config file:

# autostart config
lxc.start.auto = 1
lxc.start.delay = 5

2º Create a trust relationship between the container user and itself on the same server:

userlxc@GEST-4:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/userlxc/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/userlxc/.ssh/id_rsa.
Your public key has been saved in /home/userlxc/.ssh/id_rsa.pub.
The key fingerprint is:
c9:b4:e1:f3:bf:a3:25:cc:f8:bc:be:b6:80:39:59:98 userlxc@GEST-AMENCIA-4
The key's randomart image is:
(randomart omitted)

userlxc@GEST-4:~$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
userlxc@GEST-4:~$ ls -lrt .ssh/authorized_keys
-rw-rw-r-- 1 userlxc userlxc 404 Nov 19 17:23 .ssh/authorized_keys

Check the SSH connection; you must be able to use it without a password:

userlxc@GEST-4:~$ ssh userlxc@localhost "lxc-ls --fancy"

NAME STATE IPV4 IPV6 AUTOSTART

EXTLXCCONT01 STOPPED - - YES
UBUSER1404USERCONT01-test STOPPED - - NO
UBUSER1404USERLXCCONT01 STOPPED - - NO

3º Create a crontab entry for the container owner:

@reboot ssh userlxc@localhost "lxc-autostart"