LXD
Setup
Required software
Install the lxd package, then enable lxd.socket. Alternatively, you can enable lxd.service directly, for example if you want instances to autostart.
Setup for unprivileged containers
It is recommended to use unprivileged containers (see Linux Containers#Privileged containers or unprivileged containers for an explanation of the difference).
For this, modify both /etc/subuid and /etc/subgid (if these files are not present, create them) to contain the mapping to the containerized uid/gid pairs for each user who shall be able to run the containers. The example below is simply for the root user (and systemd system unit):
You can either use usermod as follows:
usermod -v 1000000-1000999999 -w 1000000-1000999999 root
Or modify the above mentioned files directly as follows:
/etc/subuid
root:1000000:1000000
/etc/subgid
root:1000000:1000000
Now, every container will be started unprivileged by default.
For the alternative, see the Privileged containers section below.
Configure LXD
On the first start, LXD needs to be configured.
Run as root:
# lxd init
This will start an interactive configuration guide in the terminal that covers topics such as storage and networking.
You can find an overview in the official Getting Started Guide.
Accessing LXD as an unprivileged user
By default, the LXD daemon allows users in the lxd group access, so add your user to the group:
# usermod -a -G lxd username
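The new group membership takes effect with your next login. Alternatively, start a shell with the group applied and verify access (assuming the daemon is already running):
$ newgrp lxd
$ lxc list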
Usage
LXD consists of two parts:
- the daemon (the lxd binary)
- the client (the lxc binary)
The client is used to control one or more daemons, including remote LXD servers.
Overview of commands
You can get an overview of all available commands by typing:
$ lxc
Create a container
You can create a container with lxc launch, for example:
$ lxc launch ubuntu:20.04
Containers are based on images, which are downloaded from image servers or remote LXD servers.
You can see the list of already added servers with:
$ lxc remote list
You can list all images on a server with lxc image list, for example:
$ lxc image list images:
This will show you all images on one of the default servers: images.linuxcontainers.org
You can also search for images by adding terms like the distribution name:
$ lxc image list images:debian
Launch a container with an image from a specific server with:
$ lxc launch servername:imagename
For example:
$ lxc launch images:centos/8/amd64 centos
To create an amd64 Arch container:
$ lxc launch images:archlinux/current/amd64 arch
Create a virtual machine
Just add the --vm flag to lxc launch:
$ lxc launch ubuntu:20.04 --vm
Use and manage a container or VM
See "Manage instances" in the official Getting Started Guide of LXD.
Container/VM configuration (optional)
You can add various options to instances (containers and VMs).
See Configuration of instances in the official Advanced Guide of LXD for details.
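For example, CPU and memory limits can be set via lxc config set; limits.cpu and limits.memory are standard LXD configuration keys, and containername is a placeholder:
$ lxc config set containername limits.cpu 2
$ lxc config set containername limits.memory 2GiB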
Tips and tricks
Access the containers by name on the host
This assumes that you are using the default bridge, that it is named lxdbr0 and that you are using systemd-resolved.
# systemd-resolve --interface lxdbr0 --set-domain '~lxd' --set-dns $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)
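Note that systemd-resolve is deprecated in recent systemd releases; the equivalent resolvectl invocations would be:
# resolvectl dns lxdbr0 $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)
# resolvectl domain lxdbr0 '~lxd'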
You can now access the containers by name:
$ ping containername.lxd
Other solution
It seems that the systemd-resolve solution stops working after some time.
Another solution is to create a systemd-networkd network file (for example, /etc/systemd/network/lxdbr0.network) that contains the following (replace x and y to match your bridge IP):
[Match]
Name=lxdbr0

[Network]
DNS=10.x.y.1
Domains=~lxd
IgnoreCarrierLoss=yes

[Address]
Address=10.x.y.1/24
Gateway=10.x.y.1
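Then restart systemd-networkd to apply the configuration:
# systemctl restart systemd-networkd.service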
Use Wayland and Xorg applications
There are multiple methods to use GUI applications inside containers.
You can find an overview in the official Forum of LXD: https://discuss.linuxcontainers.org/t/overview-gui-inside-containers/8767
Method 1: Use the host's Wayland or Xorg Server
Summary: In this method, we grant containers access to the host's Wayland (and XWayland) or Xorg sockets.
1. Add the following devices to a container's profile.
See also: LXD-Documentation regarding Devices
General device for the GPU:
mygpu:
  type: gpu
Device for the Wayland Socket:
Notes:
- Adjust the Display (wayland-0) accordingly.
- Add the folders in /mnt and /tmp inside the container, if they do not already exist.
Waylandsocket:
  bind: container
  connect: unix:/run/user/1000/wayland-0
  listen: unix:/mnt/wayland1/wayland-0
  uid: "1000"
  gid: "1000"
  security.gid: "1000"
  security.uid: "1000"
  mode: "0777"
  type: proxy
Device for the Xorg (or XWayland) Socket:
Note: Adjust the Display Number accordingly (for example X1 instead of X0).
Xsocket:
  bind: container
  connect: unix:/tmp/.X11-unix/X0
  listen: unix:/mnt/xorg1/X0
  uid: "1000"
  gid: "1000"
  security.gid: "1000"
  security.uid: "1000"
  mode: "0777"
  type: proxy
2. Link the sockets to the right location inside the container.
Note: These scripts need to be run after each start of the container; you can automate this with systemd, for example (see the sketch after the scripts below).
Shell-Script to link the Wayland socket:
#!/bin/sh
mkdir /run/user/1000
ln -s /mnt/wayland1/wayland-0 /run/user/1000/wayland-0
Link the Xorg (or XWayland) socket:
#!/bin/sh
ln -s /mnt/xorg1/X0 /tmp/.X11-unix/X0
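As mentioned in the note above, running the link script at each container start can be automated with systemd. A minimal sketch of a system unit inside the container; the unit name and the script path /usr/local/bin/link-sockets are assumptions, so adjust them to wherever you placed your script:
/etc/systemd/system/link-sockets.service
[Unit]
Description=Link Wayland/Xorg sockets from proxy mounts

[Service]
Type=oneshot
ExecStart=/usr/local/bin/link-sockets

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable link-sockets.service.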
3. Add environment variables to the user's configuration inside the container.
Note: Adjust the Display Numbers and/or the filename (.profile) accordingly.
For Wayland:
$ echo "export XDG_RUNTIME_DIR=/run/user/1000" >> ~/.profile $ echo "export WAYLAND_DISPLAY=wayland-0" >> ~/.profile $ echo "export QT_QPA_PLATFORM=wayland" >> ~/.profile
For Xorg (or XWayland):
$ echo "export DISPLAY=:0" >> .profile
Reload the .profile:
$ . ~/.profile
4. Install necessary software in the container.
5. Start GUI applications.
Now you should be able to start GUI applications inside the container (via a terminal, for example) and have them appear as windows on your host's display. You can try glxgears, for example.
Privileged containers
If you want to set up a privileged container, you must provide the config key security.privileged=true.
Either during container creation:
$ lxc launch ubuntu:20.04 ubuntu -c security.privileged=true
Or, for an already existing container, you may edit the configuration; see the example below.
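For example, the key can be set directly with lxc config set (containername is a placeholder):
$ lxc config set containername security.privileged true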
Add a disk device
If you want to share a disk device from the host to a container, all you need to do is add a device to your container. The virtual device needs a name (only used internally in the LXC configuration file), a path on the host's filesystem pointing to the disk you want to mount, as well as a desired mountpoint on the container's filesystem.
$ lxc config device add containername virtualdiskname disk source=/path/to/host/disk/ path=/path/to/mountpoint/on/container
Do not forget that if you mount a disk onto an unprivileged container, the container may not be able to access that disk's contents, even if you are logged in as root on the container. You will need to edit the permissions of your source directory on the host to allow unprivileged read/write access.
Another method for read/write access is to set the shift config key of the device to true (see the example below).
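For example, using the device added above (exact syntax may vary slightly between LXD versions):
$ lxc config device set containername virtualdiskname shift true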
See also: LXD Documentation on disk devices
Troubleshooting
lxd-agent inside a virtual machine
Inside some virtual machine images, the lxd-agent is not enabled by default.
In this case, you have to enable it manually, for example by mounting a network share. This requires console access with a valid user.
1. Login with lxc console:
Replace virtualmachine-name accordingly.
$ lxc console virtualmachine-name
Login as root:
$ su root
Mount the network share:
# mount -t 9p config /mnt/
Go into the folder and run the install script (this will enable the lxd-agent inside the VM):
# cd /mnt/
# ./install.sh
After a successful install, reboot with:
# reboot
Afterwards, the lxd-agent is available and lxc exec should work.
Check kernel config
By default, the Arch Linux kernel is compiled correctly for Linux Containers and its frontend LXD. However, if you are using a custom kernel or changed the kernel options, the kernel might be configured incorrectly. Verify that your kernel is properly configured:
$ lxc-checkconfig
Resource limits are not applied when viewed from inside a container
Install lxcfs and start lxcfs.service; lxd will need to be restarted. Enable lxcfs.service for it to be started at boot time.
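A typical sequence, assuming lxcfs.service as named above:
# systemctl enable --now lxcfs.service
# systemctl restart lxd.service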
Starting a virtual machine fails
If you see the error:
Error: Required EFI firmware settings file missing: /usr/share/ovmf/x64/OVMF_VARS.ms.fd
Install the required EFI firmware with the edk2-ovmf package.
Arch Linux does not distribute secure boot signed ovmf firmware; to boot virtual machines, you need to disable secure boot for the time being:
$ lxc launch ubuntu:18.04 test-vm --vm -c security.secureboot=false
This can also be added to the default profile by doing:
$ lxc profile set default security.secureboot=false
No IPv4 with systemd-networkd
Starting with version 244.1, systemd detects if /sys is writable by containers. If it is, udev is automatically started and breaks IPv4 in unprivileged containers. See commit bf331d8 and the discussion on linuxcontainers.
On containers created past 2020, there should already be a systemd override in place to work around this issue; create it if it is not present (see the sketch below).
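A minimal sketch of such an override, assuming the BindReadOnlyPaths approach from the linked discussion (the path and contents may differ on your system):
/etc/systemd/system/systemd-networkd.service.d/lxc.conf
[Service]
BindReadOnlyPaths=/sys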
You could also work around this issue by setting raw.lxc: lxc.mount.auto = proc:rw sys:ro in the profile of the container to ensure /sys is read-only for the entire container, although this may be problematic, as per the linked discussion above.
No networking with ufw
When running LXD on a system with ufw, the output of lxc list will contain an empty IPv4 field, outbound requests will not be forwarded out of the container, and inbound requests will not be forwarded into the container. As seen in a thread on LXC's Discourse instance, ufw will block traffic from LXD bridges by default. The solution is to configure two new ufw rules for each bridge:
# ufw route allow in on lxdbr0
# ufw allow in on lxdbr0
For more information on these two commands, check out this thread which describes these commands and their limitations in more detail.
Uninstall
Stop and disable lxd.service and lxd.socket. Then uninstall the lxd package.
If you uninstalled the package without disabling the service, you might have a lingering broken symlink under /etc/systemd/system/.
If you want to remove all data:
# rm -r /var/lib/lxd
If you used any of the example networking configuration, you should remove those as well.