
We are developing an app that is to be deployed on-site at various installations (not cloud). Our OEM partner is asking us to provide them with an ISO so they can quickly provision new servers. Our app is built around containers, and we have a private internet-facing registry set up so we can pull the latest passing builds. I'm not yet sure whether the OEM partner will be able to pull these images themselves, so we are investigating the possibility of pre-packing the Docker images along with the ISO, but are having some difficulties. Some things we've tried:

  1. systemback - We tried provisioning a fresh Ubuntu install with our preferred setup (as defined by an Ansible role we have) and then capturing the result with Systemback. Upon reinstalling from the resulting ISO we are met with a Docker error: Error response from daemon: open /var/lib/docker/aufs/layers/blahblahblah: no such file or directory, similar to #22343

  2. chroot jail - Again, we tried creating our user and installing Docker, but upon trying to pull our images we're greeted with: failed to register layer: Error processing tar file(exit status 1): invalid argument, regardless of what image we pull (even the official Docker ubuntu image, for example). Google is of no help with this error.

  3. RancherOS - Here we found instructions to pre-pack Docker images, but not how to bundle them with the ISO. It looks like we have the same use case as #1449, but there's no real solution.

Now all I can think to try next is to `docker save` our images and include the tarballs with the ISO. Then, when first booting from the ISO, a script would check whether the Docker images exist and, if they don't, `docker load` each one and run them. This seems extremely hacky and unreliable, though, so I was wondering if anyone has any experience with this sort of thing and might be able to point me in the right direction.
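A minimal sketch of what that first-boot check could look like, assuming each bundled tarball is named after its image (the directory path and the `name.tar -> name:latest` convention are my own placeholders, not part of the setup described above):

```shell
#!/usr/bin/env bash
# First-boot helper (sketch): load every bundled tarball whose image is not
# already in the local Docker cache. Assumes each tarball is named after its
# image, e.g. my-app.tar -> my-app:latest; adjust to your own convention.
load_bundled_images() {
    local dir="$1"
    for tar in "$dir"/*.tar; do
        [ -e "$tar" ] || continue
        local image="$(basename "$tar" .tar):latest"
        # `docker image inspect` exits non-zero when the image is absent
        if ! docker image inspect "$image" >/dev/null 2>&1; then
            docker load -i "$tar"
        fi
    done
}

# e.g. load_bundled_images /opt/bundled-images
```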

030
DTI-Matt
  • I wish I had a longer answer but I think what you're suggesting seems like the most appropriate option, though I'd be tempted to bundle Puppet or Chef in with the ISO too so you can 're-pull' as needed. – Chopper3 Feb 20 '17 at 15:50
  • Not sure why you think "this seems extremely hacky" as it's exactly what `docker load` is for. – Michael Hampton Feb 23 '17 at 14:55
  • @MichaelHampton yes, fair; the hacky part is more about needing a first-boot script that loads the images and then removes itself, and also adding systemd units to auto-start the containers on all subsequent boots. If the images and systemd units could be included in the ISO itself, it'd remove that level of uncertainty and dependence on scripts. – DTI-Matt Feb 23 '17 at 16:42
  • Did @MichaelHampton answer your question? If not, what questions do you still have? – 030 Feb 24 '17 at 09:15
  • Welcome to the painful world of offline platforms! :D – shearn89 Dec 17 '21 at 09:31

3 Answers


Docker save/load is exactly what I would do here.

Pre-populating /var/lib/docker directly, whether by installing a package or by copying it from CD/DVD, seems extremely hacky. It's also dependent on the storage engine you're using.

Docker's own save/load is how Docker likes to get images into and out of the local image cache, and is completely agnostic about what storage engine is used by the Docker installation.
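For the export side, a sketch of how the tarballs could be produced on a machine with registry access (image names and the tarball-naming scheme are placeholders of mine):

```shell
#!/usr/bin/env bash
# Sketch: export each image into a tarball destined for the ISO.
# Tarball names are derived from the image reference; '/' and ':' are
# replaced so e.g. my-app:1.2 becomes my-app_1.2.tar.
save_images() {
    local outdir="$1"; shift
    for image in "$@"; do
        docker save -o "$outdir/$(echo "$image" | tr '/:' '__').tar" "$image"
    done
}

# e.g. save_images /iso-staging/images my-app:1.2 nginx:stable
```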

Rob F
  • I agree that save/load in general is a more robust approach. However, in some cases it has important drawbacks, for example requiring that the docker daemon is fully operational during the operation. This can be awkward to achieve during installation, depending on the installer type. It's also quite slow. In my case, I was creating an image for an embedded device, where I controlled the entire stack, so I had no concerns about this approach. YMMV. – Matt Zimmerman Feb 11 '22 at 02:56

@Michael Hampton's answer seems about right to me.

What I would suggest is that you package the relevant systemd scripts into your ISO, and have those scripts deal with loading or building your images.

So, for say a nginx container, your unit file might look like:

[Unit]
Description=nginx
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill nginx
ExecStartPre=-/usr/bin/docker rm nginx
ExecStartPre=-/usr/bin/docker load -i /path/to/compressed/image
ExecStart=/usr/bin/docker run --rm [your config here]

If you preferred to build rather than load, you can do that too, by replacing:

ExecStartPre=-/usr/bin/docker load -i /path/to/compressed/image

With:

ExecStartPre=-/usr/bin/docker build --rm -t 'my-nginx:latest' /path/to/folder_with_Dockerfile

You can also use systemd dependencies to ensure things start up in the right order, and/or link containers together (see e.g. systemd's After/Before, PartOf, and so on).

In general, I don't think I would spend too much time worrying about only loading/building once. In particular, if you were going to build images (rather than load them), this would let you update a container simply by updating its local Dockerfile, which may be a useful feature to have if your registry will not be available.
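As a sketch of such a dependency, a hypothetical app.service (the unit name, container name, and image are my placeholders) that is ordered after the nginx unit shown earlier and stops/starts along with it might look like:

```ini
# app.service (hypothetical): ordered after nginx.service and bound to it,
# so stopping or restarting nginx also stops/restarts this container
[Unit]
Description=app
After=docker.service nginx.service
Requires=docker.service
PartOf=nginx.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill app
ExecStartPre=-/usr/bin/docker rm app
ExecStart=/usr/bin/docker run --rm --name app my-app:latest
```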

iwaseatenbyagrue

You can do this by pre-installing a /var/lib/docker tree. This directory contains all of the images, containers, etc. that are present in a docker environment, so you'll want to start from a clean one to create your /var/lib/docker containing only the desired images etc. When docker starts up, it will use the existing directory structure which already has the images pre-installed.
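A sketch of how that tree could be packed on the build machine and dropped in place by the installer (paths are placeholders; the Docker-specific preparation of stopping the daemon and pulling only the wanted images into a clean /var/lib/docker is assumed to have been done already):

```shell
#!/usr/bin/env bash
# Sketch: pack a pre-populated docker root on the build machine and unpack it
# on the target before docker's first start.
pack_docker_root() {
    # e.g. pack_docker_root /var/lib/docker /iso-staging/docker-root.tar.gz
    tar -C "$(dirname "$1")" -czf "$2" "$(basename "$1")"
}

unpack_docker_root() {
    # e.g. unpack_docker_root /cdrom/docker-root.tar.gz /var/lib
    # Run while the docker daemon is stopped; the storage driver on the target
    # must match the one in use when the tree was created.
    tar -C "$2" -xzf "$1"
}
```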

Matt Zimmerman