We've got a Jenkins CI server that fetches our code from Git, builds it, makes a Docker image, and then ships it off to some production servers.
Our project is primarily written in Python, so "building" involves running
pip install -r requirements.txt
That works fine, except it's kind of slow: it has to fetch packages over the network, and it has to compile C extensions for a few of them (and lxml is not small!).
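For context, the relevant part of our Dockerfile looks roughly like this (the base image and paths are illustrative, not our exact setup):

    FROM python:2.7
    # copying the dependency list and installing it is the slow step
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt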
In development, I've had success using pip-accel to speed up this process. It has the same interface as pip, but it caches both the Python downloads and the built C code, so

pip-accel install -r requirements.txt

is fast.
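As far as I can tell from the pip-accel docs, the cache lives under ~/.pip-accel by default and can be relocated with the PIP_ACCEL_CACHE environment variable (worth double-checking against your version), something like:

    # point pip-accel's cache at a shared location instead of ~/.pip-accel
    export PIP_ACCEL_CACHE=/var/cache/pip-accel
    pip-accel install -r requirements.txt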
I want to do this for our production builds, but I'm running into some obstacles.
Obviously, pip-accel needs a directory in which to store its cache. Since our CI server is what runs the builds, that's the logical place to put it. But the install command runs inside a fresh Docker container, so it can't just access a shared directory on that server.
Docker "volumes" seem like they're designed for sharing directories with containers, but our build happens (surprise surprise) inside docker build
, and only docker run
lets you attach volumes. You can't attach volumes with docker build
.
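To illustrate the asymmetry (image name and paths are made up):

    # at run time, bind-mounting a host directory works fine
    docker run -v /var/cache/pip-accel:/root/.pip-accel my-image \
        pip-accel install -r requirements.txt

    # at build time, there's no -v (or equivalent) flag
    docker build -t my-image .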
Is there something I'm missing? How can I run docker build and share a cache directory on the host with the container the build runs in?