
I'm trying to integrate Amazon's new Elastic Container Registry (ECR) with my Jenkins build service. I'm using the Cloudbees Docker Build & Publish plugin to build container images and publish them to a registry.

To use ECR instead of my private registry, I ran the AWS CLI command aws --region us-east-1 ecr get-login, which prints a docker login command to run - but I just copied out the password and created a Jenkins credential of type "Username with password" from it (the username is always "AWS").

And that works fine! The problem is that the ECR password generated by the AWS CLI is only valid for 12 hours, so right now I have to manually regenerate the password twice a day and update the Jenkins credentials screen, otherwise my builds start failing.

Is there a way to generate permanent ECR login tokens, or somehow automate the token generation?

Guss

4 Answers


This is now possible using amazon-ecr-credential-helper as described in https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/.

The short of it is:

  • Ensure that your Jenkins instance has the proper AWS credentials to pull/push with your ECR repository. These can be in the form of environment variables, a shared credential file, or an instance profile.
  • Place the docker-credential-ecr-login binary in one of the directories in $PATH.
  • Write the Docker configuration file under the home directory of the Jenkins user, for example /var/lib/jenkins/.docker/config.json, with the content {"credsStore": "ecr-login"}.
  • Install the Docker Build and Publish plugin and make sure that the jenkins user can contact the Docker daemon.
  • Finally, create a project with a build step that publishes the Docker image.
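As a sketch, the Docker configuration part of the steps above can be scripted like this. The config directory is a variable here so the snippet is easy to try anywhere; on a real Jenkins master it would be /var/lib/jenkins/.docker:

```shell
#!/bin/sh
# Sketch of the Docker config part of the setup above. On the Jenkins
# master DOCKER_CONFIG_DIR would be /var/lib/jenkins/.docker; it is
# parameterised here so the snippet can be tried anywhere.
DOCKER_CONFIG_DIR="${DOCKER_CONFIG_DIR:-./jenkins-docker-config}"

mkdir -p "${DOCKER_CONFIG_DIR}"
# Tell Docker to delegate all registry auth to docker-credential-ecr-login
printf '%s\n' '{"credsStore": "ecr-login"}' > "${DOCKER_CONFIG_DIR}/config.json"
echo "wrote ${DOCKER_CONFIG_DIR}/config.json"
```

With this in place, Docker looks for a docker-credential-ecr-login executable on $PATH whenever it needs registry credentials, so no password is ever stored in Jenkins at all.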
Klugscheißer

As @Connor McCarthy said, while we wait for Amazon to come up with a better solution for more permanent keys, in the meantime we need to generate the keys on the Jenkins server ourselves somehow.

My solution is to have a periodic job that updates the Jenkins credentials for ECR every 12 hours automatically, using the Groovy API. This is based on this very detailed answer, though I did a few things differently and I had to modify the script.

Steps:

  1. Make sure your Jenkins master can access the required AWS API. In my setup the Jenkins master is running on EC2 with an IAM role, so I just had to add the permission ecr:GetAuthorizationToken to the server's role. [update] For pushes to complete successfully, you also need to grant these permissions: ecr:InitiateLayerUpload, ecr:UploadLayerPart, ecr:CompleteLayerUpload, ecr:BatchCheckLayerAvailability, ecr:PutImage. Amazon has a built-in policy that offers all of these, called AmazonEC2ContainerRegistryPowerUser.
  2. Make sure that the AWS CLI is installed on the master. In my setup, with the master running in a debian docker container, I've just added this shell build step to the key generation job: dpkg -l python-pip >/dev/null 2>&1 || sudo apt-get install python-pip -y; pip list 2>/dev/null | grep -q awscli || pip install awscli
  3. Install the Groovy plugin which allows you to run Groovy script as part of the Jenkins system.
  4. In the credentials screen, look for your AWS ECR key, click "Advanced" and record its "ID". For this example I'm going to assume it is "12345".
  5. Create a new job with a periodic launch every 12 hours, and add a "System Groovy script" build step with the following script:

import jenkins.model.*
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl    

def changePassword = { username, new_password ->  
    def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
        com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class,
        Jenkins.instance)

    def c = creds.findResult { it.username == username ? it : null }

    if ( c ) {
        println "found credential ${c.id} for username ${c.username}"
        def credentials_store = Jenkins.instance.getExtensionList(
            'com.cloudbees.plugins.credentials.SystemCredentialsProvider'
            )[0].getStore()

        def result = credentials_store.updateCredentials(
            com.cloudbees.plugins.credentials.domains.Domain.global(), 
            c, 
            new UsernamePasswordCredentialsImpl(c.scope, "12345", c.description, c.username, new_password))

        if (result) {
            println "password changed for ${username}" 
        } else {
            println "failed to change password for ${username}"
        }
    } else {
        println "could not find credential for ${username}"
    }
}

println "calling AWS for docker login"
def prs = "/usr/local/bin/aws --region us-east-1 ecr get-login".execute()
prs.waitFor()
def logintext = prs.text
if (prs.exitValue()) {
  println "Got error from aws cli"
  throw new Exception()
} else {
  def password = logintext.split(" ")[5]
  println "Updating password"
  changePassword('AWS', password)
}

Please note:

  • the hard-coded string "AWS" is used as the username for the ECR credentials - this is how ECR works - but if you have multiple credentials with the username "AWS", you'd need to update the script to locate the credentials based on the description field or something similar.
  • You must use the real ID of your ECR credential in the script, because the credentials API replaces the credentials object with a new object instead of just updating it, and the binding between the Docker build step and the key is by ID. If you use null for the ID (as in the answer I linked above), a new ID will be created and the credentials setting in the Docker build step will be lost.

And that's it - the script should be able to run every 12 hours and refresh the ECR credentials, and we can continue to use the Docker plugins.
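As a side note on the split(" ")[5] in the script: get-login prints a full docker login command, and the password is its sixth whitespace-separated field. A quick shell illustration with a made-up token (the real output contains a long base64 token and your registry URL):

```shell
#!/bin/sh
# Made-up sample of what `aws ecr get-login` prints - the real output
# has a long base64 token and your account's registry address.
LOGIN_CMD='docker login -u AWS -p SAMPLETOKEN -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com'

# Field 6 is the password - the same element the Groovy script picks
# with logintext.split(" ")[5]:
PASSWORD=$(printf '%s' "${LOGIN_CMD}" | cut -d' ' -f6)
echo "${PASSWORD}"
```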

Guss

I was looking into this exact same issue too. I didn't come up with the answer either of us was looking for, but I was able to create a workaround with shell scripting. Until AWS comes out with a better solution to ECR credentials, I plan on doing something along these lines.

I replaced the Docker Build and Publish step of the Jenkins job with an Execute Shell step. I used the following script (which could probably be written better) to build and publish my container to ECR. Replace the variables in < > brackets as needed:

#!/bin/bash

#Variables
REG_ADDRESS="<your ECR Registry Address>"
REPO="<your ECR Repository>"
IMAGE_VERSION="v_"${BUILD_NUMBER}
WORKSPACE_PATH="<path to the workspace directory of the Jenkins job>"

#Login to ECR Repository
LOGIN_STRING=`aws ecr get-login --region us-east-1`
${LOGIN_STRING}

#Build the container
cd ${WORKSPACE_PATH}
docker build -t ${REPO}:${IMAGE_VERSION} .

#Tag the build with the BUILD_NUMBER version
docker tag ${REPO}:${IMAGE_VERSION} ${REG_ADDRESS}/${REPO}:${IMAGE_VERSION}

#Push builds
docker push ${REG_ADDRESS}/${REPO}:${IMAGE_VERSION}
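To make the naming concrete, here is a small sketch with placeholder values (the real REG_ADDRESS and REPO come from your ECR console, and BUILD_NUMBER is set by Jenkins for each build):

```shell
#!/bin/sh
# Placeholder values - substitute your own registry and repository.
REG_ADDRESS="123456789012.dkr.ecr.us-east-1.amazonaws.com"
REPO="myapp"
BUILD_NUMBER=42   # hard-coded for illustration; Jenkins exports this
IMAGE_VERSION="v_${BUILD_NUMBER}"

# This is the full image name the tag and push steps above operate on:
echo "${REG_ADDRESS}/${REPO}:${IMAGE_VERSION}"
# -> 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v_42
```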
  • Sounds very reasonable. The thing is - I like Docker Build and Publish, and I'd rather continue to use it, as it simplifies my life. I have several container builds in the system and want to add more, and integrating that script into each build is more of a hassle than I'm willing to live with. I have an alternative solution that I'm adding as an answer. – Guss Dec 23 '15 at 18:18

Using https://wiki.jenkins-ci.org/display/JENKINS/Amazon+ECR with the Docker Build and Publish plugin works just fine.

Danilo
  • I've installed it - but couldn't figure out what to do with it: it has no configuration and no UI. – Guss Jan 26 '16 at 14:14
  • Install the plugin. In the Docker Build and Publish step you have a drop down called "Registry credentials". Click on "Add" next to it, select as type "AWS Credentials" in the dialog. Enter access key / secret key. – Danilo Jan 26 '16 at 14:41
  • Now I see. Too bad it doesn't support instance profiles. – Guss Jan 26 '16 at 20:24
  • Yes, but for now I prefer this solution. – Danilo Jan 26 '16 at 22:03