
What's the best way to deploy dozens of resources such as CloudFormation templates, Stack Sets, and Lambda functions using Code Pipeline?

In AWS I have a multi-account architecture running an AWS Organization. I want a pipeline running in a single account. That pipeline will deploy CloudFormation templates to one or more accounts within the Organization.

The options I've found so far are:

  • Have a pipeline stage or action for each source file. This works quite well, but it means every time you add a source file you have to modify the pipeline, which is overhead that could be automated or eliminated. You can't deploy StackSets with this approach. You also need an action per template per target account, so it's impractical at scale.

  • Use nested stacks. The problems with this are: 1) Within the master stack I don't know what naming convention to use to call the other stacks directly from CodeCommit. I could work around that by having CodeBuild copy all the files to S3, but it seems inelegant. 2) Nested stacks are more difficult to debug, as they're torn down and deleted when they fail, making it difficult to find the cause of the problem.

  • Have CodeBuild run a bash script that deploys all the templates using the AWS CLI.

  • Have CodeBuild run an Ansible playbook to deploy all the templates.

  • Have Lambda deploy each template, after being invoked by CodePipeline. This is likely not a great option as each invocation of Lambda would be for a single template, and there wouldn't be information about which account to deploy to. A single Lambda function that does all the deployments might be an option.

Ideally I'd like to have CodePipeline deploy every file with specific extensions in a CodeCommit repo, or even better deploy what's listed in a manifest file. However I don't think this is possible.
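For what it's worth, the manifest-driven approach can be sketched in a few lines of Python run from CodeBuild. The CSV manifest format below is my own invention (CodePipeline has no native support for one), and `deploy_all` simply shells out to `aws cloudformation deploy`, which creates a stack if it doesn't exist and updates it otherwise:

```python
import csv
import io
import subprocess

def parse_manifest(text):
    """Parse a simple CSV manifest: template_path,stack_name,role_arn.
    Blank lines and '#' comments are skipped. (This manifest format is
    an assumption, not anything AWS provides.)"""
    rows = []
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].lstrip().startswith("#"):
            continue
        rows.append({"template": row[0].strip(),
                     "stack": row[1].strip(),
                     "role": row[2].strip()})
    return rows

def deploy_all(manifest_text):
    """Run inside CodeBuild. `aws cloudformation deploy` handles
    create-vs-update itself, so plain stacks need no extra logic."""
    for item in parse_manifest(manifest_text):
        subprocess.run(
            ["aws", "cloudformation", "deploy",
             "--template-file", item["template"],
             "--stack-name", item["stack"],
             "--role-arn", item["role"],  # service role CloudFormation assumes
             "--no-fail-on-empty-changeset"],
            check=True)
```

This doesn't cover StackSets, but for ordinary stacks it gets "deploy what's listed in a manifest file" without modifying the pipeline when templates are added.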

I'd prefer to avoid any technologies or services that aren't strictly necessary. I would also prefer not to use Jenkins, Ansible, Terraform, etc., as this solution could be deployed at multiple customer sites and I don't want to force any third-party technology on them. If I have to use something third-party, I'd rather have something that can run in a CodeBuild container than something that has to run on an instance, like Jenkins.

--

Experience since I asked this question

  • Having to write Bourne shell (sh) scripts in CodeBuild is complex, painful and slow.

  • There needs to be some logic around creation versus update of StackSets. If you simply call `create-stack-set` every time, it will fail when the StackSet already exists.

  • There's a reason the AWS Landing Zone pipeline is complex, using things like step functions.

  • If there were an easy way to write logic such as "if this StackSet exists then update it", things would be a lot simpler. The AWS CDK is one possible solution, as it lets you define AWS infrastructure using Java, .NET, JavaScript, or TypeScript. Third-party tools such as Terraform may also help, but I don't know enough about them to comment.
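The "if this StackSet exists then update it" logic can be sketched in a few lines of Python. `StackSetNotFoundException` is the real error code CloudFormation returns; the client is passed in (in CodeBuild or Lambda you'd pass `boto3.client("cloudformation")`) so the logic itself is testable:

```python
def upsert_stack_set(cfn, name, template_body):
    """Create the StackSet if it doesn't exist, update it otherwise.
    `cfn` is a CloudFormation client, e.g. boto3.client("cloudformation")."""
    try:
        cfn.describe_stack_set(StackSetName=name)
    except Exception as e:
        # botocore ClientError carries the error code in e.response
        code = getattr(e, "response", {}).get("Error", {}).get("Code", "")
        if code != "StackSetNotFoundException":
            raise
        cfn.create_stack_set(StackSetName=name, TemplateBody=template_body)
        return "created"
    cfn.update_stack_set(StackSetName=name, TemplateBody=template_body)
    return "updated"
```

After the create branch you would still need a `create_stack_instances` call to push the StackSet out to the target accounts; that's omitted here for brevity.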

I'm going to leave this question open in case someone comes up with a great answer.

--

Information from AWS Support

AWS have given the following advice (I've paraphrased it, filtered through my understanding, any errors are my own rather than incorrect advice from AWS):

  • CodePipeline can only deploy one artifact (e.g. one CloudFormation template) per action.

  • CodePipeline cannot directly deploy a StackSet, which would allow for deployment of templates across accounts. StackSets can be deployed by calling CodeBuild / Lambda.

  • CodePipeline can deploy to other accounts by specifying a role in the target account. This only deploys to one account at a time, so you would need one action per template per account.

  • CodeBuild, started from a CodePipeline action, runs in a container and gives much more flexibility; you can do almost anything you like there.

  • CodePipeline can invoke Lambda, which is very flexible. If you start Lambda from a CodePipeline action you get the URL of a single resource, which may be limiting. (My guess) You can probably invoke Lambda in a way that lets it do the whole deployment.
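A single Lambda function doing the whole deployment would look roughly like the sketch below. The event shape (`CodePipeline.job`) and the `put_job_success_result` / `put_job_failure_result` calls are CodePipeline's real Lambda-invoke contract; `UserParameters` configured on the pipeline action can carry a manifest path or similar. `deploy_everything` is a placeholder for the actual deployment loop:

```python
def job_details(event):
    """Pull the job id and the action's UserParameters out of the event
    CodePipeline sends when it invokes a Lambda function."""
    job = event["CodePipeline.job"]
    cfg = job["data"]["actionConfiguration"]["configuration"]
    return job["id"], cfg.get("UserParameters", "")

def deploy_everything(user_parameters):
    """Placeholder: loop over the templates / StackSets here."""
    pass

def handler(event, context):
    import boto3  # available in the Lambda runtime
    cp = boto3.client("codepipeline")
    job_id, params = job_details(event)
    try:
        deploy_everything(params)
        cp.put_job_success_result(jobId=job_id)
    except Exception as e:
        cp.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(e)})
```

One caveat: Lambda's execution time limit (15 minutes) bounds how much deployment work a single invocation can do, which may matter for many slow stacks.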

Tim
  • Have you tried running Terraform in CodeBuild? I have made a Docker image with Terraform installed on ECR. CodeBuild pulls it and applies changes from the tf code and state. A Lambda function can trigger a build without having to have a CodePipeline created. – AlexanderF Jan 31 '20 at 19:32

1 Answer


I would look at deploying all the templates through a single Ansible playbook. In the playbook.yml you can have many tasks, one per CFN template; give each template the required parameters, feed outputs from one stack to the next, and so on. Also, Ansible is idempotent, so re-running the playbook (re-)deploys only what has changed.
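A minimal sketch of such a playbook, using Ansible's `cloudformation` module (stack names, template paths and the `VpcId` parameter below are illustrative, not from the question):

```yaml
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy the network stack
      cloudformation:
        stack_name: network
        template: templates/network.yml
      register: network

    - name: Deploy the app stack, feeding it the network stack's outputs
      cloudformation:
        stack_name: app
        template: templates/app.yml
        template_parameters:
          VpcId: "{{ network.stack_outputs.VpcId }}"
```

The `register` / `stack_outputs` pattern is how one stack's outputs flow into the next stack's parameters.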

This can all be a single step in CodePipeline.

Now how to actually run it? CodePipeline can execute CodeBuild, CodeDeploy, ECS Task or Elastic Beanstalk. I would probably choose CodeBuild with an Ansible docker image. Why don't you want to use CodeBuild?

If you really want to do the deployment through CodePipeline's CloudFormation method, you could probably create some custom resource that executes the Ansible playbook, but that seems quite convoluted.

My choice would be CodePipeline ➜ CodeBuild ➜ Ansible playbook ➜ deploy lots of CloudFormation stacks.


BTW, to debug nested template failures you can always change the filter in the console to Failed or Deleted and examine the failed stacks' events there. When stacks are deleted they only disappear from the default view; the details are still there.

However, I don't like complex nested templates; I find them harder to manage and update than Ansible.

Hope that helps :)

MLu
  • Thanks MLu. I want to do this using only AWS services to make it more generic, I don't want to force ansible on our customers. I would prefer to avoid CodeBuild as it runs an instance, which is relatively slow. I wouldn't mind running Lambda functions as the startup time is much better. Thanks for the tips on the filter :) – Tim Apr 02 '19 at 21:26
  • @Tim CodeBuild runs a docker container (not an instance, IIRC) and that container can be spun up straight from the official Ansible docker image from docker hub. This way it won't have any external dependencies, all you'll need in your repo is the standard `buildspec.yml`, the Ansible's `playbook.yml` and your CFN templates. I wouldn't count it as *forcing* Ansible on your customers. Besides they may actually *like* Ansible once they start using it ;) – MLu Apr 02 '19 at 21:33
  • Thanks MLu, docker will probably start up a bunch faster. Still, I'd prefer to avoid ansible because it's another technology to add to the stack and one I'd have to learn myself. I'll do it if there's no other way, but I'd prefer not to. I'm hoping CodePipeline can do what we need, using other AWS services if necessary. – Tim Apr 02 '19 at 23:43
  • 1
    @Tim Simple [Ansible playbook creating CloudFormation stacks](https://github.com/mludvig/aws-autoscaling-demo/blob/master/demo.yml) to get you started. Ok, I'll stop now ;) – MLu Apr 02 '19 at 23:52
  • What value is Ansible adding in your recommended option? CloudFormation isn't _fully_ idempotent, but it can update a stack so it's largely idempotent. CodeBuild without Ansible can run a script that simply runs all the CloudFormation templates with the cli, which is inelegant but likely effective. I've added some notes from AWS support to my question, and refined my question a little. – Tim Apr 04 '19 at 02:05
  • @Tim it has some smarts around variables handling with close to pythonic syntax, unified syntax, etc. You can certainly achieve the same with a bash script and aws cli. – MLu Apr 04 '19 at 03:36