
I wrote a CloudFormation script for creating an S3 bucket with versioning and a lifecycle rule. Now I want to upload one file (a text document) from my local machine to the newly created S3 bucket, but I don't want to do it manually. Is there any way I can give the path to my file in the CloudFormation script, or is there any other suggestion? Here's my CF script:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : "PublicRead",
        "BucketName" : "s3testdemo",
        "LifecycleConfiguration" : {
          "Rules" : [
            {
              "Id" : "GlacierRule",
              "Status" : "Enabled",
              "ExpirationInDays" : "365",
              "Transition" : {
                "TransitionInDays" : "30",
                "StorageClass" : "Glacier"
              }
            }
          ]
        },
        "VersioningConfiguration" : {
          "Status" : "Enabled"
        }
      }
    }
  }
}

2 Answers


If I understand you correctly, you're asking whether there's a way to upload a file to an S3 bucket via the CloudFormation stack that creates the bucket. The answer is yes, but it is not simple or direct.

There are two ways to accomplish this.

1) Create an EC2 instance that uploads the file on startup. You probably don't want to start an EC2 instance and leave it running just to submit a single file, but it would work (see the first sketch after this list).

2) Use a Lambda-backed custom resource. See http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html for information about custom resources. In CloudFormation you could create the Lambda function itself, then create a custom resource based on that Lambda function. When the custom resource is created, the Lambda function would get called, and you could use that function invocation to upload the file (see the second sketch after this list).
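For option 1, here is a rough, untested sketch. The names are hypothetical: ami-xxxxxxxx stands in for a real AMI ID and UploaderProfile for an instance profile that allows s3:PutObject. Note that the instance cannot see your local disk either, so the file content has to be embedded in the user data (as here) or fetched from somewhere the instance can reach:

{
  "Uploader" : {
    "Type" : "AWS::EC2::Instance",
    "DependsOn" : "S3Bucket",
    "Properties" : {
      "ImageId" : "ami-xxxxxxxx",
      "InstanceType" : "t2.micro",
      "IamInstanceProfile" : { "Ref" : "UploaderProfile" },
      "InstanceInitiatedShutdownBehavior" : "terminate",
      "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "\n", [
        "#!/bin/bash",
        "# Write the embedded content to a temp file, then upload it",
        "echo 'contents of the text document' > /tmp/upload.txt",
        "aws s3 cp /tmp/upload.txt s3://s3testdemo/upload.txt",
        "# Shutdown behavior 'terminate' makes the instance clean itself up",
        "shutdown -h now"
      ] ] } }
    }
  }
}

The AWS CLI is preinstalled on Amazon Linux AMIs, so the user data script can call it directly.

For option 2, a sketch of the Lambda function plus the custom resource that invokes it, again with hypothetical names (UploadFunction, UploadRole, Custom::S3Upload) and with the file body embedded in the template, since CloudFormation runs inside AWS and cannot read files from your machine. When you supply inline ZipFile code with a Python runtime, AWS makes the cfnresponse helper module available automatically:

{
  "UploadFunction" : {
    "Type" : "AWS::Lambda::Function",
    "Properties" : {
      "Handler" : "index.handler",
      "Runtime" : "python3.12",
      "Timeout" : 30,
      "Role" : { "Fn::GetAtt" : [ "UploadRole", "Arn" ] },
      "Code" : {
        "ZipFile" : { "Fn::Join" : [ "\n", [
          "import boto3",
          "import cfnresponse",
          "",
          "def handler(event, context):",
          "    try:",
          "        # Upload only when the custom resource is first created",
          "        if event['RequestType'] == 'Create':",
          "            props = event['ResourceProperties']",
          "            boto3.client('s3').put_object(",
          "                Bucket=props['BucketName'],",
          "                Key=props['Key'],",
          "                Body=props['Body'])",
          "        # Always signal CloudFormation, or the stack will hang",
          "        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})",
          "    except Exception as e:",
          "        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})"
        ] ] }
      }
    }
  },
  "FileUpload" : {
    "Type" : "Custom::S3Upload",
    "Properties" : {
      "ServiceToken" : { "Fn::GetAtt" : [ "UploadFunction", "Arn" ] },
      "BucketName" : { "Ref" : "S3Bucket" },
      "Key" : "upload.txt",
      "Body" : "contents of the text document"
    }
  }
}

(Cleanup on stack deletion is omitted for brevity; a production version would also delete the object when RequestType is Delete.)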

Note that both approaches also require creating an IAM role that grants the permissions needed to perform the S3 upload.
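A minimal sketch of such a role for the Lambda option (the UploadRole name matches the sketch above; the logs statement lets the function write to CloudWatch Logs):

{
  "UploadRole" : {
    "Type" : "AWS::IAM::Role",
    "Properties" : {
      "AssumeRolePolicyDocument" : {
        "Version" : "2012-10-17",
        "Statement" : [ {
          "Effect" : "Allow",
          "Principal" : { "Service" : "lambda.amazonaws.com" },
          "Action" : "sts:AssumeRole"
        } ]
      },
      "Policies" : [ {
        "PolicyName" : "s3-upload",
        "PolicyDocument" : {
          "Version" : "2012-10-17",
          "Statement" : [
            {
              "Effect" : "Allow",
              "Action" : "s3:PutObject",
              "Resource" : { "Fn::Join" : [ "", [ "arn:aws:s3:::", { "Ref" : "S3Bucket" }, "/*" ] ] }
            },
            {
              "Effect" : "Allow",
              "Action" : [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ],
              "Resource" : "arn:aws:logs:*:*:*"
            }
          ]
        }
      } ]
    }
  }
}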

Peter Dolberg
  • I will try the Lambda-backed custom resource option. Thanks a lot. – Devendra Prakash Apr 25 '17 at 05:33
  • I'm trying to do something similar, but I'm having trouble seeing how these solutions help. For (1), how does the EC2 get access to the file? I'd have to scp it there manually from my local machine. For (2), same question - how does the lambda invocation get access to my local file? For small files, you can include their content in the cloudformation template, but this has a size limit of around 460kB, which is too small for my purpose. – MatthewD Dec 19 '17 at 05:17
  • Indeed, both solutions are poor workarounds for something that CloudFormation simply doesn't support. EC2 could get access to the file by pulling it from somewhere else, generating it from available data, or by supplying it via EC2 user data. For Lambda, it could be pulled from elsewhere or embedded in the code. For my own use, I prefer using Terraform which supports adding s3 objects. See https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html – Peter Dolberg Dec 20 '17 at 16:57

Please see the AWS docs:

For some resource properties that require an Amazon S3 location (a bucket name and filename), you can specify local references instead. For example, you might specify the S3 location of your AWS Lambda function's source code or an Amazon API Gateway REST API's OpenAPI (formerly Swagger) file. Instead of manually uploading the files to an S3 bucket and then adding the location to your template, you can specify local references, called local artifacts, in your template and then use the package command to quickly upload them. A local artifact is a path to a file or folder that the package command uploads to Amazon S3. For example, an artifact can be a local path to your AWS Lambda function's source code or an Amazon API Gateway REST API's OpenAPI file.

I've just tested it with AWS SAM for a Glue job ScriptLocation and it worked like a charm.
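For reference, the workflow looks roughly like this (hypothetical file, bucket, and stack names; the artifact bucket must already exist, and this only works for the specific resource properties that support local artifacts, not for arbitrary files):

aws cloudformation package \
    --template-file template.json \
    --s3-bucket my-artifact-bucket \
    --output-template-file packaged.json

aws cloudformation deploy \
    --template-file packaged.json \
    --stack-name my-stack

The package command uploads each local artifact to the artifact bucket and writes out a copy of the template with the local paths replaced by S3 locations; deploy then creates or updates the stack from that packaged template.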

Also see this answer for more complex use cases.

Stefan