I'm trying to set up a simple Amazon AWS S3-based website, as explained here.

I've set up the S3 bucket (simples3websitetest.com) and gave it the (hopefully) right permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::simples3websitetest.com/*"
            ]
        }
    ]
}
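For reference, here is the same policy as a Python dict, which could be applied programmatically with boto3's `put_bucket_policy` (the actual call is commented out, since it needs AWS credentials; this is just a sketch of the scripted equivalent of the console step):

```python
import json

# The bucket policy above, built as a dict so the setup can be scripted.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::simples3websitetest.com/*"],
        }
    ],
}
policy_json = json.dumps(policy)

# Applying it requires credentials, so the boto3 call is commented out:
# import boto3
# boto3.client("s3").put_bucket_policy(
#     Bucket="simples3websitetest.com", Policy=policy_json)
```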

I uploaded index.html, set up website access, and the site is accessible via http://simples3websitetest.com.s3-website-us-west-2.amazonaws.com/index.html

So far so good. Now I want to set up Amazon Route 53 access, and this is where I got stuck.

I've set up a hosted zone on a domain I own (resourcesbox.net) and clicked "Create Record Set". I got to the "set up alias" step, but I get "No targets available" under S3 website endpoints when I try to set the alias target.

What did I miss??

Amir Zucker
  • Starting in October 2012, Amazon introduced a feature to handle redirects (HTTP 301) for S3 buckets. You can read my previous answer here: stackoverflow.com/a/24218895/1160780 – Alberto Spelta Jun 14 '14 at 11:25

2 Answers

The A-record alias you create has to have the same name as the bucket, because virtual hosting of buckets in S3 requires that the Host: header sent by the browser match the bucket name. There's no other practical way virtual hosting of buckets could be accomplished... the bucket has to be identified by some mechanism, and that mechanism is the HTTP headers.

In order to create an alias to a bucket inside the "example.com" domain, the bucket name is going to have to also be a hostname you can legally declare within that domain... the Route 53 A-Record "testbucket.example.com," for example, can only be aliased to a bucket called "testbucket.example.com" ... and no other bucket.

In your question, you're breaking this constraint... you can only create an alias to a bucket named "simples3websitetest.com" inside of (and at the apex of) the "simples3websitetest.com" domain.

This is by design, and not exactly a limitation of Route 53 nor of S3. They're only preventing you from doing something that can't possibly work. Web servers are unaware of any aliasing or CNAMEs or anything else done in the DNS -- they only receive the original hostname that the browser believes it is trying to connect to, in the HTTP headers sent by the browser ... and S3 uses this information to identify the name of the bucket to which the virtual hosted request applies.
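The mechanism described above can be sketched as a toy function. This is illustrative only, not actual S3 code: the website endpoint has nothing but the browser-sent Host header to work with, so the hostname effectively *is* the bucket name.

```python
# Toy illustration: S3's website endpoint can only identify the bucket
# from the Host header the browser sends. DNS aliasing is invisible here.
def bucket_from_host(host_header: str) -> str:
    # Strip an optional port; what remains must match a bucket name exactly.
    return host_header.split(":")[0].lower()

# An alias "testbucket.example.com" therefore only works for a bucket
# literally named "testbucket.example.com".
print(bucket_from_host("testbucket.example.com"))
```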

Amazon S3 requires that you give your bucket the same name as your domain. This is so that Amazon S3 can properly resolve the host headers sent by web browsers when a user requests content from your website. Therefore, we recommend that you create your buckets for your website in Amazon S3 before you pay to register your domain name.

http://docs.aws.amazon.com/gettingstarted/latest/swh/getting-started-create-bucket.html#bucket-requirements

Note, however, that this restriction only applies when you are not using CloudFront in front of your bucket.

With CloudFront, there is more flexibility, because the Host: header can be rewritten (by CloudFront itself) before the request is passed through to S3. You configure the "origin host" in your CloudFront distribution as your-bucket.s3-website-xx-yyyy-n.amazonaws.com where xx-yyyy-n is the AWS region of S3 where your bucket was created. This endpoint is shown in the S3 console for each bucket.
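The origin endpoint naming used above can be sketched as follows, assuming the older dash-style regional format that appears in the question's us-west-2 URL (some regions use a dot instead, i.e. `s3-website.<region>`; check the S3 console for your bucket's exact endpoint):

```python
def s3_website_endpoint(bucket: str, region: str) -> str:
    # Dash-style website endpoint, matching the URL shown in the question.
    # Some regions use "s3-website.<region>" (dot) instead of a dash.
    return f"{bucket}.s3-website-{region}.amazonaws.com"

print(s3_website_endpoint("simples3websitetest.com", "us-west-2"))
# simples3websitetest.com.s3-website-us-west-2.amazonaws.com
```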

Michael - sqlbot
  • This was indeed the problem: I created a bucket called resourcesbox.net and it did show up. Thank you! Quick follow-up question: does this mean that if I want different buckets for that domain, I must have a subdomain to suit each bucket? There's no way around it? – Amir Zucker Mar 27 '14 at 07:26
  • I'm not exactly sure what you mean by "I must have subdomains." You need to create an A record in Route 53 with a hostname matching each bucket that you want to use to host a web site in S3, yes. – Michael - sqlbot Mar 28 '14 at 03:14
  • This is great, but on the other hand, the wildcard SSL certificates issued by Amazon are incompatible with dots in the bucket name http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html - catch22 – oberstet Jan 03 '15 at 14:18
  • @oberstet this question is about Route 53 `alias` records pointed to S3 buckets with web site hosting enabled, which causes the DNS to resolve to the web site endpoint, not the REST endpoint. The web site endpoints [don't support SSL at all](http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff); only the REST endpoints do. Also, [all wildcard certs only support a maximum of one](https://tools.ietf.org/html/rfc6125) `*` and it can appear only in the leftmost hostname component, so that isn't really an S3 limitation. – Michael - sqlbot Jan 05 '15 at 07:21
  • @Michael-sqlbot Right. RFC6125 6.4.3.2 disallows a single, left-most `*` to match periods (e.g., `*.example.com` would match `foo.example.com` but not `bar.foo.example.com`), but where does the RFC say that a wildcard cert for `*.*.example.com` (presumably then matching `foo.example.com` *and* `bar.foo.example.com`) is disallowed? Probably I've overlooked it, could you point me to? In any case, this is causing trouble: https://github.com/boto/boto/issues/2836 – oberstet Jan 05 '15 at 08:49
  • @oberstet *6.4.3.1 The client SHOULD NOT attempt to match a presented identifier in which the wildcard character comprises a label other than the left-most label.* So, there's no such thing as a multi-tiered wildcard. At any rate, your boto issue *is* a matter of the "calling format" option being apparently implemented incorrectly. Every bucket can be accessed over https with bucket name as the first *path* element under the S3 URL *for the bucket's correct region* e.g. `https://s3-us-west-2.amazonaws.com/my-bucket.with-dots.in-us-west-2/key`. Wrong regional endpoint = redirect error. – Michael - sqlbot Jan 06 '15 at 00:17
  • I ran into this same issue. I had to sign out and back in to get the list to populate. – jwadsack Mar 22 '15 at 02:53
  • I ran into this issue with the correct bucket name and everything set up correctly, and signing out and back in did not populate the S3 target menu. The 'fix' is simple: just enter the S3 regional URL on its own, in this case "s3-website-us-west-2.amazonaws.com". – James Griffin Apr 19 '15 at 12:16
  • Seems AWS doesn't really help you with this; going by the S3 settings and Route 53 settings, it looks like you can just enable web hosting on the bucket and point the record where to go, so thanks for this answer. Shame people can easily take other people's domain names for their buckets, too. – Martin Lyne Sep 27 '15 at 21:42
  • @MartinLyne thanks. I've added a reference to the S3 documentation, about the bucket name and domain name needing to be the same, as well as mentioning the workaround for an already-taken bucket name, using CloudFront. In the us-east-1 and us-west-2 regions, and possibly others, the cost of using CloudFront is negligible and can potentially even save a little, since CF downloads are $0.005/GB cheaper on bandwidth than S3 direct at some edge locations. – Michael - sqlbot Sep 28 '15 at 03:00
  • @Michael-sqlbot oh, interesting, thanks! – Martin Lyne Sep 28 '15 at 11:20
Assume you have a hosted zone abc.com. and you create a bucket abc.com, but the bucket doesn't show up in Route 53's list of alias targets. You may suspect the trailing "." after the zone name, which you can't include in a bucket name, but that isn't the problem.

Try the steps below as well, because the first time I created the bucket with the correct name, it still didn't work. Believe me, I have OCD, so I didn't miss a full stop or a comma.

  1. Create another hosted zone with the same name, abc.com
  2. You will now see two of the same hosted zone (abc.com. and abc.com.)
  3. Delete the new one
  4. Go back to the old hosted zone, abc.com
  5. You should now be able to see the S3 endpoints coming up; this may be a bug in Route 53

This worked for me after trying almost everything else. Some suggestions I've seen are to log out and log back in to clear some sort of cache, but I'm not sure.

Chathushka