3

My goal is to get a simple "# of requests per day" value against a Route 53 hosted zone.

I see no straightforward way to do this.

I have set up query logging as explained here: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/query-logs.html

However, this logging is extensive and it's split per edge location. All I want is "domain example.com was queried 40,000 times in the last 24 hours" and similar metrics.

Is this possible? The logging seems like overkill, and I'll have to do quite a bit of parsing through all the subdirectories just to get that kind of sum.

emmdee
  • 1,935
  • 9
  • 35
  • 56
  • 1
    You can probably work it out from your bill. Curious why you need this. It's largely irrelevant except for billing, given that there can be many caches between Route 53 and the end user, for example an ISP cache. Would the number of requests to your web server be a good enough number? With a DNS TTL of probably around 5 minutes, this number is going to be pretty large for a busy website. – Tim Apr 11 '19 at 19:19
  • 1
    Main purpose is a pretty dashboard for management and to help find good TTL values. I agree it's not crazy important nor accurate just trying to fulfill tasks given to me. – emmdee Apr 11 '19 at 19:31
  • 1
    TTL values shouldn't be set based on the number of queries that result, they should be set based on how quickly you might need to change the records' data and have it reflected everywhere. – ceejayoz Apr 11 '19 at 19:40
  • 1
    AFAIK DNS TTL is only relevant when you want to change where your site is hosted, or other similar things. If you use a load balancer you have no choice on your TTL anyway. I suggest this isn't a good investment of your time. – Tim Apr 11 '19 at 19:52
  • 1
    I know what TTL is. AWS charges per million lookups, and a higher TTL means fewer lookups overall but slower turnaround on DNS changes, so I would like some insight into DAILY QUERIES; sheesh, it's a simple question honestly. We do lots of DNS changes because our systems are cattle, not pets, so there are lots of changes for A/B deployments. That's kind of out of the scope of this question though, since I'm looking for a "quick" way to check the values and not looking to invest much time; hence the simple question, simple answer. Sounds like "it can't be done" is the right answer here. – emmdee Apr 11 '19 at 20:11
  • @emmdee You can use the Cost Explorer API to get the billing counts programmatically. I just doubt it'll be worth the implementation effort versus just going to the billing page in the console once in a while. – ceejayoz Apr 11 '19 at 20:23
  • I would suggest adding a logger + dashboard (CloudWatch + Elasticsearch + Kibana dashboards) and configuring it as per your need. Here is such an example - https://aws.amazon.com/blogs/aws/cloudwatch-logs-subscription-consumer-elasticsearch-kibana-dashboards/ - https://logz.io/blog/aws-route-53-logging/ . There are premade templates in Kibana which would help you with minimal setup. – mightyteja Apr 12 '19 at 01:00
  • Given DNS entries can be cached by downstream DNS servers, this metric really wouldn't tell you much. Active clients will probably look up DNS every 5 minutes or so, but if you had a million-person corporation all using it, that might only be one request per 5 minutes if they cache DNS. This is why counting things like web server page requests is usually a better metric - though if you use a CDN you have to count requests to the source and to the CDN. – Tim Apr 17 '19 at 00:38

2 Answers

1

Each request to Route 53 creates its own log entry, so rather than parsing you should be able to count the log entries in a given log group over time with an empty metric filter. The Amazon docs explain how to do that here. You can have each hosted zone log to its own log group (to count them separately), or have several hosted zones log to the same log group (to count them together or split them out with metric filters). A scripted equivalent of the console steps is sketched after the list below.

In the console:

 1. Open the CloudWatch console at
    https://console.aws.amazon.com/cloudwatch/.
 2. In the navigation pane, choose Logs.
 3. In the contents pane, select a log group, and then choose Create
    Metric Filter.
 4. On the Define Logs Metric Filter screen, leave Filter Pattern blank.
 5. Choose Assign Metric, and then on the Create Metric Filter and
    Assign a Metric screen, for Filter Name, type EventCount.
 6. Under Metric Details, for Metric Namespace, type MyNameSpace.
 7. For Metric Name, type MyAppEventCount.
 8. Choose Show advanced metric settings and confirm that Metric Value
    is 1. This specifies that the count is incremented by 1 for every
    log event.
 9. For Default Value type 0, and then choose Create Filter. Specifying
    a default value ensures that data is reported even during periods
    when no log events occur, preventing spotty metrics where data
    sometimes does not exist.

You can then pull up the data in CloudWatch Metrics and graph it with whatever interval and statistic you like. In your case, "Sum" with "1 day" intervals should get you a line graph of the total DNS requests per day.
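
If you want those same numbers programmatically (for that management dashboard), a sketch along these lines should work, assuming the namespace and metric name from the filter above:

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")

    end = datetime.now(timezone.utc)
    start = end - timedelta(days=7)

    # Pull daily sums of the metric created by the filter above.
    resp = cloudwatch.get_metric_statistics(
        Namespace="MyNameSpace",
        MetricName="MyAppEventCount",
        StartTime=start,
        EndTime=end,
        Period=86400,          # one-day buckets
        Statistics=["Sum"],
    )

    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"].date(), int(point["Sum"]), "queries")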

Brian Bauman
  • 216
  • 1
  • 2
  • 10
-1

There is no quick & easy solution here. Short answer: use the billing dashboard for historical metrics.

The billing dashboard is the only way to get values like this right now, unless you want to write a tool to parse and analyze the CloudWatch Logs that query logging produces, which is not the straightforward solution the question asks for.
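
If you do want the billing numbers programmatically rather than from the console, a rough Cost Explorer sketch (per ceejayoz's comment above) might look like the following. The service name string and the assumption that UsageQuantity maps cleanly to query counts are guesses on my part; check your own bill for the exact usage types.

    import boto3

    ce = boto3.client("ce")  # Cost Explorer

    # Daily Route 53 usage, grouped by usage type so query counts are
    # separated from hosted-zone and health-check line items.
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2019-04-01", "End": "2019-04-11"},
        Granularity="DAILY",
        Metrics=["UsageQuantity"],
        Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Route 53"]}},
        GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
    )

    for day in resp["ResultsByTime"]:
        for group in day["Groups"]:
            print(day["TimePeriod"]["Start"], group["Keys"][0],
                  group["Metrics"]["UsageQuantity"]["Amount"])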

emmdee
  • 1,935
  • 9
  • 35
  • 56