
We are running our services on AWS's ECS platform, and we send our logs to AWS CloudWatch.

We have two types of logs, and any container can produce either type:

  1. the usual application logs (access, error, whatnot); these must be easily viewable by devs and admins
  2. audit logs (human readable "who did what when" logs); access to these must be restricted

The audit logs are mandated by regulations, and in addition to stricter access control requirements, they have a longer retention time than the app logs, so putting the two in the same log stream is not really an option. So we use two log streams, one in a CloudWatch log group that has a strict access policy.

Currently, we are writing the logs to separate disk files, from where a log agent sends the log entries off to CloudWatch. However, we'd like to switch to "The Docker Way" of logging, that is, write all logs to STDOUT or STDERR, and let a log driver take care of the rest. This sounds particularly attractive, because the log disks are (very nearly) the only disk mounts we are using, and getting rid of them would be Very Nice indeed. (Apart from the log disks, our containers are strictly read-only.)

The problem is, we cannot figure out a sensible way to keep the log streams separate. The obvious thing to do is to somehow tag the log messages and separate them later, but each way of doing that has a problem:

  • The sensible way would be to have the log driver separate the messages into different log streams based on the message tags; however, the awslogs log driver for Docker doesn't support this.
  • The "brute force" way would be to write to a single CloudWatch log stream, and reprocess that stream with a self-written filter that writes to two other log streams. Since CloudWatch billing is based on API calls, this would basically double the costs, and is therefore out of the question.
  • We could possibly also set up a log host, and use another Docker log driver (e.g. syslog) to send all the logs there. We could then split the log streams and forward them to CloudWatch (see the sketch after this list). This would add a choke point and a SPOF to all logging, so it doesn't sound too good either.
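
For illustration, the splitter on such a log host (or the reprocessing filter from the previous bullet) might look roughly like the sketch below. The "AUDIT"/"APP" prefix and the log group and stream names are all made up, and batching, retries and (for older API versions) sequence-token handling are left out.

    # Hypothetical splitter: read tagged lines from stdin and forward each
    # entry to the matching CloudWatch log group via the PutLogEvents API.
    import sys
    import time
    import boto3

    logs = boto3.client("logs")

    DESTINATIONS = {
        "AUDIT": ("/myapp/audit", "audit-stream"),      # restricted group (placeholder)
        "APP":   ("/myapp/application", "app-stream"),  # ordinary group (placeholder)
    }

    def forward(line: str) -> None:
        # Expect lines of the form "AUDIT <message>" or "APP <message>".
        tag, _, message = line.partition(" ")
        group, stream = DESTINATIONS.get(tag, DESTINATIONS["APP"])
        logs.put_log_events(
            logGroupName=group,
            logStreamName=stream,
            logEvents=[{"timestamp": int(time.time() * 1000),
                        "message": message.rstrip("\n")}],
        )

    for line in sys.stdin:
        if line.strip():
            forward(line)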

Hopefully, we are missing something obvious, in which case we'd greatly appreciate the help.

If not, are there any workarounds (or proper solutions, even) to get this kind of thing working?

Bass

3 Answers


We are still looking for a better way to do this, but so far what we are doing at the company I work for is attaching a volume to the container, writing the logs there, installing the log agent, and treating them as normal log files (a rough sketch of the task-definition side of this setup follows below).
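
Roughly, the task definition side of that setup could look like the sketch below; the family, paths and image are placeholders, it assumes the EC2 launch type with a host-path volume, and the agent that tails the files is not shown.

    # Hypothetical ECS task definition with a writable log volume; everything
    # else on the container stays read-only.
    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="myapp",
        volumes=[{"name": "logs", "host": {"sourcePath": "/var/log/myapp"}}],
        containerDefinitions=[{
            "name": "myapp",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest",
            "memory": 512,
            "essential": True,
            "readonlyRootFilesystem": True,
            "mountPoints": [{
                "sourceVolume": "logs",
                "containerPath": "/app/logs",   # app and audit files are written here
                "readOnly": False,
            }],
        }],
    )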

I don't know if it is a viable solution, because I found your question while I was reading about this myself, but maybe fluentd would suit your needs. It has a Docker log driver, and you can use a tagging strategy to route the logs (a sketch of the application side of this idea follows the links below).

Driver Documentation

Using Fluent Bit with ECS

An example using Fargate
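
For illustration, the application side of the tagging idea could emit one JSON object per line on stdout, with a field that the fluentd/Fluent Bit pipeline can match on to route audit entries to the restricted log group. The "log_type" field and its values are just examples, and the routing configuration itself is not shown here.

    # Hypothetical structured logging to stdout; a downstream fluentd/Fluent Bit
    # pipeline could route on the "log_type" field.
    import json
    import sys
    from datetime import datetime, timezone

    def log(log_type: str, message: str, **fields) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "log_type": log_type,   # e.g. "audit" or "app" (placeholder values)
            "message": message,
            **fields,
        }
        print(json.dumps(entry), file=sys.stdout, flush=True)

    log("app", "request served", path="/health", status=200)
    log("audit", "user alice deleted report 42", actor="alice", action="delete")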

Thiago

There are at least two native AWS ways to do this:

  1. You can create a CloudWatch Logs subscription to process the stream of logs for auditing purposes (a sketch follows this list)
  2. Use the recently launched FireLens feature to ship your logs to any destination you want.
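
For example, if the subscription in option 1 targets a Lambda function, the handler could look roughly like this; the group, stream and audit marker are placeholders, and stream creation, batching and error handling are omitted.

    # Hypothetical subscription-filter processor: decode the payload and
    # re-publish audit entries to a restricted log group.
    import base64
    import gzip
    import json
    import boto3

    logs = boto3.client("logs")

    AUDIT_GROUP = "/myapp/audit"        # restricted log group (placeholder)
    AUDIT_STREAM = "from-subscription"  # placeholder stream name
    AUDIT_MARKER = "AUDIT"              # placeholder marker inside the message

    def handler(event, context):
        # Subscription payloads arrive base64-encoded and gzip-compressed.
        payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
        audit_events = [
            {"timestamp": e["timestamp"], "message": e["message"]}
            for e in payload["logEvents"]
            if AUDIT_MARKER in e["message"]
        ]
        if audit_events:
            logs.put_log_events(
                logGroupName=AUDIT_GROUP,
                logStreamName=AUDIT_STREAM,
                logEvents=audit_events,
            )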
Kane
  • Thank you for your input! Unless I'm missing something, the first option shares the drawbacks of the "brute-force" option in the second bullet point, and (as far as I know) there's no way to make FireLens (or fluentd) route log messages to different log streams based on the message content. My information may be outdated, though, so if you have actually seen this solution work, I'd very much appreciate a confirmation that it can be done. – Bass Feb 26 '20 at 16:30
  • Sorry for the late response. Using FireLens you can publish your logs to a stream (such as Kinesis), and then use different processors to handle them for different business usages (app logs and audit logs in your case). – Kane Mar 02 '20 at 15:37
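
For illustration, assuming FireLens (Fluent Bit) writes each log record to a Kinesis data stream as JSON and a Lambda function consumes the stream, the processor could look roughly like this. The "log_type" field and all names are assumptions, not something FireLens adds by default.

    # Hypothetical Kinesis-triggered processor: route each record to the app
    # or audit log group based on a field in the record.
    import base64
    import json
    import time
    import boto3

    logs = boto3.client("logs")

    GROUPS = {
        "audit": ("/myapp/audit", "from-kinesis"),        # restricted group (placeholder)
        "app":   ("/myapp/application", "from-kinesis"),  # ordinary group (placeholder)
    }

    def handler(event, context):
        for record in event["Records"]:
            # Kinesis record payloads are base64-encoded.
            data = json.loads(base64.b64decode(record["kinesis"]["data"]))
            group, stream = GROUPS.get(data.get("log_type"), GROUPS["app"])
            logs.put_log_events(
                logGroupName=group,
                logStreamName=stream,
                logEvents=[{"timestamp": int(time.time() * 1000),
                            "message": data.get("log", json.dumps(data))}],
            )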

About the audit log: would you like to share what you want to audit? In general, for this kind of purpose you may want to use CloudTrail or GuardDuty.

Linh