
I'd like to set up statsd/graphite so that I can log metrics from JS apps running on HTML devices (i.e. not in a contained LAN environment, and possibly with a large volume of incoming data that I don't directly control).

My constraints:

  • entry point must speak HTTP: this is solved by a simple HTTP-to-UDP-statsd proxy (e.g. httpstatsd on GitHub)
  • must resist the failure of a single server (to battle Murphy's law :)
  • must be horizontally scalable: webscale, baby! :)
  • architecture should be kept as simple (and cheap) as possible
  • my servers are virtual machines
  • the data files will be stored on a filer appliance (with NFS)
  • I have TCP/UDP hardware load balancers at my disposal

In short, the data path: [client] -(http)-> [http2statsd] -(udp)-> [statsd] -(tcp)-> [graphite] -(nfs)-> [filer]
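
To make that first hop concrete, here's a rough sketch of what an HTTP-to-UDP statsd bridge could look like (illustrative only, not httpstatsd's actual code; it assumes a Node-style runtime and that the POST body already carries statsd "name:value|type" lines, one per line):

```typescript
// Minimal HTTP -> UDP statsd bridge sketch (hypothetical, not httpstatsd itself).
// Assumes metric lines arrive in the POST body already in statsd's
// "name:value|type" wire format, one per line.
import * as http from "http";
import * as dgram from "dgram";

const STATSD_HOST = "127.0.0.1"; // assumption: a statsd instance reachable locally
const STATSD_PORT = 8125;        // statsd's default UDP port

const udp = dgram.createSocket("udp4");

http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => { body += chunk; });
  req.on("end", () => {
    for (const line of body.split("\n")) {
      if (line.trim().length === 0) continue;
      const buf = Buffer.from(line);
      udp.send(buf, 0, buf.length, STATSD_PORT, STATSD_HOST); // fire-and-forget
    }
    res.writeHead(204); // nothing useful to return to the client
    res.end();
  });
}).listen(8080);
```

Being stateless, any number of these daemons can sit behind the HTTP load balancer.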

My findings so far:

  • scaling the http2statsd part is easy (stateless daemons)
  • scaling the statsd part doesn't seem straightforward (I guess I'd end up with incoherent values in graphite for aggregate data like sum, avg, min, max ...), unless the HTTP daemon does consistent hashing in order to shard the keys. Maybe an idea (see the sketch after this list), but then there's the HA question.
  • scaling the graphite part can be done through sharding (using carbon-relay) (but that doesn't solve the HA question either). Obviously several whisper writers shouldn't write to the same NFS file.
  • scaling the filer part isn't part of the question (but the less IO, the better :)
  • scaling the webapp part seems obvious (although I haven't tested it), as the instances only read the shared NFS data
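
To make the consistent-hashing idea from the statsd bullet concrete, here's a small hash-ring sketch (the node list, virtual-node count and hash choice are all assumptions for illustration): every sample of a given metric name lands on the same statsd aggregator, so sums/averages/percentiles stay coherent.

```typescript
// Sketch of metric-name -> statsd-instance sharding via a consistent hash ring.
// The node list, virtual-node count and MD5 hashing are illustrative choices.
import { createHash } from "crypto";

const STATSD_NODES = ["statsd-1:8125", "statsd-2:8125", "statsd-3:8125"];
const VNODES = 100; // virtual nodes per instance, to smooth the distribution

// Build the ring once: hash(node + replica index) -> node
const ring: Array<{ point: number; node: string }> = [];
for (const node of STATSD_NODES) {
  for (let i = 0; i < VNODES; i++) {
    const digest = createHash("md5").update(`${node}#${i}`).digest();
    ring.push({ point: digest.readUInt32BE(0), node });
  }
}
ring.sort((a, b) => a.point - b.point);

// Map a metric name to the first ring point at or after its hash (wrapping around).
function nodeFor(metric: string): string {
  const h = createHash("md5").update(metric).digest().readUInt32BE(0);
  const entry = ring.find((e) => e.point >= h) ?? ring[0];
  return entry.node;
}

console.log(nodeFor("app.page.load_time")); // always the same aggregator for this key
```

If a node dies, only the keys that hashed to it move, which keeps the reshuffling (and the window of incoherent aggregates) small, but it doesn't answer the HA question by itself.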

So I was wondering if anyone had experiences and best practices to share for a solid statsd/graphite deployment?

David142
  • Reading the comments on Etsy's blog post about StatsD, they write that they feed StatsD 10,000-30,000 metrics every 10 seconds. I would suggest linking one http2statsd client to one statsd server and sharding *if* the number of metrics sent to statsd becomes a bottleneck. – pkhamre Nov 12 '12 at 10:00
  • Did you implement this in the end? If so, could you share details? – gxx Jan 17 '16 at 02:55

1 Answer

There's a statsd proxy with consistent hashing that makes it possible to spread statsd traffic across multiple statsd aggregators, each owning its own set of metric names. It's a crucial scalability element in your architecture, allowing you to scale out statsd processes.

Graphite is also tricky, but hopefully you won't need infinite scale and can do just fine sharding by service or some other static parameter.
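
For the "shard by service or some other static parameter" approach, the routing rule is essentially a prefix lookup, which carbon-relay can apply natively with rules-based relaying. A tiny sketch of the idea (host names, ports and prefixes below are made up):

```typescript
// Sketch of static, prefix-based sharding across carbon/graphite backends,
// the kind of rule carbon-relay applies natively. Names are illustrative
// (2004 is carbon's default pickle receiver port).
const SHARDS: Array<{ prefix: string; destination: string }> = [
  { prefix: "frontend.", destination: "graphite-a:2004" },
  { prefix: "backend.",  destination: "graphite-b:2004" },
];
const DEFAULT_DESTINATION = "graphite-c:2004";

// A given service's metrics always land on the same carbon-cache, so no two
// writers ever touch the same whisper file on the filer.
function destinationFor(metric: string): string {
  const shard = SHARDS.find((s) => metric.startsWith(s.prefix));
  return shard ? shard.destination : DEFAULT_DESTINATION;
}

console.log(destinationFor("frontend.page.load_time")); // -> graphite-a:2004
```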

The hardest part is scaling the webapp, and it depends a lot on what your heaviest graph queries are. However, you can always pre-aggregate the data for the heaviest graphs and get rid of most of the load.
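
As one possible shape for that pre-aggregation, a periodic job can compute the expensive rollup once and push it back as its own series over carbon's plaintext protocol (the hostname and metric names below are made up):

```typescript
// Sketch of pre-aggregating an expensive graph: sum per-shard counters into one
// rollup series and write it back via carbon's plaintext protocol
// ("metric value timestamp\n" on TCP 2003). Host and metric names are illustrative.
import * as net from "net";

const CARBON_HOST = "graphite.internal"; // assumption
const CARBON_PORT = 2003;                // carbon's default plaintext line receiver

// Pretend these per-shard values were fetched from the render API upstream.
const perShardRequests = [1520, 1498, 1611, 1587];

const total = perShardRequests.reduce((a, b) => a + b, 0);
const ts = Math.floor(Date.now() / 1000);
const line = `rollups.frontend.requests.total ${total} ${ts}\n`;

const sock = net.createConnection(CARBON_PORT, CARBON_HOST, () => {
  sock.end(line); // one data point per line; carbon ingests it like any other metric
});
```

The dashboards then read the single pre-computed series instead of fanning out over every shard.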

I've been using HostedGraphite for quite some time to avoid all this pain; these guys have implemented their own Riak backend for Carbon and do all the scaling there.

DukeLion