5

I am looking for a monitoring system like Cacti which doesn't lose data over time; all the tools I have found use RRD files, which average the data as time goes by.
I would like to be able to go back to (for example) April 1 at 12:00 and see what the data captured at that time was, not what the average was for that whole day.

Is there a monitoring system which can do this?

Epaphus
    RRD can be configured to keep more data, you know (see the sketch after these comments). I should also ask: how many datapoints are you polling, and how often? Disk requirements go up... fast :) – SpacemanSpiff Dec 16 '12 at 23:51
  • On the server I am looking to start with, it is about 50 items over about 17 graphs; more are likely to be added depending on how this goes – Epaphus Dec 17 '12 at 00:13
  • I think you'll benefit more from a tool like SolarWinds that stores its data in SQL, and also lets you define how it's averaged over time more flexibly. What kind of data is this? CPU? Network? Disk? – SpacemanSpiff Dec 17 '12 at 02:01
  • It's for a storage server, so it will be monitoring CPU, memory, disks, array loads, and network. I suspect SolarWinds might do the job, but I think the price tag will be an issue – Epaphus Dec 17 '12 at 13:42
  • How about Ganglia: http://serverfault.com/questions/448573/how-to-bring-up-an-hourly-graph-for-a-specified-period-rather-than-only-last-h? – quanta Dec 18 '12 at 08:23
  • Shopping questions and product recommendations are off-topic on any of the Stack Exchange sites. See [Q and A is hard, lets go Shopping](http://blog.stackoverflow.com/2010/11/qa-is-hard-lets-go-shopping) and the FAQ for more details. – Mark Henderson Feb 20 '13 at 04:42
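
To illustrate SpacemanSpiff's point about RRD retention: a minimal sketch, assuming the python-rrdtool bindings, of an RRD whose single archive keeps every raw 5-minute sample for 5 years instead of averaging it away (the filename and data-source name are made up):

```python
# Sketch: an RRD only "loses" detail if its archives are defined that way.
# Assumes the python-rrdtool bindings; adjust step, heartbeat and row count to taste.
import rrdtool

rrdtool.create(
    "storage-server.rrd",             # hypothetical filename
    "--step", "300",                  # poll every 5 minutes
    "DS:cpu_load:GAUGE:600:0:U",      # one example data source
    # 1 raw sample per row, 525600 rows = 5 years of unaveraged 5-minute points
    "RRA:AVERAGE:0.5:1:525600",
)
```

The trade-off SpacemanSpiff mentions is visible here: one archive like this holds half a million rows per data source, which is why disk usage climbs quickly.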

2 Answers

1

OpenTSDB can do what you want, but as pointed out in the comments, disk requirements are pretty big.
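
For example, a minimal sketch of writing one raw data point through OpenTSDB's HTTP put API, assuming a stock install listening on port 4242 (the host, metric, and tag names are made up):

```python
# Sketch: each point is stored individually, so you can read back exactly
# what was captured at a given timestamp rather than an average.
import time
import requests

point = {
    "metric": "sys.cpu.user",            # hypothetical metric name
    "timestamp": int(time.time()),       # second-resolution Unix timestamp
    "value": 42.5,
    "tags": {"host": "storage01"},       # hypothetical tag
}
resp = requests.post("http://opentsdb.example.com:4242/api/put", json=point)
resp.raise_for_status()
```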

Nupraptor
0

From the question tags I guess you checked both Nagios and Zabbix.

In Zabbix, every monitored item comes with two predefined retention settings:

  • history (how long the raw data is kept; 90 days by default for most items)
  • trends (how long hourly averages are kept; 365 days by default for most items, used to generate longer-term graphs)

You can customize both, item by item, as in the sketch below. The 70k+ items I'm monitoring are currently consuming ~5 GB in the DB.
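
A minimal sketch of that per-item override through the Zabbix JSON-RPC API (item.update); the URL, auth token and item ID are placeholders, and note that older Zabbix versions expect plain day counts rather than suffixed strings like "730d":

```python
# Sketch: raise history/trends retention for a single item via the Zabbix API.
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "item.update",
    "params": {
        "itemid": "23296",     # hypothetical item ID
        "history": "730d",     # keep raw values for 2 years instead of 90 days
        "trends": "1825d",     # keep hourly trends for 5 years
    },
    "auth": "<api auth token>",
    "id": 1,
}
resp = requests.post("http://zabbix.example.com/api_jsonrpc.php", json=payload)
print(resp.json())
```

Longer history comes straight out of the database size, so it is worth raising it only on the items you actually need raw data for.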

It rolls with any of the following DB backends: MySQL, Oracle, PostgreSQL, DB2.

Joao Figueiredo