In follow-up to an old question (and somewhat related to this), what trending tools/methods can be best utilized to help anticipate growth in a virtualized environment?

For example, how do you go about determining:

  • number of images that will need to be made
  • over-provisioning "safe" levels (1x, 1.5x, 2x, etc) for
    • vCPU
    • RAM
    • disk space
  • other factors?
warren

1 Answer

Graphs. Lots and lots of graphs. If you're collecting all the relevant performance metrics for your environment, you can graph them, look at where they're heading, and make a "squint-n-guess" estimate of when you'll hit capacity. If you want to get more scientific, you can also apply all sorts of fancy mathematical and statistical voodoo to get a result that might feel "more accurate", but given the limitations of the data and the variance in the real-world operational environment, I'm not sure the extra effort is justified.
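To make the "squint-n-guess" idea concrete, here is a minimal sketch of the numeric equivalent: fit a straight line to historical usage samples and estimate when the trend crosses capacity. The function name and the usage figures are made up for illustration; real capacity planning would use your own collected metrics.

```python
# Hypothetical sketch: linear extrapolation of a resource-usage trend.
# Fits y = slope*x + intercept by least squares, then solves for the
# point where the fitted line reaches capacity.

def weeks_until_full(samples, capacity):
    """Given one usage sample per week, estimate weeks remaining
    until the linear trend reaches `capacity`. Returns None if
    usage is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no projected exhaustion on this trend
    # x at which the fitted line hits capacity, minus weeks already elapsed
    return (capacity - intercept) / slope - (n - 1)

# Weekly RAM usage averages in GB (invented data) against a 96 GB host:
usage = [40, 42, 45, 44, 48, 51, 53, 56]
print(weeks_until_full(usage, 96))
```

On this invented series the trend grows about 2.25 GB/week, putting exhaustion roughly 18 weeks out. A straight line is the crudest possible model, which is rather the answer's point: given noisy real-world data, fancier models may not earn their keep.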

Given that you provide no indication of what environment you're in or what you've already got running, it's impossible to provide any reasonable suggestions for specific tools, but I'll just say that I've never felt a need to go any further than rrdtool. That thing is freaking magical.

womble
  • I intentionally asked the question with "no indications of what environment" I am in: in the same vein as the previous questions referenced, I'm looking more for what things will be needed in general rather than what are needed in X situation on Y timeframe :) – warren Aug 29 '11 at 12:48