
Hi,
I was wondering what a good AWS/MongoDB setup would be in terms of machines and the sizes of their disks.

Current setup

  • 3 micro instances for the config servers, 1 mongos, and the arbiters. The 8GB disk limit is almost reached. (And I ran the arbiters with --nojournal.)
  • per shard: a replica set of 2 m1.large machines, with 8GB for the system + 20GB for data
  • everything is on EBS.

Questions

  1. Is 20GB too big or too small? Should I go with 100GB, for example?
  2. Am I supposed to inform MongoDB about the 20GB (or other) disk limit?
  3. Do you see anything wrong that I don't see? I'm new to MongoDB and AWS, but I'm a reasonably experienced SWE.

Plan of use

My database should handle about 100 qps (mostly writes) and should grow to up to 1TB over the next 3 years. The plan is to add as many shards as needed, more or less manually (with scripts), when we see that more memory is needed on the database.

We will also run a few map-reduce jobs over this data and have some scripts that aggregate the data from the past 15 minutes, every 15 minutes.
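Roughly, each 15-minute aggregation could look like the sketch below (pymongo assumed; the mongos host, the database/collection names, the timestamp field ts, and the grouping field are placeholders, not actual names from the setup):

    # Hypothetical sketch of a 15-minute rollup; all names are placeholders.
    from datetime import datetime, timedelta
    from pymongo import MongoClient

    coll = MongoClient("mongodb://mongos-host:27017")["mydb"]["events"]

    since = datetime.utcnow() - timedelta(minutes=15)
    pipeline = [
        {"$match": {"ts": {"$gte": since}}},                  # only the last 15 minutes
        {"$group": {"_id": "$type", "count": {"$sum": 1}}},   # example rollup by a "type" field
    ]
    for doc in coll.aggregate(pipeline):
        print(doc)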

We are a very small company; spending up to a few hundred dollars per month on our servers would be OK, but we can't go crazy on cash.

We hope that we won't have to deal with too many machine failures manually; handling things by hand once a month would be fine.

Thanks for telling me what you think about that.

Thomas


1 Answer


First your specific questions:

Is 20GB too big or too small? Should I go with 100GB, for example?

This completely depends on your data requirements and how many documents you intend to insert. If you intend to have 5GB of documents, then you should be fine, even with the overheads for replication (the oplog is 5% of free space) and storage (there is always an empty file pre-allocated for each database). If you plan to have 10-12GB of data (and remember that you have to store indexes, the journal, and logs as well), then I would go for a larger disk.
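A quick way to see how much of a 20GB volume a database actually occupies (including pre-allocated files) and how large the oplog is would be something like this sketch, assuming pymongo and placeholder host/database names:

    # Hypothetical sketch; the host and database names are placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb://shard-primary:27017")

    stats = client["mydb"].command("dbStats", scale=1024 * 1024)  # sizes in MB
    print("data size (MB):   ", stats["dataSize"])
    print("index size (MB):  ", stats["indexSize"])
    print("storage size (MB):", stats["storageSize"])  # includes pre-allocated files

    oplog = client["local"].command("collStats", "oplog.rs", scale=1024 * 1024)
    print("oplog storage (MB):", oplog["storageSize"])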

Since you say you plan to grow to 1TB in a year, you will probably exceed 20GB inside a month and need to increase the disk anyway, hence it will probably be easier to go for 100GB immediately. Even at 100GB, assuming constant growth, you will only have about 1 month of room (1TB per year ~= 83GB per month).

Am I supposed to inform MongoDB about the 20GB (or other) disk limit?

No. There have been improvements in how MongoDB handles this situation, but currently it will just use all available space until there is none left - you need to monitor your disk space independently.
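A minimal monitoring sketch, assuming the dbpath is mounted at /data and a 20% free-space threshold (both are placeholders to adjust), would be something like:

    # Hypothetical sketch; mount point and threshold are placeholders.
    import shutil

    def check_disk(path="/data", min_free_fraction=0.20):
        usage = shutil.disk_usage(path)
        free_fraction = usage.free / usage.total
        if free_fraction < min_free_fraction:
            print("WARNING: only {:.0%} free on {}".format(free_fraction, path))
        return free_fraction

    if __name__ == "__main__":
        check_disk()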

Do you see anything wrong that I don't see? I'm new to MongoDB and AWS, but I'm a reasonably experienced SWE.

Never use micro instances for anything in production - in particular do not use them for config servers. Your config servers are critical for the operation of a sharded cluster. But, no need to take my word for it - see page 6 of the updated Amazon whitepaper:

T1.micro instances are not recommended for production MongoDB deployments, including arbiters, config servers, and mongos shard managers.

Generally I would recommend reading through the whitepaper and following the guidelines therein - you'll find recommendations for Linux settings (readahead, hugepages etc.), storage, pIOPS and more. Also worth checking out are the Production Notes - some duplication, but it's updated more often than a whitepaper.
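Those Linux settings can also be spot-checked from a script; here is a rough sketch (the EBS device path /dev/xvdf is an assumption - substitute whatever device holds your data - and blockdev typically needs root):

    # Hypothetical sketch; the device path is a placeholder.
    import subprocess

    def readahead_sectors(device="/dev/xvdf"):
        # `blockdev --getra` prints readahead in 512-byte sectors (needs root).
        out = subprocess.check_output(["blockdev", "--getra", device])
        return int(out.strip())

    def transparent_hugepages():
        # The active setting is the one shown in [brackets].
        with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
            return f.read().strip()

    if __name__ == "__main__":
        print("readahead (sectors):", readahead_sectors())
        print("transparent hugepages:", transparent_hugepages())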

Finally, get some idea of your working set size for your database (per shard) - that will dictate how much RAM you need, which is really the key to selecting instance size on EC2 for MongoDB. You may have enough with 8GB, but if not you will see significant performance hits from hitting disk.
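To get a first rough picture of memory use per shard, you could poll the mem section of serverStatus - a sketch, assuming pymongo and placeholder host names (estimating the working set itself still depends on your access pattern):

    # Hypothetical sketch; host names are placeholders.
    from pymongo import MongoClient

    for host in ["shard0-primary:27017", "shard1-primary:27017"]:
        mem = MongoClient(host).admin.command("serverStatus")["mem"]
        # "mapped" is only reported by the MMAP-era storage engine, hence .get().
        print(host, "resident MB:", mem["resident"], "mapped MB:", mem.get("mapped"))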

Adam C
  • An underlying question is: is MongoDB good at handling shards of 100GB, or would it be faster to have it handle shards of 20GB? (These numbers come more or less from nowhere; I'm just trying to poke at people's experience with disk size.) – Thomas Aug 29 '13 at 13:13
  • Again, it would generally depend on the working set size, not the on-disk storage - I have seen single nodes with terabytes of data but a working set that is significantly smaller. It's all about the amount of data you are trying to pull off the disk and how quickly you want to do that - storage concerns in terms of volume are purely a matter of making sure you have enough disk to store the data you want. Asking if a database can handle X data is about the same as asking if an OS or a filesystem can handle X data - unless you are hitting a limit, it's really not a meaningful question. – Adam C Aug 29 '13 at 14:44
  • This is the case: I will need to do some map-reduce or aggregation over the data from the past 15 minutes. The data is timestamped and the timestamp serves as the hashed shard key. I will also see if I can index it. I won't do reads from this DB. How will the working set be used, and will the map-reduce run over ALL the data (so that if I have 1TB of data on 1 machine it will be super slow) or not? – Thomas Aug 29 '13 at 14:53
  • Map-reduce will generally be a read on the database, and as to how much of the data will be read in, that will depend on how specific you make the query that feeds it - if you don't use criteria (like the last 15 minutes based on a timestamp or similar) then it will run over all the data and likely be horribly slow (even if indexed). If you are selective and have the aforementioned timestamp field indexed, then you are more likely to see decent performance. There are too many variables to say one way or another here - I would recommend testing on a reasonable sample set first to get a baseline (a sketch of such a selective job follows these comments). – Adam C Sep 02 '13 at 17:21
  • The recommendation around micro instances may have changed. The April 2016 version of the Mongo AWS white paper ( http://s3.amazonaws.com/info-mongodb-com/AWS_NoSQL_MongoDB.pdf ) states "A micro instance is a great candidate to host an arbiter node." – Zxaos May 16 '17 at 16:54
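To illustrate the point about selectivity from the comments above, a map-reduce fed by an indexed timestamp query might look like the sketch below (pymongo and bson assumed; the connection string, collection name, field name, and per-minute counting logic are placeholders; on recent MongoDB versions the aggregation pipeline is generally preferred over mapReduce):

    # Hypothetical sketch; collection, field, and connection details are placeholders.
    from datetime import datetime, timedelta
    from bson.code import Code
    from pymongo import MongoClient

    db = MongoClient("mongodb://mongos-host:27017")["mydb"]
    db["events"].create_index("ts")  # make the time filter cheap

    map_fn = Code("function() { emit(this.ts.getMinutes(), 1); }")
    reduce_fn = Code("function(key, values) { return Array.sum(values); }")

    result = db.command(
        "mapReduce",
        "events",
        map=map_fn,
        reduce=reduce_fn,
        out={"inline": 1},
        query={"ts": {"$gte": datetime.utcnow() - timedelta(minutes=15)}},  # stay selective
    )
    print(result["results"])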