
We have a MongoDB instance running on an Amazon EC2 large (7.5GB) Ubuntu instance (the same machine our Node.js server runs from). Traffic has increased a lot recently and we are starting to see some erratic behavior from MongoDB. The current state:

We noticed some slow queries using the profiler:

query   mydb.user 1327ms Wed Aug 01 2012 14:01:39
query:{ "_id" : ObjectId("500f45486562e7053d070363") } idhack responseLength:178 client:127.0.0.1 user: 

Documents in the user collection are small, but there are about 50 million of them. A slow query like this shows up every minute or two, and a series of slow queries follows it. When we run the slow queries from the command line with explain(), nothing bad is reported.
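For reference, this is roughly how we pull those entries out of the profiler (the 100ms and 1000ms thresholds below are illustrative):

# enable profiling for operations slower than 100ms (level 1); adjust the threshold to taste
mongo mydb --eval 'db.setProfilingLevel(1, 100)'

# show the five most recent profiled operations that took longer than one second
mongo mydb --eval 'db.system.profile.find({ millis: { $gt: 1000 } }).sort({ ts: -1 }).limit(5).forEach(printjson)'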

mongostat tells me:

insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn   set repl       time
138    804      9      0      96      36       0  60.2g   121g  3.42g      2      1.8          0       0|0     1|0    93k   479k    19 fgset    M   14:15:59
94    755      4      0      71      35       0  60.2g   121g  3.41g      0      1.5          0       0|0     1|0    78k   344k    19 fgset    M   14:16:00
93     17      4      0      75      27       0  60.2g   121g  3.41g      0      1.2          0       0|0     1|0    24k    31k    19 fgset    M   14:16:01
87     86      6      0      73      33       0  60.2g   121g  3.41g      0      0.9          0       0|0     1|0    31k   260k    19 fgset    M   14:16:02
101    531      3      0      62      19       0  60.2g   121g  3.41g      0        1          0       0|0     1|0    60k     1m    19 fgset    M   14:16:03
92    713      2      0      66      24       0  60.2g   121g  3.41g      1      0.9          0       0|0     0|0    72k     1m    17 fgset    M   14:16:04
163     91      6      0      93      46       0  60.2g   121g  3.41g      2      9.5          0       0|0     1|0    44k   256k    17 fgset    M   14:16:05
108     62      6      0      79      38       0  60.2g   121g  3.41g      4      1.2          0       0|0     1|0    32k   122k    17 fgset    M   14:16:06
137     23      6      0      81      32       0  60.2g   121g  3.41g      0      2.3          0       0|0     0|0    32k    67k    17 fgset    M   14:16:07

pidstat -r -p <pid> 5 (columns: time, PID, minflt/s, majflt/s, VSZ, RSS, %MEM, command) tells me:

02:18:01 PM      1700    647.00      0.80 126778144 3578036  46.80  mongod
02:18:06 PM      1700   1092.00      1.20 126778144 3586364  46.91  mongod
02:18:11 PM      1700    689.60      0.20 126778144 3578912  46.81  mongod
02:18:16 PM      1700    740.80      1.20 126778144 3577652  46.79  mongod
02:18:21 PM      1700    618.60      0.20 126778144 3578100  46.80  mongod
02:18:26 PM      1700    246.00      1.00 126778144 3577392  46.79  mongod

Note that our database volume is a single ext4 volume and NOT a RAIDed set as recommended.

I am not sure what the next step is to understand the problem well enough to implement a fix. Any input is appreciated.

Hersheezy

1 Answer


I'd have to get a better look at the trend over time to be sure (MMS would help), but you may be running into an issue where you have hit the maximum resident memory for MongoDB on that instance. The page faults aren't that high, but I do see a small drop in resident memory. If there is memory pressure elsewhere (from another process) you may be evicting MongoDB's pages and/or having to page to disk more often than you should (a page to disk on EBS is quite slow).
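A quick way to sanity check that theory is to compare what the OS sees with what mongod reports for its own memory, something along these lines:

# overall memory picture on the box - is another process competing for the 7.5GB?
free -m

# mongod's own view of resident vs. mapped memory (values are in MB)
mongo --eval 'printjson(db.serverStatus().mem)'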

There are a couple of things you can do to make your RAM usage more efficient here:

  1. Remove unnecessary indexes - they just take up valuable RAM when they are used - good candidates for removal are single-field indexes that are the leftmost element of a compound index elsewhere (see the sketch after this list). What can be removed really depends on your usage and schema, so all I can give are general recommendations.
  2. Tune the readahead on the EBS volume down - this runs counter to what you will read about tuning EBS volumes in general, but readahead set too high is actually a drag on memory usage when your access pattern is random rather than sequential.
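As a rough sketch of the first point, something like this dumps every index key per collection so you can spot single-field indexes that are duplicated as the leftmost element of a compound index (substitute your own database name for mydb):

# print every index definition for every collection in the database
mongo mydb --eval '
  db.getCollectionNames().forEach(function (c) {
    db.getCollection(c).getIndexes().forEach(function (idx) {
      print(c + ": " + tojson(idx.key));
    });
  });
'

An index on { a: 1 } is redundant if you also have { a: 1, b: 1 }, since the compound index can serve the same queries.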

For the second point: to take a look at your readahead settings for a volume, run this command (requires root/sudo privileges):

sudo blockdev --report

The output will list something like this:

RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0     10737418240   /dev/xvda1

The RA column (at 256, which I believe is the default on Amazon) is what we want to tweak here. You do that by running something like this:

blockdev --setra <value> <device name>

For the example above, I would start by halving the value:

blockdev --setra 128 /dev/xvda1

I go into far more detail about how low you should set this value and the reasoning behind it in this answer if you would like to know more. Note that the change requires a mongod process restart to take effect.
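Also note that a blockdev --setra change does not survive a reboot on its own, so you will want to re-apply it at boot. One common way (assuming a stock Ubuntu setup - adjust the device name to match your volume) is via /etc/rc.local:

# /etc/rc.local - re-apply the readahead setting at boot, before the final "exit 0"
blockdev --setra 128 /dev/xvda1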

After you have done both of those things you may be able to squeeze more performance out of the RAM on that xlarge instance. If not, or if the memory pressure is coming from elsewhere and being more efficient is not enough, then it is time to get some more RAM.

Upgrading the EBS storage to a RAID volume as you mentioned, or using the new Provisioned IOPS and EBS-optimized instances (or the SSD Cluster Compute nodes if you have money to burn), will help the "slow" part of the operations (paging from disk), but nothing beats the benefits of in-memory operations - they are still an order of magnitude faster, even with the disk subsystem improvements.
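If you do go the RAID route, the usual approach on EC2 is to stripe several EBS volumes together with mdadm. A minimal sketch, assuming four freshly attached volumes at /dev/xvdf through /dev/xvdi and the default /var/lib/mongodb data path (stop mongod and copy the data over before switching):

# build a RAID10 array from the four EBS volumes
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

# format it and mount it where the MongoDB data lives
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /var/lib/mongodb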

Adam C
  • Excellent. We are very careful about our indexes, so we will look into moving our node.js processes off the main server and tuning the RA. Question: what MMS are you referring to? – Hersheezy Aug 02 '12 at 16:02
  • 1
    Oh, sorry - MMS is the free MongoDB Monitoring Service from 10gen (creators of MongoDB). It tracks page faults, memory, ops, IO (if you enable munin-node) and lets you easily correlate spikes, drops, trends etc. - full disclosure, I work for 10gen. I changed the answer to make the MMS reference a link to the MMS docs :) – Adam C Aug 02 '12 at 16:31
  • 1
    sorry for taking so long to get back on this one :( we implemented your blockdev suggestion as well as optimizing a few queries / making some collections capped that needed to be. In short, we are running about as hard as we can on the machine and it just needed some tuning. thanks! – Hersheezy Aug 28 '12 at 20:16
  • Glad to hear it helped :) – Adam C Aug 29 '12 at 00:15