I have a high-traffic nginx web server on CentOS that serves large static content. When the number of concurrent connections is low, the server can easily push up to 4Gb of traffic, but as concurrent connections increase, performance drops dramatically, down to 400Mb with 100% I/O utilization. I have tried SSD caching, mounting the file system with noatime, changing the I/O scheduler, increasing server memory up to 256GB, and different nginx configurations such as aio and sendfile, but with no success. Are there any other configuration changes that could improve its performance?
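For reference, the static-file directives I experimented with look roughly like this; the values are illustrative, not my exact config:

```
http {
    sendfile       on;
    tcp_nopush     on;   # send headers and the start of the file in one packet
    aio            on;   # kernel async file I/O (used together with directio on Linux)
    directio       4m;   # bypass the page cache for files larger than 4 MB
    output_buffers 2 1m;
}
```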
-
What is the output of `$ free -m` during peak load? Also, what is the volume of the data you're serving? – EEAA Sep 04 '14 at 17:07
-
Why not put a CDN in front of it? – ceejayoz Sep 04 '14 at 17:07
-
Does this server have a 10GigE uplink then? – EEAA Sep 04 '14 at 17:09
-
One of the servers has a 10Gb uplink and the others have several 1Gb NICs with bonding. Also, this is the `free -m` output:

```
             total       used       free     shared    buffers     cached
Mem:        129178     127936       1241          0        140     126955
-/+ buffers/cache:        840     128337
Swap:        11572          0      11572
```

– anthonio mackay Sep 05 '14 at 05:46
-
Right now I'm using several servers for this purpose, but I'm wondering if it is possible to provide more bandwidth with a single server. – anthonio mackay Sep 05 '14 at 05:48
-
NIC bonding - I'm not sure how this works in Linux, but under Windows, if you team NICs and perform transfers only between two servers, they will run at the maximum speed of a single link. If you have a 'many to one' type of connection, then you can achieve speeds over 1 Gbit/s. PS - it would be helpful to know your infrastructure and connections better in order to advise. – toffitomek Sep 05 '14 at 07:33
-
The server on which I'm using NIC bonding has four 1GigE connections configured in adaptive load balancing mode, roughly like the ifcfg sketch below. – anthonio mackay Sep 05 '14 at 13:58
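A minimal CentOS ifcfg sketch for such an adaptive load balancing (balance-alb, mode 6) bond; the device name and addresses are placeholders:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=balance-alb miimon=100"   # adaptive load balancing
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10      # placeholder address
NETMASK=255.255.255.0
```

Each slave NIC's ifcfg file then carries `MASTER=bond0` and `SLAVE=yes`.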
2 Answers
What about creating a ramdisk and putting the content there? You can run rsync to back up the data to physical disk and prevent data loss.
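A minimal sketch of that approach, assuming the hot content fits in RAM; the paths and the tmpfs size are placeholders:

```
# create and mount a tmpfs-backed ramdisk for the static content
mkdir -p /var/www/ramcache
mount -t tmpfs -o size=64g tmpfs /var/www/ramcache

# seed it from the persistent copy (e.g. at boot)
rsync -a /var/www/static/ /var/www/ramcache/

# periodically sync changes back to physical disk (e.g. from cron)
rsync -a /var/www/ramcache/ /var/www/static/
```

Point nginx's root at the tmpfs mount; since tmpfs contents vanish on reboot, the rsync back to persistent disk is what prevents data loss.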
I suppose that when you're serving a low number of clients, your server is able to cache most of the needed data in RAM, so it serves almost exclusively from RAM.
But when more and more clients are served, most of the data no longer fits in your RAM and your server needs to read it from the I/O subsystem. The cache is much less effective, because most of the time the needed data just isn't there, and mechanical drives would need to seek a lot.
I don't know how much data you have or how your I/O is configured, but I think mechanical drives just won't suffice. Nor will any SSD cache that is smaller than the data you actually use.
You could use several SSDs with high random-read performance in RAID 1. Or maybe you could "shard" your data: instead of using one huge filesystem, split your files across a large number of small SSD disks, based for example on `crc32(filepath) % 8` for 8 SSDs.
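A minimal shell sketch of that sharding scheme; the file path and mount points are hypothetical, and `cksum`'s first output field (a POSIX CRC-32) stands in for crc32:

```
# pick one of 8 SSD mount points by CRC-32 of the file path
f="/data/files/big-video.mp4"                      # hypothetical file
crc=$(printf '%s' "$f" | cksum | cut -d' ' -f1)    # POSIX cksum = CRC-32
shard=$(( crc % 8 ))
echo "store $f on /mnt/ssd$shard"                  # hypothetical mounts
```

Because the shard is a pure function of the path, both the writer and nginx can compute it independently with no lookup table.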
-
It is file storage with 40TB of data; hardware RAID 50 is configured. – anthonio mackay Sep 05 '14 at 05:51