
I'm setting up a server that will be used purely as an image-hosting server and receives about 500-1,000 new images per day. Most images are JPEGs, roughly 200 KB to 2 MB each, and they are embedded across 1,000-2,000 websites over HTTPS.
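
To put those numbers in perspective, here is a quick back-of-envelope sketch (Python, purely illustrative, using only the figures above) of the storage growth they imply:

    # Back-of-envelope storage growth from the numbers above (illustrative only).
    images_per_day = (500, 1000)      # low / high estimate of new images per day
    image_size_mb = (0.2, 2.0)        # low / high image size in MB (200 KB - 2 MB)

    low_gb_per_year = images_per_day[0] * image_size_mb[0] * 365 / 1024
    high_gb_per_year = images_per_day[1] * image_size_mb[1] * 365 / 1024

    print(f"Storage growth: roughly {low_gb_per_year:.0f} - {high_gb_per_year:.0f} GB per year")
    # Prints roughly 36 - 713 GB per year, so either option below has years of headroom.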

Software-wise, I'll be using Ubuntu with ServerPilot.

Since the server does nothing but host images, yet serves them to over 1,000 websites, the volume of HTTPS requests will be high. What should my main hardware focus be: CPU, RAM, or HDD/SSD?

When it comes purely to serving images over HTTPS, which matters more: CPU or RAM?

Will there be a noticeable difference in image loading time between a server with an SSD and one with an HDD?

Here are two possible choices.

Option 1:

  • CPU: Intel® Xeon® E3-1275 v5 Quad-Core Skylake
  • RAM: 64 GB DDR4 ECC
  • HDD: 2 × 4 TB (RAID 1)

Option 2:

  • CPU: Intel® Core™ i7-3770 Quad-Core
  • RAM: 32 GB DDR3
  • HDD: 4 × 6 TB SATA 3 Gb/s, 7,200 rpm (RAID 1)
Kevin M
  • If you're simply setting up a glorified file cabinet that will be accessed via HTTPS, why not use a public cloud object storage system? – Spooler Mar 12 '18 at 04:09
  • Because that is much more expensive, and I am very limited in what I can run when using public cloud services. I need full root access to run multiple scripts, one of which resizes images that are too big (a rough sketch of such a script follows these comments). – Kevin M Mar 12 '18 at 04:12
  • Thanks for the links; I checked them out. Unfortunately they don't contain information that helps, and they aren't duplicates of my question. My question is very specific to HTTP requests and which hardware (CPU or RAM) is important for handling that kind of request. – Kevin M Mar 12 '18 at 04:32
  • You've gotta love the Stack Overflow community. If they can't answer a simple question, they downvote and call it a duplicate. Seriously, anyone can read my question and compare it with the suggested duplicates: not the same, different questions. But okay, sure, downvote and mark as duplicate. The correct answer was given to me by another user: CPU is important to handle HTTP requests! ECC RAM is also important. The end. That's how easy it could have been. – Kevin M Mar 12 '18 at 04:38
  • 2
    You're asking a question that was answered by that duplicate. This is capacity planning, which you'll generally need to test for. You should know what your workload looks like before scaling it, rather than try to match arbitrary hardware to some nebulous expectation of usage with no data. Since we don't even know what the entire workload looks like, we can't really offer you any quality answer. Besides, is this a single server offering images to over 1000 sites with no HA? Using two cheaper nodes could be more reasonable than a single vertical, for example. – Spooler Mar 12 '18 at 04:43
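
For reference, here is a minimal sketch of the kind of resize script mentioned in the comments above. It assumes Python with the Pillow library; the directory path, size threshold, and maximum dimensions are placeholders, not values from the question:

    # Sketch of a resize pass over an upload directory, assuming Python + Pillow.
    # The directory, size threshold, and maximum dimensions are placeholders.
    import os
    from PIL import Image

    IMAGE_DIR = "/srv/images"          # hypothetical upload directory
    MAX_BYTES = 2 * 1024 * 1024        # only touch images larger than ~2 MB
    MAX_DIMENSIONS = (2000, 2000)      # cap either side at 2000 px

    for name in os.listdir(IMAGE_DIR):
        if not name.lower().endswith((".jpg", ".jpeg")):
            continue
        path = os.path.join(IMAGE_DIR, name)
        if os.path.getsize(path) <= MAX_BYTES:
            continue
        img = Image.open(path)
        img.thumbnail(MAX_DIMENSIONS)                      # shrinks in place, keeps aspect ratio
        img.save(path, "JPEG", quality=85, optimize=True)  # re-encode over the original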

1 Answer

Option 1 is the better choice. ECC RAM is important when serving a large amount of data at once, but most important of all is the CPU, which has to interpret each request it receives and determine what to do with it.
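
To illustrate where that per-request work goes, here is a minimal sketch of an HTTPS static-file server using only Python's standard library (the certificate, key, and port are placeholders, and in practice the web server that ServerPilot configures would fill this role): the TLS handshake and encryption are the CPU-bound part of each request, while reading the image bytes depends on disk speed and the RAM-backed page cache.

    # Minimal HTTPS static-file server using only the Python standard library.
    # cert.pem, key.pem and the port are placeholders; run it from the image directory.
    import http.server
    import ssl

    handler = http.server.SimpleHTTPRequestHandler   # serves files from the working directory

    httpd = http.server.HTTPServer(("0.0.0.0", 8443), handler)

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")   # placeholder certificate/key

    # The TLS handshake and per-request encryption happen here: the CPU-bound part.
    httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

    # Serving each image is then a disk / page-cache read: the storage- and RAM-bound part.
    httpd.serve_forever()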

user460178