23

I have a large, busy site; it currently runs completely on a dedicated server that I rent each month for ~$700.

It has three parts that I think I could carve off to a cloud solution:

  • Media (image/video) file hosting. I currently have something like 236 GB of static media, all just parked on my server. If I moved these to the cloud I would probably combine it with a CDN (to minimize the cost of data transfer out of the cloud service for every image request).

  • Database. Currently running MySQL with about 3 GB of data on my server.

  • Web server. The same server runs nginx serving static files and PHP.

I'm not having any production issues now but I expect my site to double in traffic/server load next year. So I want to think about scalability now.

My question is this: how can I figure out if it would be cost-effective to move any/all of these onto a cloud platform, instead of keeping them on my current server?

(I already know some of the other factors in place: it would be easier to do backups with cloud, I wouldn't have one point of failure like I do now with my single server, etc. But I have no sense of how much more/less it would cost to carve off one of these services. How can I compute that?)


EDIT - thank you all for these amazing answers and comments. A few folks have asked for more info so I'm summarizing all below and adding a little more data:

Data Transfer ("Bandwidth") Used - the site sends ~17 TB of outbound data per month (!) and I expect to double that figure next year (!!). Almost all of this outbound traffic is static media (pics and video clips), so perhaps a CDN would be a good idea: not only for faster delivery, but to shift the burden of transmitting all that data onto the CDN network, so the media storage server doesn't do so much data transfer directly. --EDIT: it seems that CDNs are damn expensive at this much data transfer. So maybe the static media stays on a simple server that gives me a very high bandwidth cap (hello OVH!), and if I can find a cost-effective way to put a CDN in front of it, terrific.
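To sanity-check my options, here's the back-of-envelope math I'm working from (the per-GB rates are illustrative guesses for comparison, not quotes from any actual provider):

```python
# Monthly egress cost for 17 TB at a few illustrative per-GB rates.
# All rates are assumptions for comparison, not actual provider quotes.

TB_OUT = 17
GB_OUT = TB_OUT * 1000

rates_per_gb = {
    "big-cloud egress (~$0.09/GB)": 0.09,
    "typical CDN (~$0.04/GB)": 0.04,
    "flat-rate dedicated server": 0.00,  # bandwidth bundled into the rent
}

for name, rate in rates_per_gb.items():
    print(f"{name}: ${GB_OUT * rate:,.0f}/month")
```

At ~$0.04/GB a CDN lands near $680/month for 17 TB alone, and big-cloud egress would be roughly double that, which is why the flat-rate dedicated option looks so attractive for the media.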

Traffic Not Spiky - my traffic is fairly steady; my goal with moving to a more cloud-based solution is to be able to easily scale up. I.e. my current setup has everything on one hard drive and the drive is 60% full; this infrastructure literally couldn't deal with double the amount of data (and I'm not sure it would have enough computing power to run the web server and DB server at double the traffic, either).

Static Media - As I mentioned above, I have about 236 GB of static media, mostly all images and video clips. This seems like the most obvious (and maybe easiest?) piece to carve off first and put in the cloud.

Database - while the DB runs fine now, I will be having some more complex queries soon and like the idea of something a little more powerful there. So while I don't think my current needs (power and amount of data) dictate that I should move the DB server into the cloud, it's all about being able to scale up.

Busy Hours - I always have at least 1,000 users on the site 24/7, voraciously consuming media. The server is never idle.

Currently Dedicated Server - I misspoke earlier and said it was colo (implying that I owned the hardware). That was wrong. I have a dedicated server (owned by my hosting company) that I rent each month. Not a big distinction but just want to mention.

Eric
  • Have you even checked out the different price calculators that the different cloud providers have? – Orphans Aug 12 '20 at 07:21
  • As a not-so-side note: if a lot of stuff goes into storage, consider a subsequent refactor of the website to make use of object storage for the media files (if it fits): persistent block storage is way more expensive than object storage! – matteo nunziati Aug 12 '20 at 16:06
  • @Eric Additional information request. RAM size, # cores, any SSD or NVME devices on MySQL Host server? Post on pastebin.com and share the links. From your SSH login root, Text results of: B) SHOW GLOBAL STATUS; after minimum 24 hours UPTIME C) SHOW GLOBAL VARIABLES; D) SHOW FULL PROCESSLIST; AND Optional very helpful information, if available includes - htop OR top for most active apps, ulimit -a for a Linux/Unix list of limits, iostat -xm 5 3 for IOPS by device and core/cpu count, for server workload tuning analysis to provide suggestions. – Wilson Hauck Aug 12 '20 at 19:37
  • Eric, how much traffic do you generate (outbound) per week/month? – Ron Trunk Aug 12 '20 at 19:47
  • If I can be frank, almost nobody who moves to the cloud saves money. People move to AWS/Azure/GCP _thinking_ they will save money, but they've usually been misled. People move to the cloud for flexibility, redundancy, scaling, quick prototyping, and dozens of other reasons. But you probably won't save money. – Mark Henderson Aug 12 '20 at 21:03
  • @MarkHenderson You can do *a lot* in one cloud or another for $700/month. I’m pretty sure the OP can run his site in a cloud for a fraction of this. – MLu Aug 12 '20 at 22:47
  • @MLu This also sounds like a lot for a stand-alone solution, so there might be hidden requirements or a lot of traffic – Frank Hopkins Aug 13 '20 at 00:24
  • @MarkHenderson you can easily save money for very spiky services: instead of paying for a month, you pay for the two hours over three days a month you need it, but this does not sound like one. (and you are right that many people blindly assume cloud saves money or they forget to re-evaluate when their service gets more traffic). – Frank Hopkins Aug 13 '20 at 00:45
  • @MarkHenderson how much would that flexibility, redundancy, scaling, quick prototyping and dozens of other reasons cost in bare metal? You might be spending more because you're getting more features – Blueriver Aug 13 '20 at 03:22
  • @MarkHenderson, Blueriver, Ron Trunk, and everyone! Thank you all for your valuable comments. I've edited my question to add some more info at the end. – Eric Aug 13 '20 at 06:59
  • AWS traffic charges are outrageous. AWS would charge $3300 for 34TB of outbound EC2 traffic. Lightsail with a load balancer and five of their largest instances would include 35TB of traffic and would cost about $800 per month. You could maybe reduce bandwidth and price if you use CloudFlare for caching, but I think you're probably better off with a dedicated server for high bandwidth sites. – Tim Aug 13 '20 at 07:40
  • The first time static content is sent to a CDN node the transfer is billed, but the second time it's cached and you're not billed. Some CDN provides have many nodes so you don't just send it once. Data transferred from S3 / EC2 to CloudFront nodes is not billed, but the data sent out from CloudFront is billed, so CloudFront gives you performance and security but doesn't really reduce your costs significantly. For that reason an external CDN such as CloudFlare (or any other really) is likely to reduce transfer costs more than CloudFront https://aws.amazon.com/cloudfront/pricing/ – Tim Aug 13 '20 at 09:32
  • @Tim it seems the most important factor when I'm looking for a CDN is the bandwidth costs, no? I looked at KeyCDN and it seems like just serving 17 TB of data each month would be around $680. (https://www.keycdn.com/pricing) Crazy expensive for my purposes... so yeah. Maybe a single dedicated server at a high-bandwidth provider (like OVH) is the right answer for static media. – Eric Aug 13 '20 at 09:41
  • Bandwidth charges are outrageous in aws and gcp. Take a look at digital ocean. Their bandwidth costs are low. Maybe it could help you. – SkrewEverything Aug 13 '20 at 13:56
  • Eric, bandwidth cost is a key factor selecting your hosting provider and your CDN. CloudFlare has a free plan and paid plans with more features, they don't charge by GB. I'm just a happy user of their free service https://www.cloudflare.com/en-au/plans/ – Tim Aug 13 '20 at 18:57
  • I'd look at something like Wasabi or Backblaze for your object storage, to bring your egress to Cloudflare costs down to 0. You could probably use only Wasabi, skip Cloudflare and be fine too. Only once you're being billed for <1TB/mo of bandwidth do you really have any hope of keeping that bill lower than what you're paying now. – Jay Kominek Aug 14 '20 at 17:11
  • @MarkHenderson. In my general experience cloud is a money saver for very big or very small projects. Everything in the middle is a mixed bag. And yes, it tends to cost more very often. – matteo nunziati Aug 16 '20 at 07:18
  • Yeah, my takeaway from all the excellent comments and answers I've received is: because my bandwidth needs are nuts for the static assets (pics, video files), I should probably just keep those parked on a server that gives me super high bandwidth caps... and MAYBE add a CDN if I can find a cost-effective way to do so. The rest of my infrastructure (web server, db server) seems like it could be cloud-ified pretty easily. Well-- "easily" is all relative ;-) but you know what I mean. My main goal is to be able to scale, scale, scale. – Eric Aug 16 '20 at 15:16
  • I don't know where you're located, or who your provider is now - but I *do* know I could recreate your service in *extreme* HA from Hetzner for less than half what you're spending today. With more transfer. And space. – warren Aug 17 '20 at 19:10
  • @warren I'm intrigued; I can see the prices are VERY good and a 20 TB data cap is nice. The only hiccup is that 50% of my traffic comes from the USA, so I either need a CDN or a second server there to keep delivery speeds up. – Eric Aug 18 '20 at 10:02
  • @Eric - 90% of my traffic comes from the US ... but I run in both their Helsinki and Germany (forget which one) data centers :) – warren Aug 18 '20 at 14:00
  • @Eric - I just deployed a split SSD/HDD server (500G SSD (OS, /boot, /home), 6TB HDD (long-term storage (eg Nextcloud, archives, etc))) with 12 CPU cores, 64G of RAM for ~$70/mo. – warren Aug 18 '20 at 14:03
  • Nice. And so much cheaper than what I'm paying. ;-) – Eric Aug 18 '20 at 14:30

12 Answers

16

Update

AWS would charge $3300 a month for 35TB of outbound bandwidth. Five of the largest Lightsail instances would cost a bit over $800 and would include 35TB of traffic. I assume that you can use the instance bandwidth allowance if you use a load balancer. Their CDN pricing would get you to $2300 per month. You'd probably also need another server as a web server, so the better part of $1000 a month.

Given your bandwidth needs I would rule out EC2 / CloudFront. You could consider Lightsail and a load balancer, after you verify that load balancers effectively use the instance bandwidth. However, staying with a dedicated server might be easier, though less flexible.
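The comparison above can be sketched in a few lines. The prices here are the figures quoted in this update, treated as assumptions (a blended ~$0.09/GB for EC2 egress, and the $160 Lightsail bundle with 7TB included) - check the current AWS price sheets before relying on them:

```python
import math

# Compare raw EC2 egress billing against bundling traffic into
# Lightsail instances. Prices are the figures quoted above, treated
# as assumptions -- check the current AWS price sheets.

EC2_EGRESS_PER_GB = 0.09       # assumed blended egress rate, $/GB
LIGHTSAIL_PRICE = 160          # largest Lightsail plan, $/month
LIGHTSAIL_INCLUDED_TB = 7      # transfer bundled with that plan

def ec2_egress_cost(tb_out: float) -> float:
    return tb_out * 1000 * EC2_EGRESS_PER_GB

def lightsail_cost(tb_out: float) -> float:
    # Enough instances to cover the traffic with bundled transfer.
    return math.ceil(tb_out / LIGHTSAIL_INCLUDED_TB) * LIGHTSAIL_PRICE

for tb in (17, 35):
    print(f"{tb} TB: EC2 egress ${ec2_egress_cost(tb):,.0f}"
          f" vs Lightsail ${lightsail_cost(tb):,.0f}")
```

At 35TB that's roughly $3,150 of EC2 egress against $800 of Lightsail instances, in line with the numbers above.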

Previous Post

MLu gave you a good option, but rearchitecting a website can be difficult. Simply moving the image hosting to S3 with CloudFront (or CloudFlare) might be fairly simple and would be cheaper and faster than hosting it yourself.

Basic Suggestion

If you just want a VPS, work out the specs required in terms of CPU / RAM / disk and put them into the AWS Calculator. Ignore the warning to use the new calculator; the new one isn't very good.

LightSail is a cheap way into AWS - bandwidth is especially cheap. You can get 8 cores, 32GB RAM, and 7TB of transfer for $160/month; the equivalent on EC2 would cost about $330 for the server plus $600 for bandwidth. Combine a couple of them (or smaller instances) with a $16 Lightsail load balancer and you get a lot of power for not a lot of money. Lightsail is also a lot simpler than full AWS.

Architecture Suggestion

Your best option for an architecture looks something like this:

  • EC2 instance running Nginx / PHP
  • AWS RDS for MySQL
  • AWS ALB for load balancing

The difficult part here is sizing the resources. You can take a guess based on CPU usage while watching "top" if you like.

RDS

You need to size RDS for your peak load. Say you have a 4-core server now and MySQL looks to be taking two cores at peak; then you probably need a two-core RDS MySQL instance.

Mapping that to an instance type depends on your off-peak usage. T2 / T3 instances give you a fraction of a CPU, with a burst balance that lets you use more sometimes. If the website has long quiet periods it can build up CPU credits off-peak and spend them on-peak. db.t2.medium gives you two cores and 4GB RAM; db.t3.medium gives you two cores, 8GB RAM, and more CPU credits. If the website is fairly busy most of the time you'll need dedicated CPUs; db.m5.large gives you two cores. You can change DB instance type fairly easily, but there will be some downtime if you don't have a multi-AZ instance (google that term to learn more).
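A toy simulation makes the burst-credit trade-off concrete. The earn/spend rates here are invented for illustration; the real per-instance credit rates are published in the AWS docs:

```python
# Toy model of T2/T3 CPU credits: the instance earns credits at a
# constant rate and spends them while busy. Rates are made up for
# the example, not real AWS figures.

EARN_PER_HOUR = 24        # assumed credits earned every hour
SPEND_PER_BUSY_HOUR = 60  # assumed credits burned per fully-busy hour

def credit_balance(busy_hours_per_day: int, days: int = 7) -> float:
    balance = 0.0
    for _ in range(days):
        balance += 24 * EARN_PER_HOUR
        balance -= busy_hours_per_day * SPEND_PER_BUSY_HOUR
        balance = max(balance, 0.0)   # balance can't go negative
    return balance

# A site busy 8 h/day banks credits; one busy 24/7 stays pinned at
# zero, which is why burstable instances don't fit an always-busy site.
print(credit_balance(8), credit_balance(24))
```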

EC2

EC2 can be more flexible, as you can scale the number of instances based on load. You might choose an m5.large (or m5a for AMD, or m6g for ARM) as your base server, with 2 cores and 8GB RAM. Once it hits a threshold, say 60% CPU usage, AWS can spin up as many instances as are required to help cope with the load, then take them down when not needed. You don't typically put t2 / t3 instances behind a load balancer, as they can run out of CPU credits, which makes things tricky.

Sizing and Price

Once you work out your architecture and sizing you can plug that into an AWS calculator. You'll need an RDS instance, EC2 instances, egress bandwidth from the server, S3 storage of images plus image bandwidth, EBS disk space and snapshots for backup, plus space for an AMI image to auto-scale from. You probably then want services like GuardDuty to monitor your account (cheap), CloudTrail audit logs (which cost just the storage price), and other bits and pieces. It can start to add up.

AWS bandwidth can be very expensive. Before you get into the detail of a calculation, do a rough guess with maybe a db.m5.large RDS database, a couple of m5.large EC2 instances, 300GB of EBS disk, and your outgoing bandwidth. If you use a lot of bandwidth, that alone might cost more than your current server. If most of your bandwidth is static resources, an external CDN like CloudFlare can significantly reduce your costs, if you set up caching headers properly. I don't know how much of your 236GB they would cache, but they'd cache all the often-used stuff. All of their 100+ data centers will download resources from your server though, so you'll still use a fair bit of bandwidth.
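A back-of-envelope version of that rough guess (every price here is a placeholder assumption - pull real figures from the AWS calculator for your region):

```python
# Rough monthly estimate for the guessed architecture above.
# All prices are placeholder assumptions, not current AWS rates.

monthly_costs = {
    "RDS db.m5.large": 125.0,       # assumed on-demand price
    "2x EC2 m5.large": 2 * 70.0,    # assumed on-demand price each
    "300GB EBS": 300 * 0.10,        # $0.10/GB-month, as above
    "17TB egress": 17_000 * 0.09,   # assumed blended $/GB
}

for item, cost in monthly_costs.items():
    print(f"{item:>16}: ${cost:,.2f}")
print(f"{'total':>16}: ${sum(monthly_costs.values()):,.2f}")
```

Even with modest instance guesses, the egress line dominates the bill, which is the point of the paragraph above.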

I have deliberately not explained every term I've used. AWS is complex and can be difficult to do well, securely. You'd really want to do some training to understand AWS before you start to use it. Once you understand AWS it's very powerful, but can be time consuming. Or just use Lightsail as mentioned above.

Tim
  • @Eric Any T2/T3 type solution is NOT meant to be used for production level data storage. https://aws.amazon.com/ec2/instance-types/t2/ They are fine for development and low volume testing. – Wilson Hauck Aug 12 '20 at 16:27
  • @WilsonHauck: OP hasn't given any metrics beyond telling us that his database is only 3GB. T2 might be a legitimate option for OP; we don't know. – Brian Aug 12 '20 at 19:00
  • @Brian This is enough to tell me T2/T3 is NOT a candidate. "I have a large, busy site" in the first sentence. Along with the perspective - "I expect my site to double in traffic/server load next year." – Wilson Hauck Aug 12 '20 at 19:31
  • A large site that's busy all the time obviously isn't a candidate for t2/t3 instances, which is why I explained that above. However, if it's a local site for local people, busy 8 hours a day and idle 16 hours a day, maybe it would work. m5 instances are more likely to be useful, or even better, Lightsail as it's really cheap compared with EC2. – Tim Aug 12 '20 at 19:38
  • We have no way of knowing whether the OP's definition of "large" and "busy all the time" matches our own. – Brian Aug 12 '20 at 19:45
  • @tim I hope Brian understands this perspective at the end of the day. – Wilson Hauck Aug 12 '20 at 19:46
  • Editing my answer to include info about traffic. – Eric Aug 13 '20 at 07:00
  • what about platform.sh? this tends to be cheaper for these type of sites compared to "bare bone cloud" like aws, azure, gcp,... – Pinoniq Aug 14 '20 at 22:40
  • I don't know anything about platform.sh, I'm a full time AWS architect. I wouldn't call AWS, Azure or GCP "bare bones", they're fully featured, comprehensive enterprise platforms. Honestly if price is the main driver the big three clouds can be quite expensive unless cleverly architected, particularly high bandwidth sites. – Tim Aug 15 '20 at 03:17
  • "and would include 35GB of traffic" 35TB? – liori Aug 17 '20 at 15:03
9

As a rule of thumb, using a cloud is always more expensive than using dedicated servers. As an example, for my private projects I have a fairly beefy server (metal) that costs me 40€ a month that would cost me over a hundred euros a month on AWS.

If you are a business, that is not your real cost calculation though. For my own server, I have to do:

  • linux distribution updates
  • software updates
  • general maintenance
  • vpn configuration
  • load balancer configuration
  • ssl certificates
  • possibly mirroring on other continents
  • all the other configuration stuff
  • ...

As a private person, those things are essentially free. I do them in my spare time and figuring out how to do it can be fun. As a result, I pay 40€ a month for my server and that's the full extent of my expenses.

As a company, all those things cost money. Someone - who is most likely paid - has to do all of that. You might have to hire a server administrator or DevOps, who wants to be paid at least a high 5-figure amount a year, maybe even 6-figure, depending on the location. If you are doing those things yourself, they will take time which you could instead spend on actually developing or promoting your application. Time is money.

The cloud can save you all of that, especially if you use things like containerization, which remove the need to worry about the actual servers and only require you to maintain the actual software you are using.

To say whether it will be cost effective or not requires taking the administrative time into account. You will most likely spend 4-5 times as much money on the cloud infrastructure compared to your dedicated server, and the costs will rise the more users you get. Whether that is more than you would spend on administrating your current infrastructure, either yourself or by hiring a sysadmin, is impossible to say.

As a private person, I would always pick a dedicated server.
As a company, it becomes a difficult calculation, often trending towards the cloud.

Morfildur
  • Cloud only makes sense if you actually use the dynamic scaling options. This usually means changing most of the software already running. And it is only cheaper if there are dynamic load peaks. – Josef Aug 14 '20 at 08:03
  • Especially the bandwidth cost on AWS is quite ridiculous compared to dedicated server rental. I rent a server from one, which comes with I think 10TB included in the base price, and an additional 10 euros per 2TB after that (or something like that; don't remember exactly) and you have to authorize the additional charges or they'll just shut your server down. When I read that, my reaction was: wait, you're going to charge me 20 times less than AWS *and* it's a big enough deal that you want my permission, when AWS would just go ahead and charge me? – user253751 Aug 14 '20 at 16:28
8

One concern here when you care about price: public clouds sell in terms of virtual CPUs (basically hyperthreads), across a number of different generations of CPUs.

So do not assume that 1 on-prem core = 1 cloud vCPU. This is wrong!

At most, assume that 1 on-prem hyperthread = 1 cloud vCPU. This is almost right!

The 'almost' is because different generations of CPUs have different per-hyperthread performance.

On the other hand, consider that on-prem specs are very often oversized. So really assess your compute needs before even comparing CPUs.

Then online calculators are your friends for rough estimates.

matteo nunziati
7

As no one has mentioned Azure yet, here are my two cents in that respect.

In general I would recommend tearing things apart and moving them to PaaS services whenever possible. This would prepare your solution for growth and comes with many other benefits, such as the built-in backup you already mentioned, but also scaling and additional security features.

Azure Database for MySQL

This DBaaS solution would cost you around 100 USD per month. Storage would be cheap (0.69 USD/month = 5 GB * 0.138 USD) and it would include another 5 GB of storage for backups. Additional backup costs may apply if longer retention periods are required. For the computational part, a one-year reserved instance would cost around 99 USD per month (general purpose, 2 vCores, Intel E5-2673 v4 2.3 GHz).

Azure App Service

This would cost you between 73 USD and 292 USD per month, depending on the amount of storage, CPU and RAM your PHP site requires. I would choose at least the Standard tier, as this allows for auto-scaling and VNet connectivity so that your web app can talk directly to the MySQL DB via service endpoints (data stays on the Microsoft backbone, which is good for latency and security).

Azure CDN

Outbound traffic from zone 1 (North America, Europe, Middle East and Africa) would be (10'000 GB * 0.081 USD) + (7'000 GB * 0.075 USD) = 1'335 USD per month, plus a monthly fee of around 21 USD for the storage of 250 GB of data in the CDN static zone 1.

A storage account would also be required (see below). However, no charges would apply for the transfer between the storage account and the Azure CDN (Microsoft only, not Akamai/Verizon) in case an object is not at the edge location.

Azure Storage Account

The estimation of this cost factor requires more information, as the monthly price depends on a) the volume of data stored per month b) the quantity and types of operations performed (along with any data transfer costs) c) data redundancy options.

So for an amount of 500 GB of hot block blob storage with the lowest redundancy (LRS) we would have to pay 10.40 USD/month. Now what's missing is the price tag that comes with the operations and data transfers. For more details have a look here: https://azure.microsoft.com/en-us/pricing/details/storage/blobs/

To summarize:

  • Azure Database for MySQL: ~100 USD
  • Azure App Service: ~73-292 USD
  • Azure CDN (Microsoft): ~1'356 USD
  • Azure Storage Account: ~50 USD (estimated)

This would result in a total charge of between 1'579 USD and 1'798 USD per month.
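The summary arithmetic can be re-derived in a couple of lines (all figures are this answer's own estimates, not current Azure list prices):

```python
# Re-derive the Azure totals from the per-service estimates above.

cdn_egress = 10_000 * 0.081 + 7_000 * 0.075  # 17 TB across zone-1 tiers
cdn_total = cdn_egress + 21                   # plus the CDN storage fee

low = 100 + 73 + cdn_total + 50     # cheapest App Service tier
high = 100 + 292 + cdn_total + 50   # priciest App Service tier

print(f"CDN: {cdn_total:.0f} USD, total: {low:.0f}-{high:.0f} USD/month")
```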

Matthias Güntert
  • Thank you for this answer. Their CDN is no good for me since I have 17 TB of data... JUST the CDN alone would be (10000*0.081)+(7000*0.075) = $1335/mo. https://azure.microsoft.com/en-us/pricing/details/cdn/ – Eric Aug 13 '20 at 09:45
  • You were right. I had my numbers and content wrong. Fixed and updated now. – Matthias Güntert Aug 13 '20 at 11:42
  • No worries, I appreciate you taking the time to get into this level of detail! – Eric Aug 13 '20 at 13:04
6

The naive way is to match your current server specs to one of the cloud instance offerings roughly 1:1 and price that up. E.g. if your server is 4 CPU / 16 GB RAM, then on AWS you might look at m5.xlarge, which costs $0.192/hr, i.e. ca $140/month. Once you are confident that the instance size is right for your needs you can commit to a 1- or 3-year reserved instance term for up to 60% savings. On top of that you'll need some disk space at ca $0.10/GB/month, plus the cost of egress traffic. That's the easy but potentially more expensive way.
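The like-for-like arithmetic is just the hourly rate times hours per month, with the reserved-instance discount applied on top (the 730-hour month and the rate below are the figures from this answer; the 60% discount is the quoted upper bound, not a guaranteed rate):

```python
# Hourly on-demand rate -> monthly cost, plus the reserved-instance
# discount ceiling mentioned above.

HOURS_PER_MONTH = 730     # average hours billed per month

hourly = 0.192            # m5.xlarge on-demand, as quoted above
on_demand_monthly = hourly * HOURS_PER_MONTH
reserved_monthly = on_demand_monthly * (1 - 0.60)   # up to 60% off

print(f"on-demand: ${on_demand_monthly:.0f}/mo, "
      f"fully reserved: ${reserved_monthly:.0f}/mo")
```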

Another option is to rearchitect the website. Store the images in e.g. an S3 bucket (much more scalable and cheaper per GB), which means you could probably get by with a smaller and cheaper instance, since it won't be overloaded with serving the static images. Likewise you can offload the database to a managed database service (e.g. AWS RDS) or use a NoSQL DB like AWS DynamoDB. But all that may require code changes.

If you are happy to rearchitect the website and make use of the cheaper cloud native services you can save a lot. How much? It depends, there is no easy answer until you decide what services you’re going to use.

On the other hand, if you just wish to migrate like-for-like from your dedicated server to a cloud server, that's very easy to calculate. See above.

Hope that helps :)

MLu
5

I was essentially in the same situation as you, but found all the virtual-service offerings extremely confusing and completely unpredictable when it came to calculating costs. So I rented a dedicated server, which guarantees a fixed cost per month for a true CPU and a given maximum amount of RAM, disk, and throughput. Predicting your ultimate cost is trivial compared to using the "calculators" offered by the virtual services. Since you're already renting a dedicated server, finding an equivalent or better one should be straightforward.

$700 sounds very high for your needs, and you should be able to find the capacity and speed you want for far less. I/O is going to be your bottleneck.

At one time or another I have rented dedicated servers from quickpacket, serverhub, and needaserver (because an application required redundant geographically dispersed datacenters). All three vendors were more or less comparable in price, availability, performance, support, etc.

4

One additional comment to all the other answers:

In determining capacity/CPUs, remember that one of the advantages of cloud services is the ability to scale up as your needs increase. You don't mention your traffic loads or number of sessions, etc., but you can start relatively small and increase capacity as needed, whether that means standing up larger instances or scaling out with more instances.

The biggest cost variable will be your traffic loads, i.e. how much traffic you're serving from your website.

Ron Trunk
3

You can benefit from moving to Google Cloud Platform by putting your static data (which, from your description, is the majority of the files stored on your server) into GCP Cloud Storage buckets.

If you want to calculate how much it will cost, you can use the pricing page and do the math. Everything depends on how much data will be stored, how much egress traffic you will generate, and how many I/O operations will be needed.

Or you can just use the official Google Cloud Pricing Calculator and put in all the data you can to get an estimate.

You can also get monthly cost estimates for running GCP VMs while creating new ones - after you put in all the details (how many cores, RAM, etc.) you will see the monthly cost. But this is just for running an instance.

You can also get an additional committed-use discount.

Wojtek_B
3

You have, overall, two primary components here:

  • Media storage.
  • Everything else.

Note that I'm listing both the PHP-powered web server and the database as one thing here. Moving those to separate cloud services will almost certainly cost you quite a lot in the short term because of the overhead of rearchitecting a large part of the site in a way that's not likely to be trivial.

For the first part, you're down to just total storage space. For most offerings, you're looking at either about 30 USD a month (if you go with block storage accessed by your server), or less than 10 USD per month for object storage (not counting load balancing/edge caching costs, which is likely to be a mostly fixed charge in the 20-200 USD range).

For the second part, look at a service like Vultr Compute Cloud, Digital Ocean Droplets, or AWS Lightsail. They all provide 'traditional' VPS hosting where you get X CPU threads, Y amount of RAM, and Z amount of disk space as one package with a fixed price. With these, you just pick whichever one matches up in terms of processing power with what you're already using and go from there. Pricing on these is usually about 10 USD per CPU core per month, though on the small end there are often lower cost single CPU offerings that have less RAM/storage than the 10 USD offering.


There's one other thing to consider though: network usage. Almost all cloud providers charge in some way for network usage. Typically, you will see one of two approaches:

  1. Only outbound data or cross-region data transfer are charged, ingress is free.
  2. Only the higher total value of inbound or outbound traffic is charged (the other direction is functionally free for that billing period).

Most also have some minimum amount of traffic that they will not charge you for (for example, AWS doesn't charge for the first 5GB/month of outbound traffic, and Vultr gives you a few TB of bandwidth for free and then pro-rates overages each month per GB).

This particular aspect often gets overlooked because in on-prem and colo setups, you usually pay for whatever bandwidth cap you have, while cloud offerings typically have very high bandwidth caps (many cloud offerings will guarantee 40Gbit speeds at least one way), but you pay per unit of data transferred. Most cases I've heard of people jumping on moving to the cloud and then having to pay a lot more than expected come down to this, so it's something you should make a point to look into thoroughly before making the switch.
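The two billing approaches above can be sketched like this (the allowances and rates are invented for the example; every provider publishes its own):

```python
# The two common traffic-billing models described above, with made-up
# rates and free allowances.

def egress_only(in_gb: float, out_gb: float,
                free_gb: float = 5, rate: float = 0.09) -> float:
    """Model 1: only outbound is billed, after a small free allowance."""
    return max(out_gb - free_gb, 0) * rate

def higher_direction(in_gb: float, out_gb: float,
                     free_gb: float = 2000, rate: float = 0.01) -> float:
    """Model 2: only the larger of in/out is billed, big allowance."""
    return max(max(in_gb, out_gb) - free_gb, 0) * rate

# 17 TB out, negligible in: which model applies (and how big the free
# allowance is) dominates the bill.
print(egress_only(100, 17_000), higher_direction(100, 17_000))
```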

Austin Hemmelgarn
  • I have very negative experience with Vultr. Have one server which is always slow when I download weekly data from it on weekends. Added another one today and had 78Kb/s download speed installing CentOS components.. Got response from support - it is what it is.. a joke. moving to GCP tomorrow. – Boppity Bop Oct 30 '21 at 18:16
3

It's too soon to worry about scaling because you have better capacity options for less than you're currently paying.

I'm guessing your CPU, memory load, and network input aren't really significant, and the cost of outgoing bandwidth is the only real issue.

I can easily rent a $50/month dedicated server with 50TB/month of I/O that probably can easily handle your current needs. You're currently paying for the equivalent of 14 of those servers!

Switch to a cheaper dedicated server, forget those expensive virtual solutions, and just look into load balancing if your requirements ever outgrow a single server.

joe snyder
2

You say that you have 17 TB of outgoing bandwidth a month included in your $700 server. This is actually the easiest part of the whole thing to price. Assuming that almost all of the 17 TB is static files that you would serve through either S3 or CloudFront, it's simple enough to check AWS prices (Google and Microsoft may have different prices, but I'm less familiar with their offerings). Using 17,000 GB as a reasonable approximation, simply multiply by the cost per GB. That's about $.08 in the USA/Canada (actually $.085 for the first 10TB), or about $1360 total. So, ignoring any other costs, just migrating your static files to S3/CloudFront would increase your costs by at least $660.

Source: https://aws.amazon.com/cloudfront/pricing/

This does not include the storage, database, or web serving costs, just the bandwidth costs. So this is very much a lower bound.
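For completeness, here is the tiered version of that calculation, using the $0.085 / $0.08 per-GB rates quoted above for US/Canada:

```python
# CloudFront-style tiered egress: first 10 TB at one rate, the rest
# at a slightly lower rate (rates as quoted in this answer).

def cloudfront_egress(gb_out: float) -> float:
    first_tier = min(gb_out, 10_000) * 0.085
    remainder = max(gb_out - 10_000, 0) * 0.080
    return first_tier + remainder

print(f"17 TB: ${cloudfront_egress(17_000):,.2f}/month")
```

The tiered figure comes out near $1,410, slightly above the flat $0.08 approximation, so the "at least $660 more" conclusion still holds.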

Note that this migration might also improve your ability to serve files (speed, reliability, etc.). So it's not definite that it is not worth doing. But this does highlight that your costs would increase if you migrated to the cloud.

I also did the same calculation assuming you used EC2 as you are using your colocated server, just running Nginx and serving the static files directly. Again ignoring all costs but bandwidth, the AWS calculator gave $1530 for 17 TB outgoing from EC2 in Virginia.

I suspect that you could bring down your other costs significantly if you migrated to the cloud, because it sounds like your main cost is bandwidth. A modest-sized server (less than $100 a month) would probably be sufficient to run your PHP/MySQL. But that doesn't change the fact that AWS would charge you more for just your bandwidth than you are now paying for everything.

mdfst13
  • 306
  • 1
  • 2
1

As @mark-henderson's comment (with 17 upvotes) says: "If I can be frank, almost nobody who moves to the cloud saves money. People move to AWS/Azure/GCP thinking they will save money, but they've usually been misled. People move to the cloud for flexibility, redundancy, scaling, quick prototyping, and dozens of other reasons. But you probably won't save money."

A CDN is great because you can flip a switch and shift your bandwidth load to another provider. Unfortunately, a CDN is usually more expensive than hosting yourself. So let's talk about how to get the flexibility without the cost.

First, I would get out from under the overpriced hosting. There are P2V ("physical to virtual") converters that help you virtualize an existing server, which makes it easier to move workloads around as needed: https://www.vmware.com/products/converter.html

Then YES, break things up into smaller services. 90% of what you need to do is separate images from everything else. I would think in terms of static vs. dynamic rather than individual services (nginx/MySQL) and figure out a caching strategy. This lets you shift your resource consumption to wherever you get good deals on bandwidth and hosting, while ALSO improving performance by putting content closer to users.

Work toward three goals: (1) a scalable, secure, fault-tolerant core infrastructure; (2) "dumb", cheap, distributed resources that cache static/simple things (images) near the users (maybe just one cache server in the US and another in the EU; any need for Asia?); and (3) deciding whether you want to get smarter about caching/distributing PHP and DB data near the user too.

I would be inclined to keep image caching contained in one dead-simple "keep it simple" solution (#2) and put everything else under #3.

#1 comes first: PROTECT THE CORE. Make sure your core site functionality is as resilient as possible to hardware failures, network problems, acts of god, whatever. That's what I like about VMware: so much is taken care of without thinking about it (distributed mirroring of data, failover to alternative hardware or even another data center, etc.). But I recommend SOME sort of virtualized/containerized solution so you can treat your physical infrastructure as more of a commodity, kept distinct from your code. Virtualized or not, you have to make sure your data is protected and regularly backed up, and that you have whatever redundancy and failover capabilities you need/want. Think about multiple data centers and multiple providers. Azure or EC2 could also be on standby for failover: some tiny instance that could spawn whatever quantity of failover resources you need on the fly. (AWS etc. may have the advantages of rapid scaling and minor standby costs, but may require more work than just adding more bare metal to your choice of virtualization/container platform.)

#2 "dumb" self-hosted caching/reverse proxy so you can move your content around to where bandwidth is cheap.* You don't need a lot of failure tolerance here as long as you have a way to activate/deactivate individual caches.  No concerns about data loss because all that data is protected above as part of #1.   The only thing that really matters is how fast you can cutover/failover/add/remove a cache from your site (even to turn caching off so some/all/affected users hit the main core site/images). Of course a cache is self populating so you don't even have to worry about that.  And self pruning so you can keep storage costs minimal, fixed (and speedy! put the cache on SSD) 

#3 is smarter caching and content distribution: move PHP and other code closer to the user, though for anything DB-related you will realistically need the DB there too, or cached. This is a whole different ballgame from the dumb #2 cache, so I would think about it separately and make sure the dumb cache can't break the smart cache and vice versa. Does your current architecture use APIs to abstract dynamic user data away from your PHP?

There are a bunch of open-source caching options, or you can even code a simple cache yourself: for images, just fetch them if they are not present and clean up old files on a regular basis. Here's an Apache project for a more sophisticated "roll your own" CDN: https://trafficcontrol.apache.org/

The only trick with any of them is how you will enable/disable caches and dynamically assign users to one. A simple, crude way to do this is based on a user's stated location/preferences: just point images at eu.images.mysite.com vs. us or asia, etc. If a cache is down, dynamically change the links for that user in your PHP code. I believe there are DNS-based solutions too, but you have to be careful with cutover time if a cache goes down; you don't want the old IP stuck in a user's local DNS cache. One way or another, it shouldn't be hard to figure out a user's continent if that is the only level of granularity you care about.
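The crude assignment scheme above can be sketched like this (a hedged illustration in Python rather than PHP; the hostnames follow the example in the text, while the country list and `ACTIVE` switch are assumptions):

```python
# Hypothetical helper: pick a regional image host from the user's
# stated location, with a manual switch to drain a dead cache.
REGION_HOSTS = {
    "eu": "eu.images.mysite.com",
    "us": "us.images.mysite.com",
}
EU_COUNTRIES = {"DE", "FR", "GB", "NL", "ES", "IT", "PL"}  # illustrative subset

# Flip a region to False to take its cache out of rotation; links for
# those users fall back to the other region.
ACTIVE = {"eu": True, "us": True}

def image_host(country_code):
    """Return the image hostname to embed in pages for this user."""
    region = "eu" if country_code.upper() in EU_COUNTRIES else "us"
    if not ACTIVE[region]:
        region = "us" if region == "eu" else "eu"
    return REGION_HOSTS[region]
```

Since pages are generated per-request anyway, swapping the hostname in generated links costs nothing and gives you the instant cutover this answer emphasizes.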

There are so many benefits to caching distributed content, maybe even some DDoS protection (perhaps on distinct domains). It seems like a natural fit.

CA_Tallguy
  • 101
  • 3