
I have to do a setup for Magento. My primary constraints are ease of setup and fault tolerance/failover; cost is also a concern. I have three identical physical servers to get the job done. Each node has an i7 quad core, 16GB RAM, and 2x3TB HDs in a software RAID 1 configuration. Each node currently runs Ubuntu 12.04. I have an additional IP address which can be routed to any of these nodes.

The Magento shop has at most 1,000 products, 50% of which are bundle products. I estimate that at most 10 users are active at once. This leads me to the conclusion that performance is not the top priority here.

My first setup idea

One node (lb) runs nginx as a load balancer. The additional IP carries the domain name and is routed to this node by default. Nginx distributes the load equally to the other two nodes (shop1, shop2). Shop1 and shop2 are configured identically: each runs Apache2 and MySQL. The MySQL instances are configured with master/slave replication.
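
A minimal nginx configuration for the balancer might look like this (hostnames and domain are placeholders):

    # /etc/nginx/sites-enabled/magento -- sketch; hostnames are placeholders
    upstream magento_backend {
        server shop1.internal;
        server shop2.internal;
    }

    server {
        listen 80;
        server_name shop.example.com;

        location / {
            # hand every request to one of the two Apache2 backends
            proxy_pass http://magento_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }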

My failover strategy:

  • Lb fails => Route IP to shop1 (MySQL master), continue.
  • Shop1 fails => Lb will handle that automatically; promote the MySQL slave on shop2 to master, reconfigure Magento to use shop2 for writes (sketched below), continue.
  • Shop2 fails => Lb will handle that automatically, continue.
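
For the shop1 failure case, the promotion would look roughly like this (a sketch; it assumes the slave has fully caught up):

    -- on shop2, after SHOW SLAVE STATUS shows the relay log fully applied
    STOP SLAVE;
    RESET SLAVE ALL;  -- discard the replication config (MySQL 5.5.16+)

    -- then point Magento's app/etc/local.xml on the web nodes at shop2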

Is this a sane strategy? Has anyone done a similar setup with Magento?

My second setup idea

Another way to do it would be to use DRBD for storing the MySQL data files on shop1 and shop2. I understand that in this scenario only one node/MySQL instance can be active while the other acts as a hot standby. So in case shop1 fails, I would start up MySQL on shop2, route the IP to shop2, and continue. I like that, as the MySQL setup is easier and the nodes can be configured 99% identically. In this case the load balancer becomes unnecessary and I have a spare server.
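
A minimal DRBD resource for the MySQL data directory might look like this (devices, hostnames, and IPs are assumptions):

    # /etc/drbd.d/mysql.res -- sketch; disks and addresses are assumptions
    resource mysql {
        protocol C;              # synchronous replication
        on shop1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on shop2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }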

My third setup idea

The third way might be master-master replication of the MySQL databases. However, in my opinion this might be tricky, as Magento isn't built for this scenario (e.g. conflicting ids for new rows). I would not do that until I have heard of a working example.
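
As far as I know, the standard mitigation for conflicting auto-increment ids in a two-master setup is to interleave them in my.cnf:

    # /etc/mysql/my.cnf on shop1
    auto_increment_increment = 2
    auto_increment_offset    = 1

    # /etc/mysql/my.cnf on shop2
    auto_increment_increment = 2
    auto_increment_offset    = 2

But that only avoids collisions at the row-id level; it says nothing about conflicts in Magento's application logic.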

Could you give me advice on which route to follow? There seems to be no single "good" way to do it. For example, I read blog posts describing a MySQL master/slave setup for Magento, but elsewhere I read that data might get duplicated when the slave lags behind the master (e.g. when an order is placed, a customer might get created twice). I'm kind of lost here.

spa
  • You're right about performance not being the big priority here, but for the love of - please don't use anything larger than 1TB disks in a RAID setup for critical services. The rebuild time will make you consider jumping off a cliff. – pauska Sep 21 '12 at 10:55

2 Answers


KISS

Keep it simple, silly.

I'm kind of lost here.

For this very reason, don't begin to overcomplicate something that need not be complicated. If you don't know the right method to implement something in the first instance, you certainly won't know what to do when something goes wrong.

First, let's address the hardware

Ref: https://www.sonassihosting.com/help/magestack/cpu-sizing/

a) A standard Magento demo store is capable of delivering roughly 230 uniques per GHz, per hour.

b) A typical web store, with admin user activity, development activity, and product addition/deletion, can see this degrade by around 50% - i.e. halve - to 115 uniques per GHz, per hour.

Using your figure of 100 active visitors at any given time,

hourly_hits = (60 / time_on_site (mins)) * concurrent_users

So, we'll assume an industry average time on site of 8 minutes and 8 page views per visit.

hourly_hits = (60 / 8) * 100
hourly_hits = (7.5) * 100
hourly_hits = 750 

Which gives a figure of 750 hourly unique visitors, or around 7,500 daily unique visitors (assuming roughly a 10-hour trading day).

To support 750 visitors per hour, at 115 uniques/GHz, you'll need the equivalent of 7x 1GHz CPU cores. So let's assume your i7 quad core is 2.5GHz - that will give a cumulative total of 10GHz per node.
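
For reference, the 7-core figure falls straight out of the numbers above:

    ghz_required = hourly_hits / uniques_per_ghz
    ghz_required = 750 / 115
    ghz_required ≈ 6.5, rounded up to 7x 1GHz cores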

Secondly, let's address your configuration

What is your goal exactly?

  1. High availability
  2. Reliability
  3. Simplicity of administration
  4. Performance
  5. Scalability

None of your ideas are particularly good: your load balancer is a single point of failure, and I feel you're getting a bit too caught up in MySQL redundancy.

Master-master is a configuration nightmare, and you get no benefit from doing it. Magento IS NOT bound by MySQL, in the slightest. See Which should I put on my bigger machine? Magento Webserver or Magento Database?

And unless you are planning to make EVERYTHING in your architecture redundant, i.e.

  1. Bonded network interfaces
  2. A+B switches
  3. A+B firewalls
  4. A+B separate power feeds from diverse UPS
  5. Multihomed upstream connectivity

... there isn't much point trying to build some resilience in at the software layer.

How we would do it

Has anyone done a similar setup with Magento?

In a word: yes.

We configure anything from a single-server to n servers in MageStack - by containerising every single node.

So in your case, we typically would set up the following (assuming you requested HA):

**Server 1**        **Server 2**        **Server 3**
LB  (m)    <==>     LB  (s)             
Web (m)             Web (m)             Web (m)
                    DB  (s)    <==>     DB  (m)

The LB and DB virtual servers (m = master, s = slave) would have their root partitions on a DRBD mirror (represented by <==>). The web nodes would either use a common NFS store, or more commonly, a repo pull on the live web nodes.
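
The "repo pull" variant is nothing more exotic than a deploy along these lines (path and remote are assumptions):

    # run on each live web node at deploy time; path/remote are assumptions
    cd /var/www/magento && git pull origin master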

Just to reference a reply here: How to arrange web servers with Varnish?

Our typical architecture is

lvs (initial ssl load balancing)
 -> pound (ssl-unwrapping) 
 -> varnish (caching) 
 -> haproxy (load balancing) 
 -> nginx (static content) 
 -> php (dynamic content) 
 -> mysql (db)

Heartbeat would maintain healthchecks between machines and provide failover of IP and start/stop the respective virtual servers.
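
The IP failover half of that is a one-liner in Heartbeat's haresources file (node name and virtual IP are placeholders):

    # /etc/ha.d/haresources -- sketch; node name and IP are placeholders
    server1 IPaddr::203.0.113.10/24/eth0

Heartbeat brings the address up on server1 while it is healthy, and moves it to the peer when the healthcheck fails.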

So the resultant containerised architecture would look like this (excuse the graphic, I poached it from a marketing PDF):

[Image: MageStack example configuration]

How I would recommend you do it

Don't use master/slave, don't use DRBD, and just keep it really, really simple - so it's easy for you to manage and debug when things don't work.

**Server 1**        **Server 2**        **Server 3**
LB                           
Web                 Web                 Web 
                                        DB  

That way, you get load distribution and full utilisation of hardware. Worst case scenario - if Server 1 or Server 3 fails - you pull the hard drives and put them in Server 2. With remote hands at a DC, this could be done within ~5 minutes. It will be a damn sight easier to manage, it will mean you won't have to produce a 30-page document on the configuration of the machines and run-book procedures, and it will take considerably less time to set up.
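
Pointing the web nodes at the DB on Server 3 is then just Magento's standard app/etc/local.xml connection block (host and credentials below are placeholders):

    <!-- excerpt from app/etc/local.xml on each web node; values are placeholders -->
    <default_setup>
        <connection>
            <host><![CDATA[server3.internal]]></host>
            <username><![CDATA[magento]]></username>
            <password><![CDATA[secret]]></password>
            <dbname><![CDATA[magento]]></dbname>
            <active>1</active>
        </connection>
    </default_setup>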

We've got servers with uptimes of over 3 years, which should put into perspective how often server-grade equipment fails. More often than not, the most common cause of issues on a server is purely a dodgy software configuration.

My only concern is that your hardware isn't server grade, so you might run the risk of higher failure rates - but that's the risk you're choosing to take by using it.

In summary

I wouldn't advise attempting to build, manage and oversee the server configuration yourself for an e-commerce store, where hosting and support are the single most important parts of keeping your business online.

Ben Lessani
  • Oops, my fault. I checked the answer but overlooked a key typo: it's not 100 concurrent users, but 10. I'm *very* sorry about that. However, thanks for the thorough answer! – spa Sep 21 '12 at 10:54
  • Then in that case. 1 server is ALL you need. Don't bother with an external DB server, it will actually be ***slower***. – Ben Lessani Sep 21 '12 at 11:35
  • Awesome answer. +1 We are currently looking into Varnish/SOLR glad you confirmed it. – ehime Jun 06 '13 at 17:13
  • There's some pretty dodgy maths going on here. If you've got 100 concurrent users on average, and each lingers for 8 minutes on average, you'll get a new 100 users after 8 minutes. That's 750/hour, not 5625. I can't work out what's behind dividing by 3600, but if you divide by 60 (seconds in a minute), and cancel out the 8/8 (instead of dividing by 8*8 = 64) you get hits/hour. – Mark Jan 19 '15 at 15:25
  • @BenLessani-Sonassi The link http://docs.sonassihosting.com/go_dedicated.pdf is broken. Can you please provide the correct link? – Gaurav Pandey Aug 12 '15 at 07:43
  • See https://www.sonassihosting.com/help/magestack/cpu-sizing/ – Ben Lessani Aug 12 '15 at 09:20

A single server?

I'd never recommend a solution that contains a SPoF (single point of failure) for e-commerce.

FWIW, I'm currently working on a set-up guide for Amazon's Elastic Beanstalk service, which will allow you to scale in all dimensions automatically, whilst only paying for the resources you actually use.

Of course, it's all multi-zone and has redundancy and failover built in :)

Maintenance is a breeze - you can update your Magento either directly, using the AWS command line tools and git, or by uploading a zip of your Magento application into the AWS console - it doesn't get any easier than that!

  • Yes, a single server. As mentioned, we have servers with uptimes of over 3 years. Yet we have several VPSs that we use for external monitoring, some on Amazon's cloud service - and we've seen rebuild/failover times there exceeding 45 minutes, far greater than anything we've ever experienced on a single server. And we're talking a few hundred servers. For the OP: using his hardware, a single-server deployment is by far the most appropriate solution for his hardware and his knowledge. FYI, we have customers turning over £15M a year on single-server configurations. Don't drink the Amazon Kool-Aid. – Ben Lessani Sep 21 '12 at 20:54