-3

Suppose I have some web applications and also some desktop applications running on a server in a production environment.

In a production environment, even a single minute of downtime is not tolerated. Currently I only know about HA cluster systems used for this purpose. I would like to know whether this is the only way to keep the system from going down, or whether there are other approaches in use.

What do big companies like Google use for high availability, since they also won't tolerate even a single second of downtime?

Thanks

Harshit
  • I would also recommend researching downtime related to HA systems. In a lot of cases, a single server has higher uptime than HA systems, because the HA setup can also cause downtime. – Halfgaar Aug 22 '15 at 08:04
  • What makes you think there are companies who cannot afford even a single second of downtime? If the service is back before the user can figure out if the outage was due to their own ISP or the service they were trying to access, then that is good enough for every company I know of. – kasperd Aug 22 '15 at 19:49

3 Answers

2

Basically, you want automatic failover for every service needed to run your application.

One solution could be the following approach:

  1. Keepalived installed on both systems.
  2. HAProxy as the load balancer, with failover to the second HAProxy instance, monitored by Keepalived (see the Keepalived sketch after this list).
  3. Apache/NGINX behind HAProxy. If one fails, HAProxy detects it through its health checks and redirects traffic to the Apache/NGINX instance on the other server.
  4. MySQL master/master replication, load balanced and health-checked through HAProxy.
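
For the Keepalived part, a rough sketch of what the VRRP configuration on the primary load balancer could look like is shown below. The interface name, virtual IP and the pidof-based check are placeholders for this example, not values taken from the question:

    # /etc/keepalived/keepalived.conf on the first HAProxy node (sketch)
    vrrp_script chk_haproxy {
        script "pidof haproxy"     # consider the node failed if HAProxy is not running
        interval 2
    }

    vrrp_instance VI_1 {
        state MASTER               # the second node uses state BACKUP
        interface eth0             # placeholder interface name
        virtual_router_id 51
        priority 101               # the second node gets a lower priority, e.g. 100
        advert_int 1
        virtual_ipaddress {
            192.0.2.10             # floating IP that clients connect to
        }
        track_script {
            chk_haproxy
        }
    }

If the first node stops advertising (or its check fails), the second node takes over the floating IP, so clients keep reaching a working HAProxy.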

Basically, HAProxy spreads the load across your systems and only forwards traffic to a service if it is up and running.
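
As a minimal sketch of that idea, a haproxy.cfg along these lines could be used; the backend names, addresses and the /health URL are made up for illustration:

    # /etc/haproxy/haproxy.cfg (sketch, not a complete configuration)
    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s

    frontend www
        bind 192.0.2.10:80                # the Keepalived floating IP
        default_backend web_servers

    backend web_servers
        balance roundrobin
        option httpchk GET /health        # only servers passing the check receive traffic
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check

    listen mysql
        mode tcp
        bind 192.0.2.10:3306
        option mysql-check user haproxy_check   # this MySQL user has to exist on both nodes
        server db1 10.0.0.21:3306 check
        server db2 10.0.0.22:3306 check backup  # only used if db1 fails its check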

The architecture you are looking for might look like this:

(architecture diagram from the original answer, not included here)

merlin
0

It depends: every application, and every layer of the OSI model, needs its own HA mechanism.

But to start, you can learn about HAProxy, Keepalived, and nginx with multiple backends.
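
As an illustration of the "nginx with backends" part, an upstream block with failover could look roughly like this; the names, ports and addresses are placeholders:

    # nginx reverse proxy in front of several backends (sketch)
    upstream app_backend {
        server 10.0.0.11:8080 max_fails=3 fail_timeout=10s;
        server 10.0.0.12:8080 max_fails=3 fail_timeout=10s;
        server 10.0.0.13:8080 backup;    # only used when the others are marked down
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_backend;
            # on an error or timeout, retry the request on the next backend
            proxy_next_upstream error timeout http_502 http_503;
        }
    }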

0

The system is only as strong as the weakest link.

In a typical production environment there would be multiples of everything. A small web cluster may consist of a load balancer, several reverse proxies, several http servers, master/slave or master/master database nodes. In this setup the single load balancer would be the weak link; if it dies, nothing works. Larger environments replicate this but on a larger scale.

Ultimately, network design will depend on intended use.

minus8