
With millions of users searching for so many things on Google, Yahoo, and other engines, how can their servers handle so many concurrent searches? I have no clue how they made it so scalable. Any insight into their architecture would be welcome.

voretaq7
    Considering how thousands of great Google engineers have been working on this for over a decade, it's probably fair to say a single ServerFault question won't do it any justice – philfreo May 29 '10 at 17:22

3 Answers


There are numerous case studies and talks by Google engineers available online with a little searching. Suffice it to say that Google Search is highly distributed, pushed out to datacenters all over the world.

There's a ton of information available over at http://highscalability.com/google-architecture.
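As a rough illustration of the kind of pattern that makes this possible (not Google's actual code; the shard layout and names below are made up), a distributed search typically partitions the index into shards, fans each query out to every shard in parallel, and merges the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy scatter-gather sketch (all names hypothetical): the inverted
# index is split into shards, each query fans out to every shard in
# parallel, and the partial hit lists are merged.

SHARDS = [
    {"mapreduce": [1, 4], "bigtable": [2]},  # shard 0's slice of the index
    {"mapreduce": [7], "gfs": [5, 9]},       # shard 1's slice of the index
]

def search_shard(shard, term):
    """Return the document ids matching `term` in one shard."""
    return shard.get(term, [])

def search(term):
    """Fan the query out to all shards concurrently and merge results."""
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(lambda s: search_shard(s, term), SHARDS)
    return sorted(doc for hits in partials for doc in hits)

print(search("mapreduce"))  # [1, 4, 7]
```

Because no single shard holds the whole index, each machine only does a fraction of the work per query, and capacity scales by adding more shards and more replicas of each shard.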

obfuscurity
  • One example is the paper published years ago on disk failures (at the time, estimates were that they had 800,000 servers), and the paper on MapReduce, just to name two. – ChuckCottrill Oct 09 '13 at 02:32

As has been mentioned, the networks and architectures of large-scale websites are highly distributed across many data centers and tens of thousands of servers. If you're interested in how this works, I'd recommend a book called Scalable Internet Architectures, which describes some of the concepts and theory behind scalable and distributed systems.

Justin Scott

Among other things, they use hundreds of thousands of servers, and MapReduce. It's massively parallel.
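To make the MapReduce part concrete, here's a toy single-process sketch of the pattern (word count, the classic example from the paper); the real system runs the same map/shuffle/reduce phases distributed across thousands of machines:

```python
from collections import defaultdict

# Single-process illustration of the MapReduce pattern.
# Google's framework distributes each phase across a cluster.

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle: group all intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(shuffle_phase(map_phase(docs))))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

The point of the model is that map and reduce are independent per key, so the work parallelizes almost perfectly across as many machines as you can throw at it.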

ChuckCottrill