1

We are planning to implement a VDI solution. We have had some discussions about blade vs. rack servers. As we are only planning to support 75-100 clients, we calculated that we would need two servers with dual 8-core processors, plus a shared storage server. This calculation is based on a paper by Oracle that states 12 active virtual machines per core.
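For reference, here is a minimal sketch of that sizing arithmetic in Python, assuming the upper end of 100 clients and treating the 12-VMs-per-core figure from the Oracle paper as a rough heuristic:

    # Rough VDI sizing check, assuming 100 clients and the Oracle paper's
    # heuristic of 12 active virtual machines per physical core.
    clients = 100
    vms_per_core = 12
    cores_needed = -(-clients // vms_per_core)  # ceiling division -> 9

    servers = 2
    cores_per_server = 2 * 8                    # dual 8-core processors
    total_cores = servers * cores_per_server    # 32

    print(cores_needed, total_cores)            # 9 32 - ample headroom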

Now, when buying only two servers, a blade enclosure does not make financial sense. But a blade system has some other advantages:

a) The interconnectivity between the blades is super-fast.
b) I/O virtualisation

Are there other advantages that we should consider that would make up for the price, and are these advantages so important that we should think about investing in a blade system?

ChrisZZ
  • 737
  • 1
  • 8
  • 13

6 Answers

5

Based on this and your other questions, I don't think you need blade servers. Modern systems have enough processing power, available cores and RAM capacity to reduce your virtualization footprint to only a few standalone servers. You stated this yourself in the question: you only need 2-3 hosts and a SAN to support your planned environment.

For example, most of my VMware installations are FIVE or fewer host servers. On a Nehalem, Westmere or Sandy Bridge CPU architecture, that's a tremendous amount of capacity!

I wouldn't be concerned about physical connectivity between servers, because most virtualization suites have emulated NICs that keep intra-server traffic running at high speed.

I rarely see blades used for small virtual environments... But I DO see them when there are 20-host VMware clusters and environments where there may be 5,000 virtualized servers... You don't need that type of density.

ewwhite
  • 194,921
  • 91
  • 434
  • 799
2

Any hypervisor worth its salt offers I/O virtualization now.
Unless you're deploying an Oracle-specific solution, I would not depend on any Oracle papers to learn the Giant Truth.

Yes, blades offer advantages with regard to network connectivity, but once you hit 10 Gbit/s this doesn't matter as much; there will not be many VMs that need even a fraction of that bandwidth.
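To put a number on it, assuming the ~50 VMs per host implied by the question (100 clients spread over two hosts), a quick back-of-the-envelope in Python:

    # Average share of one 10 Gbit/s uplink, assuming ~50 VMs per host
    # (100 clients on 2 hosts, per the question).
    link_mbit = 10000
    vms_per_host = 100 // 2
    print(link_mbit / vms_per_host)   # 200.0 Mbit/s per VM on average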

The only thing a blade has going for it - factoring in the much greater price per CPU - is rack density.

You can stuff 8 dual-socket blades into 4U of rack space - that's 16 sockets where you would normally have 4 or 8.
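In concrete terms, a quick Python comparison (assuming dual-socket 1U or 2U rack servers as the baseline):

    # Socket density per 4U of rack space: blades vs. rack servers.
    blade_sockets = 8 * 2     # 8 dual-socket blades in a 4U enclosure
    rack_1u_sockets = 4 * 2   # four 1U dual-socket servers
    rack_2u_sockets = 2 * 2   # two 2U dual-socket servers
    print(blade_sockets, rack_1u_sockets, rack_2u_sockets)   # 16 8 4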

But as you already discovered, you get to pay for that density.

adaptr
  • 16,479
  • 21
  • 33
1

Depending on the blade you plan to buy, I would say for a hundred virtual machines, use rack servers. The disadvantage with blades is the cost of add-in cards, such as adding another network mezzanine card or changing/adding a switch, etc. With rack servers you can choose your own infrastructure. But that's just my point of view, and it depends on which blade you want to buy.

dje31
  • 46
  • 1
1

Like everyone else, I'd not recommend using blades. I don't see any TCO gains in such a small environment, and I doubt you'll need any of the other benefits they might offer.

For just a hundred clients, depending on load, you could even make do with a single server, since CPU power is not going to be the bottleneck. I've seen figures around 6-8 vCPUs per physical core, but my experience is that 8 cores would be more than OK for 100 Windows 7 clients.
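To put those ratios into numbers (assuming one vCPU per Windows 7 client, which is an assumption, not a rule), a quick Python check:

    # Physical cores needed for 100 clients at the quoted consolidation
    # ratios, assuming 1 vCPU per Windows 7 desktop.
    clients = 100
    for ratio in (6, 8):
        cores = -(-clients // ratio)   # ceiling division
        print(ratio, cores)            # 6 -> 17 cores, 8 -> 13 cores
    # Fitting all 100 clients on 8 cores implies a denser ~12.5:1 ratio.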

Obviously you might still want two physical servers for redundancy, but you won't need super-fast connectivity between them. There will be heavy traffic between storage and servers, but probably not more than can be handled by bonding a couple of 1 GbE links. I/O on the storage side and lots of RAM are where you'll want to spend your money.
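As a sanity check on the bonding suggestion (assuming two bonded 1 GbE links shared by all 100 clients):

    # Average storage bandwidth per client over two bonded 1 GbE links.
    links, link_mbit, clients = 2, 1000, 100
    print(links * link_mbit / clients)   # 20.0 Mbit/s per client on average
    # Typical desktop workloads need far less sequential bandwidth than
    # this; random IOPS on the storage side is the usual VDI bottleneck.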

Mikael Grönfelt
  • 627
  • 3
  • 7
  • 14
1

Blade servers have huge advantages for the account managers who sell them, because HP (et al.) have invested so much money in their development that they have incentivised each secured sale to a level that makes that Porsche 911 S4 look more like a reality than a distant dream.

Wait for HP's quarter-end, or even better, their October year-end, to see the advantage coming your way in the form of your account manager all of a sudden calling you to tell you how good they are.

Sorry, I'm a cynic, after enduring numerous HP, IBM and Dell hard-sales...

Back to VDI though, you should easily be able to run your estate on a dual quad-core server (thinking Intel X5672 or above). You'll hit memory and storage bottlenecks before CPU.

Simon Catlin
  • 5,222
  • 3
  • 16
  • 20
0
  1. In large companies the switches are managed by the networking team. If you have a blade, the switch is yours.
  2. Less rack space used per server
  3. Better ratio of computing resources to power usage
  4. TCO (total cost of ownership) may be smaller if you buy all the blades at once and use all of them at once
  5. Uniformity
  6. They have remote management features that some rack servers will not have.
  7. They look nice.

But if you want to grow slowly, it is not worth the price.

Mircea Vutcovici
  • 16,706
  • 4
  • 52
  • 80
  • 1
    1. nonsense reason, 2. yes, 3. centralized DC power distribution, but no. 4. absolutely not. 5. meh. 20 2U servers have that too. 6. real servers have IPMI too. 7. go away. – adaptr Dec 07 '12 at 14:57
  • 1
1: yes, 2: nonsense (check the Supermicro Twin), 3: disputable, 4: bull (tried to make sense of it just from purchase cost and that was a loss), 5: bull (buy similar servers - same result), 6: yes, some, 7: definitely ;) Sadly, I just am not able to make that stuff work financially. Twins are always better. – TomTom Dec 07 '12 at 15:11