
I'm about to configure some new GPU-capable nodes for our needs, and I was wondering whether anyone has experience with using consumer video cards like the Nvidia GTX 680 (actively cooled with a fan) in a 1U server. The fan would be pretty close to the chassis lid (e.g. in a SuperMicro SC818G-1400B), and I'm not sure it would get enough cool air. In a usual ATX case the video card gets up to 80°C, which is well below the upper limit of 98°C.
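As an aside, one way to verify those numbers under load is NVIDIA's NVML library, which exposes the same sensors that `nvidia-smi` reads. A minimal sketch (assuming the `nvml.h` header and the `nvidia-ml` library that ship with the NVIDIA driver / Tesla Deployment Kit are available; error handling mostly omitted):

```c
/* Print the core temperature of every NVIDIA GPU in the box.
 * Build (paths may vary): gcc gputemp.c -o gputemp -lnvidia-ml
 */
#include <stdio.h>
#include <nvml.h>

int main(void) {
    unsigned int count, i;

    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "Failed to initialise NVML\n");
        return 1;
    }
    nvmlDeviceGetCount(&count);

    for (i = 0; i < count; i++) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int temp;

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetName(dev, name, sizeof(name));
        /* Core GPU temperature in degrees Celsius */
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        printf("GPU %u (%s): %u C\n", i, name, temp);
    }

    nvmlShutdown();
    return 0;
}
```

Polling this while the card runs a full CUDA workload would show fairly quickly whether the 1U airflow keeps it safely below that 98°C ceiling.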

Has anyone tried a similar configuration with actively cooled video cards in a 1U server, and would you recommend it?

Thanks!

Pavel

2 Answers


There won't be enough clearance in front of the fan intake on a 1U server; it'll overheat.

I too use GPGPU cards (Teslas in my case) for OpenCL/CUDA work, but I made sure to pick a machine that can scrub the heat.

Chopper3
  • What kind of chassis are you using - 2U? Or more? – Pavel Jan 23 '13 at 14:44
  • We actually use HP SL390s, which are specifically designed for CUDA work, but any 2U server should be fine - I know there are others on here who use 4U servers because they want multiple GPGPU cards in one box, and that's worked out great for them. Is this for CUDA/OpenCL work, or to actually act as a GPU/frame-buffer? – Chopper3 Jan 23 '13 at 14:46
  • It's for CUDA-based scientific computing, so I expect the video cards to get pretty hot most of the time. I will look for a 2U server that fits our needs. Thanks for your answer! – Pavel Jan 23 '13 at 15:27
  • In that case, why are you trying to use a 680? That's not really a very good CUDA card; it's a gaming card. Why not try a Tesla? It's what people like me who do this kind of work use, and it will more happily fit into a 1U server, as they're built for such a thing. – Chopper3 Jan 23 '13 at 15:40
  • We did some comparisons with a K20c, and the difference in performance (with our application) was only about 10-20%, while the K20c costs about 4-5x as much as a GTX 680. – Pavel Jan 23 '13 at 16:11

I'd personally use a better chassis for GPU work. My preference is 2U because 1U is always a compromise in cooling and/or expansion. In my case, the systems that required the CUDA cards also needed additional 10GbE PCIe cards as an interconnect, so I was forced into a bigger chassis. That may not be the case for your environment, though.

Edit: I have a quote here for a new 30-node GPU scientific computing cluster... The systems spec'd are these 2U chassis.

ewwhite
  • 1U or 2U doesn't matter that much to me. I'm more curious if I can use cheap GTX video cards that are actively cooled while the fan is pretty close to the chassis top. Does it work well for you, or have you used passively cooled GPUs? – Pavel Jan 23 '13 at 14:34
  • See my edit above. – ewwhite Jan 23 '13 at 15:30
  • Well, there's an [active ("K20C")](http://www.nvidia.de/content/PDF/kepler/Tesla-K20-Active-BD-06499-001-v02.pdf) and a [passive](http://www.nvidia.de/content/PDF/kepler/Tesla-K20-Passive-BD-06455-001-v05.pdf) version of the K20 card. The passive one is designed for server units like those in the specs, so I guess that's what you've used as well? – Pavel Jan 23 '13 at 17:16