
We've got a Linux-based build system in which a build consists of many different embedded targets (with correspondingly different drivers and feature sets enabled), each one built from the same single main source tree.

Rather than try to convert our make-based system to something more multiprocess-friendly, we just want to find the best way to fire off builds for all of these targets simultaneously. What I'm not sure about is how to get the best performance.
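
For concreteness, here's roughly what I mean by firing them off simultaneously; a minimal shell sketch, where the target names and the TARGET/BUILDDIR make variables are made-up placeholders for whatever our real makefiles use:

    #!/bin/sh
    # Kick off one background build per embedded target, all from the same tree.
    # Target names and make variables are illustrative, not our real ones.
    for target in board-a board-b board-c; do
        make TARGET="$target" BUILDDIR="build-$target" \
            > "log-$target.txt" 2>&1 &
    done
    wait   # block until every background build has finished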

I've considered the following possible solutions:

  • Lots of individual build machines. Downsides: lots of copies of the shared code, or working from a (slow) shared drive. More systems to maintain.
  • A smaller number of multiprocessor machines (dual quadcores, perhaps), with fast striped RAID local storage. Downsides: I'm unsure of how it will scale. It seems that the volume would be the bottleneck, but I don't know how well Linux handles SMP these days.
  • A similar SMP machine, but with a hypervisor or Solaris 10 running VMware. Is this silly, or would it provide some scheduling benefits? Downsides: Doesn't address the storage bottleneck issue.

I intend to just sit down and experiment with these possibilities, but I wanted to check to see if I've missed anything. Thanks!

Allan Anderson

3 Answers


Are you running make -j and creating parallel jobs? Sun has a nice guide about that.
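
For reference, -j takes a job count (a common rule of thumb is roughly one job per core), and GNU make's -l flag caps the load average instead; the numbers below are just illustrative:

    # Run up to 8 recipe jobs in parallel; pick a number near your core count.
    make -j8

    # Or let make spawn jobs freely, but stop starting new ones
    # once the load average reaches 8.
    make -j -l 8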

The hypervisor idea is a bit silly. You want speed and I/O performance, which a single VMware server is going to take away from you. You probably want to set up as many cores and disks as possible; those will be the two real limiting factors of your build system.

Is there a reason why you can't run them in serial rather than in parallel? Take the simplest route, not the easiest.

For future reference: a nice list of alternatives to make.

Joseph Kern

Virtualise and use templates, one code-base, lots of options - big win :)
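
For example, with libvirt/KVM you could stamp out one build VM per target from a prepared template; a sketch, where the template and clone names are made up:

    # Clone a build VM from the template (virt-clone generates fresh
    # MAC addresses and a UUID for the copy), then boot it.
    virt-clone --original build-template --name build-board-a --auto-clone
    virsh start build-board-a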

Chopper3
  • Yeah, we were also thinking of doing some neat cloud kinda thing to generate virtual build machines on demand from the build env template. Still have to grab the code for each one, though, unless we go back to accessing it from a network share. – Allan Anderson Jun 13 '09 at 22:46

You can virtualize and use high-speed disks, or go to solid-state storage. The cost of solid state might be a limiting factor, though, and there's also a limit on data capacity with solid state.

In this case it's probably best to have fast systems with good virtualization software. It's simple, easy, and cost-effective.

  • Ah, so you think that virtualizing would be better at using the multiple cores than just running a bunch of parallel makes on a single OS? – Allan Anderson Jun 13 '09 at 22:42