The basic path for exploiting an overflow-related vulnerability is to find a crash (often by fuzzing), triage the crash to determine whether it presents a viable attack path, and then build something to exploit it.
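The find-a-crash step can be sketched as a minimal mutation fuzzer. Everything here is illustrative: the `target` function is a stand-in for the program under test (it "crashes" by raising when a length field claims more bytes than the payload holds, a classic overflow trigger), and a real harness would instead launch the binary and watch for a signal.

```python
import random

def target(data: bytes) -> None:
    """Stand-in for the program under test: raises when the claimed
    length exceeds the actual payload (an overflow-style bug)."""
    if len(data) < 2:
        return
    claimed_len = data[0]
    payload = data[2:]
    if claimed_len > len(payload):
        raise IndexError("read past end of buffer")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a handful of random bytes in the seed input."""
    buf = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 10_000):
    """Mutate the seed until the target raises; return the crasher."""
    rng = random.Random(1)
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except Exception:
            return candidate  # crash found: hand off to triage
    return None

crasher = fuzz(b"\x05\x00" + b"A" * 5)  # start from a well-formed input
```

Real fuzzers (AFL and friends) add coverage feedback, corpus management, and smarter mutations, but the loop above is the core idea: perturb valid input, watch for crashes.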
Sometimes knowing where to look depends on knowledge of the architecture, as when Charlie Miller noted that iOS 4.3 has a region of memory that will run unsigned code. In all of his Apple attacks, I believe the source was closed. SQL injection weaknesses are similar -- you start with a crib that tells you where a good place to look is.
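The SQL injection "crib" can be illustrated with a toy query built by string concatenation; the sqlite3 database and table here are just for demonstration, not from the original discussion:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

def lookup_vulnerable(name: str):
    # Vulnerable: attacker-controlled input is spliced into the query.
    query = f"SELECT name, secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

print(lookup_vulnerable("alice"))        # one row, as intended
print(lookup_vulnerable("' OR '1'='1"))  # the classic crib leaks all rows
```

The attacker's starting point is exactly that known crib: any input field that reaches a string-built query is a good place to look, no source access required.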
While it is simplest to identify and correct a bug via source after finding a crash, the work to set up the environment can be wasted if one's goal is simply to write a working exploit. Often, the relevant portion of a program amounts to less than 1 KB of machine code.
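Narrowing attention to that small relevant region usually starts by minimizing the crashing input itself. A naive ddmin-style byte-removal sketch follows; the `crashes` oracle is an assumption standing in for re-running the real target on each candidate:

```python
def minimize(data: bytes, crashes) -> bytes:
    """Greedily drop bytes while the input still crashes the target.
    `crashes` is an oracle: re-run the program, report crash/no-crash."""
    i = 0
    while i < len(data):
        trimmed = data[:i] + data[i + 1:]
        if crashes(trimmed):
            data = trimmed      # byte was irrelevant; keep it removed
        else:
            i += 1              # byte is needed to reproduce the crash
    return data

# Toy oracle: the "crash" needs the marker b"!!" somewhere in the input.
minimal = minimize(b"junk!!morejunk", lambda d: b"!!" in d)
print(minimal)  # b'!!'
```

With a minimal reproducer in hand, a debugger session over the crash site is usually enough to scope the exploit, without ever rebuilding the program from source.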
From my perspective, open source allows one to watch bugs being added as they happen and say, "Hey, that bit of source looks insane." Beyond that, the work of exploiting closed-source and open-source software is often the same. As for reverse engineering, that's rare unless you're trying to alter or recreate an application rather than launch an exploit against it.
I wouldn't underestimate the ability of hacker groups to dedicate a lot of computing resources to a task. Warez groups have been known throughout the history of the Internet to muster substantial storage and bandwidth resources. It might be naive to think hacker groups don't muster the same scale of CPU power.
Finally, most people who do this for long enough keep a discovered exploit or two up their sleeves rather than releasing them.