Clock Speed
Clock speed is the most prominent specification given for CPUs, measured in cycles per second (hertz). Early CPUs had clock speeds in the thousands of hertz (kilohertz, or kHz); they then moved into the millions of hertz (megahertz, or MHz), and today they run in the billions of hertz (gigahertz, or GHz). Clock speed is used mostly for marketing purposes, as it's a simple figure to comprehend and, within limits, correlates with performance.
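As a quick sanity check on the units, the reciprocal of the clock speed gives the duration of a single cycle. The short sketch below (plain Python, with example frequencies chosen only for illustration) shows how the cycle time shrinks as the clock climbs from kilohertz to gigahertz.

    # Cycle time is the reciprocal of clock speed: one cycle takes 1/f seconds.
    # The example frequencies are illustrative, not tied to any specific CPU.
    for label, hz in [("1 kHz", 1e3), ("1 MHz", 1e6), ("3 GHz", 3e9)]:
        cycle_time_ns = 1.0 / hz * 1e9   # convert seconds to nanoseconds
        print(f"{label}: {cycle_time_ns:,.3f} ns per cycle")

At 3 GHz, a cycle lasts only about a third of a nanosecond.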
But of course it's not that simple. The performance correlation only holds when comparing the exact same model of CPU. Once different types of CPUs and CPU architectures are involved, the comparison becomes meaningless. For example, a 3 GHz Pentium D is much slower than a 3 GHz Core 2 Duo.
This is because clock speed is not actually a measure of performance in itself. The clock simply synchronizes the different components within the CPU so that everything operates in the proper order. The higher the clock speed, the more quickly these components operate.
The design of the components themselves is another matter. CPUs perform operations called 'instructions' on data to get results; some instructions take multiple cycles to complete, while others can be executed several times in a single cycle. Exactly how much work gets done per cycle varies between CPU types. In the example above with the Pentium and the Core, the Core is faster because it does more operations (more work) per clock cycle than the Pentium does.
When you put the clock speed together with the operations per cycle and the 'bit size' (data word length), you get a fairly decent estimate of performance. The misconception that clock speed alone measures performance is usually referred to as the "megahertz myth", a term used mainly by companies competing with Intel.
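To make that estimate concrete, here is a minimal sketch of the "clock speed times operations per cycle" idea. The instructions-per-cycle figures are made up purely for illustration; they are not measured values for any real Pentium D or Core 2 Duo.

    # Rough throughput estimate: instructions per second = clock speed * IPC.
    # The IPC numbers below are hypothetical, chosen only to illustrate the idea.
    cpus = {
        "CPU A (older design)": {"clock_hz": 3e9, "ipc": 1.5},
        "CPU B (newer design)": {"clock_hz": 3e9, "ipc": 3.0},
    }
    for name, spec in cpus.items():
        throughput = spec["clock_hz"] * spec["ipc"]
        print(f"{name}: {throughput / 1e9:.1f} billion instructions/second")

Both chips run at the same 3 GHz, yet the one that does more work per cycle comes out twice as fast in this simple model.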
Also, a processor cannot just move instantly from one clock cycle to the next. There is a lag between them, called latency. This is an issue within processors, but even more so with RAM, which has a clock speed of its own. Latency is a pause during which a component does nothing: the digital equivalent of inertia. It gets worse with poor hardware synchronization, because data has to sit and wait for components to open up for it. Each delay is a microscopic fraction of a second, but for latency-sensitive work such as video game graphics, it cannot be ignored. High-latency parts are either best left out of the system or reserved for processing that doesn't involve graphics (physics, AI, and others).
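A back-of-the-envelope way to see why latency matters is to multiply the number of wait cycles by the cycle time. The sketch below uses hypothetical RAM timings (they are not the specifications of any particular module) to show how long a processor can sit idle while waiting for data.

    # Time spent waiting = wait cycles * cycle time.
    # Numbers are hypothetical, for illustration only.
    ram_clock_hz = 800e6      # assumed memory clock
    wait_cycles = 11          # assumed cycles between a request and the data arriving
    cycle_time_ns = 1.0 / ram_clock_hz * 1e9
    wait_ns = wait_cycles * cycle_time_ns
    print(f"Each memory access stalls for roughly {wait_ns:.1f} ns")
    # A 3 GHz CPU completes a cycle every ~0.33 ns, so that stall amounts to
    # dozens of CPU cycles in which the processor may do nothing useful.
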
Another issue with clock speed arises in parallel operations. As an example, hard drives traditionally transferred data in parallel (16 bits at once). Later, as performance improved, two problems came into play. The first was the most obvious: the highest-performance drives, SCSI drives, had up to 68 pins with huge connectors and thick cables. The second was that the margin for error within which the bits must arrive shrinks as the clock speed rises. Due to latency or other propagation delay, not all the bits arrive at once. The period when the clock signal is high is usually the "sampling" period, during which the receiving side checks whether a bit has arrived; at high speeds it becomes very hard to maintain data integrity. This is less of a problem for mostly internal connections, where noise and other delays are kept to a minimum.
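The shrinking margin is easy to quantify: the window in which all 16 bits must settle is on the order of one clock period, or only half of it if data is sampled while the clock is high. The sketch below uses arbitrary example frequencies, not any specific bus standard, to show how that window collapses as the bus clock rises.

    # The sampling window for a parallel bus is roughly half a clock period
    # if data is only sampled while the clock signal is high.
    # The frequencies are arbitrary examples, not specific bus standards.
    for bus_mhz in (10, 40, 160):
        period_ns = 1.0 / (bus_mhz * 1e6) * 1e9
        print(f"{bus_mhz} MHz bus: all bits must settle within ~{period_ns / 2:.1f} ns")
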
There is also a newer type of CPU, the clockless CPU. Where the components of a clocked CPU use the clock's signal to synchronize their activity, in a clockless CPU the components talk to each other directly to coordinate and synchronize their work. A clockless CPU is capable of greater speeds than a clocked one, because the clock speed of a clocked CPU is limited by the worst-case scenario (the slowest possible response time of the slowest component). CPUs rarely face that worst case; most components finish their work before the next clock tick arrives, and the time they spend waiting for that tick is time they could have spent actually doing something in a clockless CPU. On the other hand, a clockless CPU is a lot trickier to design than a clocked one, and the massive amount of existing software that helps humans design CPUs is geared towards designing clocked CPUs.
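A toy model makes the argument concrete: a clocked design pays the worst-case delay on every step, while a clockless (asynchronous) design pays only each step's actual delay. The simulation below uses random, made-up stage delays, so the exact numbers mean nothing, but the gap between the two totals illustrates the idle time a clock imposes.

    import random

    random.seed(0)
    # Hypothetical per-operation delays in nanoseconds; most operations finish
    # quickly, but the clock must be slow enough for the occasional worst case.
    delays = [random.uniform(0.2, 1.0) for _ in range(1000)]

    clock_period = max(delays)            # clocked CPU: every step waits a full period
    clocked_time = clock_period * len(delays)
    clockless_time = sum(delays)          # clockless CPU: each step takes only as long as it needs

    print(f"Clocked total:   {clocked_time:.0f} ns")
    print(f"Clockless total: {clockless_time:.0f} ns")
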
Therefore, clockless CPUs are mostly prototypes; none have been mass-produced for desktops or even cell phones yet. x86, ARM, and PPC (in that order) remain by far the most likely CPUs an end user will deal with.