Michael is correct that the community is a bit fractured right now, and documentation is a tad sparse.
Actually, it's all there; it's just very hard to understand. What you really want is the "Pacemaker Configuration Explained" ebook... (Link to PDF). You'll want to read it about a dozen times, try to implement it, and then read it another dozen times so that you can actually grok it.
The best-supported implementation of cluster services for Linux at this point is probably going to be Novell's SLES11 and its High Availability Extension (HAE). It JUST came out a month or two ago, and it comes with a nice, thick 200-page manual that describes how to set it up and get things running. Novell has also been excellent about supporting Pacemaker configurations in various forms.
Beyond that, there's RHEL5's implementation, which has the same packages and decent documentation, but I think it's more expensive than SLES. At least, it is for us.
I would avoid Heartbeat right now and go with Pacemaker/OpenAIS, because they're going to be much better supported going into the future. HOWEVER, the current state of the community is such that there are a few experts, a few people running it in production, and a whole ton of people who are completely clueless. Join the Pacemaker mailing list and pay attention to a man named Andrew Beekhof.
Edit to provide requested details:
Pacemaker/OpenAIS uses a 'monitor' operation on a 'primitive resource' (e.g. nfs-server) to keep track of what the resource is doing. If the example NFS server goes unresponsive to the rest of the cluster for X number of seconds, the cluster will execute a STONITH (Shoot The Other Node In The Head) operation to shut down the primary node and promote the secondary node to active. You decide in the configuration what to bring up afterward and what associated actions to take. Implementation details from there depend on what service you're trying to make fail over and on execution windows for certain operations (such as promoting the primary node back to master); the whole thing is pretty much as configurable as possible.
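To give a rough idea of what that looks like in practice, here's a minimal crm-shell sketch of the kind of setup described above. Everything in it (the nfsserver/IPaddr2/IPMI agents, the resource names, addresses, credentials and timeouts) is an illustrative assumption on my part, not a tested configuration:

```
# Hypothetical sketch for the crm shell; names, parameters and the IPMI
# fencing device are placeholders, adjust for your own environment.

# NFS server primitive with a monitor operation; a failed or timed-out
# monitor is how the cluster notices the resource has gone unresponsive.
primitive nfs-server ocf:heartbeat:nfsserver \
    params nfs_shared_infodir="/var/lib/nfs" \
    op monitor interval="30s" timeout="20s" \
    op start timeout="60s" op stop timeout="60s"

# Floating service IP that should follow the NFS server around.
primitive nfs-ip ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.100" cidr_netmask="24" \
    op monitor interval="10s"

# Keep the IP and the NFS server together, started in this order.
group nfs-group nfs-ip nfs-server

# STONITH device so the cluster can shoot an unresponsive node in the head
# (an IPMI-based fencing agent is assumed here).
primitive st-node1 stonith:external/ipmi \
    params hostname="node1" ipaddr="192.168.1.201" userid="admin" passwd="secret" \
    op monitor interval="60s"

# Don't run node1's fencing device on node1 itself.
location st-node1-placement st-node1 -inf: node1

# Fencing has to be enabled for the failover behaviour described above.
property stonith-enabled="true"
```

Roughly: if the node running nfs-group stops responding, the cluster fences it through st-node1 and brings the group up on the surviving node; a plain monitor failure would normally just restart or migrate the resource, depending on how you've configured failure handling.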