The overall answer: your ability to distinguish humans from spambots depends on the spambot developers' willingness to attack you specifically. Defences fall into two categories, essentially: distinguishing bots from humans, and increasing the cost of carrying out attacks.
Distinguishing bots from humans
A lot of alternative techniques to CAPTCHAs in the first category have already been proposed on Stack Exchange: Is there a true alternative to using CAPTCHA images?
I won't discuss CUPTCHUs and CIPTCHIs and other "usable" CAPTCHAs. They all require humans to perform a task meant to discriminate them from bots. Most of them can probably be broken if an adversary actually targets them specifically, and they all still waste your users' time -- at least one of them cares about UX and is tolerable in contexts where playfulness suits the experience you're crafting.
Increasing the cost of attacks
My personal favourite is simply to use federated identity schemes, so that you rely on other identity providers to confirm that a given visitor owns an account with them -- OAuth does that -- and that this account has accrued a significant amount of value (for an email account, that it has received substantial amounts of mail from other accounts assumed to be real*) -- nobody does that yet, to my knowledge.
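To make this concrete, here is a minimal sketch of delegating the signup check to an identity provider, using Flask and requests-oauthlib against GitHub's OAuth2 endpoints. The `account_looks_established` heuristic at the end is my own assumption -- no provider currently exposes a ready-made account-value signal:

```python
# Sketch: gate signup behind a federated login instead of a CAPTCHA.
# Requires HTTPS in production (oauthlib rejects plain-http callbacks).
from datetime import datetime, timedelta, timezone

from flask import Flask, redirect, request, session
from requests_oauthlib import OAuth2Session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

CLIENT_ID = "your-client-id"          # from your registered OAuth app
CLIENT_SECRET = "your-client-secret"
AUTH_URL = "https://github.com/login/oauth/authorize"
TOKEN_URL = "https://github.com/login/oauth/access_token"

@app.route("/signup")
def signup():
    # Send the visitor to the identity provider instead of challenging them.
    oauth = OAuth2Session(CLIENT_ID)
    auth_url, state = oauth.authorization_url(AUTH_URL)
    session["oauth_state"] = state
    return redirect(auth_url)

@app.route("/callback")
def callback():
    oauth = OAuth2Session(CLIENT_ID, state=session["oauth_state"])
    oauth.fetch_token(TOKEN_URL, client_secret=CLIENT_SECRET,
                      authorization_response=request.url)
    profile = oauth.get("https://api.github.com/user").json()
    if account_looks_established(profile):
        return "Welcome! Creating your account..."
    return "We couldn't vouch for you; please try another provider."

def account_looks_established(profile: dict) -> bool:
    # Placeholder heuristic: the federated account is over 30 days old.
    # A real "account value" signal (activity, reputation) doesn't exist yet.
    created = datetime.fromisoformat(profile["created_at"].replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - created > timedelta(days=30)
```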
Note that such an approach provides no protection against infected devices, as opposed to spambots -- a growing concern for sites like Facebook, where real, trusted accounts are abused by malicious browser extensions and start serving spam (no URL, that's academic hearsay).
Other methods may be worth adopting if you have empirical data showing that they work against whoever attacks you. You can limit the number of accounts created per IP per week or month before serving a CAPTCHA, which helps against spambots that reuse the same botnet IPs to create accounts instead of rotating them systematically. You can also apply machine learning to the details of known spambots (the shape of the nickname, which fields they fill in) to identify recurrent offenders, and use that as an extra filter when deciding whether to serve a CAPTCHA; both approaches are sketched below. Of course, this requires a lot of maintenance and only works against unmotivated adversaries, so if you're a million-user platform you're probably out of luck: active adversaries trivially defeat this kind of machine learning (see the AISec conferences).
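As a rough sketch, the per-IP throttle can be a sliding window over signup timestamps; the weekly budget of 3 is an arbitrary illustration, to be tuned against your own traffic data:

```python
# Sliding-window throttle: serve a CAPTCHA only once an IP exceeds its
# weekly signup budget. Thresholds are illustrative assumptions.
import time
from collections import defaultdict

WINDOW_SECONDS = 7 * 24 * 3600  # one week
MAX_SIGNUPS_PER_WINDOW = 3      # assumed per-IP budget

_signups = defaultdict(list)    # ip -> timestamps of recent signups

def needs_captcha(ip: str) -> bool:
    now = time.time()
    # Keep only signups inside the window, then compare to the budget.
    _signups[ip] = [t for t in _signups[ip] if now - t < WINDOW_SECONDS]
    return len(_signups[ip]) >= MAX_SIGNUPS_PER_WINDOW

def record_signup(ip: str) -> None:
    _signups[ip].append(time.time())
```

In production you would keep these counters in Redis or a database rather than process memory, so they survive restarts and are shared across web workers.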
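And a toy version of the machine-learning filter, classifying nickname "shapes" with character n-grams in scikit-learn. The training examples here are made up; in practice you would label nicknames from your own moderation logs, and as said above, a motivated adversary can adapt to evade this:

```python
# Toy "looks like a known spambot" filter over nickname shapes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

nicknames = ["xX_cheapmeds_Xx", "buyviagra1987", "alice", "j_doe",
             "freecoupons24", "marta.k"]
labels = [1, 1, 0, 0, 1, 0]  # 1 = previously banned spambot

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # char n-grams capture name "shape"
    LogisticRegression(),
)
model.fit(nicknames, labels)

def spam_score(nickname: str) -> float:
    # Probability the nickname resembles past offenders; use as one
    # input to the CAPTCHA decision, never as a sole verdict.
    return model.predict_proba([nickname])[0][1]
```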
CAPTCHAs are a no-no in any case
If you believe CAPTCHAs to be absolutely necessary in some cases, you should still implement other methods to detect offenders and only serve a CAPTCHA when you have doubts about an account. It's much likelier that a real user fails a CAPTCHA than that a spambot passes 4 or 5 different checks.
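A sketch of that decision logic, combining the earlier throttle and nickname score with a hypothetical email-verification signal; all weights and thresholds are illustrative assumptions:

```python
# Risk-based challenge serving: run several cheap checks first and only
# fall back to a CAPTCHA when the combined signal is genuinely ambiguous.
def signup_decision(ip_throttled: bool, nickname_spam_score: float,
                    email_verified: bool) -> str:
    score = 0.0
    if ip_throttled:                      # e.g. needs_captcha(ip) from above
        score += 0.4
    score += 0.4 * nickname_spam_score    # e.g. spam_score(nick) from above
    if not email_verified:
        score += 0.2

    if score < 0.3:
        return "allow"      # confidently human: no friction at all
    if score > 0.7:
        return "reject"     # confidently bot: block outright
    return "captcha"        # genuinely in doubt: only now add friction
```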
* Relying on chains of trust opens the door to Sybil attacks, but these seem to me much easier to defend against (SybilGuard, and whatever else has been published since) than e.g. direct automated comment/review detection, and much kinder to users than CAPTCHAs (which have failure rates of up to 40%, according to usability researcher Angela Sasse).