Password hashing is a trade-off: you make the function slower (by using many iterations), which makes both attacks and normal usage more expensive. You accept spending more CPU on verifying a password because you know it will also make the attacker spend more CPU on cracking one. A decent password hashing function offers a "cost" parameter that lets you tune the function's slowness to the best level, i.e. the most expensive you can tolerate given your available resources and average load.
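For concreteness, here is a minimal sketch of such a cost parameter, using PBKDF2-HMAC-SHA256 from Python's standard library (the function names and the iteration count of 600,000 are illustrative assumptions, not recommendations):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # the "cost" parameter: the most expensive value you can tolerate

def hash_password(password: bytes):
    """Hash a new password; raising ITERATIONS slows both you and the attacker."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS)
    return salt, digest

def check_password(password: bytes, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the same salt and cost, and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```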
If you do the hashing X times, then you make usage X times slower, for both you and the attacker. In practice, this means that you must lower the iteration count of the function by a factor of X to keep the computation within your own hardware budget, which exactly nullifies any security gain. Thus, from that point of view, your proposed mechanism is simply neutral to security (except that it increases implementation complexity, which is, all other things being equal, a bad thing).
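To make the arithmetic explicit, here is a hypothetical sketch of the wrapping: with a fixed total budget of N iterations, hashing X times forces N/X iterations per call, and the total work X · (N/X) = N is unchanged:

```python
import hashlib

TOTAL_BUDGET = 600_000  # total iterations you can afford per verification

def hash_x_times(password: bytes, salt: bytes, x: int) -> bytes:
    per_call = TOTAL_BUDGET // x  # iteration count lowered by a factor of X
    digest = password
    for _ in range(x):  # X invocations of the slow hash...
        digest = hashlib.pbkdf2_hmac("sha256", digest, salt, per_call)
    return digest  # ...for a total of about TOTAL_BUDGET iterations, as before
```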
There is still an interesting point to consider, namely the randomness of X. Let's suppose that you generate X randomly, between 5 and 15. On average, X will be equal to 10; when validating a (correct) password, you will need to compute the hashing function about 10 times on average. However, to decide that a password is incorrect, you have to go all the way to 15.
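A hypothetical sketch of that verification logic (the use of PBKDF2 and the per-invocation cost are assumptions for illustration):

```python
import hashlib
import hmac
import secrets

X_MIN, X_MAX = 5, 15
ITERATIONS = 60_000  # cost of a single hashing invocation

def one_round(data: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", data, salt, ITERATIONS)

def enroll(password: bytes, salt: bytes) -> bytes:
    x = X_MIN + secrets.randbelow(X_MAX - X_MIN + 1)  # random X in [5, 15]; X itself is not stored
    digest = password
    for _ in range(x):
        digest = one_round(digest, salt)
    return digest

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    digest = password
    for i in range(1, X_MAX + 1):
        digest = one_round(digest, salt)
        # a correct password matches after its X rounds: 10 on average
        if i >= X_MIN and hmac.compare_digest(digest, stored):
            return True
    return False  # rejecting an incorrect password always costs the full 15 rounds
```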
There we could say: hey, that's good! The defender (your server), under normal usage, validates mostly correct passwords (users tend to type their passwords correctly(*)), while the attacker, in his dictionary attack, will try incorrect passwords most of the time. So the attacker will spend 1.5x more CPU on each password try than the defender (15 rounds for a wrong password, versus 10 on average for a correct one). We just gained a 1.5x factor over the attacker! Isn't it swell?
Not so. The attacker is aware of your "random X" mechanism, and will organize his attack accordingly: he will hash the words in his dictionary with 5 hashing invocations, recording the hash values (in RAM -- we are only talking about millions of hash values here, or maybe a couple of billions, nothing extreme). If one of the recorded values matches, the attacker has won. Otherwise, he computes one extra hash invocation over each recorded value and tries again. And so on, up to 15 invocations.
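A sketch of that incremental strategy (same hypothetical one_round() as above):

```python
import hashlib

X_MIN, X_MAX = 5, 15
ITERATIONS = 60_000

def one_round(data: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", data, salt, ITERATIONS)

def crack(dictionary, salt: bytes, target: bytes):
    # hash every dictionary word 5 times up front, keeping the values in RAM
    state = {}
    for word in dictionary:
        digest = word
        for _ in range(X_MIN):
            digest = one_round(digest, salt)
        state[word] = digest
    for depth in range(X_MIN, X_MAX + 1):
        for word, digest in state.items():
            if digest == target:
                return word  # found after X rounds for that word: 10 on average
        if depth < X_MAX:
            # no match at this depth: one extra invocation over each recorded value
            for word in state:
                state[word] = one_round(state[word], salt)
    return None
```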
That way, the attacker achieves the same kind of efficiency as the defender: he, too, cracks a password at an average cost of about 10 invocations of the hash function. Thus, here again, the security advantage of the random X is nullified.
(*) This is a rather optimistic notion.
Summary: the extra X factor, random or not, does not increase security. Implemented properly, it should not decrease it much either, but since it makes the code more complex and also makes the load less predictable, I can only advise against that extra mechanism.