Cascading or cycling encryption algorithms increases implementation complexity, and that is really bad for security. The intrinsic security of an algorithm (provided that you use published, well-analyzed algorithms, not homemade designs) is invariably far greater than the security of its implementation: many implementations leak secret elements, including key bits, through execution timing, cache memory access patterns, or insufficiently strict behavior when encountering incorrect data.
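To make the timing hazard concrete, here is a minimal sketch in Python (the function names are illustrative, not from any particular library) contrasting a MAC tag comparison that leaks through execution time with a constant-time one built on the standard library's `hmac.compare_digest`:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as a byte differs, so an attacker
    # who can measure response times learns how many leading bytes of
    # a forged tag are correct, and can guess the tag byte by byte.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs
    # differ, closing the timing channel.
    return hmac.compare_digest(a, b)
```

Nothing in the cipher or MAC algorithm itself is weakened here; the leak is purely a property of the surrounding code.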
Once you have covered all the implementation hazards, it becomes time to worry about the algorithms themselves; at that point, you are a trained cryptographer and you know better than to rely on irrational tricks such as cascading or doubling. Basically, the one thing that algorithm doubling or cycling guarantees is that whoever makes the suggestion is not overly competent in the area of cryptographic implementation, and therefore you do not want to use their code.
Historically, cascading or cycling were ways to cope with weak encryption algorithms: you assume that any algorithm will eventually be broken and try to do damage control in advance. This somehow negates all research on cryptography since the 1970s. In practice, security issues lie in how an algorithm is used (e.g. the chaining mode with block ciphers), how an algorithm is implemented, and, most importantly, how keys are managed (creation, storage, destruction...). For the algorithm itself to be the weak point, you have to make huge efforts (e.g. designing your own algorithm, as the DVD consortium did). When modern game consoles are hacked, the algorithms themselves (AES, ECDSA...) are not broken but circumvented; for ECDSA on the Sony PS3, it was a downright implementation bug (the per-signature random value was reused, which allows recovery of the private key).
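The "how an algorithm is used" point can be shown in a few lines. This sketch uses the pyca/cryptography package (the plaintext and the shared key are arbitrary demo values); AES is the same sound primitive in both halves, but ECB mode leaks message structure while an authenticated mode does not:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # demo only: never reuse one
                                           # key across different modes

# Misuse: ECB encrypts equal plaintext blocks to equal ciphertext
# blocks, so repeated 16-byte patterns remain visible in the output.
ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
leaky = ecb.update(b"ATTACK AT DAWN!!" * 2) + ecb.finalize()
assert leaky[:16] == leaky[16:32]  # structure of the message leaks

# Proper use: AES-GCM with a fresh random nonce hides structure and
# additionally detects any tampering with the ciphertext.
nonce = os.urandom(12)
sealed = AESGCM(key).encrypt(nonce, b"ATTACK AT DAWN!!" * 2, None)
```

The broken case and the sound case use the very same AES; only the mode of operation differs.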
What you do want is algorithm agility: you define the protocol such that the algorithm in use is a configurable parameter. Then, if a given algorithm turns out to be flaky (which does not happen often at all), you can switch to another.
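A minimal sketch of such agility, assuming a hypothetical wire format (a one-byte algorithm identifier prepended to nonce and ciphertext) and a pre-shared 32-byte key, using the pyca/cryptography AEAD classes:

```python
import os
from typing import Dict

from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# Hypothetical algorithm identifiers carried in each message header;
# both AEADs here accept a 32-byte key and a 12-byte nonce.
CIPHERS: Dict[int, type] = {
    0x01: AESGCM,
    0x02: ChaCha20Poly1305,
}

def seal(alg_id: int, key: bytes, plaintext: bytes) -> bytes:
    # Record which algorithm was used, so the receiver can decrypt
    # messages produced before and after an algorithm switch.
    nonce = os.urandom(12)
    ct = CIPHERS[alg_id](key).encrypt(nonce, plaintext, None)
    return bytes([alg_id]) + nonce + ct

def unseal(blob: bytes, key: bytes) -> bytes:
    alg_id, nonce, ct = blob[0], blob[1:13], blob[13:]
    return CIPHERS[alg_id](key).decrypt(nonce, ct, None)

key = os.urandom(32)
msg = seal(0x01, key, b"negotiated, not hardcoded")
assert unseal(msg, key) == b"negotiated, not hardcoded"
```

Retiring a flaky algorithm then means removing (or refusing) its table entry, with no change to the message format and no cascading of ciphers.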