Let's look at how such a thing might play out:
A classic example is Dual EC DRBG, a back-doored RNG used to mount a general attack against TLS. The other half of the attack, beyond the vulnerable RNG itself, is a mechanism that reveals the RNG's state to the attacker at the time of use, allowing the attacker to predict the remaining "random" output produced by the victim's RNG.
So there's a non-standard feature in TLS called Extended Random, through which a TLS server dumps a large chunk of its RNG output straight into the TLS negotiation, visible to anyone eavesdropping on the conversation. It has been calculated that this feature speeds up the Dual EC attack by a factor of 64,000.
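To make that concrete, here is a toy sketch of the principle (not Dual EC itself): a hypothetical back-doored RNG whose every output block leaks its internal state to anyone who knows the backdoor secret, so that a single value revealed on the wire lets the attacker predict the next value the victim uses as a key. The ToyBackdooredRNG class and BACKDOOR constant are invented purely for illustration.

```python
import hashlib

# Toy backdoor secret known only to the attacker (illustrative value).
BACKDOOR = bytes.fromhex("aa" * 32)

class ToyBackdooredRNG:
    """Toy analogue of a back-doored DRBG (not Dual EC itself).

    Every output block is the internal state masked with BACKDOOR, so
    anyone holding BACKDOOR can recover the state from a single output
    and predict everything the generator produces afterwards.
    """
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def next_block(self) -> bytes:
        out = bytes(a ^ b for a, b in zip(self.state, BACKDOOR))
        self.state = hashlib.sha256(self.state).digest()  # advance the state
        return out

# Victim: reveals one "random" block on the wire (the Extended Random role),
# then derives a secret key from the very next block.
victim = ToyBackdooredRNG(b"victim seed")
leaked_on_wire = victim.next_block()
secret_key = victim.next_block()

# Eavesdropper holding the backdoor: recover the state, predict the key.
recovered_state = bytes(a ^ b for a, b in zip(leaked_on_wire, BACKDOOR))
next_state = hashlib.sha256(recovered_state).digest()
predicted_key = bytes(a ^ b for a, b in zip(next_state, BACKDOOR))

assert predicted_key == secret_key  # the victim's "random" key was predictable
```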
So a system that uses a different RNG for the "Extended Random" values than for key derivation would be significantly (64,000x) better protected against this attack.
But not completely immune. The RNG is vulnerable in itself, even if you keep your "high-value" randomness separate. A second RNG instance is useful, but not foolproof.
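A minimal sketch of that separation, assuming a simplified hash-based DRBG (the HashDRBG class below is an illustration, not a NIST SP 800-90A implementation): one independently seeded instance supplies the values that end up on the wire, and a second, unrelated instance supplies key material, so leaking the first instance's output says nothing about the state behind the second.

```python
import hashlib
import os

class HashDRBG:
    """Minimal hash-based DRBG sketch; illustrative, not NIST SP 800-90A."""
    def __init__(self):
        self.state = os.urandom(32)  # fresh, independent seed per instance

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.state = hashlib.sha256(self.state + b"advance").digest()
            out += hashlib.sha256(self.state + b"output").digest()
        return out[:n]

# One instance feeds everything that is revealed on the wire...
public_rng = HashDRBG()   # session IDs, padding, "Extended Random"-style filler
# ...and a completely separate instance feeds key derivation.
key_rng = HashDRBG()

extended_random = public_rng.generate(64)  # visible to an eavesdropper
session_key = key_rng.generate(32)         # never exposed; unrelated state
```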
The goal of such a system is to protect against the unknown, so measuring its usefulness is difficult at best. And other protections may work better.
For example, a truly random value XORed with anything independent of it is still truly random. So if your application runs multiple independent RNGs and XORs their output, the quality of the randomness you get is at least that of the best input RNG. If an application XORs the output of a whole list of RNGs of varying quality, then as long as any one of them is safe, the combined output will be safe as well.
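A minimal sketch of such an XOR combiner, using Python standard-library generators as stand-ins for sources of differing quality (random.randbytes as the deliberately weak source, os.urandom as the presumed-safe one):

```python
import os
import random

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b, strict=True))

def combined_random(n: int) -> bytes:
    """XOR the outputs of generators of differing quality.

    random.randbytes() is not cryptographically safe on its own, but as
    long as at least one input (here os.urandom) is uniformly random and
    independent of the others, the XOR of all inputs is uniformly random.
    """
    weak = random.randbytes(n)   # fast, non-cryptographic PRNG
    strong = os.urandom(n)       # OS CSPRNG
    return xor_bytes(weak, strong)

key = combined_random(32)  # safe as long as any one input source is safe
```

The independence of the sources matters here: XORing two copies of the same stream would cancel out, so each generator must be seeded and run separately.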
So how good would this technique be at protecting against unknown threats? We don't know. But what we can say is that it would protect against known threats such as the Dual EC backdoor mentioned above.