As per the question and answer you reference, it may well be faster to contact a host over a network than it is to perform a local disk seek operation (depending on the network and the disks in question, of course).
That doesn't always translate into faster operation for a real-life working system. When you talk about putting databases "in the memory" of various distributed systems (leaving aside the availability and latency issues that might arise), remember that those systems perform their own memory management - they might page your data out to their local disk, giving you the worst of both worlds - and they may well have other work to do, keeping a shared resource such as the network connection busy and cutting into your speed advantage.
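The raw latency comparison is easy to sanity-check with a rough micro-benchmark. This is only an illustrative sketch (the scratch file, loopback echo server, and iteration counts are all my own choices, not anything from your setup), and note two big caveats in the comments: loopback is much faster than a real network hop, and the OS page cache makes "disk" reads look far faster than true cold seeks.

```python
import os
import random
import socket
import tempfile
import threading
import time

FILE_SIZE = 8 * 1024 * 1024  # 8 MiB scratch file (arbitrary)
ITERS = 1000

def serve(sock):
    # Minimal echo server: accept one connection, echo until it closes.
    conn, _ = sock.accept()
    while data := conn.recv(64):
        conn.sendall(data)
    conn.close()

# --- Network round-trip over loopback ---
# Caveat: loopback skips the wire entirely; a LAN hop adds real latency.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
t0 = time.perf_counter()
for _ in range(ITERS):
    cli.sendall(b"ping")
    cli.recv(64)
net_us = (time.perf_counter() - t0) / ITERS * 1e6
cli.close()

# --- Random seek+read on a scratch file ---
# Caveat: the OS page cache will serve most of these from RAM, so this
# understates a genuine cold disk seek by orders of magnitude.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)
t0 = time.perf_counter()
for _ in range(ITERS):
    os.lseek(fd, random.randrange(FILE_SIZE - 512), os.SEEK_SET)
    os.read(fd, 512)
disk_us = (time.perf_counter() - t0) / ITERS * 1e6
os.close(fd)
os.unlink(path)

print(f"loopback round-trip: ~{net_us:.1f} us/op")
print(f"cached random read:  ~{disk_us:.1f} us/op")
```

The absolute numbers are nearly meaningless on their own; the point is that only a measurement on your actual network and your actual disks - ideally under realistic load - tells you which side wins.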
There's a big difference between a relatively simple cache system and trying to run a database in the memory of a number of distributed systems, as you seem to be doing. Some database transactions might become very cheap (i.e. fast), but others may become much more expensive, and you may find that designing for fast performance in this kind of system places constraints on your DB design that negate any benefits.
So my answer to you is a rather boring one: It depends. You'd need to test your specific system under load to see if any possible theoretical performance gains translate into real ones for your particular situation.