Swap space is not a simple extension of RAM that the CPU can access directly. On top of the lower speed (compare the maximum throughput of the DDR3 and SATA buses at http://en.wikipedia.org/wiki/List_of_device_bandwidths, for instance) and higher latency (every transfer goes through both the I/O controller and the drive's own controller, so there are two lots of latency, each of which is likely to be higher than that between CPU and RAM), there is extra processing to consider: blocks of memory on the SSD must be paged into RAM before they can be used, and written back out again if they are modified.
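To put the bandwidth gap in perspective, here's a back-of-envelope calculation for moving a single 4 KiB page over each bus. The rates are rough peak figures I'm assuming for illustration (~12.8 GB/s for DDR3-1600, ~600 MB/s for SATA III); real sustained rates, and the latency on top, make the real-world gap wider still:

```python
# Rough peak transfer rates (assumptions, not measurements):
PAGE = 4096            # one 4 KiB page
DDR3_BPS = 12.8e9      # DDR3-1600, ~12.8 GB/s peak
SATA_BPS = 600e6       # SATA III, ~600 MB/s peak

# Time to move one page over each bus, in microseconds
ddr3_us = PAGE / DDR3_BPS * 1e6
sata_us = PAGE / SATA_BPS * 1e6

print(f"DDR3: {ddr3_us:.3f} us/page, SATA: {sata_us:.3f} us/page, "
      f"ratio ~{sata_us / ddr3_us:.0f}x")
```

Even ignoring latency entirely, the raw transfer of a page is roughly twenty times slower over SATA than over the memory bus.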
This is particularly relevant for random access, and most in-memory algorithms are written on the assumption that random access is cheap. The CPU can read any word in RAM more or less as and when it likes, but if the data it wants has been paged out, the whole page must be read back in before even a single byte can be accessed, and the whole page must be written out if a single byte was modified before that page is purged from RAM to make room for another.
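You can see the page granularity directly. The sketch below (Linux/Unix-specific, and using minor page faults on an anonymous mapping as a stand-in, since actually forcing swap I/O from a snippet isn't practical) touches one byte in each page of a fresh mapping; the kernel's fault counter grows per page touched, not per byte, because each single-byte access faults in an entire page:

```python
import mmap
import resource

PAGE = mmap.PAGESIZE   # typically 4096 bytes
NPAGES = 256

# Anonymous, demand-paged mapping: no physical pages yet
buf = mmap.mmap(-1, PAGE * NPAGES)

before = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
for i in range(NPAGES):
    buf[i * PAGE] = 1  # write a single byte in each page
faults = resource.getrusage(resource.RUSAGE_SELF).ru_minflt - before

print(f"page size: {PAGE} bytes, pages touched: {NPAGES}, minor faults: {faults}")
```

The same mechanism applies to swapped-out pages, except there each fault also costs a round trip to the drive.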
Of course I'm ignoring the complication of CPU caches here, which make random access faster at some times than others (depending on whether the data is sitting only in main memory or has been copied into the L3/L2/L1 caches). In theory your RAM is simply a cache for your permanent storage, and there are architectures that literally work this way, with no distinction between slower permanent storage and faster memory (at least as far as the OS is concerned: it just sees the main storage, which happens to respond faster when the data is also in the cache layers), but that is not how your hardware and OS are designed.