With virtualization you will see a lot of random writes, which is the real performance downfall of RAID5 and RAID6; in our testing, even a RAID controller with a gigabyte of cache memory could not make up for this. RAID6 is typically not much slower than RAID5 and is used mainly when the array is made up of larger (>1TB) or slower-RPM (<10K) drives and/or larger arrays (>8 spindles). In those cases there is a greater chance of a second drive failing while the first is rebuilding, which is a catastrophic event if you're using RAID5.
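To see why the random writes hurt so much, here's a back-of-the-envelope sketch using the textbook write-penalty figures (2 backend I/Os per random write for RAID10, 4 for RAID5, 6 for RAID6). The spindle count and per-drive IOPS are illustrative assumptions, not measurements:

```python
# Rough random-write IOPS estimate per RAID level.
# Assumptions (illustrative only): 6 spindles, ~175 IOPS for a 15K SAS drive,
# and the textbook write penalties: RAID10 = 2, RAID5 = 4, RAID6 = 6.

SPINDLES = 6
IOPS_PER_DRIVE = 175
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

raw_iops = SPINDLES * IOPS_PER_DRIVE

for level, penalty in WRITE_PENALTY.items():
    # Effective random-write IOPS = raw IOPS / write penalty
    print(f"{level}: ~{raw_iops // penalty} random-write IOPS")
```

Even with generous assumptions, RAID5 delivers roughly half the random-write throughput of RAID10 on the same spindles, and RAID6 a third.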
I would make do with a single 1.8TB RAID10 volume configured as a single VMFS datastore. If you need more space, you'll either have to take the performance hit and go with RAID5 or purchase more drives. You don't have many drives, so there isn't much opportunity to carve them up into separate volumes for competing workloads.
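For the capacity side of that trade-off, a quick sketch (assuming six 600GB drives, which I'm inferring from the 1.8TB RAID10 figure):

```python
# Usable capacity per RAID level for the same drive set.
# Assumption: 6 x 600GB drives, inferred from the 1.8TB RAID10 figure above.

drives, size_gb = 6, 600

usable = {
    "RAID10": drives // 2 * size_gb,   # mirror pairs: half the raw space
    "RAID5":  (drives - 1) * size_gb,  # one drive's worth of parity
    "RAID6":  (drives - 2) * size_gb,  # two drives' worth of parity
}

for level, gb in usable.items():
    print(f"{level}: {gb / 1000:.1f}TB usable")
```

So RAID5 buys you 3.0TB versus RAID10's 1.8TB; that extra 1.2TB is exactly what you'd be paying for in write performance.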
Install ESXi to, and boot it from, a USB stick, and create an image backup to a second USB stick in case the first one fails. Our HP DL380 boxes have an SD card slot on the motherboard for this purpose; maybe the Dells have this too? Boot from SAN/PXE is another potential option for ESXi. Also consider keeping a cold-spare drive on the shelf, or adding a hot spare, for that inevitable day a drive fails.
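If you want to script that USB-stick backup from a Linux admin box, a minimal sketch might look like this (the /dev/sdX path is a hypothetical example; verify the device with lsblk before copying anything):

```python
# Minimal sketch: image a boot USB stick to a file, dd-style.
# The device path below is hypothetical; confirm it with lsblk first.

CHUNK = 4 * 1024 * 1024  # 4MB reads keep memory use low

def image_device(src_dev: str, dest_file: str) -> None:
    """Copy a block device to an image file, chunk by chunk."""
    with open(src_dev, "rb") as src, open(dest_file, "wb") as dst:
        while chunk := src.read(CHUNK):
            dst.write(chunk)

# e.g. image_device("/dev/sdb", "esxi-boot-stick.img")
# Restoring to the spare stick is the same copy with file and device swapped.
```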
Based on the mostly windows environment around here with Exchange, file shares, and a smattering of SQL and application Servers I would guess that you should plan on adding an additional controller and disk shelf as you will find an I/O performance hit with 20 vm guests on only 6 drives. Keep an eye on your datastore overall latency numbers; when it goes above 15-20ms people will start to notice. You may end up with only some servers virtualized. Beware also of VM sprawl as users get wind of this new-found ability to spin up a server in mere seconds. ;)
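If you want to watch that latency threshold programmatically rather than eyeballing charts, here's a hedged sketch; the sample readings and datastore name are made up, and in practice you'd feed it real latency numbers (e.g. GAVG from esxtop batch mode or stats pulled via the vSphere API):

```python
# Sketch of a latency check against the 15-20ms "people will notice" line.
# The readings below are made-up samples; in practice, pull datastore
# latency (e.g. GAVG) from esxtop batch mode or the vSphere API.

WARN_MS, ALERT_MS = 15.0, 20.0

def check_latency(datastore: str, samples_ms: list[float]) -> str:
    avg = sum(samples_ms) / len(samples_ms)
    if avg >= ALERT_MS:
        return f"{datastore}: ALERT, avg latency {avg:.1f}ms"
    if avg >= WARN_MS:
        return f"{datastore}: warning, avg latency {avg:.1f}ms"
    return f"{datastore}: ok, avg latency {avg:.1f}ms"

print(check_latency("datastore1", [12.0, 18.5, 22.0, 16.3]))
```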