I will assume you are going to virtualize servers, not desktops, all right? Next, I'm going to assume that you will use several ESX/ESXi servers to access your storage and have them managed by vCenter Server.
When deciding on LUN size and the number of VMFS datastores, you are balancing several factors: performance, configuration flexibility, and resource utilization, while staying within the supported maximum configurations of your infrastructure.
You would get the best performance with a 1 VM to 1 LUN/VMFS mapping. There is no competition between machines on the same VMFS, no locking contention, each workload is isolated, and all is good. The problem is that you will end up managing an ungodly number of LUNs, may hit supported maximum limits, face headaches with VMFS resizing and migration, have underutilized resources (those few free percentage points on each VMFS add up), and generally create something that is not nice to manage.
The other extreme is one big VMFS designated to host everything. You will get the best resource utilization that way: there is no problem deciding what to deploy where, and no VMFS X becoming a hot spot while VMFS Y sits idle. The cost is aggregate performance. Why? Locking. When one ESX host needs to update a given VMFS's metadata, it takes a SCSI reservation on the LUN; the other hosts are locked out for the time it takes to complete the IO and have to retry. This costs performance, and the more hosts and VMs share the datastore, the more often it happens. Outside playground/test and development environments it is the wrong approach to storage configuration.
The accepted practice is to create datastores large enough to host a number of VMs, dividing the available storage space into appropriately sized chunks. What that number of VMs is depends on the VMs themselves. You may want just one or two critical production databases on a VMFS, but allow three or four dozen test and development machines onto the same datastore. The number of VMs per datastore also depends on your hardware (disk size, rpm, controller cache, etc.) and access patterns (for any given performance level you can host many more web servers on the same VMFS than mail servers).
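To make that concrete, here is a rough sizing sketch in Python. The per-VM disk, memory (VM swap file) and snapshot figures are purely illustrative assumptions; plug in numbers from your own environment.

```python
# Rough datastore sizing sketch. All per-VM figures below are assumptions
# for illustration only; substitute your own averages.

def datastore_size_gb(vm_count, avg_disk_gb, avg_mem_gb,
                      snapshot_overhead=0.15, free_space=0.20):
    """Estimate the datastore size needed for a group of VMs.

    Per VM we reserve: its virtual disks, a VM swap file roughly equal
    to configured memory, and a cushion for snapshots/logs. On top of
    that we keep a slice of the datastore free for growth.
    """
    per_vm = avg_disk_gb * (1 + snapshot_overhead) + avg_mem_gb
    return vm_count * per_vm / (1 - free_space)

# e.g. a dozen test/dev VMs, 30 GB of disk and 2 GB of RAM each:
print(round(datastore_size_gb(12, 30, 2)))   # roughly 550 GB
```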
Smaller datastores also have one more advantage: they physically prevent you from cramming too many virtual machines onto a datastore. No amount of management pressure will fit an extra terabyte of virtual disks onto half a terabyte of storage (at least until they hear about thin provisioning and deduplication).
One more thing: when creating those datastores, standardize on a single block size. It simplifies a lot of things later on, when you want to do something across datastores and would otherwise run into ugly "not compatible" errors.
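On VMFS-3 the block size also caps the largest single file (i.e. the biggest VMDK) the datastore can hold: 1 MB blocks allow 256 GB files, 2 MB allow 512 GB, 4 MB allow 1 TB, 8 MB allow 2 TB. So the block size you standardize on has to accommodate your largest planned virtual disk. A minimal lookup sketch, assuming those VMFS-3 limits:

```python
# VMFS-3 block size -> maximum single file size in GB.
MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def smallest_block_size(largest_vmdk_gb):
    """Return the smallest VMFS-3 block size (MB) that fits the given VMDK."""
    for block_mb, limit_gb in sorted(MAX_FILE_GB.items()):
        if largest_vmdk_gb <= limit_gb:
            return block_mb
    raise ValueError("VMDK larger than VMFS-3 allows")

print(smallest_block_size(600))   # -> 4 (MB block size)
```

In practice, picking the largest block size you expect to ever need and using it everywhere keeps all your datastores compatible with each other.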
Update: the DS3k will have active/passive controllers (i.e. any given LUN is served by either controller A or B, and accessing the LUN through the non-owning controller incurs a performance penalty), so it will pay off to have an even number of LUNs, evenly distributed between the controllers.
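A trivial sketch of what "evenly distributed" means in practice: alternate the preferred owner between controllers A and B as you carve out the LUNs (the LUN names below are made up).

```python
# Hypothetical LUN names; the point is simply to alternate preferred owners
# so each DS3k controller ends up serving half of the LUNs.
luns = ["vmfs_%02d" % i for i in range(1, 9)]   # 8 LUNs, an even number
owners = {lun: ("A" if i % 2 == 0 else "B") for i, lun in enumerate(luns)}
for lun, ctrl in sorted(owners.items()):
    print(lun, "-> preferred controller", ctrl)
```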
I could imagine starting with 15 VMs/LUN with space to grow to 20 or so.
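As a back-of-the-envelope illustration (the 30 GB average per-VM footprint and the 20% free-space cushion are assumptions, not measurements), that guideline translates into LUN sizes roughly like this:

```python
# Hypothetical averages, only to show the order of magnitude.
avg_vm_gb = 30        # disks + swap + snapshot cushion per VM (assumed)
free_space = 0.20     # keep about 20% of the datastore free

for vms in (15, 20):
    print(vms, "VMs -> roughly", round(vms * avg_vm_gb / (1 - free_space)), "GB LUN")
# 15 VMs -> roughly 560 GB, 20 VMs -> roughly 750 GB
```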