1) Hyperthreaded cores aren't real cores, and shouldn't be counted as such. Estimates vary, but I've seen figures suggesting that enabling Hyper-Threading gives you as little as 10-30% additional performance in vSphere.
2) Assigning more vCPUs to a VM should always be considered carefully, especially at higher counts. The reason (drastically simplified) is that the scheduler has to find a time slot with enough physical cores free to run all of the VM's vCPUs simultaneously. So on a simplified, hyper-unrealistic example host with say 10 cores and 10 VMs with 2 vCPUs each, you'd have 5 VMs waiting (i.e. halted) while the other 5 execute, with the two groups alternating. This is alright, since all VMs are getting CPU time half the time, and everything is dandy. Now we introduce an 11th VM with 10 vCPUs. Suddenly all 10 other VMs wait while the big VM gets its stuff done, then 5 of them execute, then the other 5. So now your VMs are running 33% of the time instead of 50%. In a complex environment, allocating relatively huge numbers of vCPUs can lower performance, especially if the VM doesn't run anything that can actually use them all. (The sketch after this list walks through that arithmetic.)
3) My personal best practice is to never give a VM more than half the logical cores of one single processor; with Xeon processors that usually works out to quite a sane number anyhow. This avoids depending too much on HT "cores", and it also keeps each VM on a single processor, making the scheduler's job easier. (The second sketch below turns this rule into a quick check.)
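To make the arithmetic in 2) concrete, here's a toy Python sketch of strict co-scheduling with a fair scheduler. The model, function name and host sizes are just my own made-up illustration (real vSphere uses relaxed co-scheduling and is a lot smarter), but it reproduces the 50% vs. 33% figures:

```python
# Toy model: each time slot, the most-starved VMs get first pick of the
# host's physical cores, and a VM only runs in a slot if ALL of its vCPUs
# fit onto the cores still free in that slot (strict co-scheduling).
def run_fractions(host_cores, vm_vcpus, slots=3000):
    runs = [0] * len(vm_vcpus)
    for _ in range(slots):
        free = host_cores
        for vm in sorted(range(len(vm_vcpus)), key=lambda i: runs[i]):
            if vm_vcpus[vm] <= free:
                free -= vm_vcpus[vm]
                runs[vm] += 1
    return [round(r / slots, 2) for r in runs]

# Ten 2-vCPU VMs on a 10-core host: every VM runs about half the time.
print(run_fractions(10, [2] * 10))           # [0.5, 0.5, ...]

# Add an 11th VM with 10 vCPUs: everyone drops to roughly a third.
print(run_fractions(10, [2] * 10 + [10]))    # [0.33, 0.33, ...]
```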
There's also the concept of NUMA nodes to take into account: if you start giving a VM more vCPUs than a single processor in the host can provide, you're basically forcing vSphere to split the VM across two NUMA nodes, which makes memory access slower, since not all of the VM's memory will be local to either processor.
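As a quick way of applying both the half-a-socket rule from 3) and the NUMA point above, here's a minimal sketch. The function, the one-NUMA-node-per-socket assumption and the example host are mine, not anything from VMware, so check your actual topology:

```python
# Rough sizing check, assuming one NUMA node per physical socket (common,
# but verify on your own hardware).
def size_check(vm_vcpus, vm_mem_gb, cores_per_socket, mem_per_node_gb,
               ht_enabled=True):
    logical_per_socket = cores_per_socket * (2 if ht_enabled else 1)
    cap = logical_per_socket // 2          # my half-a-socket rule of thumb
    if vm_vcpus > cap:
        print(f"{vm_vcpus} vCPUs is over my comfort cap of {cap}")
    if vm_vcpus > cores_per_socket or vm_mem_gb > mem_per_node_gb:
        print("VM will span NUMA nodes: expect remote memory access")

# Example: dual-socket host, 12 cores and 128 GB per socket, HT enabled.
size_check(8, 64, 12, 128)     # fits comfortably, prints nothing
size_check(16, 96, 12, 128)    # over the cap AND spans NUMA nodes
size_check(10, 192, 12, 128)   # vCPUs fit, but memory spills to the 2nd node
```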
There's a lot more magic behind how vSphere schedules VM resources, and what I wrote above is hugely simplified, but these are guidelines that have served me well for almost a decade.