I used to do just this a few years back. (edit: with VMWare running on CentOS hosts, not ESXi admittedly)
Every night I had a script that would suspend each VM, rsync its files from disk to the backup server, and then start it again. It worked quite well except...
Rsync doesn't work very well with a 2GB file.
It's not that rsync isn't brilliant, it's more that each 2GB vmdk file changes in ways that are very opaque to rsync. Even small changes to the enclosed filesystem produced changes throughout the vmdk (or all the vmdks, for some reason), which I blamed on Windows, either automatically defragging or otherwise doing all the other background things it does that don't matter if you're running a real system, but show up when you're trying to rsync a VM!
I think rsync's mechanism for detecting changes doesn't work very well on a 2GB file. While it quite often skipped chunks at the start of the vmdk, once it found a difference it would simply copy the rest of the file. I don't know whether that's an issue with rsync not being able to detect a moved chunk of binary data, a lack of memory on the source box, or the vmdk genuinely being updated all the way through. It doesn't matter, as the result was the same - the majority of the vmdk got copied every night.
In the end I simply copied any changed files wholesale and overwrote the backups, still using rsync. I also got better performance overwriting the backup file in place instead of letting rsync build a temporary copy and then replace what was there.
Our backup server wasn't the fastest either, and it got to the point where overnight wasn't long enough to back up all the running VMs.
However, when we did need to restore a VM, it was really easy and worked beautifully.