- I would think so. Be careful about the piece (block) size you choose, as it will need to be larger than the standard default for such a large amount of data (see the sketch after this list).
- Not significant during the transfer; your bandwidth will be the bottleneck, not your CPU. However, generating the torrent metafile in the first place (which involves hashing every piece of the whole data set) will take quite some time, as will the final hash check on the client after the transfer has completed.
- Yes, unless your connectivity provider, the client's provider, or something in between is selectively shaping P2P traffic.
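As a rough illustration of the first two points, here is a minimal Python sketch of what torrent creation has to do: pick a piece size so the piece count stays manageable, then SHA-1 hash every piece of the data. The file name, the target piece count and the size limits are assumptions chosen for illustration, not values taken from any particular client.

```python
import hashlib
import os

# Hypothetical values for illustration; adjust for your data set.
DATA_FILE = "huge-dataset.tar"
TARGET_PIECE_COUNT = 1500        # rough upper bound on piece count (assumption)
MIN_PIECE = 256 * 1024           # 256 KiB
MAX_PIECE = 16 * 1024 * 1024     # 16 MiB, a common upper limit

def pick_piece_size(total_bytes):
    """Smallest power-of-two piece size that keeps the piece count manageable."""
    piece = MIN_PIECE
    while piece < MAX_PIECE and total_bytes / piece > TARGET_PIECE_COUNT:
        piece *= 2
    return piece

def hash_pieces(path, piece_size):
    """SHA-1 each fixed-size piece, as a torrent-creation tool would.
    This full pass over the data is why metafile generation (and the
    client's final recheck) takes so long for very large data."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(piece_size)
            if not piece:
                break
            hashes.append(hashlib.sha1(piece).digest())
    return b"".join(hashes)

total = os.path.getsize(DATA_FILE)
piece_size = pick_piece_size(total)
print(f"{total} bytes -> {piece_size // 1024} KiB pieces, "
      f"{-(-total // piece_size)} pieces in total")
```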
To mitigate the issues in points 1 and 2, you could split the data into smaller chunks and create a separate torrent for each chunk; that way each individual torrent stays a more manageable size (a sketch of one way to group the files is below).
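A minimal sketch of that splitting, assuming a hypothetical directory name and a target of roughly 50 GiB of data per torrent (pick whatever limit suits you). Each group of files would then be handed to your torrent-creation tool of choice:

```python
import os

# Hypothetical values; adjust to taste.
DATA_DIR = "dataset/"
CHUNK_LIMIT = 50 * 1024**3   # aim for ~50 GiB of data per torrent (assumption)

def group_files(root, limit):
    """Walk the data set and bundle files into groups of roughly `limit` bytes.
    Each group becomes its own torrent."""
    groups, current, current_size = [], [], 0
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if current and current_size + size > limit:
                groups.append(current)
                current, current_size = [], 0
            current.append(path)
            current_size += size
    if current:
        groups.append(current)
    return groups

for i, group in enumerate(group_files(DATA_DIR, CHUNK_LIMIT), start=1):
    print(f"torrent {i}: {len(group)} files")
    # feed `group` to whatever torrent-creation tool you use
```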
Also note that you will need to regenerate the torrent metafiles if any data in the file(s) they cover is updated. If small parts of the data change while the rest stays the same, you will probably find rsync to be a much more efficient solution, since it only transfers the parts of each file that have actually changed.
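If you do go the rsync route, the invocation is along these lines (wrapped in Python here only to keep the examples in one language; the host and paths are placeholders):

```python
import subprocess

# Hypothetical source and destination; substitute your own.
SRC = "dataset/"
DEST = "user@client.example.com:/srv/dataset/"

# -a  recurse and preserve permissions/times
# -z  compress data in transit
# --partial --progress  keep partially transferred files and show progress
subprocess.run(
    ["rsync", "-az", "--partial", "--progress", SRC, DEST],
    check=True,
)
```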
How large are the files in the dataset, and what is the spread like (several multi-gigabyte files? many smaller ones? ...)?