How well this will work depends on your data set and your load. There is no direct correlation between storage size and RAM + CPU. However, if you expect 3x as many reads and writes going from 1TB to 3TB, you will need more RAM and CPU to handle that load, though you very likely won't need to scale them 1:1 with storage (i.e. going from 1TB to 3TB of disk does not mean you need 3x the RAM). In general, you will find that I/O is the bottleneck, so having fast disks (SSDs!) matters most.
I've run nodes with 3TB of data and it worked without too much issue, but a lot of tuning was needed to get there, so unless you have someone on your team with a lot of experience tuning Cassandra, I would not recommend it unless this is a hard requirement. Where you have to be careful is with RAM and how much heap you assign to the Cassandra JVM process. The maximum recommended heap for Cassandra is 8GB, because garbage collection becomes more disruptive with larger heaps (unless you go with Azul Zing), and infrequent full GCs can lead to heap fragmentation, which hurts performance. In general, it is not a good idea to run Java applications with more than 8GB of heap if you can avoid it.
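For reference, the heap is set in conf/cassandra-env.sh. A minimal sketch of capping it at 8GB (the young-generation size here is just a common starting point, not a recommendation for your hardware):

```sh
# conf/cassandra-env.sh -- illustrative values, tune for your own hardware
MAX_HEAP_SIZE="8G"    # cap the JVM heap at the recommended maximum
HEAP_NEWSIZE="800M"   # young generation; ~100MB per core is the usual starting point
```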
In newer versions of Cassandra, you can move a lot off of the heap and into native memory. Since 1.2, bloom filters and compression metadata live off heap, and in 2.1 you can also allocate memtables off heap, which may help you deal with a larger data set. So now you can benefit more from having more RAM while keeping the heap at a reasonable size (8GB).
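In 2.1 the memtable behaviour is controlled in cassandra.yaml. A sketch of the relevant settings (the space figures are placeholders, not recommendations):

```yaml
# cassandra.yaml (2.1) -- illustrative values only
memtable_allocation_type: offheap_objects   # or offheap_buffers; default is heap_buffers
memtable_heap_space_in_mb: 2048             # placeholder; defaults to 1/4 of the heap if unset
memtable_offheap_space_in_mb: 4096          # placeholder; native-memory budget for memtables
```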
My recommendation is to always lean towards smaller nodes. The sizing recommendations exist for a reason, and I think it's mostly because Cassandra is more proven when used that way. Cassandra works great on cloud providers and commodity hardware, and you may even find it cheaper to run more small nodes than fewer big ones. Where it can become costly is in operations, but if you use good configuration management tools like Puppet or Chef, that cost goes down. Running lots of small nodes is also harder to do with dedicated hardware setups.
I would recommend not taking anyone's word for it, though; test out different configurations in EC2 or another cloud provider and see what works best for your application. Your load profile and data set are really going to be the determining factors in whether or not this will work. I can't stress it enough: do a lot of testing with different configurations! Once you've decided on something, it takes real effort (though it's not impossible) to switch away from it. As someone who has gone through 3 different cluster configurations for 1 application, I cannot stress this enough :). To help with testing, the new stress tool included with Cassandra 2.1 makes it really easy to generate a load scenario that is representative of what your application will do. Cassandra is very tunable and has a lot of good metrics for measuring performance, so using the stress tool also gives you an opportunity to try different options and learn more about managing Cassandra instances (tweaking memtable, compaction and other settings to get a feel for them). One or two weeks of testing will save you months of hardship!
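As a starting point, the 2.1 stress tool can be driven either with its built-in schema or with a YAML profile that mirrors your own table and query mix. A minimal sketch (the profile path, the "myread" query name and the ratios are placeholders for whatever models your application; the query itself is defined in the profile file):

```sh
# quick smoke test with the built-in schema
cassandra-stress write n=1000000 -rate threads=50
cassandra-stress read n=1000000 -rate threads=50

# drive a mixed workload modeled on your own schema
# (profile.yaml and "myread" are placeholders defined in that profile)
cassandra-stress user profile=profile.yaml ops\(insert=1,myread=3\) n=1000000 -rate threads=50
```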