A bigger factor than the raw number of nodes is the number of convergences (which translate to API hits) your clients are making when configuring nodes.
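As a rough illustration (the `interval` and `splay` settings are standard `client.rb` options, but the values and server URL here are made up), stretching the run interval and adding splay on daemonized clients directly reduces how hard they hit the API:

```ruby
# /etc/chef/client.rb -- controls how often this node converges,
# and therefore how many API hits it generates against the Chef Server
chef_server_url "https://chef.example.com"  # hypothetical server URL
interval 1800   # seconds between daemonized chef-client runs
splay 300       # random extra delay so nodes don't all converge at once
```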
As you found, the Ruby API server is memory intensive, so a micro instance is going to feel cramped pretty quickly. The CouchDB backend can be write intensive (depending on how often your nodes converge), so IO performance is a consideration. The search engine is normally fine, and you can increase the number of expander vnodes to handle the indexing workload.
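If indexing does back up, the usual approach is to run more expander workers and divide the vnode space between them. A minimal sketch (the `--node-count`/`--index` flags are my recollection of the chef-expander CLI, so verify them against `chef-expander --help` on your version):

```
# Hypothetical two-worker split of the expander vnode space;
# flag names are an assumption -- check chef-expander --help
chef-expander --node-count 2 --index 1
chef-expander --node-count 2 --index 2
```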
Generally, we have found that the c1.medium is the best bang-for-the-buck instance size for a wide variety of workloads, not just the Chef Server, but general application use as well. It does cost twice as much as an m1.small, though.
The Chef Server was designed for horizontal scale. It can start out on one system just fine, but as your infrastructure grows you may wish to split the components out onto separate systems. Depending on the economics, you can mix and match instance sizes by running each component on an instance suited to its workload. See the Chef wiki for more information about the configuration options.
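As a rough sketch of what that split can look like (the hostnames are hypothetical and the option names may vary between Chef Server versions, so double-check them against the wiki), the API server's `server.rb` can point at CouchDB, RabbitMQ, and Solr running on instances of their own:

```ruby
# /etc/chef/server.rb -- hypothetical layout with the backends split out
couchdb_url "http://couch.internal.example.com:5984"     # CouchDB on an IO-friendly instance
solr_url    "http://solr.internal.example.com:8983/solr" # search/indexing box
amqp_host   "rabbit.internal.example.com"                # queue feeding chef-expander
amqp_port   "5672"
```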
Also, Opscode Hosted Chef might be an economical solution, as you would not have to worry about any of that.