There are two main downsides:
Your load isn't evenly distributed. Sticky sessions stick, hence the name. While initial requests will be distributed evenly, some users keep their sessions alive much longer than others. If a disproportionate number of those long-lived sessions happened to land on a single server, that server will carry much more load. Typically this doesn't have a huge impact, and it can be mitigated by adding more servers to your cluster.
Proxies conglomerate many users behind a single IP, all of which would get sent to a single server. That typically does no harm beyond increasing individual server load, but proxies can also operate as a cluster. A request that reaches your F5 from such a system would not necessarily be sent back to the same server if the request comes out of a different proxy server in their proxy cluster.
AOL was at one point using proxy clusters, and that really screwed with load balancers and sticky sessions. Most load balancers will now offer sticky sessions based on class-C net ranges, or, in the case of F5, cookie-based sticky sessions which store the end node in a cookie on the web request.
While cookie-based sessions should work, I've had some problems with them, and typically choose IP-based sessions. BIG HOWEVER: I'm mostly working on internal apps - DMZ mileage may vary.
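To make the two persistence styles concrete, here's a minimal sketch (not F5's actual implementation - the pool names, cookie name, and hashing are all my own illustrative assumptions) of how a balancer might key stickiness off a class-C range versus a cookie:

```python
import hashlib

BACKENDS = ["web1", "web2", "web3"]  # hypothetical pool members

def class_c_key(client_ip):
    # Class-C (/24) persistence: key off the first three octets, so every
    # client coming from the same proxy range lands on the same backend.
    return ".".join(client_ip.split(".")[:3])

def pick_backend(key):
    # Stable hash of the persistence key -> index into the pool.
    digest = hashlib.md5(key.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

def route(cookies, client_ip):
    # Cookie-based persistence: the balancer records the chosen backend in
    # a cookie on the first response, then routes on the cookie thereafter,
    # so a client that re-emerges from a *different* proxy IP still sticks.
    if "lb_node" in cookies:
        return cookies["lb_node"]
    node = pick_backend(class_c_key(client_ip))
    cookies["lb_node"] = node
    return node
```

Note how the cookie variant survives the AOL-style proxy-cluster problem above: the second request can arrive from a completely different IP and still hit the same node, which is exactly what IP-based persistence can't guarantee.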
All that being stated, we've had some great success with sites running behind an F5 with sticky sessions and In-Proc session state.
You also might want to take a look at one of the in-memory distributed caching systems like Memcached or Velocity as an alternative to storing session in SQL or the out-of-proc state service. You get close to the speed of in-proc memory with the ability to run it across several servers.
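The core idea behind those systems can be sketched in a few lines: the client hashes each session key to pick which cache node owns it, so the store scales horizontally with no shared state. This is a toy stand-in (plain dicts instead of real cache daemons, and naive modulo hashing where real Memcached clients use consistent hashing to minimize remapping when nodes change), not any real client library's API:

```python
import hashlib

class PartitionedCache:
    """Toy client-side partitioning in the spirit of Memcached: each key is
    hashed to one node, so session data spreads across several servers."""

    def __init__(self, nodes):
        self.nodes = nodes                    # e.g. ["cache1:11211", ...]
        self.stores = {n: {} for n in nodes}  # dict stand-ins for daemons

    def _node_for(self, key):
        # Deterministic hash -> the one node responsible for this key.
        digest = hashlib.sha1(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def set(self, key, value):
        self.stores[self._node_for(key)][key] = value

    def get(self, key):
        return self.stores[self._node_for(key)].get(key)

cache = PartitionedCache(["cache1:11211", "cache2:11211", "cache3:11211"])
cache.set("session:abc123", {"user": "jdoe"})
```

Because every web server's client hashes keys the same way, any node in the farm can read any user's session, which is what lets you drop sticky sessions entirely if you move session state out of process.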