We're using it for 7+GB of data per day, but we pay for that. A lot. I think we get a bit of an academic discount, but mostly we managed to justify spending the money because it satisfied auditors about having somebody/something looking over our logs.
We also use Nagios. We've set up some Splunk saved searches that call scripts which either generate Nagios alerts or create RT tickets. So, for instance, more than X login failures in a 5-minute window (across all servers) will generate an alert. That's the kind of thing Nagios can't really do on its own.
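To make the idea concrete, here's a rough sketch of what the decision logic inside one of those alert scripts might look like. This is purely illustrative: the threshold, the log pattern, and the function names are all made up, and a real script would actually call the RT API or submit a Nagios passive check result instead of returning a string.

```python
import re
from datetime import datetime, timedelta

# Hypothetical values for illustration -- the real threshold ("X") and
# pattern live in the saved-search / script configuration.
FAIL_RE = re.compile(r"Failed password for")
WINDOW = timedelta(minutes=5)
THRESHOLD = 10

def count_failures(events, now):
    """Count failed-login events within the last 5-minute window.

    `events` is a list of (timestamp, message) tuples, e.g. the rows a
    saved search handed to the script.
    """
    return sum(
        1
        for ts, msg in events
        if now - ts <= WINDOW and FAIL_RE.search(msg)
    )

def decide(events, now):
    """Return the action a wrapper script might take.

    In the real setup this decision would trigger an RT ticket (or,
    earlier, a Nagios alert); here it just returns a label.
    """
    if count_failures(events, now) > THRESHOLD:
        return "open-rt-ticket"
    return "no-action"
```

The point is that the time-window aggregation across all servers happens in the search/script layer, which is exactly the part Nagios by itself doesn't give you.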
Previously we were using SEC to generate those kinds of alerts, but it didn't work as well, and somebody still had to grep through a 20GB file now and then.
I'm not sure we generate any Nagios alerts anymore; we've switched most, if not all, of that to creating RT tickets. The Nagios alert model doesn't really work well for stuff based on log analysis. It's better at things with a state that can be good or bad, not a discrete event that may need investigating.
EDIT:
Yes, it really does make life a lot easier for us. It's substantially better than trying to grep through logs. We've got Windows, Linux and Solaris boxes sending it logs.
Does it magically find exactly what you want like some of the videos imply? No, it's got some limitations, and you may have to do a bit of configuration to get it handling specific types of logs well. And overly "interesting" searches can require reading through the docs and then waiting a few minutes while the Splunk server churns. But, seriously, it rocks. From what I've seen, there's really nothing else in its league.