Present sizing/continuity options, with costs, for several sets of requirements. You also need to add more specificity than "concurrent users." For example, say 100,000 user requests evenly distributed over a 1,000-second period (that is, 100 user requests/second) with a response time averaging less than 3 seconds and a standard deviation of less than 1 second. Your numbers, of course, may be different. Point out, if necessary, that Ethernet is serial and you'll be receiving user requests one packet at a time. As the technical expert, you need to provide enough technical education for the business users to understand the tradeoffs and reach a reasonable decision.
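One way to connect those numbers for a non-technical audience (my suggestion, not something you're obliged to use) is Little's Law: the average number of requests in flight equals the arrival rate times the average response time. A minimal sketch using the illustrative figures above:

```python
# Illustrative arithmetic only -- these are the example figures above, not measurements.
requests = 100_000        # total user requests
period_s = 1_000          # evenly distributed over this many seconds
avg_response_s = 3        # target mean response time (upper bound)

arrival_rate = requests / period_s          # 100 requests/second
in_flight = arrival_rate * avg_response_s   # Little's Law: ~300 requests in service at once

print(f"{arrival_rate:.0f} req/s, roughly {in_flight:.0f} requests in flight at any instant")
```

That "requests in flight" figure is what actually drives how much hardware you need, which makes it a useful bridge between the business requirement and the sizing options.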
Show how response time and the number of requests combine to affect cost. (Loads of requests, but a 1-hour response is OK? No problem. Likewise a fast response, but very few requests.) For a system of the size you imply, you can load-balance across multiple active sites and get business continuity cheaply.
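If it helps to make that tradeoff tangible, here's a rough sketch of the kind of back-of-the-envelope comparison you could show them. All of the scenario figures, the per-server throughput, and the per-server cost are placeholder assumptions, not vendor numbers; substitute your own measurements. The model is deliberately crude: a relaxed response window lets bursts drain from a queue, so you size for something close to the average rate, while a tight window forces you to size for the peak.

```python
import math

# Hypothetical scenarios -- every number here is a made-up placeholder for illustration.
scenarios = [
    # (name, average req/s, peak req/s during bursts, acceptable response time in s)
    ("loads of requests, 1-hour response OK", 100.0, 1000.0, 3600.0),
    ("fast response, very few requests",        0.5,    2.0,    1.0),
    ("example above",                         100.0,  300.0,    3.0),
]
burst_length_s = 60        # assumed length of a traffic burst
server_req_per_s = 50      # assumed requests/second one server can sustain
cost_per_server = 5_000    # assumed cost per server, arbitrary units

for name, avg, peak, window in scenarios:
    # Work that arrives in a burst can be spread over the response window,
    # but never over less than the burst itself.
    burst_volume = peak * burst_length_s
    required_rate = max(avg, burst_volume / max(window, burst_length_s))
    servers = max(1, math.ceil(required_rate / server_req_per_s))
    print(f"{name}: need ~{required_rate:.0f} req/s -> {servers} server(s), "
          f"cost {servers * cost_per_server}")
```

Run with these placeholders, the relaxed-response and low-traffic cases come out cheap while the "lots of requests, fast response" case does not, which is exactly the point you want the business users to take away.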
There's no good overall statistic for total registered users versus concurrent users, for several reasons. It varies greatly with the type of site, for example. "Concurrent" is also a very loosely defined term: is someone who is logged in but has had no activity for a few hours a concurrent user?