This is hard to answer directly, because the answer is very much 'it depends on your caching'.
You see - hard disks are slow. Very slow. But most of the time you don't notice, provided you have a decent cache and a reasonably predictable access pattern.
Read caching means recently accessed data gets served straight from cache. Write caching means recently written data can be deferred and coalesced into full-stripe writes.
But when it comes to measuring speed - there are two elements to it. Write performance ... actually probably doesn't matter too much, because of the caching - you can always cache writes, so as long as your sustained write IO doesn't exceed what your combined RAID group can absorb, it's irrelevant.
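To make "doesn't exceed what your combined RAID group can absorb" concrete, here's a rough back-of-envelope sketch in Python. The per-spindle IOPS, group width and RAID 5 write penalty of 4 are illustrative assumptions rather than figures for any particular array; the point it demonstrates is that coalescing deferred writes into full stripes avoids the read-modify-write penalty and lifts the sustainable write rate considerably.

```python
# Back-of-envelope: can the RAID group absorb a sustained write load once the
# cache has deferred and coalesced the writes? All numbers are illustrative
# assumptions, not measurements from a real array.

DRIVE_WRITE_IOPS = 150   # assumed small-write IOPS per spindle (e.g. a 10K SAS drive)
DRIVES_IN_GROUP = 8      # assumed RAID group width
RAID5_WRITE_PENALTY = 4  # read data + read parity + write data + write parity
FULL_STRIPE_PENALTY = 1  # a coalesced full-stripe write skips the read-modify-write

def sustainable_write_iops(penalty):
    """Host-visible write IOPS the group can sustain for a given write penalty."""
    return DRIVE_WRITE_IOPS * DRIVES_IN_GROUP / penalty

print("random small writes  :", sustainable_write_iops(RAID5_WRITE_PENALTY), "IOPS")
print("coalesced full-stripe:", sustainable_write_iops(FULL_STRIPE_PENALTY), "IOPS")
```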
But read IO is much more of a problem - you cannot service a read until you have the data, and to get the data you have to wait for the disk if it isn't already in cache. This is where the performance cost shows up: in read latency.
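A quick sketch of why read latency is so sensitive to cache misses - the 0.1 ms cache hit and 10 ms random disk read are assumed round numbers for illustration, but they show how even a small miss rate is dominated by the disk:

```python
# Rough model of average read latency as a function of cache hit rate.
# Latency figures are assumptions: ~0.1 ms for a cache hit, ~10 ms for a
# random read that has to go to a spinning disk.

CACHE_HIT_MS = 0.1
DISK_READ_MS = 10.0

def avg_read_latency_ms(hit_rate):
    """Expected read latency for a given cache hit rate (0.0 - 1.0)."""
    return hit_rate * CACHE_HIT_MS + (1 - hit_rate) * DISK_READ_MS

for hit_rate in (0.99, 0.95, 0.80):
    print(f"hit rate {hit_rate:.0%}: ~{avg_read_latency_ms(hit_rate):.2f} ms average read")
```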
And that's why slower disks are a problem - it doesn't actually matter how fast the combined RAID group is, because the data at rest lives on one (or two) specific drives, and you have to access that drive to service the read.
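To put a number on that: a cache miss is served at roughly the speed of the single spindle holding the block, not the whole group. A rough per-drive estimate from average seek time plus half a rotation (the seek times below are typical datasheet-style assumptions):

```python
# Estimate random-read IOPS per drive from average seek plus rotational latency.

def drive_random_read_iops(avg_seek_ms, rpm):
    rotational_latency_ms = 60_000 / rpm / 2   # half a revolution on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print("7.2K RPM:", round(drive_random_read_iops(8.5, 7200)), "IOPS")   # ~80
print(" 10K RPM:", round(drive_random_read_iops(4.5, 10000)), "IOPS")  # ~130
print(" 15K RPM:", round(drive_random_read_iops(3.5, 15000)), "IOPS")  # ~180
```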
So - at the risk of sounding a bit wishy-washy - it depends a lot on your workload. You can usually get by with really slow disks and really good caching if what you want is a large number of terabytes on the cheap.
However, I'd only bother doing that once you're talking about 100+TB or so, which is where the cost of rack space, cooling, floor footprint etc. starts to become significant.
For your application - I'd say buy the fastest drives you can afford, because it's a lot easier to buy more drives later if you need them, than to realise you can't because you've filled your drive bays and need a new shelf.
But I'd strongly suggest also considering SSDs - their price per gigabyte isn't amazing, but their price per IOP really is. And whilst nobody thinks they care about IOPs, IOPs are what they actually care about when a system feels 'a bit slow'.
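A quick illustration of the price-per-gig versus price-per-IOP trade-off - the prices, capacities and IOPS figures below are made-up round numbers purely for the arithmetic, so substitute your own quotes:

```python
# Illustrative $/GB versus $/IOPS comparison with assumed figures.

drives = {
    # name: (price_usd, capacity_gb, random_read_iops) - all assumptions
    "7.2K NL-SAS": (300, 4000, 80),
    "15K SAS":     (350, 600, 180),
    "SATA SSD":    (500, 960, 50_000),
}

for name, (price, gb, iops) in drives.items():
    print(f"{name:12s}  ${price / gb:6.2f}/GB   ${price / iops:8.4f}/IOPS")
```

The SSD loses badly on cost per gigabyte but wins by a couple of orders of magnitude on cost per IOP, which is exactly the metric that shows up as 'a bit slow'.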