Those are probably not the best counters. The problem with the disk I/O counters is that they are not that useful on their own, because disk I/O depends a lot more on database optimization, unless the whole database fits in memory. Operations like table scans and reporting / load scripts will overwhelm your memory anyway, and a heavy load will produce I/O on the log file regardless of how much memory you have.
These disk counters are not the best for disk analysis either - queue length is hard to interpret because it depends on your hardware layout. For example, a queue length of 250 might be terrible for a single local disc, but perfectly fine for a large, powerful SAN that can handle lots of parallel requests.
I would rather go with the primary symptoms: when the disk is overloaded, I/O takes longer, so seconds/read and seconds/write (Avg. Disk sec/Read, Avg. Disk sec/Write) are non-subjective. This more primary data gives you a number that does not depend on the hardware layout, and low response times are what you are really after.
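To make that concrete, here is a minimal sketch (Python, shelling out to the built-in Windows typeperf tool) that samples those latency counters once and flags anything above a threshold. The _Total instance and the 20 ms figure are assumptions - a rough rule of thumb you would tune for your own storage, not a hard limit.

```python
# Rough sketch: sample Avg. Disk sec/Read and sec/Write once via typeperf
# and flag latencies above an assumed 20 ms threshold.
import csv
import io
import subprocess

COUNTERS = [
    r"\LogicalDisk(_Total)\Avg. Disk sec/Read",
    r"\LogicalDisk(_Total)\Avg. Disk sec/Write",
]
THRESHOLD_SECONDS = 0.020  # assumed rule of thumb, tune for your storage

def sample_latency():
    # -sc 1 = collect a single sample; typeperf writes CSV to stdout
    out = subprocess.run(
        ["typeperf", "-sc", "1", *COUNTERS],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = list(csv.reader(io.StringIO(out.strip())))
    header, data = rows[0], rows[1]  # first row: counter paths, second: values
    for name, value in zip(header[1:], data[1:]):  # skip the timestamp column
        seconds = float(value)
        status = "SLOW" if seconds > THRESHOLD_SECONDS else "ok"
        print(f"{status:>4}  {seconds * 1000:6.1f} ms  {name}")

if __name__ == "__main__":
    sample_latency()
```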
For memory, I would take:
- SQL Server Memory Manager: Memory Grants Pending
- SQL Server Buffer Manager: Page Life Expectancy
The latter gives you an idea of how fast pages are being pushed out of the buffer cache again. You have to make sure, though, that no one is forcing this - table scans are perfect for that (a table scan will basically flush the cache, unless the whole table fits in memory).
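If you prefer to read these from inside SQL Server instead of Perfmon, here is a minimal sketch that pulls both counters from the sys.dm_os_performance_counters DMV via pyodbc. The connection string and the 300-second Page Life Expectancy floor are assumptions for illustration only.

```python
# Sketch: read both memory counters straight from SQL Server's
# sys.dm_os_performance_counters DMV. Connection string and the
# 300-second PLE floor are assumptions; adjust for your server.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;Trusted_Connection=yes;"  # assumed local instance
)

QUERY = """
SELECT RTRIM(object_name) AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE (counter_name = 'Memory Grants Pending' AND object_name LIKE '%Memory Manager%')
   OR (counter_name = 'Page life expectancy'  AND object_name LIKE '%Buffer Manager%')
"""

with pyodbc.connect(CONN_STR) as conn:
    for obj, counter, value in conn.execute(QUERY):
        print(f"{obj} / {counter}: {value}")
        if counter == "Memory Grants Pending" and value > 0:
            print("  -> queries are waiting for memory grants")
        if counter == "Page life expectancy" and value < 300:  # assumed rough floor
            print("  -> pages are being flushed out of the buffer pool quickly")
```

The nice thing about the DMV route is that it works the same on a named instance (the object_name just becomes MSSQL$INSTANCE:Buffer Manager, which the LIKE still matches).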