You can do a "quick and dirty" test for file server throughput by creating a large temporary file on the server computer with the fsutil command, and then timing the transfer to a client computer:
fsutil file createnew temp-file-name 209715200
That creates a 200 MB temporary file. You can then do a quick timed copy with the following script (run it from the directory on the server where you created the temporary file, and assuming you have rights to connect to the "C$" administrative share on the client computer):
@echo off
rem Piping a blank line into TIME prints the current time without changing it
echo.|time
copy temp-file-name \\remote-computer-name\c$
echo.|time
Subtract the starting time from the ending time, convert the difference to seconds, and divide 209715200 by the number of seconds elapsed to get bytes per second. For example, a copy that takes 30 seconds works out to 209715200 / 30, or roughly 7,000,000 bytes per second.
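If you'd rather not do the arithmetic by hand, here's a rough sketch of a batch file that does the timing and the division for you. It uses the same placeholder names and 200 MB size as above (SRCFILE, DEST, and the other variable names are just ones I picked), and it assumes the copy takes at least one second, doesn't cross midnight, and that %TIME% is in the usual HH:MM:SS format:

@echo off
setlocal
rem Same placeholders as above; adjust to suit
set SRCFILE=temp-file-name
set DEST=\\remote-computer-name\c$
set /a SIZE=209715200

rem %TIME% looks like "13:05:42.07"; pad any leading space with 0, then
rem convert HH:MM:SS to whole seconds (the 1xx-100 trick keeps "08"/"09"
rem from being misread as octal)
set "T=%TIME: =0%"
for /f "tokens=1-3 delims=:.," %%a in ("%T%") do set /a "START=(1%%a-100)*3600+(1%%b-100)*60+(1%%c-100)"

copy %SRCFILE% %DEST% >nul

set "T=%TIME: =0%"
for /f "tokens=1-3 delims=:.," %%a in ("%T%") do set /a "END=(1%%a-100)*3600+(1%%b-100)*60+(1%%c-100)"

set /a ELAPSED=END-START
if %ELAPSED% lss 1 set /a ELAPSED=1
set /a BPS=SIZE/ELAPSED
echo Copied %SIZE% bytes in %ELAPSED% seconds (about %BPS% bytes/sec)
endlocal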
You should see upwards of 7,000,000 bytes per second (roughly 56Mbps) on a 100Base-TX LAN. Anything below that and I'd begin to suspect that something is up. Assuming that the server computer is reasonably modern, it should be able to fill a 100Mbps pipe with no problem. If you're seeing transfer speeds slower than that, I'd start to look at the error counters in the administration interface of the switch that the server and client are connected to. You could have faulty cabling, a duplex mismatch, or NIC driver problems. It's all just a matter of tracking the problem down methodically.
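As a quick first check from the Windows side (before you go digging into the switch), you can look at the Ethernet error and discard counters on the server and the client themselves:

netstat -e

If the "Errors" or "Discards" counters are climbing while the copy runs, that points toward the same cabling, duplex, or NIC driver suspects.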