For most server machines, the OS should only be hitting files under %ProgramFiles% at boot and service initialization, and not very often after that: it loads what it needs to run into memory, and then pages in and out as needed without having to touch the original file on disk again.
An exception could be log files, if they live in the directory tree you've compressed. In that case the workload is write-heavy, so I don't see compression helping I/O performance in any way at all. If your application(s) generate or seek through large files, that would also make a difference. But in either case, you shouldn't have those files on your system volume: if they fill it up, your system is down. Keep such transient or growing files on a different partition (which you may or may not want to compress).
Another exception would be a workload that hits a ton of different binaries during normal operation, but I'd imagine that's much more common on a desktop machine than on most servers. A Citrix or TS server presenting complete desktop sessions would be one such case; I'm sure there are some other scenarios too.
So, I imagine the best answer would be "it depends on your server's workload. Try it with and without, with appropriate benchmarking, and tell us what you found."
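If you do benchmark it, here's a minimal sketch of the kind of measurement involved: timing sequential reads of the same files from a compressed and an uncompressed copy of a directory. The `read_throughput` helper and the scratch-file setup are illustrative, not a complete benchmark; in particular, a real test would need to defeat the OS file cache (e.g. by rebooting or reading a working set larger than RAM between runs), since a freshly written file is served from cache.

```python
import os
import tempfile
import time


def read_throughput(path, block_size=1 << 20):
    """Sequentially read the whole file; return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed if elapsed > 0 else float("inf")


# Scratch file for demonstration. On the real server you would point
# this at files inside the compressed vs. uncompressed directory trees.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 * 1024 * 1024))  # 8 MB; random data compresses poorly
    scratch = tmp.name

print(f"{read_throughput(scratch):.1f} MB/s (warm cache, so unrealistically fast)")
os.remove(scratch)
```

Run the same measurement against both trees, several times each, and compare medians rather than single runs; disk benchmarks are noisy.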
As a side note: if you have new hardware that's configured properly, why is free disk space on C:\ a concern in the first place?