
I have a Java console application that performs Reed-Solomon error correction on large volumes of data (~700 MB at a time), purely as a performance test. The application is multithreaded, and I can see that it keeps all cores at 100% utilization while running.
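For context, the throughput figure below is simply total data divided by wall-clock time. Here is a minimal sketch of that kind of measurement, assuming a fixed thread pool sized to the core count and a placeholder `encodeChunk()` standing in for the actual Reed-Solomon encoder (the chunk size and structure are my assumptions, not the real code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThroughputTest {

    // Placeholder for the real Reed-Solomon encoding of one chunk.
    static void encodeChunk(byte[] chunk) {
        // ... actual RS encoding would go here ...
    }

    public static void main(String[] args) throws InterruptedException {
        final int chunkSize = 1 << 20;   // 1 MB per chunk (assumption)
        final int totalChunks = 700;     // ~700 MB of data in total
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        long start = System.nanoTime();
        for (int i = 0; i < totalChunks; i++) {
            pool.submit(() -> {
                byte[] chunk = new byte[chunkSize];  // stand-in for reading input
                encodeChunk(chunk);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        double seconds = (System.nanoTime() - start) / 1e9;
        double megabytes = totalChunks * (chunkSize / (1024.0 * 1024.0));
        System.out.printf("Throughput: %.2f MB/s%n", megabytes / seconds);
    }
}
```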

On my laptop (Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz, 8 virtual cores), I can process ~37.84 MB/s. This CPU benchmark comparison made me think I could get significantly better performance on an Amazon EC2 G2 instance (Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz, 8 virtual cores), which scored 2.5× better.

However, after running the application on that Amazon instance, it achieved only slightly better throughput: 42.07 MB/s.

I would have thought that the newer architecture, the higher clock speed, the three-year difference in technology, and the fact that it is a server CPU rather than a laptop CPU would have resulted in much better performance.

Could someone explain what the CPU benchmark score really measures, if not raw processing speed, and why I saw such a small improvement in performance?

Zoltán

1 Answer


In the benchmark comparison the Xeon has 6 additional cores (12 virtual); in your test, though, you have the same number of virtual cores as on your laptop.

You are only increasing the clock speed by about 7%, and you are getting 11% more work done; that seems like a decent improvement to me.
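To spell out the arithmetic (assuming the 7% refers to the maximum turbo clocks, 3.1 GHz on the i7-2670QM versus 3.3 GHz on the E5-2670): 3.3 / 3.1 ≈ 1.07 for the clock, while 42.07 MB/s / 37.84 MB/s ≈ 1.11 for the throughput, so per clock cycle the Xeon core is actually getting slightly more done than the laptop core.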

JamesRyan