
I'm looking for a way to measure the performance impact/gain of using huge pages. For example, I have a server with 192 GB of RAM, 140 GB of which is allocated as huge pages. I run Postgres with its shared buffers in huge pages.

Is there any way to check performance on the kernel or libc side using perf or eBPF? Ideally, I would like to see how the latency (or time spent during execution) of particular functions changes. The problem is that I don't know which functions or probes I need to profile.
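
For context, a minimal sketch of how I verify that setup (standard /proc/meminfo counters and the stock PostgreSQL huge_pages/shared_buffers settings; nothing here is specific to my environment):

    # kernel side: reserved vs. free huge pages and their size
    grep -i '^huge' /proc/meminfo

    # PostgreSQL side: with huge_pages = on the postmaster refuses to start
    # unless shared_buffers fits into the reserved huge page pool
    psql -c "SHOW huge_pages;"
    psql -c "SHOW shared_buffers;"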

lesovsky

2 Answers


You don't need abstract tests to measure the performance gain or loss. Run the test suites that best describe your load profile and your current environment. If you have some heavy PL/pgSQL / stored-procedure sequences, use them as the benchmark: run them with and without huge pages while recording the time taken. That would be the best approach. Even if someone else benefits from huge pages under a load profile you don't have, a statistically zero performance gain in your own measurements means the tweak is useless for your task.
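
If you don't have a ready-made suite, a rough sketch with pgbench looks like this (pgbench is just a stand-in for your own workload; it assumes a test database named bench already initialized with pgbench -i, and huge_pages toggled in postgresql.conf between runs):

    # huge_pages = on in postgresql.conf, restart so shared memory is re-allocated
    pg_ctl restart -D "$PGDATA"
    pgbench -c 16 -j 4 -T 300 bench | tee with_hugepages.txt

    # huge_pages = off in postgresql.conf, restart again
    pg_ctl restart -D "$PGDATA"
    pgbench -c 16 -j 4 -T 300 bench | tee without_hugepages.txt

    # compare throughput and latency of the two runs
    grep -E 'tps|latency' with_hugepages.txt without_hugepages.txt

Repeat each run several times and compare averages; run-to-run noise can easily exceed the effect you are trying to measure.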

As for huge pages, I really doubt there will be a significant benefit in the case of PostgreSQL, or of any SQL server in general. But I may be wrong.

drookie
  • yep, I've already run Postgres-level benchmarks with my typical workload. But I'm interested in looking a bit deeper to see which system functions perform better or worse ))) – lesovsky Dec 18 '18 at 18:52

A good starting point is using perf stat with TLB events. Details are here.
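
For example (a sketch rather than a recipe: event names differ between CPUs, so check perf list first, and <postgres_pid> is a placeholder for whichever backend or background process you want to observe):

    # dTLB/iTLB miss rates of one PostgreSQL process over 60 seconds
    perf stat -e dTLB-loads,dTLB-load-misses,iTLB-loads,iTLB-load-misses \
        -p <postgres_pid> -- sleep 60

    # software page-fault counters: with huge pages you expect fewer, larger faults
    perf stat -e page-faults,minor-faults,major-faults -p <postgres_pid> -- sleep 60

For the function-level view asked about in the comment above, the bcc tool set ships funclatency, which prints a latency histogram for a kernel function; handle_mm_fault (the generic page-fault handler) is one plausible probe point to compare with and without huge pages (the tool path varies by distribution; Ctrl-C stops the trace and prints the histogram):

    /usr/share/bcc/tools/funclatency -p <postgres_pid> handle_mm_fault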

lesovsky