5

I've done some reading on Google, and the only thing I accomplished was confusing myself. Some people say count() is slow with many records; others say you can cache the number, or even use a sequence...

What's the best way to count many rows in a table with the best performance and the lowest CPU/RAM usage? By many I mean >= 500,000 rows.

Antoine Benkemoun
tftd

1 Answer

4

The best way I know of to improve count()'s performance on huge data sets is to count() something that's been indexed (and if you're counting a partial result, you can create a matching partial index).
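As a sketch of that idea (the table and column names here are invented for illustration, not from the question):

```sql
-- Hypothetical table for illustration
CREATE TABLE events (
    id      serial PRIMARY KEY,
    status  text NOT NULL
);

-- An index on the filter column gives the planner a cheaper
-- access path for a filtered count
CREATE INDEX events_status_idx ON events (status);
SELECT count(*) FROM events WHERE status = 'active';

-- If you repeatedly count the same subset, a matching partial
-- index covers exactly those rows and stays much smaller
CREATE INDEX events_active_idx ON events (id) WHERE status = 'active';
SELECT count(*) FROM events WHERE status = 'active';
```

You can run the counting query through EXPLAIN to confirm which index, if any, the planner actually chooses.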

voretaq7
  • This seems perfectly logical. I've never done this in PostgreSQL. Do you have an example from the net which you think will do the trick? – tftd Feb 09 '11 at 21:39
  • Take a look at http://wiki.postgresql.org/wiki/Introduction_to_VACUUM,_ANALYZE,_EXPLAIN,_and_COUNT (and various other chunks of the wiki) – voretaq7 Feb 09 '11 at 21:52