
I have an application that requires a lot of queries for every single page, which makes it slow.

I have tried using a caching system (Redis), but the implementation is a little complicated and will take time, as every query depends on a lot of other data.

I'm thinking about keeping another clone of the database in memory, which would be read-only, but every time an update or insert happens in the database, the in-memory rows should be updated too. Do you think this would be effective and practical, and if so, is there an existing way to achieve it? I'm using PostgreSQL.
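For illustration, the kind of change notification I imagine is something like this - just a rough sketch with a made-up items table, not code I actually have:

    -- Hypothetical table; my real schema is different.
    CREATE TABLE items (
        id    serial PRIMARY KEY,
        name  text NOT NULL,
        price numeric NOT NULL
    );

    -- Trigger function that announces every change on a channel, so an
    -- external in-memory copy could refresh the affected row.
    CREATE OR REPLACE FUNCTION notify_items_change() RETURNS trigger AS $$
    DECLARE
        changed_id integer;
    BEGIN
        IF TG_OP = 'DELETE' THEN
            changed_id := OLD.id;
        ELSE
            changed_id := NEW.id;
        END IF;
        PERFORM pg_notify('items_changed', TG_OP || ':' || changed_id::text);
        RETURN NULL;  -- AFTER trigger: return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_change_notify
    AFTER INSERT OR UPDATE OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE notify_items_change();

    -- The read-only copy would then LISTEN on the same channel:
    -- LISTEN items_changed;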

My second idea was to use something like the Sphinx search engine for all queries; in that case I would need to use real-time indexes.

Would this work, or is there a better solution?

Note: the database size is about 50 GB, and I can get enough memory to hold it all.

Regards,

Devy

1 Answer


Generally it is not a smart idea. See, databases are written by smart people and use as much memory for caching as possible. If you hit the disk too often, then you have a memory problem - and keeping ANOTHER copy of the data in the same limited memory will not make things better.

What you should do is a baseline analysis. What is the bottleneck? Fix it. More memory and SSDs are typical steps here. What are the most expensive queries? Experience tells me you likely have some hammers in there that do a lot of damage, and the real problem is not "too many queries" but "what the heck is an index".
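As a sketch of where to look in Postgres - this assumes the pg_stat_statements extension is available (the column is called total_exec_time on newer versions), and some_table / some_column are only placeholders:

    -- Needs shared_preload_libraries = 'pg_stat_statements' and a restart.
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Which queries eat the most total time?
    SELECT query, calls, total_time, rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;

    -- Then check the plan of a suspect query to see whether it is doing
    -- sequential scans that an index would avoid.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM some_table WHERE some_column = 42;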

But an in-memory database normally should not make a difference, because any database worth anything already uses as much memory as it can as a cache. That is - naturally - unless some admin sets a ridiculously low limit to "save memory" and then has performance problems.
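A quick way to check whether that is what is happening here (the values below are only placeholders for a box with plenty of RAM; shared_buffers needs a restart, and ALTER SYSTEM needs PostgreSQL 9.4 or later):

    -- How much is Postgres currently allowed to cache itself?
    SHOW shared_buffers;
    SHOW effective_cache_size;

    -- Raise them if they are tiny (placeholder sizes, tune for your machine).
    ALTER SYSTEM SET shared_buffers = '16GB';
    ALTER SYSTEM SET effective_cache_size = '48GB';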

And only then should you really start with caching.
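If you do get to that point, Postgres itself gives you a cheap form of caching with materialized views - a sketch, assuming a hypothetical expensive aggregation over a page_hits table:

    -- Store the result of an expensive query inside the database.
    CREATE MATERIALIZED VIEW page_stats AS
    SELECT page_id, count(*) AS views
    FROM page_hits
    GROUP BY page_id;

    -- A unique index lets you refresh without locking out readers.
    CREATE UNIQUE INDEX ON page_stats (page_id);

    -- Re-run the expensive query on your own schedule (9.4+ for CONCURRENTLY)
    -- instead of on every page load.
    REFRESH MATERIALIZED VIEW CONCURRENTLY page_stats;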

TomTom
  • Mostly sensible, but you should be looking at your query plans before you rush out and buy SSDs. Making sure you've set shared_buffers to use most of your memory is a good start - with enough memory, Postgres will effectively run with the entire data set in memory, while writes are still persisted to disk. – symcbean Jul 11 '15 at 21:39
  • Also, focus your caching efforts on data that's expensive to generate or queried a lot and can afford to be a bit out of date. There's no point caching stuff that's a trivial index-only scan to look up and has a short validity lifetime relative to how often it's accessed. – Craig Ringer Jul 12 '15 at 05:10
  • Well, personally I think SSDs are the way to go for anything outside bulk storage. I removed every non-SSD from every server - again, outside our 30 TB backup volumes and our file servers. The rest runs on SSD anyway. They are not THAT expensive and make everything faster. – TomTom Jul 12 '15 at 07:35