
Our Oracle server (version 12c) is running on a RedHat-6.7 server with 128GB of RAM.

The current size of the only database on it is only about 60GB.

Yet we see plenty of disk reads -- on the device where the database is stored -- in the iostat output (shown in blue). There are some writes (shown in yellow) too, of course:

[iostat graph: frequent disk reads shown in blue, occasional writes shown in yellow]

While the writes to permanent storage make sense, because we do make modifications to the data sometimes, the reads do not -- the entire DB could fit in memory twice over... Indeed, we should be able to store it on a USB stick and still have good read times.

What parts of the Oracle-configuration should we examine and tune to make the server use all available RAM?

Update: we will look at the INMEMORY settings -- thanks @lenniey. But it would seem the INMEMORY parameters only come into play when the server, because of a RAM shortage, has to decide what to drop from memory and re-read later. In our case nothing should ever be dropped from memory, because there is room for everything. And yet some things are, evidently, being (re)read again and again...
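A first step would be to check the instance's memory targets and how well the buffer cache is already absorbing reads. The following is a sketch using standard `v$sysstat` statistics (the parameter names and statistic names are standard Oracle 12c; the classic hit-ratio formula shown is an approximation, not a definitive health metric):

```sql
-- Current automatic memory management settings
SHOW PARAMETER memory_target
SHOW PARAMETER sga_target
SHOW PARAMETER db_cache_size

-- Approximate buffer cache hit ratio since instance startup:
-- 1 - (physical reads / logical reads)
SELECT 1 - (phy.value / (cur.value + con.value)) AS cache_hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```

If the hit ratio is already close to 1, the remaining physical reads may be direct-path reads or background activity that bypasses the buffer cache, which no cache-size tuning will eliminate.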

Mikhail T.
  • What makes you think Oracle doesn't use the RAM it needs (did you even set INMEMORY for tables?)? You can always set memory/pga/sga/xxx-targets to as high as you like, but you will **never** "convince" Oracle to only use memory (which is a good thing). – Lenniey Jan 11 '18 at 17:13
  • The fact that it reads the disk -- frequently -- is what makes me think so. That it does so is a _bad_ thing -- it should cache all of the data in RAM because nothing can modify it on disk outside of Oracle. It should only read disk at start-up or if it does not have enough RAM for something... – Mikhail T. Jan 11 '18 at 17:16
  • You'd have to check buffer cache (hits), INMEMORY access, pga/sga/memory target sizes, etc. Oracle does some database background optimization on its own (e.g. sys scheduler jobs), so it has to read the database files. But first of all, check _what exactly_ is causing the disk reads, we know nothing about the workload/design/etc. of your DB. – Lenniey Jan 11 '18 at 17:29
  • How would I check? I know, what queries we are making -- but all of the data necessary to process them should already be in RAM: the queries are the same every day and the server's been up for weeks. With the abundance of RAM we have, it should be acting more like Redis -- but does not... – Mikhail T. Jan 11 '18 at 17:59
  • Your INMEMORY statement edited in the question is just plain wrong. INMEMORY has nothing to do with memory management or whatever, you can define whole tablespaces to be stored in memory to be _read_ from (everything will still be stored on disk). The details you ask are very specific and way beyond the scope of SF, I think. Check the official [Oracle documentation](https://docs.oracle.com/database/121/ADMIN/title.htm) for starters. But let me ask you one question: why do you use Oracle, anyway? I don't think you really _need_ it, and some other DB could be better for your workload, – Lenniey Jan 12 '18 at 08:59
  • The very reason to use INMEMORY (or something like it) at all is because your database would not fit in RAM in its entirety. Well, ours would, and that's what I'm talking about. We need Oracle, because that's the only DB our application vendors supports. Please, stop commenting here -- none of your responses have been helpful while some of them were condescending and otherwise unpleasant. – Mikhail T. Jan 12 '18 at 14:52
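As suggested in the comments, the first thing to establish is *what exactly* is being read from disk. One hedged way to do that is via `v$segment_statistics`, which attributes physical reads to individual segments (the view and statistic name are standard in 12c; the top-10 cutoff is just illustrative):

```sql
-- Segments with the most physical reads since instance startup
SELECT owner, object_name, object_type, value AS physical_reads
  FROM v$segment_statistics
 WHERE statistic_name = 'physical reads'
 ORDER BY value DESC
 FETCH FIRST 10 ROWS ONLY;
```

If the top segments turn out to be TEMP, UNDO, or LOB segments rather than the application tables, that would explain why a fully cached working set still generates disk reads.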

1 Answer


Are you running more than one database on that Oracle instance? If so, make sure the reads in question aren't spread across all the databases on that instance. If that isn't the case, check the logs and processes and try to narrow it down to particular queries or jobs. Before you do that, though, ask yourself: is the performance you're getting from this database good enough? Are you getting complaints? Sometimes we end up trying to optimize something that is running just fine. Good luck!
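To narrow the reads down to particular queries, one possible starting point is `v$sql`, which tracks cumulative disk reads per cached statement (the view and columns are standard; the truncation length is arbitrary):

```sql
-- Top SQL statements by disk reads, to identify the culprits
SELECT sql_id, disk_reads, executions,
       SUBSTR(sql_text, 1, 60) AS sql_text
  FROM v$sql
 ORDER BY disk_reads DESC
 FETCH FIRST 10 ROWS ONLY;
```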

Marcus Patman
  • No. I actually state in the question (second paragraph), that there is only a single DB -- and its size is 60GB. – Mikhail T. Jan 11 '18 at 21:44