
I have a query whose execution takes 15s on the server. The server is a dual Xeon 5130 (2GHz) with 2GB of RAM and a 160GB SCSI hard drive in RAID. The database is PostgreSQL 8.1 on Debian 4 (Etch).

This query is executed by an ERP (OpenERP), so I can't change or modify it. I can test it as much as I want and reproduce the long run time at will: I have it in the log file and have reproduced the case in pgAdmin.

Copy/pasted into a text file, this single query needs 170KB to be stored (just the query text).

During execution, Linux (via the 'top' command line) tells me that postgres uses 99-100% of the CPU but only 6% of the RAM.

So, how can I improve the performance of Postgres to reduce this query's execution time? A parameter somewhere? In Postgres? In Linux?

The query is like this:

SELECT
    id
FROM
    sale_order_line
WHERE
    (state IN ('confirmed', 'confirmed_wait', 'manquant'))
    AND
    (id IN (27405, 27399, 16583, 27395, XXX))
ORDER BY 
    id DESC LIMIT 50;

[replace the XXX with a list of 20,000 ids!]

Yes, this is a monster query, but not a complicated one; it's very simple, in fact.

Thanks for your help. I have been searching for a long time (2 weeks already...).

1 Answer


There are a number of steps to take here:

  • Optimise the PostgreSQL config so it uses the correct amount of your system resources (see this article I wrote for a previous employer for my full set of tips on this front); a sketch of the settings to look at first is below this list.
  • Optimize the database schema -- specifically, the indexes -- so the database runs as efficiently as possible. For that, see this article on query tuning written by a colleague of mine at said former job. It covers both MySQL and PostgreSQL, but the relevant bits should get you well on the way to the ultimate solution. A sketch of how to start on the index side also follows below.
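
To make the first step concrete, here is a minimal sketch of the settings that usually matter most for a query like this, runnable from pgAdmin or psql. The work_mem value is illustrative for a 2GB machine, not a recommendation, and shared_buffers can only be changed in postgresql.conf followed by a restart:

-- Inspect the current values of the settings that matter most here:
SHOW shared_buffers;        -- main buffer cache; change in postgresql.conf, needs a restart
SHOW work_mem;              -- memory available to each sort/hash operation
SHOW effective_cache_size;  -- the planner's estimate of the OS disk cache

-- work_mem can be tested per session before touching the config file.
-- The value is in kB on 8.1, so 32768 = 32MB (illustrative only):
SET work_mem = 32768;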
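For the second step, here is a sketch of how to see where the 15 seconds actually go and what a candidate index might look like. The index name and column choice are assumptions based only on the WHERE clause shown in the question; check the plan and the table's existing indexes before creating anything:

-- Run the problem query under EXPLAIN ANALYZE to get the real plan
-- and per-node timings:
EXPLAIN ANALYZE
SELECT id
FROM sale_order_line
WHERE state IN ('confirmed', 'confirmed_wait', 'manquant')
  AND id IN (27405, 27399, 16583, 27395 /* ...the full 20000-id list... */)
ORDER BY id DESC LIMIT 50;

-- If the plan shows a sequential scan filtering on state, a two-column
-- index matching the WHERE clause may help (hypothetical name):
CREATE INDEX sale_order_line_state_id_idx
    ON sale_order_line (state, id);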
womble