9

I have known about SQLite for a long time and I know it is blazing fast, but I have never used it on a production server. I have never been able to find a solid estimate of how much traffic it can handle before failing.

Does anyone have any numbers or an article on this?

T. Zengerink
Dr Hydralisk
    Warning: *The Question and some of the Answers contain misconceptions, misunderstandings, and outdated information!* – Chris S Nov 05 '12 at 17:24
  • We've been evaluating SQLite as a caching solution in production: http://uri.agassi.co.il/2014/10/using-sqlite-for-production.html – Uri Agassi Oct 20 '14 at 06:37

5 Answers

4

Unfortunately, I don't have any figures for you on load capabilities, but here are a few comments on some of the factors that restrict performance:

  • The speed of SQLite is affected by the speed of the disk it sits on and by how much write activity (inserts/updates) is going on. Commits have to reach the disk, so the rate at which the write lock can be taken and released is ultimately bounded by disk speed (rotational latency on spinning disks).

  • By default every statement runs in its own implicit transaction, but you get much better performance if you explicitly begin and commit a transaction around a batch of statements. I have had very fast mass inserts when handling the transaction programmatically (see the sketch at the end of this answer).

  • If you are generally only reading data, then you get good performance in my experience, so SQLite can be used as a caching layer for database-server reads, particularly for remote servers or complex queries.

  • It uses fewer resources than a database server, which can help overall site performance by freeing memory and CPU for the web server and application code.

  • If you require a number of concurrent writes to be possible, then a database server (e.g. MySQL, Postgres) may well serve you better.

As Devrim mentioned, the SQLite site suggests that around 100K hits/day should be fine. A Trac system requires writes, though, so performance would probably be slower in that case.
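
A minimal sketch of what "handling the transaction programmatically" can look like, using Python's built-in sqlite3 module; the file name, table, and row count are made up for illustration:

```python
import sqlite3

# isolation_level=None gives plain autocommit, which mirrors SQLite's own
# default of one implicit transaction (and one disk sync) per statement.
conn = sqlite3.connect("demo.db", isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, "item-%d" % i) for i in range(10000)]

# Slow path: every INSERT is its own transaction, so every INSERT waits on the disk.
# for row in rows:
#     conn.execute("INSERT INTO items (id, name) VALUES (?, ?)", row)

# Fast path: wrap the whole batch in one explicit transaction so all
# 10,000 rows share a single commit.
conn.execute("BEGIN")
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)
conn.execute("COMMIT")

conn.close()
```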

cEz
  • Thanks for the responses. So basically you can only perform one write to the database at a time, but you can still have multiple people read from the database without getting locked? If so, I have just gotten a few ideas for a caching system. – Dr Hydralisk Feb 19 '10 at 15:57
  • The "100K" number that's thrown around is useless. The docs read "Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite." A "hit" is typically defined as a HTTP request. This includes a request for every .js, .css, and image on a hit. What does this have to do with DB performance? Assuming the authors meant "pageview", it's still useless. How many queries per pageview? Ratio of reads to writes? What kind of hardware is the DB running on? It's not a good idea to use SQLite for a website when there are at least a dozen other better tools for the job. – jamieb Feb 19 '10 at 17:45
  • @Dr Hydralisk Yes, reads can occur in parallel, just like reads of an ordinary file. – cEz Feb 19 '10 at 21:11
  • @jamieb The point about "hits" is a fair one. Nobody has mentioned what hardware or server resources are available, though, so it could be a small virtual server for all we know. Using SQLite databases in a caching system, or for primarily read-only tables, *can* give performance benefits, particularly when the database server is a remote one. A memory cache plus an SQLite cache of database queries will beat the pants off database queries alone. – cEz Feb 19 '10 at 21:14
3

I have some points to add to these good answers.

The current version of SQLite has WAL (write-ahead logging), so reads and writes can proceed concurrently. The traditional limitation mentioned in the previous answers, where a write locks out all readers, therefore no longer applies (writes are still serialized, but they no longer block readers). I haven't seen WAL in production yet, so I cannot comment on how well it scales; enabling it is sketched below.

Whether you use WAL or not, if your SQLite database is read-only (or only batch-updated) and it fits in RAM (i.e. your OS has enough spare RAM to keep it in its buffers), it can scale very well in a production web application. I personally was very sceptical about its performance, scalability, and robustness, but after nine months in production it has proven to run even the most complex parts of the system very well.
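
For reference, a minimal sketch of enabling WAL, using Python's built-in sqlite3 module with a made-up file name; the setting is stored in the database file itself, so it only needs to be done once:

```python
import sqlite3

conn = sqlite3.connect("app.db")

# Switch the journal to write-ahead logging. The PRAGMA returns the mode
# now in effect, and the setting persists in the database file.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal" if the switch succeeded

# In WAL mode readers see a consistent snapshot and are not blocked by the
# (still single) writer, which appends to a separate -wal file instead of
# rewriting pages in place.
conn.close()
```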

XDF
1

SQLite is great for embedding in applications, and that is what it is designed for, but it most certainly isn't "blazingly fast". I use it for several of my own applications, purely for the convenience of having only two files that can be copied to another machine to give a fully working application. Tests against MySQL, using the same structure, indexes, etc., show SQLite to be considerably slower, even for small databases. I would expect the performance difference to grow as the database size grows, although I can't say for certain, as I've only used it with databases of less than 100MB.

John Gardeniers
  • PRAGMA settings can be adjusted for increased performance at the expense of occasionally corrupting the database (though not losing data); a couple of the usual candidates are sketched after these comments. In my experience, I could get SQLite to take roughly 250-300 inserts per second on an old Windows XP machine. – djangofan Jul 27 '11 at 15:57
  • -1, because I don't agree. I've built and queried SQLite databases several GB in size and I would never get the same number of queries per second using MySQL, for instance, on the same kind of hardware. SQLite can easily do 120k write queries per second if you know how to correctly *massage* it. – Alix Axel Nov 05 '13 at 13:40
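
The comments above don't say which PRAGMAs were actually changed; a hedged sketch of the kind of tuning usually meant, again via Python's sqlite3 module with a made-up file name, looks like this:

```python
import sqlite3

conn = sqlite3.connect("bulk.db")

# Don't wait for the OS to confirm each write has reached the disk.
# Commits get much faster, but a power failure or OS crash at the wrong
# moment can corrupt the database file (the trade-off described above).
conn.execute("PRAGMA synchronous=OFF")

# Give SQLite a larger page cache; a negative value is a size in KiB.
conn.execute("PRAGMA cache_size=-64000")

conn.close()
```
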
0

I think SQLite is only faster than a text/XML file (you may be surprised if you try it). And it doesn't support concurrency; if you want to create an intranet site where people register their work hours, or to run Trac ticketing, it may serve well. Other than that, it should be avoided and replaced by MySQL or CouchDB.

The SQLite website says around 100K hits/day should be fine, but I highly doubt it, since a simple Trac project gets stuck a lot in an office of 10 people.

Devrim
0

SQLite isn't a traditional client/server DB application. It's essentially a library that's embedded within another application, and it's designed for single-user desktop applications. You absolutely do not want to try to use it as some sort of standalone MySQL/PostgreSQL/MS-SQL replacement in a multiuser environment, because the entire DB is locked on write. You'll be dealing with contention issues under even a light load, which will destroy performance.
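
A small sketch of the contention this answer describes, using Python's built-in sqlite3 module with two connections standing in for two web-server workers; the file and table names are made up:

```python
import sqlite3

# Two connections to the same file, standing in for two concurrent workers.
# timeout=1.0 makes SQLite retry a busy lock for up to one second.
worker_a = sqlite3.connect("shared.db", timeout=1.0, isolation_level=None)
worker_b = sqlite3.connect("shared.db", timeout=1.0, isolation_level=None)

worker_a.execute("CREATE TABLE IF NOT EXISTS hits (ts TEXT)")

# Worker A takes the write lock and holds it by leaving a transaction open.
worker_a.execute("BEGIN IMMEDIATE")
worker_a.execute("INSERT INTO hits VALUES (datetime('now'))")

# Worker B now cannot start its own write; once the timeout expires it gets
# SQLITE_BUSY, surfaced in Python as "database is locked".
try:
    worker_b.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as exc:
    print(exc)

worker_a.execute("COMMIT")
```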

jamieb
  • In WAL, only concurrent writers are blocked. Plus the default MySQL engine (MyISAM) also locks full tables whenever there's a write query. – Alix Axel Nov 05 '13 at 13:42
  • @AlixAxel This answer was written six months before WAL was available. – jamieb Nov 25 '13 at 22:10