So - we have an internal company database that handles the usual kind of stuff: clients, phone calls, sales deals and client agreements/schemes.
It's an Access 2000 front-end with a SQL Server 2000 Standard back-end, on a single server: dual Xeon 3.2 GHz, 2 GB RAM, Windows Server 2003. It sits at about 40% CPU load all day, spread across the four logical cores the OS sees (two physical cores with Hyper-Threading).
The back-end database is poorly designed and has grown organically over 10+ years in the hands of less-than-skilled maintainers. It is badly normalised, and the obvious problems include tables of tens of thousands of rows with no primary key or index, which are nonetheless joined heavily in some of the most-used parts of the system (e.g. a call manager application that sits on everyone's second monitor for eight hours a day and re-runs a big, inefficient query every few seconds).
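For context on the missing-index problem: on SQL Server 2000, tables with no clustered index (heaps) show up in `sysindexes` with `indid = 0`, so they can be listed with a query like the sketch below. The `CREATE CLUSTERED INDEX` line uses invented table/column names purely for illustration - the real fix depends on the actual join columns.

```sql
-- List user tables stored as heaps (no clustered index) on SQL Server 2000.
-- sysindexes.indid = 0 means the table has no clustered index.
SELECT o.name AS TableName, i.rowcnt AS ApproxRows
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id AND i.indid = 0
WHERE o.xtype = 'U'
ORDER BY i.rowcnt DESC;

-- Hypothetical fix for one such table: index the column used in the joins
-- (dbo.Calls and ClientID are made-up names, not from our schema).
CREATE CLUSTERED INDEX IX_Calls_ClientID ON dbo.Calls (ClientID);
```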
The front-end is not much better: the typical mess of hundreds of forms, nested saved queries, poorly written SQL embedded in the VBA code, dozens of "quirks", and so on - and whenever a change is made, something seemingly unrelated breaks. We have settled on one MDB that works "well enough" and now have a no-change policy on it, as we have no Access heavyweights in-house (and no plans to hire one either).
The company is now slowly growing - more clients, more calls, and a modest increase in concurrent users - and performance has got noticeably worse just recently (waiting to move between forms, waiting for lists to populate, etc.).
Perfmon says:
- Disk transfers per second: between 0 and 30, average 4.
- Current disk queue length: hovers around 1 (so the disk subsystem doesn't look like the bottleneck)
SQL Profiler sees hundreds of thousands of queries every minute. CPU usage on the clients is essentially zero, indicating they're waiting on server-side queries to execute. I have put this workload through the Database Engine Tuning Advisor and applied its suggestions to a restored test copy, but that hasn't really made much difference.
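For anyone wanting the detail: one way to see which of those queries dominate is to save a short Profiler trace to a table and aggregate it. This is only a sketch - `dbo.MyTrace` is a hypothetical name for the saved trace table.

```sql
-- Aggregate a Profiler trace saved to a table to find the statements that
-- run most often. TextData is ntext, so it must be cast before grouping.
SELECT TOP 20
       CAST(TextData AS NVARCHAR(4000)) AS QueryText,
       COUNT(*)      AS Executions,
       SUM(Duration) AS TotalDuration
FROM dbo.MyTrace
WHERE TextData IS NOT NULL
GROUP BY CAST(TextData AS NVARCHAR(4000))
ORDER BY COUNT(*) DESC;
```

If the call manager's polling query tops that list, that alone would explain a lot of the load.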
By the way, we have a mix of 100MB and gigabit ethernet, all on one subnet, 40 ish users across two floors.
To the question.
As I see it, we have two choices for resolving or at least improving this situation:
- We can scrap it and replace it with an entirely new CRM system, either bespoke or part bespoke
- We can extend the life of this system by chucking hardware at it.
We can build an Intel i7 system with crazy performance numbers for an order of magnitude less cost than replacing the software.
When a new system is eventually developed, it can be hosted on that box, so the hardware isn't wasted. A new CRM system keeps getting put off, and off, and off; I don't see it happening for at least a year.
Any thoughts on this situation, especially if you've been here yourself, would be most appreciated.
Thanks