39

So - we have an internal company database, the usual kind of stuff: manages clients, phone calls, sales deals and client agreements/schemes.

It's an Access 2000 front-end, and an SQL Server 2000 Standard back-end. Single server, dual Xeon 3.2GHz, 2GB RAM, Windows Server 2003, gets about 40% CPU load all day, spread across the 4 cores visible to the OS (HT).

The back-end database is poorly designed, and has organically grown over 10+ years, maintained by less-than-skilled individuals. It is badly normalised, and some of the obvious problems include tables with tens of thousands of rows with no primary key or index, which are also used heavily in multi-table joins for some of the most heavily used parts of the system (e.g. a call manager application that sits on everyone's second monitor for 8 hours a day and runs a big inefficient query every few seconds).

The front-end is not much better, it's the typical mess of hundreds of forms, nested saved queries, poorly written embedded SQL in the VBA code, dozens of "quirks" etc, and whenever a change is made something unrelated seems to break. We have settled on one MDB that works "well enough" and now have a no-change policy on that as we have no Access heavyweights in-house (and no plans to hire one either).

The company is now slowly growing (increasing numbers of clients, calls, etc., plus a modest increase in the number of concurrent users), and performance has been getting noticeably worse just recently (waiting to move between forms, waiting for lists to populate, etc.).

Perfmon says:

  • Disk transfers per second: between 0 and 30, average 4.
  • Current disk queue length: hovers around 1

SQL Server Profiler sees hundreds of thousands of queries every minute. CPU usage on the clients is pretty much zero, indicating they're waiting on server-side queries to execute. I have put this workload through the DB Engine Tuning Advisor and applied its suggestions to a test backup, but this hasn't really made much difference.

By the way, we have a mix of 100Mbit and gigabit Ethernet, all on one subnet, with 40-ish users across two floors.

To the question.

As I see it we have two choices to resolve/improve this situation.

  • We can scrap it and replace it with an entirely new CRM system, either bespoke or part bespoke
  • We can extend the life of this system by chucking hardware at it.

We can build an Intel i7 system with crazy performance numbers for an order of magnitude less cost than replacing the software.

When a new system is eventually developed, it can be hosted on this box, so there's no wasted hardware. A new CRM system keeps getting put off, and off, and off - I don't see that happening for at least a year.

Any thoughts on this situation, especially if you've been here yourself, would be most appreciated.

Thanks

tomfanning

18 Answers

21

I'm going to disagree with everyone here. Chuck some hardware at it. It's cheap, fast, easy, and will buy you the time needed to implement a proper CRM solution. The reason I'm advocating something that is anathema to just about everyone not only on this board but on stackoverflow as well is that I've been a project manager/manager and have been on the "Business" side for a while (business is in quotes due to my hatred for the word). Based on your description of the software, it will take close to a year to rebuild something else. Just discovering and documenting the business rules/quirks will probably take 2 months. It will also be unbelievably expensive to develop, especially when compared to the cost of a tricked-out server.

I'm actually about to host a set of web apps for a company for just that reason. The internal IT department will not move them to better hardware because they want to redevelop them on a new platform. That cost is approximately triple what it would cost to move to new hardware. Not to mention that the company might not have the contract renewed in a year.

Dayton Brown
  • His question was "NewHardware OR NewCRM" not "NewHardware AND NewCRM" ... And really you're not going against the board (buy a new CRM), so much as re-framing the question (moving from OR to AND). – Joseph Kern Aug 10 '09 at 13:14
  • A few other comments here are saying "Do both". If he does both, then there's really no question. But can he afford to do both? – Joseph Kern Aug 10 '09 at 13:17
  • Joseph, I replied below in full, but -- New hardware, plus upgrading to more recent server editions, PLUS optimizing some of the queries and adding indexes will probably be most effective. You don't want to give away the competitive advantage that custom CRMs give to small, growing businesses. – Karl Katzke Aug 10 '09 at 14:48
  • My "do both", if you read it, was to keep the hardware going while reworking the system. Depending on the resources it needs, that might be a couple of hundred $$ in RAM while refactoring parts of it. I was showing the issues he'd have with scrapping it and going with something totally new, whether "new" is a custom-written system or a turnkey, out-of-the-box one. – SpaceManSpiff Aug 10 '09 at 14:53
    +1 for looking at the big picture: It will cost less to throw hardware at it than it will to throw developers/IT at it. Unless you can find an off-the-shelf CRM that does everything you need, costs less than the server, and will take no time to migrate to, that is. – Ernie Aug 10 '09 at 17:43
15

You might not need to do either. My suggestion is simply to add some indexes/keys to the tables.

tables with tens of thousands of rows with no primary key or index, which are also used heavily in multi-table joins

Before spending a lot of money or time, take a couple of hours and add indexes (or primary keys, if you can) to any tables involved in those joins, particularly for columns used in a WHERE clause. You could easily improve the performance by a factor of 10 in just a few hours.
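A minimal sketch of the kind of change being suggested; the table and column names here are hypothetical, so substitute the real schema:

    -- Add a primary key where a unique, non-nullable column exists
    -- (hypothetical table/column names)
    ALTER TABLE dbo.Calls
        ADD CONSTRAINT PK_Calls PRIMARY KEY CLUSTERED (CallID);

    -- Index the columns used in the joins and WHERE clauses
    CREATE INDEX IX_Calls_ClientID ON dbo.Calls (ClientID);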

Beep beep
8

The lack of disk I/O implies that queries are fed mostly out of RAM. If you 'suddenly' have your hot tables no longer fit in RAM and the server starts working the disks, you may be in for a bad ride. 2GB of RAM isn't very much these days, but back in the SQL 2000 era it would have been sizeable. I'm guessing that the amount of data that the application normally manipulates is smaller than the RAM that you have. You might want to look at the amount of "used" space in the data files. This will give you an idea of how much RAM the database might consume, worst case. SQL Server doesn't keep data that it doesn't need in RAM, but it can be difficult to know which tables are used and when.
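If you want a rough figure for that, sp_spaceused reports how much of the data files is actually in use; run it in the context of the database in question (the per-table call below uses a hypothetical table name):

    -- Database size plus reserved/data/index/unused totals
    EXEC sp_spaceused;

    -- Or per table, e.g. for a hypothetical Calls table
    EXEC sp_spaceused 'dbo.Calls';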

Hyperthreading isn't always helpful with SQL Server; you may get better performance turning it off. It's hard to test, because flipping it off and on requires a reboot, and that's a big hassle on a production server.

"Hundreds of thousands of queries a minute" translates to thousands of queries a second. That sounds pretty busy, but much of that traffic may just be cursor fetches by Access. Access is particularly bad at efficiently retrieving result sets from SQL. You may get better performance by turning the SQL Server parallelization setting off.

You also want to look for blocking. Throwing hardware at a blocking problem does not always produce the hoped-for dramatic improvement. If there is not much blocking and queries are satisfied by RAM rather than disk, you are basically relying on the processors' grunt and their ability to pull data across the memory channels. In that case, faster hardware ought to provide a good improvement. If you are in a hurry (to get past this issue) and growing slowly, that might be good enough.

As a solution, adding hardware doesn't scale as well as database improvements. If you get a surge in growth, you may find your new hardware struggling. Another thought is that successful applications draw users. If the application becomes more responsive, users may be more likely to run more reports and such on it than they would if they needed to go for coffee while waiting for the report to finish.

If the database schema is really bad, you may be able to get some performance wins simply by looking at the indexing on the tables. Concentrate on tables that you know get queried often. You can use Profiler to watch queries running against the server, just tell it to look for queries that read a lot of data (like 100,000 pages) and then work down towards queries that don't read much. You mentioned that some of the tables don't have keys. Are there natural keys in the data, just not enforced by constraints or unique indexes?

Do the tables have clustered indexes? Lack of clustered indexing can cause all sorts of secondary effects.

Are there lots of nonclustered indexes, with many columns? This is often an attempt to build many covering indexes, rather than implementing a more effective indexing strategy. SQL Server can effectively build covering indexes on the fly during a query, if it makes sense to do so and there are supporting nonclustered and clustered indexes.

Lastly, it's worth asking: Is maintenance (reindexing and/or update statistics) being done on the tables?
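If it isn't, a basic maintenance pass on SQL Server 2000 looks roughly like this (hypothetical table name; DBCC DBREINDEX holds locks while it runs, so schedule it out of hours):

    -- Rebuild all indexes on a table (hypothetical name)
    DBCC DBREINDEX ('dbo.Calls');

    -- Refresh optimizer statistics for that table
    UPDATE STATISTICS dbo.Calls;

    -- Or refresh statistics across the whole database
    EXEC sp_updatestats;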

Darin Strait
  • +1 try to find columns in often-used tables to properly index; with a bit of luck it could be easy and fast to fix. If not, the little time spent shouldn't be too costly, provided you (or whichever DBA contractor) give up and move on early if it doesn't look like there's a silver bullet to fire at it... – Oskar Duveborn Aug 10 '09 at 16:00
  • Access is *not* "particularly bad at efficiently retrieving result sets from SQL" unless the app has been designed poorly. Jet/ACE can make bad assumptions when it sends requests to SQL Server. One such is that Jet/ACE breaks down a batch update into one UPDATE per row. This is terrible from a performance point of view, but it's trying to be a good server citizen, as it allows the server to serialize and interleave the requests with those of other users, as opposed to potentially tying up everything with a long update. This can be worked around by moving the operation server side to an SPROC. – David W. Fenton Aug 30 '10 at 19:28
  • The majority of Access apps I see aren't designed; they just sort of happen and then evolve 'organically'. I have been a victim of Access retrieving large result sets row by row, with all of the network traffic and latency that comes with such behavior, so many times that I stopped counting. I'm not 100% sure whether this has been fixed in modern versions of Access that might use something like SNAC rather than Jet/ACE, or whether it's something that could be worked around by more knowledgeable Access coders, but it is something that I have seen often. – Darin Strait May 26 '11 at 15:16
6

This is a business question, not a technical question.

As a business owner: How strategic is the system to the business? The less strategic it is, the less I care about fixing it, and any money spent on it is money I could be using elsewhere to grow my business.

Computer folk scare me, as they all get in a big room & argue about design & cost me a fortune. Keep the system going! Whether this means performance tuning (without re-architecting) or throwing more hardware at it, it's only a priority if it stops working.

As an IT consultant: Your system is legacy and has hidden operational costs. We can design a system that's right for you, that will scale and provide a platform for future growth & strategic advantage. Sign here & all your dreams will come true.

As an IT employee: I can be the superhero here & save the company by averting an imminent disaster and optimizing the hell out of this thing! My manager will shower me with gifts & praise, as I'll have saved the company thousands.

Nick Kavadias
2

Well... this is a while ago now, but I thought I'd record the outcome here.

In the end, I stepped through the VBA line by line to deal with another problem. It was then that I realised that some calls to fetch rowsets were blocking for 20-30+ seconds.

When I dug down into them, I found that the rowset was based on an MS Access query.

That was selecting data from another Access query.

That was selecting data from yet another Access query.

All of which looked like they'd been dragged and dropped together using the query designer.

I went through the top half dozen user pain points and found that without fail they were all exactly the same as this.

So I removed the stacks of chained queries entirely, and replaced each of them with a single pass-through query which could be written in T-SQL and executed directly on the server.
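To give a flavour of it (the real schema is long gone, so the names below are made up), each stack of saved queries collapsed down to a single server-side statement along these lines:

    -- One pass-through statement replacing a stack of nested Access queries
    -- (hypothetical tables and columns)
    SELECT c.ClientID, c.ClientName, d.DealRef, ca.CallDate, ca.Outcome
    FROM dbo.Clients AS c
    INNER JOIN dbo.Deals AS d ON d.ClientID = c.ClientID
    INNER JOIN dbo.Calls AS ca ON ca.DealID = d.DealID
    WHERE ca.CallDate >= DATEADD(day, -7, GETDATE());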

The improvement was absolutely vast in every case, without fail, and there was no more waiting on queries for anyone.

And then I left the company. No idea whether it's still there... but I don't miss it.

tomfanning
  • There is nothing inherently wrong with nested queries. And, in fact, what really matters is not what's in the source QueryDefs in Access, but what Jet/ACE ends up sending off to the Server, which you can find out by using SQL Profiler. Yes, it's possible to write bad queries in Access that are inefficient and slow things down, but that's possible in every database! – David W. Fenton Aug 30 '10 at 19:30
2

I say do both.

Right now you're at 40% or so CPU, you said? Are your users complaining (a lot) yet? If not, you still have breathing room. More memory might be just enough to do it for a while.

As for which way to go: do you have in-house software developers? If the answer is NO, then don't even attempt to redo it. You'll end up exactly where you are now.

Assuming you do have in-house developers, do they have the ability to do a project properly? I'm talking spec'd out fully, with a proper (realistic) timeline, basically the same as if it were a customer's project. If not, then don't bother, or it will end up back where you are now.

Until companies realize they are also clients of themselves, and need to give the same resources to internal projects, you'll end up exactly where you are now. Been there, done that, got an entire dresser of t-shirts.

So if you can't do it properly, your two choices are an out-of-the-box turnkey system, which your staff will hate because they'll now have to fit the mould of the system you buy, or a customizable one, in which case you'll still have to spend PROJECT time customizing it.

OR refactor what you do have. Remember, people will expect all the same functionality when the new one comes in, which is why, any other way, you have to do everything at once. If you refactor it, you have a chance to figure out how it works and then, rather than making ad-hoc changes, plan it out as many small sub-projects.

Without seeing the system, I'd likely look at normalizing as much as I can in the back-end and moving as much of the SQL as possible into stored procedures. Then build a new front end, either in C# WinForms or as a web app. If you can get your business logic and SQL out of the front end, it will be easier to redo it later. By keeping what you do to small projects, if the work gets pushed aside or stopped at any point, you'll still have made progress that will be used.
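As a rough sketch of what "moving the SQL into stored procedures" means in practice (the procedure, table and column names here are hypothetical):

    -- Server-side procedure the front end calls
    -- instead of embedding the SELECT in VBA (hypothetical names)
    CREATE PROCEDURE dbo.usp_GetClientCalls
        @ClientID int
    AS
    SET NOCOUNT ON;

    SELECT CallID, CallDate, Notes
    FROM dbo.Calls
    WHERE ClientID = @ClientID
    ORDER BY CallDate DESC;
    GO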

SpaceManSpiff
2

Some good replies here already, but might I just point out that (assuming there are in-house developers) a relatively small amount of work will have a big impact: add primary keys (you shouldn't even have to change all of your queries to use them), add indexes to fields you do use, and tune your queries a bit, and you could see an absolutely huge increase. Buy yourself some RAM for it now to get the time and headroom you need to fix it, and then get to work.

On the subject of "fix it or ditch it" , if the system's features basically work for you and do what you need, don't rewrite the wheel. If your users are having to do gymnastics to use the thing because it does not fit your needs, then no point putting effort in to it.

Kyle
1

I'm posting a separate answer instead of just appending to Dayton's answer because there's a cost that's not being taken into account by the first few people to post an answer: The cost of retraining users and the cost of changing your business procedures to fit into a new software program. Otherwise, what he said.

One of the main reasons that companies develop their own software is that they have business procedures that don't match something that's on the market. Which is great -- a company's individual business procedures are a significant part of the value that it brings to the table, and ARE the competitive advantage that it has over the rest of its market. To replace the software with something generic would require that you retrain your people and sacrifice the competitive advantage, or you'd have to customize the solution to match your business processes. Both are expensive and time-consuming. As a business consultant as well as a sysadmin, I've seen these costs kill small companies by themselves.

From your statements, it looks like you're pretty much processor/software bound. I'd do two things: add indexes (within limits), especially to columns that don't currently have them, and chuck the fastest set of processors you can at it, because that's where you appear to be bound given that you don't have many drive reads except at peak.

(Also, I'd upgrade the server edition as far as possible -- by the time you get this into place, Access 2000 and SQL Server 2000 will be ten years old. That's OLD in computer years!)

Karl Katzke
1

This needs a total restructuring (re-architecting). Rebuild the system from the ground up; it will save you a lot in the long run (maintenance overhead). In the meantime, chuck hardware at it. I think this question is more of a business case than a technical inquiry. Technically, the answer is an outright "chuck more power at it". Business-wise, build a new system!

MarlonRibunal
1

Technical answer:

You've got a number of suggestions stating that primary keys and indexing should be thoroughly reviewed. Access also really likes having a SQL Server timestamp (aka rowversion) column on each table, as this greatly reduces the time Access spends deciding whether a record has changed when it comes to updating records.
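Adding one is cheap if you want to try it. The table name below is hypothetical, and note that a table can only have one timestamp column:

    -- A timestamp/rowversion column per table helps Access detect changes cheaply
    ALTER TABLE dbo.Clients ADD RowVer timestamp;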

Business answer:

A new CRM solution means a lot of work training folks, and you'll never end up with a system that exactly suits your business requirements.

I'd find a good Access person who is also very knowledgeable about SQL Server and get them to spend 3 or 6 months normalizing the tables and fixing up user pain points. Ensure that person works on the same floors as your users, although in a quiet space, and is accessible. Although not too accessible; developers don't like interruptions.

Tony Toews
  • What he said -- the most bang for the buck, in my opinion, is in adding the indexes first, then throwing hardware at it, and then getting an expert-level Access developer from outside to analyse the app and figure out what the bottlenecks are. It might very well be something very simple, but my bet is that you could get the work done by the Access guru for about what the high-end server hardware will cost (or less). – David W. Fenton Aug 30 '10 at 19:34
0

Based on the given information, I would replace the system, preferably with another CRM that allows flexible integration with other systems (the ones that would comprise your ERP/IS).

The trickiest part will be convincing management and the users that they need an upgrade.

Management

Express your concerns over the technical issues, low performance, fear of Byzantine failures, etc. Then present 2 alternative CRMs. Talk about business integration, the overall ERP strategy for the business, and most importantly how this will make employees more productive and profitable. Use case studies. Make this no longer than 15 minutes (unless they want more info). Then you need to convince the users.

Users

Training plans (which a vendor may supply [and management would need to endorse]), continued communication with the top 20% of your users (the power users, the ones who cause trouble), and a solid commitment to keep the system 100% operational for the first month (the first month will make or break any new IS).

There are a lot of CRM products out there, pick one that best suits your business needs.

Joseph Kern
0

Throwing hardware at it only encourages more poor design and management, until the system is so pathetic it won't run well even on the latest and greatest hardware. It's time to look at a better implementation. First, analyse what is required. Only when you fully understand the requirements can you begin looking at the best way to implement them. Once you are sure you understand the requirements, start looking at whether it is better/more cost-effective to modify what you have or to start from scratch, possibly with something completely different.

John Gardeniers
0

In the long run you would probably be best off redoing the database. Throwing more hardware at it will solve your issue for a little while, but if you keep using it as-is you're going to end up having to throw yet more hardware at it before long.

Typically when there are performance issues you look at things like I/O bottlenecks on hard disks/RAID, sharding the database, etc., but for things like sharding you need the database to be properly designed to take advantage of it. From the sounds of it, your application is never going to be able to scale.

Redoing the database and the front-end software to better reflect your current business needs will serve you better in the long run. Your users will thank you, your hardware will last longer, and it will save a lot more money than throwing gargantuan hardware at the issue.

Bart Silverstrim
0

Given your description, creation of an entirely new bespoke system is pointless - you're going to end up right where you started, or perhaps even worse off than you are now. So unless you can convince someone to buy a third-party solution, your best bet is to refactor as best you can, and throw hardware at it.

I'd say that you should do two things: 1) Performance analysis on the SQL Server. You seem to have identified the server side as the source of the lags, so now you need to know which queries are lagging and why. In all likelihood, you can find some hot-spot queries to optimize that would give you big benefits. Heck, if you've got clients that update every few seconds, see if you can turn down their update rate (does the list on the screen REALLY need to update every 5 seconds? Would 30 be OK? If not, how about 15?). Dumb stuff like increasing refresh timers could save you a lot of overhead, if you can get away with it.

2) Throw more hardware at it. ESPECIALLY throw massive amounts of ram at it. You want so much memory that the database fits entirely in RAM. Keep an eye on your OS and software versions (there are apparently quite a lot of rules with versions of Windows and what hardware they actually support). If you can convince yourself that more cores would help, then throw as many CPUs and cores as you can get at it as well.

Michael Kohne
0

You've mentioned RAM and processors, but not disks. I admit it's been almost a decade since I've dealt with MS's SQL Server, but it's bound to the disks just as much as any other database, if it can't fit everything in memory.

I'd probably first try to fill the machine with as much RAM as it can take -- then I'd make sure the logs are being written to disks that are not being used for tables or indexes. I'd then try to make sure that no tables have their indexes on the same spindle.
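On SQL Server 2000, moving a user database's log file to another spindle generally means a detach/attach; the database name and paths below are hypothetical, and take a backup first:

    -- Make sure nobody is connected, then detach
    EXEC sp_detach_db 'CRM';

    -- Copy D:\Data\CRM_log.ldf to E:\Logs\ at the OS level, then re-attach
    EXEC sp_attach_db 'CRM',
        'D:\Data\CRM.mdf',
        'E:\Logs\CRM_log.ldf';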

After that, I'd worry about database level tuning, but with that, you're going to need to define your tests and an optimization goal so that you don't break things or make queries worse. Although you said that the primary keys didn't show much advantage in your test database, you should look at your testing methodology -- primary keys can allow some databases (not sure if MS SQL is one of 'em) to use record level locking, rather than table level locks, which can reduce contention that might not be shown in testing with only a few users.

Joe H.
0

First, I agree with Kyle Hodgson: add PKs. It is cheap (time only) and you might see a boost in your queries. What about indexes on the join columns in your top 10 ugliest queries? Where are the table scans?
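One quick way to find them, without setting up a Profiler trace, is to run a suspect query in Query Analyzer with the standard SET options below (nothing here is specific to this database):

    -- Option 1: execute the suspect query and see logical/physical reads per table
    SET STATISTICS IO ON;
    -- ...run the query here...
    SET STATISTICS IO OFF;

    -- Option 2: show the plan text instead of executing
    -- (look for 'Table Scan' operators); remember to turn it off afterwards
    SET SHOWPLAN_TEXT ON;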

Second, what about trimming data in the database? Are more rows being returned in the queries than are really needed? I also agree with Kyle's suggestion of RAM (two more GB).

Put all of this into your write-up (per Joseph Kern) on what you propose for the interim while mapping out the future. Ask management and users what happens to the organization if the current CRM app craters and isn't available. Perhaps that will help get them thinking about the future.

jl.
0

Get the hardware!

Not only is tin very cheap at the moment, but if you go for a Xeon 55xx-series chip it'll scream through anything you can throw at it.

It's just a risk/reward thing - you can spend money and lots of time making the DB better or buy your way there quicker and cheaper.

Chopper3
0

Consider flipping the /3GB memory switch and adding another 2GB of memory.

Sam