Ideal SQL Server 2000 hardware configuration

Gojira54
The folly of Man
Joined: Apr 22, 2002 · Messages: 2,240 · Location: Monster Island black-sand beaches
What is the ideal hardware configuration for SQL Server 2000? We have about two dozen databases, ranging in size from 500 MB up to 90 GB. All of them get very heavy traffic from multiple users of the same app. There are about 8 reads for every write, averaging about 8 million writes per 24-hour period.
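For a rough sense of scale, those figures work out to fairly modest *average* I/O rates (peak rates during the busy hours will be much higher). A quick back-of-envelope sketch:

```python
# Back-of-envelope load figures from the numbers above.
# Assumes the 8:1 read/write ratio and 8 million writes/day are accurate.
writes_per_day = 8_000_000
reads_per_day = writes_per_day * 8          # 8 reads for every write

seconds_per_day = 24 * 60 * 60
avg_writes_per_sec = writes_per_day / seconds_per_day
avg_ops_per_sec = (writes_per_day + reads_per_day) / seconds_per_day

print(f"avg writes/sec: {avg_writes_per_sec:.0f}")    # ~93
print(f"avg total ops/sec: {avg_ops_per_sec:.0f}")    # ~833
```

Averages like these hide the shape of the load; if 95% of traffic lands in a 10-hour window, the working rates are more than double these numbers.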

Currently we have these databases split across 2 servers, each with four 2.4 GHz hyperthreaded processors and 8 GB of RAM. The disks are 10,000 RPM SCSI, all in one very large physical disk array split into 2 logical drives per server. Logs and data for multiple databases are all sharing the same array.

Performance is becoming a pressing issue. Our network administrator is 100% convinced that the hardware configuration is not an issue, and I am 100% convinced it is at least a major part of the issue. Does anyone here feel that it may be the issue?
 
You might check out www.tek-tips.com for SQL forums; your question is a little high-end for this forum.

I'd say, though, that it could be the drive array itself (response time under that much read/write), the I/O path to the drives (is it a NAS, or attached to both servers via SCSI?), or possibly the network traffic to the servers.

If your network admin is so sure, have him run a Performance Monitor session that looks at the most likely bottleneck counters. The only way for him to be sure that hardware is not an issue is to have already looked at the data; logic and gut instinct are both fallible for something like this (and I know - I'm a network admin).
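As a sketch of what "looking at the counters" means: below is a toy triage of some standard PerfMon counters against common rules of thumb (e.g. sustained average disk queue length above ~2 per spindle suggests disk saturation). The sample readings and the spindle count are hypothetical; substitute real logged values.

```python
# Toy triage of PerfMon counter readings against common rules of thumb.
# Sample values and SPINDLES are assumptions for illustration only.
SPINDLES = 8  # disks in the array (hypothetical)

samples = {
    r"PhysicalDisk\Avg. Disk Queue Length": 22.0,
    r"PhysicalDisk\% Disk Time": 97.0,
    r"Memory\Pages/sec": 15.0,
    r"Processor\% Processor Time": 45.0,
}

def flags(s):
    """Return a list of likely bottlenecks suggested by the readings."""
    out = []
    if s[r"PhysicalDisk\Avg. Disk Queue Length"] > 2 * SPINDLES:
        out.append("disk queue saturated")
    if s[r"PhysicalDisk\% Disk Time"] > 90:
        out.append("disks busy >90% of the time")
    if s[r"Memory\Pages/sec"] > 100:
        out.append("heavy paging")
    if s[r"Processor\% Processor Time"] > 80:
        out.append("CPU-bound")
    return out

print(flags(samples))  # ['disk queue saturated', 'disks busy >90% of the time']
```

The point isn't the exact thresholds (reasonable people quote different ones); it's that the verdict should come from logged counter data over a busy period, not from instinct.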
 
Thanks for the response.

I have posted at several tech forums already (although not at the link you posted - thanks for the link! :) ). I am just trying to gather as many opinions as possible from as many sources as possible on this. This forum has a lot of professional tech geeks (don't take offense, I fall into this category!), so I figured someone here should be able to respond.

These are just 2 servers with internal 10k RPM SCSI drive arrays, connected by two direct 100 Mbit cables.

We've both looked at PerfMon at different times and walked away with different perceptions of the problem. He thinks the main app is 100% the problem. I think the app is about 40% (it still needs to be addressed, but that is a long-term job for the programmers), disk problems are about 40%, and I/O contention the remaining 20%.

First off, each server has only 2 logical drives and one physical drive array. One of the first things they teach you in DBA classes is that each database should have 2 physical arrays - one for the data and one for the log. We are pooling 12 databases and their logs onto one physical array. This is tremendously bad.

What I want to do is convert our server-based system to a NAS unit using 15k SCSI drives, with each DB split out per MS's recommendations. I have already been told that is too expensive (which does not make sense considering our profit off of this system is about $1.1 million per day). So my compromise solution is to buy 2 more servers equivalent to the ones we have, split out the drives as specified using external arrays (at least for the 12 REALLY big and active DBs), and upgrade all disks from 10k to 15k. I also think the servers should be connected to each other via three 1 Gbit cables vs. the two 100 Mbit cables (those 2 cables are maxed out during 95% of the busiest 10 hours of the day).
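The rough math behind those two upgrades looks like this. The per-disk IOPS figures are common ballpark numbers for 10k and 15k SCSI drives under random I/O, not measurements, and the spindle count is hypothetical:

```python
# Rough headroom estimate for the proposed upgrades.
# Per-disk IOPS figures are era-typical ballpark numbers (assumptions).
IOPS_10K = 125   # ~10,000 RPM SCSI disk, random I/O
IOPS_15K = 175   # ~15,000 RPM SCSI disk, random I/O

disks = 12       # hypothetical spindle count per array
print(f"10k array: ~{disks * IOPS_10K} IOPS, "
      f"15k array: ~{disks * IOPS_15K} IOPS")

# Link bandwidth: 2 x 100 Mbit/s today vs. 3 x 1 Gbit/s proposed
today_mbps = 2 * 100
proposed_mbps = 3 * 1000
print(f"link capacity: {proposed_mbps / today_mbps:.0f}x today's")
```

Note the disk upgrade alone only buys ~40% more IOPS per spindle; splitting logs onto separate arrays (turning random log I/O into sequential) is where the bigger win usually comes from.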

I agree with the NA that the app is crap too; the 3 Citrix servers the users connect to are maxed out most of the time, and having worked on the program itself I know what a mess it is. But fixing it will take time, and that is not something we have. We need to upgrade the hardware now while the app is being optimized.
 
I haven't done any database work since Oracle 7.3.4 was new, but at first glance, the obvious place to check is the I/O bandwidth. You're trying to drink a bathtub through a straw, and it won't work well like that.

You can run PerfMon and check the I/O stats, but you should really spend a little money and contract a good DBA for a while to give you recommendations and help tune your databases. No offense, but a network admin's primary responsibility is keeping the network up and running, not troubleshooting a database. Get someone in there who is familiar with SQL Server 2000 and get recommendations from them. A logical first step would be to split off the storage into a SAN of some type, such as a Fibre Channel array or a simple gig-attached NAS. Multiple arrays on multiple controllers, no disk overlap between databases, that kind of thing.
 