does anyone have any data on performance increase due to separating
data and indexes - each on a different physical drive? thanks,
arthur

Something like that depends on a lot of factors and is hard to quantify as
such. I don't think you will find a white paper or anything official along
those lines because there are so many variables. Do you have a
specific question in mind?
--
Andrew J. Kelly SQL MVP
"arthur" <alangham@.gmail.com> wrote in message
news:1109187034.716627.83110@.g14g2000cwa.googlegroups.com...
> does anyone have any data on performance increase due to separating
> data and indexes - each on a different physical drive? thanks,
> arthur
>

well, we've got several huge tables, and queries against them that are
suffering from poor performance. both data and indexes are on a single
raid 0+1 drive. we've got several bays left in our disk array, and i'm
wondering if moving the indexes to their own raid 0+1 or raid 5 drive
will increase performance (in oracle this is a no-brainer). you can
assume that the queries and indexes themselves are optimized.

also, these huge tables have both clustered and non-clustered indexes
on them, and most of the slow running queries involve joins on
non-clustered indexes.

you could consider moving individual tables and indexes to their own
drives\filegroups to improve performance. But you would want the filegroup
to be on the same server (not on a different machine somewhere).
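Something along these lines is the general idea (untested sketch - the
database name, file path, table and index names are just placeholders):

-- add a filegroup backed by a file on the separate physical drive
ALTER DATABASE MyDb ADD FILEGROUP INDEXES_FG
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_Indexes', FILENAME = 'F:\Data\MyDb_Indexes.ndf', SIZE = 1GB)
    TO FILEGROUP INDEXES_FG

-- rebuild an existing nonclustered index onto the new filegroup
CREATE NONCLUSTERED INDEX IX_Members_LastName
    ON dbo.Members (last_name)
    WITH DROP_EXISTING
    ON INDEXES_FG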
how many disks (Physical) are in the 0+1 array ?
How much Write cache does your RAID Controller have ?
You do not want to go to a RAID 5 volume. The performance penalty in
comparison to 0+1 would kill you
Greg Jackson
PDX, Oregon

Have you done any research to see why they are slow? What do the query
plans look like? Are you scanning the indexes or doing seeks? Have you tried
tuning the queries and/or indexes? Yes it is possible that if you split
some of the indexes onto a different drive array you can get better
performance if you are maxing out the current array or channel. If you are
then you should see why that is. Have you looked at the disk counters to
see what is going on as well?
--
Andrew J. Kelly SQL MVP
"arthur" <alangham@.gmail.com> wrote in message
news:1109194461.051461.129970@.l41g2000cwc.googlegroups.com...
> also, these huge tables have both clustered and non-clustered indexes
> on them, and most of the slow running queries involve joins on
> non-clustered indexes.
>

> Have you looked at the disk counters to
> see what is going on as well?
Right... unless you are using old IDE disks held together with baling wire, it
is doubtful that disk is your #1 bottleneck against a nc index... you might
be able to squeeze a little more performance by moving some things to
different disks, but that will be a lot of effort for little gain in my
experience, and certainly not a silver bullet solution.

1. 4 physical disks on the 0+1 array.
2. 256MB read/write cache that's set to 50/50. it was at 90/10, but
we're playing around w/ the settings.
3. is that a performance penalty in reads or writes? according to
http://www.acnc.com/04_01_05.html, raid 5 has the highest read
transaction rate.
4. the queries are slow because of bad database, application, and query
design. we don't have the resources, however, to spend on fixing these.
we need a quick solution that will keep performance at an acceptable
level while we focus our resources on our application rewrite.
5. the query plans show a lot of index scans which are unavoidable. our
users can search on members w/ conditions like, last_name = 'd%'. our
primary key is not on last_name, hence the index scan.
6. yes, we've tuned the queries and indexes as much as possible
(without modifying table structure and application code).
7. yes, we routinely check several performance counters, including
disk counters, and i/o isn't much of an issue.

see notes below:
> 1. 4 physical disks on the 0+1 array.
> 2. 256MB read/write cache that's set to 50/50. it was at 90/10, but
> we're playing around w/ the settings.
Change this to 100%. As Andrew pointed out to me last week, SQL Server
Caches Reads anyway. When I changed my controller settings to 100% Writes,
it boosted IO significantly.
> 3. is that a performance penalty in reads or writes? according to
> http://www.acnc.com/04_01_05.html, raid 5 has the highest read
> transaction rate.
Penalty on Writes
> 4. the queries are slow because of bad database, application, and query
> design. we don't have the resources, however, to spend on fixing these.
> we need a quick solution that will keep performance at an acceptable
> level while we focus our resources on our application rewrite.
so throw hardware at this bad boy eh ?
> 5. the query plans show a lot of index scans which are unavoidable. our
> users can search on members w/ conditions like, last_name = 'd%'. our
> primary key is not on last_name, hence the index scan.
You need AN Index on Lastname. Even with the Wildcard, the index will be
used. Does not have to be a clustered index and does not have to be the PKey.
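For example (just a sketch - the table and column names here are invented):

-- a plain nonclustered index on the searched column
CREATE NONCLUSTERED INDEX IX_Members_LastName ON dbo.Members (last_name)

-- a leading-character wildcard like this can still seek on that index
SELECT member_id, first_name, last_name
FROM dbo.Members
WHERE last_name LIKE 'd%'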
> 6. yes, we've tuned the queries and indexes as much as possible
> (without modifying table structure and application code).
Are you sure you can't improve the index usage at all? When you say you are
seeing scans, that to me means there IS room for improvement.
> 7. yes, we routinely check several performance counters, including
> disk counters, and i/o isn't much of an issue.
IO is not much of an issue? You don't see any Disk Queueing? With the
Scans, I would guess you are seeing some queueing.
Greg Jackson
PDX, Oregon

I agree with everything Greg stated here. If I/O isn't an issue then why do
you think adding more drives will help? If it is an issue then the scans
are most likely the cause. Even if you have to add more indexes than you
want it may get you through the tough times until you can spend time fixing
everything. Any chance you can add a covering index? There are some things
that throwing hardware at simply cannot fix unless you can add enough RAM
and CPUs to overcome the scans.
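For instance, something like this (rough sketch - in SQL Server 2000 a
covering index is simply a composite index that lists every column the query
touches, and the table and column names below are made up):

-- covers a search on last_name that only returns these three columns
CREATE NONCLUSTERED INDEX IX_Members_LastName_Cover
    ON dbo.Members (last_name, first_name, member_id)

-- this query can then be answered from the index pages alone
SELECT member_id, first_name, last_name
FROM dbo.Members
WHERE last_name LIKE 'd%'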
--
Andrew J. Kelly SQL MVP
"pdxJaxon" <GregoryAJackson@.Hotmail.com> wrote in message
news:%23yw3smfGFHA.2416@.TK2MSFTNGP14.phx.gbl...
> see notes below:
>
>> 1. 4 physical disks on the 0+1 array.
>> 2. 256MB read/write cache that's set to 50/50. it was at 90/10, but
>> we're playing around w/ the settings.
> Change this to 100%. As Andrew pointed out to me last week, SQL Server
> Caches Reads anyway. When I changed my controller settings to 100% Writes,
> it boosted IO significantly.
>
>> 3. is that a performance penalty in reads or writes? according to
>> http://www.acnc.com/04_01_05.html, raid 5 has the highest read
>> transaction rate.
> Penalty on Writes
>> 4. the queries are slow because of bad database, application, and query
>> design. we don't have the resources, however, to spend on fixing these.
>> we need a quick solution that will keep performance at an acceptable
>> level while we focus our resources on our application rewrite.
> so throw hardware at this bad boy eh ?
>> 5. the query plans show a lot of index scans which are unavoidable. our
>> users can search on members w/ conditions like, last_name = 'd%'. our
>> primary key is not on last_name, hence the index scan.
> You need AN Index on Lastname. Even with the Wildcard, the index will be
> used. Does not have to be a clustered index and does not have to be the
> PKey
>
>> 6. yes, we've tuned the queries and indexes as much as possible
>> (without modifying table structure and application code).
> Are you sure you can't improve the index usage at all? When you say you
> are seeing scans, that to me means there IS room for improvement.
>
>> 7. yes, we routinely check several performance counters, including
>> disk counters, and i/o isn't much of an issue.
> IO is not much of an issue? You don't see any Disk Queueing? With the
> Scans, I would guess you are seeing some queueing.
>
> Greg Jackson
> PDX, Oregon
>

These gurus covered it all. I can't agree more about the indexes. If you
know which fields are being searched, especially frequently, add indexes if
they're not already there. You might get a little boost out of your
existing indexes by running UPDATE STATISTICS and DBCC INDEXDEFRAG on your
tables and indexes also.

Just make sure you do it in this order DBCC INDEXDEFRAG and then UPDATE
STATISTICS<g>.
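Roughly like this (sketch - the database, table and index names are
placeholders):

-- defragment the index first...
DBCC INDEXDEFRAG (MyDb, Members, IX_Members_LastName)

-- ...then refresh the statistics the optimizer relies on
UPDATE STATISTICS dbo.Members WITH FULLSCAN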
--
Andrew J. Kelly SQL MVP
"Michael C#" <xyz@.abcdef.com> wrote in message
news:JTbTd.19988$8m4.18245@.fe08.lga...
> These gurus covered it all. I can't agree more about the indexes. If you
> know which fields are being searched, especially frequently, add indexes
> if they're not already there. You might get a little boost out of your
> existing indexes by running UPDATE STATISTICS and DBCC INDEXDEFRAG on your
> tables and indexes also.
>

Good point :) Thx
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23Vmg7mnGFHA.2744@.tk2msftngp13.phx.gbl...
> Just make sure you do it in this order DBCC INDEXDEFRAG and then UPDATE
> STATISTICS<g>.
> --
> Andrew J. Kelly SQL MVP
>
> "Michael C#" <xyz@.abcdef.com> wrote in message
> news:JTbTd.19988$8m4.18245@.fe08.lga...
>> These gurus covered it all. I can't agree more about the indexes. If
>> you know which fields are being searched, especially frequently, add
>> indexes if they're not already there. You might get a little boost out
>> of your existing indexes by running UPDATE STATISTICS and DBCC
>> INDEXDEFRAG on your tables and indexes also.
>

> unless you can add enough RAM and CPUs to overcome the scans.
exactly! again, part of the problem w/ scans is that users can search
(for example on a member) via any of a number of fields, each of which
has an associated non-clustered index. if they search for a particular
user, e.g. where last_name = 'smith', then performance is fine,
however, when they do, where last_name like 's%', performance degrades.
some of this is a training issue, but still.
also, we've got a job that runs indexdefrag nightly.
anyway, i ran some tests last night by creating a copy of one of our
large tables and putting the associated indexes on a separate drive
(raid 5). i ran a number of (select) queries against both the large
table and copy of large table (issuing dbcc freeproccache and
dropcleanbuffers before each run), and saw a performance difference of
0 to 27% depending on the complexity of the query.
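fwiw, each timed run looked roughly like this (the query below is only a
placeholder, not one of our real ones):

-- clear the plan and data caches so every run starts cold
dbcc freeproccache
dbcc dropcleanbuffers
go

-- then time the query under test
set statistics time on
select member_id, last_name from dbo.members where last_name like 's%'
set statistics time off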
thanks for all your help so far on this!

> user, e.g. where last_name = 'smith', then performance is fine,
> however, when they do, where last_name like 's%', performance degrades.
Actually, this isn't always true. When you use LIKE 's%' there is a very
good chance that the index will be utilized; however, when you use LIKE
'%s...' that is when you will likely see a change in the plan.
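In other words (sketch - the names are made up):

-- leading characters known: the optimizer can seek on an index over last_name
SELECT member_id FROM dbo.Members WHERE last_name LIKE 's%'

-- leading wildcard: every key must be examined, so expect a scan
SELECT member_id FROM dbo.Members WHERE last_name LIKE '%son'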
A

fyi: here's an article that suggests splitting data and indexes:
http://www.informit.com/guides/content.asp?g=sqlserver&seqNum=42

Don't forget to UPDATE STATISTICS also. The select ... where last_name like
's%' query should be able to take advantage of your indexes in its query
plan. You might try running the Index Tuning Wizard from Enterprise Manager
to help optimize your indexes. Based on the scans, it sounds like your
indexes could use some optimization.
"arthur" <alangham@.gmail.com> wrote in message
news:1109265280.958546.16260@.f14g2000cwb.googlegroups.com...
>> unless you can add enough RAM and CPUs to overcome the scans.
> exactly! again, part of the problem w/ scans is that users can search
> (for example on a member) via any of a number of fields, each of which
> has an associated non-clustered index. if they search for a particular
> user, e.g. where last_name = 'smith', then performance is fine,
> however, when they do, where last_name like 's%', performance degrades.
> some of this is a training issue, but still.
> also, we've got a job that runs indexdefrag nightly.
> anyway, i ran some tests last night by creating a copy of one of our
> large tables and putting the associated indexes on a separate drive
> (raid 5). i ran a number of (select) queries against both the large
> table and copy of large table (issuing dbcc freeproccache and
> dropcleanbuffers before each run), and saw a performance difference of
> 0 to 27% depending on the complexity of the query.
> thanks for all your help so far on this!
>