Hello-
FYI. The system in question is on SQL Server 7.0/sp4.
We have a strange problem. We are supporting a vertical solution that was originally written for a mainframe and was later ported to a SQL Server environment (that's what I have been told). The problem with that environment is that it uses a DELETE/INSERT combination for making changes in the database instead of UPDATE statements. The result is that the database size shoots up from the actual data size of 57+ GB (now) to ~100 GB by the end of the week. Some nightly scheduled processes that run on weeknights cause this growth.
To bring the database size back down, I rebuild indexes and run some other internal maintenance DBCCs on a weekly basis. Everything was working fine until last week. Our internal development section also creates some tables (permitted by the vendor as part of the agreement). Previously there were ~1650 user tables in the database, but this morning I found 1840 (last night they were around the 1800 mark, although I cannot recall the exact count). The drive holding the data device is 135 GB (it always has 40-45 GB of empty space), while the drive holding the log device has at least 30+ GB free. I mention this because I have observed that during index rebuilding the database size initially increases by around 10 GB before decreasing to 57 GB (perhaps used for some temporary space allocation).
For two weeks in a row, the process ran for over 12 hours without completing. The scheduled task has two steps, and the second step (the DBCC checks) never ran in either case. Last time, I did some investigation. I found one table with a structural problem that I fixed with DBCC CHECKTABLE using the REPAIR_FAST option (after that, DBCC DBREINDEX ran against it successfully). However, there were some other tables where the process got stuck as well; when I stopped the process and rebuilt their indexes separately, they worked fine. (FYI, my script uses a cursor that for now selects all user tables and rebuilds them; I plan to change it to rebuild only those with fragmentation at or above some threshold.)
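For reference, the rebuild loop looks roughly like this (a simplified sketch only; the real script differs, and filtering and error handling are omitted here):

```sql
-- Simplified sketch: rebuild all indexes on every user table via a cursor.
-- Runs on SQL Server 7.0; no fragmentation filtering or error handling.
DECLARE @table sysname

DECLARE tbl_cur CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U' ORDER BY name

OPEN tbl_cur
FETCH NEXT FROM tbl_cur INTO @table
WHILE @@FETCH_STATUS = 0
BEGIN
    -- '' = all indexes on the table, 0 = keep the existing fill factor
    EXEC ('DBCC DBREINDEX (''' + @table + ''', '''', 0)')
    FETCH NEXT FROM tbl_cur INTO @table
END
CLOSE tbl_cur
DEALLOCATE tbl_cur
```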
Thanks for being patient enough to read this long text. I tried to be thorough and provide as much information as possible to give you a clearer picture. I have some questions (with both short- and long-term implications).
1. Is there any correlation between CURSOR execution (as a process) and the number of tables (or data size)? Can there be some memory problems?
2. If I decide to reduce the number of affected tables by choosing only those with fragmentation at or above some threshold level, what should that fragmentation level be (10%, 20%, or something different)? How should I capture this information... from the 'Scan Density' value in DBCC SHOWCONTIG?
3. I have been told that SQL Server 2000 handles fragmentation issues much better than SQL Server 7.0. Is this correct? And if so, is there any technical documentation stating this that I can show to my top management to expedite the upgrade? I have searched but so far have not found anything.
4. Assuming the answer to the first part of #3 is negative, how good is DBCC INDEXDEFRAG compared to DBCC DBREINDEX for badly fragmented tables? I plan to use it once we move to a SQL 2000 environment sometime this year, because the database also supports the web and there have been user complaints of timeout problems while the indexes are rebuilt (maybe due to DBCC DBREINDEX taking the table offline while rebuilding indexes).
Thanks again for taking an interest in this case!
MZeeshan> 1. Is there any correlation between CURSOR execution (as a process) and
number of tables (or data size). Can there be some memory problems?
Make sure you use a STATIC cursor, as the cursor will be open for quite some time.
> 2. If I decide to reduce the number of affected tables by choosing only
those with fragmentation above or equal to some threshold level, what will
be that fragmentation level (10%, 20% or something different)? How should I
capture this information... from DBCC SHOWCONTIG, 'Scan Density' parameter?
How much fragmentation to act on depends on how you use the table. You want
to look at Logical Fragmentation if you have more than 1 file in the
filegroup. Unfortunately there is no easy way to get the results of
ShowContig into a table like you can with SQL 2000.
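For what it's worth, on SQL 2000 the pattern looks like this (a sketch; the column list follows the SQL 2000 Books Online description of WITH TABLERESULTS, so verify it against your build, and the 10% threshold is just an example):

```sql
-- SQL Server 2000 only: capture DBCC SHOWCONTIG output in a table
-- so fragmented indexes can be selected with a WHERE clause.
CREATE TABLE #frag (
    ObjectName sysname NULL, ObjectId int NULL, IndexName sysname NULL,
    IndexId int NULL, [Level] int NULL, Pages int NULL, [Rows] int NULL,
    MinimumRecordSize int NULL, MaximumRecordSize int NULL,
    AverageRecordSize int NULL, ForwardedRecords int NULL,
    Extents int NULL, ExtentSwitches int NULL, AverageFreeBytes int NULL,
    AveragePageDensity decimal(9,2) NULL, ScanDensity decimal(9,2) NULL,
    BestCount int NULL, ActualCount int NULL,
    LogicalFragmentation decimal(9,2) NULL,
    ExtentFragmentation decimal(9,2) NULL
)

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

-- Only act on indexes at or above a chosen threshold, e.g. 10%
SELECT ObjectName, IndexName, LogicalFragmentation
FROM #frag
WHERE LogicalFragmentation >= 10.0
ORDER BY LogicalFragmentation DESC
```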
> 3. I have been told that SQL Server 2000 handles fragmentation issues much
better than SQL Server 7.0? Is this information correct. And if yes, can
there be some technical information available listing this that I can show
to my top management to expedite the process? I searched but so far have not
found anything on this.
> 4. Assuming answer of the first part of #3 is negative, how good is DBCC
INDEXDEFRAG compared to DBCC REINDEX for badly fragmented tables? I plan to
use this as we'll move to SQL 2000 environment sometime this year because the
database supports web also and there have been user complaints of time out
problems while the indexes are rebuilt (may be due to DBCC REINDEX taking the
table offline while rebuilding indexes).
2000 gives more options but doesn't necessarily handle it any better or
worse given the same command for both 7.0 and 2000. Here is a link that
explains very well fragmentation in general and the difference between the
two.
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/Optimize/SS2KIDBP.asp
Make sure you always have plenty of free space in both the log file and
database files at all times. Never shrink the db after you do maintenance
on it as you will just fragment the tables again.
Andrew J. Kelly
SQL Server MVP
Hi,
1) I don't know.
2) Use the 'Logical fragmentation' result from DBCC SHOWCONTIG. As a rule of thumb I'd go with 10-20%, but it all depends on your access patterns and how much increasing fragmentation correlates to decreasing throughput on your system.
3) Yes, SQL Server 2000 has better tools to detect and correct fragmentation than 7.0. See the whitepaper at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/optimize/ss2kidbp.asp for the details you want.
4) I wrote the DBCC INDEXDEFRAG command for SQL Server 2000 specifically to provide an 'online' alternative to rebuilding indexes. However, you should be aware that in doing this I made several tradeoffs - it usually takes longer than rebuilding an index and can generate more log. See the whitepaper for full details. The timeout problems your users are complaining about are exactly because an index is offline while it is being rebuilt. If a clustered index is rebuilt, the entire table is offline for the duration.
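To make the contrast concrete, the two commands look like this (database, table, and index names here are illustrative, not from the original system):

```sql
-- Offline: rebuilds the index from scratch; access is blocked while it
-- runs (the whole table, if the index is clustered). 0 = keep fill factor.
DBCC DBREINDEX ('dbo.Orders', 'PK_Orders', 0)

-- Online (SQL Server 2000 only): compacts and reorders leaf pages in
-- place; the index stays available, but it is usually slower than a
-- rebuild and can generate more transaction log.
DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 'PK_Orders')
```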
Please let me know if you have any further questions.
Regards,
Paul.
-- Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
Thank you very much for the information. I'll post further questions if needed, but before that I'll read the document, as it seems to provide useful information.
I appreciate your help and Happy New Year!
MZeeshan