Friday, March 30, 2012

Index tuning wizard

I'm having trouble getting the above to do what I want. I pulled a trace
table on a 2000 server. There were a couple of thousand of (mostly) stored
proc runs pulled into it. I selected all tables but when it ran it failed
with a rather cryptic "An error occurred trying to run a sql statement".
I'm wondering if the wizard can handle stored procs, especially ones that
execute strings, which the developers rely on heavily here?
CLM,
Try to reduce the size and scope of the workload file. I've had individual
statements cause the ITW to fail in the past.
This may also help in general:
http://www.sql-server-performance.com/index_tuning_wizard_tips.asp
and
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/tunesql.mspx
HTH
Jerry
"CLM" <CLM@.discussions.microsoft.com> wrote in message
news:5D0F2A95-C89D-4558-83CF-D2F540B87CA0@.microsoft.com...
|||But if it can't handle stored procs that execute strings, then I'm not sure
it's worth it. I'd have to build these by hand, and by the time I do that I
can probably analyze it manually.
Thx for the help...
"Jerry Spivey" wrote:
|||On Fri, 26 May 2006 11:10:01 -0700, CLM wrote:
Hi CLM,
You can set up a profiler trace, run some of your SQL, then let the ITW
analyze the output from the trace.
Check BOL to find out what events you have to include in your trace for
the ITW.
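As a later post in this document notes, the tuning trace template captures RPC:Completed and SQL:BatchCompleted, which are the events the ITW needs. An alternative to a trace is a plain workload script; a minimal sketch, with hypothetical database and object names:

```sql
-- Minimal workload script the ITW can consume instead of a trace table.
-- Database and object names below are hypothetical examples.
USE MyDB
GO
SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 'ALFKI'
GO
EXEC dbo.usp_GetOrdersByDate @Start = '20030101', @End = '20030201'
GO
```

Pointing the wizard at a small script like this also helps isolate which individual statement makes the ITW fail.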
--
Hugo Kornelis, SQL Server MVP

Index tuning wizard

Hello!
Is it necessary to put the database I want to run the Index Tuning
Wizard on into single-user mode?
Many thanks in advance.
Christoph
No, just run as normal.
"xoph" wrote:

index tuning wizard

I have a query that looks something like this:
select fields from tableA
select fields from tableB
select fields from tableC
...
I want to try to run the Index Tuning Wizard on all of the queries in the
window. However, whenever I try, the ITW tells me "There are no events in
the workload. Either the trace file contains no sql batch or RPC events or
the sql script contained no sql queries." If I select one line and run the
ITW, it works fine - no matter which line I run. I tried adding GO between
each line but that didn't help.
Any ideas how I can get this to work?
Thanks, Andre
Turns out it was because there was a syntax error in one of the queries.
Even though the parse completed successfully, when I ran all 84 of the
queries, one of them bombed out. I fixed it, and the ITW is working its
little self away.
Thanks for the help.
"Jasper Smith" <jasper_smith9@.hotmail.com> wrote in message
news:u0PefPTVDHA.2248@.TK2MSFTNGP10.phx.gbl...
> What version and service pack are you on? The following works fine for me
> on SQL2000 SP3
> use Northwind
> go
> select * from Customers where Country = 'Mexico'
> select * from Customers where City = 'México D.F.'
> select * from Customers where CustomerID = 'ALFKI'
> If I highlight the above in QA and invoke the ITW it generates
> recommendations
> --
> HTH
> Jasper Smith (SQL Server MVP)
> I support PASS - the definitive, global
> community for SQL Server professionals -
> http://www.sqlpass.org
> "Andre" <AndreGetsEnoughSPAM@.nospam.com> wrote in message
> news:O5mb2LSVDHA.2364@.TK2MSFTNGP09.phx.gbl...
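Andre's culprit, a silent syntax error in one of 84 batches, can be caught up front by compiling the whole workload without executing it. A sketch using SET NOEXEC (the SELECTs stand in for the real queries):

```sql
-- With NOEXEC on, each batch is compiled (and errors reported) but not run.
SET NOEXEC ON
GO
SELECT fields FROM tableA
GO
SELECT fields FROM tableB
GO
SET NOEXEC OFF
GO
```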

Index Tuning Wizard

Hi all,
I'd like to use the Index Tuning Wizard. I created a workload trace file with
SQL Profiler, then I start the wizard but it ends up with a message that there
are no events and queries that can be tuned against the current database.
I used the trace template 'SQLProfilerTuning.tdf', so there are only
RPC:Completed and SQL:BatchCompleted events, which should be OK, but maybe
it is a problem that we use stored procedures for data manipulation; in fact
there is no SQL query called directly from our application, we always call an
SP and pass parameters to it. Is the Index Tuning Wizard limited only to
queries? Or can I get it to work using events for stored procedures?
Any advice appreciated.
eXavier
I haven't used the Index Tuning Wizard much myself, but I
would say that well-normalised tables (using the correct
fields, with the correct indexes) and well-written
procs would be more fruitful than using the wizard.

Index Tuning Wizard

Hi

I tried running ITW against one of our databases, and it came up with a
number of suggestions, but also filled the Application Log with messages
like:-

The description for Event ID ( 4 ) in Source ( ITW ) could not be found. It
contains the following insertion string(s):
Error in Parsing Event:
declare @P1 nvarchar(1)
declare @P2 nvarchar(1)
declare @P3 nvarchar(1)
declare @P4 nvarchar(1)
EXEC "dbo"."sp_get_RAO_indexW2K2" @P1 , @P2 , @P3 , @P4 .

Does anyone know if this is serious (or indeed what it means at all)?

There were no apparent errors in the ITW run.

Chloe Crowder
Hi

This is not a stored procedure I recognise or can find. Are you running ITW
from a script or a profiler trace? If yes, does the procedure exist there?

Also, if you are using procedure names starting with "sp_", SQL Server will
expect them to be in the master database and will check there first, before
the current database, for your stored procedure. By calling your procedure
something other than "sp_" you should improve performance.

John
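John's naming advice, sketched (the usp_ prefix is just a common convention, and the body here is a placeholder):

```sql
-- A name starting with sp_ is looked up in master first.
-- Using another prefix skips that detour:
CREATE PROCEDURE dbo.usp_get_RAO_indexW2K2
    @P1 nvarchar(1), @P2 nvarchar(1), @P3 nvarchar(1), @P4 nvarchar(1)
AS
    SELECT 1  -- placeholder body
GO
```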

"Chloe Crowder" <chloe.crowder@.bl.uk> wrote in message
news:bfrc2o$abe$1@.south.jnrs.ja.net...

Index Tuning Wiz - recommendations page differs from script

Hi, When I look at the script generated from the ITW it often doesn't
have all the recommendations I see on the recommendations page. Is
this normal? The doc on the wizard seems a little thin...
Johnny,
Check out:
http://www.microsoft.com/technet/pr...in/tunesql.mspx
and
http://www.sql-server-performance.c...wizard_tips.asp
and
http://msdn.microsoft.com/library/d...>
_05_0cbo.asp
HTH
Jerry
"Johnny Ruin" <schafer.dave@.gmail.com> wrote in message
news:1128634228.482567.300780@.z14g2000cwz.googlegroups.com...
|||Quite frankly, the 'brain' behind the ITW is quite thin too.
I recommend you do your own analysis: set a benchmark, evaluate indices and
speed, factor in processors, competing resources, SQL cache, etc.
You're more likely to get better efficiency if you tackle the design
yourself (albeit you could use the ITW as a guide, but its scope is very
limited), along with a better understanding of your data and query requirements.

Index tuning wizard, 98% improvement

When I used the Index Tuning Wizard to analyze MSDE, it came up with a
recommendation that says:
If you choose to implement the recommended changes, you should see the
following improvement: 98% improvement in performance based on the sampled
workload!
What is the meaning of 98%? Does it mean I will get 2% (100-98) better
performance, or really twice the performance? Should I implement it?
regards,
Guoqi Zheng
http://www.ureader.com
> What is the meaning of 98%? it means I will get 2% (100-98) better
> performance, or I really twice better performance?
It means that, based upon the sample you gave it, implementing these changes will
result in a performance increase of 98% on that workload.
>Should I implement it?
Officially: yes, of course you should, no questions asked, yes, yes, yes and
yes again.
Unofficially (and I am not writing this, my evil twin brother is): if the speed
of the system has not been commented upon, then you may want to delay it until
it is commented on; that way you have some working room :D
Peter
"guoqi zheng" wrote:


Index Tuning with SQLXML

Is there an easy way to generate indexes based on xpath queries and a schema?
The problem that I am having is that Profiler is not able to capture my
larger queries because the generated query is too long. Because of this I
cannot see the generated SQL code and am unable to decide how to generate
indexes from it. It seems like it would not be difficult (with inside
knowledge of how the queries are generated) to build a tool that suggests
indexes based on XPath queries, however I have been unable to find such a
tool.
Is there another method to retrieve the sql generated from an XPath query
other than SQL Profiler?
Tony
Senior Software Engineer / Network Administrator
Hi Tony,
Welcome to the MSDN Managed Newsgroup!
SQL Profiler is the only way for us to trace the SQL queries.
For SQL Server 2005 and related issues, you are welcome to post in the
newsgroup below:
Welcome to the Microsoft SQL Server 2005 Community Technology (CTP)
Newsgroups
http://communities.microsoft.com/new...lserver2005&sl
cid=us
Sincerely yours,
Michael Cheng
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
|||We are basically generating FOR XML EXPLICIT queries, which are
large UNION ALL queries. Did you try to use the Index Tuning Wizard to see
if it can recommend indices for these queries?
Best regards
Michael
"Tony Moulton" <precursor@.community.nospam> wrote in message
news:F1C423FA-3894-4947-A35F-EFA0E9951C2D@.microsoft.com...


Index tuning under heavy load...

I am trying to find the best way to tune/defrag the indexes on a database
which stays under heavy load 24/7/365
About 1-5 million records are added daily... By the end of the day, query
performance decreases dramatically.
DBCC DBREINDEX does the job the best, but it is an OFFLINE operation and is
unacceptable in my case as the table becomes offline and users get hung...
ALTER INDEX REBUILD (WITH ONLINE) works OK, still queries run slower
during this operation and it does not do as good a job as DBREINDEX.
What are my options? Is there a solution to this...?
I should add that we are on SQL Server 2005.
|||If you are on 2005 you should be using ALTER INDEX not DBCC xxx. Reindexing
is a very resource intensive operation and if you want to do this in a 24x7
operation you need to have the hardware to support it or performance will
suffer. But I would argue that your fill factors are not properly set to
limit fragmentation if by the end of the day performance suffers from
fragmentation. But fragmentation should not impact a properly tuned OLTP
system that much anyway. If you do a lot of scans it can hurt you but you
should find out why you are scanning and address that. Perhaps partitioning
is called for here as well. If you can partition such that most of the new
rows are in a different partition than the rest you won't need to rebuild
all the indexes.
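Andrew's suggestions might look like this in T-SQL (index and table names are hypothetical; ONLINE rebuilds require Enterprise Edition):

```sql
-- Rebuild online, leaving free space on each page for the day's inserts:
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON, FILLFACTOR = 80);

-- Lighter-weight alternative between full rebuilds:
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
```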
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
|||I submit that it isn't index frag that is hurting here, but rather the
statistics. New records won't be reflected in the stats (assuming a large
table in which 1-5M rows doesn't trigger an automatic stats update). Thus
queries involving the newly inserted rows won't have optimal query plans
(such as index seeks).
Set up a job to refresh your stats several times throughout the day. Be
careful of the type of scan performed. You need the scan to go quickly.
For those indexes that receive values throughout the range of the index
(like on LastName for example), fragmentation can become an issue. Pick a
reasonable fillfactor to avoid lots of page splits between rebuilds.
Picking this number is part art, but mostly science.
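A sketch of the stats-refresh job described above, to be scheduled via SQL Agent several times a day (object names and sample rate are hypothetical):

```sql
-- Sampled update keeps the scan fast on a large table:
UPDATE STATISTICS dbo.Orders WITH SAMPLE 20 PERCENT;

-- For the index the new rows land in, a full scan may be worth the cost:
UPDATE STATISTICS dbo.Orders (IX_Orders_OrderDate) WITH FULLSCAN;
```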
--
TheSQLGuru
President
Indicium Resources, Inc.
|||I think you are absolutely right...
And I do need a really fast scan while updating stats because I think it
places shared locks on the table.
Will this be the fastest scan I can achieve:
UPDATE STATISTICS Database.Table
WITH SAMPLE 5 PERCENT;
Thank you,
Michael
|||Be a little careful of too low of a sample. If the problem is due to rows
that the optimizer doesn't know about, you may still see the same problem with
only a 5% sample. This is usually a try-and-see, then adjust, methodology.
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Michael Kansky" <mike@.zazasoftware.com> wrote in message
news:OD5EkRY6HHA.5844@.TK2MSFTNGP02.phx.gbl...

Index Tuning Problem

Hello:
I am attempting to tune the indexes on a frequently used
SQL 2000 (Enterprise Edition) database. I used Profiler
to create a workload file that captured 3/4 of a day's
transactions.
I started the Index Tuning Wizard and went through the
steps to have the wizard analyze my workload file. After
analyzing the database for a while, the wizard returns the
following message:
The workload does not contain any events or queries that
can be tuned against the current database. Check to see
that the trace contains valid events or that the SQL
script contains valid queries and you are tuning against
the right database.
The workload file is close to 5GB in size and definitely
contains stored procedures and SQL statements. And I am
definitely sitting in the proper database.
Any comments or suggestions would be greatly appreciated
Brennan
You may need to save it as a script and then put in a use statement.
Otherwise, it will assume master. For example, if your database is MyDB:
use MyDB
go
-- the rest of the script goes here
--
Tom
---
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql
"Brennan" <anonymous@.discussions.microsoft.com> wrote in message
news:011601c39cb7$c7373c60$a401280a@.phx.gbl...
|||How do I save it as a script?
Brennan
>--Original Message--
>You may need to save it as a script and then put in a
use statement.
>Otherwise, it will assume master. For example, if your
database is MyDB:
>use MyDB
>go
>-- the rest of the script goes here
>--
>Tom
>----
--
>Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
>SQL Server MVP
>Columnist, SQL Server Professional
>Toronto, ON Canada
>www.pinnaclepublishing.com/sql
>
>"Brennan" <anonymous@.discussions.microsoft.com> wrote in
message
>news:011601c39cb7$c7373c60$a401280a@.phx.gbl...
>Hello:
>I am attempting to tune the indexes on a frequently used
>SQL 2000 (Enterprise Edition)database. I used Profiler
>to create a workload file that captured 3/4 of a day's
>transactions.
>I started the Index Tuning Wizard and went through the
>steps to have the wizard analyze my workload file. After
>analyzing the database for awhile, the wizard returns the
>following message:
>The workload does not contain any events or queries that
>can be tuned against the current database. Check to see
>that the trace contains valid events or that the SQL
>script contains valid queries and you are tuning against
>the right database.
>The workload file is close to 5GB in size and definitely
>contains stored procedures and SQL statements. And I am
>definitely sitting in the proper database.
>Any comments or suggestions would be greatly appreciated
>Brennan
>|||
I was referring to your workload script. You can load your trace into the
Profiler and then click on File->Save As->SQL Script.
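For instance, the exported workload script might start like this once the use statement has been added at the top (MyDB and the query below are just placeholders, not part of the original advice):

```sql
use MyDB
go
-- statements exported from the Profiler trace follow
SELECT CustNum, Name FROM Customers WHERE CustNum = 42
go
```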
--
Tom
---
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql
"Brennan" <anonymous@.discussions.microsoft.com> wrote in message
news:038301c39cbf$4da0b450$a101280a@.phx.gbl...
How do I save it as a script?
Brennan
>--Original Message--
>You may need to save it as a script and then put in a
use statement.
>Otherwise, it will assume master. For example, if your
database is MyDB:
>use MyDB
>go
>-- the rest of the script goes here
>--
>Tom
>----
--
>Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
>SQL Server MVP
>Columnist, SQL Server Professional
>Toronto, ON Canada
>www.pinnaclepublishing.com/sql
>
>"Brennan" <anonymous@.discussions.microsoft.com> wrote in
message
>news:011601c39cb7$c7373c60$a401280a@.phx.gbl...
>Hello:
>I am attempting to tune the indexes on a frequently used
>SQL 2000 (Enterprise Edition)database. I used Profiler
>to create a workload file that captured 3/4 of a day's
>transactions.
>I started the Index Tuning Wizard and went through the
>steps to have the wizard analyze my workload file. After
>analyzing the database for awhile, the wizard returns the
>following message:
>The workload does not contain any events or queries that
>can be tuned against the current database. Check to see
>that the trace contains valid events or that the SQL
>script contains valid queries and you are tuning against
>the right database.
>The workload file is close to 5GB in size and definitely
>contains stored procedures and SQL statements. And I am
>definitely sitting in the proper database.
>Any comments or suggestions would be greatly appreciated
>Brennan
>

Index Tuning Multiple Queries

I'm attempting to run the Index Tuning Wizard against multiple queries at
once (dozens) in SQL Query Analyzer. I put a GO at the end of them to
indicate a batch, select all the queries + GO, but the wizard only analyzes
the first query. I'm also avoiding the production hit of the Profiler,
unless a last resort.
Ideas?
Thanks|||Hi
Have you tried putting them into a .sql file and using that?
John
"Brett Ossman" wrote:

> I'm attempting to run the Index Tuning Wizard against multiple queries at
> once (dozens) in SQL Query Analyzer. I put a GO at the end of them to
> indicate a batch, select all the queries + GO, but the wizard only analyzes
> the first query. I'm also avoiding the production hit of the Profiler,
> unless a last resort.
> Ideas?
> Thanks|||Thanks for the tip, but unfortunately still only analyzed the first query.
"John Bell" wrote:
> Hi
> Have you tried putting them into a .sql file and using that?
> John
> "Brett Ossman" wrote:
>|||Hi Brett
Check out the topic on Index Tuning Wizard in books online and check if you
have any of the conditions it says that it will not work in. You could also
re-order the queries and see if it is the same query that gets tuned.
John
"Brett Ossman" <BrettOssman@.discussions.microsoft.com> wrote in message
news:7B9C9A35-F03B-4BEB-9288-EBF1AD47B21E@.microsoft.com...
> Thanks for the tip, but unfortunately still only analyzed the first query.
> "John Bell" wrote:
>

Index Tuning

Hi All!
I have this table:
CREATE TABLE [dbo].[Constructions] (
[ConstructionID] [int] IDENTITY (1, 1) NOT NULL , --PK
[TypeID] [int] NOT NULL ,
[TerritoryID] [int] NOT NULL,
[BeginDate] [datetime] NOT NULL ,
[EndDate] [datetime] NOT NULL ,
[IsCancelled] [bit] NOT NULL
) ON [PRIMARY]
With over 1,000,000 records and daily inserts of 10,000 records.
The fields that are updated by users from time to time are BeginDate,
EndDate and IsCancelled.
This is a table of an online database and the indexes could be rebuilt once
a day or so.
Here is the question:
What index would be the appropriate for this type of query (highly used):
SELECT
ConstructionID, TypeID, TerritoryID, BeginDate, EndDate
FROM
Constructions
WHERE
TerritoryID = @.TerritoryID
AND
EndDate < GETDATE()
AND
IsCancelled = 0|||I guess
1) Create an index on any very selective column. If something is very
selective then that's the only index you need!
2) Create an index on any combination of selective criteria. You don't need
everything from the WHERE clause but try and supply the combination of
columns that will yield a selective set!
3) If nothing is selective - even when combined - then cover the query!
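For the query above, a covering index along the lines of option 3 might look like this sketch on SQL Server 2000 (the index name is illustrative):

```sql
-- Key columns satisfy the WHERE clause; trailing columns cover the SELECT list
CREATE NONCLUSTERED INDEX IX_Constructions_Territory_Cover
ON Constructions (TerritoryID, IsCancelled, EndDate, TypeID, BeginDate)
```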
"Jorgebg" <Jorgebg@.discussions.microsoft.com> wrote in message
news:C72F8FBF-5E27-4271-A5DA-BD2DD8A3AD3B@.microsoft.com...
> Hi All!
> I have this table:
> CREATE TABLE [dbo].[Constructions] (
> [ConstructionID] [int] IDENTITY (1, 1) NOT NULL , --PK
> [TypeID] [int] NOT NULL ,
> [TerritoryID] [int] NOT NULL,
> [BeginDate] [datetime] NOT NULL ,
> [EndDate] [datetime] NOT NULL ,
> [IsCancelled] [bit] NOT NULL
> ) ON [PRIMARY]
> With over 1.000.000 records and daily inserts of 10.000 records.
> The fields that are updated by users from time to time are BeginDate,
> EndDate and IsCancelled.
> This is a table of an online database and the indexes could by rebuild
> once
> a day or so.
> Here is the question:
> What index would be the appropriate for this type of query (highly used):
> SELECT
> ConstructionID, TypeID, TerritoryID, BeginDate, EndDate
> FROM
> Constructions
> WHERE
> TerritoryID = @.TerritoryID
> AND
> EndDate < GETDATE()
> AND
> IsCancelled = 0|||It is pure guesswork without being able to analyze the actual data in
the table, but I suspect that an index on (TerritoryID, IsCancelled,
EndDate) would be a good place to start.
If I had access to the table I would be trying to get a feel for the
data by running queries along the lines of:
select IsCancelled, count(*) as rows
from Constructions
group by IsCancelled
select count(distinct TerritoryID) from Constructions
select TerritoryID, count(*) as rows,
sum(case when IsCancelled = 1 then 1 else 0 end) as Cancelled
from Constructions
group by TerritoryID
order by 2 desc
The idea behind queries like this is to start to understand the data.
This includes how selective each column is and how evenly distributed
the data is. A table with 100 values for TerritoryID is one thing,
but if 1 TerritoryID out of the hundred has 80% of all rows that is
something else. Likewise knowing if 1%, or 50%, or 99% of the rows
are cancelled makes rather a large difference.
Roy Harvey
Beacon Falls, CT
On Tue, 6 Jun 2006 02:21:01 -0700, Jorgebg
<Jorgebg@.discussions.microsoft.com> wrote:

>Hi All!
>I have this table:
>CREATE TABLE [dbo].[Constructions] (
> [ConstructionID] [int] IDENTITY (1, 1) NOT NULL , --PK
> [TypeID] [int] NOT NULL ,
> [TerritoryID] [int] NOT NULL,
> [BeginDate] [datetime] NOT NULL ,
> [EndDate] [datetime] NOT NULL ,
> [IsCancelled] [bit] NOT NULL
> ) ON [PRIMARY]
>With over 1.000.000 records and daily inserts of 10.000 records.
>The fields that are updated by users from time to time are BeginDate,
>EndDate and IsCancelled.
>This is a table of an online database and the indexes could by rebuild once
>a day or so.
>Here is the question:
>What index would be the appropriate for this type of query (highly used):
>SELECT
> ConstructionID, TypeID, TerritoryID, BeginDate, EndDate
>FROM
> Constructions
>WHERE
> TerritoryID = @.TerritoryID
> AND
> EndDate < GETDATE()
> AND
> IsCancelled = 0|||These are good suggestions. Alternatively, you could try using the Database
Tuning Advisor (or the Indexing Tuning Wizard in SQL Server 2000) to tune
your indexes for your whole workload.
SQL Server 2005 also includes the "Missing Indexes" feature which is
suitable for the task. Check the SQL Server 2005 Books Online for details on
the above options.
Regards,
Leo
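On SQL Server 2005, the missing-index DMVs mentioned above can be queried directly; a sketch along these lines lists the optimizer's suggestions weighted by estimated impact:

```sql
-- SQL Server 2005: index suggestions recorded by the optimizer since the last restart
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
  ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats s
  ON g.index_group_handle = s.group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC
```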
"Roy Harvey" <roy_harvey@.snet.net> wrote in message
news:4joa825gpkmpqe6ojmm2ig00fs338873f9@.4ax.com...
> It is pure guesswork without being able to analyze the actual data in
> the table, but I suspect that an index on (TerritoryID, IsCancelled,
> EndDate) would be a good place to start.
> If I had access to the table I would be trying to get a feel for the
> data by running queries along the lines of:
> select IsCancelled, count(*) as rows
> from Constructions
> group by IsCancelled
> select count(distinct TerritoryID) from Constructions
> select TerritoryID, count(*) as rows,
> sum(case when IsCancelled = 1 then 1 else 0 end) as Cancelled
> from Constructions
> group by TerritoryID
> order by 2 desc
> The idea behind queries like this is to start to understand the data.
> This includes how selective each column is and how evenly distributed
> the data is. A table with 100 values for TerritoryID is one thing,
> but if 1 TerritoryID out of the hundred has 80% of all rows that is
> something else. Likewise knowing if 1%, or 50%, or 99% of the rows
> are cancelled makes rather a large difference.
> Roy Harvey
> Beacon Falls, CT
>
> On Tue, 6 Jun 2006 02:21:01 -0700, Jorgebg
> <Jorgebg@.discussions.microsoft.com> wrote:
>|||On Tue, 6 Jun 2006 02:21:01 -0700, Jorgebg wrote:

>Hi All!
>I have this table:
>CREATE TABLE [dbo].[Constructions] (
> [ConstructionID] [int] IDENTITY (1, 1) NOT NULL , --PK
> [TypeID] [int] NOT NULL ,
> [TerritoryID] [int] NOT NULL,
> [BeginDate] [datetime] NOT NULL ,
> [EndDate] [datetime] NOT NULL ,
> [IsCancelled] [bit] NOT NULL
> ) ON [PRIMARY]
Hi Jorgebg,
The "--PK" comment suggests that the IDENTITY column is the primary key,
but you didn't declare it as such. SQL Server will not automatically
define a PRIMARY KEY constraint for IDENTITY columns.
In addition to the identity surrogate key, your table should also have a
business key (and you should declare a UNIQUE constraint for it).
(snip)
>Here is the question:
>What index would be the appropriate for this type of query (highly used):
>SELECT
> ConstructionID, TypeID, TerritoryID, BeginDate, EndDate
>FROM
> Constructions
>WHERE
> TerritoryID = @.TerritoryID
> AND
> EndDate < GETDATE()
> AND
> IsCancelled = 0
Suggestion 1:
CREATE NONCLUSTERED INDEX ix1
ON Constructions (TerritoryID, IsCancelled, EndDate)
Suggestion 2 - assuming you don't have a CLUSTERED index, or that you
can change it:
CREATE CLUSTERED INDEX ix2
ON Constructions (TerritoryID, IsCancelled, EndDate)
Suggestion 3.1 - assuming you have a CLUSTERED index on ConstructionID
and don't want to change it, AND assuming you're using SQL Server 2000:
CREATE NONCLUSTERED INDEX ix3_1
ON Constructions (TerritoryID, IsCancelled, EndDate, TypeID,
BeginDate)
Suggestion 3.2 - assuming you have a CLUSTERED index on ConstructionID and
don't want to change it, AND assuming you're using SQL Server 2005:
CREATE NONCLUSTERED INDEX ix3_2
ON Constructions (TerritoryID, IsCancelled, EndDate)
INCLUDE (TypeID, BeginDate)
Hugo Kornelis, SQL Server MVP|||I may be mistaken, but since isCancelled is a bit column, it is a poor
choice for an index. In fact, I don't think SQL Server will even allow it
(at least it does not on 2000). Otherwise, the advice you have gotten so
far should get you the performance that you need.
"Hugo Kornelis" <hugo@.perFact.REMOVETHIS.info.INVALID> wrote in message
news:ti9d82d667kunlbek0mrfu6midipucq7g7@.4ax.com...
> On Tue, 6 Jun 2006 02:21:01 -0700, Jorgebg wrote:
>
> Hi Jorgebg,
> The "--PK" comment suggests that the IDENTITY column is the primary key,
> but you didn't declare it as such. SQL Server will not automatically
> define a PRIMARY KEY constsaint for IDENTITY columns.
> In addition to the identity surrogate key, your table should also have a
> business key (and you should declare a UNIQUE constraint for it).
> (snip)
> Suggestion 1:
> CREATE NONCLUSTERED INDEX ix1
> ON Constructions (TerritoryID, IsCancelled, EndDate)
> Suggestion 2 - assuming you don't have a CLUSUTERED index, or that you
> can change it:
> CREATE CLUSTERED INDEX ix2
> ON Constructions (TerritoryID, IsCancelled, EndDate)
> Suggestion 3.1 - assuming you have a CLUSTERED index on ConstructionID
> and don't want to change it, AND assuming you're using SQL Server 2000:
> CREATE NONCLUSTERED INDEX ix3_1
> ON Constructions (TerritoryID, IsCancelled, EndDate, TypeID,
> BeginDate)
> Suggestion 3.2 assuming you have a CLUSTERED index on ConstructionID and
> don't want to change it, AND assuming you're using SQL Server 2005
> CREATE NONCLUSTERED INDEX ix3_1
> ON Constructions (TerritoryID, IsCancelled, EndDate)
> INCLUDE (TypeID, BeginDate)
>
> --
> Hugo Kornelis, SQL Server MVP|||On Wed, 7 Jun 2006 10:44:48 -0400, Jim Underwood wrote:

>I may be mistaken, but since isCancelled is a bit column, it is a poor
>choice for an index. In fact, I don't think SQL Server will even allow it
>(at least it does not on 2000).
Hi Jim,
Sorry, but you are wrong. Here's a script to prove it. If you execute
just the two SELECT statements with the option to show execution plan
on, you'll see that the index is not only created but also used.
CREATE TABLE BitTest
(PKCol int NOT NULL IDENTITY PRIMARY KEY,
BitCol bit NOT NULL)
go
CREATE INDEX x_bit ON BitTest (BitCol)
go
INSERT INTO BitTest (BitCol) VALUES (0)
INSERT INTO BitTest (BitCol) VALUES (0)
INSERT INTO BitTest (BitCol) VALUES (1)
INSERT INTO BitTest (BitCol) VALUES (1)
INSERT INTO BitTest (BitCol) VALUES (0)
INSERT INTO BitTest (BitCol) VALUES (1)
INSERT INTO BitTest (BitCol) VALUES (0)
go
-- Searched query - uses index seek on x_bit
SELECT * FROM BitTest WHERE BitCol = CAST(1 AS bit)
-- Query w/o WHERE - uses index scan on x_bit
SELECT * FROM BitTest
go
DROP TABLE BitTest
go
Here's the version of SQL Server I tested this on:
SELECT @.@.Version
go
Microsoft SQL Server 2000 - 8.00.2187 (Intel X86)
Mar 9 2006 11:38:51
Copyright (c) 1988-2003 Microsoft Corporation
Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
Hugo Kornelis, SQL Server MVP|||Hugo,
Thanks for that example. The results were identical on my installation.
I was misled by this quote from BOL.
bit
Integer data type 1, 0, or NULL.
Remarks
Columns of type bit cannot have indexes on them.
That will teach me to take BOL for granted without testing first.
That said, would it be preferable to put the bit col as the last column in
the index, after EndDate? Where EndDate will be more selective than
isCancelled? Unless, of course, isCancelled has very few values of 0.
"Hugo Kornelis" <hugo@.perFact.REMOVETHIS.info.INVALID> wrote in message
news:tqfe82p289pqt02752cshltfhaoroubl4n@.4ax.com...
> On Wed, 7 Jun 2006 10:44:48 -0400, Jim Underwood wrote:
>
> Hi Jim,
> Sorry, but you are wrong. Here's a script to prove it. If you execute
> just the two SELECT statements with the option to show execution plan
> on, you'll see that the index is not only create but also used.
> CREATE TABLE BitTest
> (PKCol int NOT NULL IDENTITY PRIMARY KEY,
> BitCol bit NOT NULL)
> go
> CREATE INDEX x_bit ON BitTest (BitCol)
> go
> INSERT INTO BitTest (BitCol) VALUES (0)
> INSERT INTO BitTest (BitCol) VALUES (0)
> INSERT INTO BitTest (BitCol) VALUES (1)
> INSERT INTO BitTest (BitCol) VALUES (1)
> INSERT INTO BitTest (BitCol) VALUES (0)
> INSERT INTO BitTest (BitCol) VALUES (1)
> INSERT INTO BitTest (BitCol) VALUES (0)
> go
> -- Searched query - uses index s on x_bit
> SELECT * FROM BitTest WHERE BitCol = CAST(1 AS bit)
> -- Query w/o WHERE - uses index scan on x_bit
> SELECT * FROM BitTest
> go
> DROP TABLE BitTest
> go
> Here's the version of SQL Server I tested this on:
> SELECT @.@.Version
> go
> Microsoft SQL Server 2000 - 8.00.2187 (Intel X86)
> Mar 9 2006 11:38:51
> Copyright (c) 1988-2003 Microsoft Corporation
> Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
> --
> Hugo Kornelis, SQL Server MVP|||> That said, would it be preferable to put the bit col as the last column in
> the index, after EndDate? Where EndDate will be more selective than
> isCancelled?
Possibly. Remember that statistics are only kept for the first column in the
index. So if you have:
WHERE bitcol = 1
AND othercol = 2786
Then SQL Server doesn't know the selectivity for the "othercol = 2786"
condition, and because of that the cost estimation can be off. Having
othercol as the first column in the index means statistics are available for
that column, and selectivity for othercol = 2786 can be determined.
However, if the index is defined over (othercol, bitcol) and you have
another query:
WHERE bitcol = 1
Then SQL Server cannot use that index, as bitcol isn't the first column in
that index.
Tradeoffs...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Jim Underwood" <james.underwoodATfallonclinic.com> wrote in message
news:ecQcasviGHA.4504@.TK2MSFTNGP03.phx.gbl...
> Hugo,
> Thanks for that example. The results were identical on my installation.
> I was misled by this quote from BOL.
> bit
> Integer data type 1, 0, or NULL.
> Remarks
> Columns of type bit cannot have indexes on them.
> That will teach me to take BOL for granted without testing first.
> That said, would it be preferable to put the bit col as the last column in
> the index, after EndDate? Where EndDate will be more selective than
> isCancelled? Unless, of course, isCancelled has very few values of 0.
>
> "Hugo Kornelis" <hugo@.perFact.REMOVETHIS.info.INVALID> wrote in message
> news:tqfe82p289pqt02752cshltfhaoroubl4n@.4ax.com...
>|||Actually, my thought process was a little different, let me try to explain.
I may be way off here, but this is how I thought multicolumn indexes
worked...
We query for (yes, I realize this is different from the OP) :
TerritoryID = @.TerritoryID
AND EndDate = @.EndDate
AND IsCancelled = 0
Lets say we have 1,000,000 rows of data, and we have an index on
(TerritoryID, IsCancelled, EndDate)
@.TerritoryID narrows the search to 100 records
IsCancelled = 0 cuts that in half to 50 records
@.EndDate narrows those 50 records to 5 records
Now, lets say we have an index on (TerritoryID, EndDate, IsCancelled)
@.TerritoryID narrows the search to 100 records
@.EndDate narrows those 100 records to 10 records
IsCancelled = 0 cuts that in half to 5 records
In the case above wouldn't it be more efficient to have the date as the
second column in the index? Essentially, put the most selective column
before the others in the index. Of course, this assumes that SQL Server's
algorithms use the physical order of the columns to filter data, which may
not be the case.
Upon reviewing the query in the OP, it occurs to me that EndDate <= GetDate
may actually be less selective than IsCancelled = 0, and that for that
specific query having the bit column precede the date may be more efficient
after all.
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:OS58CwviGHA.1600@.TK2MSFTNGP04.phx.gbl...
> Possibly. Remember that statistics is only kept for the first column in
the index. So if you have:
> WHERE bitcol = 1
> AND othercol = 2786
> Then SQL Server doesn't know the selectivity for the "othercol = 2768"
condition, and because of
> that the cost estimation can be off. Having othercol as the first column
in the index mean
> statistics is available for that column and selectivity for othercol =
2786 can be determined.
> However, if the index is defined over (othercol, bitcol) and you have
another query:
> WHERE bitcol = 1
> Then SQL Server cannot use that index, as bitcol isn't the first column in
that index.
> Tradeoffs...
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Jim Underwood" <james.underwoodATfallonclinic.com> wrote in message
> news:ecQcasviGHA.4504@.TK2MSFTNGP03.phx.gbl...
>

index tuning

Hi,
I have a very large table that holds all the ledger transactions. It's the
number 1 table that gets hit when posting invoices, etc. It started
with 5 indexes that the developers created. I have been running traces that are
fed to the index tuning wizard, and over the last 3 months it has
generated about 20 new indexes, one of them clustered. No significant performance
gain is noticed, rather the opposite. What do I do now? Please help,
Regards,
dimitris,
The index tuning wizard should be used as a guide, not as a tool for
creating your indexes. I would run a trace on this table over a typical
working day, and find the top 10 most intensive queries against this
table based on IO and CPU, and possibly duration.
Find out what the SARGs are for these queries and make sure you are
indexed for these SARGs. To do this check the WHERE clauses of the
queries. The clustered index should be placed on a narrow column and
chosen very carefully. Try to place it on a FK column, or a column where
range queries are performed, or one where there are frequent ORDER BY
clauses.
Test your new queries in a test environment, don't do this in
production. Benchmark your results so you can compare what works against
what doesn't.
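A sketch of that analysis, assuming the trace has been saved to a table (dbo.TraceTable is a placeholder name, not something from the original post):

```sql
-- Top 10 statements by total reads; CPU and duration shown for comparison
SELECT TOP 10
    CAST(TextData AS varchar(4000)) AS statement_text,
    COUNT(*)      AS executions,
    SUM(Reads)    AS total_reads,
    SUM(CPU)      AS total_cpu,
    SUM(Duration) AS total_duration
FROM dbo.TraceTable
WHERE TextData IS NOT NULL
GROUP BY CAST(TextData AS varchar(4000))
ORDER BY total_reads DESC
```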
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
dimitris wrote:
> Hi,
> I have a very large table that holds all the ledger transactions. Its the
> number 1 table that gets hit when posting invoices, etc, etc. It started
> with 5 indexes that the developers did. I have been running traces that are
> fed to the index tuning wizzard and now over the last 3 months it has
> generated about 20 new indexes, one is clustered. No significant performance
> is noticed, rather the opposite. What do I do now? Please help,
> Regards,
>

Index tuning

Hi all,
I created a load file (with a few test queries) via Profiler and then I tried to
run the tuning wizard. The wizard gave the message "Workload does not contain any
events or queries that can be tuned against current database ..."
It's strange because the load file contains events with queries, and these queries
were run against the current database (I also set a filter condition for database
name to be sure ...).
Any ideas what am I doing wrong ?
Regards,
Jan
Jan Hruz wrote:
> Hi all,
> I created load file (with few test queries) via Profiler and then I
> tried to run tuning wizard. Wizards gave message "Workload does not
> contain any events or queries that can be tuned against current
> database ..." It's strange because load file contains events with
> queries and this
> queries were run over current database (I also set filter condition
> for database name to be sure ...)..
> Any ideas what am I doing wrong ?
> Regards,
> Jan
I think the ITW requires SQL:BatchCompleted and RPC:Completed events
with the necessary columns. Please post the events and columns that were
captured as well as any filters you used during the trace.
David Gugick
Imceda Software
www.imceda.com
|||I used the ProfilerTuning template. It is set up to trace events:
- RPC:Completed
- SQL:BatchCompleted
and data columns :
- EventClass
- TextData
- Duration
- BinaryData
- SPID
I added new filter conditions (to track only my requests):
- DBUserName
- DatabaseName
Regards,
Jan
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:O604jpHRFHA.3288@.TK2MSFTNGP14.phx.gbl...
> Jan Hruz wrote:
> I think the ITW requires SQL:BatchCompleted and RPC:Completed events with
> the necessary columns. Please post the events and columns that were
> captured as well as any filters you used during the trace.
> --
> David Gugick
> Imceda Software
> www.imceda.com

Index troubles

We're seeing funny things with SQL Server 2000 when dealing with indexes on
a customer table. When we issue:
DBCC DBREINDEX (cust)
we see an error Message 169: a column has been specified more than once in
the order by list.
We see the same error during Update Statistics.
We're using the Query analyzer - index tuning wizard to create indexes on
this table. The table has 12 indexes, and the field Cust_num is used in 7 of
12 indexes.
We tried to remove indexes one by one, we get same Message 169.
Any clues? Thanks!!|||> The table has 12 indexes,
WOW! That isn't the most I've ever seen, but it's certainly more than I'm
used to. Why is one column covered in seven different indices?
> We tried to remove indexes one by one, we get same Message 169.
Can you show the result of:
EXEC sp_help tablename
EXEC sp_helpindex tablename|||Here is the output from sp_helpindex
I think the hypotheticals are added by the Index wizard ... how do you get
rid of these?
--
Address nonclustered located on PRIMARY Address
CustID nonclustered unique located on PRIMARY CustID
CustNum nonclustered unique located on PRIMARY CustNum
hind_1682821057_24A_2A nonclustered hypothetical auto create located on
PRIMARY PriceCode CustNum
hind_1682821057_2A_24A_50A nonclustered hypothetical auto create located
on PRIMARY CustNum PriceCode AssignedLocation
hind_1682821057_2A_50A nonclustered hypothetical auto create located on
PRIMARY CustNum AssignedLocation
hind_1682821057_50A_24A_2A nonclustered hypothetical auto create located
on PRIMARY AssignedLocation PriceCode CustNum
hind_1682821057_50A_2A nonclustered hypothetical auto create located on
PRIMARY AssignedLocation CustNum
hind_c_1682821057_24A clustered hypothetical auto create located on
PRIMARY PriceCode
hind_c_1682821057_24A_2A clustered hypothetical auto create located on
PRIMARY PriceCode CustNum
hind_c_1682821057_2A clustered hypothetical auto create located on PRIMARY
CustNum
hind_c_1682821057_2A_24A clustered hypothetical auto create located on
PRIMARY CustNum PriceCode
Name nonclustered located on PRIMARY Name
tblCust10 nonclustered located on PRIMARY CustNum Name City Phone
tblCust13 nonclustered located on PRIMARY CustNum SalesComm
tblCust2 clustered located on PRIMARY State CustNum
tblCust23 nonclustered located on PRIMARY CustNum PriceCode
tblCust25 nonclustered located on PRIMARY CustNum Name City State
tblCust26 nonclustered located on PRIMARY PriceCode CustNum Name City
State
tblCust31 nonclustered located on PRIMARY AssignedLocation PriceCode
tblCust7 nonclustered located on PRIMARY SalesComm DateNewComm
tblCust9 nonclustered located on PRIMARY CustNum PriceIncrease
"Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:#oAjuKxLFHA.3844@.TK2MSFTNGP14.phx.gbl...
> > The table has 12 indexes,
> WOW! That isn't the most I've ever seen, but it's certainly more than I'm
> used to. Why is one column covered in seven different indices?
> > We tried to remove indexes one by one, we get same Message 169.
> Can you show the result of:
> EXEC sp_help tablename
> EXEC sp_helpindex tablename
>|||DROP INDEX should do it.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
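To find all of the leftover hypothetical (`hind_...`) indexes at once, something along these lines should work on SQL Server 2000 (a sketch; review the generated statements before running them):

```sql
-- Generate DROP INDEX statements for hypothetical indexes left by the ITW
SELECT 'DROP INDEX [' + OBJECT_NAME(id) + '].[' + name + ']'
FROM sysindexes
WHERE INDEXPROPERTY(id, name, 'IsHypothetical') = 1
```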
"Dean J Garrett" <info@.amuletc.com> wrote in message news:%23Sxvb3xLFHA.2648@.TK2MSFTNGP14.phx.gbl...
> Here is the output from sp_helpindex
> I think the hypotheticals are added by the Index wizard ... how do you get
> rid of these?
> --
> Address nonclustered located on PRIMARY Address
> CustID nonclustered unique located on PRIMARY CustID
> CustNum nonclustered unique located on PRIMARY CustNum
> hind_1682821057_24A_2A nonclustered hypothetical auto create located on
> PRIMARY PriceCode CustNum
> hind_1682821057_2A_24A_50A nonclustered hypothetical auto create located
> on PRIMARY CustNum PriceCode AssignedLocation
> hind_1682821057_2A_50A nonclustered hypothetical auto create located on
> PRIMARY CustNum AssignedLocation
> hind_1682821057_50A_24A_2A nonclustered hypothetical auto create located
> on PRIMARY AssignedLocation PriceCode CustNum
> hind_1682821057_50A_2A nonclustered hypothetical auto create located on
> PRIMARY AssignedLocation CustNum
> hind_c_1682821057_24A clustered hypothetical auto create located on
> PRIMARY PriceCode
> hind_c_1682821057_24A_2A clustered hypothetical auto create located on
> PRIMARY PriceCode CustNum
> hind_c_1682821057_2A clustered hypothetical auto create located on PRIMARY
> CustNum
> hind_c_1682821057_2A_24A clustered hypothetical auto create located on
> PRIMARY CustNum PriceCode
> Name nonclustered located on PRIMARY Name
> tblCust10 nonclustered located on PRIMARY CustNum Name City Phone
> tblCust13 nonclustered located on PRIMARY CustNum SalesComm
> tblCust2 clustered located on PRIMARY State CustNum
> tblCust23 nonclustered located on PRIMARY CustNum PriceCode
> tblCust25 nonclustered located on PRIMARY CustNum Name City State
> tblCust26 nonclustered located on PRIMARY PriceCode CustNum Name City
> State
> tblCust31 nonclustered located on PRIMARY AssignedLocation PriceCode
> tblCust7 nonclustered located on PRIMARY SalesComm DateNewComm
> tblCust9 nonclustered located on PRIMARY CustNum PriceIncrease
>
> "Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
> news:#oAjuKxLFHA.3844@.TK2MSFTNGP14.phx.gbl...
>> > The table has 12 indexes,
>> WOW! That isn't the most I've ever seen, but it's certainly more than I'm
>> used to. Why is one column covered in seven different indices?
>> > We tried to remove indexes one by one, we get same Message 169.
>> Can you show the result of:
>> EXEC sp_help tablename
>> EXEC sp_helpindex tablename
>>
>

Index troubles

We're seeing funny things with SQL Server 2000 when dealing with indexes on
a customer table. When we issue:
DBCC DBREINDEX (cust)
we see an error Message 169: a column has been specified more than once in
the order by list.
We see the same error during Update Statistics.
We're using the Query analyzer - index tuning wizard to create indexes on
this table. The table has 12 indexes, and the field Cust_num is used in 7 of
12 indexes.
We tried to remove the indexes one by one, but we get the same Message 169.
Any clues? Thanks!!
> The table has 12 indexes,
WOW! That isn't the most I've ever seen, but it's certainly more than I'm
used to. Why is one column covered in seven different indices?

> We tried to remove indexes one by one, we get same Message 169.
Can you show the result of:
EXEC sp_help tablename
EXEC sp_helpindex tablename
|||Here is the output from sp_helpindex
I think the hypotheticals are added by the Index wizard ... how do you get
rid of these?
Address nonclustered located on PRIMARY Address
CustID nonclustered unique located on PRIMARY CustID
CustNum nonclustered unique located on PRIMARY CustNum
hind_1682821057_24A_2A nonclustered hypothetical auto create located on PRIMARY PriceCode CustNum
hind_1682821057_2A_24A_50A nonclustered hypothetical auto create located on PRIMARY CustNum PriceCode AssignedLocation
hind_1682821057_2A_50A nonclustered hypothetical auto create located on PRIMARY CustNum AssignedLocation
hind_1682821057_50A_24A_2A nonclustered hypothetical auto create located on PRIMARY AssignedLocation PriceCode CustNum
hind_1682821057_50A_2A nonclustered hypothetical auto create located on PRIMARY AssignedLocation CustNum
hind_c_1682821057_24A clustered hypothetical auto create located on PRIMARY PriceCode
hind_c_1682821057_24A_2A clustered hypothetical auto create located on PRIMARY PriceCode CustNum
hind_c_1682821057_2A clustered hypothetical auto create located on PRIMARY CustNum
hind_c_1682821057_2A_24A clustered hypothetical auto create located on PRIMARY CustNum PriceCode
Name nonclustered located on PRIMARY Name
tblCust10 nonclustered located on PRIMARY CustNum Name City Phone
tblCust13 nonclustered located on PRIMARY CustNum SalesComm
tblCust2 clustered located on PRIMARY State CustNum
tblCust23 nonclustered located on PRIMARY CustNum PriceCode
tblCust25 nonclustered located on PRIMARY CustNum Name City State
tblCust26 nonclustered located on PRIMARY PriceCode CustNum Name City State
tblCust31 nonclustered located on PRIMARY AssignedLocation PriceCode
tblCust7 nonclustered located on PRIMARY SalesComm DateNewComm
tblCust9 nonclustered located on PRIMARY CustNum PriceIncrease
"Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:#oAjuKxLFHA.3844@.TK2MSFTNGP14.phx.gbl...
> WOW! That isn't the most I've ever seen, but it's certainly more than I'm
> used to. Why is one column covered in seven different indices?
>
> Can you show the result of:
> EXEC sp_help tablename
> EXEC sp_helpindex tablename
>
|||DROP INDEX should do it.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Dean J Garrett" <info@.amuletc.com> wrote in message news:%23Sxvb3xLFHA.2648@.TK2MSFTNGP14.phx.gbl...
> Here is the output from sp_helpindex
> I think the hypotheticals are added by the Index wizard ... how do you get
> rid of these?
> --
> Address nonclustered located on PRIMARY Address
> CustID nonclustered unique located on PRIMARY CustID
> CustNum nonclustered unique located on PRIMARY CustNum
> hind_1682821057_24A_2A nonclustered hypothetical auto create located on
> PRIMARY PriceCode CustNum
> hind_1682821057_2A_24A_50A nonclustered hypothetical auto create located
> on PRIMARY CustNum PriceCode AssignedLocation
> hind_1682821057_2A_50A nonclustered hypothetical auto create located on
> PRIMARY CustNum AssignedLocation
> hind_1682821057_50A_24A_2A nonclustered hypothetical auto create located
> on PRIMARY AssignedLocation PriceCode CustNum
> hind_1682821057_50A_2A nonclustered hypothetical auto create located on
> PRIMARY AssignedLocation CustNum
> hind_c_1682821057_24A clustered hypothetical auto create located on
> PRIMARY PriceCode
> hind_c_1682821057_24A_2A clustered hypothetical auto create located on
> PRIMARY PriceCode CustNum
> hind_c_1682821057_2A clustered hypothetical auto create located on PRIMARY
> CustNum
> hind_c_1682821057_2A_24A clustered hypothetical auto create located on
> PRIMARY CustNum PriceCode
> Name nonclustered located on PRIMARY Name
> tblCust10 nonclustered located on PRIMARY CustNum Name City Phone
> tblCust13 nonclustered located on PRIMARY CustNum SalesComm
> tblCust2 clustered located on PRIMARY State CustNum
> tblCust23 nonclustered located on PRIMARY CustNum PriceCode
> tblCust25 nonclustered located on PRIMARY CustNum Name City State
> tblCust26 nonclustered located on PRIMARY PriceCode CustNum Name City
> State
> tblCust31 nonclustered located on PRIMARY AssignedLocation PriceCode
> tblCust7 nonclustered located on PRIMARY SalesComm DateNewComm
> tblCust9 nonclustered located on PRIMARY CustNum PriceIncrease
>
> "Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
> news:#oAjuKxLFHA.3844@.TK2MSFTNGP14.phx.gbl...
>
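The cleanup Tibor suggests can be scripted when the wizard has left many of these behind. A sketch only, assuming SQL Server 2000 and the cust table from the original post; review the generated statements before running them:

```sql
-- Generate DROP INDEX statements for every hypothetical index the
-- Index Tuning Wizard left on one table (SQL Server 2000 assumed).
SELECT 'DROP INDEX cust.' + QUOTENAME(i.name) AS drop_stmt
FROM sysindexes i
WHERE i.id = OBJECT_ID('cust')
  AND i.name IS NOT NULL
  AND INDEXPROPERTY(i.id, i.name, 'IsHypothetical') = 1
```

The hind_% naming pattern in the sp_helpindex output could be used as a filter instead, but the IsHypothetical property is the more reliable test.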

Index tree

I can understand that the number of table records will affect the width of the
tree. But why does it also affect the depth of the tree? And how does the key
size affect the depth and width of the index tree?
Alan,
The b-tree adds a level when the pages in its upper levels fill up: each
page can hold only a limited number of keys, so deeper levels are needed to
address more data. Keeping the tree balanced is what keeps the root-to-leaf
path, and hence the number of I/Os per lookup, short and predictable.
Let's say you have an index on surname. The more data in the table, the
deeper the b-tree may become, yet each lookup still costs only a few I/Os.
I'm finding this topic difficult to describe in a newsgroup posting, so
hopefully the following articles on B-Trees will help you:
B-tree algorithms
http://www.semaphorecorp.com/btp/algo.html
Binary tree
http://en.wikipedia.org/wiki/Binary_tree
binary tree
http://planetmath.org/encyclopedia/BinaryTree.html
I think B-Tree actually stands for Balanced Tree, not Binary Tree; but
the two terms are often used synonymously.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Alan wrote:
> I can understand the number of table records will affect the width of the
> tree. But why it also affect the depth of the tree ? Or how the key size
> will afftect depth and width of the index tree ?
>
|||On Tue, 7 Dec 2004 15:03:09 +1100, Alan wrote:

>I can understand the number of table records will affect the width of the
>tree. But why it also affect the depth of the tree ? Or how the key size
>will afftect depth and width of the index tree ?
Hi Alan,
I'll use an example to explain. Let's assume you have a nonclustered index
with 800 bytes in the indexed columns and another 800 bytes in the
clustered key.
The leaf pages store both the indexed values and the corresponding values
in the clustered index (as locator to the actual row). Other pages (root,
intermediate) store only the indexed values. Since each page is about 8K,
a leaf page holds values for 5 rows; other pages have pointers for 10
rows.
If the number of rows in the table is 10, we need two leaf pages (assuming
no empty space, which will not always be the case in practice) to hold all
these rows. The root page will have the indexed values of the first row on
leaf page 1 and the first row on page 2; the rest of the root page remains
empty. This B-tree has depth 2 (root is level 1; leaf at level 2).
If we add another 40 rows for a total of 50, we need 10 leaf pages. The
root page will have the indexed values of the first row on each of these
10 leaf pages and no room to spare.
One extra row, bringing the total to 51, means we now need 11 leaf pages.
Since the root page can only point to max 10 pages, we need to add a
level. The new B-tree will have one root, two intermediate (level-2) and
11 data pages. One intermediate page will have the indexed values of the
first row on the first 5 or 6 leaf pages; the other intermediate page has
the indexed values of the first row on the remaining leaf pages; the root
page will only hold the indexed values of the first row of the two
intermediate pages. Note that we now have depth 3: root at level 1,
intermediate at level 2 and leaf at level 3.
We can now continue to add rows. When there are 500 rows, there are 100
leaf pages, 10 intermediate pages (each pointing to 10 leaf pages) and 1
root page (pointing to the 10 intermediate pages). All these pages are
completely full: when the 501st row is added, another level has to be
added to the index and it is now at depth 4.
The above shows how number of rows affects the B-tree depth. To see how
key size affects B-tree depth as well, imagine what happens if the indexed
columns are not 800 but 80 bytes: now you can store the indexed values of
not 10 but 100 rows in non-leaf pages. For the leaf pages, the capacity
would increase to (8K / (80 + 800)) = 9 rows. If the size of the clustered
index would decrease as well, this number would rise even further. This of
course means that you can add more rows until all leaf, intermediate and
root pages are full and another level has to be added.
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
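Hugo's arithmetic generalises: the depth is one (for the leaf level) plus the number of times the page count must be divided by the non-leaf fan-out before a single root page remains. A T-SQL sketch of that calculation, using the assumed figures from his example (5 rows per leaf page, fan-out of 10):

```sql
-- Compute B-tree depth from row count, leaf capacity and fan-out.
-- Figures mirror Hugo's worked example; real pages also lose space
-- to headers and free space, so treat this as an approximation.
DECLARE @rows int, @leaf_rows int, @fanout int, @pages int, @depth int
SELECT @rows = 501, @leaf_rows = 5, @fanout = 10
SELECT @pages = CEILING(1.0 * @rows / @leaf_rows), @depth = 1
WHILE @pages > 1
BEGIN
    -- each extra level lets one page address @fanout pages below it
    SELECT @pages = CEILING(1.0 * @pages / @fanout), @depth = @depth + 1
END
SELECT @depth AS index_depth  -- 501 rows gives depth 4, as in the example
```

Plugging in 50 rows gives depth 2 and 51 rows gives depth 3, matching the transitions Hugo walks through.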

index temporary table

Hi All,
I'm looking at a query at the moment which builds up a large-ish
(50,000 rows) temporary table, then goes about performing various
updates and selects. This is taking ages as the table is too large, so
full table scans are being performed on about 8 or 9 separate selects.
I've tried adding a few indexes to the temporary table, but with little
improvement; however, it has prompted the following questions:
is there ever a good situation to index a local temporary table, or
does the overhead of building and maintaining the index always outweigh
the benefits?
will there be concurrency issues for the index? You can have several
temporary tables of the same name; if I create an index against one,
will another temporary table be able to have an index of that name
created against it as well (or is there something like a temporary
index)?
Like I said, in this case indexes don't appear to help anyway, but it
would be nice to know if it's ever an option.
Cheers
WillWill
> is there ever a good situation to index a local temporary table, or
> does the overhead of building and maintaining the index always outweigh
> the benefits?
Well defined indexes will improve performance

> will there be concurrency issues for the index? you can have several
> temporary tables of the same name, if I create an index against it,
> will another temporary table be able to have an index of that name
> created against it as well (or is there something like a temporary
> index)?
Can you be more specific? Are you going to JOIN the tables ?

> Like I said, in this case indexes don't appear to help anyway, but it
> would be nice to know if it's ever an option.
What are you trying to achieve, can you explain? DDL + sample data +
expected result will be helpful.
"Will" <william_pegg@.yahoo.co.uk> wrote in message
news:1144673353.232351.101530@.i39g2000cwa.googlegroups.com...
> Hi All,
> I'm looking at a query at the moment which builds up a large-ish
> (50,000 rows) temporary table, then goes about performing various
> updates and selects. This is taking ages as the table is too large, so
> full table scans are being performed on ab out 8 or 9 separate selects.
> I've tried adding a few indexes to the temporary table, but with little
> improvement, however it has prompted the following questions:
> is there ever a good situation to index a local temporary table, or
> does the overhead of building and maintaining the index always outweigh
> the benefits?
> will there be concurrency issues for the index? you can have several
> temporary tables of the same name, if I create an index against it,
> will another temporary table be able to have an index of that name
> created against it as well (or is there something like a temporary
> index)?
> Like I said, in this case indexes don't appear to help anyway, but it
> would be nice to know if it's ever an option.
> Cheers
> Will
>|||If you create a local temp table (i.e. name begins with a single #), then
there are no concurrency issues. As for performance, they often do help.
I'd populate the table first and then add the indexes. There are no hard
and fast rules, other than to try it and see if it helps.
If you post your DDL and the code that runs slowly, perhaps we can
troubleshoot further.
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Toronto, ON Canada
.
"Will" <william_pegg@.yahoo.co.uk> wrote in message
news:1144673353.232351.101530@.i39g2000cwa.googlegroups.com...
Hi All,
I'm looking at a query at the moment which builds up a large-ish
(50,000 rows) temporary table, then goes about performing various
updates and selects. This is taking ages as the table is too large, so
full table scans are being performed on ab out 8 or 9 separate selects.
I've tried adding a few indexes to the temporary table, but with little
improvement, however it has prompted the following questions:
is there ever a good situation to index a local temporary table, or
does the overhead of building and maintaining the index always outweigh
the benefits?
will there be concurrency issues for the index? you can have several
temporary tables of the same name, if I create an index against it,
will another temporary table be able to have an index of that name
created against it as well (or is there something like a temporary
index)?
Like I said, in this case indexes don't appear to help anyway, but it
would be nice to know if it's ever an option.
Cheers
Will|||Uri, Tom,
Thanks for your replies. I can't post the exact code as I don't own it,
but the basic layout is as follows:
Create table #result(col1 nvarchar(100), col2 nvarchar(100), ... ,...,
coln decimal(18,2))
INSERT INTO #result
select stuff
from applicationtables
Update #Result
SET col5 = col4*col6
FROM #Result as r
JOIN applicationTable as a on a.Col9 = r.Col9
WHERE Col1 = 'a fixed filter'
The code then does a few more updates with different formulas and joins
on different columns, but the same basic statement
I thought that it would be made more efficient by indexing #Result such
that the updates could more quickly complete the table scans (e.g.
create an index on #results.col9), however the improvement was
negligible.
In this case it was just a bit of opportunistic optimisation, I was
really posting the question more to know if this was a route to attempt
if I encounter any other queries with large temp tables - which it
appears it can be.
Thanks for the help
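A minimal sketch of the populate-then-index pattern Tom and Uri describe, using hypothetical names modelled on Will's outline; the column types here are assumptions:

```sql
-- Populate first, then index: the index is built once over the full
-- table instead of being maintained row by row during the insert.
CREATE TABLE #result (
    col1 nvarchar(100),
    col9 int,            -- assumed to match applicationTable.Col9 exactly
    col4 decimal(18,2),
    col5 decimal(18,2),
    col6 decimal(18,2)
)

INSERT INTO #result (col1, col9, col4, col6)
SELECT Col1, Col9, Col4, Col6
FROM applicationTable

CREATE INDEX ix_result_col9 ON #result (col9)

-- the join column now has an index, and identical datatypes on both
-- sides keep it usable
UPDATE r
SET col5 = r.col4 * r.col6
FROM #result AS r
JOIN applicationTable AS a ON a.Col9 = r.col9
WHERE r.col1 = 'a fixed filter'
```

Matching the datatypes on both sides of the join, as Tom notes below, is what lets the optimizer actually use ix_result_col9.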
Will|||Be sure that when you do the joins that you have identical datatypes. For
example, if you join an nvarchar to a varchar, it will usually skip the
index.
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Toronto, ON Canada
.
"Will" <william_pegg@.yahoo.co.uk> wrote in message
news:1144676705.952576.294480@.t31g2000cwb.googlegroups.com...
Uri, Tom,
Thanks for your replies. I can't post the exact code as I don't own it,
but the basic layout is as follows:
Create table #result(col1 nvarchar(100), col2 nvarchar(100), ... ,...,
coln decimal(18,2))
INSERT INTO #result
select stuff
from applicationtables
Update #Result
SET col5 = col4*col6
FROM #Result as r
JOIN applicationTable as a on a.Col9 = r.Col9
WHERE Col1 = 'a fixed filter'
The code then does a few more updates with different formulas and joins
on different columns, but the same basic statement
I thought that it would be made more efficient by indexing #Result such
that the updates could more quickly complete the table scans (e.g.
create an index on #results.col9), however the improvement was
negligeable.
In this case it was just a bit of opportunistic optimisation, I was
really posting the question more to know if this was a route to attempt
if I encounter any other queries with large temp tables - which it
appears it can be.
Thanks for the help
Will|||This may sound a little backward, but have you considered using a real table
to do the job?
I'll explain a little more. Add another column to your temp table called
say, ExecutionGUID, as a GUID column. Then inside your SP, create a GUID
variable, and populate it with NewID(), Now for all queries, including the
initial insert use your GUID to identify your new subset.
If you pass the Temp table to another stored procedure, this will cause the
procedure to recompile, which is extremely expensive. Using a real table
the procedure does not have to recompile every execution.
You can also combine this code with a cleanup job, which runs automatically,
you'll need to know what you can delete but it'll mean your client isn't
hanging about whilst the delete takes place.
This may not be suitable for your situation, but it might be worth a try.
Colin.
"Will" <william_pegg@.yahoo.co.uk> wrote in message
news:1144673353.232351.101530@.i39g2000cwa.googlegroups.com...
> Hi All,
> I'm looking at a query at the moment which builds up a large-ish
> (50,000 rows) temporary table, then goes about performing various
> updates and selects. This is taking ages as the table is too large, so
> full table scans are being performed on ab out 8 or 9 separate selects.
> I've tried adding a few indexes to the temporary table, but with little
> improvement, however it has prompted the following questions:
> is there ever a good situation to index a local temporary table, or
> does the overhead of building and maintaining the index always outweigh
> the benefits?
> will there be concurrency issues for the index? you can have several
> temporary tables of the same name, if I create an index against it,
> will another temporary table be able to have an index of that name
> created against it as well (or is there something like a temporary
> index)?
> Like I said, in this case indexes don't appear to help anyway, but it
> would be nice to know if it's ever an option.
> Cheers
> Will
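Colin's permanent-work-table idea can be sketched as follows; the table and column names are hypothetical, and the per-execution GUID is what keeps concurrent report runs apart:

```sql
-- A real (permanent) work table keyed by a per-execution GUID, as an
-- alternative to a temp table; names here are illustrative only.
CREATE TABLE report_work (
    ExecutionGUID uniqueidentifier NOT NULL,
    col1 nvarchar(100),
    col5 decimal(18,2)
)

DECLARE @run uniqueidentifier
SET @run = NEWID()   -- unique key for this execution

INSERT INTO report_work (ExecutionGUID, col1, col5)
SELECT @run, Col1, Col5
FROM applicationTable

-- every later statement filters on this run's rows only
SELECT col1, col5
FROM report_work WITH (ROWLOCK)
WHERE ExecutionGUID = @run

-- cleanup at the end (or from a scheduled cleanup job)
DELETE FROM report_work WHERE ExecutionGUID = @run
```

An index on ExecutionGUID would keep the per-run filtering cheap once the table grows.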
>|||Hi Colin,
Thanks for the reply,
In this case it's not possible however...
You see, first off the procedure is building up for a crystal report
(it's where I see the most horrendous use of temporary tables).
Therefore this has to be able to have 2 people running the report at
the same time (though I dread to think of the load on the server).
If I start using Guids and globalising this behaviour I'll start
getting issues with 2 competing processes locking this table, when in
fact the data is entirely separate, and can be kept that way. Also, the
data is temporary, and I don't see the benefit in actually creating the
table, just to drop it afterwards.
I don't quite get where you're coming from with the "pass to another
stored procedure issue" - this was never presented as an issue.
Everything is done in just the one procedure, and works fine. The issue
is that it's got no indexes, and therefore takes a couple of minutes to
process all the data it's just created in the temporary table.
Finally, and most critically, it was (as is usual for the people
posting and trying to fix the issue) not me that wrote this.
It's written, live, and I can't justify such a change without a lot of
procedure, and strong arguments of the benefit compared to the cost and
risk.
I agree however that there are times where a good use of a table would
have exceeded the use of a temporary one, this is just unfortunately
not one of them
Cheers
Will|||Hi Will,
I've inserted my comments in the text below...
<snip>
> In this case it's not possible however...
> You see, first off the procedure is building up for a crystal report
> (it's where I see the most horrendous use of temporary tables).
> Therefore this has to be able to have 2 people running the report at
> the same time (though I dread to think of the load on the server).
Ooo yuck, I hate Crystal Reports. I used to love it, but over the last 5
years, my opinion has changed, it tends to be extremely expensive on DB
Resources. You have my sympathy :-p

> If I start using Guids and globalising this behaviour I'll start
> getting issues with 2 competing processes locking this table, when in
> fact the data is entirely separate, and can be kept that way. Also, the
> data is temporary, and I don't see the benefit in actually creating the
> table, just to drop it afterwards.
There's a couple of interesting points here...
1. You can avoid the competing process issue by using the NoLock table hint.
Normally, GUID's generated are completely different, so in theory you should
be working on different parts of the table when inserting/updating, however
you can use the RowLock table hint to ensure that Page/Table locks do not
occur. This should allow 2 or more people to work on the table at the same
time.
2. The thing about a temporary table is that it's not really a temporary
table. It's a real one. It's just that SQL has added a cleanup routine so
that the table is destroyed when the connection closes (or scope changes
to a parent level). To all intents and purposes it has all the same problems
as a real table, but you also need to allocate resources to create the
temporary table every time that the code is executed.

> I don't quite get where you're coming from with the "pass to another
> stored procedure issue" - this was never presented as an issue.
> Everything is done in just the one procedure, and works fine. The issue
> is that it's got no indexes, and therefore takes a couple of minutes to
> process all the data it's just created in the temporary table.
>
You're using just one procedure, so that's not an issue. If your stored proc
used either exec or sp_executesql, the called code would need to be recompiled
every time, because the temp table is in fact a different table each time it
is created. To test this, create a temporary table using SSMS or QA, then
execute Select * from tempdb.sysobjects where xtype='U' - your temporary
table's name will have a suffix which changes every session!
The problem is that because SQL needs to re-compile every time, this can take
a lot of processor resources.
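Something like this (the table name is just an example) makes the point:

```sql
-- Run this in two separate QA/SSMS sessions and compare the names returned:
CREATE TABLE #Scratch (Id int)

SELECT name
FROM   tempdb.dbo.sysobjects
WHERE  xtype = 'U'
  AND  name LIKE '#Scratch%'
-- The name comes back padded out with underscores plus a per-session
-- numeric suffix, so as far as the optimiser is concerned it really is
-- a different table each time.
```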

> Finally, and most critically, it was (as is usual for the people
> posting and trying to fix the issue) not me that wrote this.
> it's written, live, and I can't justify such a change without a lot of
> procedure, and strong arguments of the benefit compared to the cost and
> risk.
I completely understand that problem.
Regards
Colin Dawson
www.cjdawson.com|||>> I'm looking at a query at the moment which builds up a large-ish
(50,000 rows) temporary table, then goes about performing various
updates and selects. This is taking ages as the table is too large, so
full table scans are being performed on about 8 or 9 separate selects.
<<
Why can you not avoid the proprietary temp table code and procedural
processing with a single query? What you have described (vaguely) is
the way we wrote code with magnetic tape and punch cards -- multiple
passes over the data.
Good SQL programmers do things in one statement, not by mimicking
magnetic tape files from the 1950s.|||Celko,
I can't avoid the temporary table code as I'm not allowed to re-write
the stored procedure. However, I would argue that temporary tables have
their place, just as anything else in SQL does (even cursors). The reason
they are of benefit here is that the stored procedure has to produce a
result set for Crystal Reports. This means there has to be a large amount
of presentation logic within the stored procedure (which I really hate;
in my opinion Crystal and other reporting tools should provide a C#
mid-layer to allow that sort of manipulation of the data). The
requirements of the report mean that we need to update rows based on
others already in the table, and generally manipulate the table to get it
into the right format for Crystal. There may be a purely set-based
solution, but in this case I think it would be detrimental to
performance.
I would love it if there was a way to keep the queries as simple,
functional requests, but with no mid-layer it's not that simple.
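To give a flavour of the kind of step involved (a hypothetical sketch only -
#Report and its columns are invented, not the real procedure's names):

```sql
-- Flag rows in the temp table based on other rows already in it,
-- via a self-join on the previous row in each group.
UPDATE cur
SET    cur.Flagged = 1
FROM   #Report AS cur
JOIN   #Report AS prev
  ON   prev.GroupId = cur.GroupId
 AND   prev.RowNum  = cur.RowNum - 1
WHERE  prev.Amount > cur.Amount
```

Without indexes on GroupId/RowNum, each such self-join is a pair of full
table scans, which is exactly why the 50,000-row table takes minutes.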