SET NOCOUNT ON Performance

I was a little skeptical of the claim that SET NOCOUNT ON boosts query performance, so I went to test it. My new job requires me to wear both DBA and DB Developer hats, so I am trying to get up to speed on the basics.

I created a customer table with 3,000 rows. I used http://www.generatedata.com/#generator to generate some data. Love that site. You can quickly generate data for free (I know there are probably better pay-to-play tools out there, but this one is free).

I then created two very simple Stored Procedures.

SelectCustomer

create procedure SelectCustomer as
set nocount on -- suppress the "rows affected" messages
select * from customer

SelectCustomer2

create procedure SelectCustomer2 as
select * from customer -- no SET NOCOUNT ON; row count messages are sent

I turned on query statistics using SET STATISTICS TIME ON and called the procs. Execution times varied, but one constant was that SelectCustomer performed a lot better than SelectCustomer2. In 3 samples SelectCustomer performed 24%, 34%, and 35% better. These numbers seem skewed, and I would not go by the drastic performance gains of this little test case, but one thing is for sure: SET NOCOUNT ON does provide a performance boost. I imagine the boost on a proc that returns several million rows could be drastic.
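For reference, here is the kind of timing harness I used. It is just a sketch against the two procs above; the CPU and elapsed times come back on the Messages tab.

-- Enable per-statement CPU and elapsed-time reporting
SET STATISTICS TIME ON

EXEC SelectCustomer    -- with SET NOCOUNT ON
EXEC SelectCustomer2   -- without

SET STATISTICS TIME OFF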

SQL Server Load testing

WARNING: I am not responsible for an out of control pack of wild honey badgers destroying your database. Harness the power of the Honey Badger at your own risk.

I wanted a quick and easy way to do some SQL Server load testing, so I came up with the Honey Badger Load Tester. It is a set of PowerShell scripts that will start a user-determined number of sessions to connect and execute custom query payloads against your SQL Server database.

releaseHoneyBadgers.ps1
This is the main caller script. Here you specify a badger count…as in how many sessions you would like to generate.

$badgerCount = 15                     # number of badger sessions to release
$scriptFile = "c:\scripts\query.ps1"  # the query payload script below

$i = 1
while ($i -le $badgerCount) {
    Start-Job -FilePath $scriptFile   # each job is its own session running the payload
    $i++
}

query.ps1
The query script executes the query payload. The following variables can be set to meet your needs:

  • loopCount – how many times you want each honey badger to loop through the payload
  • Query1,2,3,x – Query you want each honey badger to execute. You can add as many as you want here.
  • ServerInstance – set this to your SQL Server name
  • Database – name of the database you want to connect to
<#
LOOP COUNT
Indicates how many times you want to execute the query loop. Default
is 5000, meaning it will run your query payload 5000 times.
#>
$loopCount = 5000

<#
DATABASE CONNECTION INFO
Set your database properties here.
#>
$ServerInstance = "SERVER1"
$Database = "Db1"
$ConnectionTimeout = 30
$QueryTimeout = 120

<#
QUERY LIST
Use this to direct your payload. You can of course add more queries here and then
call them in the while loop below.
#>
$Query1 = "SELECT * FROM Db1.[dbo].[table1]"
$Query2 = "SELECT * FROM Db1.[dbo].[table2]"
$Query3 = "SELECT * FROM Db1.[dbo].[table3]"
$conn=new-object System.Data.SqlClient.SQLConnection
$ConnectionString = "Server={0};Database={1};Integrated Security=True;Connect Timeout={2}" -f $ServerInstance,$Database,$ConnectionTimeout
$conn.ConnectionString=$ConnectionString
$conn.Open()

$i = 1
while ($i -le $loopCount) {
    # Run each query in the payload once per iteration
    foreach ($query in @($Query1, $Query2, $Query3)) {
        $cmd = New-Object System.Data.SqlClient.SqlCommand($query, $conn)
        $cmd.CommandTimeout = $QueryTimeout
        $ds = New-Object System.Data.DataSet
        $da = New-Object System.Data.SqlClient.SqlDataAdapter($cmd)
        [void]$da.Fill($ds)   # pull the full result set down to the client
        $ds.Tables
    }
    $i++
}
$conn.Close()   # close once, after the whole payload has run
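While the badgers are running, a quick way to watch them from the server side is to count user sessions. This is just a sketch using sys.dm_exec_sessions (SQL 2005+), not part of the Honey Badger scripts:

-- Count active user sessions, grouped by host, while the load test runs
SELECT host_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY host_name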

Here are some shots of the Honey Badger in action (screenshots):

  • Executed the releaseHoneyBadgers.ps1 script with a badger count of 15
  • Then I fired up 15 more badgers
  • Laptop holding its own against the badgers

SQL Parameter Sniffing

There are several articles out there on parameter sniffing; I had never come across the issue until this week. Here are some of my notes, and here is a good article for definitions and examples: http://www.simple-talk.com/sql/t-sql-programming/parameter-sniffing/

Lessons learned

  1. In certain cases you will need to utilize optimizer hints. The optimizer engine is really good, but it's not perfect.
  2. Use WITH RECOMPILE in a stored procedure CREATE statement with caution. The system overhead of recompiling the plan on every execution may eliminate the performance gains of picking the correct plan.
  3. Make sure you check your index fragmentation and statistics, just to eliminate the obvious.
  4. Learned some other techniques to reduce parameter sniffing (using local variables); see the sketch at the end of this section.
--Row count of the test table
select COUNT(*) from dbo.table1

select * from table1

--Hardcoded Date
select * from table1 with (nolock) where CollectionDate >= '4/24/2012 4:04PM'

--Date Variable
DECLARE @FromDate datetime
set @FromDate = '4/24/2012 4:04PM'
select @FromDate
select * from table1 with (nolock) where CollectionDate >= @FromDate

Using an Index HINT

--Date Variable with INDEX HINT
DECLARE @FromDate datetime
set @FromDate = '4/24/2012 4:04PM'
select @FromDate
select * from table1 with (nolock, INDEX(IX_table1_CollectionDate)) where CollectionDate >= @FromDate

CREATE PROCEDURE testSearch2
@FromDate datetime
AS
select * from table1 with (nolock)
where CollectionDate >= @FromDate

--NO RECOMPILE
exec testSearch2 '4/24/2012 4:04PM'


Then I dropped the procedure and recreated it with the RECOMPILE option:

DROP PROCEDURE testSearch2
GO
CREATE PROCEDURE testSearch2
@FromDate datetime
WITH RECOMPILE
AS
select * from table1 with (nolock)
where CollectionDate >= @FromDate

--AFTER RECOMPILE
exec testSearch2 '4/24/2012 4:04PM' 
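As for the local-variable technique from lesson 4, here is a rough sketch (testSearch3 is a hypothetical proc I am adding for illustration, not from the linked article). Copying the parameter into a local variable keeps the optimizer from sniffing the value at compile time, so the plan is built from general statistics instead.

CREATE PROCEDURE testSearch3
@FromDate datetime
AS
-- The optimizer cannot sniff a local variable's value,
-- so the plan is built from average density statistics
DECLARE @LocalFromDate datetime
SET @LocalFromDate = @FromDate
select * from table1 with (nolock)
where CollectionDate >= @LocalFromDate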

SQL State 42000 Rebuild Index

So when using this script

USE [database name]
GO
exec sp_MSforeachtable @command1='ALTER INDEX ALL ON ? REBUILD'
GO
exec sp_MSforeachtable @command1='UPDATE STATISTICS ? WITH FULLSCAN'
GO

I started to receive this error message:

ALTER INDEX failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods. [SQLSTATE 42000]

You will run into this error when the rebuild hits an indexed view or a computed column. I quickly tried adding SET QUOTED_IDENTIFIER ON to the top of my query, but after some reading found out that this has no effect when using sp_MSforeachtable.

For now I created a DB maintenance plan to rebuild the indexes and recompute the statistics. Another way around the issue would be a custom script/stored procedure that loops through all the tables in sys.objects (where type = 'U') and issues the SET statement before each ALTER INDEX statement; a rough sketch follows.
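This is untested, just a sketch of that workaround as an ad-hoc batch; the batch's QUOTED_IDENTIFIER ON setting carries into the sp_executesql calls, which is exactly what sp_MSforeachtable's own setting prevented.

SET QUOTED_IDENTIFIER ON
GO
DECLARE @tbl nvarchar(300), @sql nvarchar(max)

DECLARE tbl_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
    FROM sys.objects
    WHERE type = 'U'

OPEN tbl_cur
FETCH NEXT FROM tbl_cur INTO @tbl
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Rebuild and update stats under this batch's QUOTED_IDENTIFIER setting
    SET @sql = N'ALTER INDEX ALL ON ' + @tbl + N' REBUILD'
    EXEC sp_executesql @sql
    SET @sql = N'UPDATE STATISTICS ' + @tbl + N' WITH FULLSCAN'
    EXEC sp_executesql @sql
    FETCH NEXT FROM tbl_cur INTO @tbl
END
CLOSE tbl_cur
DEALLOCATE tbl_cur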

Optimizer Hints

Man, I really need to spend some time and learn how to use optimizer hints. A user reported that a query was running slow. I started troubleshooting by generating a graphical execution plan, and the main problem was exposed right away: 95% of the cost was a table scan on a 7-million-row table.

I then fed the query into the Database Engine Tuning Advisor (DETA). The DETA suggested I add a covering index on the 7-million-row table for the two columns in my select. With a little more investigation I realized that the PK for that table already covers these two columns. Why was the optimizer not using the PK index in the execution plan?

So I set out to rebuild the index and update statistics. No dice; the optimizer still used the table scan instead of the index seek. So I started engaging support for the application, and after throwing around a few emails letting them know I had no idea what I was talking about, they finally sent back a query with the optimizer hint WITH (INDEX(Index_Name)).
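For illustration, the hint looks roughly like this; the table, index, and column names here are made up, not the vendor's actual query.

-- Force the optimizer to use a specific index instead of scanning
SELECT Col1, Col2
FROM dbo.BigTable WITH (INDEX(PK_BigTable))
WHERE Col1 = 'some value'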

The query went from taking 30+ seconds (sometimes 2 minutes or more) to executing in just under 2 seconds!! Another interesting fact is that the same query ran in 9 seconds on a test server while using the table scan; however, that server is a lot beefier, with faster procs and a lot more RAM.

So anyway, all that to say this, I really need to look into Optimizer Hints. Here is a link from http://www.sql-server-performance.com with some more optimizer hint information.

Error: 17803, Severity: 20, State: 14

Had a SQL Server go buck wild this morning. Here are some entries from the log:

2010-02-23 06:29:27.08 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:27.08 server Insufficient memory available..
2010-02-23 06:29:29.37 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:29.37 server Insufficient memory available..
2010-02-23 06:29:31.63 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:31.63 server Insufficient memory available..
2010-02-23 06:29:33.90 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:33.90 server Insufficient memory available..
2010-02-23 06:29:36.16 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:36.16 server Insufficient memory available..
2010-02-23 06:29:36.38 spid1 LazyWriter: warning, no free buffers found.
2010-02-23 06:29:36.38 spid1 Buffer Distribution: Stolen=1722 Free=0 Procedures=24
2010-02-23 06:29:36.38 spid1 Buffer Counts: Commited=2245 Target=2245 Hashed=499
2010-02-23 06:29:36.38 spid1 Procedure Cache: TotalProcs=6 TotalPages=24 InUsePages=24
2010-02-23 06:29:36.38 spid1 Dynamic Memory Manager: Stolen=1746 OS Reserved=504
2010-02-23 06:29:36.38 spid1 Global Memory Objects: Resource=692 Locks=41
2010-02-23 06:29:36.38 spid1 Query Memory Manager: Grants=0 Waiting=0 Maximum=333 Available=333
2010-02-23 06:29:38.43 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:38.43 server Insufficient memory available..
2010-02-23 06:29:40.71 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:40.71 server Insufficient memory available..
2010-02-23 06:29:42.98 server Error: 17803, Severity: 20, State: 14
2010-02-23 06:29:42.98 server Insufficient memory available..
2010-02-23 06:29:45.26 server Error: 17803, Severity: 20, State: 14

I simply gave this server a quick reboot; apparently the server was starved for memory. Fortunately it was a DEV server, so no production downtime. I am working on getting our monitoring software updated. Once that is done I will be able to fire off alerts based on memory conditions. Should be good times.
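Until the monitoring software is in place, something like this can be polled for an alert. It's a sketch that assumes a SQL Server 2008+ instance, where sys.dm_os_sys_memory is available (the box above was older than that):

-- Physical memory state as seen from inside SQL Server (2008+)
SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory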

Lazywriter detailed

Based on this article from http://www.sql-server-performance.com, I created this graphical representation of the Lazywriter and how it interacts with the memory pool.

From SQLServerPedia.com:

The Lazy writer serves two purposes:

  1. Ensure that a specified number of buffers are free in the Buffer Pool so they can be allocated for use by the server.
  2. Monitor the usage of committed memory by the Buffer Pool and adjust it as necessary so that enough physical memory remains free to prevent Windows from paging.

The Lazy writer can adjust the number of buffers in the buffer pool if dynamic memory management is enabled. SQL Server estimates the number of necessary buffer pool buffers based on system activity and the number of stalls; a stall occurs when a request for memory has to wait on a free buffer page.

To monitor the Lazy Writer, use the perf counter SQLServer:Buffer Manager\Lazy writes/sec. It tracks how many times per second the Lazy Writer moves dirty pages from the buffer to disk in order to free up buffer space. Usually this should not be a high value, say no more than 20 per second or so, though of course you will need to compare it with your established baseline. Ideally it should be close to zero: if it is zero, the buffer cache is big enough and SQL Server doesn't have to free up dirty pages. If the value is high, you may be experiencing a memory bottleneck.
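If you'd rather sample it from T-SQL than PerfMon, the counter is also exposed in sys.dm_os_performance_counters. Note this is the raw cumulative value, so take two samples a known interval apart and divide the delta by the elapsed seconds to get the per-second rate PerfMon shows:

-- Raw lazy writes counter (cumulative since instance start)
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Lazy writes/sec'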