At times you need to dump the results of a stored procedure call to a text file. To me, the simplest way to achieve this is a call to the OSQL command line utility.
osql -E -Q "exec master..sp_help_revlogin" -o c:\GenLogins.sql
This little gem uses sp_help_revlogin (Microsoft's login-transfer script; more info in their KB article on transferring logins) to dump the SQL logins to a text file. Handy to run for recovery purposes: the resulting script can be used to quickly recreate logins after a restore.
There are about a billion articles about this already, I am sure. I created this script using several different articles and samples. It is just a straightforward way of quickly listing all the details about index fragmentation for the database of your choosing, sorted by avg_fragmentation_in_percent.
DECLARE @theDatabase int
SET @theDatabase = DB_ID('Your Database Here')

SELECT OBJECT_NAME(i.object_id) AS TableName,
       i.name AS TableIndexName,
       p.*
FROM sys.dm_db_index_physical_stats (@theDatabase, NULL, NULL, NULL, 'DETAILED') p
JOIN sys.indexes i
  ON i.object_id = p.object_id
 AND i.index_id = p.index_id
ORDER BY p.avg_fragmentation_in_percent DESC
Again, there are several articles out there on how to interpret the results. I like to target the most used indexes first; you can use SQL Profiler to find out what is used the most (an article on SQL Profiler should be my next post). Sometimes the most used index is also the largest, requiring the most resources to rebuild. Once you determine which indexes need attention, you have some options for repairing the fragmentation.
Books Online Search Topic: Reorganizing and Rebuilding Indexes.
Option 1: Reorganizing the Index – uses minimal system resources. Reorders the leaf-level pages to match the logical order of the leaf nodes; having the pages in order improves index-scanning performance. The index is reorganized within the existing pages allocated to it, so no new pages are allocated. Reorganizing also compacts the index pages. Reorganizing is good for indexes that are not heavily fragmented.
Option 2: Rebuilding the Index – Rebuilding an index drops the index and creates a new one. All fragmentation is removed, disk space is reclaimed by compacting the pages using the specified or existing fill factor setting, and the index rows are reordered in contiguous pages. This can reduce the number of page reads required to obtain the requested data and can improve disk performance. You can use either ALTER INDEX REBUILD or CREATE INDEX with DROP_EXISTING to rebuild an index.
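Both options boil down to one statement each. A minimal sketch, where IX_YourIndex, dbo.YourTable, and YourColumn are placeholder names and the thresholds are the rough rules of thumb from Books Online:

```sql
-- Rule of thumb from Books Online: reorganize when fragmentation is
-- roughly between 5% and 30%, rebuild when it is above 30%.

-- Option 1: reorganize (always online, minimal resources)
ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;

-- Option 2a: rebuild via ALTER INDEX (drops and recreates the index)
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD WITH (FILLFACTOR = 90);

-- Option 2b: rebuild via CREATE INDEX with DROP_EXISTING
CREATE INDEX IX_YourIndex ON dbo.YourTable (YourColumn)
WITH (DROP_EXISTING = ON);
```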
AWE – Address Windowing Extensions. Honestly, I have heard the term before, and I knew that the person/article using it was talking about memory. So I set out to get a clearer understanding of what AWE meant and how it relates to SQL Server.
From M$ web site: Address Windowing Extensions (AWE) allow 32-bit operating systems to access large amounts of memory.
My first question from that is how? How does that work? Some more searching led me to this from Microsoft’s web site:
Microsoft SQL Server uses the Microsoft Windows Address Windowing Extensions (AWE) API to support very large amounts of physical memory. SQL Server can access up to 64 gigabytes (GB) of memory on Microsoft Windows 2000 Server and Microsoft Windows Server 2003.
So from that we know the following:
* AWE is a set of memory management extensions (API)
* Only applicable to 32-bit systems
* Will increase the amount of memory SQL Server can support
Standard 32-bit addresses can map a maximum of 4GB of memory. You can find several articles on the Boot.ini /3GB switch to override the Windows default split of 2GB for SQL Server and 2GB for the OS. The override gets you another 1GB, so 3GB is dedicated to SQL Server while 1GB is reserved for the OS.
So yes, you can use AWE to allow SQL Server to allocate more memory; however, that memory can only be used by the SQL Server data buffers and cannot be used for things like caching query plans, sorting, indexing, user connections, etc. And my best guess is that enabling AWE adds some overhead that you would not see in a pure 64-bit installation. If the budget is there and you need to address more than 3GB of RAM, you may want to go the 64-bit route before you utilize AWE.
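For reference, AWE is switched on through sp_configure; a minimal sketch (it is an advanced option, takes effect only after a restart of the SQL Server service, and the service account also needs the Lock Pages in Memory privilege):

```sql
-- 'awe enabled' is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Turn on AWE (requires a SQL Server service restart to take effect)
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;

-- With AWE on, cap SQL Server's memory explicitly (value in MB)
EXEC sp_configure 'max server memory', 6144;
RECONFIGURE;
```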
Below is a script you can use to backup a database and include a date and time stamp on the file name.
DECLARE @backdate varchar(20)
SET @backdate = REPLACE(REPLACE(CONVERT(varchar(100), GETDATE(), 100), ':', '_'), ' ', '_')
EXEC ('BACKUP DATABASE [DatabaseName] TO DISK = N''c:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\databaseName_' + @backdate + '.bak'' WITH INIT, NOUNLOAD, NAME = N''Database Name backup'', SKIP, STATS = 10, FORMAT')
The date and time stamp will look like this: Dec_14_2009_12_00PM
You can use this to append to the file name if you need to keep multiple copies of your .bak file.
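This pairs nicely with the OSQL trick from the top of this post: save the script to a file and you have a one-line backup command you can schedule. A sketch, assuming the script above is saved as c:\scripts\backup.sql (a hypothetical path):

```
osql -E -i c:\scripts\backup.sql -o c:\scripts\backup_log.txt
```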
DTS. Sometimes it is hard to let go. I still use DTS early and often. I even migrate some DTS jobs to run on SQL Server 2005 servers. I know, I know. I should invest the time in converting them to SSIS and I keep putting it off. I will get to it some day.
In the meantime I ran across the below error today after installing the 2000 DTS Designer components.
The procedure entry point ?ProcessExecute@@YAXPAUHWND__@@PBG1@Z could not be located in the dynamic link library SEMSFC.dll.
The Designer Components are part of the 2005 Feature Pack located here. I had the following tools installed: SQL Server 2000 client tools (Enterprise Manager, Query Analyzer, Profiler, etc.) and SQL Server 2005 client tools (SSMS, Books Online, SSIS Design Studio, etc.). I had patched the SQL Server 2000 client tools to SP4, then went to install the DTS Designer Components. The install was successful, no errors. I went to fire up Enterprise Manager and then started receiving the error.
I promptly rebooted to see if that would fix the problem. No dice. So I did a Google search and stumbled on an installation order that would prevent the error from popping up. I didn't really want to reinstall everything, so I just re-applied SP4 for SQL Server 2000 and, boo-yeah, no error. Working good.
Here is a cursor to kill all the spids for a database you specify.
DECLARE @spid int
DECLARE @sql varchar(500)

DECLARE spid_cursor CURSOR FOR
select spid from master.dbo.sysprocesses where spid > 50 and DB_NAME(dbid) = 'Your Database Here'

OPEN spid_cursor
FETCH NEXT FROM spid_cursor INTO @spid
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'KILL ' + CAST(@spid AS varchar(20))
    EXEC (@sql)
    FETCH NEXT FROM spid_cursor INTO @spid
END
CLOSE spid_cursor
DEALLOCATE spid_cursor
You specify which database's connections to kill in the WHERE clause: DB_NAME(dbid) = 'Your Database Here'.