Calculating Max and Min Memory Settings

Here is the T-SQL we use to calculate the Max and Min memory settings for our SQL Server instances:

DECLARE
 @curMem BIGINT,
 @maxMem BIGINT,
 @optMem BIGINT,
 @minMem FLOAT;

IF OBJECT_ID('tempdb..#msver') IS NOT NULL
 DROP TABLE #msver;

CREATE TABLE #msver (
 [index] int,
 [name] varchar(200),
 [intval] bigint,
 [charval] varchar(2000)
);

IF OBJECT_ID('tempdb..#report') IS NOT NULL
 DROP TABLE #report;

CREATE TABLE #report (
 [name] varchar(200),
 [value] bigint,
 [description] varchar(2000)
);

INSERT INTO [#msver] ([Index], [Name], [IntVal], [CharVal]) EXEC [master].dbo.xp_msver;

INSERT INTO [#report] ([Name], [Value], [Description])
SELECT 'total memory', [IntVal], 'Total amount of physical memory in server'
FROM [#msver]
WHERE [Name] = 'PhysicalMemory';

INSERT INTO [#report] ([Name], [Value], [Description])
SELECT [Name], CONVERT(BIGINT,[value_in_use]), [description]
FROM [master].sys.[configurations]
WHERE [name] IN ('max server memory (MB)', 'min server memory (MB)');

INSERT INTO [#report] ([Name], [Value], [Description])
SELECT [Name], CONVERT(BIGINT,[value_in_use]), [description]
FROM [master].sys.[configurations]
WHERE [name] = 'optimize for ad hoc workloads';

SELECT @curMem = CONVERT(BIGINT, [Value]) FROM [#report] WHERE [Name] = 'max server memory (MB)';

SELECT @maxMem = CONVERT(BIGINT, [Value]) FROM [#report] WHERE [Name] = 'total memory';
PRINT 'Current Max Memory Setting: ' + CAST(@curMem AS VARCHAR(20))
PRINT 'MAX Server Memory: ' + CAST(@maxMem AS VARCHAR(20))
SET @optMem = @maxMem - (CASE WHEN @maxMem <= 16384 THEN (@maxMem / 4) ELSE 4096 + (@maxMem - 16384)/8 END);
PRINT 'Suggested Max Memory: ' + CAST(@optMem AS VARCHAR(20))
SET @minMem = @optMem * .60
PRINT 'Suggested Min Memory: ' + CAST(@minMem AS VARCHAR(20))

The script outputs the suggested Max and Min memory settings based on the total physical memory of the server.
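The CASE expression encodes a common sizing rule of thumb: reserve a quarter of total memory for the OS on servers with up to 16 GB of RAM, and 4 GB plus one-eighth of everything above 16 GB on larger servers; Min memory is then 60% of the suggested Max. The same arithmetic, sketched in Python for a quick sanity check (function name is ours, not part of the script):

```python
def suggested_memory(total_mb: int) -> tuple[int, float]:
    """Mirror the T-SQL CASE: reserve total/4 up to 16 GB,
    otherwise 4096 MB plus 1/8 of the memory above 16 GB."""
    if total_mb <= 16384:
        reserved = total_mb // 4
    else:
        reserved = 4096 + (total_mb - 16384) // 8
    max_mem = total_mb - reserved   # suggested 'max server memory (MB)'
    min_mem = max_mem * 0.60        # suggested 'min server memory (MB)'
    return max_mem, min_mem

# For a 32 GB (32768 MB) server: reserved = 4096 + 16384/8 = 6144,
# so suggested max = 26624 MB and suggested min = 15974.4 MB.
```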

bcp to CSV 0x00 character issues – NUL in Notepad++

I tried every bcp option under the sun trying to get rid of the NUL characters that kept cropping up in my CSV file from a bcp export. I finally stumbled on the solution in a forum post.

I wrapped the columns causing the problem with NULLIF(column, '') as suggested in the forum post, and that worked like a champ.

bcp "select * from ##MessageRights" queryout c:\Scripts\MessageRights.csv -w -t, -T -E -k -S SERVERNAME
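If changing the source query isn't an option, another approach is to strip the NUL characters from the file after export. A minimal Python sketch, assuming the file was written by bcp with the -w (Unicode / UTF-16LE) option as above; the file paths are placeholders:

```python
def strip_nuls(data: bytes, encoding: str = "utf-16-le") -> bytes:
    """Decode the exported file, drop U+0000 (NUL) characters,
    and re-encode. Decoding first avoids mangling UTF-16 bytes."""
    text = data.decode(encoding)
    return text.replace("\x00", "").encode(encoding)

if __name__ == "__main__":
    with open(r"c:\Scripts\MessageRights.csv", "rb") as src:
        cleaned = strip_nuls(src.read())
    with open(r"c:\Scripts\MessageRights_clean.csv", "wb") as dst:
        dst.write(cleaned)
```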

sp_cycle_errorlog Msg 17049 Unable to Cycle Error Log file

Today I had an ERRORLOG file going buck wild! It was up over 82GB. This was discovered because of a hard drive monitor we have on the server telling us when the hard drive space is running low. Knowing that I had little chance of opening the ERRORLOG in SSMS or in a text editor I used the xp_readerrorlog stored procedure to parse the last few log entries to see what was up.

xp_readerrorlog 0,1,NULL,NULL,'MM/DD/YY HH:MI:SS','MM/DD/YY HH:MI:SS','desc'

--Value of error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...
--Log file type: 1 or NULL = error log, 2 = SQL Agent log
--Search string 1: String one you want to search for
--Search string 2: String two you want to search for to further refine the results
--Search from start time
--Search to end time
--Sort order for results: N'asc' = ascending, N'desc' = descending

We had DEADLOCKS going crazy: 100K+ entries written into the ERRORLOG every minute. I checked which trace flags we had enabled and disabled the DEADLOCK-related flags (1204 and 1222 are the ones that write deadlock details to the error log).



Then I tried to cycle the log to get a fresh log file, and received the following error:

EXEC sp_cycle_errorlog

Msg 17049, Level 16, State 1, Procedure sp_cycle_errorlog, Line 9
Unable to cycle error log file from 'X:\Path to ERRORLOG\ERRORLOG' to 'X:\Path to ERRORLOG\ERRORLOG.1' due to OS error '32(The process cannot access the file because it is being used by another process.)'. A process outside of SQL Server may be preventing SQL Server from reading the files. As a result, errorlog entries may be lost and it may not be possible to view some SQL Server errorlogs. Make sure no other processes have locked the file with write-only access.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I ran this query to see what was using xp_readerrorlog and found two sessions. I killed the sessions and re-ran sp_cycle_errorlog.

SELECT a.[session_id], a.[start_time], b.[text]
FROM sys.dm_exec_requests a
OUTER APPLY sys.dm_exec_sql_text(a.sql_handle) b
WHERE a.session_id > 50
AND a.session_id <> @@SPID
AND (b.[text] = 'xp_readerrorlog' OR b.[text] = 'sp_cycle_errorlog')
ORDER BY a.start_time;

--Killed the SPIDs

Once I had the log cycled I just deleted it. It was more important to get the disk space under control than to research the deadlocks right then. After some research we discovered the deadlocks were the result of a missing index in our replicated environment, causing the DEADLOCK flood of '13.