Identity Ignores Transactions

So…

I think I found a bug in the latest production version of SQL Server (2017 as of publishing). The easiest way to explain it is the title, “Identity Ignores Transactions,” and the easiest way to show it is with a small sample.

This happens in 4 parts:

Setup:

SET NOCOUNT ON;

IF OBJECT_ID('dbo.IdentityIgnoresTransactions') IS NOT NULL
DROP TABLE dbo.IdentityIgnoresTransactions;


CREATE TABLE dbo.IdentityIgnoresTransactions
( ID INT IDENTITY(1, 1), FunText VARCHAR(250) );


DBCC CHECKIDENT('dbo.IdentityIgnoresTransactions', NORESEED);

When you run this you’ll see nothing in the Results window and in the messages window you’ll see:

Checking identity information: current identity value 'NULL', current column value 'NULL'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Which is exactly what we’d expect. We haven’t entered anything into the table, so the identity value hasn’t had to be assigned yet, hence the NULL.

Now we enter some test data:

DECLARE @a SMALLINT;
SET @a = 0;
WHILE @a < 1000
BEGIN
	INSERT INTO dbo.IdentityIgnoresTransactions
	( FunText ) VALUES ('These ones we''ll keep!');
	SET @a += 1;
END

DBCC CHECKIDENT('dbo.IdentityIgnoresTransactions', NORESEED);
SELECT * FROM dbo.IdentityIgnoresTransactions;
GO

As you can see from the Results window, we now have 1000 records with identities assigned to them, and in the Messages window DBCC correctly reports that our last column value entered was 1000 and our current identity value is 1000.

Checking identity information: current identity value '1000', current column value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
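As a quick aside, if you only want that current identity value and not the full DBCC output, IDENT_CURRENT reports the same number:

/* Returns the last identity value generated for the table,
   matching the DBCC CHECKIDENT output above */
SELECT IDENT_CURRENT('dbo.IdentityIgnoresTransactions') AS CurrentIdentityValue;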

Now is when the fun starts to happen. This time the only code that’s changed is that we have introduced a transaction that we roll back:

BEGIN TRANSACTION;
DECLARE @a SMALLINT;
SET @a = 0
WHILE @a < 1000
BEGIN
	INSERT INTO dbo.IdentityIgnoresTransactions
	( FunText ) VALUES ('These ones we''ll rollback!');
	SET @a += 1
END

SELECT * FROM dbo.IdentityIgnoresTransactions;

ROLLBACK;

DBCC CHECKIDENT('dbo.IdentityIgnoresTransactions', NORESEED);
GO

At the end of this you can see that those records were in the table before the rollback, as evidenced by the 2000 records returned by the SELECT. However, we ran our identity check after the rollback, and because of that we’ll see some interesting results.

Checking identity information: current identity value '2000', current column value '1000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

The rollback did absolutely nothing for our identity value and left it as is. This is not good if you ask me, as we’ll see when we enter our third dataset.

DECLARE @a SMALLINT;
SET @a = 0;
WHILE @a < 1000
BEGIN
	INSERT INTO dbo.IdentityIgnoresTransactions
	( FunText ) VALUES ('These ones we''ll show that identity ignores transactions!');
	SET @a += 1;
END

DBCC CHECKIDENT('dbo.IdentityIgnoresTransactions', NORESEED);
SELECT * FROM dbo.IdentityIgnoresTransactions;

As you can see from the results, we still have our first 1000 records, our second 1000 are gone because of the rollback, but our third 1000 now have identities 2001-3000. Our identity check comes back with the same value for both numbers again, so at least that is consistent.

Checking identity information: current identity value '3000', current column value '3000'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

 

While we should never rely on the next identity value coming right after the last one, the continuity is nice to have. Also, shouldn’t a transaction deal with this? Where I foresee an issue is not in the one-time rollback but in a testing scenario where we are constantly inserting and rolling back, burning through identity values the whole time.
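If that constant insert-and-rollback cycle does start eating your identity range, one mitigation is to reseed back to the highest surviving value. A minimal sketch against the demo table (run after the rollback, the next insert would pick up at 1001):

/* Reset the identity to the highest ID that actually survived */
DECLARE @maxID INT = (SELECT MAX(ID) FROM dbo.IdentityIgnoresTransactions);
DBCC CHECKIDENT('dbo.IdentityIgnoresTransactions', RESEED, @maxID);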

Now it’s your turn: tell me why this is or is not a problem in the comments below.


Troubleshooting Deadlocks with RID as the wait resource

Recently I had occasion to take a look at the deadlocks happening on one of our servers. I usually do this the easy way, by running the sp_BlitzLock stored procedure put out there by the wonderful people at Brent Ozar Unlimited.

https://www.brentozar.com/archive/2017/12/introducing-sp_blitzlock-troubleshooting-sql-server-deadlocks/

This time, though, sp_BlitzLock ran for over an hour, so I was guessing there was an issue. It turns out it was mostly on my side. I also have an Extended Events trace running just for deadlocks to make them easier to diagnose, and I noticed one that was probably the reason sp_BlitzLock was having trouble. In the visual representation it looks like this:

[Image: deadlock graph]

I can’t make heads or tails of that, but I can tell you it looks like a really bad brawl for resources. It’s like a Jerry Springer show with a few extras thrown in. Since I knew my graph wasn’t going to be helpful in this instance, I went to the actual XML and tried to figure out how I could tune this to make it better in the future. I needed to know exactly where the issue was, so the waitresource pointer is a good place to start.

[Image: waitresource pointer in the deadlock XML]
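As an aside, if you don’t keep a dedicated deadlock trace like mine, the same deadlock XML can usually be pulled from the built-in system_health Extended Events session; a minimal sketch reading its file target:

/* Pull recent deadlock graphs out of the default system_health session */
SELECT CAST(xet.event_data AS XML) AS DeadlockGraph
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL) AS xet
WHERE xet.object_name = 'xml_deadlock_report';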

You will see many blog articles on how to find SQL Server wait resources when the resource type is a key, a page, or an object (I suggest Kendra Little’s blog post). There is, however, a noticeable lack of articles explaining a RID (a RID is a row identifier, the locator for a row in a table with no clustered index). I finally found how to tie a RID back to an actual resource name, but the method I found was written up for corruption troubleshooting, so the details were a bit hazy at first.
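Before the full script, it helps to know that the waitresource string breaks apart on colons as db_id:file_id:page_id:slot (the script below feeds that last number into its index filter). A quick sketch for splitting it, using the old PARSENAME trick of swapping the colons for periods:

/* Split a RID wait resource into its four parts */
DECLARE @waitresource VARCHAR(50) = '15:15:11695844:3';
DECLARE @dotted VARCHAR(50) = REPLACE(@waitresource, ':', '.');
SELECT PARSENAME(@dotted, 4) AS [db_id],
       PARSENAME(@dotted, 3) AS [file_id],
       PARSENAME(@dotted, 2) AS [page_id],
       PARSENAME(@dotted, 1) AS [slot];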

Here is the full script that I was able to use in deciphering a RID back to a named resource.


--RID: 15:15:11695844:3
/* Break apart my RID into its individual sections.
   Declared as strings so they can be concatenated inside EXECUTE () below. */
DECLARE @db_id VARCHAR(10) = '15';
DECLARE @file_id VARCHAR(10) = '15';
DECLARE @page_id VARCHAR(20) = '11695844';
DECLARE @index_id VARCHAR(10) = '3';
DECLARE @db_name VARCHAR(256) = (SELECT DB_NAME(@db_id));


/* Create and populate a temp table with the metadata from that data page */
CREATE TABLE #PageResults
( [ParentObject] NVARCHAR(255), [Object] NVARCHAR(255), [Field] NVARCHAR(255), [VALUE] NVARCHAR(255) );
INSERT INTO #PageResults
( ParentObject, [Object], Field, [VALUE] )
EXECUTE ('DBCC PAGE (' + @db_id + ', ' + @file_id + ', ' + @page_id + ', 0) WITH TABLERESULTS;');


/* Tie the metadata from the DBCC PAGE output back to a named resource */
EXECUTE ('
SELECT sc.name AS schema_name,
so.name AS object_name,
si.name AS index_name
FROM ' + @db_name + '.sys.partitions AS p
INNER JOIN ' + @db_name + '.sys.objects AS so
ON p.object_id = so.object_id
INNER JOIN ' + @db_name + '.sys.indexes AS si
ON p.index_id = si.index_id
AND p.object_id = si.object_id
INNER JOIN ' + @db_name + '.sys.schemas AS sc
ON so.schema_id = sc.schema_id
INNER JOIN #PageResults pr
ON so.object_id = pr.[VALUE]
WHERE si.index_id = ' + @index_id + '
AND Field = ''Metadata: ObjectId'';
')

[Image: query results showing schema, object, and index names]

So you want to customize your SQL Server Management Studio

There has been many a blog post out there showing some of the great ways you can customize the look and feel of your Management Studio windows. One of my favorites among the recently published is from the great people at Brent Ozar Unlimited (Brent himself in this case) here.

There are even blog posts out there about how you can export your settings from one machine and import them to another, like here.

All of the blog posts I’ve read, however, have one problem for me. They seem very “clicky,” as in a lot of clicking of the mouse (or keyboard shortcuts) has to happen every time you want to import your settings, and I thought there had to be a more programmatic way.

After all, many of us espouse the DRY (Don’t Repeat Yourself) principle when coding. Why wouldn’t we want to trim time and energy off of something as basic as a settings import?

I searched for a way to programmatically change just one setting (once I know how to do it with one, I can do it with the rest), and there was no source I could find for it. I did, however, find all the clues, and so I figured I would share here how I turned on line numbers in my T-SQL query windows in SSMS 17.x by running a script.

First the setup:

  • All of your user settings for SSMS 17.x are kept in a file at “%userprofile%\Documents\Visual Studio 2015\Settings\SQL Server Management Studio\NewSettings.vssettings”.
  • This is an XML file (with a weird extension, obviously).
  • The two languages I’m most familiar with today are SQL and PowerShell, and I obviously can’t do this in SQL (the settings file is re-saved every time we exit SSMS, so I’d blow away my settings), so PowerShell here we go.

The first step is to back up your file (because we want to be safe about this, after all).

<# File and backup file variables to be used later #>
$SSMSSettingsFile = "$Env:userprofile\Documents\Visual Studio 2015\Settings\SQL Server Management Studio\NewSettings.vssettings"
$SSMSSettingsBackupFile = "$Env:userprofile\Documents\Visual Studio 2015\Settings\SQL Server Management Studio\NewSettings.vssettings.backup"

<# Copy our file to a backup file #>
Copy-Item -Path $SSMSSettingsFile -Destination $SSMSSettingsBackupFile

The second step is to import the file into PowerShell. Because this is XML, we have to do it a special way, but don’t use Import-Clixml; it’s good, just not for this.

<# Import our XML file into PowerShell (make sure you use the [XML] accelerator, otherwise it imports as a string and that breaks things) #>
$XMLDocument = [XML](Get-Content $SSMSSettingsFile)

The third step is to find the value we want to change, which with this file requires a number of where clauses. Looking at the file itself is the easiest way to see why. I have snipped out some info for this screenshot, but the important part to know is that when a node has the same name as another, we have to differentiate it down to only one node. This is accomplished through the Where-Object statements.

[Image: NewSettings.vssettings file]

<# find our node where we will change the value #>
<# anywhere that there are multiple nodes with the same nodename we have to specify their unique value to only bring back that one #>
$node = ((($XMLDocument.UserSettings.ToolsOptions.ToolsOptionsCategory | `
Where-Object {$_.name -eq "TextEditor"}).ToolsOptionsSubCategory | `
Where-Object {$_.Name -eq "SQL"}).PropertyValue | `
Where-Object {$_.Name -eq "ShowLineNumbers"})

Fourth we change the value to what we want it to be.

<# Make our setting what we want it to be #>
$node.'#text' = 'true'

If we want to change another value repeat steps 3 and 4.

And finally, step five: save the file.

<# Save the file #>
$XMLDocument.Save($SSMSSettingsFile)

I had my SSMS window open and it looked like this.

[Image: SSMS query window before, no line numbers]

I closed it with no line numbers, ran my script, re-opened SSMS, and the line numbers had magically appeared.

[Image: SSMS query window after, with line numbers]

In the famous words of John “Hannibal” Smith, “I love it when a plan comes together.”

Let me know if this helps you in your ventures. Thanks.

Stored Procedure Inception

Recently I came upon an issue for which, for all my searching, I could not find a good answer. I was trying to create a stored procedure in another database from a stored procedure in my current database. Finally, after much trial and error, I figured out the solution, and I thought I would share it here so others could have the knowledge.

We have a system in our environment that is complicated enough to have its own pre- and post-deployment scripts for copying the database from production to non-production. In order to simplify the process, I took our old deployment scripts and recreated them as stored procedures. Most of it was pretty straightforward, and the items that weren’t were resolved quickly, except for the stored procedure creation.

The background:

  • We have a stored procedure that exists only in non-production.
  • The stored procedure truncates data in some tables.
  • This stored procedure is run “AS OWNER”.
  • This allows us to give execute access to the stored procedure without handing broader permissions to our power users.

Now, if you are creating a stored procedure through regular T-SQL, that’s fine, but when creating it from another stored procedure you might run into some issues. As I stated, I am trying to create this in another database, so let’s try doing it through an EXECUTE statement:

EXECUTE ('USE database;
CREATE PROCEDURE dbo.ClearTables
WITH EXECUTE AS owner
AS
TRUNCATE TABLE Table1
TRUNCATE TABLE Table2
RETURN')

When we run this we now get the error:

Msg 111, Level 15, State 1, Procedure ClearTables, Line 5
'CREATE/ALTER PROCEDURE' must be the first statement in a query batch.

Okay…

Well, now that we know that, and we know that we can’t put a batch separator inside an EXECUTE statement, how else can we do it? Oh, I know. We’ll just fully qualify the CREATE statement.

EXECUTE ('CREATE PROCEDURE database.dbo.ClearTables
WITH EXECUTE AS owner
AS
TRUNCATE TABLE Table1
TRUNCATE TABLE Table2
RETURN')

Now we get a different set of errors:

Msg 166, Level 15, State 1, Line 10
'CREATE/ALTER PROCEDURE' does not allow specifying the database name as a prefix to the object name.
Msg 319, Level 15, State 1, Line 11
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
Msg 156, Level 15, State 1, Line 12
Incorrect syntax near the keyword 'AS'.

Doh…

So how am I supposed to do this, SQL? Is this destined to be something I can’t script through this method? Do I throw my stored procedure method out and switch to a PowerShell method?

Finally, it hit me: “Inception.” And yes, I’m talking about the movie. Basically, like in the movie, we have to work around the rules in place to get the information we want in and out.

The way I finally figured out how to do this was to use an EXECUTE statement from within an EXECUTE statement. Behold my kludgy code in all its glory:

EXECUTE sp_executesql N'USE database; EXECUTE (''CREATE PROCEDURE database.dbo.ClearTables
WITH EXECUTE AS owner
AS
TRUNCATE TABLE Table1
TRUNCATE TABLE Table2
RETURN'')
'

Command(s) completed successfully.

We are opening an sp_executesql and telling it what database to run against. Once we do that, we open an EXECUTE statement within that statement to create the stored procedure. Only two levels deep on this one, but it finally worked.
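As an aside, there is a variation that may be slightly less ugly and is worth testing: system procedures run in the context of the database you qualify them with, so a three-part call to sp_executesql should let the CREATE PROCEDURE be the first statement of its own batch in the right database. A sketch using the same placeholder names as above:

/* sp_executesql qualified with the target database runs in
   that database's context, so no inner EXECUTE is needed */
EXECUTE [database].sys.sp_executesql N'CREATE PROCEDURE dbo.ClearTables
WITH EXECUTE AS owner
AS
TRUNCATE TABLE Table1;
TRUNCATE TABLE Table2;
RETURN;';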

Now you know. It is possible and one of these days I hope to be able to make it less ugly. Have you run into anything similar? What was your solution?

Subject Matter Expert

Recently at my work I’ve been touted by my manager to other department heads as a SQL performance tuning expert. Now I’m doing work in other departments to help tune their SQL, but I’ve come to realize very quickly that while I have a particular set of skills that I have built up over the years for tuning Microsoft SQL Server databases (both at an instance level and a query level), I have never been handed a “fresh” system that I’ve never touched before.

This post is a call to all of you tuners and tweakers out there: when you are called to look at a system you’ve never seen before, what is the first thing you look at? I’m starting to learn with this project that I had a very stunted view of tuning. I never stopped to think that finding the issues and making the suggestions wouldn’t be most of my work. I’m now coming to realize that a good portion of the work is not in making the suggestions but in making sure they get follow-up, at least for this project.

When you do have suggestions, where do you start? Do you give your “client” all of your suggestions at once, or do you hand them out piecemeal? Do you start with the easiest items or with the ones that will have the most impact? Is it a mixture of both, and what metrics go into deciding that?

I realize this post wanders from my typical fare, but currently this is where I’m at: more questions than answers, so I’m reaching out. I hope to follow up on this post once I get some of my own answers.

Thanks for reading my ramblings.

Code You Would Hate To Live Without #tsql2sday

[Image: T-SQL Tuesday logo]

I finally found a #tsql2sday topic that I thought I could contribute to and wanted to make a quick post. T-SQL Tuesday is a monthly writing prompt shared with the SQL community at large where you talk about something related to your work (usually with SQL); it is now up to #104.

This month it is being hosted by Bert Wagner ( b | t ), and he has challenged us to share some code (preferably that we’ve written) that we would honestly have to be writing right now if it didn’t already exist.

The bit of code I’m throwing into the fray this month is my sp_FileSpace stored procedure. I’ve been handed a large number of databases through the years that can be referred to as VLDBs (Very Large DataBases), or at least VLDB hopefuls. Since they are in the VLDB arena, one of the best practices is to break the database data out into multiple files, and then someone has to monitor those files to make sure they are staying consistent.

When I want to get the state of the files in a database, they are often spanned across multiple drives, and the easiest way to understand what is going on with them is usually a grid created in T-SQL. I’ve built this script to show me different things, but typically: how big each file is, how big each filegroup is, and whether they are consistent.

The parameters in this stored procedure all have default values because I typically like to see things the same way, but anytime I look at something and would like to see it changed for one reason or another, I throw it into a parameter so the procedure will be more useful in the long run.

As Bert says, my code is not perfect but that’s why it’s not a compiled stored procedure. When I find something that needs changing I change it.

Now I’ve made sure to publish this at midnight (my time) on Tuesday. I’m hoping that gives someone else plenty of time to put something together.

Without further ado I introduce you to sp_FileSpace:

USE [master]
GO
IF OBJECT_ID('dbo.sp_FileSpace') IS NULL
EXEC ('CREATE PROCEDURE dbo.sp_FileSpace AS RETURN 0;');
GO
ALTER PROCEDURE [dbo].[sp_FileSpace]
@Drive [CHAR](1) = NULL
, @ShowRunningTotals [BIT] = 1
, @Filegroups [VARCHAR](50) = NULL
, @ShowGB [BIT] = 1
, @ShowMB [BIT] = 1
, @ShowKB [BIT] = 0
, @ShowExtents [BIT] = 0
, @ShowPercents [BIT] = 1
, @ShowGrowth [BIT] = 1
, @DBName [VARCHAR](256) = NULL
AS
BEGIN
PRINT 'Declaring Variables'
DECLARE @Select AS [VARCHAR](MAX)
DECLARE @From AS [VARCHAR](MAX)
DECLARE @Where AS [VARCHAR](MAX)
DECLARE @Order AS [VARCHAR](MAX)
DECLARE @sql AS [NVARCHAR](MAX)
DECLARE @Compat AS [SMALLINT]
PRINT 'Check Compatibility Level of database to use SUM windowing functions'
SELECT @Compat = [compatibility_level] FROM [sys].[databases] db WHERE [db].[name]=DB_Name()
PRINT 'Building Select Statement'
SET @Select = '
SELECT
[FileId] = [dbf].[file_id]
, [FileGroup] = dbf.data_space_id
, [FileGroupName] = COALESCE([fg].[name],''Log'')'
PRINT 'Deciding to put GB on values or not'
IF @ShowGB = 1
SET @Select = @Select + '
, [GBs] = [dbf].[size] / CAST(131072 AS FLOAT(2))
, [GBs Used] = FILEPROPERTY([dbf].[name],''SpaceUsed'') / CAST(131072 AS FLOAT(2))'
PRINT 'Deciding to put MB on values or not'
IF @ShowMB = 1
SET @Select = @Select + '
, [MBs] = [dbf].[size] / CAST(128 AS FLOAT(2))
, [MBs Used] = FILEPROPERTY([dbf].[name],''SpaceUsed'') / CAST(128 AS FLOAT(2))'
PRINT 'Deciding to put KB on values or not'
IF @ShowKB = 1
SET @Select = @Select + '
, [KBs] = CAST([dbf].[size] * CAST(8 AS FLOAT(2)) AS BIGINT)
, [KBs Used] = fileproperty([dbf].[name],''SpaceUsed'') * CAST(8 AS FLOAT(2))'
PRINT 'Deciding to put Extents on values or not'
IF @ShowExtents = 1
SET @Select = @Select + '
, [Total Extents] = [dbf].[size] / 8
, [Used Extents] = FILEPROPERTY([dbf].[name],''SpaceUsed'') / 8'
PRINT 'Deciding to put Percents on or not'
IF @ShowPercents = 1
SET @Select = @Select + '
, [Percent Used] = CAST(CAST((CAST(fileproperty([dbf].[name],''SpaceUsed'') AS NUMERIC(15,2)) / CAST([dbf].[size] AS NUMERIC(15,2))) * 100 AS NUMERIC(15,2)) AS VARCHAR(10)) + ''%'''
PRINT 'Adding file rows to select'
SET @Select = @Select + '
, [Logical File Name] = [dbf].[name]
, [Real File Name] = [dbf].[physical_name]'
PRINT 'Deciding to put Growth on or not'
IF @ShowGrowth = 1
SET @Select = @Select + '
, [Growth] = CASE [dbf].[is_percent_growth]
WHEN 0 THEN CAST([dbf].[growth] / 128 AS VARCHAR(8)) + '' MB''
WHEN 1 THEN CAST([dbf].[growth] AS VARCHAR(8)) + '' %''
ELSE ''New unknown growth pattern''
END'
PRINT 'Deciding to put Running Sums on or not'
IF @Compat >= 110 AND @ShowRunningTotals = 1
SET @Select = @Select + '
, [Running Total GB Used FileGroup] = CAST(SUM(fileproperty(dbf.name,''SpaceUsed'') / CAST(131072 AS FLOAT(2))) OVER(PARTITION BY fg.name ORDER BY (dbf.name)) AS FLOAT(2))
, [Running Total GB FileGroup] = CAST(SUM(dbf.size / CAST(131072 AS FLOAT(2))) OVER(PARTITION BY fg.name ORDER BY (dbf.name)) AS FLOAT(2))'
PRINT @Select
PRINT @DBName
IF @DBName IS NULL
SELECT @DBName = DB_NAME();
PRINT 'Database we are looking at: ' + @DBName
SET @From = '
FROM
' + QUOTENAME(@DBName) + '.[sys].[database_files] [dbf]
LEFT JOIN ' + QUOTENAME(@DBName) + '.[sys].[filegroups] [fg]
ON [dbf].[data_space_id]=[fg].[data_space_id]'
SET @Where = '
WHERE
1=1'
IF @Filegroups IS NOT NULL
SET @Where = @Where + '
AND [dbf].[data_space_id] IN (' + @Filegroups + ')'
IF @Drive IS NOT NULL
SET @Where = @Where + '
AND LEFT([dbf].[physical_name],1)=''' + @Drive + ''''
SET @Order = '
ORDER BY
[dbf].[data_space_id]
,[dbf].[name]
'
SET @sql = @Select + ' ' + @From + ' ' + @Where + ' ' + @Order
PRINT @sql
EXECUTE sp_executesql @sql
END;
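To wrap up, a couple of example calls (the database name, drive letter, and filegroup IDs here are placeholders, not anything from a real system):

/* Current database with the default output */
EXECUTE dbo.sp_FileSpace;

/* Another database, only files on the D: drive in filegroups 1 and 2 */
EXECUTE dbo.sp_FileSpace
	@DBName = 'SomeBigDatabase'
	, @Drive = 'D'
	, @Filegroups = '1,2';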

Temporary Tables: Local Temporary Tables

Temporary Tables

I know. I know. The title looks weird. In our ongoing talk about temporary tables we have so far covered Common Table Expressions (CTEs), the shortest-lived of the temporary table types (they only stick around for the single statement that follows them). Then we talked about table variables (table names that start with an @ sign); those last only as long as the batch is actively executing. We’re now talking about the type of temporary table that doesn’t have another name associated with it, and it is also the most commonly used. You can tell these tables because their names always start with a # sign.

Temporary tables follow all the rules that regular tables do, with a few exceptions:

  • Regular tables can have names as long as 128 characters; temporary table names can only be 116 characters. This is because SQL Server appends a unique value to the end of the name, and that unique value is 12 hexadecimal characters long. That works out to roughly 281 trillion (16^12) temp tables before SQL Server would have to reuse a uniquifier.
  • Temporary tables can only exist in the tempdb database. This is both good and bad: it means a shared workspace, so you’re not mucking up your own database, but it also means you can have contention with every other database on the instance.
  • Temporary tables drop themselves when their scope ends. This means if you are running a stored procedure that creates a temporary table, the temporary table is gone at the end of the stored procedure. If you have a temporary table in a Management Studio session, it stays until you either tell SQL to drop the table or close the connection.
  • When you have data that you only want to keep for the current session, your go-to should be a temporary table. Temporary tables can have indexes added to them (see the sketch after this list), and they get statistics created for them, so SQL Server gets a good idea of how much work it has to do.
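A minimal sketch of that indexing point, using the same table name as the test code below:

/* Temp tables support indexes just like regular tables do */
CREATE CLUSTERED INDEX IX_FirstUsers_Id ON #FirstUsers (Id);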

Our test code this week is very close to last week’s:

CREATE TABLE #FirstUsers (Id INT, DisplayName NVARCHAR(40));

INSERT INTO #FirstUsers
SELECT Id, DisplayName
FROM dbo.Users
WHERE Id < 10;

SELECT FU.Id, FU.DisplayName, COUNT(*) AS Votes
FROM #FirstUsers FU
LEFT JOIN dbo.Votes V ON FU.Id = V.UserId
GROUP BY FU.Id, FU.DisplayName;

Under the hood, SQL Server is doing very different things here than with a table variable. When SQL Server creates this table, it gives it a name that is somewhat meaningful under the covers. It also creates statistics (at least when it needs to), which can be very helpful for getting our data back as fast as possible.
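You can peek at that under-the-covers name, padding and all, while the temp table still exists; a quick sketch:

/* Shows the real tempdb name: #FirstUsers padded out with
   underscores and ending in the hex uniquifier */
SELECT name, create_date
FROM tempdb.sys.objects
WHERE name LIKE '#FirstUsers%';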

These should be one of the first things you learn how to make for data aggregation purposes, as temporary tables are used often.