SQL PASS 2014 Early Bird

Next year, the PASS Summit will be held in Seattle on November 4th through the 7th. The full price is $2,295, but if you register early, you can get it for as little as $1,095. That's a savings of $1,200, or roughly 52% off.

This year, I am registering for PASS right away to lock in the Early Bird price, and I encourage you to do the same. Act now, don't delay. The Early Bird price is only good until December 6th; after that, the price creeps up every couple of months.

The primary reason I am doing this is to take control of my career and my learning. While I have had employers agree to send me to PASS in years past, it was never a sure thing. Sometimes yes, sometimes no.

And to be fair, your job doesn’t really owe you that, either. However, I feel PASS is important enough to my career, that I am willing to pay the costs myself to ensure that I am able to attend.

Of course, the best option is if your employer covers the costs. And what better way to convince your boss than to make it cheaper for her? By paying for the registration myself, I’m doing two things. One, I’ve cut the costs in half. That’s enough to pay for most of my travel costs. And two, I have helped with budget planning. The training budget for 2014 may not be approved or funded yet. If your company is on a fiscal calendar, and most are, any training money for next November probably won’t be available until after the summer.

Hopefully, this will demonstrate some things to your employer. You are serious about SQL Server and your career. If you are willing to spend your own money for training, then they should recognize that you will be a good steward in spending their money. You are the one keeping up with current trends in technology. When it is time to help steer the company’s technology vision, you will be the one they call on.

Here are a few links to some other brave (or foolish) souls who have done the same thing:

“The only thing worse than training your employees and having them leave is not training them and having them stay.” – Henry Ford

SQL Saturday #255 Dallas Revisited

Last week, I presented my session on Troubleshooting Memory Pressure in SQL Server at the SQL Saturday in Dallas. I have uploaded my session slides and sample code both here and at the SQL Saturday website.

This SQL Saturday was a special one for me. However, there were both good and bad sides to my experience.

The Good

At this point, I have presented at all three SQL Saturdays in Texas: Austin, Houston, and Dallas. Of course, since I’ve only done one per year, it’s taken me three years to get this far. So I’m not exactly tearing it up. :-)

So my goal for next year is two-fold. On the one hand, I want to present at as many of the Texas SQL Saturdays as my schedule will allow. On the other, I would like to present at at least one SQL Saturday that is out of state. Of course, I have to balance this with work requirements and my family obligations.

I had a great crowd for my session. All the seats were filled and people were sitting on the floor in the aisles. The estimated crowd size was about 65 people. Not bad at all.

As always, one of the best aspects of any community event is the community itself. I really enjoyed the speakers' dinner, reconnecting with old friends, and making some new ones in the process.

I got to see some great sessions in the afternoon. To start things off, Tim Mitchell had a great session full of tips to Make Your SSIS Packages Run Faster. Then, I sat in on Sri Sridharan’s session on Turbocharging Your Career. He had some great ideas on how to take your career to the next level. To close out the day, I checked out Mike Hotek’s session on how he designed a 10+ Exabyte Data Warehouse. Afterwards, several of us were trying to figure out who the client was. But alas, it was confidential.

The Bad

In the weeks leading up to SQL Saturday, I had been sick, so I was not exactly enthused about presenting in front of anybody. All I wanted to do was stay home and sleep. Add to that some other fairly stressful things going on, and I was not the happiest camper that particular weekend.

I signed up for Andy Leonard's pre-con on SSIS. There wasn't anything wrong with his presentation, but I felt so sick that I ended up leaving at lunchtime. Basically, I could do the pre-con or give my session the next day, but I couldn't do both.

But that’s alright, because things soon took a turn for the worse, and helped me forget about any of that.

The Ugly

I had some pretty bad laptop problems during my presentation. Everything was going fine while I was getting set up, but the moment I started, both my laptop display and the projector display cut out. It took the A/V guys a full ten minutes of futzing around to get everything running again.

At that point, I was thrown off balance a bit and ended up rushing through my presentation. To make matters worse, it seemed like every time I switched between PowerPoint and SSMS I would lose my laptop screen again. So I would have to crane my neck to look at the projector screen while setting up each demo. Lots of fun.

At the end of the day, I have to take responsibility for my laptop problems. I have given presentations several times before, but this was a new laptop that I had never presented with. If it were not for the Johnny-on-the-spot A/V guy, this could have been a lot worse. Thank you, sir!

So I pushed on through my presentation, trying to make the best of a bad situation. I made some jokes about how the title should have been about Disaster Recovery. People laughed at my jokes, and no one walked out. Thank you, Dallas!

I gotta tell you, I was really dreading my evaluations. I did get dinged by a few people, fair enough. But I was pleasantly shocked to find that I overwhelmingly received fours and fives on my evaluations. Thank you again, Dallas.

Phoenix Rising

I’ve heard it said that whatever does not kill you, makes you stronger. I must say that I agree.

Just a week ago, I was sick, stressed out, and not a happy camper. Now, my career has taken a new turn that should be quite entertaining.

Many times, the SQL Community has given me a little boost that I needed to get myself back on track. Get involved with your local SQL User Group. Network, learn, and grow. And when you’re ready, or even when you are not, sign up to give a presentation. You might be surprised how your career starts to take off.

Veterans Day

Iwo Jima

Today is Veterans Day. I like to take some time to reflect on all the men and women in service who have sacrificed for the rest of us.

I am also thankful for the things that the Marines have provided me. They taught me the value of integrity and hard work. They also helped pay my way through college.

USMC

As you are going about your day, please take a moment to think about the world around us and be thankful for what you have in this world.

Service and sacrifice are not limited to the military. Throughout human history, countless people have stood up for what they felt was right and have helped propel us along. It is that desire for improvement that makes us all human.

SQL Server 2014 CTP2

SQL Server 2014 CTP2 is now available. This is the last public beta before the final RTM release. So, if you like playing with new features and experimenting before everyone else, then this is the download for you.

CTP2 offers some new enhancements to In-Memory OLTP, aka Hekaton. I'm interested to see how the new range indexes work with memory-optimized tables. In CTP1 we only had hash indexes.

Kalen Delaney has an updated white paper available. If you want to read more about Hekaton under the covers, then be sure to check it out.

I’m Speaking at SQL Saturday #255 Dallas

SQL Saturday is coming to town. SQL Saturday is coming to town. OK, so that's not as awesome as Santa Claus coming to town, but it's still pretty cool. You get a full day of SQL training for free. Well, almost free. There is a $10 charge for lunch.


If you haven’t been to a SQL Saturday before, here is your chance. Dallas is hosting SQL Saturday #255 on Saturday, November 2nd. Saturday? Are you kidding me? I know, I thought it was a joke the first time I heard it, but this will make my 5th SQL Saturday.

SQL Saturdays are a ton of fun. You get to meet other SQL folks from all over the state. There are always a significant number of attendees who come from across the country, and even a few who come in from overseas.

Quite a few of the speakers will be top tier talent who have spoken at PASS or other national conferences. And then they let me in the club. Not sure how I made it, but I’ve been selected to speak, as well.

I’ll be presenting a session on Troubleshooting Memory Issues in SQL Server. I’ll go through the basics of Memory Pressure, and show you various tools and techniques to troubleshoot it. Be sure to bring some tomatoes and old DIMMs for the interactive portion of the show.

If you’re not interested in my session, there are a total of 42 sessions being offered. Douglas Adams would be proud. Sessions are organized into several tracks including Administration, Development, Business Intelligence, and Career Development.

Additionally, on Friday there are three full-day pre-con sessions being offered. These cost $100 and you must register ahead of time. I've registered for Andy Leonard's session covering SSIS 2012. Another great one is Grant Fritchey's session on query tuning. I saw his session at PASS last year; it's a good one. At PASS these sessions cost about $300 to $400, so this is a huge discount for the same level of content.

So what are you waiting for? Grab a mouse, head to the website, and register. Oh, you don’t live anywhere near Dallas. That’s OK, because there’s a SQL Saturday coming soon to a town near you.

An Overview of SQL Server 2014 In-Memory OLTP Hekaton

So you’ve heard of Hekaton, but what is it, and why do you want it?

Hekaton, or In-Memory OLTP, is an entirely new set of data structures for tables and indexes, designed for data that lives in memory rather than in disk-based storage.

Hekaton is the code name for In-Memory OLTP, so I will use these two terms interchangeably.

Why is Microsoft doing this? Short version: memory and servers are much cheaper now than they were when SQL Server first launched.

At this point, it is feasible to have enough memory on your server to house the entire database. Even large, one terabyte databases.

However, the query optimizer and its costing rules haven't changed along with this. So, even if you have tons of memory, SQL Server still assumes that it will be reading data off of disk.

Basic Glossary of Terms

  • Cross-Container Transactions – transactions that use both disk-based tables and memory-optimized tables
  • Disk-Based Tables – plain old normal tables, what you have right now, 8k pages
  • Hekaton – codename for In-Memory OLTP
  • In-Memory OLTP – new architecture and data structures using memory for data storage instead of disks
  • Interop – interpreted TSQL queries and stored procedures that access memory-optimized tables
  • Memory-Optimized Tables – tables using new memory data structures to store their data
  • Natively Compiled Stored Procedures – compiled machine code instead of interpreted TSQL, still written in TSQL but with some restrictions

Databases

In order to make use of In-Memory OLTP, you need a database that supports it. It’s fairly easy to do this. When you create the database you need a special filegroup with the CONTAINS MEMORY_OPTIMIZED_DATA clause. Additionally, you need to use a Windows BIN2 collation. This can be done at the database, table, or query level.
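
Here is a minimal sketch of what that can look like. The database name, file paths, and collation choice are all made up for illustration.

-- a hypothetical database with a memory-optimized filegroup (names and paths are illustrative)
create database HekatonDemo
on primary
    (name = HekatonDemo_Data, filename = 'C:\Data\HekatonDemo_Data.mdf'),
filegroup HekatonDemo_MOD contains memory_optimized_data
    (name = HekatonDemo_MOD, filename = 'C:\Data\HekatonDemo_MOD')
log on
    (name = HekatonDemo_Log, filename = 'C:\Data\HekatonDemo_Log.ldf')
collate Latin1_General_100_BIN2;
go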

Tables

To create a Memory-Optimized Table, you use the MEMORY_OPTIMIZED = ON clause. There are several restrictions on column types, but in simple terms no LOB data types are allowed, no whatever(max), and no CLR.

Rows are limited to 8060 bytes, with nothing stored off-row. The size limitation is enforced at creation, so all of your column sizes must fit within this limit.

DML triggers are not allowed, neither are foreign key or check constraints. Love GUIDs? I hope so, because identity columns are out, too.

There are two basic types of Memory-Optimized Tables, SCHEMA_ONLY and SCHEMA_AND_DATA.

SCHEMA_ONLY tables are non-durable. You can put data in there, but in the event of a restart or crash, the table is recreated but your data is gone. This could be useful for storing application session state or for staging tables in a data warehouse.
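
To make that concrete, here is a rough sketch of a durable Memory-Optimized Table. The table and column names are hypothetical, and the hash index syntax is covered in the next section.

-- a hypothetical durable memory-optimized table; the primary key uses a hash index
create table dbo.ShoppingCart (
    CartID int not null
        primary key nonclustered hash with (bucket_count = 1000000),
    CustomerID int not null,
    CreatedDate datetime2 not null
)
with (memory_optimized = on, durability = schema_and_data);
go
-- a non-durable version would use durability = schema_only instead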

Indexes

Memory-Optimized Tables can have two types of indexes, Hash Indexes and Range Indexes. All tables must have at least one index, and no more than eight. Also, tables that are defined as SCHEMA_AND_DATA must have a primary key. Indexes are rebuilt each time SQL Server starts up.

A Hash Index is an array of pointers, where each element points to a linked list of rows. The number of elements in the array is controlled by the BUCKET_COUNT clause. In general, you want to set the BUCKET_COUNT to at least the number of unique key values in your table.

If you have too few buckets, then multiple key values will share the same linked list, which will mean longer scans to look for your row. If you have too many, then you will be wasting memory with empty buckets.
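
If you want to check how well you guessed after loading your data, SQL Server 2014 exposes a DMV for hash index statistics. A quick sketch:

-- check bucket usage and chain lengths for hash indexes in the current database
select object_name(hs.object_id) as table_name,
    i.name as index_name,
    hs.total_bucket_count,
    hs.empty_bucket_count,
    hs.avg_chain_length,
    hs.max_chain_length
from sys.dm_db_xtp_hash_index_stats as hs
join sys.indexes as i
    on i.object_id = hs.object_id
    and i.index_id = hs.index_id;

Long average chain lengths suggest too few buckets; a large percentage of empty buckets suggests too many.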

Range Indexes are good for when you will be searching for a range of values, or if you are not able to properly estimate the BUCKET_COUNT size. However, Range Indexes are not available in CTP1, so we’ll have to wait a bit to learn more about those.

Queries and Stored Procedures

There are two basic methods for querying Memory-Optimized Tables: Natively Compiled Stored Procedures, or good old-fashioned TSQL, known as Interop, which also includes regular stored procedures.

Natively Compiled Stored Procedures are going to be the fastest. However, they can only access Memory-Optimized Tables. If you want to query regular tables along with Memory-Optimized Tables, then you will need to use TSQL Interop. There are a variety of restrictions when using TSQL Interop; for example, MERGE, cross-database queries, locking hints, and linked servers are not supported.
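
For reference, here is a minimal sketch of a Natively Compiled Stored Procedure, using the hypothetical dbo.ShoppingCart table from earlier. Note the required ATOMIC block and EXECUTE AS clause.

-- a minimal natively compiled procedure (SQL Server 2014 syntax)
create procedure dbo.AddToCart
    @CartID int,
    @CustomerID int
with native_compilation, schemabinding, execute as owner
as
begin atomic with (transaction isolation level = snapshot, language = N'us_english')
    insert into dbo.ShoppingCart (CartID, CustomerID, CreatedDate)
    values (@CartID, @CustomerID, sysdatetime());
end;
go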

TSQL Interop allows you to make a gradual migration to In-Memory OLTP. This way you can slowly convert a few objects at a time, based on which ones will give you the most performance gain.

One Big Caveat

One thing to keep in mind is that tables, indexes, and stored procedures cannot be modified in Hekaton. This means that you will need to drop and re-create these objects in order to make changes. Also, stats have to be rebuilt manually. And then to take advantage of them, the stored procedures would need to be recreated, as well.

Obviously, this is a fairly major restriction. However, I think I can live with this for a version one product. I hope that by the time SQL Server 2015 comes out, there will be an easier way to add a column to a Memory-Optimized Table.

Concurrency

Hekaton offers an improved versioned optimistic concurrency model for Memory-Optimized Tables that removes waiting for locks and latches. Explicit transactions are supported using the Repeatable Read, Serializable, and Snapshot isolation levels. Read Committed and RCSI are only available with autocommit transactions, and RCSI only if no disk-based tables are involved.
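
As a small interop sketch (again using the hypothetical table from earlier), the SNAPSHOT table hint is one way to read a Memory-Optimized Table inside an explicit transaction:

begin transaction;

    -- the memory-optimized table is read under snapshot isolation via the table hint,
    -- while any disk-based tables in the same transaction use the session's isolation level
    select CartID, CustomerID
    from dbo.ShoppingCart with (snapshot);

commit transaction;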

High Availability and Disaster Recovery

All the basics such as backups and restores are available. Additionally, AlwaysOn and Log Shipping are fully supported. Unfortunately, Mirroring and Transactional Replication are not. However, this isn't too much of a surprise, since Microsoft is definitely pushing AlwaysOn as the new HA/DR solution.

Migration Assistance

The AMR Tool (Analyze, Migrate, Report) will identify unsupported data types and constraints in tables. It will also recommend which tables and stored procedures should see the most performance improvement from converting to In-Memory OLTP and Memory-Optimized Tables.

Management Data Warehouse offers the Transaction Performance Collection Set, which will help you gather the necessary data in order to let the AMR Tool work its magic.

SQL Server 2014 AMR Tool for In-Memory OLTP

SQL Server 2014 has many new features and improvements over SQL Server 2012. One feature that a lot of people are interested in is In-Memory OLTP. Not knowing where or how to take advantage of this feature can hold people back from playing with it.

The AMR Tool (Analyze, Migrate, Report) helps you simplify migrations to SQL Server 2014 In-Memory OLTP.

SQL Server 2014 AMR Tool

The AMR Tool helps you identify which tables and stored procedures will benefit from In-Memory OLTP. If you already have some migration plans, then the AMR Tool can help validate your plans. It will evaluate what needs to be done to migrate your tables and stored procedures.

In order to take advantage of the AMR Tool you will need the following three items:

  • A target database that you want to migrate to SQL Server 2014. This needs to be SQL Server 2008 or higher. So no old-school databases here.
  • A copy of SQL Server 2014 CTP1 Management Studio installed. Note, you do not need a SQL Server 2014 instance or database, just the tools.
  • And last, a Management Data Warehouse with the Transaction Performance Collection Set installed.

Once you have these items set up, you are ready to begin using the AMR Tool to generate recommendations based on the access characteristics of your workload, contention statistics, and the CPU usage of stored procedures.

Resources

Benjamin Nevarez has a nice tutorial on using the AMR Tool. Another good resource is the Hekaton whitepaper by Kalen Delaney. If you don’t already have SQL Server 2014 CTP1, you can download it here.

Performance Tuning with Compression

One lesser known trick for performance tuning is compression. Wait, what? Isn’t compression all about saving space? Yes, but it also tends to have another pleasant side effect.

SQL Server is typically an IO bound application. That means, IO is almost always your constraining factor. Whenever I am troubleshooting a system, IO is one of the areas that I always take a look at.

Enabling compression reduces the amount of IO that SQL Server needs to satisfy SELECT queries. Since more data is stored on each page, it takes fewer pages to complete a query.

A Quick Demo

We'll use the following simple query as our baseline. Run the query and then take a look at the number of IOs. To see this, click on the Messages tab after you run the query.
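
If you want to follow along, something like this against the AdventureWorksDW sample database works (the table and column names assume that sample):

-- baseline: aggregate internet sales and check the reads in the Messages tab
set statistics io on;

select ProductKey,
    sum(SalesAmount) as TotalSales,
    sum(OrderQuantity) as TotalQuantity
from dbo.FactInternetSales
group by ProductKey;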

Results

IO Baseline

Here’s our baseline. We had 1,240 reads on the FactInternetSales table. Now, let’s enable Row Compression and re-run the query.
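
Enabling Row Compression is a single rebuild. A sketch (nonclustered indexes are compressed separately with ALTER INDEX ... REBUILD):

-- rebuild the table with row compression
alter table dbo.FactInternetSales rebuild with (data_compression = row);
go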

Results

Row Compression

Here you can see the IOs were cut in half, 656 reads from FactInternetSales. Last, turn on Page Compression and run the query one more time.
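
Switching to Page Compression works the same way:

-- rebuild the table with page compression
alter table dbo.FactInternetSales rebuild with (data_compression = page);
go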

Results

Page Compression

Now we have less than a quarter of the original IOs. Only 292 reads from FactInternetSales. Looks good to me.

There’s No Such Thing as a Free Lunch

One thing to keep in mind is that compression will increase your CPU usage. In practice, I have usually found this to be in the range of one to three percent. That said, if you are currently experiencing CPU issues with your server, it would behoove you to address that first. Oddly enough, quite often I find that CPU problems are being driven by IO problems. So be sure to tune your queries and check your indexes.

Row Compression versus Page Compression

There are two types of compression available for use with SQL Server: Row Compression and Page Compression.

Row Compression stores fixed data type columns using a variable length format. Page Compression adds to that by incorporating Prefix and Dictionary Compression to the mix. Page Compression works very well when you have lots of repeating values in your tables. Like a Data Warehouse…

Generally speaking, I recommend using Row Compression with OLTP databases, and using Page Compression with Data Warehouses.

Now this doesn’t mean you should blindly enable compression for all tables and indexes on all of your databases. Do a little analysis first and start small.

Focus on your largest tables first; the ones that are causing you pain. Run some checks and see if those tables would benefit from having compression enabled. Pick your top ten.

The best candidates for compression are tables that are not being updated frequently. So if you have a table that is getting 25% of its rows updated every day, that may not be the best table to compress. As always, you will need to test your servers and workload to see what combination works best for your environment.

Show me the T-SQL

The script below will check all of your tables and indexes. It will report back the current size, current compression methods being used, and an estimation of the space savings you can achieve by using either Row Compression or Page Compression. It runs on SQL 2008, SQL 2012, and SQL 2014 CTP1.
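
The core building blocks are the catalog views for current sizes and the sp_estimate_data_compression_savings procedure for the estimates. A simplified sketch of the approach:

-- current size and compression setting for every table and index
select s.name as schema_name,
    t.name as table_name,
    i.name as index_name,
    p.data_compression_desc as current_compression,
    sum(au.total_pages) * 8 / 1024 as size_mb
from sys.tables as t
join sys.schemas as s on s.schema_id = t.schema_id
join sys.indexes as i on i.object_id = t.object_id
join sys.partitions as p on p.object_id = i.object_id and p.index_id = i.index_id
join sys.allocation_units as au on au.container_id = p.partition_id
group by s.name, t.name, i.name, p.data_compression_desc
order by t.name, i.name;

-- estimate the savings for one table at a time (run once with ROW, once with PAGE)
exec sys.sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'FactInternetSales',
    @index_id = null,         -- null = all indexes on the table
    @partition_number = null, -- null = all partitions
    @data_compression = N'ROW';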

 

Results

Compression Estimates

Compression Estimates Percent Complete

The default is to display everything sorted by table name and index name. I've included a few other queries in the comments that let you modify the output to focus on the largest tables, or to show the tables that should see the highest percentage of compression savings.

Enjoy!

Master of Puppets

You may have heard about the recent announcement from Microsoft to cancel the Microsoft Certified Masters program. AKA, the MCM and MCSM programs. Well, it wasn’t really an announcement. An announcement is when you make a public declaration of some information or event. Think about a wedding announcement.


Instead, this was announced in an email very late on Friday night. Because of Time Zone differences, mine came in after midnight. In the email we were informed that all of the MCM exams and the entire program are being retired on October 31st.

A few news sites have picked up on this already and have actually made the non-announcement public.

What's so awful about this is that, within the past few weeks, there have been other, contradictory announcements about expanding the number of testing centers, the upcoming release of a new exam, and so on. It's like the proverbial saying about the right hand not knowing what the left hand is doing.

Strange, to say the least.

Anyone who was part-way through the MCM program is being hung out to dry at this point. All your time and money spent for naught. I can only imagine how that must feel. Awful, truly awful.

On the one hand, I’m disappointed. I spent a lot of time, effort, and money to achieve the MCM, only to have it discontinued. Kind of a slap in the face. However, I have to admit, that on the other hand, I’m a little relieved.

Wait, let me explain. As I see it, there are some serious problems with the overall Microsoft Certification Program. Since they’ve decided to gut the MCM Program, this is a good opportunity to fix everything that is wrong with it.

A Looming Deadline

I finished the MCM a little over a month ago, and I couldn’t be more relieved. I had put a lot of time, money, and effort into this. My Significant Other has been very patient and supportive during this journey, but it was time for it to end. Or at least be able to rest for a little while.

Nope.

You see, once you have the SQL 2008 MCM, you only have until June of 2014 to complete the SQL 2012 MCM. That’s only ten months away! Also remember, there is a 90-day waiting period for retakes at this level.

If you couldn’t make that deadline, then you start back at the bottom with all the MCP/MCTS/MCITP/MCSA/MCSE exams to pre-qualify you for the opportunity to try the SQL 2012 MCM exams once again.

And, guess what, they didn't even have the SQL 2012 MCM Lab Exam ready. So you have a deadline ticking away, but not much you can do about it.

I was a little frustrated by that timeline. I had just spent a considerable amount of money, my own money, to complete the MCM program, and now I had to jump right back in and immediately start spending a bunch more. Add to that, I've used about half of my PTO (vacation days) for my studies, travel, and test taking along this journey.

So you can see why I’m a little relieved. Now, I don’t have to explain my elaborate cover story as to why I’m not going to bother with the SQL 2012 MCM. Instead, I can join the chorus of folks who are screaming bloody murder about the program being canceled.

Maybe now, I can have a little break and attempt to pay down my SQL 2008 MCM costs. Maybe tomorrow, there will be a new announcement of an entirely new certification program that has been in development for months and months. Wink, wink, nudge, nudge.

Don’t Change the Brand

One of the problems with the Microsoft Certifications and the MCM Program is that the names and acronyms keep changing. Take a page from other, successful companies and don't.

Since the beginning of time, it has been the MCP. My first certification was an MCP in Visual Basic Programming. That should continue to be the basis of everything. Stop changing the names of the certifications every time a new version of the product comes out.

People are just now starting to learn about the MCM Program. Most don't even know what it is, including recruiters and HR, and now you're changing it to MCSM. Why? And now, before people have a chance to be confused about the MCSM, it's getting scrapped.

Keep the standard MCP/MCTS/MCITP/MCM naming scheme. Do you see Ford renaming the Mustang to Horse 2.0? No, you don't.

I can’t tell you how many times I’ve seen a job posting or spoken to HR/Recruiting and they ask if I’m an MCDBA for SQL 2008 or even SQL 2014.

If you say 'no, I have the MCITP or MCM for SQL 2008, which is the newer version,' all they hear is 'no' and they move on. So, what you have to say is 'yes, I have the McDBA for SQL 2012' or whatever stupid crap recruiters are asking for.

TLAs are better than FLAs

But if you are going to change the names of the certifications, at least choose something easy to say, easy to understand, and intuitive.

People love three letter acronyms. They roll off the tongue easier, and they just sound so cool.

I would propose the following nomenclature:

  • MCA, MCP, MCE, MCM. Simple, easy, TLAs.
  • Associate, Professional, Expert, Master.

Most people intuitively know how to rank those four levels. You don’t need to know anything about the technology in order to understand that an Associate is lower than an Expert, or Master.

Certificates Shouldn’t Expire

I’m not saying you shouldn’t continue to train, get certified, learn new skills, etc. But the certs you’ve earned should stay with you, period. Think about how many SQL 2000 installations there are still out there.

If you are an expert on an old piece of technology, and the customer needs that, then you are still the expert.

If a certification is tied to a specific version of technology there is no need to expire it. That person is not diminishing or interfering with new technology or certifications.

If someone only has certification from ten years ago, and nothing more recent, then let the customer decide if that is what they want.

Specialists Specialize

The SQL Server 2008 certification program had three tracks: DEV, DBA, and BI. There were two levels: junior and senior. Now, you have to complete all three tracks to get the entry-level certification for SQL 2012.

Think about cars for a minute. Mechanics specialize. You have transmissions, engines, fuel injection systems, etc. Someone who knows how to fix one, rarely knows the others. Or you have an oil-change technician.

Or doctors? Orthopedic surgeon; ear, nose, and throat; endocrinology. Or you have a general practitioner.

Have you perused job descriptions that require you to be an expert in all three (BI, Dev, DBA), yet pay less than a job requiring just one? Me too, lots of them. Those are interesting interviews, but they are also jobs to be avoided like the plague.

The official party line seems to be that Dev, DBA, and BI are so intertwined that you have to understand all of them in order to do any of them. Well, the real world doesn’t quite work that way. Knowing about other areas certainly makes you better, and should be rewarded. But for an entry level certification that is ridiculous.

And, if you truly believed that, then how come someone can upgrade to the new MCSA with only one of the old MCTS certs? If all three skills were so intertwined, then you would require someone doing the upgrade to hold all three MCTS certifications.

Cost Benefit Analysis

All this leads me to question whether I made the best choice in pursuing the SQL 2008 MCM. What is the cost/benefit analysis of all the time, money, effort, PTO, relationship costs, etc. of pursuing the MCM?

With the same money, you could self-fund a trip to the PASS or BA conferences. You could speak at tons of SQL Saturdays. You could take all the SQL 2012 MCSE Certifications. You could go on a SQL Cruise. And you’d still have money left over.

MCM RIP

I do hope Microsoft reconsiders canceling the MCM Program. This was the only certification that was serious and had sufficient rigor. It gave you something to strive for if you wanted to distinguish yourself from your peers.

Please take a moment to register a comment on the Connect site and let Microsoft know how you feel.

T-SQL Tuesday #45

T-SQL Tuesday

T-SQL Tuesday is a rotating blog series started by Adam Machanic. Each month a different person hosts the series and decides on a topic. Then, everyone writes a blog post on the same topic. This month, it is being hosted by Mickey Stuewe, and the topic is Auditing.

Auditing

In layman's terms, Auditing is keeping track of changes and who made them. There could be many reasons for wanting to do this: for example, legal requirements, a change control process, security, or troubleshooting.

Who Done It?

Once upon a time, I was a Database Administrator and I had a system that was suffering from some random errors that we were unable to pin down. Basically, at seemingly random times various parts of the application would fail. Usually this would involve alerts firing after hours and the ensuing late night fun.

The team would get called together to troubleshoot the problem, and typically a developer would put together a fix to the code to get the application working again. However, we were unable to identify the root cause.

Turns out, one of the developers was making unauthorized changes to the database. For example, modifying a table or stored procedure. This, in turn, would cause the application to break.

What the Deuce?

We did not have the best separation of duties at this organization. The developers were part of the deployment process. As such, one of the developers had gained access to the Application Service Account. So, he would use it from time to time to “fix” things, or slide in some last minute changes that got missed in the deployment steps.

We started to suspect this when it was always the same developer who would volunteer to help us troubleshoot and miraculously come up with the code fix. He got to be the hero, and we were the zeroes.

To Catch a Thief?

We added a DDL Trigger to start logging all changes to the database. In addition to the standard information returned by the EVENTDATA() function, we added a few other things. Since he was using the Application Service Account, we needed a few more details to distinguish the culprit from one of us. So, we added the IP Address and the name of the application that was issuing the DDL commands.

Show Me the Audit

One thing you’ll want is a central place to store all of your Audit History. While you could create this table within the database you are auditing, I prefer using a separate database. This way, you can have several databases logging to the same database for easier reporting. Or, you could locate the audit database on a separate server for added security and redundancy.


-- create a separate database for housing all of your audit information
create database DBAAudit;
go

use DBAAudit;
go

-- create a table to store the history of DDL changes
create table dbo.DDLHistory (

DDLID int identity(1, 1) not null,
DDLDate datetime not null default getdate(),
LoginName nvarchar(256) not null,
IPAddress nvarchar(25) not null,
AppName nvarchar(100) not null,
DatabaseName nvarchar(256) not null,
SchemaName nvarchar(256) not null,
ObjectName nvarchar(256) not null,
ObjectType nvarchar(50) not null,
EventType nvarchar(50) not null,
DDLCommand nvarchar(max) not null,
DDLXML xml not null

);
go

Setting up a DDL Trigger is fairly straightforward. All the relevant information is returned by the EVENTDATA() function, which returns various details about the DDL event in an XML format.

You can add any other code to flesh out your trigger as you see fit.

I like to add the IP Address and the Application Name. These are easy enough with some built-in functions.


-- replace with your own database name
create database MyTest;
go

-- replace with your own database name
use MyTest;
go
-- modify trigger to capture the information that is relevant to you
create trigger DDLTracking
on database
for create_table, alter_table, drop_table,
create_procedure, alter_procedure, drop_procedure,
create_function, alter_function, drop_function
as begin

set nocount on;

-- grab the trigger event data
declare @eventdata xml;
set @eventdata = EVENTDATA();

-- grab the ip address, sometimes people use another login, this will help trace to their machine
declare @ipaddress nvarchar(25);

select @ipaddress = client_net_address
from sys.dm_exec_connections
where session_id = @@SPID;

-- log the info in our table
insert into DBAAudit.dbo.DDLHistory
(LoginName, IPAddress, AppName, DatabaseName, SchemaName, ObjectName, ObjectType, EventType, DDLCommand, DDLXML)
values (
@eventdata.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(256)'),
@ipaddress,
APP_NAME(), -- grabs what program the user was using, e.g. management studio
@eventdata.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(256)'),
@eventdata.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(256)'),
@eventdata.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
@eventdata.value('(/EVENT_INSTANCE/ObjectType)[1]', 'nvarchar(50)'),
@eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(50)'),
@eventdata.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'), -- nvarchar(max) so long commands are not truncated
@eventdata
);

end;
go

Now, this is a fairly basic DDL Trigger. From here, you can modify it to add any additional information that you may require. As always, tailor any code to your own situation.

If you are co-mingling information from multiple servers, you may wish to add a column for that. You may also wish to look into locating the Audit table on a remote server.
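
For example, a server name column with a default is enough for a basic consolidated setup. A sketch:

-- add a server column so multiple instances can log to one central table
alter table dbo.DDLHistory
    add ServerName nvarchar(256) not null
        constraint DF_DDLHistory_ServerName default @@servername;
go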

And There You Have It

A good Audit or Logging System can help you solve all manner of mysteries. They make troubleshooting a server much easier than trying to divine what happened in the past.

And remember, when you have ruled out the impossible, whatever remains, however improbable, is the answer.