
Wednesday, January 7, 2009

Free eBook from RedGate : Dissecting SQL Server Execution Plans (Grant Fritchey)

For the last few months I have been working on Understanding SQL Server Execution Plans at a very basic level: how to retrieve Execution Plans from SQL Server, how to read them, what information they contain, and what to look for as red flags of potential areas of concern when performance tuning code.  Last night I was having a chat with Jim McLeod, running some thoughts by him, when he mentioned this eBook, so I did a quick search and located a free copy from RedGate, which I promptly downloaded from:

http://www.red-gate.com/specials/Grant.htm?utm_content=Grant080623

All I can say is WOW.  This has to be the best reference I have ever seen on Execution Plans in SQL Server.  My hat's off to Grant Fritchey, first for putting this together, and then for making the concepts simple enough that you really don't have to be a Sr. DBA to figure out what is happening.  I'd highly recommend this eBook to anyone who writes or reviews TSQL code.

Thursday, November 13, 2008

The Database Transaction Log - Part 1: Managing Size

This post will be the start of a series on the Database Transaction Log.  This started out as a single posting, but the topic is so vast that there is no way to properly cover it in a single post; I could probably write a mini pamphlet on the transaction log in SQL Server, the topic is just that big.  I plan to focus on common problems that I consistently see on the MSDN Forums regarding the transaction log, as well as how to prevent and correct them.

The transaction log in SQL Server is one of the most important parts of a SQL Server database, as well as one of the most common sources of problems I see online.  If you don't believe me, do a Google, MSN, or Yahoo search for 'transaction log full', and you will find article after article and question after question dealing with the subject.  Worse yet are the stories of a deleted transaction log:

http://www.sqlservercentral.com/articles/Disaster+Recovery/63311/
https://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2727860&SiteID=1
http://bytes.com/forum/thread543465.html

To start with, let me first say that you should never, ever, under any circumstances delete your database log file.  This is, as Jens Suessmeyer put it, "the transactional heart of the database."

In a managed database server, the database data and log files should only grow when specified by a Database Administrator.  This is not to say that you should turn AutoGrowth OFF on the databases in your environment.  I prefer that this option is left ON as a sort of insurance policy for your database, in the event that the available space drops fast enough that you can't react in time to prevent a problem.  It should, however, be configured to grow by a fixed size rather than a percentage.  The reason is that a fixed size has a predictable cost should the database have to grow automatically, whereas percentage growth becomes more and more expensive as the database gets larger.
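
As a minimal sketch of setting a fixed growth increment (the database and file names here are hypothetical placeholders, not from my environment):

```sql
-- Grow the log file by a fixed 512 MB increment instead of a percentage.
-- MyDatabase and MyDatabase_log are placeholder names.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB);
```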

I previously blogged about the How to Automate Common DBA Tasks series that I posted on the MSDN Forums Example Site.  One of the articles included in that series is a Data File Free Space monitoring script.  This can be used to monitor the available free space in the log files, and provide early notification to an administrator that they need to grow the file.  By manually managing the size of the log file, an administrator will know if there is a problem almost immediately.  The transaction log on an established database should very rarely have to be grown once properly sized.

So what exactly will cause the log file to fill up and need to be grown?  There are a number of things that can cause this:

  1. Full Recovery Model without log backups
  2. Long-running transactions
  3. Uncommitted transactions
  4. Rebuilding indexes
  5. Bulk loading data without minimal logging

The first item listed is the primary reason that log files become oversized and eventually consume all of the available disk space on the server.  I have yet to figure out why or how, but almost every time someone posts a problem with an oversized transaction log, the database is in FULL recovery and the only backups being performed are nightly full backups.  If you are not backing up the transaction log between full backups, then the database should be in SIMPLE recovery.  The reason is that in FULL recovery the transaction log is only truncated by log backups taken with the BACKUP LOG command, and specifying WITH TRUNCATE_ONLY doesn't make sense.  If you aren't going to back up the logged data for point-in-time recovery purposes, then there is no reason not to let the log auto-truncate on checkpoint.
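
As a sketch of the two sane configurations (database name and backup path are hypothetical):

```sql
-- Option 1: stay in FULL recovery and back up the log regularly,
-- which lets the inactive portion of the log truncate.
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.trn';

-- Option 2: if point-in-time recovery is not needed, switch to
-- SIMPLE recovery so the log truncates on checkpoint.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
```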

The long-running transactions most often associated with a growing or oversized transaction log are generally data purging processes that run as a single transaction, like the following:

DELETE FROM MYTABLE
WHERE MYCOL < @Criteria

Purging data like this will be heavily logged, even on databases in SIMPLE recovery.  The reason is that there is no intermediate commit, so if one million rows are being deleted, the log has to hold all one million deletes to be able to perform a rollback of the operation.  The appropriate way to purge data like this was provided by fellow forum member Adam Haines in response to numerous forum postings:


DECLARE @BatchSize INT,
        @Criteria DATETIME

SET @BatchSize = 1000
SET @Criteria = '1/1/2005'

WHILE EXISTS (SELECT 1 FROM MYTABLE WHERE MYCOL < @Criteria)
BEGIN
    DELETE TOP (@BatchSize)
    FROM MYTABLE
    WHERE MYCOL < @Criteria
END

Another solution to this problem, provided by Denis the SQL Menace, is:


DECLARE @BatchSize INT,
        @Criteria DATETIME,
        @RowCount INT

SET @BatchSize = 1000
SET @Criteria = '20050101'
SET @RowCount = 1000

SET ROWCOUNT @BatchSize

WHILE @RowCount > 0
BEGIN
    DELETE FROM MYTABLE
    WHERE MYCOL < @Criteria

    SELECT @RowCount = @@ROWCOUNT
END

SET ROWCOUNT 0

Both of these solve the problem by working in smaller implicit transactions, which perform much better and also keep the size of the transaction log under control.


Uncommitted/open transactions are problematic beyond just causing transaction log growth.  An open transaction will also cause excessive blocking in the database, which impacts the application users.  To find open transactions in a database, you can use DBCC OPENTRAN, which will return the active transactions in the current database.  To demonstrate this, run the following statement in one window:


use tempdb
go
create table mytable
(rowid int identity primary key)

begin transaction
insert into mytable default values

Then open a new window and run:


dbcc opentran

The output from DBCC OPENTRAN will show the open transaction and offending SPID in the database as follows:



Transaction information for database 'tempdb'.

Oldest active transaction:
    SPID (server process ID): 55
    UID (user ID) : -1
    Name          : user_transaction
    LSN           : (20:44:214)
    Start time    : Nov 13 2008 10:35:09:780PM
    SID           : 0x01050000000000051500000064116832e36ccd723422e75bba640000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.


To investigate this further, you can use DBCC INPUTBUFFER(spid#) to get the last command run on the offending SPID.  From there, you can determine what action you want to take: whether to kill the offending SPID and cause it to roll back, or to troubleshoot further and identify what caused the transaction to be left open.
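
A minimal sketch of that investigation, using the SPID 55 from the sample output above (verify what the session is doing before killing anything):

```sql
-- Inspect the last statement sent by the offending session.
DBCC INPUTBUFFER (55);

-- If the transaction was simply left open, kill the session,
-- which forces its open transaction to roll back.
KILL 55;
```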


Rebuilding indexes and bulk loading data should both be done in SIMPLE or BULK_LOGGED recovery.  The reason for this is to minimize the logging that is done, although you can't eliminate it completely.  In my experience, rebuilding indexes can require a transaction log larger than your largest index.  This is where sizing your log file is important as part of capacity planning, and it should be done considering how large the database might be in the future.
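
As a sketch of that approach for an index rebuild (database, table, and path names are hypothetical), switch recovery models for the maintenance window and then restore the log chain afterward:

```sql
-- Minimize logging for the maintenance window.
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- Switch back, then take a log backup to restart the log chain.
ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';
```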


If you are reading this because you are already in trouble, there are a number of ways to deal with an oversized transaction log, and none of them involves deleting the file from the file system.  If you get to the point that your server is completely out of disk space, there is no network path where you can back up your log, and the log is hundreds of GB in size, then the best recommendation is to put the database into SIMPLE recovery, which will truncate the active portion of the log and allow it to be shrunk.  I would highly caution you, and recommend that a new full backup of your database be taken immediately after performing such an operation.
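
A sketch of that last-resort sequence (names, sizes, and paths are placeholders for illustration):

```sql
-- Truncate the active portion of the log on checkpoint.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- Shrink the log file back to a sensible size (here 1024 MB).
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 1024);

-- Re-establish a valid backup base immediately afterwards.
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_full.bak';
```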


Look for further postings on the Transaction Log to come.


References:
http://support.microsoft.com/kb/873235
http://support.microsoft.com/kb/317375/

Friday, November 7, 2008

Are you safe without your Keys?

I am not talking about your house keys or your car keys; I am talking about your SQL Server keys.  SQL Server 2005 allows you to use keys for protecting objects and data in the server and databases.  If you are doing encryption in SQL Server, it is very likely you already know that you have keys in SQL, but if you have never touched encryption, you might not know that you, too, have at least one key in SQL: the Service Master Key.  For those not familiar with it, the Service Master Key is the root of the encryption key hierarchy in SQL Server 2005; by default it is used to encrypt Database Master Keys, linked server security information, and any credentials created using CREATE CREDENTIAL.  If you don't use any of the above, then read no further, you are probably fine.

Personally, I learned about keys, certificates, and encryption while studying for my MCITP Database Administrator exams at the beginning of this year, and that was enough to add a backup of the SMK into a password-protected file to my installation procedures for each SQL Server instance I have.  I have never needed to use the backup to date.  So what sparked this blog posting?  As usual, a post on the MSDN Forums, where a backup of a database using encryption was restored to a different server, and the poster didn't know the password to the Database Master Key.  After goofing the initial answer on it, I had to do some investigating into what could be done when the password used to encrypt a DbMK is lost; the answer isn't all that complex, and it is included in the post referenced above.

However, it is much easier to just avoid the situation completely and have available backups of the SMK, as well as all the other keys in use on your database server. These backups can easily be taken with DDL commands:

BACKUP SERVICE MASTER KEY (Transact-SQL)
RESTORE SERVICE MASTER KEY (Transact-SQL)

BACKUP MASTER KEY (Transact-SQL)
RESTORE MASTER KEY (Transact-SQL)

BACKUP CERTIFICATE (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
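
As a sketch of what those commands look like in practice (file paths, names, and passwords are placeholders; store the files and passwords somewhere safe and separate from the server):

```sql
-- Back up the Service Master Key to a password-protected file.
BACKUP SERVICE MASTER KEY
    TO FILE = 'D:\KeyBackups\ServiceMasterKey.smk'
    ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';

-- Back up a Database Master Key from within its database.
USE MyDatabase;
BACKUP MASTER KEY
    TO FILE = 'D:\KeyBackups\MyDatabase_DbMK.dmk'
    ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';

-- Back up a certificate along with its private key.
BACKUP CERTIFICATE MyCertificate
    TO FILE = 'D:\KeyBackups\MyCertificate.cer'
    WITH PRIVATE KEY (
        FILE = 'D:\KeyBackups\MyCertificate.pvk',
        ENCRYPTION BY PASSWORD = '<StrongPasswordHere>');
```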

One of the best references online is Laurentiu Cristofor's blog on MSDN for security and cryptography in SQL Server.  Laurentiu is a frequent answerer in the SQL Security forum on MSDN and has a wealth of information available about this subject.  His blog is definitely a recommended read and is on my personal blog roll.

Since this is such a vast subject, I plan on putting some time into it in the near future, and writing some additional information on the subject.  However, in the mean time, backup those keys and certificates.

EDIT:  One thing I realized while thinking about this blog is that I do have two different certificates implemented on my servers: one for mirroring between two servers not on a domain, and one for a certificate user that provides permissions for a dynamic SQL search procedure, which uses parameterized dynamic SQL to perform optimized searching in one of our databases, since application users are required to use stored procedures and have no table-level access.  The certificate user has no associated login and is used WITH EXECUTE AS to provide the table-level SELECT rights needed to run the dynamic SQL code in the stored procedure.  The calling user only has execute rights on the stored procedure but can still make use of its dynamic code thanks to this type of impersonation.  I do have backups of both of these certificates, created at the same time as the certificates themselves.

If you want more information about using certificates to grant elevated rights with a certificate user, look at Erland Sommarskog's article Giving Permissions through Stored Procedures.

Thursday, November 6, 2008

Are your Disks Aligned Properly?

I am always on the lookout for new blogs to watch and learn from.  Not long ago, Kevin Kline wrote a very intriguing blog post:

How to Improve Application and Database Performance up to 40% in One Easy Step

I read it, but I didn't get a lot of time to dig into it until recently.  I finally got around to digging into the topic more, and I also spoke to one of our server admins about it.  Doing some research, we found that our entire environment is running on misaligned partitions.  I found an additional resource with an excellent slide deck on the MSDN blogs, by a guy named Jimmy May:

Disk Partition Alignment (Sector Alignment) for SQL Server: Part 1: Slide Deck

I am working on testing this by rebuilding a disk completely to see how it affects SQLIO benchmarks before and after.  I'll post results later on, but this is definitely an interesting topic.

More to come........

Tuesday, November 4, 2008

Automating Common DBA Tasks

A few weeks ago I did a live meeting presentation for the PASS Database Administrators Special Interest Group (DBA SIG) on Automating Common DBA Tasks.  I spent the last few weeks trying to figure out how to put the source code from my own processes online for others to view and use.  One problem I encountered is that I never designed this to be used by anyone other than myself, so the code wasn't really friendly to anyone but me.  I also did a number of things that are specific to my SQL Server environment and were overly complex to explain online.  So in the end I reworked the majority of my own code into what I hope is an easy-to-follow, easy-to-implement article series that I put up on the MSDN Forums Example site we use for the SQL Server Forums.

http://code.msdn.microsoft.com/SQLExamples/Wiki/View.aspx?title=AutomatedDBA

If you happen to have problems with any of the scripts, please let me know by leaving me a comment or sending me an email, and I will correct the problem immediately.

Tuesday, August 12, 2008

How to tell if you have Instant File Initialization Enabled

I subscribe to a number of great blogs, and Paul Randal's is one of my favorites.  As a former member of the SQL Development team at Microsoft, and the writer of such wonderful tools as CHECKDB, his knowledge of the storage engine is amazing at times.  I don't usually like to cross-post blogs like this, but Paul has a great post for validating that you have Instant File Initialization enabled:

How to tell if you have instant initialization enabled?

Tuesday, June 24, 2008

SQL Injection on the Rise

Microsoft released a new advisory today on the rising number of instances of SQL Injection that are happening on the web. 

To view the security advisory, visit the following Microsoft Web site:

http://www.microsoft.com/technet/security/advisory/954462.mspx

If you have web sites that concatenate unchecked user inputs or parameter line arguments into SQL database calls, then it is important that you fix this problem following the guidelines in this advisory.  This kind of code is all too common, so it is not overly surprising to see these attacks more and more.  It is a common question on the forums, and more than a few posts have been dedicated to helping people investigate these problems after being attacked.
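
As a sketch of the core fix on the database side (table, column, and variable names here are hypothetical), pass user input as a typed parameter via sp_executesql instead of concatenating it into the statement:

```sql
-- Vulnerable pattern: user input concatenated into the statement.
-- EXEC ('SELECT * FROM Users WHERE Name = ''' + @UserInput + '''')

-- Safer pattern: the input is passed as a typed parameter,
-- so it can never be executed as SQL.
DECLARE @UserInput NVARCHAR(50)
SET @UserInput = N'O''Brien'

EXEC sp_executesql
    N'SELECT * FROM Users WHERE Name = @Name',
    N'@Name NVARCHAR(50)',
    @Name = @UserInput
```

The same principle applies in the application tier: use the parameter collections of your data access library rather than string concatenation.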

Wednesday, April 30, 2008

Lock Pages In Memory Incompatible with Virtualization

In a previous posting I provided links to references for Enabling the Lock Pages in Memory Option on 64 bit Windows Servers.  I recently learned this is not compatible with Servers running in a virtualized environment.  After enabling this on two servers that use physical hardware, without problems, our server admin agreed that we should probably implement it on all of our 64 bit SQL Servers following the Microsoft recommended practice, so we made the change at the end of the week last week.

Everything was OK until Sunday, just before noon, when I received a series of alerts on my Treo that connection tests to one of our primary SQL Servers were failing.  Though not recommended or supported, we run SQL Server on VMWare 3.5 for Disaster Recovery, and we have never had trouble with it.  This server happened to be one of the virtual servers, so I logged into work and found that the server was completely dead: no ping response, no RDP access.  I jumped onto the VM Client Console and it was blue-screened.  Not good, but at least I could restart it and start looking for what might have caused it.  However, there were no logs, no dumps, nothing.

Unfortunately, I have learned that sometimes in IT the unexplainable happens and then doesn't reoccur.  A phone call to our server admin and some digging on his part didn't turn anything up either, so we were stuck with no answer: just wait and see if it happens again.  The performance charts for the server held no clue to what could have occurred.  It was moving along like normal, then dead.

Monday morning the server was online and running just fine.  Somewhere around 10:30am, one of the other server administrators did a VMotion of the server to move it from one host to another.  This has been a standard practice by the server team for a few years, for load balancing or maintenance on the virtual hosts, and there had never been problems with it before.  Generally the memory cache flushes out and has to re-grow when they do this, so it is not often that they move a SQL Server from one host to another.  This time, however, the server immediately lost all but 1/4 of its normal memory, and it would not recover.  Instead it began paging everything to the virtual swap memory on disk until it finally crashed completely.  Because my monitoring for high memory pressure never fired an alert for this condition, something I have since corrected, the server limped along for just over 2 hours before it died completely.  Interestingly enough, not one complaint of poor performance was received during this time span.

The first thing we did was disable the last change made, locking pages in memory, and restart the server.  Then it was a waiting game to see if the problem repeated before we could get a scheduled outage at night to take the server offline and really look it over.  Later that night we couldn't reproduce the problem by moving the server, so, problem solved?  To find out for sure, we decided to move a development server with the option set and see what happened.  Since there had been a delay between the move and the crash on the heavily loaded production server, and the development servers have little load on them, I wrote a process to generate load against the server and enabled it.  It took just over 16 hours, but the server crashed just like the production server.

If you run SQL Server on VMWare 3.5 and you use VMotion to move servers, do not Enable Locking Pages in Memory.

Tuesday, April 15, 2008

Locking Pages In Memory on 64 bit SQL Servers

One of the benefits of 64-bit SQL Server is that it allows you to utilize memory beyond the 4GB limit without additional changes like enabling AWE, the /PAE extension, and the /3GB switch, as was done in 32-bit SQL 2000. However, a lot of recent questions on the forums brought to light a recommended, though not required, best practice configuration: enabling Lock Pages in Memory for x64 servers.

Some important Links on this topic:

PSS SQL Server Engineers Blog: Do I have to assign the Lock Pages in ...
Ask the Performance Team : Lock Pages in Memory...do you really need it?
How to: Enable the Lock Pages in Memory Option (Windows)
How to reduce paging of buffer pool memory in the 64-bit version ...

If you haven't done this on your 64 bit servers already, you should review the above and change your configuration.