
Tibor Karaszi

Do you want improved performance?

Can you survive a few lost transactions if your server does a "hard shutdown"? If so, check out SQL Server 2014 and "Delayed Durability".

A cornerstone of SQL Server's transaction handling has, up until 2014, been "durability" for a committed transaction. Durability is, by the way, the "D" in the ACID acronym: Atomicity, Consistency, Isolation and Durability.

Durability means that SQL Server has to perform a synchronous write to the LDF file for each transaction. This is so that SQL Server can re-construct all committed transactions up until the point of a (potentially hard) shutdown.

In SQL Server 2014, MS has planned for a database setting called "Delayed Durability". Setting this means that SQL Server can batch writes to the ldf file, which can mean a significant performance improvement for applications where you have many small transactions.
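To illustrate, this is roughly what the setting looks like in T-SQL (a minimal sketch; MyTestDb and the table name are made up):

-- Force delayed durability for all transactions in the database:
ALTER DATABASE MyTestDb SET DELAYED_DURABILITY = FORCED;

-- Or allow it, and let individual transactions opt in at commit time:
ALTER DATABASE MyTestDb SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
INSERT INTO dbo.TestTable (c2) VALUES ('x');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

DISABLED is the default, giving you the usual fully durable commits.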

I did a quick test, using a benchmark from an earlier blog post of mine (http://sqlblog.com/blogs/tibor_karaszi/archive/2008/08/14/are-inserts-quicker-to-heap-or-clustered-tables.aspx) to test what difference I would see for that workload. Roughly (for 50,000 rows, on a PC with a single spinning-disk HD):

All inserts in one transaction: averaged about 0.3 seconds.

One transaction per row with Delayed Durability set to OFF: approx 12 seconds.

One transaction per row with Delayed Durability set to FORCED: approx 1.2 seconds.

As you can see, for this workload we got about a tenfold performance improvement by letting SQL Server batch the write operations to the ldf file. The question is how much improvement you get for your workload, and whether you can tolerate losing some modifications in case of a hard shutdown.
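For reference, the one-transaction-per-row case looks roughly like this (a simplified sketch, not the exact script from the linked post; table and column names are made up):

-- 50,000 single-row transactions; each commit normally waits for a
-- synchronous write to the ldf file, which is what Delayed Durability relaxes:
CREATE TABLE dbo.TestTable (c1 INT IDENTITY(1,1) PRIMARY KEY, c2 CHAR(100) NOT NULL);

DECLARE @i INT = 1;
WHILE @i <= 50000
BEGIN
    INSERT INTO dbo.TestTable (c2) VALUES ('x');
    SET @i += 1;
END;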

Published Thursday, February 13, 2014 1:48 PM by TiborKaraszi


Comments

 

Simon said:

Hey

I did a very similar series of tests which I spoke about here:

http://rule30.wordpress.com/2014/03/09/delayed-durability-in-sql-server-2014-part-1-introduction/

The numbers I came back with were pretty similar to yours - a 10x+ improvement. It has to be said, though, that the workload pattern I used was deliberately designed to maximise the effect of batching up the writes to the transaction log. I haven't got round to trying to generate a more realistic workload.

Cool move from Microsoft either way, though. I'm in the process of finishing off an article on when someone should and should not use DD.

Cheers

S

March 12, 2014 7:00 AM
 

TiborKaraszi said:

Good blog posts, Simon! :-)

Yep, not only cool, but for me a very unexpected move as well. As you said, Durability has in my mind been cast in stone, so to speak.

Please post back, if you remember, when you're done with your third article on this.

One thing I'm curious about is the actual potential for data loss. A hard shutdown is pretty obvious. But what about just stopping the service? Will the stop request be pending until the log records have been hardened? And what about shutting down the OS? Same question there.

I think it is acceptable to take a calculated risk for an unexpected shutdown. But having data loss when stopping SQL Server or re-booting the machine is much less acceptable, IMO.
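For a planned stop, it looks like you can at least flush any pending delayed-durable log records manually first. sys.sp_flush_log is documented for exactly that, though I haven't tested it against the shutdown scenarios above:

-- Flush all delayed-durable transactions in the current database to disk:
EXEC sys.sp_flush_log;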

March 12, 2014 7:42 AM

