
Michael Zilberstein

SQL Server 2014 – delayed transaction durability

As I'm downloading SQL Server 2014 CTP2 at this very moment, I've noticed a fascinating new feature that hadn't been announced in CTP1: delayed transaction durability. It means that if your system is heavy on writes and, on the other hand, you can tolerate data loss on rare occasions, you can consider committing transactions with DELAYED_DURABILITY = ON. In that case a transaction is committed when its log records are written to a buffer in memory - not to disk as usual. This way transactions can become much faster - especially when log IO is the bottleneck. Delayed durability can also smooth out write peaks. I'm looking forward to using it.

But… unfortunately, this feature works for In-Memory OLTP only! Which greatly reduces its usefulness. Before noticing this limitation, I had already imagined how I could use it in one of my heaviest procedures, which generates tens and sometimes hundreds of megabytes of log every time it runs. And no, it isn't an OLTP procedure - it works with BLOBs, so Hekaton isn't an option. And I can always reload this data if something fails. So delayed durability could potentially reduce the procedure's duration by 30-40%. Potentially - if Microsoft some day implements it for really heavy writers and not only for In-Memory OLTP.
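For illustration, here is a minimal sketch of how the option looks in CTP2 for a natively compiled procedure, where delayed durability is declared on the atomic block. The procedure and table names are placeholders (dbo.EventsInMem is assumed to be a memory-optimized table), and the exact syntax may still change before RTM:

CREATE PROCEDURE dbo.usp_InsertEventDelayed
    @EventId INT, @Payload NVARCHAR(4000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'English',
    DELAYED_DURABILITY = ON  -- commit is acknowledged once the log records sit in the in-memory log buffer
)
    -- dbo.EventsInMem is an assumed memory-optimized table
    INSERT INTO dbo.EventsInMem (EventId, Payload) VALUES (@EventId, @Payload);
END;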

I guess the primary use of this feature is in extremely loaded environments with an IO bottleneck on the transaction log - in such environments even a millisecond or a few microseconds shaved off transaction duration can prevent a collision, and a possible failure, because as you probably already know, Hekaton treats conflicts differently: instead of blocking, a transaction can simply fail on inconsistency.

Published Wednesday, October 16, 2013 5:08 PM by Michael Zilberstein


Comments

 

LondonDBA said:

Oracle and other database systems (e.g. MySQL, PostgreSQL) implement group commit. For Oracle, see the documentation for COMMIT WRITE BATCH here: http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_4010.htm. It states: The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation.
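For comparison, a minimal sketch of the Oracle syntax that documentation describes (this is Oracle SQL, not SQL Server, and audit_log is a placeholder table):

INSERT INTO audit_log (event_text) VALUES ('something happened');
-- Redo for this commit is buffered and written together with other transactions' redo
COMMIT WRITE BATCH;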

Also here's a Microsoft Research paper detailing results with a version of SQL Server with group commit: http://research.microsoft.com/~gray/papers/JHU_thumper.doc. From a support case we raised with Microsoft I understand an aspect of this group commit has been implemented in the Enterprise Edition of SQL Server 2008 R2. The background to our case was that on the same hardware for the same simple test harness doing concurrent inserts we were seeing far higher transactions per second for SQL Server 2008 R2 Enterprise Edition compared to SQL 2008 R2 Standard Edition. The explanation was the undocumented group commit feature in Enterprise Edition. I'm straying off the point of your blog entry but thought this to be tangentially relevant.

October 16, 2013 3:46 PM
 

Michael Zilberstein said:

I suppose you're talking about the minimal logging feature:

http://technet.microsoft.com/en-us/library/ms191244(v=sql.105).aspx

I've never heard of Microsoft (or Oracle or Postgres) deviating from the strict ACID approach. "D" stands for Durability, meaning a transaction can be committed only after its log is hardened to disk, so in case of a crash it can be rolled forward or rolled back based on the transaction log / redo log. Here we have a revolutionary approach for an RDBMS - the "D" part of ACID is delayed, so a technically committed transaction may never make its way to disk, if a crash happens while the log is still in a memory buffer. So data loss is a possibility - it should be taken into account and handled in the application design. For example, in my application I can reload any data that wasn't committed because of a crash.

October 16, 2013 4:17 PM
 

Michael Zilberstein said:

Actually, from Jim Gray's whitepaper I learned that my assumption about the primary use of delayed durability was right: extremely loaded OLTP environments. So instead of a huge number of small writes to the log, we'll have far fewer, larger IOs.

It's a pity that the whitepaper is from January 2007 - Jim Gray says that implementing the functionality is very easy, and still it took 6 years to add it to the product, even in a limited (In-Memory-only) way. I suppose it required a sort of revolution in the product team's mind - just the thought of deviating from ACID used to be considered heretical. Probably the appearance of Big Data systems with their "eventual consistency" persuaded them that it can work. But maybe I've gone too far in my assumptions. :-)

October 17, 2013 2:32 AM
 

Panagiotis Antonopoulos said:

Delayed durability can be used for all SQL Server transactions, not only for in-memory OLTP.

The SQL Server documentation needs to be corrected.

As described in the link Michael included, there is a DATABASE setting, a COMMIT-level option and an ATOMIC BLOCK option. The first two apply to both regular SQL Server transactions and in-memory OLTP. The last (the ATOMIC BLOCK) applies to in-memory OLTP only.
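For reference, a minimal sketch of the first two options as documented for SQL Server 2014 (MyDb, dbo.AuditLog and its column are placeholders):

-- Database-level setting: ALLOWED lets individual commits opt in, FORCED applies it to all commits
ALTER DATABASE MyDb SET DELAYED_DURABILITY = ALLOWED;

-- Per-transaction opt-in on a regular, disk-based table
BEGIN TRANSACTION;
    INSERT INTO dbo.AuditLog (EventText) VALUES (N'something happened');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);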

November 1, 2013 11:16 PM
 

Michael Zilberstein said:

Panagiotis, that's interesting!

BOL is very unequivocal: "SQL Server In-Memory OLTP transaction commits can be either fully durable, the SQL Server default, or delayed durable." But I indeed see that there is a database-level "Delayed Durability" option that can be set to ALLOWED or even FORCED with no relation to In-Memory OLTP. It would be interesting to do some tests to see whether it indeed works without Hekaton. On a test system that would be easy - when nothing else runs, you can easily measure IO on the transaction log file. So we should be able to observe how both the number and the size of IOs change depending on the "Delayed Durability" setting.
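For example, a minimal sketch of such a measurement, assuming a database named MyDb with a single log file (file_id = 2) - snapshot these numbers before and after the test run and compare the deltas:

SELECT num_of_writes, num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID(N'MyDb'), 2);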

November 3, 2013 4:00 AM
 

Panagiotis Antonopoulos said:

I am following up with the documentation team to correct the BOL.

I am a developer for SQL Server and worked on the Delayed Durability feature, so I can tell you for sure that this feature has been designed for both regular SQL Server transactions and in-memory OLTP.

If you set the DELAYED_DURABILITY setting to FORCED for your database and run a simple workload, you can notice from the perf counters and DMVs that there won't be many log flushes happening.
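For example, a minimal sketch of checking that through the DMV view of the perf counters (MyDb is a placeholder database name; these are cumulative values, so with FORCED they should grow far more slowly during the workload):

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Log Flushes/sec', N'Log Bytes Flushed/sec')
  AND instance_name = N'MyDb';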

Also the performance for read/write transactions will be a lot better.

I apologize for the inconsistency in the BOL.

November 4, 2013 11:00 PM
 

Michael Zilberstein said:

Panagiotis, thanks! I didn't understand from your first comment that you were speaking first-hand.

So I guess I will do some tests and post a followup.

November 5, 2013 2:13 AM
 

Michael Zilberstein said:

In my previous post about the subject I've complained that according to BOL, this feature is enabled …

November 5, 2013 9:51 AM
 

Simon said:

Hey there

Thanks for this post - it was actually your introduction to Delayed Durability that set me off doing my own research. I've actually done a small series on Delayed Durability on my blog and included some fairly crude performance tests if you're interested:

http://rule30.wordpress.com/2014/03/09/delayed-durability-in-sql-server-2014-part-1-introduction/

Best Regards

Simon

March 9, 2014 6:51 PM
 

John Billingsley said:

What additional techniques and recommendations are there to ensure eventual consistency when leveraging Delayed Durability?

March 26, 2014 8:15 AM
