The SQL Server Blog Spot on the Web

The Rambling DBA: Jonathan Kehayias

The random ramblings and rantings of a frazzled SQL Server DBA

Enabling Data Compression On All Tables By Default in a SQL Server 2008 Database

One of the great things about answering questions online is that every now and then you get one of those Aha! moments: you create a solution to a problem that, once written, seems really obvious and makes you wonder how it ever escaped you previously. This happened recently on the MSDN Forums, but in this case it wasn't my Aha! moment, it was Aaron Alton's, in a follow-up response he posted behind me.  The question asked was:

"Is there a way to set the default compression level for all tables created in a database?" 

I misread the question initially and thought it was a two-part question, first asking whether compression could be set by default and then, if not, whether it could be changed afterwards, so I responded with how to alter the table and indexes afterwards, since there isn't a default option to set it for all tables.  Later, Aaron replied showing how to use a DDL Trigger to automagically add data compression to all tables.  He followed up today with a blog post titled:

Enabling Data Compression By Default in a SQL Server 2008 Database

It's an interesting solution to the problem posed by the original poster, and when I read it in the forums, my first thought was Duh!  Why in the world didn't that cross my mind?  I'll never know, but it was definitely an Aha! type of thing.  There are some caveats to this solution, which you can read about in his blog post, or on the original forums thread:

Is there a way to set the default compression level for all tables created in a database?
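The heart of that approach is a database-scoped DDL trigger that rebuilds each new table with compression as soon as it is created. A minimal sketch of what such a trigger might look like (the trigger name here is illustrative and error handling is omitted; see Aaron's post for the full version):

```sql
-- Sketch: apply PAGE compression to every newly created table.
-- Trigger name is illustrative; error handling omitted.
CREATE TRIGGER trg_CompressNewTables
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    -- Pull the schema and table name out of the DDL event XML
    DECLARE @schema sysname = EVENTDATA().value(
        '(/EVENT_INSTANCE/SchemaName)[1]', 'sysname');
    DECLARE @table sysname = EVENTDATA().value(
        '(/EVENT_INSTANCE/ObjectName)[1]', 'sysname');

    -- Rebuild the just-created table with PAGE compression
    DECLARE @sql nvarchar(max) =
        N'ALTER TABLE ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
        + N' REBUILD WITH (DATA_COMPRESSION = PAGE);';
    EXEC (@sql);
END
```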

Published Tuesday, February 24, 2009 12:35 PM by Jonathan Kehayias



Denis Gobo said:

I wouldn't go as far as setting it ON for every table

I tested row- and page-level compression with some of our production tables, and for some tables compression helped while for others it did not; it all depends on the data inside the table.

February 24, 2009 11:45 AM

Jonathan Kehayias said:


Read Aaron's blog post.  He actually mentions that there is probably a very good reason why this isn't available as a default option, and that some tables, like lookup tables, gain no benefit from being compressed.  The question on the forums was how to enable it for all tables.

February 24, 2009 12:00 PM

Denis Gobo said:

Yep, I saw that about the lookup tables.  It would be nice if there was an option like

"Compress every table where compression will save 30% in space for all the tables over 1000 pages"

What do you think?

February 24, 2009 12:09 PM

James Luetkehoelter said:

I think Denis has the right idea, and it's something MS has strived for in the past when moving from 6.5 to 7.0 and beyond (lots of dials to tune vs. having an automatic tuning rule kick in when a situation is reached).

I think enabling all by default is not a good idea (I can't stand default behaviours - I like to know explicitly what is going on...).

But something like "if there would be a certain level of space savings, compress" rule...that makes sense to me...

February 24, 2009 12:22 PM

Aaron Alton said:


That would be great.  The only problem is that, with a DDL trigger, you're working in the scope of the original CREATE TABLE statement.  As such, there's no way to know how many pages a table is eventually going to consume at the DDL trigger stage.  The OP on the forums was working with an application that created tables "automatically".  They had determined that these automatically generated tables would benefit from compression, hence the dilemma.

For a DBA working with a "normal" application, compressing all tables by default would be insanity (hence the warnings).  Most people will want to harvest and analyze output from sp_estimate_data_compression_savings and manually decide which tables to compress.  There could be situations where savings would be significant, but compression and decompression overhead would be so large that it invalidated the space savings.  The 30% rule that you're suggesting could be done with a periodically run job, but I would go one step further and say that even that kind of rule is likely too broad for most environments.
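For a single table, harvesting that estimate looks something like this (the table name here is just an example):

```sql
-- Estimate PAGE-compression savings for one table
-- (dbo.SalesOrderDetail is only an example name):
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'SalesOrderDetail',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```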

In other words, I think we're all speaking the same language ;-)

February 24, 2009 1:01 PM

Jonathan Kehayias said:


You should be able to do that with a script like this:


DECLARE @compress_type nvarchar(5) = 'PAGE'

-- Command run once per table by sp_msforeachtable; '?' is the
-- two-part table name placeholder
DECLARE @estimate_cmd nvarchar(max) = N'
DECLARE @schema_name sysname = PARSENAME(''?'', 2)
DECLARE @object_name sysname = PARSENAME(''?'', 1)
EXEC sp_estimate_data_compression_savings
   @schema_name = @schema_name,
   @object_name = @object_name,
   @index_id = NULL,
   @partition_number = NULL,
   @data_compression = '''+@compress_type+''' ; '

DECLARE @alter_cmd nvarchar(max) = ''

-- Holds the result set of sp_estimate_data_compression_savings
CREATE TABLE #Output
(
   object_name sysname,
   schema_name sysname,
   index_id int,
   partition_number int,
   size_with_current_compression_setting bigint,
   size_with_requested_compression_setting bigint,
   sample_size_with_current_compression_setting bigint,
   sample_size_with_requested_compression_setting bigint
)

INSERT INTO #Output (object_name, schema_name, index_id, partition_number,
   size_with_current_compression_setting, size_with_requested_compression_setting,
   sample_size_with_current_compression_setting, sample_size_with_requested_compression_setting)
EXEC sp_msforeachtable @estimate_cmd

SELECT @alter_cmd = @alter_cmd +
   CASE
      -- only rebuild when the estimate shows at least a 30% size reduction
      WHEN (size_with_current_compression_setting /
            (isnull(nullif(size_with_requested_compression_setting, 0), 1) * 1.00)) > 1.3
      THEN
         CASE
            -- heap or clustered index: rebuild the table
            WHEN o.index_id IN (0, 1)
            THEN 'ALTER TABLE ' + QUOTENAME(schema_name) + '.' + QUOTENAME(object_name) +
                 ' REBUILD WITH (DATA_COMPRESSION = ' + @compress_type + ');' + CHAR(10)
            -- nonclustered index: rebuild just the index
            ELSE 'ALTER INDEX ' + QUOTENAME(t.index_name) + ' ON ' +
                 QUOTENAME(schema_name) + '.' + QUOTENAME(object_name) +
                 ' REBUILD WITH (DATA_COMPRESSION = ' + @compress_type + ');' + CHAR(10)
         END
      ELSE ''
   END
FROM #Output o
JOIN
(
   -- page counts per index/partition, to filter out small objects
   select index_name = i.name, p.index_id, p.object_id, p.partition_number,
      reservedpages = sum(a.total_pages),
      usedpages = sum(a.used_pages)
   from sys.partitions p
   join sys.allocation_units a on p.partition_id = a.container_id
   left join sys.internal_tables it on p.object_id = it.object_id
   join sys.indexes i on i.index_id = p.index_id AND i.object_id = p.object_id
   group by i.name, p.index_id, p.object_id, p.partition_number
) as t
ON OBJECT_ID(QUOTENAME(schema_name)+'.'+QUOTENAME(object_name)) = t.object_id
   AND o.index_id = t.index_id AND t.partition_number = o.partition_number
WHERE usedpages > 1000

print @alter_cmd

drop table #Output
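If you want the script to apply the changes instead of just printing them, you could swap the final print for an execute - but only after reviewing the generated commands, since the rebuilds are expensive:

```sql
-- Apply the generated ALTER statements instead of printing them
-- (review the printed output first - rebuilds take locks and I/O):
EXEC sp_executesql @alter_cmd;
```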


February 24, 2009 3:25 PM

Paul Nielsen said:

February 25, 2009 12:38 AM

Denis Gobo said:

Jonathan, I ran the script and it looks good (it took a while on one of the DBs)

February 25, 2009 6:27 AM

Paul Nielsen said:


A script to apply compression has to also test row compression. If ROW and PAGE both give 30%, then you want to use row, not page, compression. And if the object is already compressed with row, then you need to test none and page. If the compression ratio was 32% before, and now with more data it is only 18%, you'll want to turn off compression.


February 25, 2009 11:47 PM

Jonathan Kehayias said:


The values are configurable, so you could change them up, but I didn't know the information your comment provides.  This was just a quick slap-together over about 45 minutes based on Denis's comment, but I guess I'll go back to the mill in the morning, forge a stronger tool based on the new design specifics you have provided, and then post an update.

February 25, 2009 11:51 PM

GrumpyOldDBA said:

The whole point about compression is that it's useful in certain circumstances - but if you want to degrade performance, then by all means enable it for a whole database. That, I would say, is why there is no way to enable compression for a whole database, and I hope there never will be.

Other than that, I find compression very useful in certain circumstances.

February 26, 2009 10:07 AM