… Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks.
Two days passed and there was no sign of the tortoise.
"You know, she is taking a lot of time", said one of the tortoises.
A little voice from just out side the fence said, "If you are going to talk that way about me I won't go."
Is it too much to ask of a quite expensive 3rd-party backup tool that it be way faster than the SQL Server native backup?
Or at least save a respectable amount of storage by producing a much smaller backup file? By "much smaller", I mean getting a file at least half the size.
After searching the internet to understand what other "SQL people" are using for database backups, I see that most are using one of three tools, the main players in the SQL backup area:
- LiteSpeed by Quest
- SQL Backup by Red Gate
- SQL Safe by Idera
The feedback on these tools is genuinely enthusiastic. However, while reading the forums and blogs I wondered: is it possible that many people are simply accustomed to using the above tools since the SQL Server 2000 and 2005 days?
This is easy to understand: a 300GB database backup, for instance, using a regular SQL Server 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course).
Then you take a 3rd-party tool that performs the same backup in 30 minutes, resulting in a 30GB file. It leaves you speechless, and you run to management to persuade them to buy it because it is definitely worth the price.
In addition to the increased speed and the disk space savings, you would also get backup file encryption and virtual restore - features that are still missing from SQL Server.
But if you, like me, don't need those additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gains you saw back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.
Medium-sized database
Take a look at the table below to see how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database.
It appears that, in terms of backup speed, SQL 2008 R2 compresses and performs the backup in roughly the same overall time as all three other tools; the 3rd-party tools' maximum compression levels take twice as long.
The backup file savings are not that impressive except at the highest compression levels, but the price you pay for those is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server.
Note that I used two types of destination storage, a SATA aggregate (11 disks) and an FC aggregate (53 disks), and, obviously, on the faster storage the backup was ready in half the time.
Looking at the above results, should we spend the money and bother with another layer of complexity and a software middleman for medium-sized databases? I am definitely not going to.
Very large database
As the next phase of this benchmark, I moved on to a 6 terabyte database, which was actually my main backup target.
Note how using multiple backup files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU core - in the case above, 8 files for a server with two quad-core CPUs. The impact of additional files beyond that is minimal.
However, SQLsafe doesn't show any speed improvement between 4 files and 8 files.
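As an illustration of the striping described above, here is a minimal sketch of a multi-file native backup. The database name and the UNC paths are placeholders, with 4 files on each of the two destination volumes, as in the test environment:

```sql
-- Native backup striped across 8 files (one per core on a 2x quad-core server),
-- 4 files on each of the two destination volumes, with compression enabled.
BACKUP DATABASE <DBNAME>
TO DISK = '\\<networkpath1>\par1.bak',
   DISK = '\\<networkpath1>\par2.bak',
   DISK = '\\<networkpath1>\par3.bak',
   DISK = '\\<networkpath1>\par4.bak',
   DISK = '\\<networkpath2>\par5.bak',
   DISK = '\\<networkpath2>\par6.bak',
   DISK = '\\<networkpath2>\par7.bak',
   DISK = '\\<networkpath2>\par8.bak'
WITH FORMAT, COMPRESSION
```

Each DISK device gets its own writer thread, which is what drives the parallel I/O gain.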
Of course, with such huge databases every half percent of compression translates into noticeable numbers: saving almost 470GB of space may turn a backup tool into quite a valuable purchase. Still, the backup speed and the high CPU usage are variables that should be taken into consideration.
As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time.
Bottom line: 3rd-party backup tool developers, we are waiting for a breakthrough release.
There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.
Test environment:
- SQL Server 2008 R2 SP1
- 2 quad-core CPUs
- Database location: NetApp FC 15K aggregate, 53 disks
- Backup destination volumes: two physical NetApp FC 15K aggregates, 53 disks each, 4 files on each volume
No matter how good a tool's UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I used the extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance).
Native SQL Server backup:
BACKUP DATABASE <DBNAME>
TO DISK = '\\<networkpath>\par1.bak'
WITH FORMAT, COMPRESSION
LiteSpeed by Quest (via its xp_backup_database extended stored procedure):
EXEC master.dbo.xp_backup_database
@database = N'<DBName>',
@backupname = N'<DBName> full backup',
@desc = N'Test',
@init = 1
SQL Backup by Red Gate (via its sqlbackup extended stored procedure):
EXEC master..sqlbackup
'-SQL "BACKUP DATABASE <DBNAME>
TO DISK = ''\\<networkpath>\par1.sqb''
WITH DISKRETRYINTERVAL = 30,
DISKRETRYCOUNT = 10,
COMPRESSION = 4"'
SQL Safe by Idera (via its xp_ss_backup extended stored procedure):
EXEC master..xp_ss_backup
@database = 'UCMSDB',
@filename = '\\<networkpath>\par1.bak',
@backuptype = 'Full',
@compressionlevel = 4,
@backupfile = '\\<networkpath>\par2.bak',
@backupfile = '\\<networkpath>\par3.bak'
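To tie this back to SQL Server Agent, here is a minimal sketch of scheduling the native backup as a job. It assumes the standard msdb procedures (sp_add_job, sp_add_jobstep, sp_add_jobserver); the job name and the path are placeholders:

```sql
USE msdb;
GO
-- Create the job, add one T-SQL step that runs the native compressed backup,
-- and target the local server so the Agent picks the job up.
EXEC dbo.sp_add_job @job_name = N'Full backup - <DBName>';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Full backup - <DBName>',
    @step_name = N'Run full backup',
    @subsystem = N'TSQL',
    @command   = N'BACKUP DATABASE <DBNAME> TO DISK = ''\\<networkpath>\par1.bak'' WITH FORMAT, COMPRESSION';
EXEC dbo.sp_add_jobserver @job_name = N'Full backup - <DBName>';
```

A job created this way shows up in the Agent job history, which is what our monitoring systems watch.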
If you still insist on using a 3rd-party tool with its maximum compression level for backups in your production environment, you will definitely need to consider limiting its CPU usage, which will increase the backup time even more:
- Red Gate: use the THREADPRIORITY option (values 0 - 6)
- LiteSpeed: use @throttle (a percentage, like 70)
- SQL Safe: the only thing I found was the @Threads option.
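For example, a throttled LiteSpeed run might be sketched like this; @throttle is the CPU-limit parameter noted above, while the remaining parameter values are placeholders, so check the exact syntax against the LiteSpeed documentation:

```sql
-- LiteSpeed full backup with medium compression, limited to roughly 70% CPU.
EXEC master.dbo.xp_backup_database
    @database = N'<DBName>',
    @filename = N'\\<networkpath>\par1.bak',
    @compressionlevel = 4,
    @throttle = 70
```

Expect the trade-off described above: the lower the CPU ceiling, the longer the backup runs.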