

Written by: ronen ariely
12/02/2016 14:49

Introduction

A very common question raised in the forums is: why does a FULL BACKUP that was fast in the past become slow? It can be a one-time issue (a specific backup) or permanent. These questions are usually not related to very large databases (100 TB+), for which we do not usually use FULL BACKUP, and are more common on small-to-medium databases. In this blog I will provide information, explanations, tips, and some links for further reading.

This blog is not a full article on the issue (at this time); it is more a collection of thoughts that came to mind when I saw the question in the forum. The blog will be in a permanent process of improvement. If you have any comments, please post them on my Facebook home page.

Let's Start

First, you need to monitor what your bottleneck is.

- Check the IO using Perfmon and Resource Monitor (these are the fastest built-in tools you have). You might find that the issue is not directly related to SQL Server; maybe you are using all of the disk's write capacity. SQL Server is not the only application on the machine (make sure that others do not use the disk, if possible).
- Monitor the CPU as well, especially if you use compression!

* A FULL BACKUP also copies the portion of the transaction log it needs to make the restored database consistent!

Check the number of Virtual Log Files (VLFs)! A high number of VLFs causes transaction log backups to slow down and can also slow down database recovery.
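A quick way to count VLFs is a sketch like the following (the `sys.dm_db_log_info` DMV is available from SQL Server 2016 SP2 onward; on older versions, `DBCC LOGINFO` returns one row per VLF):

```sql
-- Count the VLFs of the current database (SQL Server 2016 SP2 and later)
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());

-- On older versions, DBCC LOGINFO returns one row per VLF:
-- DBCC LOGINFO;
```

As a rough rule, counts in the thousands are a sign that the log grew in many small increments and may be worth rebuilding with fewer, larger growth steps.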

If you ask the question in a forum, please post some information. Start with general information: what is the server version and edition? What is your operating system? What is your RAID configuration? Are you backing up locally or remotely? Are you using a virtual machine, and if so, what is the disk configuration: is it dynamically growing? What shared hardware are you using?

If this is a one-time case, it might be related to actions that were performed and data that needs to move from the log file to the data files. Check the log file before and after (size, free space). Back up the log file before the full backup.
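A minimal sketch of that check (the database name and backup path are placeholders; adjust them to your environment):

```sql
-- Log size and percent used, per database -- run before and after the backup
DBCC SQLPERF(LOGSPACE);

-- Back up the transaction log before the full backup
-- (hypothetical database name and path)
BACKUP LOG [MyDatabase]
TO DISK = N'D:\Backup\MyDatabase_log.trn';
```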

Do you maintain the indexes (rebuild / reorganize)?
Do not do this as part of the backup maintenance (rebuild => log fills up => the backup needs to work on that data + the time of the rebuild).

How are you backing up the database? 
- native 
- native with compression (in this case check the CPU... it might be the bottleneck!)
- 3rd party tool

Do you try to shrink the log during the backup?! Stop this if you do (I have seen a lot of backup maintenance scripts that people write which include shrinking the log file).

A full backup does the following:

> Forces a checkpoint and makes a note of the log sequence number at this point.
Updated-in-memory (dirty) pages are flushed to disk => time!

> Reads from the data files in the database. Once this is done, SQL Server makes a note of the log sequence number of the start of the oldest active transaction at that point.

> Reads as much transaction log as is necessary!

* You can get more information during the backup by using the WITH STATS option (it does not give a lot).

* To get more diagnostic information you can use trace flag 3604. It instructs DBCC commands, which are designed to send their output to the log, an attached debugger, or a trace listener, to redirect the output to the client (SSMS, for example).

* To get more diagnostic information in SQL Server's error log you can use trace flag 3605. It instructs DBCC commands to redirect their output to the error log.

* Enable trace flag 3014 to return more information about the BACKUP process (together with the previous two).

* You might want to enable trace flag 3004 to get more output about file preparation, bitmaps, and instant file initialization.

* While the above trace flags help when investigating issues, in reality (once your process works OK) we want to suppress the successful-backup log entries. This can be done with trace flag 3226.
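Putting the trace flags and WITH STATS together, a diagnostic run might look like this sketch (database name and path are placeholders):

```sql
-- Session-level: redirect DBCC output to the client (3604)
-- and return extra BACKUP diagnostics (3014, 3004)
DBCC TRACEON (3604, 3014, 3004);

-- Or send the diagnostics to the error log instead (3605):
-- DBCC TRACEON (3605, 3014, 3004);

-- Run the backup with a progress message every 10 percent
BACKUP DATABASE [MyDatabase]
TO DISK = N'D:\Backup\MyDatabase.bak'
WITH STATS = 10;

DBCC TRACEOFF (3604, 3014, 3004);

-- Once the process is stable, suppress successful-backup entries
-- in the error log, instance-wide:
-- DBCC TRACEON (3226, -1);
```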

Improving Backup performance:

According to the information above, there are several actions we can take or verify in order to improve the backups:

> Isolate the backup IO from other application's IO on your system.

> Suppress log messages, both on the client and in the error log.

> Split the backup job:
    * If your transaction log includes a lot of newly committed transactions, back up the log file first.
    * You can execute a CHECKPOINT before the backup.

> Choose the right time to execute the backup.

> Use compression if you do not have CPU issues. It leads to fewer IO writes and might improve both the backup size and the backup time. If the CPU is the bottleneck, compression might lead to a longer time, but in most cases this means that you need more CPU.

> Use multiple backup devices in order to write to all devices in parallel (using as many devices as the number of CPUs you have minus one might be a nice golden rule).

> Use a combination of full, differential, and transaction log backups.
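A striped, compressed backup along the lines above can be sketched as follows (paths are hypothetical; for a real gain, each file should sit on a separate physical disk):

```sql
-- Striped backup: the files are written in parallel, with compression
BACKUP DATABASE [MyDatabase]
TO  DISK = N'D:\Backup\MyDatabase_1.bak',
    DISK = N'E:\Backup\MyDatabase_2.bak',
    DISK = N'F:\Backup\MyDatabase_3.bak'
WITH COMPRESSION, STATS = 10;
```

Note that a restore from a striped backup needs all of the stripe files, so keep them together.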

More to read:

** For more information regarding the BACKUP process, I HIGHLY recommend this article by Paul S. Randal:
https://technet.microsoft.com/en-us/magazine/2009.07.sqlbackup.aspx

** For more information regarding using the Performance Monitor, you can check this article by Brent Ozar:
https://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/

** You can read more about Optimizing Backup and Restore Performance in SQL Server:
https://technet.microsoft.com/en-us/library/ms190954(v=sql.105).aspx

 


I hope this was useful :-)