Wednesday, 15 June 2022


The T-SQL DBCC (Database Console Commands) statement performs several types of tasks, mainly of the validation and maintenance type.

Some of the DBCC commands, like the ones below, work on an internal read-only database snapshot. The database engine creates the snapshot and brings it to a transactionally consistent state; the DBCC command then performs its checks against this snapshot. When execution completes, the snapshot is dropped.


The DBCC CHECKALLOC command checks the consistency of disk space allocation structures for a specified database.

The DBCC CHECKCATALOG checks the catalog consistency within the specified database.

The DBCC CHECKFILEGROUP checks the allocation and structural integrity of all tables and indexed views in the specified filegroup of the current database.

The DBCC CHECKTABLE command checks the integrity of all the pages and structures that make up the table or indexed view.
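As a rough sketch, the four commands above can be invoked as follows (the database, filegroup ID, and table names are illustrative placeholders, not from any specific system):

```sql
-- Illustrative invocations; object names are placeholders
DBCC CHECKALLOC ('MyDatabase');    -- disk space allocation structures
DBCC CHECKCATALOG ('MyDatabase');  -- catalog consistency
DBCC CHECKFILEGROUP (1);           -- filegroup 1 of the current database
DBCC CHECKTABLE ('dbo.MyTable');   -- pages and structures of one table or indexed view
```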

We have described the function of four commands, and no, we have not missed DBCC CHECKDB. We discuss DBCC CHECKDB below, because this command is essentially the "sum" of the four commands above.

DBCC CHECKDB: What does this command do?

DBCC CHECKDB is an important command because it checks both the logical and physical integrity of all the objects in the specified database. This command performs the following: 

  • Executes DBCC CHECKALLOC on the database.
  • Executes DBCC CHECKTABLE on each table and view.
  • Executes DBCC CHECKCATALOG on the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.

Usage of this command is simple. Just indicate the database name.
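For example, a minimal invocation looks like this (the database name is a placeholder; NO_INFOMSGS simply suppresses informational messages):

```sql
-- Check the logical and physical integrity of a database; name is illustrative
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;
```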

As the checks proceed, a log is produced.

Let us now examine the log report. It shows the name of the database.

The first highlighted part of the log actually refers to the last check carried out by the CHECKDB command: the validation of the Service Broker data in the database.

Service Broker is an asynchronous messaging framework with which you can implement scalable, distributed, highly available, reliable, and secure database applications based on SQL Server.

The logical checks then start with a primitive check on the data pages of critical system tables.

If any errors are found at this point, they cannot be fixed and CHECKDB terminates immediately.

This error message appears: "System table pre-checks: Object ID O_ID. Loop in data chain detected at P_ID. Check statement terminated because of an irreparable error."

Then, logical checks are performed on all the other tables, both system and user.

Without entering too many details, logical checks that are performed include:

  • Validate each table’s storage engine metadata
  • Read and check all the data, index, and text pages, depending on the page type
  • Check all inter-page relationships
  • Check the page header counts in each page
  • Perform any necessary repairs (if a repair level was specified)

If the command is executed to check whether the database has problems, look at the end of the log. If all is well, you will see 0 allocation errors and 0 consistency errors. Otherwise, there is a problem.


We can check the log to find out on which object the corruption occurred. We can find tables and indexes, highlighted in red.

This command has options to repair the database, in case of errors.

By specifying one of the options in the DBCC command, we can try to fix the errors.

I suggest using the repair options (REPAIR_FAST, REPAIR_REBUILD, and REPAIR_ALLOW_DATA_LOSS) only as a last resort. Note that REPAIR_ALLOW_DATA_LOSS can cause data loss, and that REPAIR_FAST performs no repair actions at all; it is kept for backward compatibility only.

Let us see how this option works.


The DBCC command checks both physical and logical integrity of the database. If there are any errors, we can try to fix them.

In particular, the "most aggressive" option is REPAIR_ALLOW_DATA_LOSS which attempts to repair data even at the cost of losing it.

Let us see how this command works when this option is specified.

I have a database "Recovered_corrupted_db_data" with a table "corrupted_usertable".

Often, corruption errors are first noticed when running a SELECT on a table.

When we execute this query:

We get this error:
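As a rough sketch, the failing query and a typical checksum error look like the following (the exact message wording, severity, and page IDs vary by database and page; this is illustrative only):

```sql
-- Illustrative query against the example table
SELECT * FROM corrupted_usertable;

-- A typical logical consistency error (wording and IDs are illustrative):
-- Msg 824, Level 24, State 2
-- SQL Server detected a logical consistency-based I/O error: incorrect checksum.
-- It occurred during a read of page (1:260) in database 'Recovered_corrupted_db_data'.
```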

In this case, SQL Server is trying to read the table.

The data within this table is spread over multiple pages. When it reads page 260, SQL Server encounters a logical consistency error.

SQL Server computes a checksum for each page when it writes the data and verifies this calculated value when it reads the data back.

Note: We have a checksum value for each page and not for each row of data.
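Page checksums are controlled by the PAGE_VERIFY database option; as a minimal sketch (the database name is a placeholder), enabling them looks like this:

```sql
-- Enable page-level checksums (the default on modern SQL Server versions)
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;
```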

Since the database returned an error, we can execute the DBCC CHECKDB command.

The CHECKDB command has also detected that there is a problem on page 260.

So, it is confirmed that we have a problem.

As said previously, use the REPAIR_ALLOW_DATA_LOSS option only as a last resort.

Note: If you have a backup, use it. You can also use specialized software that can recover data from a corrupted .mdf file. For example, I use the easy-to-use Stellar Repair for MS SQL software.

If, however, we have no alternatives and the other options, like REPAIR_FAST and REPAIR_REBUILD, do not work, we can try this option.

Remember that we must switch the database to single-user mode before executing this command.
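A minimal sketch of the repair sequence, assuming the database name from this example:

```sql
-- Switch to single-user mode (required before running a repair)
ALTER DATABASE Recovered_corrupted_db_data SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Attempt the repair, accepting possible data loss
DBCC CHECKDB ('Recovered_corrupted_db_data', REPAIR_ALLOW_DATA_LOSS);

-- Return the database to normal multi-user access
ALTER DATABASE Recovered_corrupted_db_data SET MULTI_USER;
```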

After executing the command, the log shows that all the errors have been repaired.

Now, if we run the CHECKDB command again, we can see that there are no errors.

Great! But what happened?

The table initially contained 100 rows with IDs starting from 1 to 100.

Now, we can see (in the image below) that rows with ID from 49 to 81 have been lost.

But this is not the only problem.

We also have a row with a completely wrong value for the column ID.

This means that the data is no longer reliable.

It is easy to understand that DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option has simply discarded the corrupt page entirely (as we said before, there is no row-level checksum, only a page-level one).

Physically, the data is copied into new data pages, reconstructing a new link between the pages of the table. 

To Conclude

In this article, we discussed the DBCC CHECKDB command, which checks both the logical and physical integrity of a database and can also repair it. We also discussed how the DBCC CHECKDB command with the REPAIR_ALLOW_DATA_LOSS option works.

Monday, 7 February 2022

SQL Server – Backing up the Tail of the Log

When a database gets corrupted or a failure occurs, it is recommended to create a tail-log backup of the log records that haven't been backed up yet before restoring the database from the backups you've created. This helps restore the database to the exact point at which it failed, preventing data loss.

Read on to learn about the other reasons when you need to back up the tail of the transaction log. Also, understand how to take a tail-log backup and restore it to get back the data you fear losing in the event of a crisis.

Why and When Should You Back Up The Tail of the Log?

Tail-log backup helps capture the tail of the log records when the database is offline, damaged, or data files are missing.

Reasons Why You Need To Back Up the Tail of the Transaction Log

  • Database is corrupted, or the data file is corrupted or deleted.
  • Database goes offline and doesn't start, and you want to recover it as quickly as possible. Before you begin recovery, first take the tail-log backup.
  • The database is online and you plan to restore it: start by backing up the tail of the log.
  • You are migrating a database from one server to another.

Example Demonstrating the Need to Take Tail-log Backup

Let's say you run DBCC CHECKDB to check for corruption in the database. It returns consistency errors, and you decide to restore your previously taken backups: the full backup, then the differential, then all the transaction log backups. But you don't want to lose the log records that haven't been captured in any transaction log backup. To avoid losing those records (i.e., the tail of the log) and keep the log chain intact, you need to take a tail-log backup.

Let’s consider a scenario.

Assume you take a full database backup and then transaction log backups every hour.



  • 8:00 AM: Create a full database backup
  • 9:00 AM: Take transaction log backup
  • 10:00 AM: Take transaction log backup
  • 11:00 AM: Take transaction log backup
  • 11:30 AM: Failure occurs

You can restore the database starting from the full backup (taken at 8 AM), then restore all three transaction log backups (taken at 9 AM, 10 AM, and 11 AM). But there are no backups between 11:00 AM and 11:30 AM, resulting in data loss.

So, how do you recover the data lost between 11:00 and 11:30 AM?

Take a tail-log backup by executing the BACKUP LOG command with the NO_TRUNCATE option. It will create a t-log backup file. Restore this file after the last transaction log backup (11 AM), then recover the database to get the lost data back.



BACKUP LOG [Database] TO DISK = 'C:\ProgramFiles\MSSQLServer\Data\Tail_Log1.LOG' WITH NO_TRUNCATE;

How to Back up and Restore Tail of the Log?

Before we discuss the process to back up the tail of the transaction log and restore it, it’s important to know the clauses you need for creating a t-log backup.

  • NORECOVERY: This clause leaves the database in the RESTORING state after the backup, which assures that the database cannot change after the tail-log backup.
  • NO_TRUNCATE: Use this clause only when the database is damaged.
  • CONTINUE_AFTER_ERROR: If the database is damaged and a normal t-log backup fails, back up the tail of the log using CONTINUE_AFTER_ERROR.
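A rough sketch of how these clauses combine in practice (the database name and backup paths are illustrative placeholders):

```sql
-- Database still online and about to be restored:
-- back up the tail and leave the database in RESTORING state
BACKUP LOG [MyDb] TO DISK = 'C:\Backups\MyDb_tail.trn' WITH NORECOVERY;

-- Database damaged (e.g., data file lost):
-- force the tail-log backup despite errors
BACKUP LOG [MyDb] TO DISK = 'C:\Backups\MyDb_tail.trn'
WITH NO_TRUNCATE, CONTINUE_AFTER_ERROR;
```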


  • Create a new database



CREATE DATABASE Tail_LogDB;
GO
USE Tail_LogDB;


  • Create a new table and insert some data into it.



CREATE TABLE Employee (
    EmployeeID int IDENTITY(1,1) PRIMARY KEY,  -- identity key is an assumption; the full original script was not shown
    EmployeeAge int
);



This T-SQL query will create a table named Employee with columns ‘EmployeeID’ and ‘EmployeeAge’. 

  • Create a stored procedure to add more records to the table.



CREATE PROCEDURE InsertEmployee
AS
BEGIN
    -- Loop to insert 100 rows into the Employee table
    DECLARE @i int = 100;
    WHILE @i > 0
    BEGIN
        INSERT Employee (EmployeeAge) VALUES (@i);
        SET @i -= 1;
    END
END;



EXECUTE InsertEmployee;


SELECT * FROM Employee;


Executing this T-SQL query will create an ‘InsertEmployee’ stored procedure that runs through a loop to add 100 more records into the Employee table. Then, select the Employee table to verify that everything works.

  • Create a full backup of the Tail_LogDB


BACKUP DATABASE Tail_LogDB
TO DISK = 'C:\TempDB\Tail_LogDB_FULL.bak';

This command creates a full database backup containing the 100 records we added to the table in Step 3. The backup is saved in the 'TempDB' folder we created.

  • Insert some more records into the table

EXECUTE InsertEmployee;


SELECT * FROM Employee;


After executing this T-SQL query, we will have 200 records in the database table. 

  • Simulate a database failure

If you're keeping your data and log files on different physical drives, then it's entirely possible that drive failure takes out the data file and leaves you only with the transaction log. We can simulate this simply by deleting the mdf file from the hard drive. Here's how:
  • Right-click on Tail_LogDB > Tasks > Take Offline. 

  • Select the ‘Drop all active connections’ checkbox and press OK. 

  • Now refresh the database, and you can see that the db is now OFFLINE. 

  • Next, go to the location where the full backup is stored (i.e., the TempDB folder), and you can see the backup file we just created. 

  • Now go to the location where the .mdf file and .ldf files for the Tail_LogDB database are saved. Delete the .mdf file.  

Now let's head back to SSMS and understand how we can recover from this disaster.

Bring Database Back Online

  • Right-click on Tail_LogDB > Tasks > Bring Online.
  • A dialog box appears with errors; click Close.
  • Refresh the database.

As you can see, the database status has changed to Recovery Pending. Before attempting the restore operation, make sure to back up the tail of the log to capture the second set of 100 records we added to the database.

Now, let’s take the tail of the log. 

Switch to the master database and execute the BACKUP LOG statement with the CONTINUE_AFTER_ERROR option. This option ensures the tail-log backup is performed even if errors occur.

USE master;
GO

BACKUP LOG Tail_LogDB
TO DISK = 'C:\TempDB\Tail_LogDB.log'
WITH CONTINUE_AFTER_ERROR;



Restore the t-log backup

Let's initiate the restore process by restoring the full database backup with the NORECOVERY option. This option specifies that the restore will not attempt to undo or roll back any uncommitted transactions. This is important because if a modification to the data had begun but not finished when the failure occurred, there would be a record of it in the transaction log. Typically, SQL Server attempts to roll back any of these partially completed changes during a restore, and we don't want that to happen yet.

USE master;
GO

RESTORE DATABASE Tail_LogDB
FROM DISK = 'C:\TempDB\Tail_LogDB_FULL.bak'
WITH NORECOVERY;



This restores the backup of the first 100 records. 

To complete restoring the entire record set, let's restore the log file as well.


RESTORE LOG Tail_LogDB
FROM DISK = 'C:\TempDB\Tail_LogDB.log'
WITH RECOVERY;


  • Verify the results

USE Tail_LogDB;

SELECT * FROM Employee;


So, as you can see, all 200 records are now restored.

Conclusion: Key Take-Away Points

  • A tail-log backup helps you avoid losing data when a database is damaged or corrupted. However, backing up the tail of a damaged database log may fail; in that case, execute the BACKUP LOG statement with the CONTINUE_AFTER_ERROR option to take the t-log backup.
  • You must also take a tail-log backup before restoring a database that is ONLINE. If the database is OFFLINE and doesn't start, back up the tail of the transaction log WITH NORECOVERY before performing the restore.
  • It is also recommended to take a t-log backup when migrating a large database from one server to another.
  • But remember, you can take tail-log backups only if the transaction log file is accessible. This means you cannot perform a t-log backup on a database whose log file is corrupted and inaccessible.