
Monday, 25 November 2019

How to Fix SQL Database Logical Consistency Based I/O Error

There can be several reasons that cause Database Logical Consistency based I/O error:

  • Unexpected system shutdown/crash or forced shutdown 
  • SQL administrator tries to query or modify SQL data

In both circumstances, the host instance of SQL Server may restart repeatedly or terminate the connection between the SQL application and the database. The following error message is recorded in the SQL Server error log:

SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:1848; actual 0:0). It occurred during a read of page (1:1848) in database ID 10 at offset 0x00000000e70000 in file ‘C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ABCDb.mdf’.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

When a SQL Server database experiences a logical consistency-based error, the first step is to diagnose the cause of the error. The following methods can help in detecting it:

  1. Use the Best Practice Analyzer (BPA) tool – the BPA tool helps detect critical consistency errors. The tool is version-specific, i.e. BPA 2008 works with SQL Server 2008, and so on.
  2. Check the Windows System Event Log for system-level, driver, or disk-related errors
  3. Check file system integrity by running the chkdsk command
  4. Run the diagnostics recommended by the hardware manufacturer for the system or disk
  5. For SQL Server 2008 and later, run the SQLIOSim utility on the drive hosting the database that reported the I/O error.
  6. Contact the hardware vendor or device manufacturer to verify that the hardware conforms to the I/O requirements of SQL Server.
  7. Run DBCC CHECKDB on the SQL Server database. The DBCC CHECKDB command checks the logical and physical consistency of database pages, rows, system table referential integrity, index relationships and other structures. If a check fails, the error is reported as part of the CHECKDB output. The following DBCC CHECKDB command can be used:

DBCC CHECKDB (ABCD) WITH NO_INFOMSGS, ALL_ERRORMSGS

Executing the above command highlights the problem in detail, and SQL Server may display the I/O error message as follows:

Msg 8906, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 0 (type Unknown), page ID (1:1846) contains an incorrect page ID in its page header. The PageId in the page header = (0:0).
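Errors like this are also recorded by SQL Server in the msdb database's suspect_pages table, which tracks pages that failed with 823/824-type errors. Querying it is a quick complementary check (a diagnostic sketch, not part of the original error output; note the table holds at most 1,000 rows and should be cleaned out periodically):

```sql
-- List pages SQL Server has flagged as suspect:
-- event_type 1 = 823/824 error, 2 = bad checksum, 3 = torn page
SELECT database_id, file_id, page_id,
       event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
WHERE event_type IN (1, 2, 3);
```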

Such error messages help us conclude that the SQL Database logical consistency-based I/O error is caused either by hardware issues or by corruption in the SQL database.
The following methods can be used to resolve the logical consistency-based I/O error:

Method 1 – Check hardware and application connectivity
The database inconsistency error can sometimes be resolved by re-establishing proper connectivity between the database and the SQL application.

Method 2 - Restore from SQL backup
The most feasible solution is to restore the SQL database from backup. Before restoring, check that:
  1. You have a recent full backup
  2. The backup was taken just prior to the corruption, so that restoring it does not cause critical data loss
  3. The corruption is at page level, as page-level corruption can be resolved with a page-level restore.
Note: Page-level restore is appropriate when you are dealing with a large database and the corruption is confined to only one page of the SQL database.
Step 1: Restore the damaged page from the most recent full backup. (The full-backup file name below is illustrative; use your own backup file.)

-- Restore the corrupt page (1:1848 from the earlier error) from the full backup
RESTORE DATABASE PageLevelRestores
PAGE = '1:1848'
FROM DISK = 'g:\PageLevelRestores_FULL.bak'
WITH NORECOVERY
GO

Step 2: Restore all available log backups, in the correct order, to bring the page up to date:

-- Restore the existing log backup
RESTORE LOG PageLevelRestores FROM
DISK = 'g:\PageLevelRestores_LOG1.bak'
WITH NORECOVERY
GO

Step 3: Take a new log backup that captures the tail of the log, then restore it:

-- Backup the tail of the log
BACKUP LOG PageLevelRestores TO
DISK = 'g:\PageLevelRestores_LOG_TAIL.bak'
WITH INIT
GO

-- Restore the tail log backup
RESTORE LOG PageLevelRestores FROM
DISK = 'g:\PageLevelRestores_LOG_TAIL.bak'
WITH NORECOVERY
GO

Step 4: Finish the restore sequence to bring the database online:

RESTORE DATABASE PageLevelRestores WITH RECOVERY
GO

Note: There is no need to restore the full SQL Server database if the corruption is confined to a single page; the sequence above recovers only the damaged page from the available backups.

Once the database has been restored, run DBCC CHECKDB again to verify that queries succeed without the SQL Database logical consistency-based I/O error, and confirm that no data has been lost from the affected table.

Limitations of SQL database backup:

  1. It is not possible to restore from the SQL database backup when the available backup is obsolete.
  2. If the logical consistency-based I/O error is spread across the SQL Server database, then this method is not valid.
  3. If the faulty page exists in a non-clustered index, the database can instead be fixed by simply dropping and recreating that index.

Method 3: Repair corrupt SQL database with REPAIR_ALLOW_DATA_LOSS

REPAIR_ALLOW_DATA_LOSS is the minimum repair level for the diagnosed errors. 

Note: Before using REPAIR_ALLOW_DATA_LOSS, perform the following:
  1. Take a backup of the SQL Server database and save it under another name
  2. Set the SQL database to single-user mode
  3. Get the record count of all tables by using the following script:
-- Capture the row count of every user table before running the repair
DECLARE @T_Name VARCHAR(250)
DECLARE @SQL VARCHAR(2000)
CREATE TABLE #T_Info (ID INT IDENTITY(1,1), T_Name VARCHAR(200), D_Count INT)
DECLARE TINFO_CUR CURSOR FOR
    SELECT QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
OPEN TINFO_CUR
FETCH NEXT FROM TINFO_CUR INTO @T_Name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = 'INSERT INTO #T_Info (T_Name, D_Count) SELECT ''' + @T_Name + ''', COUNT(*) FROM ' + @T_Name
    EXECUTE (@SQL)
    FETCH NEXT FROM TINFO_CUR INTO @T_Name
END
CLOSE TINFO_CUR
DEALLOCATE TINFO_CUR
SELECT * FROM #T_Info ORDER BY T_Name

The following steps help fix the SQL database and resolve the logical consistency-based I/O error:
  • Run the command:
DBCC CHECKDB (DB_NAME, REPAIR_ALLOW_DATA_LOSS)
  • Once the SQL database is repaired, repeat the 'Table Record Count' process and compare with the old record count.
There should be no difference between the initial and final table record counts; any difference indicates lost data.
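Putting the notes above together, the repair can be sketched as the following sequence (the database name ABCD is reused from the earlier CHECKDB example; run this only after taking a backup, as the repair may discard data):

```sql
-- Block other users and roll back their open transactions
ALTER DATABASE ABCD SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- Repair, accepting possible data loss
DBCC CHECKDB ('ABCD', REPAIR_ALLOW_DATA_LOSS);
GO
-- Return the database to normal multi-user operation
ALTER DATABASE ABCD SET MULTI_USER;
GO
```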

Limitations:
REPAIR_ALLOW_DATA_LOSS may fix the Database logical consistency-based I/O error, but it carries a major risk of data loss: to make the database consistent it may deallocate corrupt pages, so an organization can lose critical data.

Alternative Solution

If the above methods don't work, then give SQL recovery software a try. Stellar Repair for MS SQL repairs corrupt MDF and NDF files and restores all the database objects. It supports MS SQL 2019 and all lower versions.

Conclusion
In this post, we have highlighted the cause of the SQL Database logical consistency-based I/O error and the methods to detect it.
Based upon the error, we have tried to find the correct resolution method. If the hardware or system is responsible for the error, it is recommended to resolve the hardware-related issues; if DBCC CHECKDB reports a consistency error, try to restore the SQL database from an up-to-date backup.
If the issue is not resolved by fixing the hardware or restoring a backup, try to repair the database with REPAIR_ALLOW_DATA_LOSS. This is the minimum level of repair that resolves all errors reported by CHECKDB, but that does not mean it will actually fix every error, and it may result in data loss.
An SQL repair tool helps in repairing a corrupt MS SQL database, including corrupt .mdf and .ndf files, and recovers all database components – tables, triggers, indexes, keys, rules, deleted records and stored procedures. It is of great help in times of crisis, as the software provides a definite repair path and supports all SQL database versions, recent and older.

Wednesday, 9 October 2019

How to Reduce the Risk of SQL Database Corruption


Introduction:

Today we are going to learn how to prevent SQL database corruption. Let's start with some facts about how database corruption can actually occur. SQL database corruption can be completely unsystematic: there are ways to find the reason for the corruption, but sometimes you end up on the wrong track and can't find the primary cause, which is why tools such as the CHKDSK command exist to diagnose disk-level complications. It is much better to reduce the risk of corruption so that you don't have to face these issues at all. While there are a ton of articles floating around the net on how to fix corruption in a SQL database, in this article we are going to discuss the ways to prevent SQL database corruption.


In data-driven businesses, even the smallest bug in structured data can bring entire operations down. While fixing issues is an afterthought, preventing them is wiser. There are many ways to detect corruption in a SQL database and remove it from the SQL Server, but here are six ways to reduce the risk of SQL database corruption, plus an alternative solution:

  1. Back up your database regularly.
  2. Split your database into more than one part.
  3. Customize scheduled (Cron) jobs for your database.
  4. Optimize your database regularly.
  5. Check your database file size and hard disk space; sometimes an entire database is lost due to this issue.
  6. Do hardware and software maintenance regularly.
  7. Alternative solution.

  • Backup your database regularly:

You have probably heard it a few thousand times already, but this is the best safeguard against losses. Backups are a great help once a database develops problems: the backup files are used to restore the corrupted database, preventing downtime. Make backups of your data regularly so that the impact of SQL database corruption is reduced; if the database is corrupted and no usable backup exists, SQL database repair software is the fallback. To back up using SQL Server Management Studio, right-click the database, choose Tasks > Back Up, review the options in the backup window and press the "OK" button. In this way we back up our files.
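The same backup can also be taken with plain T-SQL instead of the SSMS dialog. A minimal sketch (the database name and disk path are illustrative):

```sql
-- Full backup; WITH CHECKSUM verifies page checksums while backing up,
-- so page-level corruption is caught at backup time rather than at restore
BACKUP DATABASE MyDb
TO DISK = 'D:\Backups\MyDb_Full.bak'
WITH CHECKSUM, INIT;
```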

  • Split your database:

Instead of putting every object in one MDF file and sharing the whole thing over the network, split the database into two pieces: the front end (forms, reports, queries and data-access pages) and the back end (the tables containing your data). Place a copy of the front end on the local disk of every machine and link it back to the tables held on a file server. That way, far less data has to be dragged around the network and exposed to corruption. The database will also run more quickly and will still allow you to make design changes to the front end while using live data.

  • Customization of Cron jobs for your database:

Set up scheduled jobs to run at regular intervals to ensure there is no lag in uptime. To do that in SQL Server Management Studio, select a database and choose Tasks > Back Up. In the Backup window, select Script > Script Action to Job, go to the Schedules page, press the New button and add the schedule of your preference.
  • Optimize your database:

Create scripts that monitor and identify slow-running queries in the database. Often, slow or deeply nested queries cause data issues and an overall lag by taking up more resources than necessary. Produce scripts that identify and optimize these queries and stabilize the SQL Server. On a regular basis, don't forget to periodically monitor and tune the server to ensure high performance, and make certain that connections to the server are restricted and optimized. The most effective way to maintain the general health of the database is to stay current with updates, so don't forget to install the latest service packs.

To check space usage, open the database's properties in SQL Server Management Studio; the Available Space section shows an option to manage the size of the database.

  • Checking database file size and storage

When you start using a new SQL Server database, data file size won't be a problem, and the chances of a small database getting corrupt are tiny. As the database grows, it is a good idea to separate databases of enormous size and maintain multiple smaller ones; this also helps free up adequate space on the server. As soon as you see free space slipping below a healthy level, start working on freeing it up.
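File sizes can be monitored with a simple query against sys.master_files, which lists every data and log file on the instance (sizes are stored in 8 KB pages, hence the conversion):

```sql
-- Size of every database file on the instance, largest first
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       type_desc,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
ORDER BY size_mb DESC;
```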


  • Maintenance of hardware and software

Regularly perform memory checks, network monitoring, hardware-error checks, etc., to avoid bugs and hardware failures. This sort of preventive measure should be carried out periodically to track the status of all hardware proactively and, if the need arises, to repair all detected problems. Viruses and malware can prove very dangerous to your SQL Server data. Don't fall prey to such malicious attacks: always keep an eye out for anomalies in the database, check from time to time for any traces of malware infection, and use a good security solution to keep them cornered.




  • Alternative Solution

Stellar Repair for MS SQL Server is software that helps reduce the risk to our databases. This SQL recovery software repairs .mdf and .ndf files, recovers deleted records, and supports MS SQL Server 2017, 2016, 2014, 2012, 2008 and older versions. After rebuilding and recovering the databases, Stellar Repair for MS SQL Server allows us to save files in multiple formats such as HTML, XLS, CSV and .mdf. If you keep the recovered file with the .mdf extension, it will save your data as a new or live database. We can solve multiple problems using Stellar Repair for MS SQL Server.


Conclusion:

In this article, we studied how to reduce the risk of MS SQL database corruption: make regular backups of the database; split, optimize and schedule jobs for it; and keep an eye on its file size. We also looked at SQL recovery software, which helps solve multiple problems when prevention fails.

Monday, 16 September 2019

An Effective Solution to Resolve SQL Database Corruption


I am a SQL Administrator and today I will share my experience about an issue on SQL server, which corrupted the transaction logs and left the database in a RECOVERY_PENDING state. Thankfully, I was able to resolve it.  

The SQL Server machine experienced a hardware fault and faulty memory, and the database suffered transaction log corruption leading to an unclean server shutdown. When the system was eventually back online, the database was not in a usable state.

The transaction log is used by SQL at all times to ensure the consistency of the database. It provides SQL database the required consistency and durability which are necessary for any relational database engine. If the SQL service is stopped or the SQL database gets detached, it is the job of SQL server to shut down in a clean state, meaning it should run a checkpoint and write all dirty data pages on a disk. Further, the checkpoint also needs to make an entry into the logs that it has completed the task of running. If there is not enough space in the log, the checkpoint cannot run and SQL database cannot shut down cleanly.
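Whether the log has room to truncate (and therefore whether a checkpoint can proceed) can be checked from the sys.databases catalog view; this is a general diagnostic sketch, not part of the incident above:

```sql
-- log_reuse_wait_desc explains why log space cannot currently be reused
-- (e.g. NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```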

When the database was attached and the service was restarted, SQL tried to restart-recovery on the database. Here the transaction log was used to bring the SQL database to a consistent transactional state. So far so good, but here the transaction logs were not available. And hence, the database could not be brought online, and based upon the failure stage, the database was marked in Suspect or Recovery Pending stage. 

The following sequence of activities were performed:


CREATE DATABASE AbcXyz
ON (NAME = ABC_dat,
FILENAME = 'D:\Develop\Databases\AbcXyz.mdf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 5MB)
LOG ON (NAME = Abc_log,
FILENAME = 'D:\Develop\Databases\AbcXyz.ldf',
SIZE = 5MB,
MAXSIZE = 100MB);
GO

ALTER DATABASE AbcXyz SET RECOVERY FULL;
GO

USE AbcXyz
GO

CREATE TABLE Filler (
ID INT IDENTITY PRIMARY KEY,
FillerStr CHAR(300)
)
GO

BACKUP DATABASE AbcXyz TO DISK = 'D:\Develop\Databases\Backups\AbcXyz.bak'

Right, so that's the setup all done. Now to fill the log up.

DECLARE @i INT;
SET @i = 0;
WHILE (@i < 10000)

BEGIN
INSERT INTO Filler (FillerStr) VALUES (CAST(@i AS VARCHAR(10)))
SET @i = @i+1;
END
GO

After just 3779 inserts, an error message popped up:

Msg 9002, Level 17, State 2, Line 6
The transaction log for database ‘AbcXyz’ is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

At this stage, I stopped the SQL service and deleted/renamed the log file (D:\Develop\Databases\AbcXyz.ldf), then restarted the SQL service. When I tried to access the database with the following query,

select MAX(ID) from AbcXyz.dbo.Filler

SQL Server threw the below error message:

Msg 945, Level 14, State 2, Line 1
Database ‘AbcXyz’ cannot be opened due to inaccessible files or insufficient memory or disk space.  See the SQL Server error log for details.

With this error message, I came to the conclusion that the state of the database, as per sys.databases is in RECOVERY_PENDING state. ‘Restart Recovery’ cannot run due to the missing log file and the error log enumerates the following:

Error: 17207, Severity: 16, State: 1.

FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file ‘D:\Develop\Databases\AbcXyz.ldf’. Diagnose and correct the operating system error, and retry the operation.
File activation failure. The physical file name “D:\Develop\Databases\AbcXyz.ldf” may be incorrect.
The log cannot be rebuilt because the database was not cleanly shut down.

Now, the first question for me was “What is the SLA for the database server and how much data can the user afford to lose?

And the answer was as expected: “the database needs to be up and running a.s.a.p., and we can’t afford to lose data”.

Methods applied to resolve MS SQL corruption

There are two ways to fix the RECOVERY_PENDING issue – restore from backup, or repair and rebuild the log. Further, there are two methods for log repair and rebuild – the DBCC CHECKDB command using the REPAIR_ALLOW_DATA_LOSS option, and the software-based method.

First, I wanted to check the health of the SQL database backup so I could restore the database from it. This is the easiest and most feasible option, provided the backup is in a consistent state.

  • Restore MS SQL database from backup

The most feasible option to fix the transaction log corruption issue was to restore the database from the latest restorable backup, and it had to be done from the 'Full' backup. A backup was available but it was not up to date. This meant that if we restored from the available backup, we would lose roughly the last few hours of data.

We opted for manual backup restoration process with the help of SQL Server Management Studio (available for free from Microsoft).
Note:
Process to restore MS SQL database from backup varies from version to version. Here our database was on SQL Server 2012.
We installed and connected the database to SQL Server Management Studio through Port 1433.

Database users could not afford such a huge data loss so the option of restoring MS SQL database through last available backup was completely ruled out unless there was no other option.

  • Repair database with DBCC


I needed to run a repair to bring the database in RECOVERY_PENDING state to an online state. The repair utility will fix the structural problems, as caused by the missing log and bring back the database to a normal online state. 

The following command was used:

ALTER DATABASE AbcXyz SET SINGLE_USER

DBCC CHECKDB(AbcXyz, REPAIR_ALLOW_DATA_LOSS)

After running this command, I found that the CheckDB complained about the missing log and started the process of log-rebuild with the following warnings:


File activation failure. The physical file name “D:\Develop\Databases\AbcXyz.ldf” may be incorrect.

The log cannot be rebuilt because the database was not cleanly shut down.
Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.


Again, running the repair command did not resolve the issue. As there was not enough space in the log for a clean shutdown, running a checkpoint also prompted an error message stating that the database could not be repaired and the log could not be rebuilt because the database was not cleanly shut down.

Now this was a catch-22 situation for me. I couldn't restore the database because the backup was not up to date, and I couldn't repair the log because the transaction log file was missing. The remaining approach was to resolve the error with MS SQL database repair software. But first, I needed to find software that could repair the database and make it available to users in no time.

  • Repair with MS SQL Repair software

I did thorough online research on the available software and decided to go with Stellar Repair for MS SQL. There were many reasons to support my selection: along with corrupt MDF and NDF files, the software also handled databases with corrupt or missing log files, and it claimed to correct the bit pattern and bring the database to a consistent state. Both of these capabilities were required to bring our MS SQL database online.

I also read a few case studies where clients were happy with the purchase, as they could easily repair the corrupted database and recover the entire data set, which they had otherwise thought to be lost.

Again, before investing in the software, I called up the customer support to know the procedure of database repair and what’s the probability of getting back the data. They informed me that I could download and test the software and the preview would show the recoverable database. That’s what I actually wanted.

I installed the software and submitted the file. To my surprise, I could preview the entire SQL database on my screen. Without wasting any more time, I activated the software. There also, the support team helped me in getting the activation key without any time lag. And, I could save the entire database as a new database. The new database was accessible and did not have any corrupt elements.

I breathed a sigh of relief!

I wished I had thought of this option before everything else; the downtime would have been much shorter.

Conclusion

As MS SQL administrators, it is our job to keep the database up and running. And if there is an issue, we should resolve it at the earliest to keep the downtime to a minimum and recover the database in its entirety. Hence, it is for us to decide which approach – backup restore, repair with the DBCC command, or repair with third-party software – delivers the maximum value in the given circumstances. There is no better solution than restoring the MS SQL database from a good backup, and the next best option is to repair the database by using software such as Stellar Repair for MS SQL.
This crisis situation taught me that the database should be backed up consistently and also a software should be available as a help to overcome such crisis. Here’s a small but informative guide for database administrators.  

Monday, 30 September 2013

SQL Server :Part 1 : Architecture of Transaction Log

In our earlier post, we discussed the architecture of the data file and the different types of pages in detail. In this post, we will discuss the architecture of the log file.

Logging is a mechanism used in an RDBMS to support the ACID (Atomicity, Consistency, Isolation and Durability) properties of transactions. A transaction log is a physical file in which SQL Server stores the details of all transactions and data modifications performed on the database. In the event of a disaster that causes SQL Server to shut down unexpectedly (power failure/hardware failure), the transaction log is used to bring the database to a consistent state when the server restarts. On restart, the database goes through the recovery process, during which the transaction log is used to make sure that all committed transactions are written to the respective data pages (rolled forward) and that uncommitted transactions that were written to data pages are reverted.


Logically, the transaction log is a set of log records. Each record is identified by a log sequence number (LSN). A new log record is always written at the logical end of the log file with an LSN greater than the previous one. Each LSN is associated with a transaction ID, which can be used to group the log records of a transaction. As the log file stores records sequentially in the order operations happen, the log records of a single transaction are not necessarily contiguous in the log file. Each log record holds the LSN of the previous record of the same transaction as a backward pointer, which helps in rolling back the transaction.

The transaction log stores separate log entries for each operation. For example, while inserting a record into a table, the log stores a separate entry for the insert into the clustered index and for each non-clustered index. In the same way, if a single update statement updates 10 records, the log captures 10 separate entries. For data modification, the log stores either the logical operation performed or the before and after images of the record.



To understand it better, let us look into the transaction log using sys.fn_dblog, an undocumented function that helps us read the active portion of the log. We use the query below to create two tables and insert some records into them, then delete the records to keep the tables clean. This insert and delete activity makes sure that pages are allocated to the tables, so we will not get unnecessary entries in the transaction log while examining it. We have issued a manual checkpoint to force SQL Server to write the log information to the data file and clear the log. Note that one table is a heap and the other has a clustered index.


USE Mydb
GO
CREATE TABLE LoginfoHeap
(
   id INT
)
GO
CREATE TABLE LoginfoCI
(
   id INT PRIMARY KEY
)
GO
INSERT INTO LoginfoHeap VALUES (1),(2)
INSERT INTO LoginfoCI VALUES (2),(4)
GO
DELETE FROM LoginfoHeap
DELETE FROM LoginfoCI
GO
CHECKPOINT
GO
SELECT
[Current LSN],
Operation,
Context,
[Transaction ID],
[Previous LSN],
AllocUnitName,
[Previous Page LSN],
[Page ID],
[XACT ID],
SUSER_SNAME(CONVERT(VARBINARY,[Transaction SID])) AS 'Login',
[Begin Time],
[End Time]
FROM sys.fn_dblog (NULL, NULL)






From the output it is very clear that we have only two active log entries: the first was written when the checkpoint started, and the second when the checkpoint completed.


Now we will insert, update and delete records in these tables through two sessions:

-- SESSION 1
   BEGIN TRAN
   INSERT INTO LoginfoCI VALUES (2)

-- SESSION 2
   BEGIN TRAN
   INSERT INTO LoginfoHeap VALUES (1),(2)

-- SESSION 1
   INSERT INTO LoginfoCI VALUES (4)

-- SESSION 2
   UPDATE LoginfoHeap SET id = 3 WHERE id = 1

-- SESSION 1
   UPDATE LoginfoCI SET id = 3 WHERE id = 2

-- SESSION 2
   DELETE FROM LoginfoHeap WHERE id = 2

-- SESSION 1
   DELETE FROM LoginfoCI WHERE id = 4
   SELECT * FROM sys.dm_tran_current_transaction
   COMMIT

-- SESSION 2
   SELECT * FROM sys.dm_tran_current_transaction
   COMMIT

The DMV sys.dm_tran_current_transaction returns a single row that displays the state information of the transaction in the current session. We are interested only in the transaction_id, which will help us filter the output of sys.fn_dblog. Let us see the output of sys.fn_dblog.

SELECT
[Current LSN],
Operation,
Context,
[Transaction ID],
[Previous LSN],
AllocUnitName,
[Previous Page LSN],
[Page ID],
[XACT ID],
SUSER_SNAME(CONVERT(VARBINARY,[Transaction SID])) AS 'Login',
[Begin Time],
[End Time]
FROM sys.fn_dblog (NULL, NULL)
WHERE [Transaction ID] IN 
(
   SELECT [Transaction ID] FROM sys.fn_dblog (NULL, NULL) 
   WHERE [XACT ID] IN (856960,856981)
)

The values 856960 and 856981 are the transaction_ids returned from sys.dm_tran_current_transaction. We have filtered the output to get only the relevant rows in which we are interested.


In our script, we opened two transactions, and all our transaction log entries are grouped under one of these transaction_ids (marked in red and green in the output above). Let us analyse what we did and how it is captured in the transaction log.

In session 1, we started the transaction and inserted a single record. The first record in the output maps to the BEGIN TRAN command; this is the starting point of the transaction and creates a new transaction_id. The Previous LSN column value is 0, as this is the first log entry in the transaction, and the same log record stores the XACT_ID, login and transaction start time. The second record represents the insert into the clustered table. The transaction_id is used to group the entries associated with a transaction. The Previous LSN column is a pointer to the previous log entry in the same transaction, which helps SQL Server move backwards in case of rollback. The Page ID column refers to the page number where this LSN made the change, and the Previous Page LSN column refers to the last LSN that modified that page. When an LSN modifies a page, it also updates the corresponding LSN number in the page header (the m_lsn field in the header; for more detail refer to this post).

In session 2, we opened another transaction and inserted two records through a single insert statement into the heap table. You can map these operations to rows 3, 4 and 5 in the transaction log output. The third row represents the BEGIN TRAN command. Even though we inserted two records in a single insert statement, SQL Server recorded two separate entries in the transaction log.

As a next step, in session 1 we added one record to the clustered index table. We can map this operation to the 6th record in the transaction log output.

In the next statement, we modified a record in the heap table through session 2. You can map this to the 7th record in the transaction log output. If you look into the Previous LSN column, it holds the Current LSN value of the last record associated with this transaction.

In the same way, as the next statement we modified a record in the clustered table through session 1. We can map the 8th and 9th records in the transaction log output to the update operation on the clustered table. You might have noticed that when we modified a record in the heap table, the transaction log recorded the operation in a single row, whereas the same operation on a clustered table has two records in the transaction log: one for delete and one for insert. When you modify the clustered index key, SQL Server internally deletes the existing record and inserts a new one, because the record needs to be stored in a new location based on the modified value (i.e. on the order of the clustered index column). The easiest way for SQL Server to achieve this is to delete the existing record and insert it as a new record with the modified clustered column value.

In the next two statements, we deleted one record from the heap table and one from the clustered table. These can be mapped to the 10th and 11th records in the output. Finally, we issued the COMMIT statement in both sessions. The 12th and 13th records in the transaction log output can be mapped to the commit operations. The Previous LSN column refers to the Current LSN of the corresponding BEGIN TRAN statement. It also captures the transaction end time in the End Time column.
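As a sketch, the walkthrough above can be reproduced by reading the log with the undocumented sys.fn_dblog function. Note that fn_dblog is unsupported, its columns can change between versions, and in a busy database the output will contain many more records than the ones discussed here:

```sql
-- Read the active portion of the current database's transaction log.
-- The two NULLs are the optional start and end LSN filters.
SELECT  [Current LSN],
        [Previous LSN],        -- previous log record in the same transaction
        Operation,             -- e.g. LOP_BEGIN_XACT, LOP_INSERT_ROWS, LOP_COMMIT_XACT
        [Transaction ID],      -- groups all records of one transaction
        [Page ID],
        [Begin Time],
        [End Time]
FROM    sys.fn_dblog(NULL, NULL)
ORDER BY [Current LSN];
```

Filtering on a single [Transaction ID] value makes it easy to follow one transaction's Previous LSN chain backwards from its commit record to its BEGIN TRAN record.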

Understanding the VLF (Virtual Log File)

A database can have one or more log files. In general there will be only one log file, as there is no performance improvement from having multiple log files. SQL Server uses the transaction log in a sequential manner. Just as the data file is divided into pages, log files are divided into virtual log files (VLFs). The VLFs in a log file may not all be the same size. SQL Server decides the size and number of VLFs in a log file based on the size of the log file growth, as given below:

Growth up to 64 MB        = 4 VLFs
From 64 MB to 1 GB        = 8 VLFs
Larger than 1 GB          = 16 VLFs

Let us create a database with a 64 MB initial log size and later increase it to 1 GB. As per the above calculation, the log file should have 12 VLFs: 4 VLFs based on the initial size and 8 VLFs from growing the log to 1 GB (a 960 MB growth, which falls in the 64 MB to 1 GB bucket).

USE master;
GO
CREATE DATABASE MyDb
ON
(   NAME = MyDb_dat,
    FILENAME = 'D:\MyDb\MyDb.mdf',
    SIZE = 10MB, MAXSIZE = 3072MB, FILEGROWTH = 5MB )
LOG ON
(   NAME = MyDb_log,
    FILENAME = 'D:\MyDb\MyDb.ldf',
    SIZE = 64MB, MAXSIZE = 2048MB, FILEGROWTH = 5MB );
GO

ALTER DATABASE MyDb
MODIFY FILE ( NAME = MyDb_log, SIZE = 1024MB );
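Before counting VLFs, we can sanity-check the new log size from the database's file metadata. sys.database_files reports size in 8 KB pages, so dividing by 128 converts it to MB:

```sql
USE MyDb;
GO
-- size is stored in 8 KB pages; size / 128 converts it to MB
SELECT name, type_desc, size / 128 AS size_mb
FROM sys.database_files;
```

The log file row should now show 1024 MB.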

Now let us see how many VLFs got created. To find the number of VLFs in a database log file, we can make use of DBCC LOGINFO.

DBCC LOGINFO('MyDb')

The output is given below.

There are 12 records in the output, each representing a VLF. Let us try to understand the result.

FileId: This is the file id of the log file and will be the same for all 12 records, as we have only one log file. If we had multiple log files, we would see multiple numbers here.

FileSize: This is the size of the VLF in bytes. If you look at the first four records, they have the same size except the fourth one. This is because the first 8 KB of the log file is used for the file header. If you add the FileSize values of the first four records along with 8192 bytes (8 KB), you get 64 MB, which is the initial size of the log file:
16711680 + 16711680 + 16711680 + 16965632 = 67100672; 67100672 + 8192 = 67108864 bytes = 64 MB
In the same way, if you add the last 8 records, they account for the 960 MB (1024 - 64) of growth that happened due to the ALTER statement.

StartOffset: This value is also in bytes, and it is the sort column of the output. The first VLF always starts at offset 8192, which is the number of bytes in a page. As mentioned above, the first 8 KB is used for the file header and does not store any log records.

FSeqNo: The file sequence number indicates the order of usage of the VLFs. The row with the highest FSeqNo value is the VLF where current log records are being written. FSeqNo values are not constant; they keep changing each time a VLF is reused. We will discuss more about this later in this post. A value of 0 in this column means that the VLF has never been used at all. That is the reason we have 0 for all records except the one where logging is currently happening.

Status: Status has two possible values: 0 and 2. A value of 2 means the VLF is not reusable, and a value of 0 means it can be reused. This will become clearer as we go further.

Parity: Parity has three possible values: 0, 64 and 128. If a VLF has not been used yet, it has a value of 0, and it is set to 64 on first use. Every time a VLF is reused, the parity value is switched between 64 and 128.

CreateLSN: This value indicates when the VLF was created, and it can be used to group VLFs by creation. A value of 0 indicates that the VLF was created as part of database creation. In our case, the first four records have a value of 0, which indicates these VLFs were created as part of the database creation with a 64 MB log size. The remaining 8 records share the same value; these VLFs were created as part of our ALTER DATABASE statement that increased the size of the log file from 64 MB to 1024 MB.
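As a side note, on SQL Server 2016 SP2 and later the same per-VLF information is exposed through the documented DMV sys.dm_db_log_info, which is a supported alternative to DBCC LOGINFO (the column names differ slightly from the DBCC output):

```sql
-- One row per VLF; vlf_size_mb, vlf_sequence_number, vlf_status,
-- vlf_parity and vlf_create_lsn correspond to the DBCC LOGINFO columns.
SELECT file_id,
       vlf_begin_offset,
       vlf_size_mb,
       vlf_sequence_number,
       vlf_active,
       vlf_status,
       vlf_parity,
       vlf_create_lsn
FROM sys.dm_db_log_info(DB_ID('MyDb'));
```

Being a documented object, this DMV is the safer choice for monitoring scripts that need to track VLF counts over time.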

The column descriptions above are based on Kalen Delaney's blog post.

Now our transaction log will look like the figure below.

Now we have learned about the LSN and the VLF. We will discuss more about the transaction log in the next post.

If you liked this post, do like my page on Facebook.