
Wednesday 17 October 2012

Windows Server 2008 Cluster: Understanding the Cluster Log Behavior

In any system, logs are very important for troubleshooting issues. In the case of a Windows cluster, the cluster log is like a black box: it holds all the information about a cluster failure. This information is essential when raising a case with Microsoft. In this post let us discuss the cluster log behavior and how to generate the cluster log. In a coming post we will discuss deciphering the cluster log.

When you troubleshoot a cluster issue, in most cases the cluster log will give you the root cause. You can correlate it with the Windows event log entries for further analysis. In Windows Server 2008, cluster logs are captured using the new eventing and diagnostic channel called ETW (Event Tracing for Windows). You can see this tracing in the Reliability and Performance Monitor under "Data Collector Sets\Event Tracing Sessions". Below is a snapshot of the same.

The log files generated by ETW are stored in the folder %windir%\system32\winevt\logs.

Each time the server is rebooted, a new file is generated, such as ClusterLog.etl.002, and logging continues in that file until the server is rebooted again. Up to 3 log files are kept, so after the third consecutive reboot you will start losing the oldest log entries. Please find below a snapshot of the .etl files in the folder.

For example, in the above case, after two consecutive restarts on 29th September, we lost all the logs up to 12th September. If we had one more reboot on the 29th, we might have lost all the logs up to the 29th. The ETL log file name is incremented each time, with a 00X suffix appended to it. Once the maximum number of log files (3) is reached, it starts overwriting the first one. At any point in time, only one log file is actively used.

The default log file size is 100 MB (for each .etl file). Once a file reaches the 100 MB limit, entries are deleted from the beginning of the file to make room for current logging. In our case the active log is ClusterLog.etl.003, which was created on 29th September 2012 at 1:41 AM and has reached the 100 MB limit. Now the log entries at the beginning of ClusterLog.etl.003 will get deleted to make room for new entries.


Generating Cluster Log


Now we have the cluster log spread across three ETL files. The easiest way to read the ETL files is to use the Cluster Log command from the command line. The syntax is:
cluster log /g


This will merge the three .etl files on each node and create an output file named cluster.log. The output file will be stored in the %windir%\Cluster\Reports directory on each node of the cluster.

One more interesting switch available for the Cluster Log command is /Span:<minutes>, which generates the log only for the last specified number of minutes. For example, /Span:15 generates the log for the last 15 minutes. This helps us quickly troubleshoot recent issues, as shown below.
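
For example, to generate a merged log covering only the last 30 minutes (a sketch from memory of the cluster.exe syntax, so verify it in your environment; the /Node switch can similarly restrict output to a single node):

cluster log /g /span:30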

Missing entries in the cluster log


When you generate the log using the Cluster Log command, you might notice a gap in the output file, i.e. the log is not available for some days in between. This happens because of the truncation of a log file once it reaches the 100 MB limit. In our example, ClusterLog.etl.003, created on 29th September 2012 at 1:41 AM, has already reached the 100 MB limit and might have truncated some data at its beginning. So when you merge the .etl files using the Cluster Log command, you may notice a gap after 29th September 2012 1:41 AM, perhaps for some hours, days or weeks. The output file might have the log from September 12th 5:21 AM (possibly truncated at the beginning, as that file also reached 100 MB) until 29th September 2012 1:41 AM; these entries come from ClusterLog.etl and ClusterLog.etl.002. After that you might notice a gap of a few hours or days, as the data at the beginning of ClusterLog.etl.003 was truncated to make room for current log entries.

Changing the cluster log configuration


We can change the default cluster log configuration using the Cluster command. To change the default log size, use the command below.

cluster log /size:400


This will change the size limit of the cluster log to 400 MB.

You can check this change using the command below (look for the ClusterLogSize property in the output).

cluster /prop


If you liked this post, do like my page on Facebook

Thursday 11 October 2012

SQL Server: Index Part 1: Basics of Index



While conducting interviews, I have noticed that many people do not have a clear picture of indexes, and many are not clear about the difference between clustered and non clustered indexes. When people ask about indexes, it is tough to explain in one go. I feel it is worth writing a detailed series of posts about SQL Server indexes in a simple, understandable way, even if there are tons of articles available on the internet.

In short, indexes help the database engine find the requested data efficiently using minimal resources. Indexes also help with data integrity by enforcing uniqueness of a column, but it is not mandatory to define an index on a unique column. In a busy system, indexes help improve performance by increasing concurrency. Multiple indexes on the same table can be used to cater to requests issued by different users, but many indexes on a table also create overhead. Indexes are stored in separate pages; it is like data stored in multiple places, and they must be kept in sync with the underlying table. Any insert, update or delete on a table must perform the same operation on all indexes defined on that table. So an index improves the performance of data retrieval but adds overhead to DML operations. In the case of deletes and updates, the index also helps the database engine find the records that need to be modified. There is no thumb rule about the number of indexes on a table: if you need better performance for read operations, go with more indexes; if you need better performance for DML operations, keep a minimal number of indexes.

SQL Server supports two types of indexes:

  • Clustered Index (CI) 
  • Non Clustered Index (NCI) 

Let us try to understand these two indexes using a real life example. Assume that your neighbour came to your house and asked for the telephone number of 'Robert Mike'. In this scenario, the telephone directory acts as a clustered index. You open the directory, skipping almost 3/4th of the pages, assuming that his name appears in the last part of the directory. After turning a couple of pages forward or backward, you reach the page where the name 'Robert Mike' is listed. Now you read out the number to your neighbour. So what happened here? When you reached the page where the name 'Robert Mike' is listed, you had all the information requested by your neighbour (the client).

Now assume that your neighbour came to your house and asked for the email id of 'Robert Mike', and you do not remember his email id. In this situation, the telephone directory acts as a non clustered index. You open the directory the same way and reach the page where the name 'Robert Mike' is listed. Now you dial 'Robert Mike' and ask him for his email id. After disconnecting the line, you hand over the email id to your neighbour. So what happened here? When you reached the page where the name 'Robert Mike' is listed, you did not have the information requested by your neighbour (the client). You had to do one more operation (dial the number) to get the requested information. In SQL Server, this additional operation is called a bookmark lookup (a Key Lookup or RID Lookup). The small script below mirrors this analogy.
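
Here is a minimal sketch of the analogy in TSQL (the table name, index names and data are hypothetical, for illustration only):

CREATE TABLE dbo.PhoneDirectory
(
    LastName  VARCHAR(50)  NOT NULL,
    FirstName VARCHAR(50)  NOT NULL,
    Phone     VARCHAR(20)  NOT NULL,
    Email     VARCHAR(100) NULL
)
GO
--Clustered index: the table itself is ordered by name, like the printed directory
CREATE CLUSTERED INDEX CIX_PhoneDirectory_Name ON dbo.PhoneDirectory (LastName, FirstName)
GO
--Non clustered index on the phone number
CREATE NONCLUSTERED INDEX NCIX_PhoneDirectory_Phone ON dbo.PhoneDirectory (Phone)
GO
--Satisfied by the clustered index alone: the 'page' we reach has everything requested
SELECT Phone FROM dbo.PhoneDirectory WHERE LastName = 'Mike' AND FirstName = 'Robert'
--Uses the non clustered index and then a lookup to fetch Email (the extra 'phone call')
SELECT Email FROM dbo.PhoneDirectory WHERE Phone = '555-0100'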

Hope this gives you a clear picture of clustered and non clustered indexes. In the coming posts, we will discuss clustered and non clustered indexes in detail.

If you liked this post, do like my page on Facebook


Wednesday 10 October 2012

SQL Server: Understanding The Page Free Space (PFS) Page

In my earlier posts, we discussed the GAM and SGAM pages and the data page. In this post, let us discuss the Page Free Space (PFS) page.

The PFS page is the second page (page number 1) in the data file, following the file header (page number 0). GAM and SGAM pages are used to track extent allocation status, whereas PFS pages are used to track page level allocation. While allocating pages, the database engine identifies an extent with free pages using the GAM and SGAM. Once the database engine finds an extent with a free page, it uses the PFS page to identify the unallocated pages in that extent and the amount of free space in each page. Free space is only tracked for pages storing LOB values (i.e. text/image, varchar(max), nvarchar(max), varbinary(max), row overflow data) and heap data pages. By default, LOB data is stored on separate pages, keeping only a pointer along with the record stored in the data page. These are the only pages where data can be stored in any available free space. For index pages, the data must be stored in index order and there is no choice in the insertion point of a record, so tracking free space in index pages would be unnecessary overhead.

GAM and SGAM pages have a bit map, but the PFS page has a byte map. A PFS page keeps one byte for each page in the PFS interval, so a PFS page can hold the information of 8,088 pages (covering roughly 64 MB of the data file).
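
A quick back-of-envelope check of that interval size (8 KB per page):

SELECT 8088 * 8 / 1024.0 AS PFS_Interval_MB  --= 63.19 MB, i.e. roughly 64 MB per PFS page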

The bits in each byte are encoded to mean the following (a small decoding example follows the list):
  • bits 0-2: how much free space is on the page
    • 0x00 is empty
    • 0x01 is 1 to 50% full
    • 0x02 is 51 to 80% full
    • 0x03 is 81 to 95% full
    • 0x04 is 96 to 100% full
  • bit 3 (0x08): is there one or more ghost records on the page?
  • bit 4 (0x10): is the page an IAM page?
  • bit 5 (0x20): is the page a mixed-page?
  • bit 6 (0x40): is the page allocated?
  • bit 7 is unused
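
These flags can be decoded with simple bitwise arithmetic. Below is a minimal sketch using a hypothetical byte value (0x64 = allocated, mixed page, 96 to 100% full):

DECLARE @pfsbyte TINYINT
SET @pfsbyte = 0x64
SELECT
    @pfsbyte & 0x07 AS FreeSpaceLevel,  --0x04 => 96 to 100% full
    CASE WHEN @pfsbyte & 0x08 = 0x08 THEN 1 ELSE 0 END AS HasGhostRecords,
    CASE WHEN @pfsbyte & 0x10 = 0x10 THEN 1 ELSE 0 END AS IsIAMPage,
    CASE WHEN @pfsbyte & 0x20 = 0x20 THEN 1 ELSE 0 END AS IsMixedPage,
    CASE WHEN @pfsbyte & 0x40 = 0x40 THEN 1 ELSE 0 END AS IsAllocated
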
Let us try to explore a PFS page

CREATE DATABASE mydb
GO

USE Mydb
GO
DBCC TRACEON(3604) --send DBCC output to the client session
GO
DBCC PAGE(mydb,1,1,3) --database, file id 1, page id 1 (the PFS page), print option 3

The output will look as shown below:

Let us create a table with data

USE Mydb
GO
SELECT * INTO SalesOrderHeaderTest FROM AdventureWorks2008.Sales.SalesOrderHeader
GO
DBCC TRACEON(3604)
GO
DBCC PAGE(mydb,1,1,3)

The output will look as shown below. You can see that new pages were added from 288 to 1311.

Now let us drop this table

USE Mydb
GO
DROP TABLE SalesOrderHeaderTest 
GO
DBCC TRACEON(3604)
GO
DBCC PAGE(mydb,1,1,3)

The output will look as shown below:

You will notice that a set of pages is in the 'not allocated' state but 100 percent full. This is because the PFS bytes are not fully reset until the page is reallocated. On deallocation, the database engine resets only the allocation status bit; this allows the database engine to roll back the deallocation by simply setting the allocation status bit again.

Reference: Paul Randal's blog post

If you liked this post, do like my page on Facebook

Thursday 4 October 2012

SQL Server: Sending Email in HTML Table Format Using TSQL

As part of our DBA life, we have to configure many alert or statistical emails which give an idea about the state of the database server. Let us discuss how we can send an email in HTML table format using TSQL. Below is a sample script, which is self explanatory.

USE MSDB
GO
DECLARE @Reportdate DATE
SET @Reportdate =CONVERT(VARCHAR(10),GETDATE(),121)

/**************************************************************

           full backup Header
***************************************************************/

DECLARE @FullBackupHeader VARCHAR(MAX)

SET @FullBackupHeader='<font color=black bold=true size= 5>'
SET @FullBackupHeader=@FullBackupHeader+'<BR /> Full Backup Report<BR />' 
SET @FullBackupHeader=@FullBackupHeader+'</font>'
/**************************************************************
           full backup report Section
***************************************************************/
DECLARE @FullBackupTable VARCHAR(MAX)    
SET @FullBackupTable= CAST( (    
SELECT td = name + '</td><td>' + BackupType + '</td><td>' + FileName + '</td><td>' +
Startdate + '</td><td>' + FinishDate + '</td><td>' + Duration + '</td><td>' + BackupSize +
'</td><td>' + CompressionRatio 
FROM (    
     
      
SELECT 

       sd.name,
       ISNULL(db.[Backup Type],'0') AS [BackupType],
       ISNULL(DB.Physical_device_name,'No Backup') AS 'FileName',
       CAST(ISNULL(DB.backup_start_date,'1900-01-01') AS VARCHAR(24))  AS Startdate ,
       CAST(ISNULL(DB.backup_finish_date,'1900-01-01') AS VARCHAR(24)) AS FinishDate,
       CAST(ISNULL(DB.Duration,'0') AS VARCHAR(24)) AS Duration,
       LEFT(CAST(ISNULL(Backupsize,0)AS VARCHAR(100)),4)+' GB' AS BackupSize,
       LEFT(CAST(ISNULL(ratio,0)AS VARCHAR(100)),5)+'%' AS CompressionRatio FROM 
       SYS.SYSDATABASES sd LEFT JOIN 
       (
       SELECT 
       bm.media_Set_id,
       'FullBackup' AS 'Backup Type',
       bm.Physical_device_name ,
       backup_start_date,
       backup_finish_date,
       Duration =
CONVERT(VARCHAR(5),DATEDIFF(second,backup_start_date,backup_finish_date)/60) + ':' +  
RIGHT('00' + CONVERT(VARCHAR(5),DATEDIFF(second,backup_start_date,backup_finish_date)%60),2),
       database_name,
       ROUND((compressed_backup_size)/1024/1024/1024,2) AS Backupsize ,
       100-(compressed_backup_size*100/backup_size) AS ratio
       FROM msdb..backupmediafamily BM 
       INNER JOIN msdb..backupset bs ON bm.media_Set_id = bs.media_Set_id 
       WHERE [type]='D' AND backup_start_date>=DATEADD(dd,-1,@Reportdate) AND 
backup_start_date<=@Reportdate
       ) db ON sd.name=db.database_name
      ) AS d ORDER BY BackupType
  FOR XML PATH( 'tr' ), TYPE ) AS VARCHAR(MAX) )    

  
  
SET @FullBackupTable= '<table cellpadding="0" cellspacing="0" border="1">'    

              + '<tr><th width="10">Database Name</th>
              <th  width="20">Backup Type</th>
              <th width="80">File Name</th>
              <th width="100">Start Date</th>
              <th width="40">Finish Date</th>
              <th width="40">Duration</th>
              <th width="10">Backup Size</th>
              <th width="40">Compression Ratio</th></tr>'    
              + REPLACE( REPLACE( @FullBackupTable, '&lt;', '<' ), '&gt;', '>' )   
              + '</table>' 
             /**************************************************************
           differential backup Header
***************************************************************/
DECLARE @DiffBackupHeader VARCHAR(MAX)
SET @DiffBackupHeader ='<font color=black bold=true size= 5>'
SET @DiffBackupHeader =@DiffBackupHeader +'<BR /> Differential Backup Report<BR />' 
SET @DiffBackupHeader =@DiffBackupHeader +'</font>'
/**************************************************************
           Differential backup Section
***************************************************************/
DECLARE @DiffBackupTable VARCHAR(MAX)    
SET @DiffBackupTable= CAST( (    
SELECT td = name + '</td><td>' + BackupType + '</td><td>' + FileName + '</td><td>' +
Startdate + '</td><td>' + FinishDate + '</td><td>' + Duration + '</td><td>' + BackupSize +
'</td><td>' + CompressionRatio 
FROM (    
       SELECT 
       sd.name,
       ISNULL(db.[Backup Type],'0') AS [BackupType],
       ISNULL(DB.Physical_device_name,'NO BACKUP') AS 'FileName'  ,
       CAST(ISNULL(DB.backup_start_date,'1900-01-01') AS VARCHAR(24)) AS Startdate ,
       CAST(ISNULL(DB.backup_finish_date,'1900-01-01') AS VARCHAR(24)) AS FinishDate,
       CAST(ISNULL(DB.Duration,'0') AS VARCHAR(24)) AS Duration,
       LEFT(CAST(ISNULL(Backupsize,0) AS VARCHAR(100)),6)+' MB' AS BackupSize,
       LEFT(CAST(ISNULL(ratio,0)AS VARCHAR(100)),5)+'%' AS CompressionRatio  
       FROM SYS.SYSDATABASES sd LEFT JOIN 
       (
           SELECT 
           bm.media_Set_id,
           'Differential Backup' AS 'Backup Type',
           bm.Physical_device_name ,
           backup_start_date,
           backup_finish_date,
           Duration =
CONVERT(VARCHAR(5),DATEDIFF(second,backup_start_date,backup_finish_date)/60) + ':' +  
RIGHT('00' + CONVERT(VARCHAR(5),DATEDIFF(second,backup_start_date,backup_finish_date)%60),2),
           database_name,
           ROUND((compressed_backup_size)/1024/1024,2) AS Backupsize ,
           100-(compressed_backup_size*100/backup_size) AS ratio
           FROM msdb..backupmediafamily BM INNER JOIN msdb..backupset bs ON bm.media_Set_id =
bs.media_Set_id 
           WHERE [type]='I'  AND backup_start_date>=DATEADD(dd,-1,@Reportdate) AND 
backup_start_date<=@Reportdate
       ) db ON sd.name=db.database_name
       ) AS d ORDER BY BackupType
      FOR XML PATH( 'tr' ), TYPE ) AS VARCHAR(MAX) )    
 
    
SET @DiffBackupTable= '<table cellpadding="0" cellspacing="0" border="1">'    

              + '<tr><th width="10">Database Name</th>
              <th  width="20">Backup Type</th>
              <th width="80">File Name</th>
              <th width="100">Start Date</th>
              <th width="40">Finish Date</th>
              <th width="40">Duration</th>
              <th width="10">Backup Size</th>
              <th width="40">Compression Ratio</th></tr>'    
              + REPLACE( REPLACE( @DiffBackupTable, '&lt;', '<' ), '&gt;', '>' )   
              + '</table>' 

/**************************************************************

   Empty Section for giving space between table and headings
***************************************************************/
DECLARE @emptybody2 VARCHAR(MAX)  
SET @emptybody2=''  
SET @emptybody2 = '<table cellpadding="5" cellspacing="5" border="0">'    
              + '<tr>
              <th width="500">               </th>
              </tr>'    
              + REPLACE( REPLACE( ISNULL(@emptybody2,''), '&lt;', '<' ), '&gt;', '>' )   
              + '</table>'    
/**************************************************************
           Sending Email
***************************************************************/
DECLARE @subject AS VARCHAR(500)    
DECLARE @importance AS VARCHAR(6)    
DECLARE @EmailBody VARCHAR(MAX)
SET @importance ='High'     
DECLARE @recipientsList VARCHAR(8000)
SELECT @recipientsList = 'Dba@PracticalSqlDba.com;nelsonaloor@PracticalSqlDba.com'

SET @subject = 'Backup Report of MYSql Instance'     

SELECT @EmailBody 
=@FullBackupHeader+@emptybody2+@FullBackupTable+@emptybody2+@DiffBackupHeader 
+@emptybody2+@DiffBackupTable
EXEC msdb.dbo.sp_send_dbmail    
@profile_name ='MyMailProfile',    
@recipients=@recipientsList,
@subject = @subject ,    
@body = @EmailBody ,    
@body_format = 'HTML' ,    
@importance=@importance    


You can download the formatted script from here. I have implemented this method to send various statistical/alert emails. Hope this helps you.

If you liked this post, do like my page on Facebook

Tuesday 2 October 2012

SQL Server: Setting up Server Side Trace

For a DBA, SQL Profiler is a great tool to troubleshoot performance issues, but it is a pain to configure a trace and analyse it using the Profiler GUI. I usually use a server side trace, which can be configured by running a set of scripts. Server side traces are also much lighter, as they do not need to transfer data over the network. In this post, let us discuss configuring a server side trace.

There are four steps in creating a server side trace. 
  1. Defining the trace.
  2. Defining the events and columns to be captured.
  3. Defining the filter condition.
  4. Starting the trace.
Please find below the script used to configure a server side trace. Refer to BOL for more options for adding events, columns and filter conditions.

   /****************************************************************************************
    STEP 1 : DEFINING THE TRACE    
   ***************************************************************************************/
   SET NOCOUNT ON;
   DECLARE @rc INT
   DECLARE @TraceID INT
   DECLARE @MaxFileSize BIGINT
   DECLARE @OutputFileName NVARCHAR(256)
   SET @MaxFileSize = 1024


  
--Replace D:\MyTraces with a valid folder in your environment
  
SET @OutputFileName = 'D:\MyTraces\FileTrace' +     CONVERT(VARCHAR(20), GETDATE(),112) + REPLACE(CONVERT(VARCHAR(20), GETDATE(),108),':','')



  
--sp_trace_create @traceid,@options,@tracefile,@maxfilesize,@stoptime,@filecount
--@options = 2 enables TRACE_FILE_ROLLOVER; @filecount = 5 keeps at most five rollover files

   EXEC @rc = sp_trace_create @TraceID OUTPUT, 2, @OutputFileName, @MaxFileSize, NULL, 5
   
   /****************************************************************************************
    STEP 2 : DEFINING THE EVENT AND COLUMNS
   *****************************************************************************************/
 
  
DECLARE @Status bit

   SET @Status = 1
 
  
--sp_trace_setevent @traceid ,@eventid ,@columnid,@on 


   --RPC:Completed event

   EXEC sp_trace_setevent @TraceID, 10, 16, @Status 
   EXEC sp_trace_setevent @TraceID, 10, 1,  @Status
   EXEC sp_trace_setevent @TraceID, 10, 17, @Status
   EXEC sp_trace_setevent @TraceID, 10, 14, @Status
   EXEC sp_trace_setevent @TraceID, 10, 18, @Status
   EXEC sp_trace_setevent @TraceID, 10, 12, @Status
   EXEC sp_trace_setevent @TraceID, 10, 13, @Status
   EXEC sp_trace_setevent @TraceID, 10, 8,  @Status
   EXEC sp_trace_setevent @TraceID, 10, 10, @Status
   EXEC sp_trace_setevent @TraceID, 10, 11, @Status
   EXEC sp_trace_setevent @TraceID, 10, 35, @Status


   --SQL:BatchCompleted event

   EXEC sp_trace_setevent @TraceID, 12, 16, @Status
   EXEC sp_trace_setevent @TraceID, 12, 1,  @Status
   EXEC sp_trace_setevent @TraceID, 12, 17, @Status
   EXEC sp_trace_setevent @TraceID, 12, 14, @Status
   EXEC sp_trace_setevent @TraceID, 12, 18, @Status
   EXEC sp_trace_setevent @TraceID, 12, 12, @Status
   EXEC sp_trace_setevent @TraceID, 12, 13, @Status
   EXEC sp_trace_setevent @TraceID, 12, 8,  @Status
   EXEC sp_trace_setevent @TraceID, 12, 10, @Status
   EXEC sp_trace_setevent @TraceID, 12, 11, @Status
   EXEC sp_trace_setevent @TraceID, 12, 35, @Status

  
/****************************************************************************************

    STEP 3 : DEFINING THE Filter condition
   *****************************************************************************************/
   --sp_trace_setfilter @traceid ,@columnid,@logical_operator,@comparison_operator,@value
   EXEC sp_trace_setfilter @TraceID,8,0,0,N'MyAppServer'  --Hostname
   EXEC sp_trace_setfilter @TraceID,35,0,0,N'MyDB' --Database name
   EXEC sp_trace_setfilter @TraceID,11,0,0,N'MyAppUser' --SQL login
   /****************************************************************************************
    STEP 4 : Start the trace
   *****************************************************************************************/
   EXEC sp_trace_setstatus @TraceID, 1
   /****************************************************************************************
     Display the trace Id and traceFilename
   *****************************************************************************************/
   SELECT @TraceID,@OutputFileName

Now the trace is running. You can verify the currently running traces using the query below.

SELECT * FROM ::fn_trace_getinfo(NULL)
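
Each row of the output describes one property of a trace; as a quick reference (per BOL), property 1 is the trace options value, 2 the file name, 3 the max size, 4 the stop time and 5 the current status (1 = running). To look at a single trace, assuming its id is 2 as in the stop script below:

SELECT traceid, property, value FROM ::fn_trace_getinfo(2)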

Once it has run for the desired time, you can stop the trace using the script below.

   --sp_trace_setstatus [ @traceid = ] trace_id , [ @status = ] status
    
    DECLARE @traceid INT
    DECLARE @status INT
    SET @traceid = 2   --use the trace id returned when the trace was created
    SET @status = 0    --0 stops the trace
    EXEC sp_trace_setstatus @traceid,@status
    SET @status = 2    --2 closes the trace and deletes its definition from the server
    EXEC sp_trace_setstatus @traceid,@status

To view the content of the trace file, use the query below. If you have added more columns to the trace, add those columns to the SELECT statement as well. You can also insert the output into a table for further analysis of the trace, as shown in the subsequent script.

SELECT 
TextData, Duration/1000 AS Duration_ms, --Duration is captured in microseconds
Reads, Writes, CPU, StartTime, HostName, ApplicationName, LoginName, DatabaseName
FROM fn_trace_gettable('D:\MyTraces\FileTrace20120929023408.trc',1)
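
As mentioned above, the trace can be loaded into a table for offline analysis. A minimal sketch (the table name dbo.MyTraceAnalysis is just an example):

SELECT * INTO dbo.MyTraceAnalysis
FROM fn_trace_gettable('D:\MyTraces\FileTrace20120929023408.trc',1)
GO
--For example, list the top 10 longest statements
SELECT TOP 10 TextData, Duration/1000 AS Duration_ms, Reads, CPU
FROM dbo.MyTraceAnalysis
ORDER BY Duration DESC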
  
The query below will help us list the events and columns captured as part of a trace.

SELECT 
t.EventID,
t.ColumnID,
e.name AS Event_Description,
c.name AS Column_Description
FROM ::fn_trace_geteventinfo(2) t   --Change the trace id to the appropriate one 
JOIN sys.trace_events e ON t.eventID = e.trace_event_id
JOIN sys.trace_columns c ON t.columnid = c.trace_column_id


If you liked this post, do like my page on Facebook