Thursday 29 March 2012

Caveats of a CONTROL SERVER permission

This morning I was working on one of our servers, completing the regular security audit documentation that details the logins with sysadmin rights and their access to the various databases. Later I realized that one login had access to every database, yet it was not captured by my auditing script. The culprit: the CONTROL SERVER permission.

The CONTROL SERVER permission is roughly equivalent to sysadmin, except that logins with CONTROL SERVER honor explicitly denied server-level permissions, whereas members of the sysadmin server role bypass them. Also note that logins with CONTROL SERVER permission have implicit access to all databases, just like sysadmin members and database owners.

Logins with CONTROL SERVER rights have no mapping entry in sys.database_principals, yet they can access every database. The worst part is that logins with CONTROL SERVER permission are not easy to find unless you write an explicit query: they are not listed anywhere in the SSMS UI, and there is no system procedure, like sp_helpsrvrolemember, that lists the logins holding CONTROL SERVER rights.

Let us walk through a sample script. Create two logins using the script below.

CREATE LOGIN SysadminLogin WITH PASSWORD = 'password123~'
GO
CREATE LOGIN controlserverlogin WITH PASSWORD = 'password123~'
GO
EXEC sp_addsrvrolemember 'SysadminLogin', 'sysadmin'
GO
GRANT CONTROL SERVER TO controlserverlogin
Now log in to the server as controlserverlogin; you can access all the databases and perform any action. Let us see what happens when a server-level permission is explicitly denied.

DENY VIEW ANY DATABASE TO controlserverlogin
GO
DENY VIEW ANY DATABASE TO sysadminlogin
GO

Now log in to the server using both logins. You will notice that the session connected as controlserverlogin lists only master and tempdb, while the session connected as sysadminlogin lists all available databases.
The other potential issue with logins holding CONTROL SERVER rights is that they can add themselves to the sysadmin server role, or create a new login with sysadmin membership. Fortunately it cannot be done in a single straightforward step. Let us see how it works: connect to the server as controlserverlogin and execute the scripts below.


/* Fortunately, this will fail */
EXEC sp_addsrvrolemember 'controlserverlogin','sysadmin';
GO
/* Unfortunately, this will work even if the sa account is disabled */
EXECUTE AS LOGIN = 'sa';
GO
EXEC sp_addsrvrolemember 'controlserverlogin','sysadmin';
GO
REVERT;


Now controlserverlogin has sysadmin role membership, and that session can see all available databases.
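A quick way to verify the membership (using the sample login name from above):

SELECT IS_SRVROLEMEMBER('sysadmin', 'controlserverlogin') AS IsSysadmin
-- Returns 1 once the login has been added to the sysadmin role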


The script below lists the logins with sysadmin role membership and CONTROL SERVER permission.


/* List logins with membership in the sysadmin server role and Control Server right */
SELECT p.[name] AS [Login],
       'sysadmin Role Member' AS [Access],
       p.type_desc AS loginType
FROM   sys.server_principals p
       JOIN sys.server_role_members rm
         ON p.principal_id = rm.member_principal_id
       JOIN sys.server_principals rp
         ON rm.role_principal_id = rp.principal_id
WHERE  rp.[name] = 'sysadmin'
UNION ALL
SELECT p.[name],
       'Control Server Right',
       p.type_desc AS loginType
FROM   sys.server_principals p
       JOIN sys.server_permissions sp
         ON p.principal_id = sp.grantee_principal_id
WHERE  sp.class = 100
       AND sp.[type] = 'CL'
       AND state = 'G'
GO

The script below lists the explicitly denied server-level permissions for logins that have CONTROL SERVER permission.


/* List explicitly denied permissions for logins that have Control Server permission */
SELECT CSL.*,
       sp.permission_name AS [Explicitly Denied Permission]
FROM   sys.server_permissions sp
       INNER JOIN (SELECT p.principal_id,
                          p.[name],
                          p.type_desc AS loginType
                   FROM   sys.server_principals p
                          JOIN sys.server_permissions sp
                            ON p.principal_id = sp.grantee_principal_id
                   WHERE  sp.class = 100
                          AND sp.[type] = 'CL'
                          AND state = 'G') CSL
         ON CSL.principal_id = sp.grantee_principal_id
WHERE  state = 'D'



If you liked this post, do like my page on FaceBook at http://www.facebook.com/practicalSqlDba




Wednesday 28 March 2012

Microsoft Failover cluster 2008 : Moving MSDTC to new SAN disk

We have a four-node cluster on Windows 2008 where the MSDTC disk is hosted on a SAN array. We now have to move the MSDTC disk to a new SAN drive because the existing SAN is being decommissioned. In this post I will explain the steps we followed in our environment to move the MSDTC disk to the new SAN. The same steps can be followed to configure MSDTC in a new cluster environment.

From the current host server (the node where the quorum drive is visible), open Failover Cluster Management. In the left pane expand the Services and Applications group and select the MSDTC resource. From the summary window, note down the server network name and IP address for use in later steps.

The next step is to get the new SAN disk into the cluster. To add the disk, follow the steps below:
  • Open Windows Disk Management.
  • Right-click the new disk and select Online.
  • Right-click again and select Initialize Disk.
  • Finally, format it by right-clicking the disk and selecting New Simple Volume.
  • This opens the New Simple Volume Wizard; provide the new drive letter (for example, U) and a label for the disk.
  • In Failover Cluster Management, select Storage in the left pane.
  • Select Add a Disk in the Actions pane and select the new disk.
At this point the disk appears under the Available Storage group, and everything is ready for the move. Unfortunately we cannot move the existing MSDTC to the new disk directly; the only option is to delete the existing one and add a new MSDTC.
Follow the steps below to delete and recreate the MSDTC resource:
  • In Failover Cluster Management, expand Services and Applications.
  • Right-click the MSDTC group and select Delete.
  • The existing MSDTC disk (the X drive in our environment) now appears under Available Storage.
  • Select the old disk (X drive) under Storage and click the Change Drive Letter option in the Actions pane.
  • Select None as the new drive letter and click OK.
  • Select the new disk (U drive) under Storage and click the Change Drive Letter option in the Actions pane.
  • Select X as the new drive letter and click OK.
    •    The last four steps keep the same drive letter for the new disk as for the old one; they are not mandatory.
  • Right-click Services and Applications and select Configure a Service or Application.
  • This opens the High Availability Wizard.
  • Click Next to list the services and applications that can be configured for high availability, and select Distributed Transaction Coordinator (DTC).
  • Click Next to open the Client Access Point configuration screen. Enter the network name and IP address noted down in the first step.
  • Click Next to open the storage configuration screen, which lists all disks under the Available Storage group. Select the appropriate disk and click Next.


Clicking Next takes you to the confirmation screen, and clicking Next once more completes the configuration. MSDTC is now configured on the new SAN disk, which can be verified by opening the drive in Windows Explorer: the MSDTC folder inside the new drive confirms the move. Bring the service online, set the preferred owner through the properties window, and test the failover.

In a later post I will explain moving the quorum disk and the system/user databases to the new SAN array.

If you liked this post, do like my page on FaceBook at http://www.facebook.com/practicalSqlDba


Monday 26 March 2012

SQL server 2005 Database Mail stopped functioning after installing Service Pack 3

Recently I upgraded one of our SQL Server 2005 instances to Service Pack 3, and some time later we noticed that Database Mail on that instance had stopped working, with the following error in the Database Mail log.

[260]  Unable to start mail session (reason: Microsoft.SqlServer.Management.SqlIMail.Server.Common.BaseException: Mail configuration information could not be read from the database. ---> System.IndexOutOfRangeException: timeout


Unfortunately, Database Mail is not used extensively in our QA environment, so we did not notice the issue there. After applying the service pack in production, all our monitoring and alert mails stopped working.

KB article 2008286 clearly states that the solution is to apply Cumulative Update 4 (CU4), but that would require going through the change management process, which is time consuming.

After spending some time on it, I found a workaround: updating the sysmail_help_admin_account_sp stored procedure in the msdb database.

The reason behind the issue is that when the Database Mail binaries modified by Service Pack 3 call the above stored procedure, they expect a timeout column in its result set, but the existing version of the procedure does not return that column, hence the IndexOutOfRangeException. The modified version of the stored procedure is available here to download. This procedure is replaced by the Microsoft version when CU4 for Service Pack 3, or a later update, is applied.
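Before applying the workaround, you can confirm you are hitting this issue by inspecting the current procedure definition and the Database Mail log through standard msdb objects (a quick check, not part of the fix itself):

USE msdb
GO
-- Current definition of the procedure the SP3 binaries call
EXEC sp_helptext 'dbo.sysmail_help_admin_account_sp'
GO
-- Recent Database Mail errors
SELECT TOP (20) log_date, event_type, description
FROM dbo.sysmail_event_log
ORDER BY log_date DESC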

If you liked this post, do like my page on FaceBook at http://www.FaceBook.com/PracticalSqlDba



Wednesday 21 March 2012

How do you secure the SQL Server instance after enabling xp_cmdshell?

OR
How do you allow a non-administrator to use xp_cmdshell?

From the day I started working with SQL Server, I have heard about the security risk associated with enabling xp_cmdshell and used to refrain from using it. I have now started using it, because xp_cmdshell simplifies many tasks for us. The security risk associated with xp_cmdshell can be reduced by following the steps given below.

By default xp_cmdshell is disabled; it can be enabled through the Surface Area Configuration tool or sp_configure. Enabling xp_cmdshell is a risk because it opens a Windows command shell in the same security context as the SQL Server service account. In most scenarios the SQL Server service account has elevated permissions, such as membership in the local or domain administrators group. Using these elevated rights, a malicious user can do a lot of damage: creating a login with administrator rights, changing the registry, deleting files and folders, accessing network paths, stealing data, and so on. With the default configuration, a SQL login needs sysadmin or CONTROL SERVER rights to run xp_cmdshell.
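For reference, enabling xp_cmdshell through sp_configure looks like this:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO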

The first step to reduce the security risk of enabling xp_cmdshell is to run SQL Server under a service account with minimal rights. It is always advisable to run SQL Server and related services under a domain account with the minimum required privileges. This limits the damage a SQL login with sysadmin or CONTROL SERVER rights can do through xp_cmdshell.

To allow a non-administrator to run xp_cmdshell, we have to let SQL Server open the Windows command shell under the security context of a less privileged Windows account by configuring a server proxy account. To do that, create a domain or local machine account with very minimal permissions and configure the proxy account as shown below.


EXEC sp_xp_cmdshell_proxy_account 'MyDomain\My.login','MyPassw0rd'

Now a non-administrator can run xp_cmdshell once we create a user for the login in the master database and grant it EXECUTE permission on xp_cmdshell, as shown below.


USE master
GO
CREATE LOGIN cmdshelluser WITH PASSWORD = 'Password123'
CREATE USER cmdshelluser FOR LOGIN cmdshelluser
GRANT EXECUTE ON xp_cmdshell TO cmdshelluser

Now the Windows command shell opened by cmdshelluser through xp_cmdshell runs under the security context of the proxy account configured earlier. To verify the security context:

EXECUTE AS LOGIN = 'cmdshelluser'
GO
xp_cmdshell 'whoami.exe'
GO
REVERT

If a non-administrator who has been granted EXECUTE permission on xp_cmdshell runs it before the proxy account is configured, SQL Server throws the error below:

Msg 15153, Level 16, State 1, Procedure xp_cmdshell, Line 1
The xp_cmdshell proxy account information cannot be retrieved or is invalid. Verify that the '##xp_cmdshell_proxy_account##' credential exists and contains valid information.


Thank you for reading this article. Follow my blog  @  Facebook page


Wednesday 14 March 2012

Importing images to SQL server database using T-Sql

Yesterday evening one of the developers asked me for help importing images into a table. It was an interesting question, as I had never thought about importing images into a database table using T-SQL. I invested some time and found that OPENROWSET is the solution; it is the best way to read non-SQL data sources.

The T-SQL listing below helped me resolve the issue. Note that the image folder should be accessible from the SQL Server machine; if the image folder is in a remote location, you have to either copy it to the server or map/share the folder so the server can reach it.
--Target table where the data is to be imported
CREATE TABLE userinfo
(
    USER_ID      INT NOT NULL IDENTITY(1, 1),
    logonname    VARCHAR(200),
    profileimage VARBINARY(MAX)
)
GO

--Work table to process the files
CREATE TABLE imagelist
(
    imgfilename VARCHAR(200)
)
GO

--Put all file names in a table for easy processing
DECLARE @SQL AS NVARCHAR(2000)
SET @SQL = N'xp_cmdshell ''dir d:\USerprofileImage /B'''
INSERT INTO imagelist (imgfilename)
EXEC sp_executesql @SQL
GO

--Import the data into the target table
DECLARE @SQL AS NVARCHAR(2000)
DECLARE @ImgFilename AS VARCHAR(200)

DECLARE filelist CURSOR FOR
    SELECT imgfilename
    FROM   imagelist
    WHERE  imgfilename IS NOT NULL --skip the NULL row appended by xp_cmdshell

OPEN filelist
FETCH NEXT FROM filelist INTO @ImgFilename

WHILE ( @@FETCH_STATUS = 0 )
BEGIN
    --The file name (without extension) becomes LogonName; the file content is loaded as a single BLOB
    SET @SQL = 'insert into UserInfo(LogonName,ProfileImage)
select
reverse(substring(REVERSE(''' + @ImgFilename + '''),charindex(''.'',REVERSE(''' + @ImgFilename + '''),1)+1,LEN(''' + @ImgFilename + '''))),
(SELECT img.bulkcolumn FROM OPENROWSET(BULK ''d:\USerprofileImage\' + @ImgFilename + ''',SINGLE_BLOB) AS img)'

    EXEC sp_executesql @SQL

    FETCH NEXT FROM filelist INTO @ImgFilename
END

CLOSE filelist
DEALLOCATE filelist
GO

--Drop the work table
DROP TABLE imagelist




Understanding the concept of SQL Server Failover Cluster

Before getting hands-on experience with SQL Server on a failover cluster, I used to read many articles about failover clustering. Unfortunately I was not able to digest the concept until I got my hands wet, and I will keep that in mind while explaining failover clustering in this post.

What is a Windows cluster? A cluster is a group of independent computer systems, referred to as nodes, working together as a unified computing resource. Unified does not mean that a single application can use the resources (CPU/memory) of multiple computers at the same time; rather, a cluster provides a single name through which clients use its services. There are two types of cluster:

Network Load Balancing Cluster: A Network Load Balancing (NLB) cluster distributes the load (requests from clients) across the different nodes that are part of the cluster, based on predefined rules. The client application communicates with a single cluster IP address (virtual server) and does not know which node in the cluster is actually serving its request. An NLB cluster helps to enhance both the availability and the scalability of an application. We are not going to talk much about NLB in this post.

Failover Cluster: A failover cluster is a collection of servers that work together to increase the availability of the applications and services running on the cluster. It can be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. A failover cluster provides high availability for cluster-aware applications such as SQL Server, Exchange Server, and so on. The major difference between an NLB cluster and a failover cluster is that a failover cluster does not improve the scalability of the application; it provides only high availability.

There are many misunderstandings about the advantages of running SQL Server in a failover cluster environment. The table below gives a clear picture of the capabilities of SQL Server in a cluster environment.


To understand failover clustering it is important to be familiar with the terminology. Let us look at the terms used in Windows clustering.

Server Node: A physical server running an operating system that supports Windows clustering. Each server node should have at least two network cards, one for the public network and one for the private network. At least one local hard disk is required for the OS and other application binaries.

Private Storage: Local disks are referred to as private storage. There will be at least one private disk for the OS and SQL binaries; a server node can have an additional local disk for a bigger page file.
Shared Disk Array: Each server needs to be attached to shared external storage. In a non-clustered SQL Server instance, the databases (system/user/resource) are stored on locally attached disks, but in a clustered SQL Server instance the databases are stored on a shared disk array. That means all the server nodes in the cluster are physically connected to the disk array. This shared storage configuration is what allows the application to fail over between servers in the cluster.

Quorum Disk: The quorum is the cluster's configuration file. This file (quorum.log) resides on the quorum disk (one disk from the shared disk array). The quorum is the main arbiter between the nodes: it stores the latest cluster configuration and resource data, which helps the other nodes take ownership when one node goes down.
In real life it is like the emergency contact number and medical information in your wallet, which help others take action in case of an emergency.

Public Network and Private Network: Each server needs at least two network cards. Typically one is for the public network and the other for a private network between the nodes. The private network is used for internal cluster communication, referred to as the heartbeat. The public network connects to the external world or the intranet.

Heartbeat: The heartbeat is the cluster's health-check mechanism: a single UDP packet is sent between the nodes over the private network to confirm that they are still online. By default the cluster service waits five seconds (one heartbeat is sent every second) before considering a cluster node unreachable.

LooksAlive check: The LooksAlive check is a basic resource health check that verifies the service (the SQL Server service in our context) is running. To perform it, the cluster service queries the Windows Service Control Manager for the status of the service. By default the LooksAlive check runs every five seconds.

IsAlive check: A more thorough check to verify that a resource is running properly. If this check fails, the resource is taken offline and the failover process is triggered. During the IsAlive check the cluster service connects to the SQL Server instance and executes SELECT @@SERVERNAME. It checks only the availability of the SQL Server instance, not the availability of the user databases.

Failover: The cluster service monitors the health of the nodes and the resources in the cluster. In the event of a server or resource failure, the cluster service restarts the failed server's workload on one of the remaining servers, based on the possible owners settings. The process of detecting failures and restarting the application on another server in the cluster is known as failover.


Virtual Server: A virtual server consists of a network name and an IP address to which the clients connect. This lets a client connect to a service hosted in the cluster without knowing which server node the service is currently running on. In simple terms, the virtual server lets the client application connect to the SQL Server instance without changing its connection string when the instance fails over.

Possible Owners: Possible owners are the server nodes to which a cluster group (in our context a SQL instance) can fail over.

Preferred Owner: The preferred owner is the server node best suited to run an application or group.

Cluster Setup: The schematic diagram below gives a clear picture of a Windows cluster setup.

The picture depicts a two-node cluster setup. At this stage Node A and Node B have only the OS and the Windows cluster service, and one of the cluster disks is designated as the quorum disk. This setup is usually done by Windows administrators and storage specialists; on top of it, DBAs install and configure the SQL instances. We are not going to talk about SQL Server installation in a cluster environment in this post.

To make it clearer, let us describe our cluster environment as given below.

Physical nodes: Pnode-A and Pnode-B, each with a local disk C for the OS and SQL binaries.
SQL instances: INST1 (virtual server A, cluster disks D and E) serving application App1, and INST2 (virtual server B, cluster disks F and G) serving application App2.

Let us look at a pictorial representation of the current setup.
Fig 1

From Fig 1 we can see that INST1 and INST2 are installed on both physical nodes, but at any point in time each instance is online on only one node. At this stage INST1 is running on physical node Pnode-A and INST2 on Pnode-B. Cluster disks D and E are visible on Pnode-A, and F and G are visible on Pnode-B. Requests from App1 are routed through virtual server A and land on Pnode-A, where INST1 is running. In the same way, requests from App2 are routed through virtual server B and land on Pnode-B, where INST2 is running.

Let us see what happens if there is a hardware or network failure on Pnode-A. Fig 2 depicts the failover of INST1 to Pnode-B.

Fig 2

Now INST1 has gone offline on Pnode-A and come online on Pnode-B. Connections from App1 are routed through the same virtual server A and land on Pnode-B. All four cluster disks are now visible on Pnode-B, and only C (the local disk) is accessible on Pnode-A. Note that even after the failover of INST1, App1 still connects to virtual server A, which is what makes the failover transparent to the application.

Thank you for reading this article. Please feel free to pass your comments to improve the quality of the post. To follow this blog visit my Facebook Page





Sunday 11 March 2012

Memory configuration in SQL server and break up of memory utilized by SQL server


In this post I would like to explain the memory configuration options and memory utilization pattern of SQL Server. I have seen many people worry about the memory utilization (in Task Manager or through other monitoring tools) on a box where SQL Server is installed. I have also seen people panic after seeing the alert
The threshold for the Memory\% Committed Bytes In Use performance counter has been exceeded. The value that exceeded the threshold is: 90.5850397745768 

from SCOM (System Center Operations Manager). I hope this post will help them find an answer.

In SQL Server, physical memory utilization is controlled by the following two parameters, available through sp_configure:
  • min server memory (MB)
  • max server memory (MB)
These two parameters control only the memory used by the buffer pool (bpool), which is the biggest consumer of memory in SQL Server. Other components consume memory outside the bpool.
Below is a list of the major components that use memory outside the bpool.
  • SQL Mail 
  • COM/OLE components loaded in SQL Server
  • Prepared document using sp_xml_preparedocument
  • Linked Server
  • Backup/Restore
  • SQL CLR
It is very important to define these two parameters, especially when SQL Server runs on a box where other applications are also running, when multiple instances are installed on the same machine, or in a cluster environment.

min server memory (MB): The min server memory setting defines the lower limit of the memory available to the buffer pool. On startup, SQL Server does not immediately acquire the amount of memory specified in min server memory; it starts with the memory required to initialize and keeps acquiring more as the workload increases. Once the bpool has acquired the amount specified in min server memory, it acquires further memory depending on the memory available on the server and the max server memory setting, and it never drops below the min server memory level once that level has been reached. The total amount of memory consumed by the bpool depends entirely on the workload; an instance that is not processing many requests may never reach the min server memory limit. By default this value is set to 0.
max server memory (MB): The max server memory setting defines the upper limit for the bpool. The bpool will never acquire more memory than the value specified in max server memory, even if plenty of memory is available on the server. Once it has reached the limit and the OS requests memory, the bpool keeps releasing memory until it reaches min server memory. The default value is 2147483647 MB, which effectively means no limit.
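For reference, both settings can be changed through sp_configure (values are in MB; the numbers below are only placeholders, not a recommendation):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'min server memory (MB)', 4096
EXEC sp_configure 'max server memory (MB)', 16384
RECONFIGURE
GO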

To understand this better, look at Fig 1, where the green (40 GB) and orange (20 GB) portions are occupied by the bpool of INST1 as per its max server memory setting (60 GB), leaving 4 GB for the OS, other processes and non-bpool components. Assume we install one more instance on the same server with the configuration shown in Fig 2. To satisfy the min server memory setting of INST2 (20 GB), INST1 is forced to release the memory it held above its own min server memory setting, keeping only 2 GB of it (the orange portion). INST2 now satisfies its min server memory configuration, leaving only 42 GB for INST1 and 2 GB for the OS, other processes and non-bpool components. If at a later point the OS needs more memory to perform some task, it can grab at most 2 GB back from the instances (the memory held above their min server memory settings); if that is not enough for the OS, you will feel an overall degradation in the performance of the physical server.

Memory Configuration Considerations: I have seen many servers running with the default values for these two parameters. That may do no harm on a standalone server dedicated to a single SQL Server instance, but with multiple instances on the same server we have to configure both parameters on every instance to guarantee that all instances and the OS have a minimum amount of memory to process their workloads. A typical setting from our environment, where we run three instances on the same box, is given below.

The sum of the min server memory values is restricted to 27 GB, leaving 5 GB for the OS and other processes. Max server memory is configured so that the SQL instances can make use of that 5 GB if the OS does not require it. Also note that SQL Server is very efficient at releasing memory under memory pressure from the OS, but only down to the min server memory setting.
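As a hypothetical illustration of such a split on a 32 GB box (instance names and values are illustrative only, not the actual settings from our environment):

INST1: min server memory 12 GB, max server memory 17 GB
INST2: min server memory 10 GB, max server memory 15 GB
INST3: min server memory 5 GB,  max server memory 10 GB

The minimums add up to 27 GB, leaving 5 GB for the OS, and each maximum allows an instance to borrow that 5 GB when the OS and the other instances do not need it.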

It is even more important to configure memory settings appropriately in a cluster environment. Instances might work very smoothly while running on their own preferred owner nodes, but if one instance fails over to another node (in an active-active cluster, or when multiple instances fail over to the passive node in an active-passive setup), the performance of the instances may suffer depending on the settings. So it is important to configure these values so that multiple instances can run on the same node without a serious memory crunch. Think about a two-node active-active cluster where each node has 64 GB of memory and the SQL instance on each node is configured with 50 GB min server memory and 60 GB max server memory. What will happen if one of the instances fails over to the other node? I am sure you will be able to figure out what happens and how to resolve it.


Breakup of memory consumed by SQL Server: Below are the various objects that consume memory in SQL Server.

The script Memory Utilized by various object in SQL server.sql lists the memory consumed by the various objects in SQL Server; from the output you can easily see that the bpool is the biggest consumer of memory. It is interesting to know how much bpool memory is used by each database: the script Bpool utilization by databases.sql gives those details. It is even more interesting to see which objects in each database consume bpool space, and the script Memory Utilized by objects in db.sql gives those statistics.
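As a minimal sketch of the same idea using standard DMVs (the clerk column names assume SQL Server 2005/2008; SQL Server 2012 consolidates them into pages_kb):

-- Memory clerks: which components consume memory inside SQL Server
SELECT TOP (10) [type], SUM(single_pages_kb + multi_pages_kb) AS memory_kb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY memory_kb DESC

-- Buffer pool usage per database (8 KB pages)
SELECT DB_NAME(database_id) AS database_name, COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC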

I hope you now have a better idea about memory configuration and utilization in SQL Server. Please feel free to leave your comments.



Tuesday 6 March 2012

How to log off remote desktop users from a remote machine ?

When you try to log on to a remote server in an emergency to perform some task, it may throw the error "The terminal server has exceeded the maximum number of allowed connections". On many servers you may not have the rights to log on with the /console or /admin option. I have faced this issue many times, and the only option was to get help from the infrastructure team to log off users from the server.

I was searching for a solution and found an option to log off remote desktop users from another machine (in the same domain) through the command line. The first step is to list the current logon sessions on the remote computer. For that we can use the following command from the command prompt; replace xx.xx.xx.xxx with your server's IP address.

C:\>quser /SERVER:xx.xx.xx.xxx


This will list the current logon sessions, as in the screenshot below (I have removed the IP address and login names). You can see session IDs 0, 1 and 2 for the three available sessions; this is the ID we will use in the next step to log off a user.

To log off a session listed in the previous step, use the following command. Here I am logging off session 0.

C:\> logoff /SERVER:xx.xx.xx.xxx 0 /V

Now you will be able to connect to the server via Remote Desktop.





Sunday 4 March 2012

Resource Governor Configuration

Resource Governor (RG) is a technology introduced in SQL Server 2008 to manage the CPU and memory resources that an incoming session can use. This feature is available only in the Enterprise, Developer and Evaluation editions.

RG can be configured to restrict the resource utilization of user sessions. You can use it not only to limit maximum usage but also to guarantee minimum resources for each connection. This is useful in scenarios such as server consolidation, where multiple application databases are hosted on the same SQL Server instance. As DBAs it is our responsibility to guarantee minimum resources for the connections coming from each application, and RG helps you slice the resources between applications based on demand.

Below are the three concepts required to understand RG.


Resource Pool: You can think of a resource pool as a slice of the SQL instance's CPU and memory; it is like dividing the CPU power and memory into multiple pools. A resource pool is defined by specifying a minimum CPU percentage, maximum CPU percentage, minimum memory percentage and maximum memory percentage. By default min and max are 0 and 100. There are two built-in resource pools, default and internal. The internal pool is reserved for SQL Server's internal processes and has precedence over every other pool, so it can override the settings of any other pool. We can define 18 user-defined pools, making a total of 20 resource pools. Let us assume we host the databases of three applications (App1, App2, App3) on the same SQL Server instance and have to give a minimum of 35%, 40% and 10% of the resources to App1, App2 and App3 respectively. Let us define three resource pools, RP1, RP2 and RP3, for App1, App2 and App3 respectively.

The rule of thumb is that the sum of the Min% of all pools should not exceed 100. In a nutshell, at least under heavy load, the effective maximum percentage of a resource that a specific pool can consume is the maximum for that pool less the sum of the minimums defined for all of the other pools, and the shared percentage is the effective maximum less the minimum. The T-SQL for defining the pools is given below.
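A minimal sketch for the RP1/RP2/RP3 split described above (the maximum values here are assumed):

CREATE RESOURCE POOL RP1 WITH (MIN_CPU_PERCENT = 35, MAX_CPU_PERCENT = 100, MIN_MEMORY_PERCENT = 35, MAX_MEMORY_PERCENT = 100)
CREATE RESOURCE POOL RP2 WITH (MIN_CPU_PERCENT = 40, MAX_CPU_PERCENT = 100, MIN_MEMORY_PERCENT = 40, MAX_MEMORY_PERCENT = 100)
CREATE RESOURCE POOL RP3 WITH (MIN_CPU_PERCENT = 10, MAX_CPU_PERCENT = 100, MIN_MEMORY_PERCENT = 10, MAX_MEMORY_PERCENT = 100)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO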
Workload Group: Workload groups are defined under a resource pool, and incoming sessions are routed to a workload group, which in turn uses the underlying resource pool's settings. With a resource pool we can control CPU and memory, while a workload group beneath the pool lets us add additional controls for different sets of users sharing the same pool. Let us assume that for one application, App2, there are two types of users: OLTP users and report users. We can give the OLTP users precedence over the report users within the same resource pool RP2 by using workload groups. Just as there are system-defined resource pools, there are two built-in workload groups, internal and default. A workload group can be defined with the parameters below.


  • IMPORTANCE: Importance is not really a priority; it is a relative importance that allows the scheduler to balance the work of individual requests in the workload group. This setting is effective only among the workload groups within a specific resource pool. Possible values are LOW, MEDIUM and HIGH; MEDIUM is the default.


  • MAX_DOP: Defines the maximum degree of parallelism allowed for any query running under this workload group. The default is 0, which falls back to the global setting defined through sp_configure. When this value is non-zero, the scheduler prevents a parallel plan from exceeding this number of parallel threads while serving the request.


  • REQUEST_MAX_MEMORY_GRANT_PERCENT: Defines the percentage of the total memory available in the underlying resource pool that can be granted to any single request in the workload group. The default value is 25. If a request requires more memory than this, then for the (system-defined) default workload group the server grants the memory as long as it is available in the pool; otherwise it throws a timeout error, number 8645. For a user-defined workload group the server tries lowering the DOP until the memory limit is satisfied; if the requirement still cannot be met, or the parallelism is already 1, the system throws error 8657. Increasing this parameter allows a single query to consume a larger amount of memory, while reducing it allows more memory-bound queries to run concurrently.


  • REQUEST_MEMORY_GRANT_TIMEOUT_SEC: Specifies the number of seconds a query can wait for a memory grant. The default is 0, which means the wait time is determined from the cost of the query. If the wait exceeds the setting, the system throws error 8651.
  • GROUP_MAX_REQUESTS: Specifies the number of concurrent requests that can run at any time within the workload group. The default is 0, which means no limit.
  • REQUEST_MAX_CPU_TIME_SEC: Specifies the CPU time any single query within the group can take before a CPU Threshold Exceeded event is raised. Unlike the query governor cost limit (sp_configure), which refuses to run a query that exceeds the estimated cost, this setting does not stop the query; instead, the trace event is raised the first time a query exceeds the limit. The default value is 0, which means there is no limit and no trace event is raised.
The T-SQL to define a workload group is given below.
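A minimal sketch, with assumed group names, routing App2's OLTP and report users to the RP2 pool defined above:

CREATE WORKLOAD GROUP WG_App2_OLTP WITH (IMPORTANCE = HIGH, REQUEST_MAX_MEMORY_GRANT_PERCENT = 25, MAX_DOP = 0) USING RP2
CREATE WORKLOAD GROUP WG_App2_Report WITH (IMPORTANCE = LOW, REQUEST_MAX_CPU_TIME_SEC = 300) USING RP2
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO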

 

Classifier Function: The classifier function is a user-defined function that resides in the master database; it routes each user request to the appropriate workload group, and in turn to the underlying resource pool, by returning the workload group name. When a connection is established, the classifier function comes into the picture after the authentication process and logon triggers. There are several ways to determine the workload group of an incoming request: by login account, Windows group, server role, application name, host name, time of day, day of the week or month, or your own custom classification logic using LOGIN_NAME(), HOST_NAME(), APP_NAME(), CONNECTIONPROPERTY(), LOGINPROPERTY(), IS_MEMBER(), DATEPART() and so on. When writing the classifier function we should be careful to make it as accurate and efficient as possible, because it has the potential to be the biggest bottleneck on your system: it is executed for every new connection to the server.

Below is the T-SQL to define the classifier function.
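A minimal sketch using APP_NAME() and the assumed group names from the previous sketch (the function name dbo.ResourceClassifier matches the one used when enabling RG below):

USE master
GO
CREATE FUNCTION dbo.ResourceClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    -- Route sessions to a workload group based on the connecting application name
    DECLARE @GroupName SYSNAME
    IF APP_NAME() LIKE 'App2 OLTP%'
        SET @GroupName = N'WG_App2_OLTP'
    ELSE IF APP_NAME() LIKE 'App2 Report%'
        SET @GroupName = N'WG_App2_Report'
    ELSE
        SET @GroupName = N'default'   -- fall back to the default group
    RETURN @GroupName
END
GO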
The flowchart below gives a better understanding of Resource Governor; the red blocks are the internal pool and group.

Finally, to enable Resource Governor:


ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.ResourceClassifier);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE


Associated dynamic management views:
  • sys.dm_resource_governor_workload_groups
  • sys.dm_resource_governor_resource_pools
By joining sys.dm_exec_sessions with sys.dm_resource_governor_workload_groups on the group_id column, we can identify each session's workload group.
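A quick sketch of that join:

SELECT s.session_id, s.login_name, g.name AS workload_group
FROM sys.dm_exec_sessions s
JOIN sys.dm_resource_governor_workload_groups g ON s.group_id = g.group_id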


Limitations of Resource Governor:

Resource Governor governs how much memory is assigned to a query at runtime, NOT how much memory is used by the query's data. In other words, a query that needs only a small amount of memory to compile and run might bring tons of data into memory, and Resource Governor cannot do anything about it. In short, Resource Governor has no control over the buffer pool; it controls only the working memory for a query, the memory used for sorting, hashing, locks and so on.

Resource Governor also has no control over the I/O system. On many systems I/O is the most prevalent bottleneck, and the current implementation of Resource Governor does not provide any control over I/O activity.