Detailed Database and DB File Sizes Info

The following code provides information about database and related file/filegroup sizes, along with the physical file path location.

-- Offline DBs are not considered --
If Object_ID('tempdb..#tmpFileSizes') is not null
BEGIN
Drop Table #tmpFileSizes
END
Create table #tmpFileSizes (DBName varchar(100), DBFileGroup varchar(200), FileID int, DBLogicalFileName varchar(200), DBPhysicalFilePath varchar(max))
GO
EXEC sp_msforeachdb 'USE [?]; Insert into #tmpFileSizes select DB_NAME(), isnull(B.GroupName,''LOG''), A.FileID, A.Name, A.FileName from sys.sysfiles A LEFT OUTER JOIN sys.sysfilegroups B on A.GroupID = B.GroupID'

---- This query gives individual file sizes per filegroup for each DB ----
Select DB_Name(database_id) as DBName, B.DBFileGroup, B.DBLogicalFileName, B.DBPhysicalFilePath,
Sum(size_on_disk_bytes)/1024/1024 as [File_SizeIN_MB], NULL as [FG_SizeIN_MB], NULL as [DB_SizeIN_MB]
from sys.dm_io_virtual_file_stats(null,null) A INNER JOIN #tmpFileSizes B on
DB_Name(A.database_id) = B.DBName and A.File_id = B.FileID
group by DB_Name(database_id), B.DBLogicalFileName, B.DBFileGroup, B.DBPhysicalFilePath
Union
/* This query groups by filegroup and gives the total size per filegroup. A row is produced only when the filegroup contains more than one file; otherwise the individual file size in the [File_SizeIN_MB] column already equals the filegroup size. */
Select DB_Name(database_id) as DBName, B.DBFileGroup + ' Total FG Size', NULL as DBLogicalFileName, NULL as DBPhysicalFilePath,
NULL as [File_SizeIN_MB], Sum(size_on_disk_bytes)/1024/1024 as [FG_SizeIN_MB], NULL as [DB_SizeIN_MB]
from sys.dm_io_virtual_file_stats(null,null) A INNER JOIN #tmpFileSizes B on
DB_Name(A.database_id) = B.DBName and A.File_id = B.FileID
group by DB_Name(database_id), B.DBFileGroup
having count(B.DBLogicalFileName) > 1
Union
---- This query gives the overall database size ----
Select DB_Name(database_id) + ' Total DB Size' as DBName, NULL as DBFileGroup, NULL as DBLogicalFileName, NULL as DBPhysicalFilePath,
NULL as [File_SizeIN_MB], NULL as [FG_SizeIN_MB], Sum(size_on_disk_bytes)/1024/1024 as [DB_SizeIN_MB]
from sys.dm_io_virtual_file_stats(null,null) A INNER JOIN #tmpFileSizes B on
DB_Name(A.database_id) = B.DBName and A.File_id = B.FileID
group by DB_Name(database_id)
order by DBName, DBFileGroup, DBLogicalFileName desc, [FG_SizeIN_MB]

If Object_ID('tempdb..#tmpFileSizes') is not null
BEGIN
Drop Table #tmpFileSizes
END


Database and DB File sizes with Rollup


The query below provides size information for each database and its individual files, along with rollup totals.

select case when (Grouping(db_name(database_id))=1) then 'ALL DB''s' else DB_NAME(database_id) end as DB,
case when (Grouping((cast (file_id as varchar(3))))=1)
then isnull(DB_NAME(database_id),'ALL') +' DB' else (cast(file_id as varchar(3))) end as DBFiles,
sum(size_on_disk_bytes)/1024 as [SizeIN_KB]
from sys.dm_io_virtual_file_stats(null,null)
group by db_name(database_id),cast(file_id as varchar(3)) with rollup

Creation of procs, UDFs and views

While creating stored procedures and functions, the parser only checks the syntax, and once the syntax passes, the procedure is created. This does not necessarily mean the procedure will work, because the parser does not verify that all the referenced objects exist. For example, the columns and/or tables referenced inside the procedure might not exist, but as long as the T-SQL syntax is correct, creation of the procedure will succeed (this behavior is known as deferred name resolution).

Only when the procedure is executed does SQL Server try to generate an execution plan, and that is where it discovers the mistake and throws an error. The same applies to user-defined functions.

But it is different with views: if you mistype a column or table name, creation of the view fails. I think the reason is that, even though a view does not hold any data, its definition is bound to the referenced objects at creation time, so this step requires validating both the syntax and the table references.
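A quick way to see this behavior (a sketch; dbo.NoSuchTable is a made-up name that is assumed not to exist in your database):

```sql
-- Succeeds: deferred name resolution only checks syntax at CREATE time.
CREATE PROCEDURE dbo.usp_Demo
AS
BEGIN
    SELECT Col1 FROM dbo.NoSuchTable;  -- table does not exist
END
GO

-- Fails at runtime with "Invalid object name 'dbo.NoSuchTable'".
EXEC dbo.usp_Demo;
GO

-- Fails at CREATE time: views validate all referenced objects immediately.
CREATE VIEW dbo.vw_Demo
AS
SELECT Col1 FROM dbo.NoSuchTable;
GO
```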

Need for TempDB Log File

While working on an issue, it struck me: why do we need a log file for TempDB? Generally, we need the log file during crash recovery to roll back or roll forward transactions based on whether they were committed at that time; this keeps the database transactionally consistent. We also need the log file to take log backups, which enables point-in-time restores. (The log file also records many other events that happen within a user database.)

But neither of these points applies to TempDB, since it is recreated every time the instance restarts. After some research, I found that TempDB still needs a log file for rollbacks. Apparently, I was not thinking clearly (it always makes sense once you know the answer). I also learnt that logging in TempDB is different from user databases: it is lighter and faster.

So, if we have to create a temporary working table, it is much better to create it as a temp table in TempDB than as a permanent table in a user database, since the reduced logging makes it faster.
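For example (a sketch; the #Work table and its columns are hypothetical):

```sql
-- Tables prefixed with # are created in TempDB, benefit from its
-- lighter-weight logging, and are dropped automatically when the
-- session ends.
CREATE TABLE #Work
(
    ID   int,
    Name varchar(100)
);

INSERT INTO #Work (ID, Name) VALUES (1, 'Sample');

DROP TABLE #Work;  -- optional; happens automatically at session end
```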

Script to Compress data across all tables in a database


/* Create a temp table to hold the compression information. Sizes are reported in KB. */
Create Table #TestCompression
(ObjectName sysname,
SchemaName sysname,
indexid int,
partitionNumber int,
size_with_current_compression_setting int,
size_with_requested_compression_setting int,
sample_size_with_current_compression int,
sample_size_with_requested_compression int)

/* Generate INSERT...EXEC statements that capture the estimated compression savings into the temp table, so we can query individual tables for savings. We are using PAGE compression here; run the generated statements from the output. */

USE [<<Database Name>>]
GO
Select 'INSERT INTO #TestCompression
Execute sp_estimate_data_compression_savings ''' +
B.Name + ''',''' + A.Name + ''',NULL,NULL,''PAGE'''
from sys.objects A INNER JOIN
sys.schemas B on A.Schema_id = B.Schema_id
where A.type = 'U' and A.name <> 'dtproperties'

/* Sum the current and requested compression size columns to see how much compression we will get. We can also query individual tables to see which ones yield better compression; in this example, I am finding only the total savings. */
Select
sum(size_with_current_compression_setting) as [AsofNOWSizeInKB],
sum(size_with_requested_compression_setting) as [RequestedSizeinKB]
from #TestCompression

/* Finally, generate the ALTER TABLE statements that apply the compression, and run the generated output. The script below is for PAGE compression; replace PAGE with ROW for row compression. */
USE [<<Database Name>>]
GO
Select
'ALTER TABLE [' + B.Name + '].[' + A.Name
+ '] REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
from sys.objects A INNER JOIN
sys.schemas B on A.Schema_id = B.Schema_id
where A.type = 'U' and A.name <> 'dtproperties'

Clearing Plan Cache


SYNTAX :  DBCC FREEPROCCACHE

We can also clear the plan cache for a specific database or query.

To remove all the plans related to a database (note: DBCC FLUSHPROCINDB is an undocumented command)
SYNTAX : DBCC FLUSHPROCINDB(<databaseid>)
EXAMPLE : DBCC FLUSHPROCINDB(7)

To remove the plan for a specific query, pass the plan_handle as a parameter.

SYNTAX : DBCC FREEPROCCACHE(<plan_handle>)
EXAMPLE : DBCC FREEPROCCACHE(0x05000700210F020740A1DBAA000000000000000000000000)
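To find the plan_handle to pass in, you can query the plan cache DMVs first (a sketch; the LIKE filter text is just a placeholder for part of the query you are looking for):

```sql
-- Locate the plan_handle for a cached query, then free just that plan.
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%YourQueryText%';

-- Then pass the returned handle to: DBCC FREEPROCCACHE(<plan_handle>)
```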