Custom SQL Job Failure Notifications

Job failure notifications are a common way to alert someone when a SQL Agent job fails, and setting them up is fairly straightforward. However, sometimes you need custom logic around when these notifications are sent. Since job history is logged in the msdb database, we can query those tables and build custom solutions. One requirement I had recently was to send a notification email only if the job failed 5 successive times.

To do this, whenever the job fails it should execute a step (part of the same job) that checks the final outcome (step_id = 0) of the last 4 runs and, if they all failed, sends an email notification. Combined with the current failure, that makes 5 successive failures.

The code below shows how this can be done.

DECLARE @JobID uniqueidentifier
SET @JobID = CONVERT(uniqueidentifier, $(ESCAPE_SQUOTE(JOBID)))

-- run_status 0 = Failed; if the MAX over the last 4 final outcomes is 0, all 4 failed
IF ( SELECT MAX(run_status)
     FROM (SELECT A.run_status
           FROM msdb.dbo.sysjobhistory A
           INNER JOIN msdb.dbo.sysjobs B ON A.job_id = B.job_id
           WHERE A.step_id = 0 AND B.job_id = @JobID
           ORDER BY A.instance_id DESC
           OFFSET 0 ROWS FETCH FIRST 4 ROWS ONLY) A) = 0

BEGIN
EXEC msdb.dbo.sp_send_dbmail 
    @profile_name = '<<SQLMailProfile>>', 
    @recipients = 'youremail@email.com', 
    @body = '<<email body>>', 
    @subject = '<<email subject>>' ; 
END
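To wire this check into the job, the failing step's on-fail action can jump to the notification step instead of quitting. A minimal sketch using the msdb job procedures (the job and step names here are hypothetical):

```sql
-- Assumes a job named 'NightlyLoad' whose step 1 does the real work.
-- Add the failure-check step, then point step 1's on-fail action at it.
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'NightlyLoad',
    @step_name = N'Notify after 5 failures',
    @step_id = 2,
    @subsystem = N'TSQL',
    @command = N'/* the failure-check script above */',
    @on_success_action = 1,   -- quit reporting success
    @on_fail_action = 1;

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'NightlyLoad',
    @step_id = 1,
    @on_success_action = 1,   -- quit reporting success
    @on_fail_action = 4,      -- on failure, go to step...
    @on_fail_step_id = 2;     -- ...the notification check
```

With this wiring, a successful run quits immediately, while a failure falls through to the check, which only emails when the failure streak reaches 5.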

 

Query to search for a particular value in database

The following query should help search for a particular value in the database. This is particularly useful when you have to troubleshoot an issue, have no clue where the data might be, and do not have access to launch a trace session. This script helped me a few times when I inherited a database (with no documentation or guidance) and had to reverse engineer the process with very limited options.

In the example below, we are searching for the value “Toyota”.

CREATE TABLE #output (schema1 VARCHAR(500), tablename VARCHAR(500), column1 VARCHAR(500), Columnvalue VARCHAR(2000))

SELECT TABLE_NAME, COLUMN_NAME, TABLE_SCHEMA,
       'Select top 1 ''' + TABLE_SCHEMA + ''',''' + TABLE_NAME + ''',''' + COLUMN_NAME + ''','
       + QUOTENAME(COLUMN_NAME) + ' as [ColumnValue] from '
       + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
       + ' WITH (NOLOCK) where ' + QUOTENAME(COLUMN_NAME) + ' like ''%Toyota%''' AS SQL1
INTO #Test
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char','varchar','text','nchar','nvarchar','ntext')

DECLARE @TABLE_NAME VARCHAR(500)
DECLARE @COLUMN_NAME VARCHAR(500)
DECLARE @TABLE_SCHEMA VARCHAR(500)
DECLARE @SQL1 VARCHAR(max)

DECLARE db_cursor CURSOR FOR
SELECT TABLE_NAME,COLUMN_NAME,TABLE_SCHEMA,SQL1
FROM #test

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @TABLE_NAME, @COLUMN_NAME,@TABLE_SCHEMA,@SQL1

WHILE @@FETCH_STATUS = 0
BEGIN
--PRINT @SQL1
INSERT INTO #output
EXEC (@SQL1)

FETCH NEXT FROM db_cursor INTO @TABLE_NAME, @COLUMN_NAME,@TABLE_SCHEMA,@SQL1
END

CLOSE db_cursor
DEALLOCATE db_cursor

SELECT * FROM #output

DROP TABLE #Test
DROP TABLE #output

SQL 2016 SP1

The recently released Service Pack 1 (SP1) for SQL 2016 includes some very big enhancements. The biggest of all is bringing premium features such as In-Memory OLTP and columnstore to Standard edition. Of course there are some restrictions, but it is definitely good news for people using Standard edition or looking to upgrade.

One interesting enhancement is support for the “CREATE OR ALTER” syntax when creating objects such as functions, procedures, and triggers.

Sample Syntax :

Create or Alter procedure usp_test
as
begin
select 1 as id
end
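Prior to 2016 SP1, a common equivalent pattern (one sketch among several) was to create a stub when the procedure does not exist and then always ALTER it, so the script stays rerunnable:

```sql
-- Create a stub only if the procedure is missing, then ALTER in the real body
IF OBJECT_ID('dbo.usp_test', 'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.usp_test AS BEGIN SELECT 1 AS id END')
GO
ALTER PROCEDURE dbo.usp_test
AS
BEGIN
    SELECT 1 AS id
END
```

CREATE OR ALTER collapses this into a single statement and also preserves existing permissions on the object, which a DROP/CREATE approach would lose.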

XACT_ABORT – Why it should be set to ON

Certain errors cannot roll back a transaction. A compile error, such as referencing a table that does not exist, aborts the batch before the CATCH block can run, leaving the transaction open. That is why SET XACT_ABORT ON should be included in the code: it guarantees that an unhandled error rolls back the transaction instead of leaving it open. In the example below, in the first scenario the session 1 query errors out leaving an open transaction, which blocks the session 2 query (under read committed isolation). In the second scenario, with XACT_ABORT ON, the transaction is rolled back automatically.

--Set up Test Table
Create Table Test(id int,Name varchar(20))

--Scenario 1 with XACT_ABORT OFF
--Session 1
Begin
Set XACT_ABORT OFF
Begin Try
Begin Tran
Insert into test values(1,'test')
Select * from Idonotexist
If (@@ERROR=0)
Commit
End Try
Begin Catch
If (@@TranCount>0)
Rollback
End Catch
End

--Session 2
select * from test

--Scenario 2 with XACT_ABORT ON
--Session 1
Begin
Set XACT_ABORT ON
Begin Try
Begin Tran
Insert into test values(1,'test')
Select * from Idonotexist
If (@@ERROR=0)
Commit
End Try
Begin Catch
If (@@TranCount>0)
Rollback
End Catch
End

--Session 2
select * from test
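To confirm that scenario 1 really left an orphaned transaction, you can check from a third session before cleaning up. Both of these are standard diagnostics:

```sql
-- Report the oldest active transaction in the current database
DBCC OPENTRAN;

-- List sessions that still hold an open user transaction
SELECT s.session_id, s.login_name, t.transaction_id
FROM sys.dm_tran_session_transactions t
JOIN sys.dm_exec_sessions s ON s.session_id = t.session_id
WHERE t.is_user_transaction = 1;
```

In scenario 1 the failing session shows up here with an open transaction; in scenario 2 it does not.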

--Clean Up
Drop Table Test

Not In vs Exists in TSQL

When using “NOT IN” to check for data between source and target tables, beware of NULLs in the target table. A NULL in the subquery makes every NOT IN comparison evaluate to UNKNOWN, so no rows are returned. The correct output depends on the ANSI_NULLS setting of the session and/or the way the query is written (using NOT EXISTS; NOT IN with an IS NOT NULL filter; or NOT IN with ANSI_NULLS OFF).

Try the below example :

declare @TestA table (sno int)
declare @TestB table (sno int)

INSERT INTO @TestA(sno)
Values (1),(3)
INSERT INTO @TestB(sno)
Values (1),(2),(Null)

-- Case#1 : Not In with Ansi_Nulls turned ON
Select * from @TestA where sno not in
(Select sno from @TestB)

-- Case#2 : Not Exists with Ansi_Nulls turned ON
Select * from @TestA A where not exists
(Select 1 from @TestB B Where A.sno=B.Sno)

-- Case#3 : Not In with Ansi_Nulls turned OFF
Set ANSI_NULLS OFF
Select * from @TestA where sno not in
(Select sno from @TestB)
Set ANSI_NULLS ON

-- Case#4 : Not In with Ansi_Nulls turned ON
Select * from @TestA where sno not in
(Select sno from @TestB where sno is not null)

Table Variable vs Temp Table

One of the fundamental differences between a temp table and a table variable is the inability to create statistics or indexes on table variables, and this can significantly affect query performance.

Without statistics, the query optimizer has no idea of the data distribution and cannot come up with the best query plan. Let’s consider a simple example:

1. Table Variable: In the execution plan, the estimated number of rows returned is 1, whereas the actual number of rows returned is 10000.

declare @TestTable Table (sno int)
declare @a int = 1
while (@a<=10000)
Begin
insert into @TestTable(sno)
select 1
set @a=@a+1
End
--Get Execution Plan
set Statistics XML ON
select * from @TestTable
set Statistics XML OFF
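A common mitigation, if rewriting to a temp table is not an option, is OPTION (RECOMPILE): the statement is recompiled at execution time, when the table variable’s actual row count is known. A sketch against the table populated above:

```sql
--With RECOMPILE, the estimate reflects the rows actually present in @TestTable
select * from @TestTable option (recompile)
```

The trade-off is a compile on every execution, so this suits infrequent or expensive queries rather than hot OLTP paths.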


2. Temp Table: In the execution plan, both the estimated and the actual number of rows returned are 10000.

Create Table #TestTable (sno int)
GO
insert into #TestTable(sno)
select 1
Go 10000
create statistics CS_TestTable on #TestTable(sno)
--Get Execution Plan
set Statistics XML ON
select * from #TestTable
set Statistics XML OFF


Cannot update primary key in InMemory Tables

In-memory tables were introduced in SQL 2014 and are much improved in SQL 2016.

The surface area has grown in SQL 2016 compared to 2014: foreign key constraints can be defined between in-memory tables, indexes can include NULLable columns, tables can be altered, and so on.

There are still some restrictions on what you can do with SQL 2016 in-memory tables, and one of them is modifying the primary key of a table. Typically we should not be modifying a table’s primary key, but if there is ever a need to do that, it cannot be done on a SQL 2016 in-memory table.

USE [Master]
Go
CREATE DATABASE [InMemoryDB]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'InMemoryDB', FILENAME = N'C:\Temp\InMemoryDB.mdf' , SIZE = 8192KB , FILEGROWTH = 65536KB ),
FILEGROUP [InMemoryDB] CONTAINS MEMORY_OPTIMIZED_DATA
( NAME = N'InMemoryFile', FILENAME = N'C:\Temp\InMemoryFile' )
LOG ON
( NAME = N'InMemoryDB_log', FILENAME = N'C:\Temp\InMemoryDB_log.ldf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
GO

USE InMemoryDB
Go
Create Table InMemoryTable(Sno int not null primary key nonclustered hash with (bucket_count=1000),sname varchar(20))
with (memory_optimized=ON,Durability=Schema_ONLY)

Insert into InMemoryTable
Values (1,'SomeName')

--Errors out
Update InMemoryTable set sno=2 where Sno=1

The UPDATE fails with an error stating that updating primary key columns of a memory-optimized table is not supported.
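If the key truly must change, the usual workaround (a sketch; it assumes no other rows or foreign keys depend on the old value) is to delete the row and re-insert it with the new key inside one transaction. The WITH (SNAPSHOT) hints are one way to satisfy the isolation requirements for accessing a memory-optimized table inside an explicit transaction:

```sql
BEGIN TRAN
    DELETE FROM InMemoryTable WITH (SNAPSHOT) WHERE Sno = 1
    INSERT INTO InMemoryTable WITH (SNAPSHOT) VALUES (2, 'SomeName')
COMMIT
```

Alternatively, the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT can be set to ON to avoid hinting each statement.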

Date Range along with Week Number in SQL

I needed to get a work-week date range along with the corresponding week number in SQL, and below is the T-SQL code I came up with.
NOTE: I had to do this on SQL 2008, so I could not use the window functions available in SQL 2012 and later.

declare @StartDate date
declare @EndDate date
set @StartDate ='12/31/2015'
Set @EndDate='12/31/2016'

;with CTE as (Select datename(dw,dateadd(day,number,@StartDate)) as [Day],
cast(datename(week,dateadd(day,number,@StartDate)) as int) as [WeekNo],
dateadd(day,number,@StartDate) as [Date],number
from master.dbo.spt_values
where type='P' and number>0
and datename(dw,dateadd(day,number,@StartDate)) not in ('Saturday','Sunday')
and dateadd(day,number,@StartDate)<=@EndDate)

Select convert(varchar(20),min(Date),106)+ ' - ' +convert(varchar(20),max(Date),106) as [Date Range],
'Week '+cast(Weekno as varchar) as [WeekNo]
from Cte
group by Weekno
order by cast(Weekno as int)

TSQL Functions in SQL 2012

SQL 2012 introduced several window functions, and the SQL code below gives an idea of how they work. Some of them, such as row_number(), rank(), dense_rank(), and ntile(), as well as GROUPING SETS, work in older versions too.

Create Table WindowFunctions(id int,Sname varchar(20),Scity varchar(20))
GO
Insert into WindowFunctions
Values(1,'Martin','Dallas'),
(2,'Martin','Dallas'),
(3,'Martin','Houston'),
(4,'Martin','Austin'),
(5,'Sheri','Dallas'),
(6,'Sheri','Dallas'),
(7,'Sheri','Houston'),
(8,'Sheri','Austin')

--Row Number():
Select *,Row_Number() Over (Partition by Sname Order by Scity) as [RowNumber] from WindowFunctions
--Rank():
Select *,Rank() Over (Partition by Sname Order by SCity) as [Rank] from WindowFunctions
--Dense Rank():
Select *,Dense_Rank() Over (Partition by Sname Order by Scity) as [DenseRank] from WindowFunctions
--Lag():
Select *,LAG(id) over (Partition by Sname Order by Scity) as [LAG_ID] from WindowFunctions
--Lead():
Select *,LEAD(id) over (Partition by Sname Order by Scity) as [LEAD_ID] from WindowFunctions
--First Value():
Select *,FIRST_VALUE(id) Over (Partition by Sname Order by Scity ROWS between 1 preceding and 2 following) as FirstValue from WindowFunctions
--Last Value():
Select *,LAST_VALUE(id) Over (Partition by Sname Order by Scity ROWS between current row and 2 following) as LastValue from WindowFunctions
--Sum():
Select *,SUM(id) OVER (Partition by Sname Order by Scity ROWS between current row and 2 following) as [Sum] from WindowFunctions
--Grouping Sets():
Select count(id) as [Count_of_ID],Sname,Scity from WindowFunctions GROUP BY Grouping Sets ((sname,scity),(Sname),(Scity))
--Max():
Select *,MAX(id) OVER (Partition by Sname Order by Scity ROWS between unbounded preceding and unbounded following) as [MaxValue] from WindowFunctions
--Min():
Select *,Min(id) OVER (Partition by Sname Order by Scity ROWS between unbounded preceding and unbounded following) as [MinValue] from WindowFunctions
--NTILE(): distributes the rows into the specified number of roughly equal groups
Select *,NTILE(3) OVER (Order by Scity) as [DistributionRange] from WindowFunctions