InMemory Tables Pointers

Quick Notes on InMemory Tables:

1. An InMemory table cannot be truncated (TRUNCATE TABLE is not supported).
2. Identity values cannot be reseeded on InMemory table columns.
3. Indexed views cannot be created on InMemory tables (regular views can be).
4. DBCC CHECKTABLE does not work on InMemory tables, and DBCC CHECKDB skips them.
5. There is no need to defragment InMemory table indexes.
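A hedged sketch of the first restriction, assuming a memory-optimized table named MyInMemoryTable (hypothetical name) already exists:

```sql
-- TRUNCATE is not supported on memory-optimized tables; this statement fails
TRUNCATE TABLE dbo.MyInMemoryTable;

-- Workaround: delete all rows instead
DELETE FROM dbo.MyInMemoryTable;
```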

Update (08/03/2016): A few more pointers on InMemory Tables:

  1. The identity increment cannot be more than 1 (only IDENTITY(1,1) is supported).
  2. InMemory tables cannot be used in cross-database transactions. However, InMemory table variables can be used in cross-database transactions.

Example: The below query does not work for an InMemory table.

Use TargetDB
Insert into MyInMemoryTable
Select * from SourceDB.dbo.MyDiskBasedTable

However, we can define an InMemory table variable (by first creating a memory-optimized table type and then declaring the table variable with that type) and use it instead:

Use TargetDB
Create Type MyInMemoryDataType as Table (<<list the columns>>) with (Memory_Optimized=ON)
declare @InMemoryTableVariable MyInMemoryDataType
Insert into @InMemoryTableVariable
Select * from SourceDB.dbo.MyDiskBasedTable


InMemory Tables – Hash Index

A couple of quick notes about InMemory objects:

  • Even with the “Schema_and_Data” durability option, the index data is not persisted to disk. Indexes are rebuilt during instance startup. This is something to be aware of, as it increases database recovery time.
  • All InMemory tables need at least one index. A primary key is required only for the “Schema_and_Data” durability option; it is not necessary for “Schema_Only” durability.
  • Only two types of indexes can be created on InMemory tables – 1. Hash and 2. Range.
  • With a hash index, the hash value of the column is calculated and the memory address of the row is stored in the hash bucket. The same data always produces the same hash value; however, two different values can sometimes produce the same hash value (a hash collision).

Hash Match Example:
Create Table InMemoryHash
(
    sno int not null index IX_Sno nonclustered hash with (bucket_count=100)
)
with (Memory_Optimized=ON, Durability=Schema_Only)

Insert into InMemoryHash
Values (1),(2)
--check the maximum chain length
select * from sys.dm_db_xtp_hash_index_stats where object_id=object_id('InMemoryHash')


Insert into InMemoryHash
Values (4),(49)
--check the maximum chain length
select * from sys.dm_db_xtp_hash_index_stats where object_id=object_id('InMemoryHash')

The example shows that 4 and 49 produce the same hash value and hence are stored in the same bucket. The hash bucket points to the newest row, and each row points to the next older row with the same hash value. So, if there are many hash collisions, long row chains form, and this can lead to performance issues because the engine must traverse the entire chain to find the data.
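The chain growth can be seen more directly by selecting just the relevant columns of the same DMV (these columns exist in sys.dm_db_xtp_hash_index_stats):

```sql
-- Bucket usage and chain lengths for the hash index on InMemoryHash
SELECT total_bucket_count,
       empty_bucket_count,
       avg_chain_length,
       max_chain_length
FROM sys.dm_db_xtp_hash_index_stats
WHERE object_id = OBJECT_ID('InMemoryHash');
```

After inserting 4 and 49, max_chain_length should grow even though the bucket count stays the same.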

  • We need to specify the bucket_count for hash indexes. Bucket counts are rounded up to the next power of two. So, if you specify bucket_count=100 in the table DDL, the actual bucket count created is 128 (2^7). If you specify bucket_count=1000000, the actual bucket count created is 1048576 (2^20).
  • Range indexes are implemented similarly to regular rowstore indexes, but with some differences. I will cover range indexes in another post.
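The power-of-two rounding described above can be sketched in T-SQL (LOG is the natural log in T-SQL, so dividing by LOG(2) gives log base 2; floating-point edge cases near exact powers of two are ignored here):

```sql
DECLARE @requested int = 1000000;

-- Round the requested bucket count up to the next power of two
SELECT POWER(2, CEILING(LOG(@requested) / LOG(2))) AS actual_bucket_count;  -- 1048576 (2^20)
```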

Table Variable vs Temp Table

One of the fundamental differences between a temp table and a table variable is the inability to create statistics or indexes on table variables, which can significantly affect query performance.

Without statistics, the query optimizer has no idea of the data distribution and cannot come up with the best query plan. Let’s consider a simple example:

1. Table Variable: You can see in the execution plan picture below that the estimated number of rows returned is 1, whereas the actual number of rows returned is 10000.

declare @TestTable Table (sno int)
declare @a int = 1
while (@a <= 10000)
Begin
    insert into @TestTable(sno)
    select 1
    set @a = @a + 1
End
--Get Execution Plan
set Statistics XML ON
select * from @TestTable
set Statistics XML OFF

@TableVariable

2. Temp Table: You can see in the execution plan picture that both the estimated and the actual number of rows returned are 10000.

Create Table #TestTable (sno int)
GO
insert into #TestTable(sno)
select 1
Go 10000
create statistics CS_TestTable on #TestTable(sno)
--Get Execution Plan
set Statistics XML ON
select * from #TestTable
set Statistics XML OFF

#TestTable

Monitoring InMemory Objects and Txns – Memory Utilization

For InMemory objects, the data lives strictly in memory. The data may or may not be persisted to disk depending on the table durability option, but that does not change the fact that the data always stays in memory. Monitoring memory utilization is always key for database systems, but it is more important now than ever because InMemory transactions cannot continue if memory is not available (no spilling\paging to disk ever happens with InMemory transactions). Also, InMemory transactions use optimistic concurrency, which is achieved by keeping multiple row versions, and all of these stay in memory as well. With all these complexities, it is very important to monitor memory utilization.

Using Resource Governor, we can create an InMemory resource pool, set up max and min memory limits, and bind the InMemory database to the resource pool, which helps limit the memory usage of InMemory objects and transactions. With appropriate values, this can keep the entire system from going down (it can still bring the InMemory transactions down).
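A minimal sketch of that setup, assuming a database named InMemoryDB and a pool named InMemoryPool (both hypothetical names; the memory percentages are only examples):

```sql
-- Create a resource pool with example min/max memory limits (values are illustrative)
CREATE RESOURCE POOL InMemoryPool
    WITH (MIN_MEMORY_PERCENT = 10, MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database containing the InMemory objects to the pool
EXEC sp_xtp_bind_db_resource_pool N'InMemoryDB', N'InMemoryPool';

-- The binding takes effect only after the database comes back online
ALTER DATABASE InMemoryDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE InMemoryDB SET ONLINE;
```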

The bottom line: with InMemory, you always need to monitor memory utilization.

There are two easy ways to do this:

1. Performance Counters DMV:

select * from sys.dm_os_performance_counters
where counter_name ='XTP Memory Used (KB)' and instance_name = '_Total'

2. Using PerfMon Counters:

MSSQL$<<InstanceName>>::Databases --> XTP Memory Used(KB) --> _Total

This gives us the total memory utilization by all InMemory objects and transactions across the instance. You can also track InMemory memory usage per individual database.
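For a per-database breakdown, the instance_name column of the same DMV holds the database name, so filtering out the _Total row lists usage per database:

```sql
-- XTP memory usage per database (excludes the instance-wide _Total row)
SELECT instance_name AS database_name,
       cntr_value    AS xtp_memory_used_kb
FROM sys.dm_os_performance_counters
WHERE counter_name = 'XTP Memory Used (KB)'
  AND instance_name <> '_Total';
```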


Cannot remove InMemory File Group

To create an InMemory table, we first have to create a memory-optimized filegroup and then add a file of type “filestream” to this filegroup. Only after this can an InMemory table be created.

The catch here (or at least in SQL 2016 RTM\2014 SP1) is that you cannot remove the InMemory file or filegroup even after you have dropped all the InMemory objects. The only option to remove the InMemory file is to script out all the database objects\data, drop the database, and recreate it from the scripts.

So, if you want to try InMemory tables and later decide not to use them, you can drop the objects but not the InMemory file or filegroup.

NOTE: The InMemory file\filegroup can be removed as long as no InMemory table was ever created in the database. The issue happens only once an InMemory table has been created.

USE [Master]
Go
CREATE DATABASE [InMemoryDB]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'InMemoryDB', FILENAME = N'C:\Temp\InMemoryDB.mdf' , SIZE = 8192KB , FILEGROWTH = 65536KB ),
FILEGROUP [InMemoryDB] CONTAINS MEMORY_OPTIMIZED_DATA
( NAME = N'InMemoryFile', FILENAME = N'C:\Temp\InMemoryFile' )
LOG ON
( NAME = N'InMemoryDB_log', FILENAME = N'C:\Temp\InMemoryDB_log.ldf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
GO
USE InMemoryDB
Go
Create Table InMemoryTable
(
    Sno int not null primary key nonclustered hash with (bucket_count=1000),
    sname varchar(20)
)
with (memory_optimized=ON, Durability=Schema_ONLY)

--Let's drop InMemory Table
Drop Table InMemoryTable
--Try removing the InMemory File\FileGroup
ALTER DATABASE [InMemoryDB] REMOVE FILE [InMemoryFile]
ALTER DATABASE [InMemoryDB] REMOVE FILEGROUP [InMemoryDB]
GO

You will get the below error message:
Capture