In-Memory Optimization in SQL Server: A Simple Experiment

Introduction

In one of my previous articles, I briefly introduced In-Memory Optimization in SQL Server. In this article, we will conduct a simple experiment using this technology and discuss its benefits. The experiment is organized in the following steps:

  1. Create the In-Memory processing enabled database
  2. Create the tables (disk-based, memory-enabled)
  3. Populate the tables with sample data
  4. Prepare the processing code for three scenarios (execution modes):
    • Disk-based processing
    • In-Memory optimization
    • In-Memory optimization with natively compiled stored procedure
  5. Run all three execution modes and measure execution times
  6. Discuss findings


Let’s Write Some Code!

OK, enough words, let’s write some code!


Step 1: Create the In-Memory processing enabled database

--Creates a memory-enabled database by adding a memory-optimized filegroup with a specified directory.
--Note: This assumes that directory c:\tmp exists (the subdirectory is created by SQL Server).
CREATE DATABASE InMemDB;
GO
ALTER DATABASE InMemDB
ADD FILEGROUP [inMemOLTP_fg] CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE InMemDB
ADD FILE ( name = [inMemOLTP_fg_dir],
   filename= 'c:\tmp\InMemOLTPDir')
   TO FILEGROUP [inMemOLTP_fg];
GO


Step 2: Create the tables (disk-based, memory-enabled)

--Create sample tables representing a scenario involving 200,000 products
USE InMemDB;
GO

--Create traditional table (disk-based)
CREATE TABLE tblDiskBased(
id INT PRIMARY KEY,
code VARCHAR(50),
descr VARCHAR(250),
price float
);
GO

--Create a durable (data will be persisted) memory-optimized table
--Alternative choice: SCHEMA_ONLY = Data will be lost if the server turns off unexpectedly
CREATE TABLE tblMemOptimized(
id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=400000),
code VARCHAR(50),
descr VARCHAR(250),
price float
)WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_AND_DATA);
GO
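The comment above mentions the SCHEMA_ONLY alternative. As an illustration only (this hypothetical table is not used in the experiment), a non-durable variant of the same table could look like this:

--Sketch: a non-durable memory-optimized table (the definition survives a restart, the data does not)
--Useful for staging or session-state data where losing rows on restart is acceptable
CREATE TABLE tblMemOptimizedStaging(
id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=400000),
code VARCHAR(50),
descr VARCHAR(250),
price float
)WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_ONLY);
GO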


--Create a durable (data will be persisted) memory-optimized table
--Alternative choice: SCHEMA_ONLY = Data will be lost if the server turns off unexpectedly
--Note: This table will be used by the natively-compiled stored procedure
CREATE TABLE tblMemOptimized2(
id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=400000),
code VARCHAR(50),
descr VARCHAR(250),
price float
)WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_AND_DATA);
GO


So far, we have created a memory-enabled database and the data structures. Now it's time to populate the tables with sample data.

Step 3: Populate the tables with sample data

--Populate the tables with sample data (200K records)
--tblMemOptimized
DECLARE @i INT = 0;
WHILE (@i < 200000)
BEGIN
   INSERT INTO tblMemOptimized (id, code, descr, price)
   SELECT @i,
          'Code ' + CAST(@i AS varchar(6)),
          'Description ' + CAST(@i AS varchar(6)),
          10 + (500-10)*RAND();
   SET @i = @i + 1;
END
GO
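As a side note, the WHILE loop above inserts rows one at a time. A set-based insert, sketched below and not part of the original experiment, typically populates the table much faster; the row numbers are generated from a cross join over system catalog views:

--Sketch: set-based alternative for populating tblMemOptimized
--RAND(CHECKSUM(NEWID())) produces a different random price per row
;WITH nums AS (
   SELECT TOP (200000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
   FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO tblMemOptimized (id, code, descr, price)
SELECT n,
       'Code ' + CAST(n AS varchar(6)),
       'Description ' + CAST(n AS varchar(6)),
       10 + (500-10)*RAND(CHECKSUM(NEWID()))
FROM nums;
GO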

--Populate the tables with sample data (200K records)
--tblDiskBased
--tblMemOptimized2
INSERT INTO tblDiskBased
SELECT * FROM tblMemOptimized;
GO

INSERT INTO tblMemOptimized2
SELECT * FROM tblMemOptimized;
GO


All three tables now contain the same data (200K rows each). You can easily verify this!
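A quick sanity check, for example by comparing row counts, could look like this:

--Quick sanity check: each query should return 200000
SELECT COUNT(*) AS DiskBasedRows FROM tblDiskBased;
SELECT COUNT(*) AS MemOptimizedRows FROM tblMemOptimized;
SELECT COUNT(*) AS MemOptimized2Rows FROM tblMemOptimized2;
GO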
Step 4: Prepare the processing code for the three scenarios (execution modes)

We will wrap each code block in a stored procedure for easier execution. The scenario is to decrease each product's price by 5%.

4a. Disk-based processing:

--Disk-based processing
CREATE PROCEDURE updateDiskBased as
BEGIN
UPDATE dbo.tblDiskBased
SET price=price-(price*0.05)
END
GO


4b. In-Memory optimization:

--Memory-Optimized
CREATE PROCEDURE updateMemoryOptimized as
BEGIN
UPDATE dbo.tblMemOptimized
SET price=price-(price*0.05)
END
GO


4c. In-Memory optimization with natively compiled stored procedure:

--Natively-Compiled Stored Procedure
CREATE PROCEDURE updateMemoryOptimizedNatively
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
UPDATE dbo.tblMemOptimized2
SET price = price - (price * 0.05)
END
GO

So far, we have created a memory-enabled database and the data structures, populated the tables with sample data (all three tables hold identical data for experiment consistency), and prepared the stored procedures that perform the processing against each of the three tables, that is, decreasing the 'price' column by 5%.

Running the Example

Now it’s time to run each stored procedure and measure the execution times.
But before executing the stored procedures, let's check a sample (TOP 5) from each table:

Figure 1: Top 5 Records from each table before update.
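The sample shown in Figure 1 can be reproduced with queries along these lines (ordering by id so that the three result sets are directly comparable):

--Top 5 records from each table
SELECT TOP 5 * FROM tblDiskBased ORDER BY id;
SELECT TOP 5 * FROM tblMemOptimized ORDER BY id;
SELECT TOP 5 * FROM tblMemOptimized2 ORDER BY id;
GO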

OK, now it's time to execute the three stored procedures, thus updating all three tables, and measure the execution times. Here they are:

Figure 2: Execution times for three modes.
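The article does not show how the timings in Figure 2 were captured; one common way to get elapsed and CPU times per statement is SET STATISTICS TIME, for example:

--Measure CPU and elapsed time for each execution mode
SET STATISTICS TIME ON;
EXEC updateDiskBased;
EXEC updateMemoryOptimized;
EXEC updateMemoryOptimizedNatively;
SET STATISTICS TIME OFF;
GO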


Additionally, let’s check once more some records just to confirm that the output is the same in all three tables:


Figure 3: Top 5 Records from each table after update.


Discussion

First of all, Figure 3 shows that the output of the three update operations is exactly the same, which validates the correctness of the three execution modes.
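Beyond eyeballing the top rows, a full set comparison can confirm that the three tables hold identical data; with the sketch below, empty result sets mean no differences:

--Empty result sets indicate the tables are identical
SELECT * FROM tblDiskBased EXCEPT SELECT * FROM tblMemOptimized;
SELECT * FROM tblMemOptimized EXCEPT SELECT * FROM tblMemOptimized2;
GO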

As you can see from the screenshot with the execution times (Figure 2), the slowest execution was the one involving the disk-based table (elapsed time: 1924 ms), which also had the highest CPU time (328 ms).

The fastest execution was the one against the memory-optimized table with the natively compiled stored procedure: the elapsed time was 329 ms and the CPU time just 93 ms!

The memory-optimized execution, with 404 ms elapsed time and 250 ms CPU time, was faster than the disk-based one, but slower than the memory-optimized scenario combined with the natively compiled stored procedure.

In-Memory optimization in SQL Server is a technology that can significantly improve performance in many cases, especially for workloads such as staging processes in Data Warehouse systems, high data insert rates (e.g. smart metering), and low-latency execution scenarios (e.g. investment trading). Of course, proper analysis and testing are required before applying it to production systems.


Highly Recommended Tool: In-Memory OLTP Simulator

Our solution, “In-Memory OLTP Simulator“, helps you easily benchmark standard and custom data workloads against the powerful In-Memory OLTP Engine in SQL Server. Moreover, you get rich analytics and insights, and see how Memory Optimization in SQL Server can help you boost the performance of your data processes.


Try In-Memory OLTP Simulator free for 30 days


Check our other related SQL Server Performance articles.



Reference: SQLNetHub.com (https://www.sqlnethub.com)

© SQLNetHub


Artemakis Artemiou
Artemakis Artemiou is a Senior SQL Server Architect, Author, and a 9 Times Microsoft Data Platform MVP (2009-2018). He has over 15 years of experience in the IT industry in various roles. Artemakis is the founder of SQLNetHub and TechHowTos.com. Artemakis is the creator of the well-known software tools Snippets Generator and DBA Security Advisor. Also, he is the author of many eBooks on SQL Server. Artemakis currently serves as the President of the Cyprus .NET User Group (CDNUG) and the International .NET Association Country Leader for Cyprus (INETA). Artemakis's official website can be found at aartemiou.com.