THE SQL Server Blog Spot on the Web


Sergio Govoni

A place where I share my experience on SQL Server and related technologies

  • SQL Server DML Triggers - Tips & Tricks - Part 1 ("Intro" & "Triggers activation order")

    This is the first blog post in a series of posts about Tips & Tricks of SQL Server DML Triggers.


    Introduction

    Triggers are often one of the main reasons of conflict between Developers and DBAs, between those who customize a database application and those who provide it. They are often the first database objects to be blamed when performance degrades. They seem easy to write, but writing efficient Triggers is not easy. They have many side effects, but also a very important characteristic: they allow you to solve problems that cannot be managed in any other application layer.

    A Trigger is a special type of stored procedure: it is not called directly, but it is activated on a certain event, with special rights that allow you to access the incoming and outgoing data stored in special virtual tables called Inserted and Deleted. Triggers have existed in SQL Server since version 1.0, even before CHECK constraints. They always work in the same unit of work as the T-SQL statement that fired them. There are different types of Triggers: Logon Triggers, DDL Triggers and DML Triggers; the best-known and most used type is the Data Manipulation Language Trigger, also known as the DML Trigger. This series of posts covers only DML Triggers.
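
    As a minimal sketch of how a Trigger reads the Inserted and Deleted virtual tables, consider the following AFTER UPDATE Trigger; the dbo.Product and dbo.ProductAudit tables here are hypothetical, invented only for illustration:

    CREATE TRIGGER dbo.TR_Product_Audit_UPD ON dbo.Product AFTER UPDATE
    AS
    BEGIN
      SET NOCOUNT ON;
      -- Inserted holds the new row values, Deleted holds the old ones
      INSERT INTO dbo.ProductAudit (ProductID, OldListPrice, NewListPrice, ChangedAt)
      SELECT d.ProductID, d.ListPrice, i.ListPrice, SYSDATETIME()
      FROM Inserted AS i
      INNER JOIN Deleted AS d ON d.ProductID = i.ProductID
      WHERE i.ListPrice <> d.ListPrice;
    END;
    GO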

    There are several options that modify Triggers' run-time behavior:

    1. Nested Triggers
    2. Disallowed results from Triggers
    3. Server Trigger recursion
    4. Recursive Triggers

    Each of these options has a default value chosen in accordance with the best practices of Trigger development. The first three are server-level options whose default values you can change using the sp_configure system stored procedure, whereas the value of the last one can be set at the database level.
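
    As a sketch, the first three options can be changed with sp_configure and the last one with ALTER DATABASE; the values below simply restate the defaults and are examples, not recommendations:

    -- Server-level options (defaults: nested triggers = 1,
    -- disallow results from triggers = 0, server trigger recursion = 1)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'nested triggers', 1;
    EXEC sp_configure 'disallow results from triggers', 0;
    EXEC sp_configure 'server trigger recursion', 1;
    RECONFIGURE;
    GO

    -- Database-level option (default: OFF)
    ALTER DATABASE [AdventureWorks2014] SET RECURSIVE_TRIGGERS OFF;
    GO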

    What do you think about Triggers? In your opinion, based on your experience, are they useful or damaging?

    You will meet people who say "Triggers are absolutely useful" and other people who say the opposite. Who is right? The two bullet lists below summarize the main arguments of the two different theories about Triggers.

    People say that Triggers are useful because with them:

    • You can implement customized business logic without changing the user front-end or the Application code
    • You can develop an Auditing or Logging mechanism that could not be managed so efficiently in any other application layer

    People say that Triggers are damaging because:

    • They are sneaky!
    • They can execute very complex pieces of code silently!
    • They can degrade performance very much
    • Issues in Triggers are difficult to diagnose

    As usual, the truth is somewhere in the middle. I think that Triggers are a very useful tool that you can use when there is no other way to implement a database solution as efficiently as a Trigger can, but the author has to test them thoroughly before deploying them to the production environment.


    Triggers activation order

    SQL Server has no limit on the number of Triggers that you can define on a table, but you cannot create more than 2,147,483,647 objects per database; the total number of Tables, Views, Stored Procedures, User-Defined Functions, Triggers, Rules, Defaults and Constraints must be lower than or equal to this number (the maximum value representable by the integer data type).
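
    As a quick sketch, you can see how far a database is from that limit by counting the objects exposed by the sys.objects catalog view:

    USE [AdventureWorks2014];
    GO

    -- Number of user-defined, schema-scoped objects in the current database
    SELECT COUNT(*) AS ObjectCount FROM sys.objects;
    GO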

    Now, suppose we have a table with multiple Triggers, all of them ready to fire on the same statement type, for example the INSERT statement: have you ever asked yourself what the exact activation order of those Triggers is? In other words, is it possible to guarantee a particular activation order?

    The Production.Product table in the AdventureWorks2014 database has no Triggers by design. Let's now create three DML Triggers on this table, all of them active for the same statement type: the INSERT statement. The goal of these Triggers is to print an output message that allows us to observe the exact activation order. The following piece of T-SQL code creates three sample DML AFTER INSERT Triggers on the Production.Product table.

    USE [AdventureWorks2014];
    GO

    -- Create Triggers on Production.Product
    CREATE TRIGGER Production.TR_Product_INS_1 ON Production.Product AFTER INSERT
    AS
      PRINT 'Message from TR_Product_INS_1';
    GO

    CREATE TRIGGER Production.TR_Product_INS_2 ON Production.Product AFTER INSERT
    AS
      PRINT 'Message from TR_Product_INS_2';
    GO

    CREATE TRIGGER Production.TR_Product_INS_3 ON Production.Product AFTER INSERT
    AS
      PRINT 'Message from TR_Product_INS_3';
    GO

    Let's list all Triggers defined on the Production.Product table; to achieve this, we will use the sp_helptrigger system stored procedure, as shown in the following piece of T-SQL code.

    USE [AdventureWorks2014];
    GO

    EXEC sp_helptrigger 'Production.Product';
    GO

    The output is shown in the following picture.


    Picture 1 – All Triggers defined on Production.Product table

    Now the question is: what will the activation order of these three Triggers be? We can answer this question by executing the following INSERT statement on the Production.Product table; when we execute it, all the DML INSERT Triggers fire.

    USE [AdventureWorks2014];
    GO

    INSERT INTO Production.Product
    (
      Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
      ReorderPoint, StandardCost, ListPrice, DaysToManufacture, SellStartDate,
      RowGUID, ModifiedDate
    )
    VALUES
    (
      N'CityBike', N'CB-5381', 0, 0, 1000, 750, 0.0000, 0.0000, 0, GETDATE(),
      NEWID(), GETDATE()
    );
    GO

    The output returned shows the default Triggers activation order.

    Message from TR_Product_INS_1
    Message from TR_Product_INS_2
    Message from TR_Product_INS_3

    As you can see in this example, the Triggers' activation order coincides with their creation order but, by design, the activation order is undefined.

    If you want to guarantee a particular activation order, you have to use the sp_settriggerorder system stored procedure, which allows you to set which Trigger fires first and which fires last. This configuration can be applied to the Triggers of each statement type (INSERT/UPDATE/DELETE). The following piece of code uses the sp_settriggerorder system stored procedure to set the Production.TR_Product_INS_3 Trigger as the first one to fire when an INSERT statement is executed on the Production.Product table.

    USE [AdventureWorks2014];
    GO

    EXEC sp_settriggerorder
      @triggername = 'Production.TR_Product_INS_3'
      ,@order = 'First'
      ,@stmttype = 'INSERT';
    GO

    In the same way, you can set the last Trigger to fire.

    USE [AdventureWorks2014];
    GO

    EXEC sp_settriggerorder
      @triggername = 'Production.TR_Product_INS_2'
      ,@order = 'Last'
      ,@stmttype = 'INSERT';
    GO
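
    As a sketch, you can verify the configured order through the OBJECTPROPERTY function, which exposes the ExecIsFirstInsertTrigger and ExecIsLastInsertTrigger properties (analogous properties exist for UPDATE and DELETE):

    USE [AdventureWorks2014];
    GO

    SELECT
      name AS TriggerName
      ,OBJECTPROPERTY(object_id, 'ExecIsFirstInsertTrigger') AS IsFirstInsert
      ,OBJECTPROPERTY(object_id, 'ExecIsLastInsertTrigger') AS IsLastInsert
    FROM sys.triggers
    WHERE parent_id = OBJECT_ID('Production.Product');
    GO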

    Let's see the new Triggers activation order by executing another INSERT statement on Production.Product table.

    USE [AdventureWorks2014];
    GO

    INSERT INTO Production.Product
    (
      Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
      ReorderPoint, StandardCost, ListPrice, DaysToManufacture, SellStartDate,
      RowGUID, ModifiedDate
    )
    VALUES
    (
      N'CityBike Pro', N'CB-5382', 0, 0, 1000, 750, 0.0000, 0.0000, 0, GETDATE(),
      NEWID(), GETDATE()
    );
    GO

    The returned output shows our customized Triggers activation order.

    Message from TR_Product_INS_3
    Message from TR_Product_INS_1
    Message from TR_Product_INS_2

    In this first part you have learnt how to set the first and the last Trigger to fire in a scenario with multiple DML AFTER INSERT Triggers. Probably one question has come to your mind: "Can I set only the first and the last Trigger?" The answer is: "Yes, currently you can set only the first Trigger and the last Trigger for each statement type on a single table"; as a DBA friend of mine says: "You can set only the first and the last Trigger to fire because you should have at most three Triggers for each statement type on a single table! The sp_settriggerorder system stored procedure allows you to set the first and the last Trigger to fire, so the third one will be in the middle, between the first and the last".
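
    If you want to leave AdventureWorks2014 as you found it, a minimal cleanup sketch removes the three sample Triggers (the two sample product rows inserted earlier are left in place):

    USE [AdventureWorks2014];
    GO

    -- Drop the three sample Triggers created in this post
    DROP TRIGGER Production.TR_Product_INS_1;
    DROP TRIGGER Production.TR_Product_INS_2;
    DROP TRIGGER Production.TR_Product_INS_3;
    GO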

    All examples in this post are based on the AdventureWorks2014 database, which you can download from the CodePlex website at this link.

  • SQL Server Hardware

    From Red-Gate website you can download (for free) the ebook SQL Server Hardware written by Glenn Berry (@GlennAlanBerry).

    This ebook provides the fundamental knowledge and resources you need to make intelligent decisions about the choice, optimal installation and configuration of SQL Server hardware, the operating system and the SQL Server RDBMS.

    The full editorial review is available here.

    This book should not be missing from your digital library! And of course you can buy the paperback on Amazon.

  • 24 Hours of PASS (September 2014): Recordings Now Available!

    The sessions of the event 24 Hours of PASS: Summit Preview Edition (held on September 9th) were recorded and are now available for online streaming!

    If you have missed one session in particular, or the entire event, you can watch or re-watch your preferred sessions; you can find all the details here.

    What can you expect from the next PASS Summit? Find out in the recorded sessions of this edition of 24 Hours of PASS.

  • 24 Hours of PASS (September 2014): Summit Preview Edition

    Which sessions can you expect to find at PASS Summit 2014? Find out on September 09, 2014 (12:00 GMT) at the free online event 24 Hours of PASS: Summit Preview Edition.

    Register now at this link.

    No matter what part of the world you follow the event from, the important thing to know is that there will be 24 hours of continuous training on SQL Server and Business Intelligence on your computer!

  • AdventureWorks databases for SQL Server 2014

    From CodePlex website, you can download the AdventureWorks databases (OLTP, Data Warehouse, Tabular and Multidimensional model) for SQL Server 2014.

    Download links and installation details are available here.

    Enjoy the new features of SQL Server 2014!

  • 24 Hours of PASS (June 2014): Recordings Now Available!

    The sessions of the event 24 Hours of PASS: SQL Server 2014 (held on June 25th and 26th) were recorded and are now available for online streaming!


    If you have missed one session in particular, or the entire live event, you can watch or re-watch your preferred sessions here.

    What can you expect from the next PASS Summit? Find out on September 9 at 24 Hours of PASS: Summit 2014 Preview Edition!

  • Updateable Column Store Indexes in SQL Server 2014

    Introduction

    Column store indexes were released with SQL Server 2012 to optimize data warehouse workloads that follow a specific pattern: data is loaded through T-SQL scripts or SSIS packages, possibly several times a day (the exact frequency is not important, only that all the available data is loaded in the same execution). At the end of the ETL process, data is read with reporting tools. Usually data is written once and then read multiple times.

    SQL Server 2012 had only the non-clustered column store index; like a traditional B-Tree non-clustered index, it was a secondary index. However, it differs from a traditional B-Tree index because it is based on a columnar structure, while the base table remains organized by row (in a structure called row-store, saved in 8-KB data pages).
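
    As a minimal sketch of the SQL Server 2012-style secondary index, the following statement creates a non-clustered column store index on a few columns of a hypothetical fact table (table and column names are invented for illustration):

    -- dbo.FactSales is a hypothetical fact table
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
    ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);
    GO

    In SQL Server 2014 you can instead create a clustered column store index (CREATE CLUSTERED COLUMNSTORE INDEX), which makes the whole table columnar and is updateable; note that in 2014 it cannot coexist with other indexes on the same table.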

    Column store indexes are part of the Microsoft In-Memory Technologies because they use the xVelocity engine for data compression optimization, and their implementation is based on a columnar structure, like PowerPivot and SSAS Tabular. Data in column store indexes is organized by column: each memory page stores data from a single column, so each column can be accessed independently. This means that the SQL Server Storage Engine is able to fetch only the columns it needs. In addition, data is highly compressed, so more data fits in memory and I/O operations can decrease greatly.

    Column store indexes structure

    Before talking about the new features of column store indexes in SQL Server 2014, it is important to introduce three keywords: Segment, Row Group and Compression. In a column store index, a segment contains the values of one column for a particular set of rows called a row group. As you can see in the following picture, each red and gray portion is a segment. When you create a column store index, the rows of the table are divided into groups, and each row group contains about one million rows (the exact number of rows in each row group is 1,048,576; in other words, there are 2^20 rows in each row group). The column store transforms the internal index organization from row organization to columnar organization, and there will be one segment for each column and for each row group. Column store indexes are part of the Microsoft In-Memory Technologies in which data is compressed, and compression plays a very important role, so each segment is compressed and stored in a separate LOB.

    This article does not detail the algorithms used to compress data in a column store index. At any rate, keep in mind that each segment stores the same type of data, so within a segment the data is homogeneous; in this scenario, the compression algorithms are more efficient than the ones used to compress table rows, because a row usually contains different types of data. In general, data compression can be implemented using different techniques such as:

    • Dictionary Encoding
    • Run-length Encoding
    • Bit Packing
    • Archival Compression (only in SQL Server 2014)
      • It can reduce disk space up to 27%

    The techniques used by SQL Server to compress data are undocumented.
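
    Even though the compression algorithms are undocumented, you can observe their effects. As a sketch, the catalog views sys.column_store_segments and sys.column_store_row_groups (the latter is new in SQL Server 2014) expose segments and row groups, and archival compression is applied with an index rebuild; the index NCCI_FactSales on table dbo.FactSales below is hypothetical:

    -- Number of segments per column across column store indexes
    SELECT s.column_id, COUNT(*) AS SegmentCount
    FROM sys.column_store_segments AS s
    GROUP BY s.column_id;
    GO

    -- Row groups of column store indexes (SQL Server 2014)
    SELECT object_id, row_group_id, state_description, total_rows
    FROM sys.column_store_row_groups;
    GO

    -- Archival compression (SQL Server 2014): extra disk savings at CPU cost
    ALTER INDEX NCCI_FactSales ON dbo.FactSales
    REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
    GO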

    The following picture shows an example of row groups and segments in a column store index... continue reading full article here.


    Thanks to my friend Viviana Viola for the grammar review.

  • Introducing Microsoft Azure HDInsight

    A post on Microsoft Press Blog has announced the publication of the eBook Introducing Microsoft Azure HDInsight.

    If you want to know what Big Data really means and how you can use it to your advantage in your company or organization, don't miss this eBook!

    Enjoy the book!

  • 24 hours of PASS is back!

    The most important free on-line event on SQL Server and Business Intelligence is back! The 24 Hours of PASS is coming back with a great edition fully based on the new features of SQL Server 2014.

    Register now at this link.

    No matter from what part of the world you will follow the event, the important thing is to know that it will be 24 hours of continuous training on SQL Server and Business Intelligence.

  • Review of the book Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8

    Recently I had the opportunity to read the book Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8 written by Geoff Webber-Cross (@webbercross) and published by Packt Publishing.

    In the last year, Windows Azure has expanded its offering of cloud-based services hosted on the Windows Azure platform. One of these new services is Windows Azure Mobile Services, which allows developers to build web-connected applications easily.

    Before reading this book, my knowledge of Windows Azure was about other topics such as SQL Azure Database, Storage and Windows Azure Virtual Machines. When I heard about the opportunity to read and review this book, I thought it was a great chance to learn something new about the services offered by Windows Azure for mobile applications.

    The book covers all the features of Windows Azure Mobile Services, starting from the activities needed to prepare the Windows Azure Mobile Services Portal up to the best practices for developing web-connected applications. When you start to develop an App for Windows 8 or Windows Phone 8 with Windows Azure Mobile Services, you may want to know what software and hardware are needed; this topic is covered in the second chapter. Security, customization, notifications and scalability are covered in chapters 3, 4, 5 and 6.

    Another thing I appreciated in this book is the attention to the cost of services; many times in the book I read sentences like "At this point, if we choose the … we will start incurring costs". Fittingly, the topic "Choosing a pricing plan for services you wish to implement" is covered at the beginning of the first chapter.

    There are lots of pictures in the book, which make it practical and easy to read. If you want to look inside the book, you can download a sample chapter here; this is the table of contents:

    • Chapter 1: Preparing the Windows Azure Mobile Services Portal
    • Chapter 2: Start Developing with Windows Azure Mobile Services
    • Chapter 3: Securing Data and Protecting the User
    • Chapter 4: Service Customization with Scripts
    • Chapter 5: Implementing Push Notifications
    • Chapter 6: Scaling Up with the Notifications Hub
    • Chapter 7: Best Practices for Web-connected Apps

    This book should not be missing from your digital or physical library. Enjoy!

  • Introducing Microsoft SQL Server 2014

    The release of each of the latest two versions of SQL Server (2008 R2 and 2012) was accompanied by the publication of a respective eBook. On April 2nd, a post on the Microsoft Press Blog announced the publication of the eBook Introducing Microsoft SQL Server 2014!


    This eBook is divided into two parts. The first one is dedicated to Database Administration: here the authors describe the editions of SQL Server 2014, the characteristics of the new In-Memory Engine, and how the added support for hybrid environments eases the transition from on-premises solutions to the Cloud. In the second part, the authors describe the components that support data analysis and the improvements made in related technologies such as Microsoft Excel 2013, Power BI for Office 365, HDInsight, and PolyBase.

    The PDF version of the eBook is available, for free, here. In addition, you will find this book in the eBook section of the Microsoft Virtual Academy site.

    Enjoy the book, and thanks to the authors Ross Mistry (@RossMistry) and Stacia Misner (@StaciaMisner).

  • Unexpected SPID change

    Some time ago I had the opportunity to investigate an unexpected SPID (Session Process ID) change. The inquiry began when a customer of ours started to complain about locks and lock timeout errors. Sometimes these locks were more frequent, sometimes less.

    To investigate this issue, I captured two SQL Profiler trace files for two executions of the Application functionality the customer complained about, using the same execution criteria. The only difference is that the first trace file was taken when the user experienced locks and a lock timeout error, while the second one was taken when no locks occurred.

    Comparing the two SQL Profiler trace files, I noticed an "unexpected" SPID change that happened exactly when the Application was blocked; I have written "unexpected" because the Application always uses a single connection to perform the queries captured by SQL Profiler. I checked the piece of code that executes the queries shown in the following picture and verified that no new connection had been opened explicitly.

    The following picture shows the first execution in which you can see the unexpected SPID change.

    Picture 1 – Execution with unexpected SPID change

    Let me focus on the trace captured during the first execution. As you can see in Picture 1, at a certain point, for ClientProcessID number 192, there was a SPID change from SPID number 111 to SPID number 110. The last query executed with SPID 110 has only the SP:StmtStarting event, without the SP:StmtCompleted event, because this query was blocked by the previous SPID number 111; for this reason the Application was blocked.

    The following picture shows the second execution.

    Picture 2 – Execution without SPID change

    Let me focus on the second execution of the same Application functionality, on the same client and with the same execution criteria, of course. As you can see in Picture 2, the SPID number is always 68 for all the queries performed by the same piece of code (as I expected).

    Now the questions are: what is the reason for this unexpected SPID change? Under which conditions does SQL Server decide to change the SPID number between two query executions?
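
    As a sketch of how you can observe this situation from the server side, the sys.dm_exec_sessions DMV exposes the client process ID, so you can list all the SPIDs opened by the same client process (the host name 'APPSERVER01' is a hypothetical filter):

    -- All sessions opened by each client process on a given host
    SELECT host_name, host_process_id, session_id, program_name
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1
      AND host_name = 'APPSERVER01'  -- hypothetical host name
    ORDER BY host_process_id, session_id;
    GO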

    Talking about this issue with Erland Sommarskog, he asked me: "What API does the application use?". My answer was "OLE DB", so he replied: "We have the answer!".

    When you use OLE DB, or something based on OLE DB, and you execute a query on a connection that has not fully consumed the result set of the previous query, the default behavior is to open a new connection behind the scenes. This new connection will have a different SPID number; it is an attempt to be helpful, but it is not always helpful.

    In particular, the result set that was not fully consumed belonged to the last query executed by SPID number 111; the same tables were then accessed by the last query with SPID 110, and that produced the lock.

    Thanks for the help Erland!

  • Hurry up! Buy one book and get one for free on Packt Publishing!

    Packt Publishing wants to celebrate the release of their 2000th title with the great "Buy One, Get One Free" offer, applied across their comprehensive catalog.

    If you buy a book of your choice, you will get another one for free. In addition, you can take advantage of this offer an unlimited number of times during the offer period.

    I suggest you take a look at the available titles: you may bring home two books for the price of one :) In particular, I suggest you look at these categories:

    Enjoy the books!

  • Performance Tuning with SQL Server Dynamic Management Views

    From Red-Gate website you can download (for free) the ebook Performance Tuning with SQL Server Dynamic Management Views written by Louis Davidson (@drsql) and Tim Ford (@sqlagentman).

    DMVs are objects that publish information about the health of a SQL Server instance. They allow you to monitor the heart of the system to diagnose problems and measure performance. They are a very good tool for DBAs and database Developers alike. Reading this book, you will learn how to obtain information from the most frequently used DMVs when investigating query execution, index usage, session and transaction activity, disk I/O, and how SQL Server is using or abusing the operating system.
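
    As a small sketch of the kind of investigation the book teaches, the following query joins the sys.dm_exec_query_stats DMV to sys.dm_exec_sql_text to list the statements that have consumed the most CPU time since they were cached:

    -- Top 10 cached statements by total worker (CPU) time
    SELECT TOP (10)
      qs.total_worker_time
      ,qs.execution_count
      ,SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
         ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
           ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;
    GO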

    The editorial review is available here.

    This book cannot miss in your digital library! And obviously you can buy the paperback on Amazon.

  • Troubleshooting Clusters

    The session Troubleshooting Clusters by Allan Hirt (@SQLHA) has been published on the SQLPASS TV channel.

    Abstract

    Whether you are new to clusters or have years of experience, you may still cross your fingers when implementing a failover cluster instance (FCI) of SQL Server or an availability group (AG). Both require an underlying Windows Server failover cluster (WSFC), but how many DBAs are involved in the planning and deployment of it? What happens if you try to install an FCI or configure an AG and things don't work? Or worse, what if the implementation fails after going live in production? This session will cover how to diagnose what went wrong and the free tools available to you, as well as demonstrate how to fix common issues that every DBA should know about.

    Link

    http://www.youtube.com/watch?v=ELtmP_1uEi8 

    Enjoy! 
