
Louis Davidson

  • Scenic City Summit is this Friday

    Scenic City Summit is a one-day conference in Chattanooga, TN. It is a new experience for me, although in a familiar, Devlink-esque setting: the Chattanooga Convention Center.

    I am looking forward to attending, and seeing a couple of fellow Microsoft MVPs do keynotes: Cory House and David Neal. I have never seen/met Cory House, and as he is doing the opening keynote, there is only a 30% chance I will actually make it there. David Neal, on the other hand, I have seen a few times at Music City Code and Devlink, and he and I also share a devotion to the culinary masterpiece known as Nashville Hot Chicken (my taste buds just twanged a bit thinking about it).

    But I digress.

    Scenic City Summit isn't a large conference, and is even smaller when it comes to database content. Under the topic of database, there are two sessions. One is by Eric Cobb (he is from Nashville, so I may know him already), called Building Better SQL Server Databases, which I plan to attend.

    Then my session:

    Implementing Row Level Security in SQL Server

    Since the early days of SQL Server, it has been possible to use basic security measures to allow (and disallow) user access to any object in a database, and even limit access to a given column. A more difficult task has been to limit access to the individual rows in a table. This has made application development more complex because whereas most every user may have access to a table like HR.Employee, not every employee will have access to every employee's data. In this session I will cover several techniques to limit access to rows, from methods that have existed since early versions of SQL Server, to the new Row Level Security feature in SQL Server 2016.
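    As a taste of what the 2016 feature looks like, here is a minimal sketch (the rowsec schema, the HR.Employee table, and its EmployeeLogin column are hypothetical stand-ins for the session's demo objects): an inline table-valued function defines the predicate, and a security policy binds it to the table.

    --hypothetical predicate: each user sees only their own row, a manager sees all
    CREATE FUNCTION rowsec.employee$filterPredicate (@EmployeeLogin sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN (SELECT 1 AS allowAccessFlag
            WHERE @EmployeeLogin = USER_NAME()
               OR USER_NAME() = 'HRManager');
    GO
    CREATE SECURITY POLICY rowsec.EmployeeFilter
        ADD FILTER PREDICATE rowsec.employee$filterPredicate(EmployeeLogin)
            ON HR.Employee
        WITH (STATE = ON);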

    What is interesting about this session for me is the time frame. At some conferences you get an hour and 15 minutes and have to stretch your topic to fit (or, to be fair, not hurry to get to the end because you have too much material to fill the typical hour). This presentation will be the other way: it is scheduled for 45 minutes. And since this will be more of a coder/developer type of audience, I really have no idea whether they will want to see 2016 topics, or how to do it with pre-2016 technologies. So I will have my 2 hours of material (all code in SSMS) ready, and see where things go.

    There are a few other sessions I am keen to attend, both from Gaines Kergosien (great guy who runs Music City Code). The first conflicts with Eric's session, but how can you not want to see a session called "7 Habits of Highly Paid Developers"? I work for a non-profit, but I am not dead inside. The one I do plan to make it to is called "From Developer to Data Scientist", as I am interested in the whole data scientist path; if not necessarily to become one, then to build better databases that support them.

  • 50ths… SQL Saturday and Life

    This week I will experience a couple of events associated with the number 50.

    SQL Saturday Atlanta

    The second will be my 50th SQL Saturday event! (49th speaking; 1 attended in casual mode… my 50th speaking Saturday will be in August at SQL Saturday Louisville.) This weekend is SQL Saturday Atlanta. It is one of my favorite events, not just for the great people I have known for over 10 years, not just for the mid-major conference feel, not even because of the awesome theming. Well, maybe it is the theming, because I can say the rest of those things about all of the SQL Saturdays I have attended!

    And awesomely, for my 50th event I will be doing my good old standard database design session:

    Database Design Fundamentals

    Data should be easy to work with in SQL Server if the database has been organized as close as possible to the standards of normalization that have been proven for many years, but are often thought of as old-fashioned. Many common T-SQL programming "difficulties" are the result of struggling against these standards and can be avoided by understanding the requirements, applying normalization, as well as a healthy dose of simple common sense. In this session I will give an overview of how to design a relational database, allowing you to work with the data structures instead of against them. This will let you use SQL naturally, enabling the query engine internals to optimize your output needs without you needing to spend a lot of time thinking about it. This will mean less time trying to figure out why SUBSTRING(column,3,1) = 'A' is killing your performance, and more time for solving the next customer problem.
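    To make the SUBSTRING point concrete, here is a hypothetical sketch (the Sales.Account table and its columns are invented for illustration). Wrapping a column in a function forces the engine to evaluate every row, while storing the searched-on value in its own column lets an index do the work:

    --non-SARGable: SUBSTRING must be evaluated row by row, so an index
    --on AccountNumber cannot be used for a seek
    SELECT AccountNumber
    FROM   Sales.Account
    WHERE  SUBSTRING(AccountNumber,3,1) = 'A';

    --the design fix: store the embedded attribute in its own (indexable) column
    SELECT AccountNumber
    FROM   Sales.Account
    WHERE  AccountRegionCode = 'A';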

    This is by far the session I have done the most times through the years at SQL Saturday and PASS Summits, and it is the topic that got me into the whole speaking, writing, and SQL for a hobby/living situation I have lived for the past 20 years.

    Life

    But before I reach 50 SQL Saturdays, I have another milestone to scoot past: 50 years of life as of July 12, 2017. I have written (whined?) about my travails over the past 20 years that make me feel old: gray hair, grandkids, hip replacements, knee problems, just to name a few. Reaching the decade of life in which my father passed away is mildly scary to me also (even if he died of an illness). Don't get me wrong, I am not afraid to pass away some day, but I am not looking forward to it either. (If I was, I wouldn't be consistently going to the gym for 50 minutes a day, and I would eat a lot more fried chicken from Chef Big Shakes!)

    While I certainly do not feel like I did when I was 20, I don't exactly feel old. Sure, I can't stand up without making that middle-aged groaning/grunting noise, but who can, really? Sometimes I feel like just a big kid, and the SQL community is a big part of that, for two reasons. First, in having a community of people who share SQL Server with me, my world has stretched a lot since I first attended a conference and didn't leave the hotel where it was held. I have also learned that all of the seemingly "kiddie" stuff I like to do is not abnormal. Some of my favorite things to do are:

    • Legos - I can see 7 sets of Legos from my desk without turning my head (and if I did turn my head, there are several more sprinkled around my home office).
    • Sci-Fi/Fantasy - I love Star Wars (movies, comics, TV shows), Marvel (movies and TV shows), Planet of the Apes (going to see the new trilogy on my birthday!), and Lord of the Rings (movies), to name just a few of my favorites.
    • Kids movies - Way more than half of the movies I go to see are animated, and the same can be said of those I stream.
    • Theme Parks - My wife and I plan our vacations like two pre-teenagers would. Resort time? Nah, that is taking up ride time.
    • Roller Coasters - I have some challenges with them, but if I can ride it, I will ride the heck out of it. I currently hold the record for number of times riding the Tennessee Tornado at Dollywood in a day (52 times).

    Every one of these is something I have in common with some members of the SQL community, and we often discuss them on Twitter. Then add on the stuff I do in my "hobby" SQL community (learning, writing, blogging, tech editing books), and I don't exactly sleep 15 hours a day anymore.

    If I just had my father, mother, grandparents, and other (what were at the time) old people that I have known as my guide, I have no idea what I would do with my time. I love sitting down and watching copious amounts of TV, but something that makes watching TV better is that it is a break from doing way too much work. Looking back at my childhood, I have a much greater appreciation for just how hard my father had to work for a living compared to the amount of work I have to do as a data architect.

    What is kind of cool is that even the things that technically make me feel old help me feel young sometimes. Grandchildren make you sound old, but kids are generally fun. My oldest likes roller coasters as much as, if not more than, I do. And while your own children getting older sounds bad, watching them grow up to be something awesome is worth getting older for. And hey, in 20-30 years she will get to become my parent, hopefully :)

    Future

    What does the future hold? Who knows? I still like working with SQL, and I love the SQL community. Travelling to 5+ events every year? Most likely if they will still have me, I will continue to put in to speak. When I am tired of that, I will probably still show up. I do at our local user group, even when the topic doesn't seem like it will be one I am initially keen on.

    Age is just a number, though as the saying goes, 50 is a big number. The only people who will tell you otherwise are either too young to know or too old to be trusted…

    Note: this is the second in my trilogy of personal blogs; the next will come later this year. Technical stuff will resume when I am finished with some editing work that is kicking my rear end. I don't want people to think I am just getting old and reflective; just some events kind of lead you to bring them up and discuss them. The SQL Community has been a part of my life now for 20 years, and for the past 10 or so, it is not only growing, but becoming more tightly knit due to social media and blogs. I just want to say thanks :)

  • SQL Saturday Chattanooga is THIS WEEK!

    My goodness how time does fly. It seems like just a few months ago I was going to my hometown SQL Saturday in Nashville, and now here I go again to my NEW hometown SQL Saturday in Chattanooga, TN. Much like the Nashville event, I didn't really do anything in terms of organizing the event, but I will be speaking at the event this year (last year, Chattanooga was the event I just attended… I remember liking that feeling in the days leading up to the event, as a matter of fact).

    I will be doing my Database Design Fundamentals session. Here is the abstract:

    Data should be easy to work with in SQL Server if the database has been organized as close as possible to the standards of normalization that have been proven for many years, but are often thought of as old-fashioned. Many common T-SQL programming "difficulties" are the result of struggling against these standards and can be avoided by understanding the requirements, applying normalization, as well as a healthy dose of simple common sense. In this session I will give an overview of how to design a relational database, allowing you to work with the data structures instead of against them. This will let you use SQL naturally, enabling the query engine internals to optimize your output needs without you needing to spend a lot of time thinking about it. This will mean less time trying to figure out why SUBSTRING(column,3,1) = 'A' is killing your performance, and more time for solving the next customer problem.

    This session is totally my "chestnut" session (the derivation of that expression is not what I thought it was!), as I have been doing a variation of this session for 15 years or so, and some of the concepts have been around since the mid-eighties (when the music was REAL!)

    Larry Ortega (@SpoChatt) and team put on a great conference. I have enjoyed it every year, and he is the lead of our local user group as well. I hope to see you there… Convinced? Click here: https://www.sqlsaturday.com/624/RegisterNow.aspx

    Still not convinced?

    How can you resist attending SQL Saturday #624 on 6/24? It will be many 1000s of events before that can possibly happen again!

    If my session and the 6/24 thing aren't enough, check out the full schedule for, frankly, a heck of a group of speakers! http://www.sqlsaturday.com/624/Sessions/Schedule.aspx

    Still not convinced? Well then you are just too hard to please for me. Stay home and enjoy a rainy Saturday afternoon without SQL.

  • Father, Son, Daughter, and SQL

    I don't love Father's Day, mostly because I spoke to my father for the last time on Father's Day back in 1996. He wasn't feeling well, we exchanged a few pleasantries, and the words I remember were: "I don't feel well, let me give you to your mother." Better than the actual last words that Marshall got: "that fungus thing is acting up again;" but not as good as the words just before those: "I love you" (don't get the reference? check here.)

    As a son I was by no means the worst, but still I was not the best. My father loved me, much in the same way that a drill instructor loves a new set of recruits. He wanted the best for me, and while his discipline was very (very) strong (yes, he hit me with belts and switches, and I more than deserved it), the care part was there. He taught me to treat everyone equally. We tore down the fence of our neighbor, who was the only non-Caucasian person in the neighborhood. I still remember the gist of his words, along with the geographic location of that belt utilization: "he has enough trouble just being different." Times have changed, but it is a lesson I still carry today, and not just regarding a person's ethnicity. Love your neighbor.

    He was actually who got me into SQL, indirectly. When I was in high school, he had me take a class that I insisted was just for girls (I think there may have been a few football players in there too): typing. I was never going to type, much less for a living. Ha! (I also took home economics, which was a good thing too, though apparently it is now called family and consumer sciences.)

    Then he helped me get my first computer, a Commodore 64, with the most horrible off-brand TV for a monitor. We were relatively poor at the time, him being between careers (after losing his management position at a car dealership to the owner's son, he quit, took about a year off, and ended up with a great position with the USDA Forest Service, managing their fleet), and that gift was a great inspiration over the years, once my brain realized all that had occurred. My love of computers started there, though I was still working on an engineering degree, which I failed at. He had always questioned whether that was what I should do, and to be realistic, my head was not where it should have been (you can guess where it was at). It took me time to get back in school, and finally to work on a Computer Science degree. The rest is more than I am going to write about here :)

    Though we were never close in a buddy-buddy manner, I feel his influence even today. We were similar teachers: if you don't want to learn, we are terrible. He was a master mechanic, but could never teach me a thing about cars. He was, by all accounts, a great driving instructor, and won many awards for the job he did. I love teaching people about SQL, because everyone who attends my sessions does it because they love the subject.

    The other part of Father's Day I do like is being a father, though that is difficult too. Am/Was I a great father? I don't know. I never felt like a bad father (embarrassing at times, for sure), but never "great". I don't know how some of the SQL community does it. Speaking, writing, consulting on the road: I applaud you if you can balance those things with being a parent. I remember sitting at many volleyball games, basketball games, and school musicals, writing/editing/planning my first book on my Palm Pilot.

    My daughter was 10, 11, and/or 12 then. She is not now, as one of my grandkids is 8! If we measured SQL programmers by whether their children became nerds too, then I failed big time. I never did a great job teaching her many skills, and she never really cared about technical things. But my son-in-law is currently working in IT, and my 8-year-old granddaughter loves math.

    Happy Father's Day, and thanks to all of the dads out there…

  • Utility to temporarily drop FOREIGN KEY constraints on a set of tables

    I was working on a database a few weeks back (maybe longer, I am kind of busy with several projects right now), and I wanted to truncate a set of tables. There was one central table, and 8 tables related by FOREIGN KEY constraints. The central table had 6 million rows, and a few of the other tables 20+ million. TRUNCATE is amazingly fast, and when I tried to just use DELETE, it took almost as long as it took me to load the tables. (Exaggeration? A little bit, but not as much as you might imagine, since I use SSIS to BULK LOAD these tables, and the source data is very simple.)

    I could just get rid of the constraints, but as a relational db lover, I love constraints. When enforced, they protect me. When not enforced (like when SSIS disables them for a BULK LOAD operation), they are still good documentation. So, as any good nerd programmer type would, I started coding a tool to deal with the constraints for me. And as a decent blogger, as soon as it started to get interesting, I realized I could blog about it and upload the code to my website. This in the end makes the code better, because I have to test more, and I learn stuff from readers reminding me of things (like that a FK script needs to honor CASCADE and NOT FOR REPLICATION, oops.)

    I am not going to go over, or even paste, the code in this blog. What I will do is show the interface, and demonstrate how the code works. The interface for the tool is two stored procedures. The first is used to drop the foreign key constraints, storing the scripts for the constraints in a table that it creates named Utility.foreign_key$batch_drop_toRestore:

    utility.foreign_key$batch_drop
        @table_schema sysname = '%', --lets you control what schema to drop FKs from
        @table_name sysname = '%',  --lets you control the tables to drop FKs from
        @add_to_history_flag BIT = 0, --by default the procedure creates a table to hold history, this parameter tells it to add to history, if you need to do things incrementally
        @force_replace_status  VARCHAR(20) = 'AS_WAS' --Using the following values: ENABLED, UNTRUSTED, DISABLED, lets you force the status of the constraints, like to quickly turn on the constraint as UNTRUSTED

    Then, to recreate the foreign key constraints after you have done your business with the tables:

    utility.foreign_key$batch_recreate --if the utility.foreign_key$batch_drop_toRestore table exists, do what is in the table.

    Note: if your structures become out of sync with the data in utility.foreign_key$batch_drop_toRestore, you may have to manually apply scripts from the table and/or drop the table. The point of the tool is to drop the constraints, do some action, and add the FKs back promptly. The more time that passes without applying the script, the more likely things are to get out of sync.
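    Before rerunning the drop procedure after an earlier session, a quick sanity check along these lines (a sketch, assuming the default history table name) will tell you whether there are constraints still waiting to be restored:

    --dynamic SQL, so the batch compiles even when the history table does not exist
    IF OBJECT_ID('utility.foreign_key$batch_drop_toRestore') IS NOT NULL
        EXEC ('SELECT * FROM utility.foreign_key$batch_drop_toRestore;');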

    To demonstrate their use, here are a few sample usages. The utility uses the code from the previous blog showing how to script a foreign key: http://sqlblog.com/blogs/louis_davidson/archive/2017/05/24/utility-to-script-a-foreign-key-constraint.aspx and you can download the code for this utility here: https://www.dropbox.com/s/3m9lghtfrnhhxgh/Utility-ForeignKey%24BatchDropRecreate.sql?dl=0

    USE TestRebuildConstraints
    GO

    --Recreate the tables, so we have a known state that can be compared to a script 
    DROP TABLE IF EXISTS Demo.ChildTable;
    DROP TABLE IF EXISTS Demo.ParentTable;
    DROP TABLE IF EXISTS Demo.GrandParentTable;

    CREATE TABLE Demo.GrandParentTable
    (  
        GrandParentTableId INT NOT NULL
            CONSTRAINT PKGrandParentTable PRIMARY KEY
    );

    CREATE TABLE Demo.ParentTable
    (
        ParentTableId INT NOT NULL
            CONSTRAINT PKParentTable PRIMARY KEY,
        GrandParentTableId INT NULL,
        CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
            FOREIGN KEY (GrandParentTableId)
            REFERENCES Demo.GrandParentTable (GrandParentTableId)
    );

    CREATE TABLE Demo.ChildTable
    (
        ChildTableId INT NOT NULL
            CONSTRAINT PKChildTable PRIMARY KEY,
        ParentTableId INT NULL
    );

    --an untrusted constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
      ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
           FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

    --adding a second constraint. Not typical (or a great idea) but good enough for this exercise
    --disabled constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
        ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
            FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId)
            ON DELETE CASCADE
            ON UPDATE SET NULL;

    ALTER TABLE Demo.ChildTable
        NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];
    GO

    This is the set of data to compare against in order to make sure that nothing has changed that we did not want to change.

    SELECT is_not_trusted, is_disabled, delete_referential_action_desc, update_referential_action_desc, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo'
    ORDER BY name;

    Which returns:

    is_not_trusted is_disabled delete_referential_action_desc update_referential_action_desc  name
    -------------- ----------- ------------------------------ ------------------------------- ------------------------------------------
    1              1           CASCADE                        SET_NULL                        ChildTable$ref$ParentTable_Disabled
    1              0           NO_ACTION                      NO_ACTION                       ChildTable$ref$ParentTable_NotTrusted
    0              0           NO_ACTION                      NO_ACTION                       ParentTable$ref$GrandParentTable_Enabled

    The basic goal of the procedure is something like this: I want to run the following set of statements on the tables I have created:

    TRUNCATE TABLE Demo.ChildTable;
    TRUNCATE TABLE Demo.ParentTable;
    TRUNCATE TABLE Demo.GrandParentTable;

    Which will fail (even if the constraint is disabled!):

    Msg 4712, Level 16, State 1, Line 52
    Cannot truncate table 'Demo.ParentTable' because it is being referenced by a FOREIGN KEY constraint.

    Now, run the following utility to drop the constraints:

    EXEC utility.foreign_key$batch_drop @table_schema = 'Demo';
    GO

    This creates a table in the utility schema that holds the scripts and object names:

    SELECT *
    FROM   Utility.foreign_key$batch_drop_toRestore

    Now, the TRUNCATE statements will execute.

    TRUNCATE TABLE Demo.ChildTable;
    TRUNCATE TABLE Demo.ParentTable;
    TRUNCATE TABLE Demo.GrandParentTable;
    GO

    Command(s) completed successfully.

    Execute the batch recreate procedure and it will restore the constraints.

    EXEC utility.foreign_key$batch_recreate;
    GO

    Check the foreign key metadata; it should match what you got from the first query of sys.foreign_keys. Next, I will demonstrate two other cases. I will use a TRANSACTION so we can do multiple tests without resetting our structures. In this example, I will show removing constraints one table at a time:

    --just remove constraints from ChildTable
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ChildTable';

    --To add more foreign keys to the table, you have to specify the @add_to_history_flag
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ParentTable';

    This causes the following error:

    Msg 50000, Level 16, State 1, Procedure foreign_key$batch_drop, Line 32 [Batch Start Line 164]
    Parameter @add_to_history_flag set to only allow initialize case

    Adding the @add_to_history_flag = 1 parameter value makes it work:

    --This works, adding an additional table
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo', @table_name = 'ParentTable',
        @add_to_history_flag = 1;

    --Now put the constraints back
    EXEC utility.foreign_key$batch_recreate;
    GO

    Finally, you can also change the constraints' enabled status using the @force_replace_status parameter:

    --the script that is saved off will be for enabled constraints.
    EXEC Utility.foreign_key$batch_drop @table_schema = 'Demo',@force_replace_status = 'ENABLED';
    GO
    EXEC utility.foreign_key$batch_recreate;
    GO

    Now check the metadata, and you will see the constraints are all trusted:

    SELECT is_not_trusted, is_disabled, delete_referential_action_desc, update_referential_action_desc, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo'
    ORDER BY name;

    Which now returns:

    is_not_trusted is_disabled delete_referential_action_desc update_referential_action_desc  name
    -------------- ----------- ------------------------------ ------------------------------- ------------------------------------------
    0              0           CASCADE                        SET_NULL                        ChildTable$ref$ParentTable_Disabled
    0              0           NO_ACTION                      NO_ACTION                       ChildTable$ref$ParentTable_NotTrusted
    0              0           NO_ACTION                      NO_ACTION                       ParentTable$ref$GrandParentTable_Enabled

  • Utility to script a FOREIGN KEY Constraint

    Note: Amended to include the word include in the first Note (I am a terrible editor of my own writing!) and to fix a bug in the code with a misplaced parenthesis
    Note: Amended to include cascading and NOT FOR REPLICATION.

    As noted in my previous post, I am in the middle of building a utility (for work, and for my next SQLBLOG post), that will help when you need to drop the foreign key constraints on a table, but then replace them without having to save off the script manually. In that post, I showed how to manually create a foreign key constraint in three possible states.

    Next, in order to create the utility to script the FOREIGN KEY constraints before dropping them, I need a way to script each constraint. In this post I present a user-defined function that generates that script. The code is largely based on some code from Aaron Bertrand in this blog post (with his permission, naturally!) with a few edits to script constraints as enabled, disabled, or untrusted if the source constraint was in that condition (or you can force the constraints to a certain state if you so desire).

    The code has some comments to illuminate most of what is going on, but I am not going to cover it in depth. Just the code, and some test cases.

    utility.foreign_key$script (download the code here)

    CREATE DATABASE TestRebuildConstraints;
    GO
    USE TestRebuildConstraints
    GO

    IF DB_ID() = DB_ID('TestRebuildConstraints') --helps me not create stuff in master
        EXEC('CREATE SCHEMA utility');
    GO

    CREATE OR ALTER FUNCTION utility.foreign_key$script(
        @schema_name sysname,
        @foreign_key_name sysname,
        @constraint_status VARCHAR(20) = 'AS_WAS' --ENABLED, UNTRUSTED, DISABLED
                                                   --ANY OTHER VALUES RETURN NULL
    )
    --------------------------------------------------
    -- Use to script a foreign key constraint
    --
    -- 2017  Louis Davidson  drsql.org
    --   Thanks to Aaron Bertrand and John Paul Cook's code
    --------------------------------------------------
    RETURNS NVARCHAR(MAX)
    AS
    BEGIN
        --based on code to gen list of FK constraints from this article by Aaron Bertrand:
        --https://www.mssqltips.com/sqlservertip/3347/drop-and-recreate-all-foreign-key-constraints-in-sql-server/

        --and code from John Paul Cook:
        --https://social.technet.microsoft.com/wiki/contents/articles/2958.script-to-create-all-foreign-keys.aspx

        DECLARE @script NVARCHAR(MAX);

        IF @constraint_status NOT IN ('AS_WAS','ENABLED','UNTRUSTED','DISABLED')
            RETURN NULL;

        SELECT @script
            =  N'ALTER TABLE ' + QUOTENAME(cs.name) + '.' + QUOTENAME(ct.name) + CHAR(13) + CHAR(10) + '   '
                --code added to set the constraint's status if it is not to be checked (and
                --in the case of disabled, you create it not trusted and disable it
              + CASE
                    WHEN(is_not_trusted = 1
                         OR fk.is_disabled = 1
                          OR @constraint_status IN ( 'UNTRUSTED', 'DISABLED' ))
                        --not forcing it to be enabled
                         AND @constraint_status <> 'ENABLED' THEN
                         'WITH NOCHECK '
                    ELSE
                         ''
                END
              + 'ADD CONSTRAINT ' + QUOTENAME(fk.name) + CHAR(13) + CHAR(10) +
              '      FOREIGN KEY ('
              + STUFF((SELECT   ',' + QUOTENAME(c.name)
                        -- get all the columns in the constraint table
                        FROM     sys.columns c
                                INNER JOIN sys.foreign_key_columns fkc
                                     ON fkc.parent_column_id = c.column_id
                                        AND fkc.parent_object_id = c.object_id
                        WHERE    fkc.constraint_object_id = fk.object_id
                        ORDER BY fkc.constraint_column_id
            FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'),1,1,N'')
               + ')' + CHAR(13) + CHAR(10) + '         REFERENCES ' + QUOTENAME(rs.name) + '.' + QUOTENAME(rt.name)
              + '('
               + STUFF((SELECT   ',' + QUOTENAME(c.name)
                        -- get all the referenced columns
                       FROM     sys.columns c
                                INNER JOIN sys.foreign_key_columns fkc
                                    ON fkc.referenced_column_id = c.column_id
                                       AND fkc.referenced_object_id = c.object_id
                       WHERE    fkc.constraint_object_id = fk.object_id
                       ORDER BY fkc.constraint_column_id
            FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'),1,1, N'') + ')'
              + CASE fk.update_referential_action
                    WHEN 1 THEN CHAR(13) + CHAR(10) + '         ON UPDATE CASCADE '
                    WHEN 2 THEN CHAR(13) + CHAR(10) + '         ON UPDATE SET NULL '
                    WHEN 3 THEN CHAR(13) + CHAR(10) + '         ON UPDATE SET DEFAULT '
                    ELSE '' --could also say "no action" which is the default
                END
              + CASE fk.delete_referential_action
                    WHEN 1 THEN CHAR(13) + CHAR(10) + '         ON DELETE CASCADE '
                    WHEN 2 THEN CHAR(13) + CHAR(10) + '         ON DELETE SET NULL '
                    WHEN 3 THEN CHAR(13) + CHAR(10) + '         ON DELETE SET DEFAULT '
                    ELSE '' --could also say "no action" which is the default
                END
              + CASE fk.is_not_for_replication
                    WHEN 1 THEN CHAR(13) + CHAR(10) + '         NOT FOR REPLICATION '
                    ELSE ''
                 END
              + ';'
              + CASE
                    WHEN(fk.is_disabled = 1 AND @constraint_status IN ( 'DISABLED', 'AS_WAS' ))
                         OR @constraint_status = 'DISABLED'
                         THEN CHAR(13) + CHAR(10)+  CHAR(13) + CHAR(10)+   'ALTER TABLE ' + QUOTENAME(cs.name) + '.'
                              + QUOTENAME(ct.name) + CHAR(13) + CHAR(10)
                               + '   NOCHECK CONSTRAINT ' + QUOTENAME(fk.name) + ';'
                     ELSE
                        ''
                END
        FROM   sys.foreign_keys fk
               INNER JOIN sys.tables rt
                    -- referenced table
                   ON fk.referenced_object_id = rt.object_id
               INNER JOIN sys.schemas rs
                    ON rt.schema_id = rs.schema_id
               INNER JOIN sys.tables ct
                   -- constraint table
                   ON fk.parent_object_id = ct.object_id
               INNER JOIN sys.schemas cs
                   ON ct.schema_id = cs.schema_id
        WHERE  OBJECT_SCHEMA_NAME(fk.object_id) = @schema_name
               AND fk.name = @foreign_key_name;
        RETURN @script;
    END;

    Now, to test the code, I will create a few tables:

    --To test, I will create three tables (which will anchor the tests of the
    --drop and recreate utilities as well).
    CREATE SCHEMA Demo;
    GO
    CREATE TABLE Demo.GrandParentTable
    (  
        GrandParentTableId INT NOT NULL
            CONSTRAINT PKGrandParentTable PRIMARY KEY
    );

    CREATE TABLE Demo.ParentTable
    (
        ParentTableId INT NOT NULL
            CONSTRAINT PKParentTable PRIMARY KEY,
        GrandParentTableId INT NULL,
        CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
            FOREIGN KEY (GrandParentTableId)
            REFERENCES Demo.GrandParentTable (GrandParentTableId)
    );

    CREATE TABLE Demo.ChildTable
    (
        ChildTableId INT NOT NULL
            CONSTRAINT PKChildTable PRIMARY KEY,
        ParentTableId INT NULL
    );

    --an untrusted constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
      ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
          FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

    --adding a second constraint. Not typical (or a great idea) but good enough for this exercise
    --disabled constraint
    ALTER TABLE Demo.ChildTable WITH NOCHECK
        ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
            FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable (ParentTableId);

    ALTER TABLE Demo.ChildTable
        NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];
    GO

    Now, check that the constraints are as expected:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- ----------------------------------------------
    0              0           ParentTable$ref$GrandParentTable_Enabled
    1              0           ChildTable$ref$ParentTable_NotTrusted
    1              1           ChildTable$ref$ParentTable_Disabled

    Next, I will test the constraints in several different ways, sometimes using AS_WAS, and others forcing the different configurations:

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS Original;

    Original
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);


    SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_NotTrusted','AS_WAS') AS Untrusted;

    Untrusted
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ChildTable]
       WITH NOCHECK ADD CONSTRAINT [ChildTable$ref$ParentTable_NotTrusted]
          FOREIGN KEY ([ParentTableId])
             REFERENCES [Demo].[ParentTable]([ParentTableId]);

    SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_Disabled','AS_WAS') AS Original;


    Original
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ChildTable]
       WITH NOCHECK ADD CONSTRAINT [ChildTable$ref$ParentTable_Disabled]
          FOREIGN KEY ([ParentTableId])
             REFERENCES [Demo].[ParentTable]([ParentTableId]);

    ALTER TABLE [Demo].[ChildTable]
       NOCHECK CONSTRAINT [ChildTable$ref$ParentTable_Disabled];

    SELECT utility.foreign_key$script('Demo','ChildTable$ref$ParentTable_Disabled','AS_WERS') AS Fails;

    Fails
    ------------------------------------------------------------------------
    NULL

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','UNTRUSTED') AS Untrusted;

    Untrusted
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','DISABLED') AS Disabled;

    Disabled
    ------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId]);

    ALTER TABLE [Demo].[ParentTable]
       NOCHECK CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled];

    --Now we test cascade and not for replication

    --First UPDATE and DELETE cascading

    ALTER TABLE [Demo].[ParentTable]
       DROP CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]

    ALTER TABLE [Demo].[ParentTable]
       ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
              REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
              ON UPDATE CASCADE
             ON DELETE SET DEFAULT;

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS CascadeTest;

    CascadeTest
    --------------------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
             ON UPDATE CASCADE
             ON DELETE SET DEFAULT;


    --Next Add Not For Replication
    ALTER TABLE [Demo].[ParentTable]
       DROP CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]

    ALTER TABLE [Demo].[ParentTable]
        ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
           FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
             ON UPDATE CASCADE
             ON DELETE SET DEFAULT
             NOT FOR REPLICATION;

    SELECT utility.foreign_key$script('Demo','ParentTable$ref$GrandParentTable_Enabled','AS_WAS') AS CascadeNotForRepTest;

    Note that NOT FOR REPLICATION makes the constraint not trusted

    CascadeNotForRepTest
    ------------------------------------------------------------------------------------------
    ALTER TABLE [Demo].[ParentTable]
       WITH NOCHECK ADD CONSTRAINT [ParentTable$ref$GrandParentTable_Enabled]
          FOREIGN KEY ([GrandParentTableId])
             REFERENCES [Demo].[GrandParentTable]([GrandParentTableId])
             ON UPDATE CASCADE
             ON DELETE SET DEFAULT
             NOT FOR REPLICATION;
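    You can confirm this in the catalog; a quick sketch of the check, reusing the query pattern from earlier in the post:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE name = 'ParentTable$ref$GrandParentTable_Enabled';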

    Finally, to make sure that the code does deal with composite key references (which I didn't really doubt, but hey, you really need to test this stuff, right?):

    CREATE TABLE Demo.MultiKey
    (
        Column1 INT NOT NULL,
        Column2 INT NOT NULL,
        PRIMARY KEY (Column1, Column2)
    )
    CREATE TABLE Demo.MultiKeyRef
    (
        Column1 INT NOT NULL,
        Column2 INT NOT NULL,
        FOREIGN KEY (Column1, Column2) REFERENCES Demo.MultiKey (Column1, Column2) --no name for key here
    )
       
    First, I need to look up the system-generated name:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    Now, execute the three different ways we can script:

    SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','AS_WAS') AS Original;
    SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','DISABLED') AS Disabled;
    SELECT utility.foreign_key$script('Demo','FK__MultiKeyRef__31EC6D26','UNTRUSTED') AS Untrusted;

    Original
    ------------------------------------------------------------------
    ALTER TABLE [Demo].[MultiKeyRef]
       ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
          FOREIGN KEY ([Column1],[Column2])
             REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

    Disabled
    ------------------------------------------------------------------
    ALTER TABLE [Demo].[MultiKeyRef]
       WITH NOCHECK ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
          FOREIGN KEY ([Column1],[Column2])
             REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

    ALTER TABLE [Demo].[MultiKeyRef]
       NOCHECK CONSTRAINT [FK__MultiKeyRef__31EC6D26];

    Untrusted
    ------------------------------------------------------------------
    ALTER TABLE [Demo].[MultiKeyRef]
       WITH NOCHECK ADD CONSTRAINT [FK__MultiKeyRef__31EC6D26]
          FOREIGN KEY ([Column1],[Column2])
             REFERENCES [Demo].[MultiKey]([Column1],[Column2]);

    Hopefully this script can be of some use to you. I will use it in the next blog, where I build the utility to drop and recreate FOREIGN KEY constraints.

    (Note: This works with memory optimized tables as well, since the syntax is the same)

  • Creating FOREIGN KEY constraints as enabled, trusted, non-trusted, and/or disabled

    I am in the middle of building a utility (for work, and for my next SQLBLOG post), that will help when you need to drop the foreign key constraints on a table, but then replace them without having to save off the script manually. Part of the process is to generate the script for the foreign key constraint, so the first thing I need to do is to make sure I have all of the scripting possibilities understood.

    When I started hunting around to remember how to create a disabled constraint, I couldn't easily find anything, so I figured I would make this a two-parter. (My blogging rule is: if I look for something and find a good article about it, reference it, then tweet the article out. If it is too hard to find, blog about it!) So today I will review how to create a FOREIGN KEY constraint in three ways:

    • Enabled, and Trusted - Just as you would normally create one
    • Enabled, Not Trusted - The "quick" way, not checking data to see if any wrong data already exists, but not allowing new, bad data in
    • Disabled, Not Trusted - The constraint is basically documentation of the relationship, but you are on your own to make sure the data matches the constraint

    First, I will create a few tables that can be used for the demonstration, and then show how to create the constraints as trusted, non-trusted, and disabled. (Note: CHECK constraints are very similar in that you can set them to the same three states, if you need to do the same for CHECKs.)

    --Here is the script that will build the database, and then recreate the objects

    CREATE DATABASE TestRebuildConstraints;
    GO
    USE TestRebuildConstraints;
    GO
    --a safeguard against building stuff in master is to never use dbo schema, and
    --then create the schema only in the db you are expecting to
    IF DB_ID() = DB_ID('TestRebuildConstraints')
        EXEC ('CREATE SCHEMA Demo');
    GO

    Then I will create a set of tables (which can be dropped and recreated over and over as you test different configurations):

    --From here, you can rerun if you want to try the code:
    DROP TABLE IF EXISTS demo.ChildTable, demo.ParentTable, demo.GrandParentTable;

    CREATE TABLE Demo.GrandParentTable
    (
        GrandParentTableId    INT CONSTRAINT PKGrandParentTable PRIMARY KEY
    )

    CREATE TABLE Demo.ParentTable
    (
        ParentTableId    INT CONSTRAINT PKParentTable PRIMARY KEY,
        GrandParentTableId INT
    )

    CREATE TABLE Demo.ChildTable
    (
        ChildTableId    INT CONSTRAINT PKChildTable PRIMARY KEY,
        ParentTableId INT
    )
    GO

    Enabled, and Trusted

    To create your constraint enabled and trusted, it is the base, simple ALTER command, specifying the column in the child table that references the parent (this could also be part of the CREATE TABLE statement):

    ALTER TABLE Demo.ChildTable
        ADD CONSTRAINT ChildTable$ref$ParentTable
            FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable(ParentTableId);


    ALTER TABLE Demo.ParentTable
        ADD CONSTRAINT ParentTable$ref$GrandParentTable
            FOREIGN KEY (GrandParentTableId) REFERENCES Demo.GrandParentTable(GrandParentTableId);

    GO

    The data in the table will be checked, and as such the constraints will be trusted, and enabled. It never hurts to check your constraint's status using the system catalog views:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- -----------------------------------
    0              0           ChildTable$ref$ParentTable
    0              0           ParentTable$ref$GrandParentTable

    Enabled, Not Trusted

    Now, if you want to create the constraints as not trusted, not checking any existing data, you use the WITH NOCHECK option on the ALTER TABLE statement:

    --Drop the existing constraint for the demo
    ALTER TABLE Demo.ChildTable
        DROP CONSTRAINT ChildTable$ref$ParentTable;

    --Add the constraint:
    ALTER TABLE Demo.ChildTable
        WITH NOCHECK --<<< Added this
            ADD CONSTRAINT ChildTable$ref$ParentTable
                FOREIGN KEY (ParentTableId) REFERENCES Demo.ParentTable(ParentTableId);

    Now, check the system catalog, and the ChildTable$ref$ParentTable constraint is not trusted, but it is enabled:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- -----------------------------------
    1              0           ChildTable$ref$ParentTable
    0              0           ParentTable$ref$GrandParentTable


    Disabled, Not Trusted

    This is where things were kind of confusing in the syntax. Disabling a constraint is pretty straightforward (though slightly confusing with NOCHECK showing up again with a slightly different meaning):

    ALTER TABLE Demo.ParentTable
        NOCHECK CONSTRAINT ParentTable$ref$GrandParentTable; 
    GO 

    You can see in the metadata that it is now neither trusted nor enabled:

    SELECT is_not_trusted, is_disabled, name
    FROM sys.foreign_keys
    WHERE OBJECT_SCHEMA_NAME(object_id) = 'Demo';

    is_not_trusted is_disabled name
    -------------- ----------- -----------------------------------
    1              0           ChildTable$ref$ParentTable
    1              1           ParentTable$ref$GrandParentTable

    You re-enable the constraint, using the CHECK argument on the ALTER TABLE statement:

    --For not trusted, simply use CHECK:
    ALTER TABLE Demo.ParentTable
        CHECK CONSTRAINT ParentTable$ref$GrandParentTable; 
    GO 

    --To enable and make trusted add WITH CHECK, leading to my FAVORITE bit of DDL syntax, WITH CHECK CHECK:
    ALTER TABLE Demo.ParentTable
        WITH CHECK CHECK CONSTRAINT ParentTable$ref$GrandParentTable; 
    GO 

    But how do you create it disabled? Answer: you can't, based on how the SQL Server tools script a disabled constraint. I guessed the answer, but that didn't make it seem any less weird to me. It is a two-step process. First, create the constraint WITH NOCHECK, and then disable it:

    --DROP the existing constraint on the Demo.ParentTable:
    ALTER TABLE Demo.ParentTable
        DROP CONSTRAINT ParentTable$ref$GrandParentTable;

    --Then create the disabled constraint by adding the non-trusted version, then disabling it
    ALTER TABLE Demo.ParentTable
        WITH NOCHECK
            ADD CONSTRAINT ParentTable$ref$GrandParentTable
                FOREIGN KEY (GrandParentTableId) REFERENCES Demo.GrandParentTable(GrandParentTableId);

    ALTER TABLE Demo.ParentTable
        NOCHECK CONSTRAINT ParentTable$ref$GrandParentTable;
    GO 

    Of course, you don't have to create it as non-trusted before disabling it, but it is probably not the desired process to create a constraint and check all of the data, just to have it be turned off immediately.
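    For completeness, here is a sketch of that (probably undesirable) alternative, creating the constraint trusted and then disabling it:

    ALTER TABLE Demo.ParentTable
        DROP CONSTRAINT ParentTable$ref$GrandParentTable;

    --WITH CHECK validates all of the existing data... work that is thrown away
    --when the constraint is disabled in the next statement
    ALTER TABLE Demo.ParentTable
        WITH CHECK
            ADD CONSTRAINT ParentTable$ref$GrandParentTable
                FOREIGN KEY (GrandParentTableId) REFERENCES Demo.GrandParentTable(GrandParentTableId);

    ALTER TABLE Demo.ParentTable
        NOCHECK CONSTRAINT ParentTable$ref$GrandParentTable;
    GO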

    Note: changed first sentence to be a bit more clear…

  • Speaking at the Nashville SQL Server User's Group on April 28th.

    And they say you can't go home again… On the 28th of April I will be returning to my old home of Nashville to speak at the Nashville SQL Server User's Group. (And where I moved to was very near the home I lived in when I was in high school.)

    The session I will be doing is a one-off session of demos of some of the new T-SQL features in SQL Server 2016 SP1, using code that I have culled from the blogs I have done here on SQLBlog.com: Temporal, Row Level Security, Dynamic Data Masking, and a few queries to demonstrate JSON and some of the other new features like CREATE OR ALTER, DROP … IF EXISTS, etc.

    Here is the abstract:

    A Survey of New T-SQL Features in SQL Server 2016 (SP1)

    Last year, Robert Verell gave a presentation to the group covering the new features in SQL Server 2016 at the product level. Our promise then was to give deeper-dive sessions on the features, so this presentation is demo-filled, covering the T-SQL features that are new for 2016. The features covered will include:

    Temporal Tables - The new table structures and syntax that let you capture history of all changes to a table, allowing you to query the table as it existed at a certain point in time in a very straightforward manner.

    Row Level Security - A mechanism to filter a user's access to data at the row level, in a manner that your code (stored procedures and views included) will honor with no changes to your existing code.

    Data Masking - Lets you define a pattern to mask a column's output in your T-SQL statements to obfuscate sensitive data.

    JSON - Allows you to format the output of a query as a JSON document (much like the FOR XML clause does for XML).

    New Syntax and Commands - Even if you don't use any of these new features in your coding, I will also show a selection of new functions and syntax constructs that you will definitely want to know about.
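    As a taste of the "new syntax" bucket, here is a minimal sketch that should run on a 2016 SP1 (or later) instance (dbo.FeatureDemo is a hypothetical table invented for the example):

    --DROP ... IF EXISTS replaces the old IF OBJECT_ID(...) IS NOT NULL dance
    DROP TABLE IF EXISTS dbo.FeatureDemo;
    GO
    CREATE TABLE dbo.FeatureDemo
    (
        FeatureDemoId INT CONSTRAINT PKFeatureDemo PRIMARY KEY,
        FeatureName NVARCHAR(30) NOT NULL
    );
    INSERT INTO dbo.FeatureDemo (FeatureDemoId, FeatureName)
    VALUES (1, N'Temporal'), (2, N'JSON');
    GO
    --FOR JSON formats the output as a JSON document, much as FOR XML does for XML
    SELECT FeatureDemoId, FeatureName
    FROM dbo.FeatureDemo
    FOR JSON AUTO;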

  • SQL Saturday Birmingham #593 is this weekend!

    It has been a while since I was last in Birmingham (Alabama) for a SQL Saturday. They didn't have one last year, and the event they did have was on the same weekend as Music City Code, so I wasn't able to attend. This year, I can't make it to SQL Saturday in Richmond due to this event, so I am fairly sure things have balanced themselves out.

    I will be doing two sessions (my two current sessions that I have been doing recently), the first one being my favorite session ever to give. Lots of demos that work, and material I actually know by heart. The second has no prep whatsoever, so I don't hate it either.

    Let Me Finish... Isolating Write Operations

    OLTP databases can be constantly written to, and reporting databases are written to at least periodically. In order to ensure consistent results, connections must be isolated from one another while executing, ideally with the lowest possible cost to concurrency. How this isolation is handled is based on the isolation level, whether the classic lock-based scheme or the newer optimistic scheme of the in-memory OLTP engine is used, or even if both engines are enlisted in the same transaction. In this session we will look at examples of how SQL Server isolates reading and writing operations from other writing operations, to explore how this may affect your application through error messages and performance hits.

    Then I am doing a second professional development session with Robert Verell, and any other career nerd types that are there for the conference:

    Discuss Your Career (Panel)

    As a database professional, you have plenty of choice in your career path. Should you stay at the same company for 20 years, or change every few years? Much depends on what matters most to you. Do you want higher income, regular work hours, training, vacation time, group insurance, job security, lots of travel, telecommuting, or the possibility to advance to management? The panel will give their observations based on decades of experience. Discussion will include at least one person who has spent over 19 years with the same employer, one person who has changed careers very recently, and everyone in attendance who is willing to give an opinion.

    The goal of this session will be to have a discussion about career matters. If attendees have questions about how to manage their career, we will discuss them; or there will be slides with different priorities to discuss if no one has questions. I would note that this session is not specifically about whether or not to change companies, positions, etc. The goal is to discuss the pros and cons of staying in one place, changing frequently, etc. There are so many factors to consider that it is maddening. If you don't learn anything, maybe you will teach someone else something about their career. And if you neither learn nor teach, the prize giveaways start right after this session, so it is a no-lose situation.

  • Music City Code, a great conference that you may have never heard of yet

    For quite a few years, Nashville, and then Chattanooga, had a conference called Devlink. It was low on cost and high on education, and run by the awesome John Kellar (@johnkellar), who wrote the foreword of my 2012 book. It was an amazing conference and was the place where I learned (well, in many cases learned about) a lot of stuff that existed in the non-Transact-SQL world that I keep my head firmly planted in; along with a great database track where lots of my SQL community friends presented. Kellar ended the Devlink conference a few years ago, and I, along with a lot of others, was pretty bummed.

    To fill this void, another great person (and fellow Microsoft MVP), Gaines Kergosien (@gainesk), and his team of people started putting on Music City Code (@musiccitycode, musiccitycode.com), a larger, general-purpose programming conference in Nashville. One of the tough parts about putting on a conference is getting the word out. It isn't like one can advertise on TV and radio, so I want to do my part to help you, as a SQL Server professional, decide to either submit to speak and/or attend Music City Code. I actually didn't hear about Music City Code until I met Gaines at a session at the Nashville SQL Server BI user group where I was doing a lightning talk on picking speakers.

    With 3 days of sessions (including day-long sessions and typical hour-long ones too), there will be plenty to get excited about. The dates are 1-3 Jun 2017, at Vanderbilt University in Nashville (Music City, y'all). I spoke last year, and plan to submit several sessions again this year (once I figure out what I want to speak about!) The deadline is March 1st, but the call opened on Feb 1, so you will want to submit as soon as you can (https://www.musiccitycode.com/p/call-for-speakers-4). Last year, the only thing the conference lacked for me was more sessions on data-oriented topics. There were some really great sessions of general appeal, and a major highlight was that they had Melinda Walker from OneSquigglyLine.com live-draw several of the sessions, including one on IT community:

    [Photo: Melinda Walker's live drawing of the IT community session]

    With plenty of learning, fun (did I mention they did a jam session every day at lunch?), and opportunities to meet interesting people and hang out and talk about computer topics (all for a tremendously amazing price!), like here, where we were discussing "Moving Past Scrum?":

    [Photo: attendees discussing "Moving Past Scrum?"]

    Why not give it a try? Just check musiccitycode.com for details.

  • A new (to me, and possibly you) SSMS feature - Query Plan Comparing

    Wow, Microsoft has really changed in culture recently. The new rapid release cycle for SSMS is seemingly paying dividends in a major way. In a recent build of SSMS (okay, perhaps not "recent", more like October of 2015, according to this blog by the MSSQL Tiger Team), they added the ability to compare query plans. I tried it on a very large query with a lot of differences, and it was kind of hard to follow, but that is true with any large plans. Even using the greatest query plan reading tool of them all, SQL Sentry Plan Explorer, I was lost in the details. But for your typical "make a change to a query/index and see what has changed" scenario, it is pretty nice.

    As a quick example, take the following query using WideWorldImporters:

    SELECT *
    FROM   Sales.Invoices
            JOIN Sales.InvoiceLines
                ON Invoices.InvoiceId = InvoiceLines.InvoiceId
    WHERE  AccountsPersonID = 3105;

    Let's drop the index on AccountsPersonID first (DROP INDEX FK_Sales_Invoices_AccountsPersonID ON Sales.Invoices;) and execute the query after enabling the Include Actual Execution Plan option. 360 rows will be returned, and you will see a plan like the following (along with a missing index hint):

    [Image: the query plan without the index, including the missing index hint]

    Right-click the query plan and save the execution plan somewhere. Next, add back the index (using the script generated by SSMS, with some formatting help from SQL Prompt):

    CREATE NONCLUSTERED INDEX FK_Sales_Invoices_AccountsPersonID ON Sales.Invoices
    (
        AccountsPersonID ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON USERDATA;
    GO

    EXEC sys.sp_addextendedproperty @name = N'Description',
        @value = N'Auto-created to support a foreign key',
        @level0type = N'SCHEMA', @level0name = N'Sales',
        @level1type = N'TABLE', @level1name = N'Invoices',
        @level2type = N'INDEX',
        @level2name = N'FK_Sales_Invoices_AccountsPersonID';
    GO

    Then execute the query again and you will see the following plan:

    [Image: the query plan with the index back in place]

    Right-click the plan and choose Compare Showplan. This opens a file dialog where you choose the plan file we saved earlier, and then you get a few windows. Graphically, the default view highlights the operators that are similar; in this case, the segments for the join to Sales.InvoiceLines are the same except for the cost. In the Compare Options window you can have it highlight the dissimilar operations as well, and it also tells you what each highlighted item is; in this case it highlighted only Key Lookup operators in red and Index Seeks in blue:

    [Image: the Showplan Comparison window, with similar operators highlighted]

    Beyond this, in the Properties window you can see specific differences between the plans by selecting each operator. For example, with the SELECT operator selected, as in the image above, you will see properties such as the following:

    [Image: the Properties window comparing the two SELECT operators]

    You can see the Actual Number of Rows highlighted here, along with an explanation of what the highlighting means, plus other bits of information such as estimated number of rows, estimated costs, time to compile, estimated available memory, estimated degree of parallelism, set options, and wait stats. Clicking on each operator, you can see comparisons between them as well. Operators with similar operations will automatically jump to their counterpart in the other plan when you click on them; for the others, you can choose any two to compare.

    All in all, that is a dandy amount of information one can find about two query plans, with the ability to easily compare two operators. When the query plan gets a lot larger, it can still be hard to pinpoint big differences (I had a query with 50+ operators, and the plan was wildly different from SQL Server version to version, so in that case it mainly told me how wildly different they were!), but one more tool to help look at query plan differences is definitely a great thing!
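
    As a footnote: if you want to capture a plan for a later comparison without manually saving it in SSMS, you should be able to pull the plan XML from the plan cache using the standard DMVs and save that as a .sqlplan file. A rough sketch (the LIKE filter is just an assumption to locate our demo query; adjust to taste):

    SELECT st.text, qp.query_plan
    FROM   sys.dm_exec_cached_plans AS cp
            CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
            CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE  st.text LIKE '%AccountsPersonID = 3105%'
      AND  st.text NOT LIKE '%dm_exec_cached_plans%'; --exclude this query itself

    Bear in mind that what comes back is the cached (estimated) plan, so it won't carry actual row counts the way a saved actual plan does.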

  • Saving a Few Lines of Code by Performing Multiple Operations in a Single Command

    I was in Mike Byrd's session on in-memory technologies the other day, and I saw him do something I had never seen before, which prompted me to write this blog today. As the title notes, it has nothing to do with in-memory; it has to do with stacking multiple settings in a single statement, in this case SET STATISTICS. For my entire career, I have put each of them on an individual line:

    --create a table for the demos
    CREATE TABLE dbo.Table1
    (
        Table1Id    int PRIMARY KEY
    );
    GO

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SET STATISTICS PROFILE ON;
    SET STATISTICS XML ON; --Not typical to use both PROFILE and XML, I suppose, but it is possible!
    GO
    SELECT *
    FROM   Table1;
    GO
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    SET STATISTICS PROFILE OFF;
    SET STATISTICS XML OFF;

    But then I saw Mike use the following syntax (sans PROFILE and XML):

    SET STATISTICS IO, TIME, PROFILE, XML ON;
    GO
    SELECT *
    FROM   dbo.Table1;
    GO
    SET STATISTICS IO, TIME, PROFILE, XML OFF;

    Wow, that is a lot easier! (And yeah, using SQL Prompt I can set a snippet to take away typing, but this saves space in any case.)
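
    As an aside, this comma stacking is not unique to SET STATISTICS; other SET options that share an ON/OFF value can be combined the same way. A sketch of a common pairing at the top of a stored procedure:

    SET NOCOUNT, XACT_ABORT ON;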

    -------------------------

    Another command where I often use commas is DROP TABLE (or DROP TABLE IF EXISTS). For example, say we add a Table2 object to the example (including a FOREIGN KEY constraint, to help make a point):

    CREATE TABLE dbo.Table2
    (
        Table2Id    int PRIMARY KEY,
        Table1Id    int REFERENCES dbo.Table1 (Table1Id)
    );


    Now, if we execute the following:

    DROP TABLE Table2, Table1;

    The tables will be dropped. However, it turns out that this form of DROP TABLE is not really executed as a single, atomic command. If we recreate the tables and then try to drop the objects in the wrong order (Table1 first, despite the reference from Table2 to Table1), we see an error:

    CREATE TABLE dbo.Table1
    (
        Table1Id    int PRIMARY KEY
    );
    CREATE TABLE dbo.Table2
    (
        Table2Id    int PRIMARY KEY,
        Table1Id    int REFERENCES dbo.Table1 (Table1Id)
    );
    GO

    DROP TABLE dbo.Table1, dbo.Table2
    GO

    This causes the following error message to be thrown:

    Msg 3726, Level 16, State 1, Line 18
    Could not drop object 'Table1' because it is referenced by a FOREIGN KEY constraint.

    For almost any command, you would expect it to behave as an atomic operation. However, in this case, it does not. Table2 is actually dropped:

    SELECT name
    FROM   sys.tables
    WHERE  name in ('Table1','Table2')

    name
    ----------------------
    Table1

    So while it is an excellent tool for saving a few keystrokes, it will not save you from the mistakes of mis-ordered objects any more than:

    DROP TABLE Table1;
    DROP TABLE Table2;

    For this, you would need to use a transaction, most simply with a TRY...CATCH block:

    --Recreate Table2, again
    CREATE TABLE dbo.Table2
    (
        Table2Id    int PRIMARY KEY,
        Table1Id    int REFERENCES dbo.Table1 (Table1Id)
    );
    GO

    BEGIN TRY
        BEGIN TRANSACTION;
        DROP TABLE dbo.Table1;
        DROP TABLE dbo.Table2;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK;
        THROW;
    END CATCH;

    Now you get the error message, and dbo.Table2 is not dropped. But realistically, most of the time we would be using a comma-delimited list of tables to drop in an ad hoc manner, and something like the following is good enough:

    DROP TABLE IF EXISTS dbo.Table1, dbo.Table2;

    Now simply repeat until there are no error messages, and the tables are gone! The only downside of this command is that if you spell a table name incorrectly, no error is raised, so you may go slightly mad.
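
    One caveat: DROP TABLE IF EXISTS is new in SQL Server 2016. On earlier versions, the closest equivalent is the old OBJECT_ID pattern, one table at a time:

    IF OBJECT_ID('dbo.Table2','U') IS NOT NULL
        DROP TABLE dbo.Table2;
    IF OBJECT_ID('dbo.Table1','U') IS NOT NULL
        DROP TABLE dbo.Table1;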

  • T-SQL Tuesday #86 My all-time favorite personal SQL Server Enhancement Request: Automatically maintained columns

    I have heard the name T-SQL Tuesday for (like,) ever, but I have always been so into my own bundle of insanity that I never took the time to figure out what it is. This changes today, because I read the post on Adam Machanic's (@adammachanic) blog that outlined the rules (here), and I have been keen to get back to blogging regularly.

    When I saw what the topic was this month, hosted by Brent Ozar, I was even more excited: SQL Server Bugs & Enhancement Requests. Over the years, I have posted a lot of enhancement requests on the Connect site, most of them in and around the SQL Server 2005 release. I was super enthusiastic about my requests, and made quite a few. I knew that most would fail, but one that I wanted to get into the product has stuck with me for years.

    Automatically maintained columns. https://connect.microsoft.com/SQLServer/feedback/details/203570/add-automatically-maintained-columns-perhaps-just-update-date-update-user

    The concept is very similar to a DEFAULT constraint, with two differences:

    1. It would work on an UPDATE operation, without specifying DEFAULT

    2. It could be configured to disallow the user from entering a value. My proposed syntax was pretty simple:

    AUTO [WITH OVERRIDE] (scalar expression)

    Now I realize that, 10 years ago, I didn't take terribly long to consider that WITH was a terrible thing to add to the syntax, and AUTO is already a keyword, so I am going to rename it: AUTO_DEFAULT (scalar expression, [option]). Since I have thought a bit more about this in the years since writing it, I have realized there are a few more options that would be nice. I was terrible at syntax parsing in college, but the exact syntax is not important. Temporal tables in SQL Server 2016 have similar syntax for their period columns, which I got really excited about the first time I saw it: SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL. Maybe in vNext?

    Possible options could include:

    • ALLOW_OVERRIDE -- let user input override scalar expression.
    • INSERT_ONLY -- only works for the insert operation. This makes it a kind of super default, and protects the column from future changes.

    It could be implemented as a constraint or, like dynamic data masking, just be a part of the table structure and syntax. I will leave that ambiguous, as it isn't important.

    The major value of the feature would be eliminating trigger use for the very typical need to automatically set a value for a column. It would also allow (as Rob Farley noted back in 2009) the OUTPUT clause to work when this is the only need you have for a trigger, and SCOPE_IDENTITY would not be affected as it is with an INSTEAD OF trigger. (Allowing SCOPE_IDENTITY to work with an INSTEAD OF trigger is also high on my list of fixes.)

    As an example, consider a table such as the following:

    CREATE TABLE dbo.Test
    (
         TestId int NOT NULL CONSTRAINT PKTest PRIMARY KEY, --assume a SEQUENCE object named dbo.TestSEQUENCE exists; the value should always be its next value
         Value int NOT NULL,
         RowCreateTime datetime2(0) NOT NULL, --should always be set on INSERT, and not modifiable on UPDATE
         RowModifiedTime datetime2(0) NOT NULL, --should always be the server time when the row was modified
         RowModifiedUserName nvarchar(128) NOT NULL --should be the original login of the user who modified the row, or whatever the user wants it to be
    );
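
    Today, a plain DEFAULT constraint only gets you part of the way: it fills in a value on INSERT when the column is omitted, but it does nothing on an UPDATE and cannot stop the user from supplying their own value. A sketch, using a constraint name I made up for illustration:

    ALTER TABLE dbo.Test
        ADD CONSTRAINT DFLTTest_RowCreateTime
            DEFAULT (SYSDATETIME()) FOR RowCreateTime;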

    The proposed syntax to fill the need would be:

    CREATE TABLE dbo.Test
    (
         TestId int NOT NULL CONSTRAINT PKTest PRIMARY KEY AUTO_DEFAULT (NEXT VALUE FOR dbo.TestSEQUENCE, INSERT_ONLY),
         Value int NOT NULL,
         RowCreateTime datetime2(0) NOT NULL AUTO_DEFAULT(SYSDATETIME(), INSERT_ONLY), --set once when the row is created
         RowModifiedTime datetime2(0) NOT NULL AUTO_DEFAULT(SYSDATETIME()),
         RowModifiedUserName nvarchar(128) NOT NULL AUTO_DEFAULT(ORIGINAL_LOGIN(), ALLOW_OVERRIDE)
    );

    Now, if the user executed:

    INSERT INTO dbo.Test(TestId, Value, RowModifiedTime)
    VALUES (-1000, 1, '1900-01-01')

    I would either like an error to occur, saying that TestId and RowModifiedTime may not be specified in a DML statement, or for SQL Server to simply override the supplied values. Modifying RowModifiedUserName is acceptable, since it allows override:

    TestId     Value RowModifiedTime     RowModifiedUserName
    ---------- ----- ------------------- -------------------
             1     1 2017-01-10 16:09:03 domain\louis

    Now, if I execute the following:

    UPDATE dbo.Test
    SET    Value = 2,
           RowModifiedUserName = 'Fred'
    WHERE  TestId = 1;

    The data should change to:

    TestId     Value RowModifiedTime     RowModifiedUserName
    ---------- ----- ------------------- -------------------
             1     2 2017-01-10 16:15:23 Fred

    Now, if I execute the following:

    UPDATE dbo.Test
    SET    Value = 3
    WHERE  TestId = 1;

    The data should change to:

    TestId     Value RowModifiedTime     RowModifiedUserName
    ---------- ----- ------------------- -------------------
             1     3 2017-01-10 16:19:25 domain\louis

    It is definitely possible to write triggers to make this happen, but it is certainly a tedious task, and this definitely feels like a DDL configuration that would save thousands of lines of trigger code from needing to be written whenever you need a value that cannot be overwritten by the user (or, in my case, when you don't trust developers, even yourself, to always get the values changed for all operations without it occurring automatically).
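
    To give a flavor of the tedium, here is a minimal sketch of the sort of trigger this feature would replace, covering just the RowModifiedTime behavior on UPDATE (the trigger name is my own invention; it assumes the RECURSIVE_TRIGGERS database option is OFF, the default, and it ignores the ALLOW_OVERRIDE behavior entirely):

    CREATE TRIGGER dbo.Test$SetRowModifiedTime
    ON dbo.Test
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        --overwrite whatever the caller did (or did not) set with the server time
        UPDATE Test
        SET    RowModifiedTime = SYSDATETIME()
        FROM   dbo.Test AS Test
                JOIN inserted
                    ON Test.TestId = inserted.TestId;
    END;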

  • If a Tree Falls in the Forest, and No One is Around To Hear It…

    You no doubt have heard this old philosophical question, which completes with "Does it Make a Sound?" As nerds, we all know that the answer is that it certainly does make a sound. However, taking the question to a more technical place, asking how much sound it makes, and what the impact of that sound is, is far more interesting, and far more complex.

    This brings me to my data question. If an order is processed in a store, but the expected data is not created, did that order ever occur?

    Very often, the staff of a business are very focused on pleasing the customer and making sure they get their product, but due to software limitations they may not end up capturing information about every sale in a satisfactory manner. Most of the blame I have seen lies with software that doesn't meet the requirements of the customer, making desired details tedious to capture when the process falls outside the norm. Very often the excuse programmers give is that too much of the work to build a system would have to be done for the atypical cases, but requirements are requirements, and it is generally essential that every action that occurs in a business is captured as data.

    In the same way that we know that the tree made a sound, we know that the order did actually occur. However, without data being captured about the transaction, this lack of evidence may cause several things to occur.

    1. If the customer comes back later with a receipt that has a transaction number that was never written to the database, they may be questioned as if they have stolen the product. Calling a customer a thief is not optimal for most businesses that file taxes.

    2. The income from the sale is likely going to make cash balances off, causing headaches trying to reconcile cash amounts. (Alternatively, income may be stolen. I never did this when I worked in retail, but sometimes an employee takes a sale, fakes the paperwork, and pockets the money. Again, I never did this, and the statute of limitations surely has passed in any case.)

    3. Inventory counts will be off, since product has left the building and hasn't been counted.

    The issues noted are just the short-term, functional issues. Generally speaking, though, the most important reason we store data is to report on it.

    From a data science perspective, when data is not stored about a transaction, it is impossible to determine how important that transaction was. Every order a customer places is important for the individual purchase income, naturally. But it is also important for predicting the next sale. Information like when the sale occurred, whether any discounts or coupons were applied, and the demographics of the purchaser helps land the next sale, both from that same customer and from future customers.

    Now, consider what happens when incomplete or inaccurate information is entered because the database won't allow fringe cases to be stored. Some things might be acceptable (if you can at least capture the income and inventory changes), but incomplete information can be worse than no information for reporting. If all transactions appear to be of the typical variety, it will look like fringe cases never occur. (And sometimes the fringe cases turn out to be an interesting percentage of what is occurring, leading to opportunities with other, similar customers.)

    So poorly created and managed databases could actually be harming future business. While pleasing the customer is always important, equally important is making sure that the systems we are creating as architects and programmers actually mirror the reality our clients experience. So when the tree falls, not only is someone there to hear it, the impact of the loss of the tree (reduced oxygen, loss of bird and squirrel habitats, future fire hazards) is calculable and can be turned into actions (like building the squirrels new housing complexes!)

  • My last SQL Event as a Nashvillian… SQL Saturday Nashville #581

    Of course, technically I live in Thompsons Station, about 20 miles south of Nashville, and technically this event will be held in Murfreesboro, also 20 miles south of Nashville and 30 miles from Thompsons Station. But this is splitting hairs. I am leaving the Nashville area for the Chattanooga area (again, 20 miles away, in Cleveland, TN) at the end of the month, and this one is going to be sad for me. It has been a great ride working with Robert Verell (@sqlcowbell), Tamera Clark (@tameraclark), Jon Boulineau (@jboulineau), Kerry Tyler (@airbornegeek), and Daniel Glenn (@danielglenn) these past few years. It stinks leaving the area, but I expect to see these folks quite often in Nashville, Chattanooga, and other SQL Server venues like the Summit (and sooner or later I expect to see a few more of those names at the MVP Summit, assuming I am still attending!)

    This year, I will be doing two sessions. The first is my concurrency session, which I premiered last year at SQL Saturday in Nashville. It has improved a lot with things I have learned over the year, and it is one of the best-written presentations I have ever done (even if it still tries to teach too much in an hour).

    Let Me Finish... Isolating Write Operations

    OLTP databases can be constantly written to, and reporting databases are written to at least periodically. In order to ensure consistent results, connections must be isolated from one another while executing, ideally with the lowest possible cost to concurrency. How this isolation is handled is based on the isolation level, whether the classic lock-based scheme or the newer optimistic scheme of the in-memory OLTP engine is used, or even whether both engines are enlisted in the same transaction. In this session we will look at examples of how SQL Server isolates reading and writing operations from other writing operations, to explore how this may affect your application through error messages and performance hits.

    Then I am doing a second professional development session at lunch with Robert Verell:

    Discuss Your Career (Panel)

    As a database professional, you have plenty of choice in your career path. Should you stay at the same company for 20 years, or change every few years? Much depends on what matters most to you. Do you want higher income, regular work hours, training, vacation time, group insurance, job security, lots of travel, telecommuting, or the possibility of advancing to management? The panel will share observations from decades of experience. Discussion will include at least one person who has spent over 19 years with the same employer, one person who has changed careers very recently, and everyone in attendance who is willing to give an opinion.

    The goal of this session is to have a discussion about career matters. If attendees have questions about how to manage their careers, we will discuss them; if no one has questions, there will be slides with different priorities to discuss. Note that this session is not specifically about whether or not to change companies, positions, etc. The goal is to discuss the pros and cons of staying in one place, changing frequently, and so on. There are so many factors to consider that it is maddening. Come with questions, answers, opinions, and the crunchiest lunch choices that Tamera and team have served up. If you don't learn anything, at least you will have something to eat and a seat to sit and eat it in.
