THE SQL Server Blog Spot on the Web


John Paul Cook

  • Charles Bonnet Syndrome and Implications for Digital Image Processing

    Those of you who know me or have read my biography are aware that I live in two worlds, information technology and healthcare. This post is about image processing in the brain and in a computer. In particular, this discussion is about what happens when there is insufficient data. I think there are some lessons from healthcare that can be applied to information technology.

    First, some background. Charles Bonnet was a Swiss naturalist (a scientist who studies the natural world) who in 1760 wrote about his 89-year-old grandfather’s visual hallucinations. The grandfather had cataracts, which rendered him almost blind. In other words, his retinas were sending insufficient data to his brain. He reported seeing people, animals, and objects. This syndrome was named after Charles Bonnet and is also known as visual release hallucinations. Retinas and digital imaging sensors are alike in that they generate signals that must be processed to be understood. Processing is not error free. When the retina receives less input, it’s similar to trying to make sense of a very pixelated image.

    When discussing machine learning, we mention training a model. Data is fed into an algorithm and a model is trained. New data, such as a digital image, is submitted to the model. If an approximate match is found, the model classifies and categorizes the new image. The brain does something similar. Consider the case of a person with retinal damage caused by age-related macular degeneration, diabetic retinopathy, glaucoma, or some other pathology. If the person is able to see brown, white, and black next to each other, the trained model in the brain may find a match on a beagle. The brain says brown, white, and black must be a beagle, so I’ll put a beagle into the person’s field of view. Putting this into information technology terms, a beagle might be algorithmically correct based on the input data, but if there isn’t a beagle present, the outcome is a false positive.

    It’s important to understand that this discussion is about visual hallucinations in a person without any psychiatric problems. In fact, the people who have this syndrome are aware that the hallucinations are not real. In other words, they have insight into what is happening. The takeaway is that no matter how good our digital image processing systems are, we still need oversight from a thinking human being who asks and answers the question: Does this make sense? The Indian ophthalmologist G. J. Menon published a framework for evaluating visual hallucinations. In his framework, the presence or absence of insight into the hallucination is very significant. I’m suggesting that our work isn’t done when a model is trained. We need to consider developing problem-specific frameworks for critically appraising the results of machine learning.

    It’s important to consider the consequences of incorrect processing. There are reports of people with Charles Bonnet Syndrome fretting about their hallucinations and fearing they may be losing their grip on their sanity. They may not bring it up for fear of being diagnosed as psychiatrically ill and no longer being allowed to live independently. It’s a great relief to these patients when they finally find out that their hallucinations are actually normal. How family members and healthcare professionals treat a patient is quite different when they think a person is psychiatrically normal instead of abnormal. Following procedures and jumping to easy conclusions can have devastating consequences for people.

    You can read more about Charles Bonnet Syndrome at the website of the Charles Bonnet Syndrome Foundation. There are similar auditory hallucinations in psychiatrically normal people. Once again, the presence of insight about the hallucinations is significant. We need critical human thinking skills providing the sanity checks for signal processing.

  • PolyBase Error After Uninstalling JRE

    The new PolyBase feature in SQL Server 2016 has a dependency on the 64-bit Java Runtime Environment (JRE). It must be installed prior to installing PolyBase. If you uninstall the JRE and then install a later version, PolyBase may fail. I won’t speculate as to what will happen if you reinstall the same version of the JRE that you uninstalled.

    Here’s the error message displayed when attempting CREATE EXTERNAL TABLE after the old JRE was uninstalled and a new JRE was installed.

    Msg 105019, Level 16, State 1, Line 63
    EXTERNAL TABLE access failed due to internal error: 'An unexpected error has occurred while creating or attaching to JVM. Details: System.DllNotFoundException: Unable to load DLL 'jvm.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)
       at JNI.JavaVM.JNI_GetCreatedJavaVMs(IntPtr& pVM, Int32 jSize1, Int32& jSize2)
       at Microsoft.SqlServer.DataWarehouse.Hadoop.HadoopBridge.JavaBridge.LoadOrAttachJVM()'
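
    Any CREATE EXTERNAL TABLE of this general shape exercises the PolyBase Java bridge and will surface the error above once the JVM can no longer be loaded. This is only a hedged sketch, not the statement from the original scenario: the object names are hypothetical, and it assumes an external data source and file format have already been created for the Hadoop or Azure blob storage location.

    -- Hypothetical names; assumes the external data source AzureStorage and
    -- the external file format CsvFormat were created beforehand.
    CREATE EXTERNAL TABLE dbo.SensorReadings
    (
         ReadingID    INT
        ,ReadingValue FLOAT
    )
    WITH
    (
         LOCATION    = '/sensordata/'
        ,DATA_SOURCE = AzureStorage
        ,FILE_FORMAT = CsvFormat
    );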

    It’s tempting to interpret the error message as a path problem and start fiddling with environment variables. If you’ve uninstalled the JRE, you should do a repair installation of SQL Server instead of trying path changes. After the repair was done, PolyBase started working again without a reboot.

    image

    Figure 1. Select Maintenance, then Repair.

    image

    Figure 2. Select the instance to repair.

    image

    Figure 3. Click Repair.

  • In the Cloud: Free Azure Training

    Microsoft has many free Azure training courses found at https://openedx.microsoft.com/. Each course is estimated to take 16-24 hours to complete.

    Oh my, I just realized Free Azure Training as a TLA is FAT. As a nurse, I’d never tell anybody to get fat. But as an Azure architect, I want you to get FAT. Check it out and upgrade your skills!

  • In the Cloud: Visual Studio 2017

    Visual Studio 2017 has new functionality to help you build apps for both on-premises and cloud solutions. The screen capture of the Visual Studio 2017 Enterprise installer shows a different interface from the one used by the previous several versions of Visual Studio. Notice I selected the options for both Azure development and Data storage and processing. The default selections for each of those are shown. I ended up installing all of the options in both categories. You’ll want the PowerShell tools and Azure Storage AzCopy. Notice that some Redgate tools are included. Even the Community edition has Redgate SQL Prompt.

    image

    Figure 1. Visual Studio Enterprise 2017 installer.

    You should click on Individual components and look for other options to add to your installation.

    image

    Figure 2. Individual components default selections.

    If you uninstall Visual Studio 2015 and related components prior to installing Visual Studio 2017, it’s possible you may need to do a repair installation of SQL Server Management Studio (SSMS). It depends on whether or not you uninstalled a dependency. Keep in mind that SSMS as of SQL Server 2016 is based on the Visual Studio shell.

    When you first run SSMS after installing Visual Studio 2017, you will see a message box like the one shown below. Just click OK.

    image

    Figure 3. Normal informative message after updating Visual Studio. Just click OK.

  • FLAS

    The information technology field is full of obfuscation. There are many three-letter acronyms, including the meta three-letter acronym TLA. Now, in the era of Big Data and Cloud, we have four-letter acronyms, what I call FLAS, two of which are HTAP and ACID. People have asked me what these are, so here is an explanation.

    ACID has a new meaning. No longer is it just Atomicity, Consistency, Isolation, and Durability. It can also mean Algorithms, Cloud, IoT, and Data. HTAP is Hybrid Transactional/Analytical Processing. Transactional processing systems are by default not optimized for analytical processing. Transactional processing works well when data is stored in indexed rows. That type of storage does not work well for analytical processing, which involves the aggregation of columns of data. By arranging and storing data in columns, aggregations can be done much faster. A tried and true technique for providing both fast analytical processing and fast transactional processing is to have ETL or ELT processes periodically build columnar OLAP storage from row-oriented OLTP storage. The disadvantage is the latency introduced.

    HTAP solutions provide both OLAP and OLTP storage allowing analytical processing on current transactional data without slowing down the transactional processing.
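
    As a concrete illustration, here is a minimal sketch of one HTAP approach: SQL Server 2016’s updatable nonclustered columnstore index on a row-oriented OLTP table. The table and column names below are hypothetical; the point is that analytical queries get column-oriented storage over current transactional data without a separate ETL or ELT pipeline.

    -- Hypothetical OLTP table; the clustered primary key keeps rowstore
    -- storage for transactional work.
    CREATE TABLE dbo.SalesOrder
    (
         OrderID    INT           NOT NULL PRIMARY KEY CLUSTERED
        ,CustomerID INT           NOT NULL
        ,OrderDate  DATE          NOT NULL
        ,OrderTotal DECIMAL(18,2) NOT NULL
    );

    -- Updatable nonclustered columnstore index, maintained automatically
    -- as rows are inserted and updated.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_SalesOrder
        ON dbo.SalesOrder (CustomerID, OrderDate, OrderTotal);

    -- Analytical aggregation that can be satisfied from the columnstore
    -- while OLTP activity continues against the rowstore.
    SELECT CustomerID, SUM(OrderTotal) AS TotalSpent
    FROM dbo.SalesOrder
    GROUP BY CustomerID;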

    Of course I’m writing about FLAS because my job title as of this writing is DPSA, Data Platform Solution Architect.

  • Role Based Access Control in Azure

    From talking to customers and people new to Azure SQL Database, it is apparent that many people are unaware of how Role Based Access Control (RBAC) in Azure affects Azure SQL Database. Last week I gave a presentation on this at SQL Saturday and discussed it with customers at the SQL PASS Summit. This post is a brief summary and provides links to the official documentation.

    Definitions

    RBAC - Role Based Access Control

    IAM - Identity and Access Management

    CLI - Command Line Interface

    TDE - Transparent Data Encryption

    NOTE: Actions described as being performed in the Azure portal can also be accomplished via scripting with PowerShell, the Azure CLI, or the Azure REST API. Actions described as being accomplished in SSMS can also be scripted with T-SQL.

    Get started with access management in the Azure portal is where I suggest you start reading. Keep in mind that both Azure SQL Database and Azure SQL Data Warehouse have their Azure roles at the Azure database server level. An Azure database server is really a logical construct, a container of databases that exists for administrative purposes.

    image

    Figure 1. Adding a role in the Azure portal.

    Pay particular attention to Owner vs. Contributor. If you put a user in the Contributor role, the user cannot grant permissions to other users. That’s the difference between the two roles.

    Although you can administer Azure roles through the Azure portal, eventually for production use you’ll want to script everything so you can have repeatable processes. You’ll need Azure PowerShell (Windows users) or the Azure CLI (cross platform and Docker). You can also use the REST API. See Managing Role-Based Access Control with the REST API, which also includes links to PowerShell and Azure CLI instructions.

    Use role assignments to manage access to your Azure subscription resources

    Documentation on Azure roles can be found at RBAC: Built-in roles. Notice there are three roles that begin with SQL:

    · SQL DB Contributor

    · SQL Security Manager

    · SQL Server Contributor

    image

    Figure 2. SQL roles in Azure portal.

    It’s important to understand that these are Azure roles. They are limited in scope to actions that can be performed in the Azure portal. The Storage Account Contributor role in the Azure portal is relevant to Azure SQL Database if you have auditing enabled.

    The Azure portal is for administering Azure resources. SQL Server Management Studio (SSMS) or equivalent T-SQL scripts are used for administering the contents of Azure SQL Database. There is very little overlap between what can be done in the Azure portal and in SSMS. TDE can be enabled in either; see Transparent Data Encryption with Azure SQL Database. Dynamic Data Masking can be enabled in either, although it is more flexible in SSMS. You can create an Azure SQL Database from either the Azure portal or SSMS.
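
    For example, giving someone access to the contents of a database is a T-SQL (or SSMS) task rather than an Azure portal task. Here is a minimal sketch, with hypothetical names, of creating contained database users in Azure SQL Database and granting read access:

    -- Hypothetical names; run this in the user database, not in master.
    -- Contained database user for an Azure AD identity:
    CREATE USER [jane@contoso.com] FROM EXTERNAL PROVIDER;

    -- Or a contained database user with a SQL password:
    CREATE USER ReportReader WITH PASSWORD = 'StrongPassword!123';

    -- Grant read access through a built-in database role:
    ALTER ROLE db_datareader ADD MEMBER ReportReader;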

  • Big Data V’s

    Originally there were three V’s of Big Data. Doug Laney introduced Volume, Velocity, and Variety as the three V’s of data management in 2001 when he worked for the Meta Group, which was purchased by Gartner in 2005. He did not use the term Big Data in his 2001 publication.

    IBM uses the same three V’s and added a fourth V of Veracity no later than 2013. The author Bernard Marr added a fifth V of Value. He describes his five V’s in these slides. Rob Livingstone added Validity and Visibility. In 2013, Doug Laney made the distinction between definitional and aspirational qualities of data. According to Doug, only Volume, Velocity, and Variety are definitional. All other V words are aspirational.

    Mike Gualtieri of Forrester in 2012 asserted that Doug’s three V’s are not actionable. Mike offered what he calls a pragmatic definition of Store, Process, and Access.

    Neil Biehn, when still at PROS (now at Siemens), stated that the fourth and fifth V’s are Viability and Value.

    If you are willing to have more than three V’s, Visualization is an obvious V to add and multiple authors have written about it. It’s hard to make sense of Big Data and difficult to derive value or take action if you can’t see what the data is telling you.

    I think people should use as few or as many V’s as they find helpful. I will not go so far as to formally propose another V, but I will state that Vexing sometimes fits Big Data. It can be vexing because of the technical issues of Doug’s three V’s and it can be vexing to interpret and act upon.

  • Using SSMS in Customer Presentations

    When using SSMS with a projector or a very large screen HDTV, there are several customizations that make things easier to read for the people in the back of the room. First let’s look at a screen capture of SSMS without any customizations.

    image

    Figure 1. SSMS with default settings.

    Do you really need the Properties pane during a presentation? Probably not. You can close that out to obtain more SSMS real estate.

    image

    Figure 2. Properties pane closed. Go to Tools and select Options.

    To increase the size of most fonts in SSMS including the Object Explorer, within Options, open Environment, select Fonts and Colors, then under Show settings for, select Environment Font. Pick a font type and size. The changes will not fully take effect until you close and reopen SSMS.

    image

    Figure 3. Under Options, open Environment and select Fonts and Colors. Change the font settings for Environment Font.

    To make the grid results visible to the people in the back of the room, you can change the font settings for Grid Results.

    image

    Figure 4. Change the font for the Grid Results.

    Notice how much larger the fonts are except for the query text.

    image

    Figure 5. The grid results are larger, but the query text is still small.

    To increase the size of query text, you can use Fonts and Colors to change settings for the Text Editor. I prefer a different approach. You can change the size of query text by using the dropdown menu at the bottom of the query pane.

    image

    Figure 6. Use the dropdown at the bottom of the query pane to increase the query font size.

    Another option for making SSMS more readable is to use the free Microsoft tool ZoomIt.

  • BI Beginner: Using Power BI Desktop Instead of Excel for Visualizing Big Data

    Excel is a great tool, but Power BI offers more in the realm of visualization. The 64-bit version of Excel is better at processing large files than the 32-bit version, but even it has limitations. Power BI Desktop (BI as in Business Intelligence) is a free alternative to Excel for working with large files and Big Data. It is possible to enable the Power Pivot add-in for Excel and overcome the 1,048,576 row limit. You can extend Excel’s visualization capabilities with Power Map and Power Query, but Power BI can do more. Power BI Desktop has very powerful and easy-to-use visualization features of interest to anyone who needs to analyze and visualize data. As with many software packages, it takes longer to read the instructions than it does to actually do the work. Once you know how, this exercise can be completed in just a few minutes. You don’t have to be a geek or a power user to succeed with Power BI. When you’re done creating your visualizations, you can share them with your colleagues over the web.

    image

    Figure 1. File is too large for Excel.

    Use Get Data to load Excel and csv files into Power BI Desktop.

    image

    Figure 2. Use Get Data to load an Excel or csv file into Power BI Desktop. I selected CSV because that is the type of file I needed to analyze.

    The file that had too many rows for Excel was obtained from Kaggle, which is a great source of large, real world, free datasets. I downloaded the Centers for Disease Control and Prevention’s (CDC) mortality dataset from Kaggle. The DeathRecords.csv file contains 2,631,171 rows and took about a minute to load on my system, which has a 4 GHz processor, 32 GB of RAM, and an SSD.

    image

    Figure 3. Select the file of interest.

    image

    Figure 4. Click Load to load the selected file.

    image

    Figure 5. DeathRecords.csv loaded into Power BI Desktop.

    Now it is time to do some visualizations. We’ll start with a pie chart showing the ages at which people died. There is more than one way to accomplish this task. Since we know we want a pie chart, go to Visualizations and select the pie chart icon. Next, select Age as shown. By default, it isn’t what we want. It is a sum and we need a count. After changing to a count, we need to set the Legend to Age.

    image

    Figure 6. Select pie chart under Visualizations.

    image

    Figure 7. Select Age.

    image

    Figure 8. Go to the dropdown to the right of Age under Values and select Count.

    image

    Figure 9. Under Fields, place the mouse over Age and hold the left mouse button down to drag it under Legend.

    image

    Figure 10. Release the left mouse button to drop Age under Legend.

    image

    Figure 11. Completed pie chart.

    The next step is to add a stacked column chart visualization.

    image

    Figure 12. Adding a stacked column chart.

    image

    Figure 13. Underneath Fields, check MaritalStatus to add it to the stacked column chart.

    image

    Figure 14. Drag MaritalStatus underneath Value. It correctly defaults to a count.

    image

    Figure 15. You can collapse Visualizations and Fields. You can also resize your two visualizations and reposition them.

    At first it appears that these two visualizations are independent of each other, but they are interconnected. Move the mouse and place it over the far right column, the column for the number of widowed people who died. Click once. You will see that what you highlighted, the widowed data, is superimposed on the pie chart. The result is similar to the pie chart popularized by nurse and statistician Florence Nightingale. To undo the selection, click on the W column again.

    image

    Figure 16. Completed pie chart and stacked column chart.

    image

    Figure 17. Click the W for widowed column.

    image

    Figure 18. After clicking the widowed column, the pie chart is updated to show that as people age, they are more likely to be widowed at the time of death.

    You can save your model and come back and work with it later. It is saved with a file extension of pbix. You can also publish the model to the Microsoft cloud and grant access to the model to your colleagues. It’s important to understand that you’re not sharing a static view with your colleagues. You are sharing the data and the dynamic visualization features. If the underlying data is updated, your colleagues will be able to see the visualization change after doing a refresh to get the latest data.

    image

    Figure 19. Using the published version of the Power BI model from a browser. The M for married column is selected. The pie chart now shows that people who die as younger adults are more likely to be married than those who die as elderly adults.

    Power BI can read all types of data sources, not just files. It natively supports reading files, folders of files, relational databases, Big Data stores, and more as you can see in the screen capture below.

    image

    Figure 20. Datasources supported by Power BI Desktop. If you have a csv file, select csv.

    NOTE: Long-time readers of my blog have seen numerous references in my posts to making accessibility accommodations. Since my last post, I found this research titled Good Fonts for Dyslexia by Luz Rello and Ricardo Baeza-Yates indicating that italics decrease the readability of text for people with dyslexia. In the past, the captions underneath my screen captures were italicized. I will avoid the use of italics going forward. Also notice that I provided a hyperlink instead of an APA formatted citation. I think MLA and APA formatting are not up with the times. Strict adherence to these legacy formatting rules, with origins long before the digital age, causes people to waste time on pedantic matters. Use of hyperlinks to primary sources placed directly in the text is more in keeping with the digital millennium, providing both attribution and convenience. As always, my opinions are my own and not those of my employer or graduate school.

  • Use AutoCorrect in Word to Prevent SQL Sever

    Too often I see SQL Server misspelled as SQL Sever. You can easily prevent this by using AutoCorrect in Word.

    Start by clicking on File.

    image

    Figure 1. Click on File in Word.

    Next, click on Options.

    image

    Figure 2. Click on Options.

    On the Word Options dialog box, select Proofing and then click the AutoCorrect Options button.

    image

    Figure 3. Click the AutoCorrect Options button.

    Enter sql sever under Replace and SQL Server under With. Click the Add button.

    image

    Figure 4. Click the Add button after entering the text strings.

    You may also wish to add the following automatic correction to ensure proper casing of SQL Server. This allows you to type sql server without using the shift key and still end up with SQL Server as the result.

    image

    Figure 5. Ensure that SQL Server is always properly cased.

  • SQL Server 2016 Temporal Queries Sample Code

    Temporal tables in SQL Server 2016 and Azure SQL Database allow you to have historical changes to data maintained for you automatically. You don’t have to write custom code to maintain historical values. New SQL syntax allows you to query and retrieve the data as it was in the past or you can simply retrieve the most recent value. The official documentation is found at https://msdn.microsoft.com/en-us/library/dn935015.aspx and I encourage you to take advantage of it.

    The code posted below will help you understand the nuances and subtleties of temporal SELECT statements. I created a simple table structure for you to practice with. What you need to focus on is understanding the boundary conditions such as when a query is inclusive of a datetime boundary or not. You also need to understand what date values would cause overlaps or gaps. Figure out how the various temporal expressions work with datetime boundary conditions.

    Copy the code below and paste it into SQL Server Management Studio connected to SQL Server 2016. Make sure you are connected to a user database instead of a system database. Pay attention to the comments in the code.

    -- It is recommended that you execute all of the code in each section,
    -- each section in order, one section at a time. Understand each
    -- section before proceeding to the next one. Pay attention to boundary
    -- conditions. It's important to understand what is included and what
    -- isn't.

    -- The approach taken in section 1 is definitely not a pattern for
    -- production code. The purpose of these examples is to teach you how
    -- to run temporal SELECT statements. How the data was faked and inserted
    -- does not represent best practices for production code. The code in
    -- section 1 sets things up for you to learn how to query in sections 2-7.
    -- Refer to the official documentation for examples of how to create
    -- temporal tables.

    ---------- BEGIN SECTION 1 ----------

    -- Several tricks are used to load explicit data values. Do not
    -- use this section as an example for production code. This section
    -- was done the way it was for the sole purpose of providing data for
    -- the subsequent sections.

    -- create the simplest possible table for temporal queries
    CREATE TABLE dbo.alphabet
    (
         letter   CHAR(1)   NOT NULL PRIMARY KEY CLUSTERED  -- must have a primary key
        ,sysstart DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
        ,sysend   DATETIME2 NOT NULL DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999')
    );
    GO

    INSERT INTO dbo.alphabet
    (letter, sysstart)
    VALUES
    ('X', '1-JAN-2016');
    GO

    ALTER TABLE dbo.alphabet ADD PERIOD FOR SYSTEM_TIME (sysstart, sysend);

    CREATE TABLE dbo.alphabet_history
    (
         letter   CHAR(1)      NOT NULL
        ,sysstart DATETIME2 NOT NULL
        ,sysend   DATETIME2 NOT NULL
    );
    GO

    INSERT INTO dbo.alphabet_history
    (letter, sysstart, sysend)
    VALUES
    ('A', '2015-01-01', '2015-02-01')  -- the boundary conditions are determined by the temporal expressions
    ,('B', '2015-02-01', '2015-03-01')
    ,('C', '2015-03-01', '2015-04-01')
    ,('D', '2015-04-01', '2015-05-01')
    ,('E', '2015-05-01', '2015-05-31')  -- gap between May 31 and June 1
    ,('F', '2015-06-01', '2015-06-30')  -- another gap, what happens at June 30 00:00:00?
    ,('G', '2015-07-01', '2015-08-01')
    ,('H', '2015-08-01', '2015-09-01')
    ,('I', '2015-09-01', '2015-10-01')
    ,('J', '2015-10-01', '2015-11-01')
    ,('K', '2015-11-01', '2015-12-01')
    ,('L', '2015-12-01', '2016-01-01')
    ;
    GO

    ALTER TABLE dbo.alphabet
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.alphabet_history, DATA_CONSISTENCY_CHECK = ON))
    GO

    -- look at the contents of the tables
    SELECT * FROM dbo.alphabet;  -- 1 row
    SELECT * FROM dbo.alphabet_history;  -- 12 rows
    ---------- END SECTION 1 ----------

    -- run temporal queries

    ---------- BEGIN SECTION 2 ----------

    -- use the ALL expression

    SELECT * FROM dbo.alphabet  -- 13 rows
    FOR SYSTEM_TIME ALL;

    SELECT * FROM dbo.alphabet  -- 13 rows
    FOR SYSTEM_TIME ALL
    ORDER BY sysstart;

    SELECT * FROM dbo.alphabet  -- 13 rows
    FOR SYSTEM_TIME ALL
    ORDER BY sysend;

    ---------- END SECTION 2 ----------

    ---------- BEGIN SECTION 3 ----------

    -- AS OF datetime
    --    means
    --    datetime value >= starting datetime
    --    and
    --    datetime value < ending datetime

    DECLARE @tempus DATETIME2 = CAST('2015-01-15' AS DATETIME2);
    SELECT *,'2015-01-15' FROM dbo.alphabet  -- data is actually in alphabet_history
    FOR SYSTEM_TIME AS OF @tempus;

    SET @tempus = CAST('2015-01-31' AS DATETIME2);
    SELECT *,'2015-01-31' FROM dbo.alphabet  -- data is actually in alphabet_history
    FOR SYSTEM_TIME AS OF @tempus;

    SET @tempus = CAST('2015-01-31 12:34:56' AS DATETIME2);
    SELECT *,'2015-01-31 12:34:56' FROM dbo.alphabet  -- data is actually in alphabet_history
    FOR SYSTEM_TIME AS OF @tempus;

    SET @tempus = CAST('2015-02-01' AS DATETIME2);
    SELECT *,'2015-02-01' FROM dbo.alphabet  -- data is actually in alphabet_history
    FOR SYSTEM_TIME AS OF @tempus;

    SET @tempus = CAST('2015-05-31' AS DATETIME2);  -- this day is undefined when using AS OF
    SELECT *,'2015-05-31' FROM dbo.alphabet  -- no data because the 2015-05-31 isn't < end datetime
    FOR SYSTEM_TIME AS OF @tempus;

    SET @tempus = CAST('2015-05-31 12:34:56' AS DATETIME2);  -- also undefined when using AS OF
    SELECT *,'2015-05-31 12:34:56' FROM dbo.alphabet  -- no data because it isn't < end datetime
    FOR SYSTEM_TIME AS OF @tempus;

    SET @tempus = CAST('2015-06-01' AS DATETIME2);
    SELECT *,'2015-06-01' FROM dbo.alphabet  -- works because 2015-06-01 is >= start datetime
    FOR SYSTEM_TIME AS OF @tempus;

    ---------- END SECTION 3 ----------

    ---------- BEGIN SECTION 4 ----------

    -- use BETWEEN expression

    DECLARE @startBetween DATETIME2 = CAST('2015-01-01' AS DATETIME2);
    DECLARE @endBetween   DATETIME2 = CAST('2015-01-31' AS DATETIME2);
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME BETWEEN @startBetween AND @endBetween;

    SET @endBetween = CAST('2015-01-31 12:34:56' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME BETWEEN @startBetween AND @endBetween;

    SET @endBetween = CAST('2015-02-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME BETWEEN @startBetween AND @endBetween;

    SET @startBetween = CAST('2015-01-31' AS DATETIME2)
    SET @endBetween   = CAST('2015-02-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME BETWEEN @startBetween and @endBetween;

    SET @startBetween = CAST('2015-06-30' AS DATETIME2)
    SET @endBetween   = CAST('2015-07-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME BETWEEN @startBetween AND @endBetween;

    SET @startBetween = CAST('2015-04-15' AS DATETIME2)
    SET @endBetween   = CAST('2015-09-15' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME BETWEEN @startBetween AND @endBetween;

    ---------- END SECTION 4 ----------

    ---------- BEGIN SECTION 5 ----------

    -- use FROM expression

    DECLARE @startFrom DATETIME2 = CAST('2015-01-01' AS DATETIME2);
    DECLARE @endFrom   DATETIME2 = CAST('2015-01-31' AS DATETIME2);
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME FROM @startFrom TO @endFrom;

    SET @endFrom = CAST('2015-01-31 12:34:56' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME FROM @startFrom TO @endFrom;

    SET @endFrom = CAST('2015-02-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME FROM @startFrom TO @endFrom;  -- only returns one row

    SET @startFrom = CAST('2015-01-31' AS DATETIME2)
    SET @endFrom   = CAST('2015-02-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME FROM @startFrom TO @endFrom;

    SET @startFrom = CAST('2015-06-30' AS DATETIME2)
    SET @endFrom   = CAST('2015-07-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME FROM @startFrom TO @endFrom;  -- no rows because of the gap

    SET @startFrom = CAST('2015-04-15' AS DATETIME2)
    SET @endFrom   = CAST('2015-09-15' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME FROM @startFrom TO @endFrom;  -- rows overlapping the range are returned despite the gaps

    ---------- END SECTION 5 ----------

    ---------- BEGIN SECTION 6 ----------

    -- use CONTAINED IN expression

    DECLARE @startContained DATETIME2 = CAST('2015-01-01' AS DATETIME2);
    DECLARE @endContained   DATETIME2 = CAST('2015-01-31' AS DATETIME2);
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME CONTAINED IN (@startContained, @endContained);  -- no rows

    SET @startContained = CAST('2014-12-31' AS DATETIME2)
    SET @endContained   = CAST('2015-02-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME CONTAINED IN (@startContained, @endContained);

    SET @endContained   = CAST('2015-03-01' AS DATETIME2)
    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME CONTAINED IN (@startContained, @endContained);

    ---------- END SECTION 6 ----------

    ---------- BEGIN SECTION 7 ----------

    -- Even though the sysstart and sysend columns aren't referenced
    -- in the UPDATE, they are automatically updated. Also notice
    -- that the row containing the letter X is deleted from the
    -- alphabet table and inserted into the alphabet_history table.

    UPDATE dbo.alphabet
    SET letter = 'Y';

    SELECT * FROM dbo.alphabet;

    SELECT * FROM dbo.alphabet
    FOR SYSTEM_TIME ALL;

    ---------- END SECTION 7 ----------

    -- can't drop a table that is participating in temporal SYSTEM_VERSIONING
    -- turn off SYSTEM_VERSIONING to allow DROP TABLE to work
    ALTER TABLE dbo.alphabet SET (SYSTEM_VERSIONING = OFF);
    DROP TABLE dbo.alphabet;          -- tables are no longer associated and
    DROP TABLE dbo.alphabet_history;  -- can be dropped in either order

    -- it is possible to create the main table and the history table
    -- with a single CREATE TABLE statement.
    -- Notice that this CREATE TABLE also contains GENERATED ALWAYS which
    -- prevents the insertion of explicit values.
    -- After creating a table using this syntax, refresh SSMS and expand
    -- the table in the Object Explorer. You will see that the history table
    -- was automatically created for you.

    CREATE TABLE dbo.alphabet
    (
         letter   CHAR(1)   NOT NULL PRIMARY KEY CLUSTERED  -- must have a primary key
        ,sysstart DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL DEFAULT SYSUTCDATETIME()
        ,sysend   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999')
        ,PERIOD FOR SYSTEM_TIME (sysstart, sysend)
    )
    WITH (SYSTEM_VERSIONING = ON)
    ;
    GO

  • SQL Server 2016 Documentation Survey

    The SQL Server product team wants to hear from you. Here’s your chance to say what you expect from documentation. Hurry – the survey closes on March 18.

    http://www.instant.ly/s/cwef3/nav#p/186a0

  • SQL Server on Linux–sign up for the preview

    SQL Server runs on Linux now. Apply for the preview at https://www.microsoft.com/en-us/server-cloud/sql-server-on-linux.aspx

  • SQL Server, @@VERSION, and Hyper-V

    Yesterday my friend Kalen Delaney asked me about @@VERSION showing (HYPERVISOR) even though she wasn’t running inside a virtual machine. I was able to replicate the behavior on my machines. I asked my colleagues at Microsoft about this. It was confirmed that it is by design. The addition of (HYPERVISOR) to the output indicates only whether or not Hyper-V is currently enabled on the machine. It is not intended to indicate if SQL Server is running inside a Hyper-V virtual machine. Disable Hyper-V on the machine and the text goes away.
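
    If you want to check for yourself, the query is simply this:

    SELECT @@VERSION;
    -- When Hyper-V is enabled on the machine, the returned string ends with
    -- (HYPERVISOR), even if SQL Server itself is not running in a virtual machine.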

    I did not have access to any machines with hypervisors other than Hyper-V.

    If you didn’t intentionally enable Hyper-V on your machine, installing Visual Studio may have done it for you, depending on the options you selected.

    image

  • Protecting Credit Card Data

    Although the latter part of this post discusses Always Encrypted in SQL Server 2016, I’m starting with the topic of protecting your own credit cards. I have a certain credit card that will remain nameless that we use for as many purchases as possible because of the cash rebate it offers. It’s like electronic cash to us and always paid in full. Because of my card’s widespread use, it seems to get compromised about every 10 months. Fraudulent charges appear and I end up spending some amount of time on the phone affirming that a whole bunch of charges were fraudulent. The credit card issuer removes the fraudulent charges, cancels the old card, and sends a new card.

    The problem with using one card for everything arises when you have preauthorized payments for things like insurance and utilities. Having your card cancelled is very bad for preauthorized payments. You can end up with late fees or service disruptions when a scheduled payment is attempted against a cancelled card. My wife and I got tired of this problem and adopted a new strategy. I have a card and she has a card from the same issuer offering the same cash rebate. Her card is used exclusively for preauthorized payments. My card is used for everything except preauthorized payments. Her card has never left the house. When my card was last compromised, it was cancelled. Since our preauthorized payments were tied to her card, they weren’t affected. We avoided the inconvenience of logging on to a dozen websites and providing new credit card data.

    SQL Server 2016 has a new encryption feature, Always Encrypted. It’s well suited for encrypting credit card numbers in a database. It’s perfect for when you want to store encrypted data and keep even the all-powerful DBA from seeing the actual unencrypted data values. Read more about Always Encrypted here.
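
    As a minimal sketch of what that looks like in T-SQL, here is a hypothetical payment table with an Always Encrypted credit card column. The column encryption key name CEK_Auto1 is an assumption; a column master key and column encryption key must already exist, typically provisioned through the SSMS Always Encrypted wizard.

    -- Hypothetical table; assumes a column master key and the column
    -- encryption key CEK_Auto1 have already been provisioned.
    CREATE TABLE dbo.Payment
    (
         PaymentID        INT IDENTITY(1,1) NOT NULL PRIMARY KEY
        ,CardholderName   NVARCHAR(100) NOT NULL
        ,CreditCardNumber VARCHAR(19) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH
            (
                 COLUMN_ENCRYPTION_KEY = CEK_Auto1
                ,ENCRYPTION_TYPE       = DETERMINISTIC
                ,ALGORITHM             = 'AEAD_AES_256_CBC_HMAC_SHA_256'
            ) NOT NULL
    );

    A DBA querying the table directly sees only ciphertext. Plaintext is visible only to a client application that has access to the column master key and connects with Column Encryption Setting=Enabled.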
