
John Paul Cook

  • Starting PowerPoint slide show from any slide

    If you do a lot of presentations, the default configuration for PowerPoint might not be what you want. Sometimes you don’t want to start the slide show from the beginning. Instead, you want to pick any arbitrary slide and start the slide show from that point. You can either use the keyboard combination of Shift and F5 or you can add an icon to the Quick Access Toolbar to do this.

    image

    Figure 1. Default configuration of PowerPoint.

    image

    Figure 2. Adding to the Quick Access Toolbar.

    In PowerPoint Options, within Quick Access Toolbar settings, select Start From This Slide and click Add >> to add the icon to the toolbar.

    image

    Figure 3. Add Start From This Slide to the toolbar.

    image

    Figure 4. Toolbar with Start From This Slide icon added.

  • Reserve your free copy of Windows 10

    If you have a retail version of Windows 7 or Windows 8, go to http://www.microsoft.com/EN-US/windows and click Reserve Windows 10 to sign up for your free copy of Windows 10. The Windows 10 release date is July 29, 2015. This offer does not apply to non-retail versions such as Enterprise edition.

    image

    Figure 1. Upgrade reservation obtained on Surface Pro 3 purchased last year.

    image

    Figure 2. Upgrade not available message seen on ineligible Windows 8.1 Enterprise edition.

  • SQL Server 2012 T-SQL language enhancements

    Part of my job with Microsoft includes demonstrating SQL Server features. I have a set of five T-SQL scripts that I use for demonstrating SQL Server 2012 language enhancements which I am providing to you below. These scripts were actually tested on SQL Server 2014, which is what I currently use for all of my demonstrations. The scripts use the AdventureWorks2014 and AdventureWorksDW2014 databases found on Codeplex.

    These scripts are not intended as comprehensive tutorials for beginners. They are intended to make an experienced T-SQL developer aware of new features introduced in SQL Server 2012.

    ANALYTIC FUNCTIONS    
     

    -- you probably should execute one query at a time so you can follow along and understand

    -- create a table that has a column for ordering the data
    CREATE TABLE #numbers (
         nbr INT
        ,tempus DATE -- used for ordering the data
    );

    -- insert some sample data
    INSERT INTO #numbers (tempus, nbr)
    VALUES
         ('1/1/2018', 1)
        ,('2/2/2018', 2)
        ,('3/3/2018', 3)
        ,('4/4/2018', 4)
        ,('5/5/2018', 5)
        ,('6/6/2018', 6)
        ,('7/7/2018', 7)
        ,('8/8/2018', 8)
        ,('9/9/2018', 9);

    -- run an ordinary query ordering by the tempus column
    SELECT nbr, tempus
    FROM #numbers
    ORDER BY tempus;

    -- return the nbr values from the previous and following rows
    -- the first row retrieved has a NULL for the previous nbr
    -- the last row retrieved has a NULL for the following nbr
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
    FROM #numbers
    ORDER BY tempus;

    -- show the nbr value in the current row and in the previous row
    -- change the sort order of the overall query to see what happens
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
    FROM #numbers
    ORDER BY tempus desc;

    -- no surprises in the previous query
    -- now change the sort order for the LEAD
    -- the LEAD is now functionally providing the same results as the LAG
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus desc) AS nextNbr
    FROM #numbers
    ORDER BY tempus;

    -- change the LEAD to a LAG
    -- a descending LAG works like an ascending LEAD
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus desc) AS nextNbr
    FROM #numbers
    ORDER BY tempus;

    -- return the first value
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
          ,FIRST_VALUE(nbr) OVER (ORDER BY tempus) AS firstNbr
    FROM #numbers
    ORDER BY tempus;

    -- return the last value
    -- notice how it is really the last value so far
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
          ,FIRST_VALUE(nbr) OVER (ORDER BY tempus) AS firstNbr
          ,LAST_VALUE(nbr) OVER (ORDER BY tempus) AS lastNbr
    FROM #numbers
    ORDER BY tempus;

    -- modify the code to always return the last value
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
          ,FIRST_VALUE(nbr) OVER (ORDER BY tempus) AS firstNbr
          ,LAST_VALUE(nbr) OVER (ORDER BY tempus ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS lastNbr
    FROM #numbers
    ORDER BY tempus;

    -- this returns the same results as the previous query

    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
          ,FIRST_VALUE(nbr) OVER (ORDER BY tempus) AS firstNbr
          ,LAST_VALUE(nbr) OVER (ORDER BY tempus ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS lastNbr
    FROM #numbers
    ORDER BY tempus;

    -- apply the boundary condition to FIRST_VALUE to see what happens
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
          ,FIRST_VALUE(nbr) OVER (ORDER BY tempus ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS firstNbr
          ,LAST_VALUE(nbr) OVER (ORDER BY tempus ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS lastNbr
    FROM #numbers
    ORDER BY tempus;

    -- fix the previous query to always show the very first value
    SELECT nbr
          ,LAG (nbr, 1) OVER (ORDER BY tempus) AS prevNbr
          ,LEAD(nbr, 1) OVER (ORDER BY tempus) AS nextNbr
          ,FIRST_VALUE(nbr) OVER (ORDER BY tempus ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS firstNbr -- UNBOUNDED FOLLOWING can be used instead of CURRENT ROW
          ,LAST_VALUE(nbr) OVER (ORDER BY tempus ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS lastNbr
    FROM #numbers
    ORDER BY tempus;

    DROP TABLE #numbers;


    -- create a table of monthly sales amounts
    CREATE TABLE #sales (
         amount INT
        ,tempus DATETIME
    );

    -- insert two years of month-end sales figures
    INSERT INTO #sales (amount, tempus)
    VALUES
     ( 10, CAST('01/31/2014' AS DATETIME))
    ,( 20, CAST('02/28/2014' AS DATETIME))
    ,( 30, CAST('03/31/2014' AS DATETIME))
    ,( 40, CAST('04/30/2014' AS DATETIME))
    ,( 50, CAST('05/31/2014' AS DATETIME))
    ,( 60, CAST('06/30/2014' AS DATETIME))
    ,( 70, CAST('07/31/2014' AS DATETIME))
    ,( 80, CAST('08/31/2014' AS DATETIME))
    ,( 90, CAST('09/30/2014' AS DATETIME))
    ,(100, CAST('10/31/2014' AS DATETIME))
    ,(110, CAST('11/30/2014' AS DATETIME))
    ,(120, CAST('12/31/2014' AS DATETIME))
    ,(130, CAST('01/31/2015' AS DATETIME))
    ,(100, CAST('02/28/2015' AS DATETIME))
    ,(110, CAST('03/31/2015' AS DATETIME))
    ,(120, CAST('04/30/2015' AS DATETIME))
    ,(120, CAST('05/31/2015' AS DATETIME))
    ,(100, CAST('06/30/2015' AS DATETIME))
    ,(150, CAST('07/31/2015' AS DATETIME))
    ,(155, CAST('08/31/2015' AS DATETIME))
    ,( 80, CAST('09/30/2015' AS DATETIME))
    ,(160, CAST('10/31/2015' AS DATETIME))
    ,(165, CAST('11/30/2015' AS DATETIME))
    ,(170, CAST('12/31/2015' AS DATETIME));

    -- 12-month moving average: the current row plus the 11 preceding rows
    SELECT tempus
          ,amount
          ,AVG(amount) OVER (ORDER BY tempus
                             ROWS 11 PRECEDING) AS movingAvg
    FROM #sales
    ORDER BY tempus;

    -- add a 12-month moving sum alongside the moving average
    SELECT tempus
          ,amount
          ,AVG(amount) OVER (ORDER BY tempus
                             ROWS 11 PRECEDING) AS movingAvg
          ,SUM(amount) OVER (ORDER BY tempus
                             ROWS 11 PRECEDING) AS movingSum
    FROM #sales
    ORDER BY tempus;
     

    USE AdventureWorksDW2014
    GO

    -- official Microsoft examples from the SQL Server 2014 Update for Developers Training Kit
    -- http://www.microsoft.com/en-us/download/details.aspx?id=41704
    -- find the number of days since each product was last ordered
    SELECT rs.ProductKey, rs.OrderDateKey, rs.SalesOrderNumber,
           rs.OrderDateKey - (SELECT TOP(1) prev.OrderDateKey
                              FROM dbo.FactResellerSales AS prev
                              WHERE rs.ProductKey = prev.ProductKey
                              AND prev.OrderDateKey <= rs.OrderDateKey
                              AND prev.SalesOrderNumber < rs.SalesOrderNumber
                              ORDER BY prev.OrderDateKey DESC,
                              prev.SalesOrderNumber DESC)
           AS DaysSincePrevOrder
    FROM dbo.FactResellerSales AS rs
    ORDER BY rs.ProductKey, rs.OrderDateKey, rs.SalesOrderNumber;

    -- use LAG to simplify the query and speed it up
    SELECT ProductKey, OrderDateKey, SalesOrderNumber,
           OrderDateKey - LAG(OrderDateKey)
                          OVER (PARTITION BY ProductKey
                                ORDER BY OrderDateKey,
                                SalesOrderNumber)
    AS DaysSincePrevOrder
    FROM dbo.FactResellerSales AS rs
    ORDER BY ProductKey, OrderDateKey, SalesOrderNumber;

     

    FUNCTIONS

    -- find the last day of the current month

    SELECT DATEADD(D, -1, DATEADD(M, DATEDIFF(M, 0, CURRENT_TIMESTAMP) + 1, 0)); -- one of several DATEADD techniques
    SELECT EOMONTH(CURRENT_TIMESTAMP); -- much easier the new way

    -- locale aware date formatting
    SELECT FORMAT(CURRENT_TIMESTAMP, 'D', 'en-US'), FORMAT(CURRENT_TIMESTAMP, 'D', 'en-gb'), FORMAT(CURRENT_TIMESTAMP, 'D', 'de-de');


    SELECT LOG(10); -- use natural log to find number of years to obtain 10x growth assuming 100% growth compounded continuously

    SELECT LOG10(10);

    SELECT LOG(10, 2); -- now you can specify a different base, such as 2 shown here


    SELECT IIF ( 2 > 1, 'true', 'false')
         , IIF ( 1 > 2, 'true', 'false');

    -- if you uncomment the next line and execute it, it will generate an error message
    --SELECT CAST('XYZ' AS INT); -- error message because the CAST obviously can't work
    SELECT TRY_CONVERT(INT,'XYZ'); -- returns NULL

    SELECT ISNUMERIC('1')       , ISNUMERIC('A')       , ISNUMERIC('.'); -- 1, 0, 1
    SELECT TRY_PARSE('1' AS INT), TRY_PARSE('A' AS INT), TRY_PARSE('.' AS INT); -- 1, NULL, NULL


    OFFSET and FETCH

    USE AdventureWorks2014 -- or AdventureWorks2012
    GO

    -- look at the SalesTaxRate table to understand the data
    SELECT StateProvinceID, Name, TaxRate
    FROM Sales.SalesTaxRate;

    -- if we want to know the highest tax rates, an ORDER BY is helpful
    SELECT StateProvinceID, Name, TaxRate
    FROM Sales.SalesTaxRate
    ORDER BY TaxRate DESC;

    -- if we want to limit the results to the top 10, we can use non-ANSI TOP
    SELECT TOP 10 StateProvinceID, Name, TaxRate
    FROM Sales.SalesTaxRate
    ORDER BY TaxRate DESC;

    -- change to ANSI SQL
    SELECT StateProvinceID, Name, TaxRate
    FROM Sales.SalesTaxRate
    ORDER BY TaxRate DESC
    OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;

    -- modify the OFFSET to get the second 10 rows
    SELECT StateProvinceID, Name, TaxRate
    FROM Sales.SalesTaxRate
    ORDER BY TaxRate DESC
    OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;

    -- FETCH requires OFFSET, but OFFSET can be used alone
    -- OFFSET without FETCH specifies the starting point without having a boundary
    SELECT StateProvinceID, Name, TaxRate
    FROM Sales.SalesTaxRate
    ORDER BY TaxRate DESC
    OFFSET 10 ROWS;


    SEQUENCES

    USE AdventureWorks2014 -- or AdventureWorks2012
    GO

    CREATE SEQUENCE dbo.SeqDemoId AS INT
    START WITH 1
    INCREMENT BY 10;


    CREATE TABLE dbo.SeqDemoTable (
         SomeId INT PRIMARY KEY CLUSTERED
                DEFAULT (NEXT VALUE FOR dbo.SeqDemoId)
        ,SomeString NVARCHAR(25)
    );

    INSERT INTO dbo.SeqDemoTable (SomeString) VALUES (N'demo');

    SELECT * FROM dbo.SeqDemoTable;


    DROP TABLE dbo.SeqDemoTable;

    DROP SEQUENCE dbo.SeqDemoId;

     

    THROW

    -- the CATCH block swallows the error and prints a message instead
    BEGIN TRY
        SELECT 1 / 0;
    END TRY
    BEGIN CATCH
        PRINT 'Divide by 0';
    END CATCH;

    -- THROW re-raises the original error after the PRINT
    BEGIN TRY
        SELECT 1 / 0;
    END TRY
    BEGIN CATCH
        PRINT 'Divide by 0';
        THROW; -- use this if you still want to see the error message
    END CATCH;

  • Using Data for Product Recalls to Keep People Alive

    I just received a phone call from Kroger with a recorded message informing me that someone in my household purchased Blue Bell ice cream that may be part of the Listeria recall. As both a nurse and a database technologist who is very concerned about health, privacy, and security, I found the call interesting on many levels. The call was possible because of Kroger’s customer loyalty program. Kroger knows both how to contact my family and what we purchase. Of course, it requires that we opt in to the customer loyalty program at the point of purchase. Sometimes we don’t. Until today, I never associated customer loyalty programs with public health announcements. This is a great use of data warehousing technology. Do you think you could do something good like this where you work?

    UPDATE: I went to Kroger to purchase produce and decided to provide my customer loyalty number. An alert appeared on the self-checkout terminal telling me to read the important message on my cash register receipt. The receipt mentioned that possibly affected products had been purchased and provided useful details.

  • Coping with Little Data

    With all of the hype about Big Data, Little Data is being overlooked. Not every business has zettabytes of data. Businesses that have a few bits here and maybe a few bytes there are being overlooked, but there is hope on the horizon. The most fundamental part of the Little Data ecosystem is Gnorm. Gnorm is named after the gnat in Jim Davis’s comic strip Gnorm Gnat. Jim wasn’t happy with a small success, so he abandoned Gnorm and created Garfield. But enough about Jim.

    Apache Gnorm is a set of algorithms for undistributed storage and undistributed processing of very small data sets on a single desktop computer. It was ported from an abacus. MapExpand is used to process the data into something large enough to see. Apache Hive is overkill for processing Little Data, so the developers created Apache Cell after extracting a single cell from Apache Hive to use as the data warehouse. Version 1 was a worker bee cell, but in version 2 it was adapted from a queen bee cell. Similarly, Apache Zookeeper is too large for coordination of Little Data tasks, so Apache Petri was created. Real-time analysis is done with Breeze and machine learning is done with 65.

    I spoke with Liz N. Knot of the IT recruiting firm Loosey and Guice. She said it is very difficult finding IT professionals for Little Data projects. Her clients want to solve simple problems, such as whether the business brought in enough money to cover expenses today, but so many applicants only want to do correlations using R or Python. She just can’t get them to switch over to Worm.

  • Microsoft is Hiring for HoloLens

    If you go to https://careers.microsoft.com/search.aspx and use HoloLens as your search term, you’ll see that Microsoft is serious about this product. As of today, I found 100 job openings for HoloLens in the United States alone. Don’t know what HoloLens is? Take a look here: http://www.microsoft.com/microsoft-hololens/en-us

    As augmented reality and virtual reality become more commonplace, we’re going to be dealing with lots of large video files such as mp4 files. This creates data management challenges similar to what we see in databases. We use metadata to help manage data in a relational database. We can add metadata to video files, both to aid in discoverability and search as well as to add context. For example, an audiovisual (AV) recording of surgery by itself lacks context. It’s possible to add a track to the AV file that includes the patient’s electrocardiograph captured in real time. The viewer could view the metadata at the same time as the AV recording.

  • Office Mix for PowerPoint

    Office Mix is a free add-in for PowerPoint 2013. It records the screen: either a PowerPoint presentation or a specific region of your screen that you choose. It can record video, audio, or both. It also allows you to annotate your presentations with digital ink.

    image

    Figure 1. Technically Mix is a beta product.

    After installing Mix, a new Mix menu bar is added. Notice that it even has a feature for creating quizzes.

    image

    Figure 2. Mix menu bar.

    Learn more about it and download Mix from here.

  • Backup Basics–Doing it Faster

    Several customers have told me that backing up their databases takes longer than they are comfortable with. Others have told me their backups take a long time but they don’t mind. Whatever the case, getting things done more quickly is usually worth at least a quick look.

    There are two basic approaches to speeding up a backup. One is to have smaller backups. The other is to split the work into multiple streams that are done in parallel. Each has its advantages and disadvantages, which aren’t the same for everybody. Some organizations may not have enough disks or LUNs to parallelize a backup operation. Others may not be able to spare the CPU cycles for compressing backups.

    SQL Server does have some built-in features that you can experiment with to find your optimal approach to database backups. All of my tests were conducted on my MIMIC II database that I migrated to SQL Server 2014. The database files are on a Plextor SSD and the database backups were written to 7200 rpm Seagate hard drives.

    image

    Figure 1. Baseline for backup with default settings.

    Native compression of backups has been a feature in SQL Server since SQL Server 2008 Enterprise. I found differing advice on restoring a compressed SQL Server backup, so I asked SQL Server product team members for a clarification: you can restore a compressed backup to an equal or higher version of SQL Server. Compression reduces the amount of disk activity at the cost of using more processor time to do the compression. If you have CPU cycles to spare, the cost of doing the compression should be more than offset by the sharply reduced amount of writes to disk.

    image

    Figure 2. Enabling backup compression under Backup Options.

    image

    Figure 3. Smaller backup from enabling compression is written much faster.
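
    If you prefer T-SQL over the SSMS backup dialog, compression is a single WITH option. Here is a minimal sketch; the database name and backup path are placeholders, not values from my tests:

    -- minimal sketch of a compressed backup; MyDatabase and the path are placeholders
    BACKUP DATABASE MyDatabase
    TO DISK = N'E:\Backups\MyDatabase.bak'
    WITH COMPRESSION, STATS = 10; -- STATS reports progress every 10 percent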

    Another approach is to split the work of writing to disk into multiple streams. This is called striping. Since I have three spinning disks, I split the workload into three stripes.

    image

    Figure 4. Striping to parallelize the writes.
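
    Striping is also just T-SQL: list more than one destination file, ideally on separate physical drives. A minimal sketch with placeholder names and paths, combining striping with compression:

    -- minimal sketch of a striped, compressed backup; the database name and paths are placeholders
    BACKUP DATABASE MyDatabase
    TO DISK = N'E:\Backups\MyDatabase_1.bak',
       DISK = N'F:\Backups\MyDatabase_2.bak',
       DISK = N'G:\Backups\MyDatabase_3.bak'
    WITH COMPRESSION, STATS = 10;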

    You should conduct your own tests in your own environment. Expect your results to vary from mine. There are many fine third party solutions from various vendors that provide compression and striping. Some solutions are smart about not automatically doing a full backup if a previous one exists and few data changes have been made since the last full backup.

    Third party tools are particularly appealing to people who don’t like to write custom scripts. For those of you who want to script your own solution, Ola Hallengren published some great scripts here.

    Collect your baseline performance data, conduct some tests, and speed things up!

  • In The Cloud: Azure Operational Insights for Server Monitoring

    Microsoft is now offering a preview version of Azure Operational Insights, which is a cloud-based server monitoring solution and replacement for System Center Advisor. I decided to try it out on my desktop’s default installation of SQL Server 2014. I picked this SQL Server instance because it’s not configured how I would configure a production server. I wanted to see what things Azure Operational Insights would find.

    I went to http://azure.microsoft.com/en-us/services/operational-insights/ to get started. After logging in to the Azure Portal, I created an Operational Insights workspace named jpcoperationalinsights.

    image

    Figure 1. Creation of Operational Insights workspace.

    Operational Insights has Intelligence Packs, which are similar to System Center Operations Manager (SCOM) management packs. I added the SQL Assessment Intelligence Pack so that I could monitor SQL Servers.

    image

    Figure 2. Add SQL Assessment Intelligence Pack to monitor SQL Server.

    On the Operational Insights Overview page, click Servers and Usage to advance to the Overview > Usage page.

    image

    Figure 3. Clicking Servers and Usage takes you to the Overview > Usage page.

    Operational Insights can connect to SCOM directly. Since I don’t have SCOM on my home desktop, I needed to install an agent on my machine to monitor SQL Server and report the results to Operational Insights running in Azure. This requires clicking Configure on the Overview > Usage page to advance to the Configure > Usage > Direct Agent Configuration page.

    image

    Figure 4. Click Configure.

    On the Configure > Usage > Direct Agent Configuration page, use the Agent (64-bit) link to install the agent on the server to be monitored. The Workspace ID and primary workspace key are needed to connect the agent to Azure.

    image

    Figure 5. You need the Workspace ID and primary workspace key to connect the agent to Operational Insights.

    When running the installer for the monitoring agent, specify if the agent will connect directly to Azure Operational Insights or to SCOM.

    image

    Figure 6. Connecting the agent directly to Operational Insights.

    You must enter the Workspace ID and primary workspace key during the installation of the monitoring agent if you are connecting directly to Operational Insights.

    image

    Figure 7. Specifying the connection details.

    After a few hours, I returned to Operational Insights to see what it found. The report categories are:

    • Security and Compliance
    • Availability and Business Continuity
    • Performance and Scalability
    • Upgrade, Migration and Deployment
    • Operations and Monitoring
    • Change and Configuration Management

    image

    Figure 8. Partial screen capture of SQL Assessment reports.

    The Security and Compliance report details indicated that Windows password policies weren’t being applied to SQL Server logins.

    image

    Figure 9. Recommendation for applying Windows password policies to SQL Server logins.
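
    If you want to act on that recommendation in T-SQL rather than through the SSMS dialogs, a minimal sketch is below; the login name is a placeholder:

    -- minimal sketch: enforce the Windows password policy on an existing SQL Server login
    -- SomeSqlLogin is a placeholder name
    ALTER LOGIN SomeSqlLogin WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;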

    On the Availability and Business Continuity report, we see both high priority and low priority recommendations. There is a high priority recommendation to schedule full backups at least weekly.

    image

    Figure 10. Details of Availability and Business Continuity report.

    All of the recommendations were good. Operational Insights documentation is found here.

  • In The Cloud: Enhancements to Azure SQL Database

    Last week Microsoft released a preview of the next version of Azure SQL Database. The feature set is nearly complete when compared to the standalone version of SQL Server 2014, and a clustered index on every table is no longer required. To help highlight the differences, I’ve used the SQL Database Migration Wizard available on Codeplex in a side-by-side comparison.

    image

    Figure 1. SQL Database Migration Wizard start page.

    AdventureWorks2014 is used in the compatibility analyses.

    image

    Figure 2. Azure SQL Database version 1 on the left and Azure SQL Database Update V12 on the right.

    image

    Figure 3. Choose Objects.

    image

    Figure 4. Script Wizard Summary.

    Notice that the Results Summary shows very few incompatibilities between AdventureWorks2014 on SQL Server 2014 and on SQL Database Update V12. As the output shows, full text search is not supported in the preview version of SQL Database.

    image

    Figure 5. Notice the scroll bar on the left window. AdventureWorks2014 is almost a direct port to SQL Database Update V12.

    Since none of the actual databases I’m migrating to SQL Database use full text search features, my databases are completely compatible with SQL Database Update V12.

    The premium tier of the SQL Database Update V12 preview supports in-memory columnstore queries for greatly improved performance. T-SQL windowing functions are also supported. For more information about what is supported, look here.

    You must use the preview Microsoft Azure management portal at http://portal.azure.com/ to access SQL Database V12. For more information on getting started, look here.

  • In The Cloud: Manually Migrating a Hyper-V VM to Azure

    Azure is great for sharing reproductions of difficult problems. I managed to reproduce an application problem in a virtual machine on my laptop. It’s really not reasonable to ask others to follow a long set of steps to build a matching test environment just to help me. Besides, something could go wrong. I had the perfect test case, but only in my local Hyper-V environment. I realized this is a perfect use case for migrating a vm to Azure. I learned a few things along the way that may be of interest. One of the most important things I learned is to make sure the vm is configured to allow remote desktop connections before migrating it to Azure. An Azure vm that can’t accept Remote Desktop connections is pretty close to useless.

    Azure requires fixed size vhd files. I had dynamically expanding vhdx files. Use PowerShell to convert vhdx to vhd.

    Windows PowerShell
    Copyright (C) 2013 Microsoft Corporation. All rights reserved.

    PS C:\Users\j.NA> Convert-VHD

    cmdlet Convert-VHD at command pipeline position 1
    Supply values for the following parameters:
    Path: D:\Hyper-V\SQLServer2005\VHDs\SQLServer2005.vhdx
    DestinationPath: D:\Hyper-V\SQLServer2005\VHDs\SQLServer2005.vhd
    PS C:\Users\j.NA> Convert-VHD

    cmdlet Convert-VHD at command pipeline position 1
    Supply values for the following parameters:
    Path: D:\Hyper-V\SQLServer2005\VHDs\D.vhdx
    DestinationPath: D:\Hyper-V\SQLServer2005\VHDs\D.vhd
    PS C:\Users\j.NA>

    image

    Figure 1. PowerShell converting vhdx to vhd.

    Next I used the Microsoft Web Platform Installer to install Azure PowerShell. I used Add-AzureAccount to connect my Azure PowerShell session to my Azure account, which prompted me to enter my credentials. Add-AzureVhd takes care of the conversion from a dynamically expanding to a fixed size vhd during the upload.

    image

    Figure 2. Authenticating to Azure in Azure PowerShell.

    Windows PowerShell
    Copyright (C) 2013 Microsoft Corporation. All rights reserved.

    For a list of all Azure cmdlets type 'help azure'.
    For a list of Windows Azure Pack cmdlets type 'Get-Command *wapack*'.
    For Node.js cmdlets type 'help node-dev'.
    For PHP cmdlets type 'help php-dev'.
    For Python cmdlets type 'help python-dev'.
    PS C:\> Add-AzureAccount
    VERBOSE: Account "microsoft@microsoft.com" has been added.
    VERBOSE: Subscription "Microsoft Azure" is selected as the default subscription.
    VERBOSE: To view all the subscriptions, please use Get-AzureSubscription.
    VERBOSE: To switch to a different subscription, please use Select-AzureSubscription.

    Id                             Type       Subscriptions                          Tenants
    --                             ----       -------------                          -------
    microsoft@microsoft.com        User       eefffffa-ffff-ffff-9fff-9fffffffffff   72ffffff-ffff-ffff-ffff-ffffffffffff


    PS C:\> Add-AzureVhd

    cmdlet Add-AzureVhd at command pipeline position 1
    Supply values for the following parameters:
    (Type !? for Help.)
    Destination:
    https://jstorage.blob.core.windows.net/vhds/SQLServer2005.vhd
    LocalFilePath:
    D:\Hyper-V\SQLServer2005\VHDs\SQLServer2005.vhd
    MD5 hash is being calculated for the file  D:\Hyper-V\SQLServer2005\VHDs\SQLServer2005.vhd.
    MD5 hash calculation is completed.
    Elapsed time for the operation: 00:06:03
    Creating new page blob of size 136365212160...
    Elapsed time for upload: 01:50:39

    LocalFilePath                                               DestinationUri
    -------------                                               --------------
    D:\Hyper-V\SQLServer2005\VHDs\SQLServer2005.vhd             https://jstorage.blob.core.windows.net/vhds/SQLServer...


    PS C:\> Add-AzureVhd

    cmdlet Add-AzureVhd at command pipeline position 1
    Supply values for the following parameters:
    (Type !? for Help.)
    Destination: https://jstorage.blob.core.windows.net/vhds/D.vhd
    LocalFilePath: D:\Hyper-V\SQLServer2005\VHDs\D.vhd
    MD5 hash is being calculated for the file  D:\Hyper-V\SQLServer2005\VHDs\D.vhd.
    MD5 hash calculation is completed.
    Elapsed time for the operation: 00:04:50
    Creating new page blob of size 136365212160...
    Elapsed time for upload: 00:00:54

    LocalFilePath                                               DestinationUri
    -------------                                               --------------
    D:\Hyper-V\SQLServer2005\VHDs\D.vhd                         https://jstorage.blob.core.windows.net/vhds/D.vhd


    PS C:\>

    Once the vhds are in an Azure blob storage container, it’s time to create disks for the virtual machine. Select CREATE A DISK to start the wizard.

    image

    Figure 3. Create a disk from a vhd.

    image

    Figure 4. Selecting a vhd from the vhds container.

     

    image

    Figure 5. After selecting the vhd where the OS is installed, check The VHD contains an operating system.

    The process was repeated for the virtual machine’s D drive, which is just a data disk.

    Next, an Azure virtual machine was created. Notice that the operating system disk created in the previous steps appears under MY DISKS.

    image

    Figure 6. Selecting the operating system disk for the virtual machine.

    After the virtual machine was created, but before it was started, the data disk was added by using the options at the bottom of the Azure portal page.

    image

    Figure 7. Adding the data disk to the vm.

    Once the virtual machine was migrated to Azure, I could easily share it with my colleagues so they could see the problem quickly from wherever they are.

  • BI Beginner: Creating SSIS projects for 64-bit Office

    SQL Server Data Tools for Business Intelligence (SSDT-BI) is the replacement for Business Intelligence Development Studio (BIDS). SSDT-BI is a 32-bit application. If you have 64-bit Office installed instead of 32-bit Office, you will encounter problems when attempting to create SSIS packages that use Excel files. This is easily resolved by installing 32-bit drivers from here. Click the Download button and select AccessDatabaseEngine.exe.

    Here’s what appears in SSDT-BI when you don’t have 32-bit drivers.

    image

    Figure 1. SSDT-BI without 32-bit drivers for Excel.

    After installing the 32-bit drivers mentioned above, the worksheet can be selected.

    image

    Figure 2. Selecting the worksheet from the dropdown list.

    That solves the bitness problem and uncovers a permissions problem. We see “[SQL Server Destination [18]] Error: Unable to bulk copy data. You may need to run this package as an administrator.” along with error code 0xC0202071.

    image

    image

    Figure 3. Error importing data from Excel even though connections are valid.

    Running SSDT-BI (i.e., Visual Studio) as an administrator is necessary to elevate the permissions to allow access to SQL Server.

    image

    Figure 4. Running SSDT-BI as administrator – good use case for pinning it to the taskbar.

    Running as administrator allows you to rerun the job with elevated privileges to connect to SQL Server.

    image

    Figure 5. Success at last.

  • BI Beginner: Installing SSDT BI

    The Business Intelligence Development Studio (BIDS) does not come with SQL Server 2014 as it did with previous versions of SQL Server. Furthermore, it has been renamed SQL Server Data Tools - Business Intelligence. If you have Visual Studio 2012, download this. If you have Visual Studio 2013, download this. I recommend that you download and save the file and run the saved copy instead of running directly from the link.

    A few individuals have reported seeing a “Same architecture installation” error message. This is most likely caused by not taking the default of Perform a new installation of SQL Server 2014. Pick the default and you should be fine. It does not install a new instance of the SQL Server database engine.

    image

    Figure 1. Take the defaults for Installation Type.

    Expect to reboot. Open Visual Studio. Go to FILE | New | Project… to see the new templates that have been added to Visual Studio.

    image

    Figure 2. Select Project to see the new Business Intelligence Templates.

    Pick an appropriate template and get down to business.

    image

    Figure 3. Business Intelligence Templates in Visual Studio 2013.

  • In The Cloud: Creating Your First Azure SQL Database

    Creating your first SQL Database in Azure is quick and easy. There are a few things I’ve discovered that might help you be successful faster. Many people have asked me how to get started, so I’ve provided these steps to success. The steps I show you aren’t the only way to achieve success. This is an adaptation of one of my Azure demos.

    First, you need an Azure account. If you have an MSDN subscription, you should be able to use that. Your employer might have Azure credits. If those options aren’t available, you can get a one-month trial here.

    Go to http://azure.microsoft.com and log in.

    image

    Figure 1. Default appearance of SQL DATABASES in Azure.

    Sometimes I simply click CREATE A SQL DATABASE, which is what the default view of the SQL DATABASES page offers. If you’re just starting out on Azure, I suggest that you take a look around to see what all of your options are. That will help you see the big picture. Click on SERVERS.

    image

    Figure 2. Click on SERVERS to see the server view and create a new SQL Database server.

    When working with a physical SQL Server, we create a SQL Server before we create a database. It’s a familiar model, so that’s what I’m showing you how to do in Azure. After you click CREATE A SQL DATABASE SERVER, you’ll be presented with a CREATE SERVER box.

    image

    Figure 3. Be sure to pick a nearby REGION for better response times.

    Once your SQL Database server has been created, you should click MANAGE at the bottom of the page to configure it.

    image

    Figure 4. Click MANAGE to configure your server.

    What happens next and what you should do depends on your network environment. If you have a simple, uncomplicated network, click YES to include your IP address as a permissible IP address allowed to go through the firewall.

    image

    Figure 5. Adding your IP address as a firewall rule.

    Depending on your network’s configuration, the IP address you are shown may not be your actual public IP address. If that’s the case, you’re not going to achieve success until you add your actual public IP address as a firewall rule. Thank you SQL Server MVP Denny Cherry for pointing that out to me. How do you know if the IP address you were shown is your actual public IP address? There are several websites that will show you what your actual, public IP address is. One such site is found here.

    If you discover that your public IP address is different (my public IP address at work differs in the third octet), you’ll need to click your server’s name to configure it.

    image

    Figure 6. Click on your server’s name to see the list of options available.

    On your server’s page, click on CONFIGURE to add an IP address to your firewall rules.

    image

    Figure 7. Click CONFIGURE to add an IP address.

    You can enter additional IP addresses on your server’s CONFIGURE page. Click SAVE after making changes.

    image

    Figure 8. Adding an IP address. Remember to click SAVE.
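
    If you’d rather script firewall rules than click through the portal, Azure SQL Database also exposes them through stored procedures in the master database. A minimal sketch, assuming a made-up rule name and example addresses:

    -- minimal sketch: add a server-level firewall rule from T-SQL (run in the master database)
    -- the rule name and IP addresses are placeholders
    EXECUTE sp_set_firewall_rule @name = N'HomeOffice',
        @start_ip_address = '203.0.113.42', @end_ip_address = '203.0.113.42';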

    You can create a database from within the Azure portal. It’s the simplest way and avoids surprises. If you choose to deploy an existing database into Azure as a SQL Database, be aware that not all SQL Server datatypes are implemented in Azure. Notice that within Azure, we have SQL databases, not SQL Server databases. Native Azure database servers and databases have a subset of what is available in SQL Server. There are other differences. In particular, to migrate a SQL Server database to Azure SQL Database, every table must have a clustered index. If you have a small, unindexed table you want to migrate to Azure SQL Database, you’ll need to add a clustered index, as sketched below.
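
    A minimal sketch of that fix, with a hypothetical table and column name:

    -- minimal sketch: give a heap a clustered index before migrating it to Azure SQL Database
    -- dbo.SmallLookupTable and LookupId are hypothetical names
    CREATE CLUSTERED INDEX CIX_SmallLookupTable ON dbo.SmallLookupTable (LookupId);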

  • Migrating Open Source Databases to SQL Server

    In my recent migration of a PostgreSQL database to SQL Server, I made several generic observations that I hope you will find helpful. I used the official PostgreSQL drivers found here. They did work and I was eventually able to use the Import and Export Wizard to load the data into SQL Server. Drivers that work with the Import and Export Wizard will also work with SSIS.

    If you are dealing with large volumes of data, you may need to use the 64-bit version of the Import and Export Wizard instead of the default 32-bit version. It’s easy to find either version by using the Windows search feature. Three tables did not load because they were too large and the 32-bit version of the Import and Export Wizard ran out of memory. The 64-bit version of the Import/Export Wizard was able to load all of the tables, including the one with 182 million rows.

    image

    Figure 1. Windows 8 search feature showing both versions of the Import/Export Wizard.

    The bitness of the drivers must match the bitness of the tool. I installed both 32-bit and 64-bit versions of the PostgreSQL ODBC drivers. Once again, the Windows search feature comes in handy for selecting the correct version of the ODBC Data Source Administrator.

    image

    Figure 2. Windows 8 search feature showing both versions of the ODBC Data Source Administrator.

    I was not successful in connecting to the PostgreSQL database when I used the server name. Since this was a one-time migration, I took the expedient approach of using the server’s IP address instead of the server name. To obtain an IP address from a Windows server, open a command prompt, enter ipconfig, and hit return. For a Linux server, open a terminal console, enter ifconfig, and hit return.

    image

    Figure 3. Connecting to the remote database by IP address.

    Although basic connectivity was established, the default option for Specify Table Copy or Query was not exactly what was needed for success. The table names in the PostgreSQL database were all enumerated, but they were in the format TableName instead of a SchemaName.TableName format. It was necessary to select the Write a query to specify the data to transfer option so that the SchemaName.TableName format could be manually specified.

    image

    Figure 4. Specify Table Copy or Query.
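
    The query entered into the wizard is nothing fancier than a schema-qualified SELECT. A minimal sketch with placeholder names (the real schema and table names came from the source database):

    -- minimal sketch of the kind of query entered in the wizard
    -- someschema and sometable are placeholder names
    SELECT * FROM someschema.sometable;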

    Success was still elusive after this change. None of the tables could be created on the destination data source, which was SQL Server 2014.

    image

    Figure 5. Improper schema mappings.

    I found a fix to the schema mappings problems in this blog post on MSDN. Notice that the problem database in the linked page was a Pervasive database. That reinforces my original point that there are some generic issues to consider when migrating data.

    You may want to consider purchasing a third party data migration utility. If you have recurring, mission critical data migration needs, a third party vendor can provide support. This is a particularly important consideration when your source or target database is open source such as PostgreSQL. Some vendors write their own data access code, which gives them control (good for you because they can fix problems should any arise) and may also give you better performance. I did exchange emails with Damir Bulic at http://www.spectralcore.com/ who was very helpful in informing me about data migration in general. I particularly appreciate how he pointed out to me that although the tables were indexed and some had primary keys, the largest tables did not have a primary key. Next time, I’ll check for primary keys more carefully before starting a migration. As Damir correctly pointed out, having a primary key allows keyed access to tables so that the entire table doesn’t have to be loaded into memory. His product takes advantage of primary keys so that data can be read in chunks, avoiding out of memory issues. If you have a 32-bit system, that could be very important. You may want to add a primary key to a large table before attempting a migration, as sketched below.
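
    A minimal sketch of adding a missing primary key before migrating, with hypothetical table and column names:

    -- minimal sketch: add a primary key to a large heap before migrating it
    -- dbo.BigFactTable and RowId are hypothetical names; the key column must be NOT NULL
    ALTER TABLE dbo.BigFactTable
        ADD CONSTRAINT PK_BigFactTable PRIMARY KEY CLUSTERED (RowId);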

    I also exported all of the tables into csv files. I knew that my 182 million row table was too big even for my 64-bit version of Excel on a machine with 32 GB of RAM, but I wanted to see what would happen. Excel gracefully informed me that it could not load the entire file. As you can see, Excel has a limit of 1,048,576 rows.

    image

    image

    Figure 6. Row limit in Excel.
