
Jamie Thomson

This is the blog of Jamie Thomson, a freelance data mangler in London

  • Kill all the project files!

    Like many folks I’m a keen podcast listener and yesterday my commute was filled by listening to Scott Hunter being interviewed on .Net Rocks about the next version of ASP.Net. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.Net project file (i.e. the .csproj file) is going away. The rationale being that the main purpose of that file is to list all the other files in the project, and that’s something that the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments):

    A file that lists files is really redundant when the OS already does this

    Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file however no longer will there be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary rather than the current approach where the project file is a list of inclusions.

    On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is: if it’s in the directory, it’s part of the project. Simple.

    Now I’m not an ASP.net developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below the project has Miscellaneous and Connection Managers folders but no such folders exist on the file system:

    [screenshot: an SSIS project in Solution Explorer showing Miscellaneous and Connection Managers folders]

    This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work and quite frankly it makes me feel like a second class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it!

    Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would; that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file?

    In the case of SSDT the .sqlproj actually does a whole lot more than simply list files because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…), but I see no reason why the convention over configuration approach can’t help us there either. Want to know which is the Post-deployment script? Well, it’s the one called Post-DeploymentScript.sql! Simple!

    So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me?

    @Jamiet

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it.


    A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project.
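
    For context, the build step itself is nothing more exotic than an MSBuild invocation over the .sqlproj; a minimal sketch is below (the MSBuild path, project location and configuration are all hypothetical). Crucially, it only works if the SSDT build targets are present on the build server, which is where the trouble starts.

    # Build a .sqlproj from the command line; the output is a .dacpac in the project's bin folder
    & "$env:windir\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" `
        "C:\builds\MyDatabase\MyDatabase.sqlproj" `
        /t:Build /p:Configuration=Release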

    The simplest way to install all of those pre-requisites is to install SSDT itself, however a lot of folks don’t like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI simply to get the few software components required to do something that inherently doesn’t even need a GUI. The phrase “headless build server” is often used to describe a build server that doesn’t contain any heavyweight GUI tools such as Visual Studio; it is a desirable state for a build server to be in.


    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT:

    This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE.
    http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly, however, going through these steps is a royal PITA and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing and it prompted a response from Kevin Cunnane of the SSDT product team:

    The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine.

    I, like many others, would rather not have to install full blown Visual Studio and so I asked:

    Is there any chance you'll ever support any of these scenarios:

    • Installation of all build/deploy pre-requisites without installing the VS shell?
    • TFS shipping with all of the pre-requisites for doing SSDT project build/deploys
    • 3rd party build servers (e.g. TeamCity) shipping with all of the requisites for doing SSDT project build/deploys

    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that?

    Kevin replied again:

    The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers. I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc.

    Note that DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc.

    What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers we've no problem with that.

    Again, here’s the link to the thread and it’s worth reading in its entirety if this is something that interests you.

    So there you have it. Microsoft will not be providing support for headless build servers for SSDT but if someone in the community wants to go ahead and roll their own, go right ahead.

    @Jamiet

  • POST rows to HBase REST API on HDInsight using Powershell

    I’ve been very quiet on the blogging front of late. There are a few reasons for that but one of the main ones is that I’ve spent the past three months on a new gig immersing myself in Hadoop, primarily in Microsoft’s Hadoop offering called HDInsight. I’ve got a tonne of learnings that I want to share at some point but in this blog post I’ll start with a handy little script that I put together yesterday.

    I’m using a tool in the Hadoop ecosystem called HBase which was made available on HDInsight in preview form about a month ago. HBase is a NoSQL solution intended to provide very, very fast access to data, and my colleagues and I think it might be well suited to a problem we’re currently architecting a solution for. In order to evaluate HBase we wanted to shove lots of meaningless data into it, and in the world of HDInsight the means of communicating with your HDInsight cluster is Powershell. Hence I’ve written a Powershell script that will use HBase’s REST API to create a table and insert random data into it. If you’ve googled your way to this post then you’re likely already familiar with Hadoop, HDInsight, REST, Powershell, HBase, column families, cells, rowkeys and other associated jargon so I won’t cover any of those; what is important is the format of the XML payload that has to be POSTed/PUTted up to the REST API. That payload looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <CellSet>
      <Row key="myrowkey">
        <Cell column="columnfamily:column1">somevalue</Cell>
        <Cell column="columnfamily:column2">anothervalue</Cell>
      </Row>
    </CellSet>

    The payload can contain as many cells as you like. When the payload gets POSTed/PUTted the values therein need to be base64 encoded but don’t worry, the script I’m sharing herein takes care of all that for you. The script will also create the table for you. The data that gets inserted is totally meaningless and is also identical for each row, modifying the script to insert something meaningful is an exercise for the reader.
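
    To give a flavour of what the script does for a single row, here is a minimal sketch (assumptions on my part: the cluster name, table name and credentials are placeholders, and the HDInsight gateway is assumed to expose the HBase REST API at https://<clustername>.azurehdinsight.net/hbaserest):

    # Hypothetical cluster and table names; supply your HDInsight cluster credentials when prompted
    $clusterName = "mycluster"
    $tableName   = "mytable"
    $creds       = Get-Credential

    function ConvertTo-Base64([string]$value) {
        [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($value))
    }

    # Row key, column names and cell values must all be base64 encoded in the payload
    $cellSetFormat = '<?xml version="1.0" encoding="UTF-8"?><CellSet><Row key="{0}">' +
                     '<Cell column="{1}">{2}</Cell><Cell column="{3}">{4}</Cell></Row></CellSet>'
    $body = $cellSetFormat -f (ConvertTo-Base64 'myrowkey'),
                              (ConvertTo-Base64 'columnfamily:column1'), (ConvertTo-Base64 'somevalue'),
                              (ConvertTo-Base64 'columnfamily:column2'), (ConvertTo-Base64 'anothervalue')

    # The row-key segment of the URI is ignored when rows are supplied in the body
    $uri = "https://$clusterName.azurehdinsight.net/hbaserest/$tableName/false-row-key"
    Invoke-RestMethod -Uri $uri -Method Post -Credential $creds -ContentType "text/xml" -Body $body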

    Another nicety of this script is that it uses Invoke-RestMethod which is built into Powershell 4. You don’t need to install other Powershell modules, nothing Azure specific. If you have Powershell 4 you’re good to go!

    Embedding code on this blog site is ugly so I’ve made the full script available on my OneDrive: CreateHBaseTableAndPopulateWithData.ps1. The screenshot below gives you an idea of what’s going on here.

    Hope this helps!

    @Jamiet

    UPDATE. I’ve posted a newer script CreateHBaseTableAndPopulateWithDataQuickly.ps1 which loads data much more quickly. This one sends multiple rows in each POST and hence I was able to insert 13.97m rows in 3 hours and 37 minutes which, given the latency to the datacentre and that this was over a RESTful API, isn’t too bad. The previous version of the script did singleton inserts and hence would have taken weeks to insert that much data.

    The number of POSTs and the number of rows in each POST are configurable.

    [screenshot of the script in action]

  • Did you know documentation is built-in to usp_ssiscatalog?

    I am still working apace on updates to my open source project SSISReportingPack, specifically I am working on improvements to usp_ssiscatalog which is a stored procedure that eases the querying and exploration of the data in the SSIS Catalog.

    In this blog post I want to share a titbit of information about usp_ssiscatalog: all of the actions that you can take when you execute usp_ssiscatalog are documented within the stored procedure itself. For example, if you simply execute

    EXEC usp_ssiscatalog @action='exec'

    in SSMS then switch over to the messages tab you will see some information about the action:

    [screenshot: the documentation displayed in the Messages tab]

    OK, that’s kinda cool. But what if you only want to see the documentation and don’t actually want any action to take place? Well, you can do that too using the @show_docs_only parameter like so:

    EXEC dbo.usp_ssiscatalog @a='exec',@show_docs_only=1;

    That will only show the documentation. Wanna read all of the documentation? That’s simply:

    EXEC dbo.usp_ssiscatalog @a='exec',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='execs',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='configure',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_created',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_running',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_canceled',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_failed',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_pending',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_ended_unexpectedly',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_succeeded',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_stopping',@show_docs_only=1;
    EXEC dbo.usp_ssiscatalog @a='exec_completed',@show_docs_only=1;

    I hope that comes in useful for you sometime. Have fun exploring the documentation on usp_ssiscatalog. If you think the documentation can be improved please do let me know.

    @jamiet

  • Cortana and Schema.org

    Some definitions:

    • Cortana – Microsoft’s digital personal assistant, competing with Apple’s Siri and Google Now.
      “She gets to know you by learning your interests over time. She looks out for you, providing proactive, useful recommendations. And Cortana keeps you closer to the people and things you care about most, by keeping track of all that matters.” - http://www.windowsphone.com/en-US/features-8-1#Cortana
    • Schema.org – “a collection of schemas that webmasters can use to markup HTML pages, and that can also be used for structured data interoperability. Search engines rely on this markup to improve the display of search results” https://schema.org/.
      I think of schema.org as machine-readable metadata on a web page. This has existed for years in various guises (e.g. microformats); schema.org is a company-independent initiative to standardise those schemas.

    One of the interesting features that Cortana provides is the ability to parse your email in order to discover flight information and then keep you up to date with information about that flight. Upon reading MSDN article Sending flight information to Microsoft Cortana with contextual awareness it transpires that this feature is entirely dependent upon the email containing schema.org markup:

    Microsoft Cortana interprets schema.org markup in e-mails to extract airline flight reservation data
    http://msdn.microsoft.com/en-us/library/dn632191.aspx

    We all unknowingly make use of http://schema.org markup all the time (every time you use Google, for example; the share contract in Windows 8 & Windows Phone 8.1 can also leverage schema.org), however this is the first time I’ve been made aware of a user experience that is totally dependent on it.

    I find this fascinating; I’ve long had an interest in structured metadata on the web (I’ve been banging on about it in blog posts for the last 10 years) and now my mind is in overdrive thinking of other scenarios that could leverage this. Taking a look at the documentation shows us that schema.org provides definitions for a multitude of things (in fact Thing is the base class from which everything else is derived). Some examples: Flight, MusicRecording, Person.

    It’s not hard to envisage that Cortana might one day see mention of a music track in an email and offer the ability to play it for you, or see contact information and offer to add it to your contact list. Where this gets potentially even more interesting (to me, anyway) is when we consider that Cortana is already extensible by 3rd party apps – I foresee that in the future 3rd party apps will have the ability to subscribe to a particular schema and have Cortana notify them when it comes across an instance of that schema (imagine a Spotify app being informed that an email just arrived with information about a new music release and then offering to play it for you).

    Perhaps this 3rd party extensibility could work the other way too by presenting other data sources to Cortana. For example, if Twitter ever get round to implementing annotations then Cortana could potentially read metadata hidden within tweets as well as combing through emails.

    Expect that in the future Cortana will parse new data sources, find new types of information and allow more 3rd parties to act upon it. The potential for a degraded user experience with notifications flashing in your face all the time is a worry but frankly I don’t care, this is all fascinating stuff to a data geek like me.

    @Jamiet 

    P.S. Thank you to Savas Parastatidis from the Cortana dev team for highlighting Cortana’s use of schema.org.

  • Prompt for a password with a mask using Powershell

    Here’s some code that I absolutely know I’m going to need again in the future, what better place to put it than on my blog!

    If you need to prompt the user for a password when using Powershell then you want to make sure that the value typed in isn’t visible on the screen. That’s quite easy using the -AsSecureString parameter of the Read-Host cmdlet, however it’s not quite so easy to retrieve the supplied value. The following code shows how to do it:

    $response = Read-Host "What's your password?" -AsSecureString
    $password = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($response))

    I don’t know of a quick and easy way to format Powershell code for a blog post so here’s a screenshot instead:

    [screenshot of the code above]

    I’ve also put this on pastebin: http://pastebin.com/2D6xaz0U

    All credit goes to Paul Williams for his post Converting System.Security.SecureString to String (in PowerShell)
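
    As an aside, System.Net.NetworkCredential will also happily accept a SecureString and hand back the plain text, which avoids the Marshal calls entirely (a minimal sketch, reusing the $response captured above):

    # NetworkCredential does the SecureString-to-string conversion for us
    $password = (New-Object System.Net.NetworkCredential("", $response)).Password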

    @Jamiet

  • Azure Automation – the beginning of cloud ETL on Azure?

    I have maintained a watching brief on what I refer to as “cloud ETL”, that is, the ability to build ETL routines in a cloud environment and therefore leverage all the benefits that the cloud model brings*. Thus far my main opinion piece in this area is What would a cloud-based ETL tool look like? in which I laid out what features I thought a cloud ETL tool should have:

    • Data transformation would be done “in the cloud” i.e. I wouldn’t need to own my own hardware in order to run it
    • Ability to consume data from/push data to <many different data protocols>
    • Adapters (possibly with a plug-in model) for cloud storage and API providers
    • Job scheduler
    • Workflow. (e.g. Do this, then do that. Do these things in parallel. Only do this if some condition is true. Restart from here in case of failure.)
    • An IDE (open to debate whether the IDE should be “in the cloud” as well)
    • Ability to carry out common transformations (join, aggregate, sort, projection) on those heterogeneous data sources
    • Ability to authenticate using different authentication mechanisms
    • Configurable logging
    • Ability to publish transformed data in a manner that makes it consumable rather than insert it into another data store

    http://sqlblog.com/blogs/jamie_thomson/archive/2013/02/12/what-would-a-cloud-based-etl-tool-look-like.aspx

    Given that I have spent the majority of my career working with Microsoft technologies (in particular their ETL tool, SSIS) I am interested to know whether Microsoft will offer a cloud ETL tool. With that in mind I was interested to discover a new service on Azure, currently in preview, called Azure Automation (read Announcing Microsoft Azure Automation Preview). Azure Automation is essentially a cloud-based workflow tool and, as I said above, workflow is a feature that I believe a cloud-based ETL tool should encompass:

    • Workflow. (e.g. Do this, then do that. Do these things in parallel. Only do this if some condition is true. Restart from here in case of failure.)

    SSIS developers will of course be aware that SSIS has its own workflow tool (termed the Control Flow). It always kind of bugged me that different Microsoft tools had their own workflow technology. SSIS had one, I believe BizTalk had one, there was another called Windows Workflow Foundation (WWF) and in fact there was a possibility within the SQL Server 2008 timeframe that SSIS would replace its Control Flow with WWF (that never happened and the Program Manager that wanted to do it has since left the SSIS product team).

    Azure Automation is built upon Powershell Workflow which in turn is built upon WWF (now simply called Workflow Foundation – WF). It certainly seems as though WF is becoming the foundational workflow technology to rule them all within Microsoft and that is no bad thing in my opinion – it seems foolish to reinvent the wheel every time. Powershell Workflow has the following cmdlets for building workflows:

    • Workflow
    • Parallel
    • Foreach –parallel
    • Sequence
    • InlineScript
    • Checkpoint-workflow
    • Suspend-workflow

    Those are all fairly self-explanatory. Of particular interest to me is Foreach –parallel (we’ve been asking for a native Parallel ForEach Loop in SSIS for years) and that might be even more useful in a scale-out infrastructure such as can be offered by the cloud (imagine firing off multiple FTP tasks in parallel, all working on different Azure nodes). Checkpoint-Workflow also sounds very interesting:

    A checkpoint is a snapshot of the current state of the workflow, including the current values of variables, and any output generated up to that point, and it saves it to disk. You can add multiple checkpoints to a workflow by using different checkpoint techniques. Windows PowerShell automatically uses the data in newest checkpoint for the workflow to recover and resume the workflow if the workflow is interrupted, intentionally or unintentionally.
    http://technet.microsoft.com/en-us/library/jj574114.aspx

    Stateful restartability that you can control, all out-of-the-box. How cool is that? So much better than the awful checkpointing feature within SSIS.
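
    To make that a bit more tangible, here is a minimal sketch of a workflow using the two pieces I’ve just called out, foreach -parallel and Checkpoint-Workflow (the workflow name, file list and the work done inside the loop are all hypothetical placeholders):

    workflow Copy-StagingFiles
    {
        param([string[]]$FilePaths)

        # Each iteration runs in parallel rather than one after the other
        foreach -parallel ($path in $FilePaths)
        {
            InlineScript
            {
                # Placeholder for the real work (an FTP transfer, a bulk load, etc.)
                Write-Output "Processing $using:path"
            }
        }

        # Snapshot the workflow state; a resumed run picks up from here rather than the beginning
        Checkpoint-Workflow
    }

    Copy-StagingFiles -FilePaths '\\myserver\share\a.csv','\\myserver\share\b.csv'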

    It certainly appears to me that Azure Automation could satisfy my desire for a workflow engine for the purposes of cloud-ETL. Now if only Microsoft were working on cloud-based dataflows too we’d have something akin to SSIS-in-the-cloud Winking smile.

    @Jamiet

    *My own personal opinion is that the benefits of the cloud model can be summed up simply as “OPEX not CAPEX”. You may have your own definition, and that’s OK.

  • Restart Framework added to SSIS Reporting Pack

    On 31st March 2014 I released version 1.2.0.0 of SSIS Reporting Pack, my open source project that aims to enhance the SSIS Catalog that was introduced in SSIS 2012. This is a big release because it includes an entirely new feature: the Restart Framework.

    Introduction

    The Restart Framework exists to cater for a deficiency within SSIS, that being the poor support for restartability. Let's define what I mean by restartability:

    A SSIS execution that fails should, when re-executed, have the ability to start from the previous point of failure.

    SSIS provides a feature called checkpoint files that is intended to help in this scenario but I am of the opinion that checkpoint files are an inadequate solution to the problem; I explain why in my blog post Why I don't use SSIS checkpoint files. The Restart Framework was designed to overcome the many shortcomings of checkpoint files.

    One of the fundamental tenets of the Restart Framework is that the packages that you, the developer, build for your solution should not be required to contain any variables, parameters, tasks, or event handlers in order to make them work with the Restart Framework. In fact your packages should be agnostic of the fact that they are being executed by the Restart Framework.

    TL;DR: A video that demonstrates the installation and base functionality of the Restart Framework can be viewed at https://www.youtube.com/watch?v=syV0Wpwhlnk

    Terminology

    Let's define some important terms that you will need to become familiar with if you are going to use the Restart Framework.

    ETLJob

    An ETLJob is the definition of some work that an end-to-end ETL process needs to perform. An ETLJob would typically incorporate many SSIS packages. Each ETLJob has a name (termed ETLJobName) which can be any value you want, some example ETLJobNames might include:
    • Nightly Data Warehouse Load
    • Monthly Reconciliation
    • All backups

    ETLJobStage

    Each ETLJob contains one or more ETLJobStages. These are the "building blocks" of your solution and for each ETLJobStage there must exist a package in your SSIS project with a matching name. For example, an ETLJobStage with the name "FactInternetSales" will require a SSIS package called "FactInternetSales.dtsx".

    The Restart Framework allows the declaration of dependencies between ETLJobStages - an ETLJobStage cannot start until all ETLJobStages with a lower ETLJobStageOrder have completed successfully. This is a fundamental tenet of the Restart Framework as it needs to know the order in which ETLJobStages need to occur in order that it can restart execution from the previous point of failure.

    The Restart Framework provides some stored procedures that should be used to define ETLJobs, ETLJobStages and the dependencies between them.

    One important point to make about ETLJobStages is that the Restart Framework only supports restartability of a failed ETLJobStage, the Restart Framework has no control (and, indeed, does not care) what occurs within that ETLJobStage. The implication therefore is that the onus is on the package developer to ensure that each ETLJobStage is re-runnable from the start of that package in the event of failure; in other words an ETLJobStage must be idempotent.

    ETLJobHistory

    Each time an ETLJob is executed a record is inserted into a table called ETLJobHistory and a unique ETLJobHistoryId is assigned. Crucially, when a previously-failed ETLJob is restarted it retains the same ETLJobHistoryId, compare this to SSIS' own execution_id which will be different whenever an ETLJob is restarted.

    The ETLJobHistoryId can be particularly useful when used for lineage purposes in a data warehouse loading routine. Every inserted or updated record can have the ETLJobHistoryId stored against it which is useful for providing lineage information such as when the record was inserted/updated.

    What's included

    SsisReportingPack database

    This is the same database that houses usp_ssiscatalog and all of its supporting code modules. All of the database objects that support the Restart Framework are in a schema called RestartFramework.


    [screenshot: the RestartFramework schema in the SsisReportingPack database]

    SSIS packages

    The Restart Framework consists of two packages that must be included in every SSIS project that is intending to use the Restart Framework hence they will need to be added into your SSIS project within Visual Studio.
    Root.dtsx

    This package must be executed in order to have any execution be managed by the Restart Framework. It takes a single parameter, ETLJobName, to indicate which ETLJob it should execute. Root.dtsx will interrogate the Restart Framework metadata in the SsisReportingPack database to determine which ETLJobStages are included.

    For each ETLJobStage Root.dtsx will fire off a new instance of ThreadController.dtsx, passing it a ThreadID and an ETLJobStageOrder.

    Root.dtsx can fire off eight concurrent instances of ThreadController.dtsx. This number is configurable, however eight is the maximum; you could easily extend Root.dtsx to fire off more than eight if you so desired.

    Here is a screenshot of Root.dtsx control flow:
    [screenshot: Root.dtsx control flow]

    ThreadController.dtsx

    This package is responsible for calling your packages that actually do some work. It receives a ThreadId and ETLJobStageOrder from Root.dtsx which it uses to interrogate the database to get a list of ETLJobStageNames that it needs to execute. It loops over that list and executes a package of the same name from the current project.

    When an ETLJobStage completes successfully it is the job of ThreadController.dtsx to update the database to indicate that this has occurred.

    Here is a screenshot of ThreadController.dtsx control flow:

    [screenshot: ThreadController.dtsx control flow]
  • SSDT gets some small enhancements for SQL Server 2014

    Release-to-manufacturing of SQL Server 2014 was announced today and with it comes some welcome news about some small enhancements to SSDT:

    The SSDT team has been heads down getting our SQL Server 2014 release ready to go.  However we've made time to make some targeted improvements in this area.  In our upcoming release for SQL Server 2014 the Change Connection part of this blog will be implemented.  The F6 functionality (included moving between multiple result sets) didn't make it, but will be in the update after that. – Patrick Sirr

    http://sqlblog.com/blogs/jamie_thomson/archive/2013/03/19/connected-development-in-ssdt-versus-ssms.aspx#53271

    and

    I've implemented F6 and shift-F6 to cycle through the tabs as SSMS does (including for multiple result sets). However the fix won't make it into our upcoming SQL Server 2014 release due in a couple weeks. It'll be in the next update.

    http://connect.microsoft.com/SQLServer/feedback/details/780990/ssdt-f6-to-move-between-panes-in-a-query-window

    Hopefully there’ll be a few more enhancements in the SSDT Connected Development arena other than the two mentioned here (more scripting options would be nice).

    Who says submitting to Microsoft Connect never works? Smile

     

    Don’t forget that you don’t need to have SQL Server 2014 in order to get these new features. SSDT is free and supports all versions back to SQL Server 2005.

    @Jamiet

  • OneNote API – finally! Now, about that Excel API…

    I’m a frequent user of OneNote and so was delighted with today’s news that there is now a public API available so that third party apps and services can put stuff into your OneNote notebooks (an API? welcome to the modern web, OneNote). One of those third party services is ifttt so I’ve set up a few ifttt recipes to dump stuff into OneNote:

    All very nice thanks very much.

    I do have a few quibbles though (otherwise why would I be writing a blog post, right? Smile). Firstly, the API only allows you to create pages; it cannot append to existing ones. Second, and more importantly, you can’t choose which notebook section to create the page in. I find this really annoying; take the example of my ifttt recipe above that bungs all my blog posts into OneNote – how much more useful would it be if we could choose which section to put them into? As it stands right now I would have to go and move them all after the event. Still, credit where credit is due, the API exists and I harbour hopes that it will improve over time.


    A OneNote API is nice and all but one thing I’ve been craving for years is an API that allows me to insert data into an Excel spreadsheet residing on OneDrive. I’ve written in the past about the Excel Services REST API where I lamented:

    Although I haven't demonstrated it here Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells however what it does not allow you to do is add new data into the workbook. Google Docs allows this.
    Exploring the Excel Services REST API

    Chris Webb (who has joined me in this crusade) raised a forum thread in June 2010 entitled Excel Web App API? where he requested such a thing; nearly four years later and we’re still waiting.

    Ifttt allows recipes that trigger every time you tweet; how cool would it be if this could be used to insert a new row into an Excel spreadsheet on OneDrive for each of my tweets*? Well, I would like that anyway, and the existence of this new OneNote API rekindles my hope that one day such an API for Excel might exist – please don’t make me wait another 4 years though, Microsoft!

    @Jamiet 

    * Before anyone leaves a comment telling me so, I’m already aware that I can use ifttt to insert all my tweets into a Google Docs spreadsheet and indeed I’m already doing so. I’d just prefer it for Excel, that’s all.

  • Thoughts on Office 365, Windows Azure Active Directory, Yammer & Power BI

    This week a SharePoint conference took place somewhere and I took more than a passing interest because it clearly wasn't a SharePoint conference, it was an Office365/Yammer conference, and as far as I can discern the big takeaways were:

    It was interesting to me because Power BI is something that is on my radar and which is delivered via Office 365. This got me thinking about scenarios where Power BI & Yammer could play together more effectively.

    The BI delivery team that I currently work for is trying to find ways to make the information that we produce more discoverable and more accessible, and to promote the use of the information that we provide throughout the company. The company is an Office365 customer however they pretty much use it only as an email & IM provider - none of the SharePoint-y stuff is used. The company is also a Yammer customer.

    The confluence of Yammer and Power BI might make an interesting story here. Imagine, for example, the ability to build a Power View report using Power BI and then share that throughout the organisation using Yammer, perhaps via a Yammer group. Anyone viewing their Yammer feed would be able to view and interact with that Power View report without leaving Yammer. I’m not talking about simply viewing an image of a report either – I’d want to be able to slice’n’dice that report right within my Yammer feed.

    I’ve long thought that we need to think of new ways of delivering BI to the masses and I believe social collaboration tools present a great opportunity to do that. I’m excited about what Yammer + Power BI could bring; let’s hope Microsoft don’t royally screw it up.

    I still believe that Microsoft’s Master Data Services (MDS) should be offered through Power BI and again the opportunity to collaboratively compile and discuss data that resides in MDS is compelling. I see no reason why people wouldn’t want to change MDS data from within their Yammer feed – why would we force them to go elsewhere? Again I opine, bring the data to wherever your users are, don’t make them go somewhere else.


    Hidden away behind all of the announcements was the implicit assertion that Windows Azure Active Directory is critical to Microsoft’s cloud efforts. Office 365 sits on top of Windows Azure Active Directory and I don’t think many people realise the significance of that. Whoever manages your company’s employees’ identities has a huge opportunity for selling new stuff to you and that’s why Windows Azure Active Directory is free. This is not a new play for Microsoft; over the past 20 years or so they’ve become a huge player in the corporate landscape and that’s in no small way down to Active Directory – own the identity and you can sell them other stuff like SharePoint, Windows, SQL Server etc… By allowing you to extend your Active Directory into the cloud and have pervasive groups it’s not far off being a no-brainer for companies to use Windows Azure & Office 365.

    Active Directory in the cloud, public and private groups, identity management, developer APIs … those are the big plays here and this is very much like what I described in my blog post Windows Live Groups predictions and “Active directory in the cloud”. The names and players have changed but the concepts I outlined there are now happening. Back then I said:

    [This] gives rise to the idea of Groups becoming something analogous to an "active directory in the cloud". This is a disruptive idea partly because it could become the mechanism by which Microsoft grant access to their online properties in the future.

    Even more powerful is the idea that 3rd party websites that authenticate visitors … could use Groups to determine what each user can do on that site. Groups will become part of an authentication infrastructure that anyone in the world can leverage.

    This "active directory in the cloud" idea relies on a robust API that allows a 3rd party site to add and remove people from groups.

    Believe it or not that was six years ago. Don’t want to say I told you so, but…

    @Jamiet

  • Capturing query and IO statistics using Extended Events

    The commands

    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    return information about query executions and are very useful when doing performance tuning work as they inform how long a query took to execute and the amount of IO activity that occurred as a result of that query.

    These are very effective features however to my mind they do have a drawback in that the information they provide is not accessible in the actual query window from which the query was executed. This means the results cannot be collected, stored in a table, and then queried – such information would have to be manually copied and pasted from the messages pane into (say) a spreadsheet for further analysis.

    This is dumb. I’m a SQL Server developer, I want my data available so that I can bung it into a table in SQL Server and issue queries against it. That is why, a couple of weeks ago, I submitted a request to Microsoft Connect entitled Access to STATS TIME & STATS IO from my query in which I said:

    I recently was doing some performance testing work where I was evaluating the effect of changing various settings on a particular query. I would have liked to simply run my query inside a couple of nested loops in order to test all permutations but I could not do that because every time I executed the query I had to pause so I could retrieve the stats returned from STATISTICS IO & STATISTICS TIME and manually copy and paste (yes, copy and paste) the information into a spreadsheet.

    This feels pretty dumb in this day and age. Why can we not simply have access to that same information within my query? After all, we have @@ROWCOUNT, ERROR_MESSAGE(), ERROR_NUMBER() etc... that provide very useful information about the previously executed statement, how about @@STATISTICS for returning all the IO & timing info? We can parse the text returned by that function to get all the info we need.
    Better still, provide individual functions e.g.:
    @@QUERYPARSETIME
    @@QUERYCOMPILETIME
    @@QUERYEXECUTIONTIME
    @@SCANCOUNT
    @@LOGICALREADS
    @@PHYSICALREADS
    @@READAHEADREADS

    Ralph Kemperdick noticed my submission and correctly suggested that the same information could be accessed using Extended Events. Based on this I’ve written a script (below) that issues a series of queries against the AdventureWorks2012 sample database, captures stats similar to those that SET STATISTICS would capture, then presents them back at the end. Here are those results:

    [screenshot: the captured stats returned by the query]

    The information is not as comprehensive as what you would get from SET STATISTICS (no Read-Ahead Reads for example, and no breakdown of IO per table) but should be sufficient for most purposes.

    You can adapt the script for whatever information you want to capture; the important parts of the script are the creation of the XEvents session that captures the queries and then the reading and shredding of the XML results afterwards.

    Hope this is useful!

    @Jamiet

    UPDATE: Turns out you don't need all of this. I've just been informed that Richie Rump has written a parser at http://statisticsioparser.com/ that does all of this for you. Simply paste in your STATISTICS IO output and press the button - it will do all the hard work for you and give you the results back in a nice readable graph. You can paste in multiple results at once too.

    --Create the event session
    CREATE EVENT SESSION [queryperf] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.event_file(SET filename=N'C:\temp\queryperf.xel',max_file_size=(2),max_rollover_files=(100))
    WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_MULTIPLE_EVENT_LOSS,
          MAX_DISPATCH_LATENCY=120 SECONDS,MAX_EVENT_SIZE=0 KB,
          MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=ON);

    --Set up some demo queries against AdventureWorks2012 in order to evaluate query time & IO
    USE AdventureWorks2012;
    DECLARE @SalesPersonID INT;
    DECLARE @salesTally INT;
    DECLARE mycursor CURSOR FOR
        SELECT soh.SalesPersonID
        FROM   Sales.SalesOrderHeader soh
        GROUP  BY soh.SalesPersonID;
    OPEN mycursor;
    FETCH NEXT FROM mycursor INTO @SalesPersonID;
    ALTER EVENT SESSION [queryperf] ON SERVER STATE = START;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DBCC FREEPROCCACHE;
        DBCC DROPCLEANBUFFERS;
        CHECKPOINT;
        SELECT @salesTally = COUNT(*)
        FROM   Sales.SalesOrderHeader soh
        INNER JOIN Sales.[SalesOrderDetail] sod ON soh.[SalesOrderID] = sod.[SalesOrderID]
        WHERE  SalesPersonID = @SalesPersonID;
        FETCH NEXT FROM mycursor INTO @SalesPersonID;
    END
    CLOSE mycursor;
    DEALLOCATE mycursor;
    DROP EVENT SESSION [queryperf] ON SERVER;

    --Extract query information from the XEvents target
    SELECT q.duration,q.cpu_time,q.physical_reads,q.logical_reads,q.writes --,event_data_XML,statement,timestamp
    FROM   (
            SELECT  duration=e.event_data_XML.value('(//data[@name="duration"]/value)[1]','int')
            ,       cpu_time=e.event_data_XML.value('(//data[@name="cpu_time"]/value)[1]','int')
            ,       physical_reads=e.event_data_XML.value('(//data[@name="physical_reads"]/value)[1]','int')
            ,       logical_reads=e.event_data_XML.value('(//data[@name="logical_reads"]/value)[1]','int')
            ,       writes=e.event_data_XML.value('(//data[@name="writes"]/value)[1]','int')
            ,       statement=e.event_data_XML.value('(//data[@name="statement"]/value)[1]','nvarchar(max)')
            ,       [timestamp]=e.event_data_XML.value('(//@timestamp)[1]','datetime2(7)')
            ,       *
            FROM    (
                    SELECT CAST(event_data AS XML) AS event_data_XML
                    FROM   sys.fn_xe_file_target_read_file('C:\temp\queryperf*.xel', NULL, NULL, NULL)
                    ) e
            ) q
    WHERE  q.[statement] LIKE 'select @salesTally = count(*)%' --Filters out all the detritus that we're not interested in!
    ORDER  BY q.[timestamp] ASC;

  • Why don’t app stores offer subscriptions?

    Accepted wisdom when one purchases an app from an app store is that one gets free updates for life. This is, quite obviously, an unsustainable business model and I suspect is the main reason why so many apps use advertising to generate income.

    There is though, in the enterprise world at least, a move to a subscription-based business model (i.e. renting software), the most obvious examples of which are Office 365 and Adobe Creative Cloud, and I’m left wondering why app stores don’t offer a similar option.

    Today I installed an app called Tweetium that offers a (paid for) premium option, here is why the premium option exists:

    [screenshot: Tweetium's explanation of why the premium option exists]

    Again this strikes me as unsustainable. The customer pays once yet Tweetium has to pay TweetMarker every month. Forever. It doesn’t take an expert mathematician to realise that eventually Tweetium’s monthly outlay could exceed the income they have saved up from purchases.

    It seems to me there is a simple solution to all this. App stores could offer an option for customers to rent apps rather than buy them. It’s more sustainable for the app vendor and the app store provider gets a more predictable income stream (which CFOs seem to like). Why don’t app stores do this? Seems like a no-brainer to me.

    Just a random thought for a Sunday morning.

    @Jamiet

    UPDATE: Apparently iOS & Android app stores *do* offer subscription models, I just wasn't aware of it.
  • My SQL Server gripe at SQL Supper

    On 17th February 2014 (3 days ago) I visited an event called SQL Supper held at Microsoft’s central London office, Cardinal Place. The event was basically a QnA session with Mark Souza, Conor Cunningham, Nigel Ellis, Hatay Tuna & Ewan Fairweather and one part of the evening was loosely termed the gripe session where the attendees were invited to stick their hand in the air and when asked have a good old whinge about something in SQL Server that, well, frankly pissed them off. Given the members of the panel this was inevitably focused on the database platform in SQL Server rather than the BI stuff and this is what I was only too happy to gripe about:

    Microsoft seem to have dropped the ball on database developer productivity, both in the language and the tooling. A decade ago this was something that SQL Server was renowned for; I put it to you that this is no longer the case. SSDT came out with SQL Server 2012 and it’s a great tool, I love it, but in the two years since there have been various maintenance releases but hardly any new features. SSMS has hardly changed for years and extensibility is still not truly supported. Intellisense does not work properly 100% of the time. As far as I can recall T-SQL has had only two major features (TRY/CATCH & windowing functions) in the last ten years.

    Please fix this. Show database developers some love again.

    I could write pages and pages of gripes just under the banner of developer productivity but I’ll leave you with that concise summary. It is of course a matter of opinion, feel free to agree or disagree.

  • Dacpac braindump - What is a dacpac?

    In this week’s earlier blog post First release of my own personal T-SQL code library on Github I talked of how one could use a dacpac to distribute a bunch of code to different servers. Upon reading the blog post Jonathan Allen (of SQL Saturday Exeter fame), with whom I’ve been discussing dacpacs on-and-off recently, sent me this email:

    Hi Jamie,

    The DacPac thing I emailed about in December hasnt taken off yet but I have just downloaded your code library to take a look and I like the way the dacpac works. Should I be able to open that in VS or is the dacpac compiled/built in VS? The video you linked to didnt cover dapac at all so I am in the dark on how to create one/them.

    If I can build a database and create a dacpac simply then this could be really useful.

    Jonathan’s email made me realise that there is perhaps a lot of confusion about what dacpacs are, what they can be used for and how they can be used so I figured a braindump of what I know about them might be useful, that’s what you’re getting in this blog post.

     

    What is a dacpac?

    A dacpac is a file with a .dacpac extension.

    [screenshot: a file with a .dacpac extension]

    In that single file are a collection of definitions of objects that one could find in a SQL Server database such as tables, stored procedures, views plus some instance level objects such as logins too (the complete list of supported objects for SQL Server 2012 can be found at DAC Support For SQL Server Objects and Versions). The fact that a dacpac is a file means you can do anything you could do with any other file, store it, email it, share it on a file server etc… and this means that they are a great way of distributing the definition of many objects (perhaps even an entire database) in a single file. Or, as Microsoft puts it, a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC. That in itself is, I think, very powerful.

    Ostensibly a dacpac is a binary file so you can’t just open it up in your favourite text editor and look at the contents of it. However, what many people do not know is that the format of a dacpac is simply the common ZIP compression format and hence we can add .zip to the end of a dacpac filename:

    [screenshot: the dacpac renamed with a .zip extension]

    and open it up like you would any other .zip file to have a look inside. If you do so you will see this:

    [screenshot: the contents of the zip file]

    The contents of that zip file conform to something called the Open Packaging Convention (OPC). OPC is a standard defined by Microsoft for, well, for zipping up files basically. You have likely used files conforming to OPC before without knowing it; .docx, .xlsx and .pptx are the most common ones that you might recognise if you use Microsoft Office, and there are some more obscure ones such as .ispac (SSIS 2012 developers should recognise that). (For a more complete list of OPC-compliant file types see the Wikipedia page.)

    Notice in the screenshot above, showing the innards of TSQLCodeLibrary.dacpac, that the biggest file is model.xml. This is the file that contains the definition of all our SQL Server objects. I won’t screenshot that here but I encourage you to get hold of a .dacpac file (here’s one) and have a poke around to see what’s in that model.xml file.
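
    If you’d rather not bother renaming the file, a few lines of PowerShell will pull model.xml out of a dacpac for you (a minimal sketch; the dacpac path is hypothetical and .NET 4.5 is assumed to be installed):

    # Open the dacpac as a zip archive and extract model.xml
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    $zip = [System.IO.Compression.ZipFile]::OpenRead('C:\temp\TSQLCodeLibrary.dacpac')
    try {
        $entry  = $zip.Entries | Where-Object { $_.Name -eq 'model.xml' }
        $reader = New-Object System.IO.StreamReader($entry.Open())
        $reader.ReadToEnd() | Out-File 'C:\temp\model.xml'
        $reader.Dispose()
    }
    finally {
        $zip.Dispose()
    }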

    What are dacpacs for?

    Dacpacs are used for deploying SQL Server objects to an instance of SQL Server. That’s it. If your job does not ever involve doing that then you probably don’t need to read any further.

    Dacpac pre-requisites

    A .docx file (i.e. a Microsoft Word document) isn’t much use to someone if they don’t have the software (i.e. Microsoft Word) to make use of it, and the analogy holds for dacpacs; in order to use them you need to have some software installed and that software is called the Data-tier Application Framework (or DAC Framework for short, or DacFx for even shorter).

    Incidentally, you may be wondering what DAC stands for at this point. I think it’s “Data-Tier Application”, in which case you may be thinking that the acronym DAC is a stupid one, especially as DAC also stands for something else in SQL Server – I would agree!

    DacFx is available to download for free however you’ll probably never need to do that as installation of DacFx occurs whenever you install SQL Server, the SQL Server client tools or SQL Server Data Tools (SSDT). If DacFx is installed you should be able to see it in Programs and Features:

    [screenshot: DacFx listed in Programs and Features]

    How does one deploy a dacpac?

    In dacpac nomenclature the correct term for deploying a dacpac is publishing however the two generally get used interchangeably. There are two methods of publishing a dacpac which I’ll cover below.

    Publish via SSMS

    In SSMS’s Object Explorer right-click on the databases node and select “Deploy Data-tier Application…” (told you they used those terms interchangeably):

    [screenshot: the Deploy Data-tier Application… option in Object Explorer]

    This launches a wizard that prompts you to choose a dacpac, fill in some particulars (e.g. database name) and then deploy it for you by calling out to DacFx. Unfortunately this wizard is not very good because it doesn’t (currently) support all features of dacpacs, namely if your dacpacs contain any sqlcmd variables (I won’t cover those here but they are commonly used within dacpacs) a value needs to be supplied for them; the wizard doesn’t prompt you for a value and hence the deployment fails.

    This. Is. Stupid. Microsoft should be suitably lambasted for not providing this basic functionality. Anyway, due to this limitation you’re most likely to be using the other method which is…

    Publish via command-line

    One component distributed in DacFx is a command-line tool called sqlpackage.exe which will quickly become your best friend if you use dacpacs a lot. sqlpackage.exe can do a lot of things and those “things” are referred to as actions; one of those actions is publishing a dacpac. Here’s the syntax for publishing a dacpac using sqlpackage.exe:

    "%ProgramFiles(x86)%\Microsoft SQL Server\110 \DAC\bin\SqlPackage.exe"
          /action:Publish
          /SourceFile:<path to your dacpac>
          /TargetServerName:<SQL instance you are deploying to>
          /TargetDatabaseName:<Name of either (a)the database to create or (b) the existing database to deploy into>
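
    To make that concrete, here is a worked example (the server name, database name, file path and variable name are all hypothetical). The /v: switch supplies values for any sqlcmd variables declared in the dacpac, which is exactly the thing the SSMS wizard can’t do:

    & "${env:ProgramFiles(x86)}\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" `
        /Action:Publish `
        /SourceFile:"C:\builds\TSQLCodeLibrary.dacpac" `
        /TargetServerName:"MYSERVER" `
        /TargetDatabaseName:"TSQLCodeLibrary" `
        /v:Environment="Test"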

    Publishing is idempotent

    Notice from my comment above for TargetDatabaseName that you can deploy to an existing database. You might ask why you would want to publish into an existing database; after all, the objects you are publishing might already exist. This segues nicely into what I see as the biggest benefit of dacpacs and DacFx: the software interrogates the target database to determine whether or not the objects already exist and, if they do not, it will create them. If they do already exist it will determine whether the definition has changed and, if it has, it will make those changes. DacFx will always protect your data, so if it determines that an operation would cause data destruction (e.g. removing a column from a table) then it will (optionally) throw an error and fail. You never again need to write an ALTER statement or first check that an object exists in order to change an object definition; DacFx will do it for you. To put it another way, publishing using dacpacs and DacFx is idempotent.

    How does one create a dacpac?

    Of course in order to publish a dacpac you’re first going to have to create one and one of Jonathan’s questions above pertained to exactly this. There are two distinct ways to create a dacpac.

    Use an SSDT Project

    SQL Server Data Tools (SSDT) projects are basically a project type within Visual Studio that provide a way of building DDL for SQL Server databases. I’m not going to cover SSDT projects in any detail here except to say that when such a project is built the output is a dacpac. Note that SSDT can also publish the dacpac for you however I didn’t mention that above as the publish operation is essentially another wrapper around the same DacFx functionality used by sqlpackage.exe.

    Create from an existing database

    One can right-click on a database in SSMS and click on “Extract Data-tier Application…” to create a dacpac containing the definition of all objects in that database:

    [screenshot: the Extract Data-tier Application… option in Object Explorer]
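
    The same extraction can be done from the command line using sqlpackage.exe's Extract action (a sketch; the server, database and output path are hypothetical):

    & "${env:ProgramFiles(x86)}\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" `
        /Action:Extract `
        /SourceServerName:"MYSERVER" `
        /SourceDatabaseName:"AdventureWorks2012" `
        /TargetFile:"C:\temp\AdventureWorks2012.dacpac"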

    Wrap-up

    Should you be using dacpacs? I can’t answer that question for you but hopefully what I’ve done is given you enough information so that you can answer it for yourself. Some people might like the way dacpacs encapsulate many objects into a single file and their idempotent deployment; others may prefer good old simple, handcrafted T-SQL scripts which don’t have any pre-requisites other than SQL Server itself. The choice is yours.

    Further reading

    UPDATE

    David Atkinson from Redgate has been in touch to tell me about another dacpac feature that I didn’t know about. It is possible to right-click on a dacpac in Windows Explorer and choose to unpack it:

    [screenshot: the Unpack… option in Windows Explorer]

    That essentially unzips it but what you also get is a file called Model.sql that will create all of the objects in the dacpac:

    [screenshot: the unpacked files, including Model.sql]

    Very useful indeed! David tells me that Redgate use this functionality to enable comparison of a dacpac using their SQL Compare tool as you can read about at Using a DACPAC as a data source.
