
The Bit Bucket (Greg Low): IDisposable

Ramblings of Greg Low (SQL Server MVP, MCM and Microsoft RD) - SQL Down Under

  • Fix for SQL Server 2012 SP2 Data Corruption Issue Now Available

    Microsoft recently announced service pack (SP) 2 for SQL Server 2012. However, we were aware of a potential data corruption issue related to online index rebuilds, so we had been recommending that customers avoid installing it until the fix was available.

    That hotfix is now available: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=2969896&kbln=en-us 

    Note that there is also a fix for SQL Server 2012 SP1 and for SQL Server 2014 RTM.

  • The Power Query story keeps getting better

    It seems like anyone who's talked to me this year has heard me going on and on about how Power Query is just the nicest new bit of tech that's come from Microsoft lately. We cover it as part of our one-day Power BI Core Skills training class and as part of day 2 of our BI Core Skills classes that we recently updated for SQL Server 2014. Out of all the technologies that people see during that week, Power Query is the one that seems to prompt the biggest discussions.

    It's also a product that keeps getting improved constantly. Another new update for Power Query is out and you can find it here: http://www.microsoft.com/en-us/download/details.aspx?id=39379&WT.mc_id=Blog_PBI_Announce_DI

    Here is a video that shows what's been updated: https://www.youtube.com/watch?v=Q9boOzu298Q 

    The blog post from the team that shows what's changed is here: http://blogs.msdn.com/b/powerbi/archive/2014/06/26/6-new-updates-in-power-query-june-2014.aspx

    For me, the one big thing that Power Query is now missing is a corporate story beyond Excel/Power BI. The queries that you create in Power Query are actually M language queries. This is a very capable language (unrelated to what the industry used to call the M language), which Microsoft has documented here: http://go.microsoft.com/fwlink/?LinkID=320633

    I would so love to be able to take the queries that I generate in Excel and paste them into an SSIS data source, or use them as a data source in SSAS tabular. Once those types of options appear (surely they must), this is going to be even more of a killer application. 

  • SQL Down Under Show 63 Jimmy May - Clustered Columnstore Indexes in SQL Server 2014

    It was great to catch up with Jimmy May recently. Jimmy is a principal database architect at Microsoft and was part of the SQL CAT (Customer Advisory Team) for quite a while.

    We recorded a new SQL Down Under podcast. In this podcast, Jimmy discusses his experiences with columnstore indexes and the new updatable clustered columnstore indexes.

    You'll find it here: http://www.sqldownunder.com/Podcasts

    Enjoy!

  • Easier to Implement SQL Server Availability Groups on Azure

    With each update to Azure, it’s been getting easier to implement options for SQL Server. One of the challenges for the teams that write documentation and training is this rate of change.

    For a long time, Availability Groups couldn’t be implemented properly on Azure as there was a limit of one IP address per VM. That problem was fixed some months back.

    I recently needed to set up Availability Groups wholly within Azure. I followed the documentation here:

    (Tutorial for setting up Availability Groups in Azure)  http://msdn.microsoft.com/en-us/library/dn249504.aspx 

    and the documentation here:

    (Tutorial for adding a listener)  http://msdn.microsoft.com/en-us/library/dn376546.aspx 

    which has just been updated here:

    (Tutorial for adding a listener)  http://msdn.microsoft.com/en-us/library/dn425027.aspx

    As I was reading the listener tutorial, I noted that it used PowerShell for creating the endpoint, as there was no GUI-based method for enabling Direct Server Return on the endpoint. I thought “I’m sure I’ve seen that”. And this is the challenge for the documentation. In the portal, when I was creating the endpoint, the following dialog appeared:

    [Image: the portal's endpoint creation dialog, including a Direct Server Return option with popup help]

    This is wonderful in several ways. Not only has the option been added, the popup help text is actually very helpful, and also provides a link to details related to how to use it. (I’m a firm believer that help should first and foremost be helpful). Kudos to whoever implemented that.
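
    As an aside, on a conventional on-premises cluster the listener itself can be created directly with T-SQL along these lines (just a sketch; the availability group name, listener name, and IP address are placeholders), whereas the Azure tutorials drive this through the cluster tooling:

    ALTER AVAILABILITY GROUP [MyAg]
    ADD LISTENER N'MyAgListener'
        (WITH IP ((N'10.0.0.25', N'255.255.255.0')), PORT = 1433);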

    The one remaining real roadblock for Availability Groups in Azure was the use of them for disaster recovery. The VMs involved needed to be contained in the same VPN. That’s fine for local high availability and for read-only replicas for load balancing but it’s not OK for disaster recovery where we want a secondary VM to be in another datacenter in another region.

    That roadblock has also been removed now. The Azure team added support for interconnecting different regions via secure tunnels. This is documented here:

    http://msdn.microsoft.com/en-us/library/azure/dn690122.aspx?WT.mc_id=Blog_SQL_General_DI

    This allows you to construct a VPN like this: [image from the article: two Azure regions connected by a secure tunnel]

    In fact, with the site-to-site connectivity, you can also combine your on-premises systems as well: [image from the article: on-premises networks joined to the same VPN]

    This is important news for those wanting to implement Availability Groups in Azure, with a need for DR. Support for this has been announced:

    http://blogs.technet.com/b/dataplatforminsider/archive/2014/06/19/sql-server-alwayson-availability-groups-supported-between-microsoft-azure-regions.aspx

    And the guidance for using it has also already been updated here:

    http://msdn.microsoft.com/en-us/library/jj870962.aspx?WT.mc_id=Blog_SQL_General_DI

    Kudos to all involved in making this possible so quickly.

  • Azure now in the leader quadrant for IaaS from Gartner

    Gartner regularly publishes Magic Quadrant assessments for a variety of technology areas.

    It was interesting to note that the latest one for IaaS has Azure moving up into the Leaders quadrant. The only other player in that quadrant is Amazon. That's a big step up for the team, given the IaaS business only reached general availability in April last year.

    You'll find details here: Gartner Report

     

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    Customers used to complain that the tooling for creating BI projects (Analysis Services MD and Tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects).

    To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn’t even included in the released version of SQL Server. This allows the team to keep up-to-date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, it then had to be withdrawn.

    The good news is that they’re back and you can get the latest version from here:

    http://www.microsoft.com/en-us/download/details.aspx?id=42313

  • BI Project Templates for Visual Studio 2013 are also now available

    One quiet addition to the tools that have been released recently was the set of project templates for creating BI projects (Integration Services, Analysis Services, Reporting Services) within Visual Studio 2013.

    It is really great to be able to use the same version of these as we are using for other code development in Visual Studio.

    Thanks to the team for updating them so quickly!

    You’ll find them here:

    http://www.microsoft.com/en-us/download/details.aspx?id=42313

  • SDU Show #62: Paul Larson - SQL Server 2014 Underlying Technologies

    I had the distinct honour (honor) this week of recording a new SQL Down Under podcast with Paul Larson.

    Paul is a principal researcher at Microsoft and is one of the keen minds involved in the technologies behind the in-memory tables (Hekaton) and clustered columnstore indexes in SQL Server 2014.

    In this podcast, Paul explains his role, and discusses how these core enhancements in SQL Server 2014 have been implemented.

    You'll find it here: http://www.sqldownunder.com/Podcasts

    Enjoy!

  • Adding Good Favicons to your MVC Based Website

    When you visit websites, you may or may not have noticed the small icons that appear in the title bar areas. These are called Favicons. Here’s an example of one in Chrome:

    [Image: a favicon displayed on a Chrome tab]

    In Internet Explorer, they appear in several places:

    [Images: favicons appearing in several places in the IE interface]

    They are only a small visible aspect of your website but they greatly improve how professional the site looks in a browser. In IE, they also appear when you open a new page:

    [Image: favicons shown when opening a new page in IE]

    So, how do you create these, and how do you add them to your site?

    I had tried creating them using a variety of icon editors including the excellent IcoFX. However, I could never create really good looking Favicons as I must be a little art-challenged. I’m sure it’s a left-brain/right-brain thing. I’d try to draw them but I’d end up giving up and using something boring like this:

    [Image: a boring, plain favicon]

    It’s quite hard to create a small version of your company’s logo, etc.

    On one of the local mailing lists, our buddy Andrew Coates from Microsoft pointed out one of their sites that I’d never seen before. It’s awesome for creating favicons by scaling existing images. Here’s how I created the one in the images above:

    First Step – Transparent PNG

    The first step was to make sure I had a transparent png file that contained the logo that I wanted to use. Even though you could use a white background, it looks much better if it’s transparent. (Skip this step if you already have one). To create it, I used Paint.NET (yet another awesome free tool). I loaded the existing logo that had a white background like so:

    [Image: the existing logo with a white background, open in Paint.NET]

    In many of these tools, it’s not so obvious how to make the background transparent. In Paint.NET, all I needed to do was to select the paint bucket as a tool, then use the “More” option in the color palette, and move the Transparency-Alpha setting fully to the left:

    [Image: the color palette's More option with the Transparency-Alpha slider moved fully to the left]

    Notice that the primary color now looks transparent. Then I just used the paint bucket tool to change the white to transparent.

    [Image: the logo with its white background now transparent]

    Then I saved the file.

    Step 2 – Create the Favicon

    The site that Andrew recommended was http://ie.microsoft.com/testdrive/Browser/IconEditor/Default.html. Navigate to that site. We then want the icon editor:

    [Image: the icon editor option on the site]

    Next choose to Import:

    [Image: the Import option in the icon editor]

    Use the Upload button to bring in the transparent png that was created earlier, and then move and resize the guide window until you get as much of the logo as you want displayed:

    [Image: the uploaded logo with the resize guide window positioned over it]

    When it looks good, make sure that all four icon sizes are selected via the checkboxes and then click OK. You’ll then see your logo in the main window, and you can choose to export it:

    [Image: the logo in the main window, with all four icon sizes selected]

    Save the file that is downloaded as favicon.ico.

    Step 3 – Add the Favicon to your MVC Project

    Copy the icon file into the root folder in your MVC project, and make sure you have included the file in the project. James Green pointed out to me that many older browsers still go looking for the specific filename favicon.ico in the root folder of the website, so it sounds like good advice to always place it there.

    Finally, add a reference to the icon file in the head section of your _Layout.cshtml file, along the lines of <link rel="shortcut icon" href="~/favicon.ico" /> (a typical form; the href assumes the icon sits in the site root).

    And you’re done. Enjoy!

  • New SQL Server Spatial Course

    I’ve always thought that the support for spatial data types that was added in SQL Server 2008 was one of the best parts of that release.

    In SQL Server 2012, a great set of enhancements were made to the spatial support.

    We’ve had lots of interest lately in SQL Server spatial so we’ve been working on a new course that targets it specifically. We’re pleased that we’ve now added a brand new one day SQL Server Spatial Core Skills course to our list of regularly scheduled courses. It covers a wide variety of topics from understanding the data types, to loading data, geolocation, indexing, etc. It also covers the enhancements that were made in 2012.
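
    As a tiny taste of what the spatial types enable (a sketch; the coordinates are approximate):

    -- Distance in kilometres between two geography points (SRID 4326)
    DECLARE @sydney geography = geography::Point(-33.87, 151.21, 4326);
    DECLARE @melbourne geography = geography::Point(-37.81, 144.96, 4326);
    SELECT @sydney.STDistance(@melbourne) / 1000 AS DistanceKm;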

    The first public schedule for this course is April 14th in Melbourne. Details are here: http://www.sqldownunder.com/Training/Courses/21

  • Fix: Incorrect status output in the SSIS Data Quality transform

    I hear comments from people all the time that say it’s not worth posting details on the Connect website as they don’t get actioned anyway. While it can seem a frustrating process, and often takes quite a while, improvements do come from there.

    At a SQL Saturday event in Brisbane, I was speaking to an attendee (Ian Roberts) who was complaining that DQS was returning what he perceived as the wrong status. More importantly, the results from using the DQS client differed from using the SSIS transform. He described it this way:

    He set up a domain for Australian States and one for Australian suburbs, based on files that he had downloaded from the Australia Post website. He then created a Data Quality Project to test this. In that tooling, the results were as expected. Hobarrt got transformed to Hobart; Victoria got transformed to VIC; Melburn, New York and NWS all got tagged as New as they couldn't be found. With the Record Status, New seemed to override Correct, and Corrected seemed to override them all.

    However, in the SSIS transform, since Correct overrides New, unless every field was in error, the record would go to the Correct output.

    The problem with this is that in the SSIS conditional split transform, you should just be able to test the Record_Status to determine what to do with the record, rather than looking at the status for each field. Having to look at the status of each individual field would make the conditional split transform unwieldy.

    The other real issue is that the two tools should provide the same outcome. You should be able to look at the Record_Status, direct correct and incorrect records to their appropriate destination, then deal with the incorrect ones after the fact. Having the field status in the output would make application error reporting more meaningful as you could nominate the actual field in error. The main issue is that Correct gazumps New in SSIS, but New gazumps Correct in the DQS Client. That’s not appropriate.

    In one of my podcasts for SQL Down Under, I had interviewed Gadi Peleg from the DQS team, and Gadi agreed that the behaviour looked incorrect, and that the output should be as follows (there's a query sketch after the list):

    •    If one of the values is Invalid – the record is Invalid
    •    If one of the values is Corrected – the record is Corrected
    •    If one of the values is New – the record is New
    •    If all values are Correct – only then is the record Correct

    Gadi encouraged me to log it on the Connect website.

    Today I got an email to say it’s been fixed. That’s a great outcome, although the update doesn’t indicate which version or service pack the change will appear in:

    http://connect.microsoft.com/SQLServer/feedback/details/779183/dqs-client-in-ssis-returns-invalid-status

  • Recent thoughts on learning Chinese

    Over the last few days, I’ve been getting a lot of questions again about what I’ve found works and doesn’t work with learning Chinese.

    Here’s a summary of the things I’ve tried and my thoughts on them:

    1. Local classes

    My wife and I attended some local classes in the first year I tried to learn. That was only about 2 hours per week, and while it was interesting, that’s way too little to be very meaningful.

    2. ChinesePod

    This is a great podcast. They concentrate on spoken Chinese and much of it is good fun. I really liked the older material with Ken and Jenny but the new hosts are getting there. They have different levels. Newcomers might find much of the Newbies level quite challenging at first, but it’s important to persist with it. The step up to Elementary is substantial but not too scary. The step up to Intermediate though, is really too big a jump. I now find Elementary too easy but Intermediate is often too hard. I can’t comment on higher levels.

    The accents on ChinesePod are pretty clean but some people complain that it’s a bit too Shanghai-ish. It seems pretty good but it’s not as clean as material that I’ve used from Beijing. You can really tell the difference.

    ChinesePod also have lessons that you can do daily or weekly with teachers. These are not inexpensive but I think they’d be good value for those that want to progress quickly. Having someone to talk to for 20 mins a day, 5 days a week should help enormously, and allows for the teacher to provide one-on-one help in your weaker areas. I’ve done a demo session and I might take this up soon.

    3. Rosetta Stone

    This is probably the best known of the traditional language learning systems. For most languages, there are 3 levels. For Chinese, there are 5 levels. Each level consists of 4 units. I’m currently half a unit from completing the 5 levels. My in-laws (Chinese) say I’ve learned an enormous amount by doing this material. I’m sure that’s true but I have to say I’ve found it very, very frustrating at times.

    A surprise for most people is that all spoken and written words in the course are in your target language. Read that last statement carefully. They argue that children learn by seeing. While this concept mostly works, I could not have learned using Rosetta Stone without having Google Translate and Pleco open the whole time. There were way too many subtleties that I could not have picked up from the pictures alone. I have had many moments of “how the @#$@#$@ could I have known that???” when working with Rosetta Stone.

    When I signed up, they also had a Studio product that gave you online access for about $60 every six months. That was amazing value and one of the main reasons that I used it. It let you book into online classes that were conducted directly from Beijing. There was no limit to the number of classes you could book into. Clearly this was a problem for them, so about a year ago they dramatically changed the rules to the point where, in my opinion, the Studio product was no longer worthwhile, for many, many reasons. That was a real pity as it was the best part of the product and they broke it.

    4. Pleco

    This is by far the best app I’ve found for using on my iPad. It helps me translate Chinese to/from English in many useful ways. I use it every day. One of the reasons that I can’t move to a Windows Phone is that it’s not available on that platform and I wouldn’t want to live without Pleco.

    5. Serge Melnyk’s Podcast

    This is an odd one. Serge is a Russian (yes, you read that correctly) who has a podcast on learning Chinese. I’ve come to really like Serge and have signed up to his site so that I could download all the content. If you have travelling time when you can listen to podcasts, it’s well worthwhile, although the accent is quite bizarre.

    6. HSK Review

    This is a great app for learning the Chinese characters that are required for the HSK exams. I hope to do at least HSK 1 and HSK 2 exams this year, perhaps HSK 3 if I’m lucky.

    7. AnkiApp

    This is in many ways an even better app for learning HSK based characters. It keeps track of what you are getting right and wrong, and how hard you find each character. It works out what you need to work on and provides great review drills for you based on that feedback. It also then assesses what level it thinks you are at for each of the levels.

    8. Google Translate

    This is invaluable and much superior translation-wise to the Bing Translator (sorry Microsoft). The pinyin input editor doesn’t work on IE11 (sorry again Microsoft) but no surprise, works like a charm on Chrome. Many of the translations are “interesting” so I’m glad I have a bunch of patient Chinese speaking buddies on Facebook that help me out where I just can’t work out what something really means.

    9. PeraPera Chinese Popup Dictionary

    This is an add-on for Chrome (and for Firefox) that has quickly become one of my favourite things. You just click a button in the top right of the browser when you want to use it, and you just point at words or phrases in Chinese that you don’t understand and it tells you about them. Awesome idea.

    10. FluentU

    This is another excellent service that I’ve signed up to. It was about $19 a month when I signed up, and it produces a bunch of content each month. I particularly like the way that different levels are supported, and that the content is often quite current and presented in a natural way, with characters, pinyin, and English all shown.

    GENERAL ADVICE

    1. PinYin

    I made the mistake of spending too long with pinyin visible when I was first learning. It can become a crutch for you, and eventually you need to learn to read Chinese characters. In Rosetta Stone, you can choose what’s displayed. For the last year or so, I changed to only having Chinese characters displayed. It was VERY painful at first but I’m so glad I persevered with it. I now only turn on the pinyin display (in addition to the characters) when I don’t understand something. It’s important to force yourself to read.

    Learning to pronounce pinyin correctly is also a challenge. There is no simple story here. You just need to learn the basic rules on how words sound, and after a while you will see “xin” but “shin” will come out of your mouth.

    Focus a lot on listening. Some sounds take a while to get used to. Good examples are the difference between the “u” in “qu” and “zhu” or the sounds that we really don’t have in English like “nv” (The v is really a u with an umlaut). Other challenges are getting used to “c” sounding more like “ts” in words like “cai”, etc. So, listen a lot.

    2. Movies and Video

    Watching movies is obviously good but it’s a real challenge. At first, I struggled to find many Chinese movies that weren’t endless martial arts or medieval warrior films.

    My Spanish friends told me they learned English by watching movies in English but with English subtitles. This is important because you don’t just want to know what something means; when learning, you also want to know what was actually said. With Chinese, you have the added problem of not being able to read the characters, and certainly not at the speed that they are displayed at. (And I might be getting old, but they are really small too).

    I’m told that Karaoke is good for learning. I haven’t tried it. Many people swear by learning languages by singing songs in the target language.

    There is a world of good content on YouTube. I really like many of the game shows. You might as well find something entertaining while you’re learning.

    3. Growing up with Chinese

    CCTV has a series called “Growing up with Chinese” that is targeted at teenagers wanting to learn. There are 100 episodes that are available for download. I watch those while trying to get some exercise on a treadmill. They start out very basic but I think they move a little too fast into more complex topics. However, the host Charlotte is adorable, and the more of these things you learn from the better.

    BIGGEST HINT

    The biggest hint though is just to get out there and try to talk to native speakers. They will correct you much more quickly than you’ll ever work out yourself. I see this every day. For example, we were at my in-laws’ for lunch. I said to my mother-in-law “我很高兴我不需要药”, and she looked puzzled, then said “我很高兴我不需要吃药”. Note that I’d left out a word that helped a lot with what I meant. Only natives can do this for you.

    With all the formal types of courses too, you'll quickly find that what they teach really is quite formal and you'll come to a sad reality (at some point) where you find that's just not what people say. That's why it's so important to get to common spoken language pretty quickly. For example, people often learn "ni hao" as the first phrase for "hello". After a while, you'll realise that it's not common for people to say that. "nin hao" which is more formal is pretty common in some situations though. A better example is to ask yourself how often you'd say a phrase like "good morning" or "good day". Most Australians would be more likely to say just "morning" rather than "good morning". Similarly, while you'll learn "zao shang hao", you'll often hear people just say "zao". For "bus" the courses will teach "gong gong qi che" but as an example, my brother in law says "gong che". There is no answer to this apart from talking to natives.

    One source of natives that I’ve found useful is local meetup groups. There are lots of Chinese who want to learn English and are happy to chat to you back/forth with both languages, plus native speakers who are just happy to help because they get to meet new people.

    Anyway, that’s it for now. If I think of more, I’ll add them later. Good luck!

  • Reliably Dropping a Database in a T-SQL Script is Too Hard

    Recently I’ve been working on a system where I had to create a test script that reliably recreated a database each time it was executed. I’ve done this many times before and thought I knew how but I was wrong. It turns out that reliably dropping a database in a script that you run from SQL Server Management Studio (SSMS) is harder than it looks. (It’s also harder than it should be).

    In Books Online (BOL), the article for DROP DATABASE says:

    “You cannot drop a database currently being used. This means open for reading or writing by any user. To remove users from the database, use ALTER DATABASE to set the database to SINGLE_USER.”

    Based on this, I had been using the following style of script to drop and recreate a database:

    USE master;  -- ensure our own connection isn't using the database
    GO

    -- If the database exists, kick out other connections, then drop it
    IF EXISTS(SELECT 1 FROM sys.databases WHERE name = N'Blah')
    BEGIN
        ALTER DATABASE Blah SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE Blah;
    END;
    GO

    CREATE DATABASE Blah ON

    ...

    However, what I’ve found is that when I execute this from SSMS, it doesn’t always work. Fairly randomly (and hard to reproduce), it fails with an error message telling me that the DROP failed because the database was in use. But isn’t that the whole purpose of setting it to single user?

    The problem seems to be that although the database gets set to single user, another connection can become that single user before the DROP DATABASE statement executes. Setting the database to SINGLE_USER isn’t enough because the current database context for the script is the master database, not the database in question.

    A number of users and fellow MVPs have told me that they’ve experienced the same issue. What we suspect is causing this is the Intellisense system within SSMS connecting to the database to check the syntax of objects later in the same script. The problem only really seems to happen in SSMS but it could happen elsewhere.

    A great suggestion from Rob Farley was to set the database OFFLINE instead of SINGLE_USER. While that would stop the other connection, the problem is that when you DROP a database that is OFFLINE, the files for the database don’t get deleted. You’d then need to have some additional script (nasty) to delete the files as well after the database was dropped.
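
    A minimal sketch of that alternative (using the same placeholder database name as the script above):

    -- OFFLINE closes existing connections and prevents new ones,
    -- but the DROP then leaves the database files on disk
    ALTER DATABASE Blah SET OFFLINE WITH ROLLBACK IMMEDIATE;
    DROP DATABASE Blah;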

    What is really needed to get around this race condition is:

    DROP DATABASE WITH ROLLBACK IMMEDIATE;

    If you agree, I’ve created a Connect item here that you can vote on:

    https://connect.microsoft.com/SQLServer/feedback/details/813014/cannot-reliably-drop-a-database-in-t-sql-script

  • SQL Server 2014 Hybrid: Storing data files in Azure storage - Bizarre or not?

    In the sessions that I was attending in Redmond yesterday, I was interested to see the presenter discussing the idea of having individual database files in the cloud as some sort of bizarre option that’s now available in SQL Server 2014. I don’t see it as bizarre at all. However, I see two distinct use cases that make complete sense to me:

     

    Overcoming Drive Limitations in Azure-based SQL Virtual Machines

     

    One of the current limitations of running SQL Server in Azure virtual machines is that you can only attach two additional data drives per CPU core, and each of those drives is up to 1 TB in size. For example, if you spin up a two core VM, you can attach 4 data drives.

     

    By allocating URLs for the data files, you can overcome this limitation. In a recent whitepaper that I was a reviewer on, we discussed the IOPS and size limitations of Azure storage based drives, both at the individual drive level and the storage account level. Please read that whitepaper for more info on those characteristics, which remain unchanged. A core takeaway though is that you shouldn’t mix storage accounts for files that make up a single database, to make sure that you maintain write ordering. You could use different storage accounts (to overcome single account limits) for different databases though.
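
    As a rough sketch of how database files end up at URLs (the storage account, container, token, and database names here are all hypothetical):

    -- Credential named for the container URL, secured with a Shared Access Signature
    CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<shared access signature token>';

    -- Database files placed directly in Azure storage
    CREATE DATABASE Orders
    ON (NAME = Orders_data,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/Orders_data.mdf')
    LOG ON (NAME = Orders_log,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/Orders_log.ldf');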

     

    Tiered Storage for Partitioned Tables in On-Premises Databases

     

    I commonly implement partitioned tables for sliding window scenarios. For example, a customer might want to keep 7 years of data online at any time. If table partitioning isn’t familiar to you, please read another whitepaper that I was a reviewer on. Even though it was created for SQL Server 2008, it’s pretty much still spot on.

     

    One way that I use partitioned tables is to keep active data in the current partition, and to keep older data in separate partitions, often one per month. This usually allows me to have the older data in read-only filegroups (there's a sketch of the command after the list below). This gives you a great situation where your application just sees a normal table, but most of the table is read-only. There are several advantages of this, some of which are:

    - While locks are still acquired on the read-only data, hints like NOLOCK are then safe to use on that data.

    - You won’t have a query accidentally overwrite your older data.

    - The older data could be stored on slower media.

    - The older data could have a higher compression rate. For example, you could use ROW compression for the active data, and PAGE compression for the older data.

    - SQL Server doesn’t need to run recovery on read-only filegroups.
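
    Marking an older filegroup read-only is a one-line change (a sketch; the database and filegroup names are hypothetical):

    -- The filegroup can't have open transactions against it at the time
    ALTER DATABASE Sales MODIFY FILEGROUP Archive2013 READ_ONLY;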

     

    Using data files in Azure storage could make really good sense for some of this older data. You could store your old read-only data in storage files and keep it available. So your current data would be fast to access but your older data would still be present, but slower to access.
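
    A sketch of what that might look like, assuming a credential for the container already exists as shown earlier (all names hypothetical):

    -- Place an archive filegroup's data file directly in Azure storage
    ALTER DATABASE Sales ADD FILEGROUP Archive2013;
    ALTER DATABASE Sales ADD FILE
        (NAME = Archive2013_data,
         FILENAME = 'https://myaccount.blob.core.windows.net/data/Archive2013_data.ndf')
    TO FILEGROUP Archive2013;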

     

    I could see a situation where you might never need to remove your old data.

     

    One other question that comes up in this situation relates to how backups work. Clearly you don’t want to be pulling all your data out of the cloud when you perform a backup, particularly if you are going to also backup to the cloud. However, piecemeal backups in SQL Server are perfect for this. If you just store your read-only filegroups and archive data in Azure storage files, you can safely perform a backup with READ_WRITE_FILEGROUPS_ONLY.
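
    A sketch of such a backup (the database name and backup path are hypothetical):

    -- Backs up the primary and all read/write filegroups,
    -- skipping the read-only archive filegroups
    BACKUP DATABASE Sales
    READ_WRITE_FILEGROUPS
    TO DISK = N'D:\Backups\Sales_ReadWrite.bak';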

     

    All in all, I see that some of these new hybrid scenarios could be quite compelling, and not bizarre at all.

  • More updates to Azure: Which changes relate to SQL people?

    The Azure team keep rolling out improvements at an amazing pace. Scott Guthrie posted recently about the latest set of changes. Here are the ones that I think are most important for SQL people:

    Import/Export Hard Drives

    Even though Azure storage accounts can hold a very large amount of data, one of the big questions has always been about how to get that data into the storage account in the first place. That question is now answered because the new Windows Azure Import/Export service lets you ship hard drives directly to the Azure team for uploading. The reverse is also available. If you export a large amount of data to a storage account, you can move it onto a hard drive and have the drive sent to you.

    Currently the service uses FedEx for transport. If you are concerned about the security of data in transit, BitLocker encryption is also supported in both directions.

    HDInsight Out of Preview Mode

    HDInsight has made its way out of preview and into general availability. It allows you to process large volumes of data (in Azure Storage) using Apache Hadoop tools (including Pig and Hive).

    HDInsight gives you a way of rapidly (and economically) spinning up a Hadoop cluster when you need one, and shutting it down again when you are done.

    Virtual Machine Gallery

    As the number of virtual machine images has continued to grow, it’s been getting harder and harder to find the image that you want when creating a new virtual machine. UI changes have been made to make this easier.

    One of the discussions that the team has been having lately is around how long images should be kept. For example, once SQL Server 2014 ships, should images of SQL Server 2008 R2, or SQL Server 2008 also be kept? I’d like to see them all kept but I’m not sure how much ongoing maintenance that is for the team. Also, if they are all kept, should they only keep the version with the operating system that was the most current at the time? If anyone has strong opinions on this, I’d love to hear them and I’ll pass them on.

    General Changes

    You can now configure endpoint security using PowerShell.

    You can now configure alerts related to billing. You can get an email when your bill exceeds a given threshold. One thing I’d really like to see (which isn’t there yet) is the ability to simply have your bill emailed to you each month, even when it’s prepaid. Does anyone else feel the need for that?

    For further details on areas that I haven’t mentioned, you’ll find Scott’s post here: http://weblogs.asp.net/scottgu/archive/2013/11/04/windows-azure-import-export-hard-drives-vm-acls-web-sockets-remote-debugging-continuous-delivery-new-relic-billing-alerts-and-more.aspx
