
SQLBI - Marco Russo

SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus

  • New training on Power Pivot with recorded video courses

    Alberto Ferrari and I started delivering training on Power Pivot in 2010, initially in classrooms and then also online. We also recorded videos for Project Botticelli, where you can find content about Microsoft tools and services for Business Intelligence. In recent months, we produced a recorded video course for people who want to learn Power Pivot without attending a scheduled course.

    We split the entire Power Pivot training course into three editions, offering the more introductory modules at a lower price:

    • Beginner: introduces Power Pivot to any user who knows Excel and wants to create reports with larger and more complex data structures than a single table.
    • Intermediate: improves your skills in Power Pivot for Excel, introducing the DAX language and important features such as CALCULATE and Time Intelligence functions.
    • Advanced: includes in-depth coverage of the DAX language, which is required for writing complex calculations, and other advanced features of both Excel and Power Pivot.

    There are also two bundles that include two or three editions at a lower price.

    Most importantly, we have a special 40% launch discount on all published video courses using the coupon SQLBI-FRNDS-14, valid until August 31, 2014. Just follow the link to see a more complete description of the available editions and their discounted prices. Regular prices start at $29, which means that you can start a training course for less than $18 using the special promotion.

    P.S.: we recently launched a new responsive version of the SQLBI web site, and we now also have a page dedicated to all the videos available from our sessions at conferences around the world. You can find more than 30 hours of free videos here: http://www.sqlbi.com/tv.

  • DIVIDE vs division operator in #dax

    Alberto Ferrari wrote an interesting article about DIVIDE performance in DAX. This new function was introduced in SQL Server Analysis Services 2012 SP1, so it is also available in Excel 2013 (which still doesn’t have other features/fixes introduced by subsequent Cumulative Updates…). The idea is that instead of writing:

    IF ( Sales[Quantity] <> 0, Sales[Amount] / Sales[Quantity], BLANK () )

    you can write:

    DIVIDE ( Sales[Amount], Sales[Quantity] )

    There is a third optional argument in DIVIDE that defines the result in case the denominator (second argument) is zero, and by default its value is BLANK, so I omitted the third argument in my example.
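
    For example, if you prefer returning 0 instead of BLANK when the quantity is zero, you can pass the alternate result explicitly, as in this minimal sketch using the same columns as above:

    DIVIDE ( Sales[Amount], Sales[Quantity], 0 )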

    Using DIVIDE is very important, especially when you use a measure in MDX (for example in an Excel PivotTable), because it raises the chance that the non-empty evaluation for the result is performed in bulk mode instead of cell-by-cell. However, from a DAX point of view, you might find it’s better to use the standard division operator, removing the IF statement. I suggest you read Alberto’s article, because you will find that an expression applying a filter using FILTER is faster than using CALCULATE, which is against any rule of thumb you might have read until now!

    Again, this is not always true and depends on many conditions – trying to simplify, we might say that for a simple calculation the query plan generated by FILTER could be more efficient – but, as usual, it depends, and 90% of the time using FILTER instead of CALCULATE produces slower performance. Do not take anything for granted, and always check the query plan when performance is your first concern!

  • Issues with July 2014 Office Update and Click-to-Run #excel #powerbi

    An interesting experience with Office Click-to-Run: trusting it to install updates the night before a conference where you are a speaker. I’m writing this blog post while Alberto Ferrari is delivering his part of the preconference training day at SQL Bits, about how to create a complete solution in Power BI. But I think it’s important to share this.

    Symptoms:

    • Excel crashes when you click Save As
    • Word crashes when you click Save As
    • Outlook crashes when you click File > Office Account
    • PowerPoint crashes when you click Save As

    In a word: Office no longer works well and crashes often.

    Yesterday I installed the latest update of Office using the Click-to-Run distribution: it’s not Windows Update, so I cannot check in Windows Update which updates are installed. You have to check the version number you have installed, and this can be hard if the window that should show the number makes the application crash.

    However, after some investigation:

    • I took a look at the page that shows the versions released for click-to-run: http://support.microsoft.com/gp/office-2013-click-to-run
    • I had the version 15.0.4631.1002 – July 2014 update (http://support.microsoft.com/KB/2980001)
    • I wanted to revert back to the previous version (15.0.4623.1003 – June 2014 update)
    • I followed instructions described here: https://community.office365.com/en-us/f/172/t/251109.aspx
      • Open an administrative command prompt, then run one of the next two commands based on your version:
        • For an Office installation in a 32-bit version of Windows:
          cd %programfiles%\Microsoft Office 15\ClientX86
        • For an Office installation in a 64-bit version of Windows:
          cd %programfiles%\Microsoft Office 15\ClientX64
      • Then run the following command:
        officec2rclient.exe /update user updatetoversion=15.0.4623.1003
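
    Putting the steps together, the whole revert is just two commands from an administrative command prompt (this sketch assumes the 64-bit path; use the ClientX86 folder on a 32-bit version of Windows):

    REM Sketch only: revert Click-to-Run Office to the June 2014 build
    cd /d "%programfiles%\Microsoft Office 15\ClientX64"
    officec2rclient.exe /update user updatetoversion=15.0.4623.1003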

    I think this procedure could be useful in the future, in case a similar issue happens again (hopefully not…).

    Lessons learned: never install *any* update the day before speaking at a conference. Not only Windows Updates, but also Office Click-to-Run updates…

  • Use of RANKX with decimal numbers in DAX #powerpivot #ssas #tabular

    Using decimal numbers in Power Pivot and Tabular might produce small rounding differences in certain calculations. This is nothing new when you work with floating point, as many programmers know. The implementation of RANKX might suffer from a behavior that produces wrong results when the measure used for the ranking returns a decimal value.

    For example, consider the following model, where there are three names (A, B, C), each one with a value resulting from the sum of rows in the fact table, and a Pos measure calculated using the following definition:

    Pos :=
    IF (
        HASONEVALUE ( Sample[Name] ),
        RANKX (
            ALL ( Sample[Name] ),
            CALCULATE ( SUM ( Sample[Value] ) )
        )
    )

    [Figure: PivotTable showing the Pos measure with values from 1 to 3]

    In this case, everything works fine and the Pos has values from 1 to 3. However, when you select only one name, you might see a wrong number. In the following example, the Pos value is higher than the number of available names.

    [Figure: PivotTable with a single name selected, where Pos is higher than the number of available names]

    It is not easy to find a reproducible case; usually the rounding error results from complex calculations. The purpose of the previous example is to describe the symptoms you might experience.

    Under the covers, RANKX calculates the value of the measure for each element of the list of names, and then it searches that list for the result of the expression evaluated in the current filter context. If there is any rounding error in this operation… the match does not happen (or it might happen with the wrong index, even if this is less likely) and you see a wrong Pos number as a result.

    Hopefully, a fix to this behavior will be released sooner or later. In the meantime, there are two possible workarounds:

    1. Cast the expression to currency using the CURRENCY function, so that the values compared are of the Currency data type, which is not subject to the described issue (see the sketch after this list)
    2. Store the original value in a column of the Currency data type, so that the result is still a currency and the match works well
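
    For example, here is a minimal sketch of the first workaround applied to the Pos measure above; the only change is wrapping the ranking expression in CURRENCY:

    Pos :=
    IF (
        HASONEVALUE ( Sample[Name] ),
        RANKX (
            ALL ( Sample[Name] ),
            CURRENCY ( CALCULATE ( SUM ( Sample[Value] ) ) )
        )
    )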

    By using either one of the workarounds, you will see the correct result:

    [Figure: PivotTable showing the correct Pos values after applying a workaround]

    The first approach (casting the result) might have a minimal impact on query performance. I would prefer, whenever possible, storing the values in a Currency column, so that any measure will not suffer from this issue.

    In any case, be careful about the data type of the expressions used in a RANKX function.

  • Learn #tabular and #dax at PASS Summit 2014

    Over the last few months I’ve seen an increasing adoption of Analysis Services Tabular (and I’m writing a longer article about a particular area of adoption which was probably not expected – more on that in a few weeks). This year at PASS Summit 2014 there are plenty of opportunities to learn (or improve your skills on) Analysis Services Tabular:

    • Mon, Nov 03, 2014: Data Modeling in SSAS Tabular – pre-conference seminar by Marco Russo (yes, myself)
      • This one-day seminar introduces Tabular models using Visual Studio. The only prerequisite is knowledge of the SQL language. Previous experience in Analysis Services Multidimensional or other OLAP / analytical tools is welcome but not required. If you have already created projects in Tabular, you will understand how the Tabular engine works and how to create optimal data models. Compression efficiency is important, and it impacts the way you model tables and relationships. DAX is not included in this day, because it is covered by Alberto in another seminar the following day.
    • Tue, Nov 04, 2014: From 0 to DAX – pre-conference seminar by Alberto Ferrari
      • Whether you use Tabular or Power Pivot, and even if you started your Tabular experience only the day before, this seminar introduces you to the DAX language syntax and the important concepts (filter context, evaluation context) you have to know. This knowledge enables you to write the formulas you need, without the “try and see” approach that can be very confusing in DAX. Have you ever written a DAX formula that didn’t return what you expected? This is the right seminar for you.
    • General sessions (75 minutes):
      • Advanced Modeling with Analysis Services Tabular (Alberto Ferrari)
        • This session is about how to overcome “limitations” in Tabular data modeling by creating virtual relationships, computing balances at a point in time without snapshots, dynamic currency conversion, measures of active events, surveys, and basket analysis.
      • DAX Patterns (Marco Russo)
        • I will explain some of the patterns available on the DAX Patterns web site. Yes, you can read the articles and use the patterns, but this session has the goal of explaining how these patterns work, and not only how to use them.
      • Working with Time Functions in DAX (Michael Antonovich)
        • I don’t know Michael, but if you have never used Time Intelligence functions in DAX, this is a topic that you have to study. When you feel ready, you can still discover how to rewrite Time Intelligence functions with Time Patterns!
      • Load Testing Analysis Services (Bob Duffy)
        • Bob wrote interesting articles about the performance of Analysis Services, like this one about partitioning in Tabular, and I will certainly attend this session as an attendee (I hope it will not overlap with mine!)
    • Even if they are not strictly related to Tabular, I made a personal selection of sessions that a BI developer working with Analysis Services Tabular and/or Multidimensional should see:

    This is going to be a very interesting PASS Summit. I’ve seen a rich session portfolio also for ETL, DWH, SSIS, SSRS and, of course, SQL Server! Please let me know if I missed some important session for the BI developer target!

    SIDE NOTE: BI Sessions at PASS Summit – I’ve seen comments about moving BI-related sessions to the PASS Business Analytics Conference, giving more sessions to SQL Server. I’m not sure it would be a good idea for PASS. Today, the PASS BA Conference is not yet a mature conference; I would like to see more advanced sessions for BI developers there, but the point is that PASS BA attracts only a few hundred attendees, whereas at the PASS Summit 35-40% of the audience is made up of people working in the Business Intelligence arena. Dropping all the BI sessions would probably mean cutting a large part of the conference budget. I’ve seen PASS grow well, and the SQL Saturday initiative is incredible. I understand that nobody would take the risk of damaging the main source of revenue of the organization that makes all this possible. Thus, I see a future for the PASS BA Conference as the conference for emerging tools and technologies (a sort of “Build” conference for BI – it’s not that today, but it’s a direction I would like to see), whereas the PASS Summit is the conference for established and released tools and products (a sort of “TechEd” conference for BI & SQL DBA & DEV).

  • Calculate the rolling average for 12 months in #DAX and a nice IF optimization

    Alberto published the Rolling 12 Months Average in DAX article on SQLBI a few days ago. It includes interesting considerations about how to avoid the pitfall of touching the boundaries of the Date table, which could result in a calculation error.

    More interesting for the geeks among us is the optimization of the measure to avoid the IF statement. As you may already know if you watched some of our recent events or courses, using an IF statement in a measure might affect performance, especially (but not only) when you query a Tabular or Power Pivot model from MDX (i.e. from a PivotTable in Excel). In the article, instead of using:

    Avg12M := IF ( [Sales] <> 0, <expression> )

    the formula is

    Avg12M := DIVIDE ( [Sales], [Sales] ) * <expression>

    As you can see, the only purpose of DIVIDE here is to return 1 if the value to check is other than 0, and BLANK if it is 0. The query plan generated by this expression is much more efficient than the one produced by IF, and this technique can be used in many other similar scenarios.
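
    Just to illustrate the technique (this is not the exact formula from Alberto’s article: the [Sales] measure, the 'Date' table, and the fixed 12-month divisor are assumptions of this sketch):

    -- Sketch only: [Sales], 'Date'[Date], and dividing by a fixed 12 are assumptions
    Avg12M :=
    DIVIDE ( [Sales], [Sales] )
        * DIVIDE (
            CALCULATE (
                [Sales],
                DATESINPERIOD ( 'Date'[Date], MAX ( 'Date'[Date] ), -12, MONTH )
            ),
            12
        )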

  • Calculate New, Returning, Lost, and Recovered Customers in #dax

    Calculating the number of new and returning customers is a recurring question. I would say this is a “classical” Business Intelligence problem, very common in marketing departments. I worked on these problems with many customers, with small and large datasets, and I wrote a DAX Pattern, “New and Returning Customers”, showing how to calculate:

    • New Customers: customers who never made any purchase before (their first purchase is in the period considered)
    • Returning Customers: customers who already bought something in the past
    • Lost Customers: customers who bought something but did not buy in the last N days
    • Recovered Customers: previously “lost customers” who made a new purchase

    This is not a brand new topic; you can find many other blog posts on it (Chris Webb, Javier Guillén, Gerhard Brueckl, David Hager, Rob Collie), so my goal was to show very generic formulas that are generally the best solution in terms of performance. This makes the formulas less readable, such as the following:

    [Returning Customers] :=
    COUNTROWS (
        CALCULATETABLE (
            VALUES ( <customer_key_column> ),
            VALUES ( <customer_key_column> ),
            FILTER (
                ALL ( <date_column> ),
                <date_column> < MIN ( <date_column> )
            )
        )
    )

    As you see, using CALCULATETABLE ( VALUES ( table[column] ), VALUES ( table[column] ), … ) seems useless. Why count the rows returned by VALUES and also pass it as a filter argument? This is a not-so-intuitive behavior of CALCULATE. The first argument is an expression that will be evaluated in a modified filter context. The third argument is a FILTER over the date column, which extends the range of dates considered, including all the past sales transactions. At this point, the first VALUES alone would return any customer who made a purchase in the past, but the second argument restricts the result to those who also made a purchase in the current selection of time. The final result is an AND condition between two sets of customers (the intersection of the two sets), which is faster than calculating, for each customer who made a purchase in the current selection of time, the number of past transactions and then filtering only those with at least one past transaction.
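
    For instance, here is the same formula with the placeholders replaced by hypothetical column names (Sales[CustomerKey] and 'Date'[Date] are just example names, not part of the pattern):

    -- Example names only: replace Sales[CustomerKey] and 'Date'[Date] with your own columns
    [Returning Customers] :=
    COUNTROWS (
        CALCULATETABLE (
            VALUES ( Sales[CustomerKey] ),
            VALUES ( Sales[CustomerKey] ),
            FILTER (
                ALL ( 'Date'[Date] ),
                'Date'[Date] < MIN ( 'Date'[Date] )
            )
        )
    )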

    In general, I prefer using more readable DAX formulas, even in DAX Patterns, optimizing them only when necessary. But in this case the performance difference might be important (visible to the user) even with just a few thousand customers. As usual, any feedback on the New and Returning Customers pattern is very welcome!

  • Basket Analysis with #dax in #powerpivot and #ssas #tabular

    A few days ago I published a new article on the DAX Patterns web site describing how to implement Basket Analysis in DAX. This is a very classical topic, also covered in the many-to-many revolution white paper. It has also been discussed in several blog posts, listed here in historical order:

    As usual, in DAX Patterns we try to present the required DAX formulas in a way that is easy to adapt to specific models. We also try to show a good implementation from a performance point of view. Further optimizations are always possible in DAX. However, in order to keep the model simple to adapt to different scenarios, we avoid presenting optimizations that would require particular assumptions or restrictions on the data model.

    I hope you will find the Basket Analysis pattern useful. Even if you do not need it today, reading the DAX formulas is a good exercise to check your knowledge of evaluation contexts in DAX. For example, describing how the following expression works is not a trivial task!

    [Orders with Both Products] :=
    CALCULATE (
        DISTINCTCOUNT ( Sales[SalesOrderNumber] ),
        CALCULATETABLE (
            SUMMARIZE ( Sales, Sales[SalesOrderNumber] ),
            ALL ( Product ),
            USERELATIONSHIP ( Sales[ProductCode], 'Filter Product'[Filter ProductCode] )
        )
    )

    The good news is that you can use the patterns even if you do not really understand all the details of the DAX formulas you are using!
    Any feedback on this new pattern is very welcome.

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study based on one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don’t often see in other case studies (which are usually more marketing-oriented).

    This white paper has the following structure:

    • Requirements (data model, capacity planning, client tool)
    • Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
    • Data Model optimizations (memory compression, query performance, scalability)
    • Partitioning and Processing strategy for near real-time latency
    • Hardware selection (NUMA analysis, Azure VM tests)
    • Scalability tests (estimation of maximum users per node)

    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design a solution based on Tabular, this white paper is a must-read. But even if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true:

    […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house.

    At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers.  But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) – do this.

    Owen Graupman, inContact

    I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical “BI system”), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

  • The updated Survey pattern for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    One of the first models I created for the many-to-many revolution white paper was the Survey one. At the time, it was in Analysis Services Multidimensional, and then we implemented it in Analysis Services Tabular and in Power Pivot, using the DAX language.

    I recently reviewed the data model and published it in the Survey article on the DAX Patterns site. The Survey pattern is the foundation for others, such as Basket Analysis, and it is widely used in many different business scenarios. I was particularly happy to learn it has been used to perform data analysis for cancer research!

    In this article I did some maintenance on the DAX formulas, checking that proper error handling is part of the formulas, and highlighting some differences in slicer behavior between Excel 2010 and Excel 2013, which could be particularly important for the Survey scenario. As usual, we provide sample workbooks for both Excel 2010 and Excel 2013, and we use DAX Formatter to make the DAX code easier to read. Any feedback will be appreciated!

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor for Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it’s worth showing the steps required.

    First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as:
    Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012


    Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane:

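
    For example, this is a minimal DAX query you might paste in the Query pane (a sketch only: the table and column names assume the AdventureWorks Tabular sample, so adapt them to your model):

    -- Example only: table and column names assume the AdventureWorks Tabular sample
    EVALUATE
    SUMMARIZE (
        'Internet Sales',
        'Date'[Calendar Year],
        "Sales Amount", SUM ( 'Internet Sales'[Sales Amount] )
    )
    ORDER BY 'Date'[Calendar Year]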

    You can also use the Query Designer window, which doesn’t provide any particular help in writing the DAX query, but at least shows a preview of the result of the query execution.


    I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability!

    If you want to learn the DAX query language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I immediately tried the new feature (which was available right away – a welcome surprise for a Microsoft announcement of a new release) and I ran into several issues trying to use existing data models.

    The forecasting feature has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you typically use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used on the x-axis of a line chart for forecasting, because forecasting requires a date or numeric column. There are also other requirements, and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.
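
    One possible approach (a sketch of the idea, not necessarily the exact technique in the article; 'Date'[Date] is an assumed column name) is to add a calculated column of Date data type that maps every day to the first day of its month, and use that column on the x-axis:

    -- Assumes a 'Date' table with a Date column; returns the first day of the month
    MonthStart = DATE ( YEAR ( 'Date'[Date] ), MONTH ( 'Date'[Date] ), 1 )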

  • Power Query in Modern Corporate BI–Copenhagen, June 3, 2014–#powerquery

    I will be in Copenhagen to deliver the SSAS Tabular Workshop on June 2-4, 2014 (a few seats are still available, but hurry up!).

    In the same week I will be a speaker at an evening community event, MsBIP møde nr. 21, delivering the Power Query in Modern Corporate BI session that I also presented at TechEd North America 2014 last week. It’s not just a session about Power Query; there is a broader scope related to Corporate BI vs. Self-Service BI, which is open to many considerations. I think that the two worlds can (and should) collaborate, instead of fighting each other, especially when there is an existing investment in Corporate BI. I hope to meet many of you there!

  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    Comparing sales and budget, or costs and budget, is a very common operation. However, the budget and the data you compare it with often have different granularities and are stored in different tables. There are two ways to handle this: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it’s not defined.

    For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to allocate the data manually (usually in an Excel worksheet!).

    If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns. If you come from an MDX/OLAP background, at first you might find it hard to work without the attribute hierarchies that help you propagate the budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to “debug”, even for more complex allocation formulas. You just have to be careful in writing the DAX formula, but the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table!

    This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.

  • Meet me at TechEd 2014 – where and when #msteched

    If you are attending TechEd North America in Houston this week, stop me and say hello! I am always happy to meet blog readers, and of course if you have a question or a topic to discuss, try to find me at the BI booth in the expo area. I put together a list of where and when you can find me (thanks to Kasper for the idea):

    • Tuesday, May 13, 10:45am-12:30pm at Microsoft booth in expo area (Data Platform and Business Intelligence: Data Platform)
    • Tuesday, May 13, 2:15pm-4:00pm at Microsoft booth in expo area (Datacenter & Infrastructure Management: Application Solutions)
    • Tuesday, May 13, 6:30pm-8:30pm at Ask the Experts
    • Wednesday, May 14, 3:15pm-4:30pm in room 330 - DBI-B323 Power Query in Modern Corporate BI
    • Thursday, May 15, 8:30am-9:45am in room 330 - DBI-B322 Improving Power Pivot Data Models for Microsoft Power BI
    • Thursday, May 15, 12:30pm-3:15pm at Microsoft booth in expo area (Datacenter & Infrastructure Management: Application Solutions)

    If you are not attending TechEd, remember you will be able to see most of the recordings on Channel 9.
