
Microsoft OLAP by Mosha Pasumansky

  • Big announcement coming at MS BI Conference

    It isn’t news anymore that there is going to be a big announcement during the MS BI Conference next week. The signs of it show up in the session descriptions – like in this one: “This session will describe and detail some important announcements that will be made during the BI conference” – and members of the product team have dropped hints in their blogs – like in Ashvini Sharma’s blog. I especially liked his recollection of the OLAP Services 7.0 announcement back in 1997: “I remember a conference back in the 7.0 days when we introduced OLAP Services and someone from the audience hurried outside to make a phone call. I overheard him saying "You need to see this... it's going to be big!”. Well folks, we get big things like that once a decade, and the time is up now. The announcement will be made during the Ted Kummert and Tom Casey keynotes Monday morning, but to get the details (at least the details the product team is ready to talk about now), come to Amir’s session CL211 on Tuesday at 2:45PM. Of course, this material will be repeated, enhanced, rehashed etc. many times later as well, but this is your chance to be present at a historic moment – comparable to (if not exceeding) the unveiling of OLAP Services 7.0. I will be there; hope you can make it too. Trust me, you won’t regret it. This is your chance to tell your grandchildren “I was there when Microsoft for the first time ever talked about …”, and Amir is one of the best technical speakers (maybe second best after Steve Jobs).

    What will the announcement be about? Well, of course, I am not going to tell now! There is already speculation going on in the blogosphere, but in three more days everybody will know what’s going to happen. See you there!

  • Displaying duration values – MDX expressions in FORMAT_STRING

    Suppose you have a calculation (or even a real measure) which returns elapsed time or duration. We want to present it to the user as such, i.e. broken into hours, minutes and seconds. This sounds easy enough – we would just use an appropriate FORMAT_STRING. If we read the documentation, we quickly find that by using ‘hh’ for hours, ‘mm’ for minutes and ‘ss’ for seconds, we can have ‘hh:mm:ss’ formatting, i.e.

    with member measures.delta as 0.14159
     ,format_string = 'hh:mm:ss'
    select {measures.delta} on 0
    from [Adventure Works]

    will produce the result ‘03:23:53’. This works great, but if the duration is greater than one day, we need to include the number of days elapsed as well. So it may look like the format string ‘d “days” hh:mm:ss’ would work, but there are a few problems with using ‘d’ inside a format string. It is supposed to give the day of the month, therefore it cannot work for more than 31 days, and it has other undesired effects on durations – like returning 31 for values between 0 and 1, and 30 for values between 1 and 2. So we need a better solution. And the solution is to use a real MDX expression to build the FORMAT_STRING calculation property.

    I discovered that even some advanced MDX users were surprised that MDX expressions can be used with FORMAT_STRING. Certainly, everybody knows that it is possible to use MDX expressions for other calculation properties, like BACK_COLOR – there is a whole chapter dedicated to it in my “Fast Track to MDX” book – or even more exotic ones, like LANGUAGE – see this blog for example. But there are no examples of using MDX expressions in the most popular and widely used calculation property – FORMAT_STRING. (Perhaps the reason is that in AS2000 it was not possible, but it certainly is possible according to the OLEDB for OLAP spec, and both AS2005 and AS2008 fully support it.)
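
    For comparison, here is a minimal sketch of an MDX expression in BACK_COLOR (my illustration, using Adventure Works measures – the color values are BGR integers, so 255 is red and 65280 is green):

    with member measures.profit as 
      [Measures].[Internet Sales Amount] - [Measures].[Internet Total Product Cost]
     ,back_color = IIF(measures.profit < 0, 255 /* red */, 65280 /* green */)
    select {measures.profit} on 0
    from [Adventure Works]
    cell properties value, formatted_value, back_color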

    So in our case, one way to write such an expression would be

    with member measures.delta as 3.14159
     ,format_string 
       = 
        IIF(
          measures.delta < 1
         ,'hh:mm:ss'
         ,'"' + cstr(int(measures.delta)) + ' days" hh:mm:ss')
    select {measures.delta} on 0
    from [Adventure Works]

    And we get the expected result: ‘3 days 03:23:53’.

  • Mosha’s blog in other languages

    Apparently some people find my blog useful enough to bother translating it into other languages. Two sites in particular behaved nicely and asked my permission. So if you find it easier to read Japanese or Russian rather than English, the following resources could be helpful:

    Japanese

    PASSJ 長崎友嘉 Blog (a blog on SQL Server BI) - http://blogs.sqlpassj.org/nagasaki/

    This site is hosted at PASSJ – the Professional Association for SQL Server Japan – by Tomoyoshi Nagasaki and is dedicated to SQL BI. There are plenty of translations and original articles there. I was honored to have a special section dedicated just to translations of my blog at http://blogs.sqlpassj.org/nagasaki/category/558.aspx

    Russian

    Infology.ru - http://www.infology.ru/

    This is a project of Konstantin Lissianski and Andrei Pivovarov – a site dedicated to data warehousing, business analytics, OLAP and CRM. It has a very wide collection of mostly translated and also some original material. Again, I was honored to have translations of my blog featured on this site at http://www.infology.ru/category/author/pasumansky/ alongside translations from Ralph Kimball, Bill Inmon, Nigel Pendse and Curt Monash.

    There are translations into other languages too, but since they didn’t ask my permission, I will not link to them.

    Interestingly, the “Optimizing Count(Filter(...)) expressions in MDX” post turned out to be the most popular one, as it got translated into the largest number of languages.

  • Hierarchical trace profiling with MDX Studio

    The profiling and tracing functionality of Analysis Services is very valuable in analyzing query performance. A trace contains a wealth of information about what is happening in the server. However, this information might not be very easy to digest. Profiler just shows a flat list of lines and lines of trace events, yet there is an inherent hierarchical grouping of events. For example, within Non Empty processing, we may have multiple SE queries, each one triggering several partition queries etc. It is difficult to pick up this hierarchical grouping from the trace, partly because the design of trace events is somewhat inconsistent. Some events have clear Begin/End boundaries – like the QueryBegin/QueryEnd events; others have only one event at the end – like QuerySubcube; yet others, in addition to the Begin/End pair, also have multiple “Current” events in the middle – like SerializeResultsCurrent. And while some important fragments of query execution have special events – like NON EMPTY and Serialize Results – others, like calculating query named sets and axes, don’t. The new release of MDX Studio (version 0.4.7 – can be downloaded here) solves the problems mentioned above as much as possible. It features a “Profile” button (right next to the “Execute” and “Debug” buttons), which produces a hierarchical trace view for the query.

    Let’s take a look at the following example:

    SELECT 
      {
        [Measures].[Reseller Sales Amount]
       ,[Measures].[Internet Sales Amount]
      } ON 0
     ,NON EMPTY 
        (
          [Date].[Date].MEMBERS
         ,[Product].[Subcategory].MEMBERS
        ) ON 1
    FROM [Adventure Works];

    If we profile it in MDX Studio we will get the following trace:

    We see in this picture that the query execution consisted of two major steps – NON EMPTY and serialization of results. NON EMPTY took 14.6% of the time, and serialization of results took 85.34%. Within NON EMPTY, there were 8 subcube events, 6 of them answered from cache and 2 requiring queries to partitions. Querying the 4 partitions in the Reseller Sales measure group took 17.02% of NON EMPTY and 2.5% of the total time, and the same holds for the Internet Sales measure group (not surprisingly, these numbers are in agreement with the perfmon counters reported on the left).

    It is well known that the performance of an MDX query depends a lot on what data (if any) is in the server’s cache. Therefore, MDX Studio in Profile mode executes the query twice – once on cold caches and once on warm caches – and displays hierarchical traces and perfmon stats for both. It also computes the ratio of time spent on cold caches to time spent on warm caches (sometimes this can serve as a first approximation of the SE vs. FE split, but a more accurate calculation will be available in the next version of MDX Studio). For example, here is how it looks for the following query:

    SELECT [Measures].MEMBERS ON 0 FROM [Adventure Works]

    Cold cache execution

    Time              : 671 ms
    Calc covers       : 254
    Cells calculated  : 3734
    Sonar subcubes    : 89
    SE queries        : 200
    Cache hits        : 146
    Cache misses      : 57
    Cache inserts     : 21
    Cache lookups     : 203
    Memory Usage KB   : 5108

    Warm cache execution

    Time              : 46 ms
    Calc covers       : 30
    Cells calculated  : 33
    Sonar subcubes    : 1
    SE queries        : 8
    Cache hits        : 8
    Cache misses      : 0
    Cache inserts     : 0
    Cache lookups     : 8
    Memory Usage KB   : 0

    Percent cold caches : 93.14 %
    Percent warm caches : 6.86 %

    In order to get accurate timing for the “Cold cache execution”, MDX Studio does a couple of things. First, the obvious one: it clears all the caches. But this by itself is not enough. The problem is that the ClearCache command also resets the precomputed static results of the MDX Script. They will be recalculated when the next query executes, but we really don’t want to include the time for MDX Script execution in the query execution times. Therefore, MDX Studio runs an empty query first, just to make sure that the MDX Script has executed.
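
    For reference, the warm-up step can be as simple as the smallest legal query against the cube – a sketch (my illustration; the exact query MDX Studio sends may differ):

    -- returns just the default measure's single cell, but forces
    -- the MDX Script (including static named sets and assignments) to execute
    SELECT FROM [Adventure Works];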

    Profiling is an important performance tuning tool, but it can also help find semantic problems with an MDX query. Let’s consider the following example – we want to show the top 10 products by their sales. The following query is supposed to provide the answer:

    SELECT 
      [Measures].[Internet Sales Amount] ON 0
     ,TopCount
      (
        [Product].[Product].[Product].MEMBERS
       ,10
       ,[Measures].CurrentMember
      ) ON 1
    FROM [Adventure Works];

    Of course, a seasoned MDX practitioner will immediately see the problem with the above query, but for someone less experienced, the problem might not be particularly obvious. Let’s profile this query and look at the hierarchical trace:

    Now it is obvious to anybody looking at this picture that there is something wrong here. We would expect to see querying of the Internet_Sales_200X partitions, since we wanted to see products by sales – but why is there querying of the Reseller_Sales_200X partitions? We didn’t ask anything about Reseller Sales in the query. Since these Reseller_Sales partitions show up under the “Calculate axes” group of events, we know that there is something wrong with the MDX defining the axes. Indeed, the expression

    TopCount([Product].[Product].[Product].MEMBERS ,10 ,[Measures].CurrentMember)

    is the problem here. It uses [Measures].CurrentMember as the criteria for TopCount, but in MDX, axes are computed independently of each other, so it doesn’t matter that we have [Internet Sales Amount] on columns – the TopCount is computed in the context of the default measure, which we now suspect must be from the Reseller Sales measure group. Indeed, if we start “Debug” mode in MDX Studio and navigate to the [Measures].CurrentMember expression in the parse tree, we will see that it resolves to [Measures].[Reseller Sales]. Therefore, changing the query to

    SELECT 
      [Measures].[Internet Sales Amount] ON 0
     ,TopCount
      (
        [Product].[Product].[Product].MEMBERS
       ,10
       ,[Measures].[Internet Sales Amount]
      ) ON 1
    FROM [Adventure Works];

    fixes the problem, which again can be verified in the hierarchical trace window.

  • Ratio to “Parent on Rows” in MDX

    Every now and then the issue of computing a generic “ratio to parent” calculation comes up. There is a good overview of the problem in Darren Gosbell’s blog post “MDX ratio of current parent issue”. I want to pick up the conversation where Darren left it:

    It is technically possible to get "kind of" close doing something like the following using the Axis() function: <MDX fragment skipped> which sort of gives us a  "Percent of Row parent" calculation and this is probably the best you can do, but if you crossjoin multiple hierarchies on the row axis we are in trouble again.

    I am not a big fan of calculations which depend on the contents of an axis, mostly because when the result of the same expression at the same coordinates differs from query to query (and this happens when the expression uses the Axis function), caching is defeated, and as a result performance suffers. However, if there is a real business requirement to build a calculated member like that, it is certainly possible, and in the most generic way – one which works no matter how many hierarchies are on the row axis. What’s more interesting, it is even possible to do it in pure MDX, and this MSDN forum thread provides different approaches. However, none of these approaches is practical. The practical solution is to use the following stored procedure:

    public decimal RatioToParent(Set axis, Expression exp)
    {
        Hierarchy h = null;
    
        // Iterate over all hierarchies in the set
        int cHier = axis.Hierarchies.Count;
        int iHier;
        for (iHier = cHier-1; iHier >= 0; iHier--)
        {
            h = axis.Hierarchies[iHier];
            // and find the hierarchy where the current member is not yet at the highest possible level
            if (h.CurrentMember.ParentLevel.LevelNumber > 0)
                break;
        }
    
        // If there were no such hierarchy found - report ratio of 100%
        if (h == null || iHier < 0)
            return 1;
    
        // Since current member in this hierarchy is not yet at the highest level, we can safely call .Parent
        TupleBuilder tb = new TupleBuilder(h.CurrentMember.Parent);
        // and divide value at current cell by the value of its parent
        return (decimal)exp.Calculate(null) / (decimal)exp.Calculate(tb.ToTuple());
    }

    The way to use this stored procedure in the definition of a calculated member is shown in the following query – we have three different hierarchies on rows, and all the ratios come back correctly:

    WITH 
      MEMBER Measures.SalesRatioToParent AS 
        IIF
        (
          IsEmpty([Measures].[Sales Amount])
         ,null
         ,ASSP.ASStoredProcs.SetOperations.RatioToParent
          (
            Axis(1).Item(0)
           ,[Measures].[Sales Amount]
          )
        )
       ,FORMAT_STRING = 'Percent'
       ,NON_EMPTY_BEHAVIOR = [Measures].[Sales Amount]
    SELECT 
      {
        [Measures].[Sales Amount]
       ,[Measures].[SalesRatioToParent]
      } ON 0
     ,NON EMPTY 
        (
          [Promotion].[Promotion Category].MEMBERS
         ,[Product].[Category].MEMBERS
         ,[Sales Territory].[Sales Territory].MEMBERS
        ) ON 1
    FROM [Adventure Works];

    Note the careful combination of the IIF(IsEmpty([Sales Amount]), NULL, …) construct and NON_EMPTY_BEHAVIOR. The condition of the IIF ensures that NON_EMPTY_BEHAVIOR is set correctly in this scenario.

    I added the RatioToParent function to my local copy of the Analysis Services Stored Procedures project; I hope that it will get approved and become part of the official ASSP release.

  • Inspecting calculation dependencies with MDX Studio

    MDX Studio is a universal tool for developing, debugging and analyzing MDX. With the 0.4.6 release, there is a new feature in MDX Studio which allows you to inspect and analyze dependencies between different calculations. Dependencies are automatically calculated whenever the Parse Tree is built, and can be accessed in a dedicated tab. The picture below shows some dependencies from the MDX Script of the Adventure Works sample cube (click on the image for a larger picture).

    Of course, such a picture can be overwhelming, therefore there is a helper navigational grid on the left side. For every object participating in the dependency graph, it shows the number of objects it depends on (either directly or indirectly) and the number of objects depending on it (either directly or indirectly). Clicking on an object name shows all the dependencies of that object, whereas clicking on a number shows only the outgoing or incoming dependencies accordingly. For example, we can now quickly find out which objects most other objects depend on. In the Adventure Works example, these are the measures Reseller Sales Amount and Sales Amount, with 6 dependencies each. Internet Sales Amount follows closely with 5 dependencies.

    To some degree, Adventure Works is not a very interesting example to work with. Every object there either depends on others, or others depend on it, but never both – i.e. one of the numbers in either the Out or In columns is always zero. This doesn’t make for very complicated graphs, just one level of indirection. But MDX Studio is capable of following an arbitrary number of dependencies, computing the transitive closure over either outgoing or incoming edges.

    Here is more interesting example:

    CREATE [Measures].[Internet Gross Profit] =
         [Measures].[Internet Sales Amount] 
        - 
         [Measures].[Internet Total Product Cost];
     
    CREATE [Measures].[Internet Gross Profit Margin] =
         [Measures].[Internet Gross Profit]
        /
         [Measures].[Internet Sales Amount];
    
    CREATE [Measures].[YTD Gross Profit] =
        Sum(YTD([Date].[Calendar]), [Measures].[Internet Gross Profit]);
    
    CREATE [Measures].[Previous Period YTD Gross Profit] =
        ([Date].[Calendar].PrevMember, [Measures].[YTD Gross Profit]);
    
    CREATE [Measures].[Previous Period Gross Profit] =
        ([Date].[Calendar].PrevMember, [Measures].[Internet Gross Profit]);

    And here is how the full graph of dependencies for Internet Gross Profit looks (it has 4 dependent objects and depends on 2 others):

    The earliest build of MDX Studio which supports this functionality can be downloaded from here:

    http://cid-74f04d1ea28ece4e.skydrive.live.com/browse.aspx/MDXStudio/v0.4.6 (but always check for later release!).

    Special thanks to Greg Galloway, who suggested the idea of the dependency viewer (on a Friday – and I liked it so much that I implemented it over the weekend).

    As usual, please report all problems and make suggestions in MDX Studio Forum.

  • Finding lowest common ancestor in set

    This thread on the MSDN forums asks a seemingly easy question: given a multiple selection of elements in a parent-child hierarchy, how does one find the lowest common ancestor among them? While it is fairly easy to do using a stored procedure, it turned out to be far from trivial in pure MDX. Here is my solution to this puzzle. It isn’t fully optimized – I cared more about keeping it simple to understand.

    Below is my solution, applied to the example set

    {
      [Employee].[Employees].&[289]
     ,[Employee].[Employees].&[281]
     ,[Employee].[Employees].&[296]
    }

    The right answer for this example is Brian Welcker (of Reporting Services fame). If you have a simpler or more elegant solution – I invite the competition! My solution follows:

    WITH 
      // This is example set 
      SET SelectedEmployees AS 
        {
          [Employee].[Employees].&[289]
         ,[Employee].[Employees].&[281]
         ,[Employee].[Employees].&[296]
        }
      // We compute the highest level in the set, since
      // lowest common ancestor has to be at or above this level
      MEMBER HighLevel AS 
        Min
        (
          SelectedEmployees
         ,[Employee].[Employees].CurrentMember.Level.Ordinal
        ) 
      // Set of all common ancestors
      SET CommonAncestors AS 
        Filter
        (
          // go over ancestors of the first element in the set
          Ascendants(SelectedEmployees.Item(0)) AS iter
         ,
            // discard those below lowest possible common level
            [Employee].[Employees].CurrentMember.Level.Ordinal <= HighLevel
          AND 
            // Check that all the ancestors at given level
            // are the same
              Generate
              (
                SelectedEmployees
               ,Ancestors
                (
                  [Employee].[Employees].CurrentMember
                 ,iter.Current.Item(0).Level
                )
              ).Count
            = 1
        )
      // Pick lowest common ancestor among all common ancestors
      SET LowestCommonAncestor AS 
        // get first element
        Head
        (
          // ...of the reversed hierarchization
          Hierarchize
          (
            CommonAncestors
           ,POST
          )
         ,1
        )
    SELECT 
      LowestCommonAncestor ON 0
    FROM [Adventure Works];

    Update: Gerhard Brueckl has offered another, more elegant solution. It makes clever use of the fact that EXISTING over a parent-child hierarchy brings back all of the ancestors too, so there is no need to do an extra Generate(…, Ascendants()). Here is Gerhard’s solution:

    WITH 
      SET selected_employees AS 
        {(EXISTING [Employee].[Employees].MEMBERS)}
      SET common_ancestors AS 
        {
          Filter
          (
            selected_employees
           ,
              Intersect
              (
                [Employee].[Employees].CurrentMember.Level.MEMBERS
               ,selected_employees
              ).Count
            = 1
          )
        }
      MEMBER [Measures].LCA AS 
        Tail
        (
          common_ancestors
         ,1
        ).Item(0).Name 
    SELECT 
      [Measures].LCA ON COLUMNS
    FROM [Adventure Works]
    WHERE 
      {
        [Employee].[Employees].&[289]
       ,[Employee].[Employees].&[281]
       ,[Employee].[Employees].&[296]
      };
  • Drillthrough on calculated measures

    Drillthrough is a great feature of Analysis Services, but it has a limitation – only cells not covered by calculations can be drilled through. This automatically rules out drillthrough for calculated members in general and calculated measures in particular. This limitation has been the subject of much debate in forums and blogs. It has been logged on the Connect site as the “Enable drillthrough on calculated measures” suggestion, where it gathered strong support. And the competition struggles with very similar issues as well.

    While there is no universal solution to this problem, in this article we will show how many practical particular cases can be solved in AS2005. Let’s first understand what it means to drill through on a calculated measure. According to the definition:

    “Drillthrough is the operation in which a user specifies a single cell and the Analysis server returns the most detail level data that contributed to that cell.”

    For regular cells it is pretty straightforward to figure out which detail-level data contributed to them, but in the case of an arbitrary calculation, it can be very tricky. Basically, one needs to reverse engineer the expression of the calculation, find all the dependencies, and drill through to each one of them. But in the case of simple calculations, such dependencies are obvious. Let’s consider the [Internet Gross Profit] calculated measure from the Adventure Works cube. It is defined as

    Create Member CurrentCube.[Measures].[Internet Gross Profit]
     AS [Measures].[Internet Sales Amount] - [Measures].[Internet Total Product Cost],
    Format_String = "Currency";

    It is obvious that [Internet Gross Profit] is computed from two real measures: [Internet Sales Amount] and [Internet Total Product Cost]. But how can we convey this to the engine, since no matter what the expression of the calculated measure is, the DRILLTHROUGH statement is going to fail on it? The trick is to build a DRILLTHROUGH statement which redirects from the calculated measure to the real measures it is derived from. Below are the exact steps to achieve that:

    1. Define a new action. Note that it cannot be a “Drillthrough action”, since that won’t even apply to calculated measures. Just a regular action.

    2. Set the “Action Target” property to “Cells” – after all, drillthrough is done on cells.

    3. Since the “Target Object” for cell actions is always “All cells”, we will use a condition to constrain our action to just the desired calculated measure. In the “Condition” property, set the following expression:

    Measures.CurrentMember IS [Measures].[Internet Gross Profit]

    4. “Action Type” needs to be set to “Rowset”, since drillthrough returns a rowset.

    5. Set the “Caption” to “Drillthrough”, so that in the UI of the client tool it will show up with this title, and it will be obvious to the user.

    6. “Action Expression” is the most important property. Here we need to build the MDX DRILLTHROUGH statement which redirects to the two real measures. MDX DRILLTHROUGH does not allow multiple cells to appear in the SELECT clause. So how will we get two measures? We will use the fact that both of them belong to the same measure group, and instead of putting them into the SELECT clause, we will send them to the RETURN clause. Building a custom RETURN clause also has the benefit of flexibility in choosing which attributes will show up in the result of the drillthrough. By default all the granularity attributes of the measure group will be there, but we can include fewer or more attributes. We also get to decide on the actual content of these attribute columns – whether we want keys, names, unique names etc. Here is how the RETURN clause may look:

    RETURN 
     [Internet Sales].[Internet Sales Amount]
    ,[Internet Sales].[Internet Total Product Cost]
    ,NAME([$Date].[Date])
    ,NAME([$Customer].[Customer])
    ,NAME([$Product].[Product])
    ,NAME([$Promotion].[Promotion])
    ,NAME([$Source Currency].[Source Currency Code])
    ,NAME([$Sales Reason].[Sales Reason])

    In order for the drillthrough to address the correct cell, we need to put the current cell coordinates into the SELECT clause. One way of doing this would be to write a long MDX string concatenation, using a <hierarchy>.CurrentMember.UniqueName subexpression for every hierarchy in the cube. However, this is a tedious and error-prone approach – there are hundreds of hierarchies, new ones can be added, old ones removed or renamed etc. Instead, we can just call a stored procedure which will enumerate all the current coordinates (note that we could’ve done the same for the RETURN clause too).
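
    Just to illustrate what we are avoiding, the manual version would look roughly like this sketch (my illustration, with only two of the many hierarchies shown):

    // tedious manual alternative - one term per hierarchy in the cube
    "DRILLTHROUGH MAXROWS 100 SELECT (" +
     [Date].[Date].CurrentMember.UniqueName + "," +
     [Customer].[Customer].CurrentMember.UniqueName +
     // ...hundreds more terms, one for every other hierarchy...
    ") ON 0 FROM [Adventure Works]"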

    The Analysis Services Stored Procedures Project (ASSP) actually features some sprocs very similar to what we need here, under the FindCurrentMembers class. Similar, but not quite what we need. In our particular case, we want to skip measures (because we have hardcoded them), and also, for brevity, we consider only attribute hierarchies. The source code below achieves just that:

    public static string CurrentCellAttributes()
    {
        // start with empty string
        string coordinate = String.Empty;
        bool first = true;
        foreach (Dimension d in Context.CurrentCube.Dimensions)
        {
            // skip measures
            if (d.DimensionType == DimensionTypeEnum.Measure)
                continue;
    
            foreach (Hierarchy h in d.AttributeHierarchies)
            {
                // skip user hierarchies - consider attribute and parent-child hierarchies
                // (parent-child is both user and attribute hierarchy)
                if (h.HierarchyOrigin == HierarchyOrigin.UserHierarchy)
                    continue;
                if (!first)
                    coordinate += ",";
                first = false;
                coordinate += h.CurrentMember.UniqueName;
            }
        }
    
        return coordinate;
    }

    Now, equipped with this powerful stored procedure, we can build the expression for the action:

    "DRILLTHROUGH MAXROWS 100 SELECT (" +
    ASSP.ASStoredProcs.FindCurrentMembers.CurrentCellAttributes()
    + ") ON 0 FROM [Adventure Works] 
    RETURN 
     [Internet Sales].[Internet Sales Amount]
    ,[Internet Sales].[Internet Total Product Cost]
    ,NAME([$Date].[Date])
    ,NAME([$Customer].[Customer])
    ,NAME([$Product].[Product])
    ,NAME([$Promotion].[Promotion])
    ,NAME([$Source Currency].[Source Currency Code])
    ,NAME([$Sales Reason].[Sales Reason])
    "

    We are done! Now if we deploy this action, any client that supports actions (such as Excel 2007 or the cube browser) will show an additional menu item, “Drillthrough”, on right-clicking a cell under the [Internet Gross Profit] calculated measure, and clicking it will bring up the most detailed level data that contributed to that cell.

  • Parent-Child Dimension Table Naturalizer

    Parent-child dimensions are an important feature of Analysis Services. They allow flexibility in dimension modeling for scenarios such as Bill of Materials, Chart of Accounts, Employee Organization Structure and others. However, this flexibility comes at a cost. Overuse of parent-child can cause performance problems. Additionally, there are some semantic quirks which can cause problems with calculations – such as parent-child hierarchies not decoding related attributes the way normal dimensions do. So in some cases it is advisable to convert a parent-child hierarchy to a regular multi-level hierarchy. This conversion is not trivial and includes several steps.

    Jon Burchel, a Senior Support Escalation Engineer at Microsoft, looked deeper into this problem and came up with a tool called “Analysis Services Dimension Table Naturalizer” (or PCDimNaturalizer), which automates the conversion of parent-child dimensions to regular ones. He put the project on www.codeplex.com – meaning that all the source code is available as well.

    Unlike most open source projects (at least on codeplex), this one is extensively documented (I guess Jon, being a support engineer, really understands the value of good documentation).

    The PCDimNaturalizer can be used from the command line or with a UI, and it also exposes an object model allowing it to be embedded in a .NET application or an SSIS package. I am sure this tool will be useful for many Analysis Services practitioners.

    PCDimNaturalizer project at codeplex: http://www.codeplex.com/PCDimNaturalize

  • Intellisense in MDX Studio

    With Visual Studio and other modern IDEs, Intellisense became a must-have tool in the developer’s toolset, boosting productivity. MDX developers have asked for it, but full, context-dependent intellisense wasn’t available. Until now. MDX Studio now features comprehensive support for intellisense. The first version with usable intellisense support is 0.4.4. (As usual, please visit the MDX Studio forum to leave feedback and to find out about new releases.)

    The following screenshots show some (but certainly not all!) of the MDX Studio intellisense capabilities.

    The user starts typing a new MDX statement.

    It picks the “SELECT” keyword from the first two letters, “se”. The user can either keep typing, or hit Space or Enter to auto-complete the keyword.

    The auto-completion is aware of the cube metadata when MDX Studio is connected to the server. So when the user types “D”, the auto-completion list features MDX functions starting with “D” and MDX keywords starting with “D”, but also appropriate metadata objects – in this case, dimensions starting with “D”.

    Once the “Descendants” function is chosen, the tooltip shows what arguments it takes.

    The auto-completion list is context-aware: since [Date] is a dimension name, it now offers all of the hierarchies of the Date dimension.

    The auto-completion is also aware of the MDX type system. Now it understands that [Date].[Calendar] is a hierarchy, so it offers MDX methods and properties which can be applied to a hierarchy, as well as the levels of this hierarchy.

    Intellisense is also location-aware. In this case, the user just typed the “FROM” keyword, so the auto-completion list offers the cubes in the current database which can be used in the FROM clause.

    There are more features in MDX Studio Intellisense, like parenthesis matching etc. Hopefully this will make MDX developers more productive.

  • SQL PASS Summit 2008 and MS BI Conference 2008

    SQL PASS Summit 2008 and the Microsoft Business Intelligence Conference 2008 are two major conferences targeting very similar audiences. And this year both conferences are happening in the same location, with just one month separating them. Thus, many SQL BI professionals are forced to choose which one to attend. And this time the choice is not an easy one. Here is a side-by-side comparison:

    Keynotes

    SQL PASS Summit 2008: Ted Kummert, Tom Casey, David J. DeWitt
    http://summit2008.sqlpass.org/keynotes.html

    MS BI Conference 2008: Kurt DelBene, Stephen Elop, Ted Kummert
    http://www.msbiconference.com/pages/speakers.aspx

    Tracks

    SQL PASS Summit 2008:
    • Application Development
    • Business Intelligence
    • Database Application
    • Professional Development

    MS BI Conference 2008:
    • Microsoft BI Platform and Infrastructure
    • Microsoft BI Clients and Applications
    • Deployment and Best Practices
    • Customer and Industry Solutions
    • Business Value of Business Intelligence
    • Partner Training Track

    Sessions

    SQL PASS Summit 2008: over 130 technical sessions
    http://summit2008.sqlpass.org/program-sessions.html

    MS BI Conference 2008: 65 breakout sessions
    http://www.msbiconference.com/pages/tracksandsessions.aspx

    Extras

    SQL PASS Summit 2008: 14 preconference full-day seminars
    MS BI Conference 2008: chalk talks

    SSAS-specific sessions

    SQL PASS Summit 2008:
    • Optimizing Dimension Designs in SSAS, Dave Fackler
    • Creating an SSIS, SSAS and SSRS monitoring solution with SSIS, SSAS and SSRS, Chris Webb
    • Do It Right: Best Practices for Analysis Services 2005 and 2008, Craig Utley
    • Troubleshooting MDX Query Performance, Stacia Misner
    • Deep dive into MDX (precon), Mosha Pasumansky

    MS BI Conference 2008:
    • Avoiding Common SQL Server Analysis Services Mistakes, Craig Utley
    • Designing High Performance Cubes in SQL Server 2008 Analysis Services, T.K. Anand
    • SQL Server Analysis Services Performance Monitoring, Carl Rabeler
    • Manageability and Scalability Improvements in Microsoft SQL Server 2008 Analysis Services, T.K. Anand
    • Optimizing Query Performance in Microsoft SQL Server 2008 Analysis Services, T.K. Anand

    Dates

    SQL PASS Summit 2008: November 18-21, 2008
    MS BI Conference 2008: October 6-8, 2008

    Location

    Both: Washington State Convention & Trade Center, Seattle, WA

    Last year the MS BI Conference was criticized for a lack of technical sessions, but this year the list looks pretty strong, and totally comparable to PASS’s. In fact, comparing the SSAS-specific sessions, it’s easy to notice that there is overlap. Craig Utley’s session is probably going to be the same. Each conference also features a dedicated session on MDX performance. But there is plenty of different content, of course. Each conference has its own extras. PASS packs in two days of preconference seminars, which are full-day sessions. The MS BI Conference, on the other hand, offers so-called “chalk talks”, which were very popular last year. This time they are supposedly better organized – i.e. held in real rooms with not only a whiteboard, but a computer and projector as well (the list of chalk talks is supposed to be published by next week).

    Overall, not an easy choice. Luckily for me, I don’t have to choose between the two – since both are in Seattle, I am going to attend both of them! Actually, I have speaking engagements at both. At SQL PASS, I have the “Deep dive into MDX” preconference seminar (the same one I was supposed to do last year, but it got canceled). And at the MS BI Conference, I will be hosting the chalk talk “Got MDX problems? MDX Studio to the rescue!”.

    So make your decisions – the registrations are open!

  • Product Volatility: Optimizing MDX with MDX Studio

    In this article we will show how to take typical, everyday MDX and optimize it simply by following the advice of MDX Studio Analyzer.

    As a starting point, I will take the “Product Volatility” example as published by Thomas Ivarson in his blog. It is a perfect example, because Thomas took a real business problem and implemented it using very clean and straightforward MDX, just as I expect many MDX practitioners would. This MDX can be optimized, but some of these optimizations might not be obvious. Running MDX Studio Analyzer and following its advice takes the guesswork out of the equation.

    Here is the MDX that computes the total number of products, the number of products with sales increased from last month, and the number of products with sales decreased from last month:

    WITH 
      MEMBER Measures.TotNumberOfProducts AS 
        Count
        (
          Descendants
          (
            [Product].[Product Categories].CurrentMember
           ,[Product].[Product Categories].[Product]
          )
        ) 
      MEMBER Measures.NumberOfSoldProductsIncreasing AS 
        Count
        (
          Filter
          (
            NonEmpty
            (
              Descendants
              (
                [Product].[Product Categories].CurrentMember
               ,[Product].[Product Categories].[Product]
              )
             ,[Measures].[Internet Sales Amount]
            )
           ,
                (
                  [Date].[Calendar].CurrentMember
                 ,[Measures].[Internet Sales Amount]
                )
              - 
                (
                  [Date].[Calendar].PrevMember
                 ,[Measures].[Internet Sales Amount]
                )
            > 0
          )
        ) 
      MEMBER Measures.NumberOfSoldProductsDecreasing AS 
        Count
        (
          Filter
          (
            NonEmpty
            (
              Descendants
              (
                [Product].[Product Categories].CurrentMember
               ,[Product].[Product Categories].[Product]
              )
             ,[Measures].[Internet Sales Amount]
            )
           ,
                (
                  [Date].[Calendar].CurrentMember
                 ,[Measures].[Internet Sales Amount]
                )
              - 
                (
                  [Date].[Calendar].PrevMember
                 ,[Measures].[Internet Sales Amount]
                )
            < 0
          )
        ) 
    SELECT
      {
        [Measures].[TotNumberOfProducts]
       ,[Measures].[NumberOfSoldProductsIncreasing]
       ,[Measures].[NumberOfSoldProductsDecreasing]
      } ON 0
     ,[Date].[Calendar].[Month] ON 1
    FROM [Adventure Works]

    Executing this query with a hot cache in MDX Studio produces the following perfmon counter statistics:

    Time             : 359 ms
    Calc covers      : 85
    Cells calculated : 7672
    Sonar subcubes   : 79
    SE queries       : 76
    Cache hits       : 76
    Cache misses     : 3
    Cache inserts    : 3
    Cache lookups    : 79

    The time looks good, but only because both the cube and the query are small. The number of SE queries should raise a red flag here: with only 114 cells in the result, why were there 76 SE queries? Obviously, on more serious cubes this will become a major performance issue. So let’s run MDX Studio Analyzer and see what it has to say about this query. We get back the following list:

    Line Col  Message
       3   6  If you are trying to count number of members in current selection - consider introducing special measure group for this dimension
       7  41  MDX function 'CurrentMember' may raise an error or produce non-desired result when user applies multiselect
      12   6  Consider rewriting Count(Filter(set, cond)) construct as Sum(set, summator) with summator IIF(cond, 1, NULL)
      20  45  MDX function 'CurrentMember' may raise an error or produce non-desired result when user applies multiselect
      27  34  MDX function 'CurrentMember' may raise an error or produce non-desired result when user applies multiselect
      32  34  MDX function 'PrevMember' may raise an error or produce non-desired result when user applies multiselect
      39   6  Consider rewriting Count(Filter(set, cond)) construct as Sum(set, summator) with summator IIF(cond, 1, NULL)
      47  45  MDX function 'CurrentMember' may raise an error or produce non-desired result when user applies multiselect
      53  14  Same expression was used before at Line 26 Column 14. Consider eliminating common subexpressions for better performance and to take advantage of cache

    Let’s go over this list and fix the issues one by one.

    Line 3, Col 6: If you are trying to count number of members in current selection - consider introducing special measure group for this dimension

    The link points to the “Counting days in MDX” post. While here we are counting Products and not Days, the same technique applies. These types of questions are best handled not in MDX, but by introducing a measure group which contains only the Products dimension and a single measure, [Number Of Products], of aggregation type Count.

    Line 7, Col 41: MDX function 'CurrentMember' may raise an error or produce non-desired result when user applies multiselect

    Now, this is no longer a problem, thanks to how we solved the previous warning! Indeed, if we wanted to see the total number of products across the Bikes and Accessories categories, we would’ve added the following WHERE clause:

    WHERE {[Product].[Category].&[1],[Product].[Category].&[4]}

    It fails with the original query, but works fine with the Count measure [Number of Products].
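
    For instance, the multiselect query might look like this sketch (assuming the [Number Of Products] count measure described above has been added to the cube):

    SELECT 
      [Measures].[Number Of Products] ON 0
     ,[Date].[Calendar].[Month].MEMBERS ON 1
    FROM [Adventure Works]
    WHERE 
      {
        [Product].[Category].&[1]
       ,[Product].[Category].&[4]
      };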

    Lines 12/39, Col 6: Consider rewriting Count(Filter(set, cond)) construct as Sum(set, summator) with summator IIF(cond, 1, NULL)

    This is a common construct, and as shown in the linked “Optimizing Count(Filter(…)) expressions in MDX” post, it performs much better when rewritten to use Sum. But before we do this rewrite, let’s take a look at the next warning:

    Line 53, Col 14: Same expression was used before at Line 26 Column 14. Consider eliminating common subexpressions for better performance and to take advantage of cache

    If we click on this message, the expression in question is the following:

                (
                  [Date].[Calendar].CurrentMember
                 ,[Measures].[Internet Sales Amount]
                )
              - 
                (
                  [Date].[Calendar].PrevMember
                 ,[Measures].[Internet Sales Amount]
                )

    Indeed, this expression represents sales growth, so we can isolate it into a separate calculated member. But before we write the exact expression for the calculated member, let’s look deeper into the original expression. It features the Count(Filter(NonEmpty(…))) construct, i.e. only products which had sales in the current month are considered for the increasing/decreasing calculation. In other words, even if a product had sales in the previous month but not in the current month, it won’t be counted among the products with decreasing sales. To preserve the same logic, the expression would be:

      [Sales Growth] =
        IIF
        (
          IsEmpty([Measures].[Internet Sales Amount])
         ,NULL
         ,
            [Measures].[Internet Sales Amount]
          - 
            (
              [Date].[Calendar].PrevMember
             ,[Measures].[Internet Sales Amount]
            )
        ) 

    For more details on why IIF(cond, NULL, exp) is good for performance, read the “Performance of IIF function in MDX” post. Now, putting together everything so far, we get the following query:

    WITH 
      MEMBER [Sales Growth] AS 
        IIF
        (
          IsEmpty([Measures].[Internet Sales Amount])
         ,NULL
         ,
            [Measures].[Internet Sales Amount]
          - 
            (
              [Date].[Calendar].PrevMember
             ,[Measures].[Internet Sales Amount]
            )
        ) 
      MEMBER PositiveGrowth AS 
        IIF
        (
          [Measures].[Sales Growth] > 0
         ,1
         ,NULL
        ) 
      MEMBER Measures.NumberOfSoldProductsIncreasing AS 
        Sum
        (
          Descendants
          (
            [Product].[Product Categories].CurrentMember
           ,[Product].[Product Categories].[Product]
          )
         ,[Measures].[PositiveGrowth]
        ) 
      MEMBER NegativeGrowth AS 
        IIF
        (
          [Measures].[Sales Growth] < 0
         ,1
         ,NULL
        ) 
      MEMBER Measures.NumberOfSoldProductsDecreasing AS 
        Sum
        (
          Descendants
          (
            [Product].[Product Categories].CurrentMember
           ,[Product].[Product Categories].[Product]
          )
         ,[Measures].[NegativeGrowth]
        ) 
    SELECT
      {
        [Measures].[Number Of Products]
       ,[Measures].[NumberOfSoldProductsIncreasing]
       ,[Measures].[NumberOfSoldProductsDecreasing]
      } ON 0
     ,[Date].[Calendar].[Month] ON 1
    FROM [Adventure Works];

    When executed in MDX Studio, we get the following perfmon statistics:

    Time             : 171 ms
    Calc covers      : 15
    Cells calculated : 1993
    Sonar subcubes   : 6
    SE queries       : 3
    Cache hits       : 7
    Cache misses     : 5
    Cache inserts    : 5
    Cache lookups    : 12

    This is now much better than before – there are only 3 SE queries. All of the performance warnings have been addressed, but we still have warnings about CurrentMember not working well with multiselect. Indeed, when we put a multiselect on the Bikes and Accessories product categories, we get errors for the numbers of increasing and decreasing products. The links to the “Writing multiselect friendly MDX calculations” and “Multiselect friendly MDX for calculations looking at current coordinate” posts are helpful for understanding the problem. Unfortunately, there is no universal solution, and each case needs to be evaluated separately.

    In this scenario, we can apply to Sum(Descendants(…), exp) a technique very similar to the one we applied to Count(Descendants(…)). The idea is not to use the Descendants function – which forces us to use CurrentMember, which in turn fails on multiselect – but to put the expression “exp” directly on the Product attribute. In order to do that, we need to create a fake measure in the measure group – “Num Products Increased”. It is bound to a NULL expression, so this measure will always be NULL and therefore won’t take any space in storage. But since this measure has the aggregation function Sum, it will properly aggregate when assigned expressions inside the MDX Script.

    We will use MDX Studio to emulate the MDX Script too (MDX Studio has a feature where it automatically uses the cube selected in the dropdown list to resolve the current cube used in script statements).

    Here is the MDX:

    CREATE 
      [Sales Growth] = 
        IIF
        (
          IsEmpty([Measures].[Internet Sales Amount])
         ,NULL
         ,
            [Measures].[Internet Sales Amount]
          - 
            (
              [Date].[Calendar].PrevMember
             ,[Measures].[Internet Sales Amount]
            )
        );
    (
      [Product].[Product Categories].[Product]
     ,[Measures].[Num Products Increased]
    ) = 
      IIF
      (
        [Measures].[Sales Growth] > 0
       ,1
       ,NULL
      );
    SELECT
      {
        [Measures].[Number Of Products]
       ,[Measures].[Num Products Increased]
      } ON 0
     ,[Date].[Calendar].[Month] ON 1
    FROM [Adventure Works]
    WHERE 
      {
        [Product].[Category].&[1]
       ,[Product].[Category].&[4]
      };

    Its performance characteristics are even slightly better than those of the previous query:

    Time             : 31 ms
    Calc covers      : 6
    Cells calculated : 76
    Sonar subcubes   : 1
    SE queries       : 1
    Cache hits       : 2
    Cache misses     : 0
    Cache inserts    : 0
    Cache lookups    : 2

    Not to mention the fact that it works just fine with multiselect on products! This is a great example of how fixing MDX to work properly in multiselect scenarios can have the side effect of improving performance.

    We still have warnings about PrevMember with respect to multiselect on the Time dimension. That is a more difficult problem, since the definition of “Sales Growth” is ambiguous when there is a multiselect on the Time dimension.

  • Microsoft buys DATAllegro

    Official press release: http://www.datallegro.com/pr/7_24_08_microsoft_acquisition.asp

    Curt Monash coverage: http://www.dbms2.com/category/products-and-vendors/datallegro/

    This is a huge win for the Microsoft data warehousing (and therefore BI) strategy!

  • Analyze MDX with MDX Studio

    Always adhering to MDX best practices can be a difficult quest. While there is plenty of information out there, it is scattered across books, whitepapers, blogs, BOL, forum posts etc. This information can be difficult to digest, and sometimes it is not clear how to apply it in specific scenarios. On top of that, not all the information out there is reliable. Some of it is outdated, and what used to help in one version of AS could be bad practice in a newer version. All this makes the practitioner’s life very difficult. But it doesn’t have to be this way. For a long time, I have wanted to automate many of the checks for MDX best practices, and today I am happy to announce that this functionality is now part of MDX Studio. Starting with version 0.4.0, MDX Studio features an “Analyze” button. It goes through the MDX, analyzes it, and flags issues. To illustrate, let’s consider the following example:

    WITH 
     MEMBER
      Measures.WorkingDays AS
        Count(
          Filter(
            Descendants(
              [Date].[Calendar].CurrentMember
              ,[Date].[Calendar].[Date]
              ,SELF)
           ,  [Date].[Day of Week].CurrentMember.Name <> "1"
          )
        )
     MEMBER
      Measures.SalesPerWorkingDay AS 
      '
        IIF(
         [Measures].[WorkingDays] IS NULL OR [Measures].[WorkingDays] = 0
         ,NULL
         ,[Measures].[Internet Sales Amount]/[Measures].[WorkingDays])
       '
    SELECT [Measures].[SalesPerWorkingDay]  ON 0
    , [Date].[Calendar].[Month].MEMBERS ON 1
    FROM [Adventure Works]

    In this example we are looking at sales per working day, where a working day is defined as anything but Sunday. Let’s run the Analyzer over this query and see what it finds:

    Line Column  Message
       7     30  MDX function 'CurrentMember' may raise an error or produce non-desired result when user applies multiselect
      10     33  MDX function 'CurrentMember' may raise an error or produce non-desired result when user applies multiselect
      10     47  Use IS operator to compare objects instead of comparing them by name
       4      6  Consider rewriting Count(Filter(set, cond)) construct as Sum(set, summator) with summator IIF(cond, 1, NULL)
       3      7  If you want to avoid division by zero - use 'value = 0' check; if you want to check whether cell is empty - use IsEmpty function; if you want to check that MDX member or tuple exists - use 'obj IS NULL' check
       2      6  Don't use single quotes for definitions of calculation Measures.SalesPerWorkingDay

    It is easier to navigate through the messages in the desktop edition of MDX Studio, where clicking on a message highlights the relevant portion of the MDX. The messages can also be sorted, hidden (via the right-click context menu) or copy/pasted to Excel. Messages are followed by a link to more information on how the particular rule can be applied.
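
    Applying the last four suggestions, the query might be rewritten along the lines of the following sketch (mine, not the Analyzer’s output; it assumes Sunday is the member with key 1 in [Day of Week], matching the name comparison above – the multiselect warnings about CurrentMember would still apply):

    WITH 
     MEMBER
      Measures.WorkingDays AS
        Sum(
          Descendants(
            [Date].[Calendar].CurrentMember
            ,[Date].[Calendar].[Date]
            ,SELF)
         -- IS comparison instead of comparing by name; Sum(IIF(cond, 1, NULL)) instead of Count(Filter(...))
         ,IIF(NOT [Date].[Day of Week].CurrentMember IS [Date].[Day of Week].&[1], 1, NULL)
        )
     MEMBER
      Measures.SalesPerWorkingDay AS 
        IIF(
         [Measures].[WorkingDays] = 0  -- 'value = 0' also covers the empty case, since NULL = 0 is true in MDX
         ,NULL
         ,[Measures].[Internet Sales Amount]/[Measures].[WorkingDays])
    SELECT [Measures].[SalesPerWorkingDay] ON 0
    , [Date].[Calendar].[Month].MEMBERS ON 1
    FROM [Adventure Works]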

    The warnings above are relatively simple, but MDX Studio Analyzer is capable of more sophisticated checks. An example of such a check is the detection of common subexpressions. For the following query

    WITH 
     MEMBER
      Measures.SalesPerWorkingDay AS 
      '
        IIF(
         Count(
          Filter(
            Descendants(
              [Date].[Calendar].CurrentMember
              ,[Date].[Calendar].[Date]
              ,SELF)
           ,  [Date].[Day of Week].CurrentMember.Name <> "1"
          )
        ) = 0
         ,NULL
         ,[Measures].[Internet Sales Amount]
          /
           Count(
             Filter(
               Descendants(
                 [Date].[Calendar].CurrentMember
                 ,[Date].[Calendar].[Date]
                 ,SELF)
              ,  [Date].[Day of Week].CurrentMember.Name <> "1"
             )
           )
        )
       '
    SELECT [Measures].[SalesPerWorkingDay]  ON 0
    , [Date].[Calendar].[Month].MEMBERS ON 1
    FROM [Adventure Works]

    MDX Studio Analyzer will report that exactly the same Count(Filter(…)) expression was used twice:

    Line Column  Message
      15      9  Same expression was used before at Line 3 Column 7. Consider eliminating common subexpressions for better performance and to take advantage of cache

    The difficult choice I faced was which version of Analysis Services to tailor the messages to – AS2005 or AS2008. While some of the advice stays the same, much of it differs, and in some cases is contradictory. For example, in AS2005 the CASE operator was never optimized, so in certain scenarios it was better to use nested IIFs; but in AS2008, CASE is as optimized as IIF, so it is preferable to nested IIFs. Another example is the NON_EMPTY_BEHAVIOR property, which is often essential for good performance in AS2005, but usually is not needed in AS2008 and is better removed. With AS2005 everybody recommends replacing IIFs with SCOPE whenever possible, but this is not needed with AS2008, etc. In the end I decided to go with AS2008 only – it just feels like a better investment in the future. AS2008 should be released almost any day now, and I expect a much faster migration to it from AS2005 than what we saw with the migration from AS2000 to AS2005. The reason is simple: migration from AS2005 to AS2008 is very smooth – all the fundamental concepts stay the same, and both the client and management object models stay code-compatible.
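
    To illustrate the IIF-to-SCOPE rewrite mentioned above, here is a sketch with a hypothetical [Projected Sales] measure (my example, assuming CY 2004 has key 2004). Instead of burying the condition in the member expression:

    CREATE MEMBER CurrentCube.[Measures].[Projected Sales] AS
      IIF
      (
        [Date].[Calendar Year].CurrentMember IS [Date].[Calendar Year].&[2004]
       ,[Measures].[Internet Sales Amount] * 1.1
       ,[Measures].[Internet Sales Amount]
      );

    the AS2005-era advice was to keep the expression plain and scope the exceptional region:

    CREATE MEMBER CurrentCube.[Measures].[Projected Sales] AS
      [Measures].[Internet Sales Amount];
    SCOPE([Measures].[Projected Sales], [Date].[Calendar Year].&[2004]);
        THIS = [Measures].[Internet Sales Amount] * 1.1;
    END SCOPE;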

    One last thing to mention: best practices are exactly that – best practices. It means that most of the time you will be better off using them. However, they are not absolute rules to be followed always; sometimes there are exceptions. AS is a complex, feature-rich product, and the MDX engine is probably the most sophisticated piece of it. Sometimes features interact in such a way that some best practices result in worse outcomes. For example, one piece of advice MDX Studio gives when it sees Filter(CrossJoin(…)) is to recommend CrossJoin(Filter(…)) instead, as shown in the sketch below. It is common sense to reduce the size of a set before crossjoining. But not only is this not always possible, there are also rare cases where it results in worse performance. So while you are encouraged to follow the best practices, always use them as a guide, and measure before and after applying the rules to make sure things actually improve.
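
    For a concrete (hypothetical) illustration of that Filter/CrossJoin rule, a set expression shaped like this:

    // condition evaluated once per (product, country) pair
    Filter
    (
      CrossJoin([Product].[Product].[Product].MEMBERS, [Customer].[Country].MEMBERS)
     ,[Measures].[Internet Sales Amount] > 10000
    )

    can often be reordered so the filter runs before the crossjoin – but note this is only equivalent when the condition does not depend on the other set’s coordinate:

    // condition evaluated once per product, in the default Country context
    CrossJoin
    (
      Filter([Product].[Product].[Product].MEMBERS, [Measures].[Internet Sales Amount] > 10000)
     ,[Customer].[Country].MEMBERS
    )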

    MDX Studio 0.4.0 is released with desktop and online versions simultaneously.

    Desktop version can be downloaded from: http://cid-74f04d1ea28ece4e.skydrive.live.com/browse.aspx/MDXStudio/v0.4.0

    Online version can be accessed at: http://mdx.mosha.com

    As usual – please post your comments, suggestions, bug reports etc at MDX Studio forum.

  • The simpler MDX is – the better!

    Whenever I am asked “What is the single most important advice for writing efficient MDX?”, I always give the same answer: the simpler the MDX you write, typically the better the performance you will get. The reasoning behind this is straightforward: if the MDX is simple, it will be easier for the engine to understand and therefore to optimize. I know we all tend to think that we can do all kinds of tricks and outsmart the engine, and sometimes that is true, but the engine becomes smarter and smarter with every release, especially in AS2008. So tricks which could’ve helped before may actually hurt more than they help.

    And now it seems that I am not the only one saying that! Mark Whitehorn (my coauthor on the “Fast Track to MDX” book), together with Keith Burns, has just published a technical whitepaper, “Best Practices for Data Warehousing with SQL Server 2008”. The whitepaper focuses specifically on the improvements in SQL 2008, and it includes a section named “Write simpler MDX without worrying about performance”. Nice title, and in addition it has an explanation of block computation mode.

    I really have only a couple of minor comments about it:

    - Block computation mode was not introduced in AS2008; it existed in AS2005 (I have plenty of examples in my blogs). AS2008, though, extended the applicability of block computation mode very significantly.

    - The description of the calculated member says that it is a “running total of two consecutive years”, but the expression is not for a running total, rather for a regular total, and there is also nothing specific about years in it. It will work with Month, Quarter, Day etc. just as well.

    - I didn’t really understand the following statement: “In most hierarchical structures we know that if data is present for one cell in 2003, it will be there for all cells in 2003. The trip to see if there is data for the previous period only needs to be carried out once”. I think this statement is really misleading. If data is present for one cell in 2003, there is absolutely no guarantee that it will be present for all (or even any other) cells in 2003. The diagrams just above this statement show that in fact most of the cells don’t have data. What I think the authors tried to say is that the MDX function PrevMember doesn’t need to be called for every cell, because [2004].PrevMember will always result in [2003], and AS2008 is smart enough to make this evaluation only once, and then compute the entire subcube of 2003 for non-empty cells only.

    - The example uses the VISIBLE=2 property on a calculated member. I wonder why…
