
Joe Chang

Decoding STATS_STREAM

Data distribution statistics are one of the foundations of the cost-based query optimizer in all modern database engines, including SQL Server. From SQL Server 2005 on, most of the information displayed by DBCC SHOW_STATISTICS is kept in a binary field accessible with the STATS_STREAM clause. Back in SQL Server 2000, it was possible to modify system tables directly, including the sysindexes stat_blob field. At the time, I described a decode of the stat_blob field for the purpose of influencing the execution plan, presumably on a development system and not a production system.

Starting with SQL Server 2005, it was no longer possible to directly modify system tables. An API was provided to access data distribution statistics to allow cloning the statistics from one database to another; the presumed usage is to clone statistics from a large production database to a small development database. In other database engines, I had heard of the idea of updating statistics on a backup system and then applying them to the production system. While it was still possible to decode most of the 2005 stats_stream binary, it appears that a checksum was added, so an externally generated statistics binary cannot be applied unless the "checksum" value can be correctly calculated.
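
For reference, the clause can be exercised as follows. This is a minimal sketch; dbo.Orders and IX_Orders_Date are hypothetical names, and both options are undocumented.

-- Capture the raw statistics binary (undocumented WITH option)
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_Date') WITH STATS_STREAM;

-- Apply a captured binary to clone the statistic (undocumented clause);
-- paste an actual captured value in place of the 0x... placeholder:
-- UPDATE STATISTICS dbo.Orders (IX_Orders_Date) WITH STATS_STREAM = 0x...;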

Around this time, I was working on other SQL Server tools, most prominently SQL System for performance monitoring, Exec Stats for execution plan analysis, and TraceAnalysis for trace processing. Work on the SQL Server data distribution statistics cloning tool was discontinued, and I did not pursue further research into decoding SQL Server data distribution statistics.

Since several people have asked about the data distribution statistics decode, I am making what I know about stats_stream available. It would be helpful if other people would contribute the missing pieces.

Note that the organization of stats_stream changed from SQL Server 2000 (then the sysindexes stat_blob) to 2005, and again in 2008? It is quite possible there are further changes in version 2012? Most of what I discuss here applies to version 2008 R2.

Decoding Stats Stream for SQL Server 2008 R2

Here I am using 1-based references; byte index 1 is the first byte.
(C# and most other programming languages use zero-based indexing.)
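
For example, T-SQL's SUBSTRING over varbinary is 1-based and matches the positions in the table below. A sketch; @ss is assumed to hold a stream captured with DBCC SHOW_STATISTICS ... WITH STATS_STREAM:

DECLARE @ss varbinary(max);  -- assign a captured stats stream here
-- byte position 5, length 4: the number-of-vectors field from the table below;
-- in C#, the same bytes would be stream[4] through stream[7]
SELECT SUBSTRING(@ss, 5, 4) AS vector_count_bytes;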
Position   Length   Value/Type   Purpose
1 4? 1 unknown
5 4?   number of vectors
9 4 0 zero
13 4 0 zero
17 4   checksum
21 4 0 zero
25 4   stats stream length
29 4 0 zero
33 4   stats stream length - minus vector variable length
The difference [25]-[33] is 64 for 1 vector (defined as off1).
Each additional vector adds 24 bytes starting at byte position 41
37 4 0 zero
Start of vector information, 24 bytes per vector
41 1   system type id
42 1   unknown
43 2   unknown
45 4   user type id
49 2   length
51 1   Prec
52 1   Scale
53 4   unknown
57 4   unknown
61 2   unknown
63 2   unknown
      Some of the unknown fields are presumably for nullability, collation, etc.
Additional vectors, if present
off1+1* 9   Updated?, 9 byte datetime2?
off1+10 3   unknown
off1+13 8   Rows
off1+21 8   Rows sampled
off1+29 4 4 byte real Density - Header
off1+33 4x33=132 4 byte real Density - vector, up to 33 values
off1+165 4 4 byte int Steps (first copy)
off1+169 4 4 byte int Steps (second copy)
off1+173 4 4 byte int number of vectors
off1+177 4 4 byte int Step size (in bytes)
off1+181 4 4 byte real Average Key length - header
off1+185 4 4 byte real Unfiltered rows
off1+189 4 4 byte int unknown
off1+193 4x33=132 4 byte real Average key length - vector
      Some fields may represent string index (bool), or filter expression
off1+325 8 8 byte int unknown, values 0x11, 0x13 and 0x19 observed,
may determine the # of post histogram 8 bytes values starting at off1+341?
off1+333 8 0 8 byte 0?
off1+341 8 0 offset for value after histogram?
off1+349 8 0 another offset
off1+357 8 0 another offset, if value of [off1+325] is 19 (0x13) or more?
more offsets if value of [off1+325] is 25 (0x19) or more?
Eventually, this sequence appears: 0x10001100 followed by three 4-byte reals,
a value in the native type of the statistic, then ending with 0x040000
off2 2 0x10 (16) length of core columns, determines the organization of histogram structures?
off2+2 2 17 or higher size of step, excluding 3 byte trailer
off2+4 4 4 byte real Eq Rows
off2+8 4 4 byte real Range Rows
off2+12 4 4 byte real Avg Range Rows
off2+16 native length native type Range Hi Key
off2+16+x 3 byte 0x040000 step terminator?, x is the size of the type
off3 ? ? additional info
*off1 = value of 4(8) byte int at position [25] - value of [33]
**off2 = off1 + 341 + 16 if value of [off1+325] is 0x11, or 24 if 0x13 or 0x19
***off3 = off1 + 341 + value of 4(8) byte int at [off1+341]
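
As a worked example of the off1 arithmetic above, here is a T-SQL sketch that computes off1 and reads the 8-byte Rows field at off1+13. Little-endian byte order is assumed, hence the byte reversal before casting.

DECLARE @ss varbinary(max);  -- a captured stats stream
DECLARE @len int, @len2 int, @off1 int;
-- 4-byte little-endian int at position 25: stats stream length
SET @len  = CAST(SUBSTRING(@ss,28,1) + SUBSTRING(@ss,27,1)
               + SUBSTRING(@ss,26,1) + SUBSTRING(@ss,25,1) AS int);
-- 4-byte little-endian int at position 33
SET @len2 = CAST(SUBSTRING(@ss,36,1) + SUBSTRING(@ss,35,1)
               + SUBSTRING(@ss,34,1) + SUBSTRING(@ss,33,1) AS int);
SET @off1 = @len - @len2;  -- 64 for a single-vector statistic
-- 8-byte little-endian int at off1+13: Rows
SELECT @off1 AS off1,
  CAST(SUBSTRING(@ss,@off1+20,1) + SUBSTRING(@ss,@off1+19,1)
     + SUBSTRING(@ss,@off1+18,1) + SUBSTRING(@ss,@off1+17,1)
     + SUBSTRING(@ss,@off1+16,1) + SUBSTRING(@ss,@off1+15,1)
     + SUBSTRING(@ss,@off1+14,1) + SUBSTRING(@ss,@off1+13,1) AS bigint) AS [rows];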

So far, for SQL Server 2008 R2, I have only looked at fixed-length, not-nullable statistics. Variable-length statistics have a different organization, particularly in the histogram part. String statistics may have extended information after the histogram, per a new feature of SQL Server 2008?

Umachandar provides a SQL function for converting 4-byte binary to real or 8-byte binary to float, and vice versa.
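
That function is not reproduced here, but as an illustration, the following sketch of my own decodes an IEEE 754 single-precision value from a little-endian binary(4). It handles zero, denormals, and normal numbers only; infinity and NaN (exponent 255) are not handled.

CREATE FUNCTION dbo.bin4_to_real (@b binary(4))
RETURNS real
AS
BEGIN
    -- reverse the little-endian bytes and widen to an unsigned 32-bit pattern
    DECLARE @u bigint = CAST(0x00000000 + SUBSTRING(@b,4,1) + SUBSTRING(@b,3,1)
                           + SUBSTRING(@b,2,1) + SUBSTRING(@b,1,1) AS bigint);
    DECLARE @sign float = CASE WHEN @u >= 2147483648 THEN -1 ELSE 1 END;
    DECLARE @exp  int   = (@u / 8388608) % 256;  -- bits 23-30
    DECLARE @frac float = @u % 8388608;          -- low 23 bits
    IF @exp = 0  -- zero and denormals
        RETURN CAST(@sign * (@frac / 8388608) * POWER(CAST(2 AS float), -126) AS real);
    -- normal numbers: implicit leading 1 bit
    RETURN CAST(@sign * (1 + @frac / 8388608) * POWER(CAST(2 AS float), @exp - 127) AS real);
END;
GO
SELECT dbo.bin4_to_real(0x0000803F) AS one;  -- little-endian bytes of 1.0f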

Supporting SQL functions and procedures:

The updated tool now has a stored procedure that accepts table and index (or column statistic) names as input parameters, in addition to the original procedure that takes the stats stream binary.
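
Usage is along these lines; the procedure and parameter names here are illustrative, not the tool's actual interface.

-- original form: pass the raw binary
EXEC dbo.decode_stats_stream @stream = @ss;
-- updated form: name the table and the index or column statistic
EXEC dbo.decode_stats @table = 'dbo.Orders', @stat = 'IX_Orders_Date';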

Updates on QDPMA Stats Stream - Updated

decoding stats stream - Updated

decoding stats stream

P.S.
An interesting fact is that statistics do not need to be highly accurate to be effective. Normally we are interested in distribution differences large enough to shift the execution plan from one alternative to another, and the boundaries for this can be very wide. In certain circumstances, false statistics might guard against catastrophically bad execution plans, for example in out-of-bounds situations. Another case is skewed distributions, but that should be handled by other means, to ensure the high and low distributions get different execution plans.

Published Saturday, May 05, 2012 7:16 AM by jchang


Comments

 

Thomas Kejser said:

Hi Joe

Thanks for sharing this information. It would be nice if one were able to modify the stats directly. For people who know what they are doing, I agree that the stats don't need to be accurate, just well "designed".

I blogged about an indirect way to influence STATS_STREAM here:

http://blog.kejser.org/2011/07/07/the-ascending-column-problem-in-fact-tables-part-two-stat-job/

In principle, it should be possible to implement a tool that would allow you to describe the stats you want; the tool would then generate a "donor" table with the right rows in it that you could then transplant to the target.
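
In sketch form (all object names hypothetical; spt_values is just a convenient row source):

-- donor table matching the target column's type
CREATE TABLE dbo.donor (col1 int NOT NULL);
-- shape the rows to produce the histogram you want, e.g. one heavy value
INSERT INTO dbo.donor (col1)
SELECT 42 FROM master.dbo.spt_values WHERE type = 'P';
CREATE STATISTICS s_donor ON dbo.donor (col1) WITH FULLSCAN;
-- capture the binary to transplant to the target
DBCC SHOW_STATISTICS ('dbo.donor', 's_donor') WITH STATS_STREAM;
-- then: UPDATE STATISTICS <target table> (<target stat>) WITH STATS_STREAM = 0x...;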

Of course, it would be superior to get direct access to the data inside the engine.

May 11, 2012 6:13 AM
 

jchang said:

It used to be that there were more people good at decoding binary (at least this was the case among people I hung out with) who could help me with this, but times change, and people's interests change...

It would be nice if someone who has clout with the SQL Server team could arrange for a histogram editing tool to work with, say, Developer Edition?

Right now I am working on an issue with complex queries in which the estimated row count is horribly wrong at some intermediate point, with catastrophic consequences in the subsequent steps.

This problem can be side-stepped by pulling out the sub-expression with the row estimate error and inserting it into a temp table, which results in true statistics being generated.
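
In sketch form (hypothetical names): materialize the problem sub-expression so the remainder of the query is costed from true row counts.

-- materialize the sub-expression whose row estimate is badly off
SELECT a.key1, SUM(a.amt) AS amt
INTO   #mid
FROM   dbo.big_table a
WHERE  a.grp = 42
GROUP  BY a.key1;
-- #mid now carries accurate statistics for the rest of the query
SELECT m.key1, m.amt, o.descr
FROM   #mid m
JOIN   dbo.other_table o ON o.key1 = m.key1;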

Wouldn't it be good to have the ability to hint: if the row count at this point is way off, regenerate the plan based on the actual row count? An example is when the execution plan has a loop join having a (large) table scan on the inner source. If the outer source provides 1 row, then no problem. If in fact it is many rows, then big problem.

Failing that, how about: if the actual row count is way off at some intermediate point, and the subsequent steps would now be horribly expensive - then give up?

May 12, 2012 10:29 AM
 

Gullimeel said:

"if the actual row count is way off at some intermediate point, and the subsequent steps would now be horribly expensive"

That is why I guess they need a tuning advisor (human beings, not DTA). The day the optimizer is smart enough to think like humans, our jobs will be gone :). The optimizer is getting smart, but human beings are getting smarter too, so it will take some time to catch up. Index hints and FORCE ORDER are designed with these kinds of issues in mind.

In SQL Server it is somewhat fine. I have seen worse in another DBMS I work on. If the second table in a join has a multi-column key (even though it is a primary key or unique key, or has a unique/clustered index), it would estimate 70 rows for each row in table 1, whereas it should have estimated just 1 row. Thus, after the second row, merge/hash joins are favoured when you do not need them at all.

May 25, 2012 7:31 AM
 

jchang said:

On one hand, I do not worry about the day when DTA-me is rendered obsolete by SQL Server DTA. On the other hand, given current knowledge of database performance, the automated tuning tools are pathetic compared to what they should be. This means there are deficiencies in both the SQL query optimizer and the various automated tuning tools, both DTA and third-party, which in turn means the tool vendors are not following the right approach to automated tuning.

May 25, 2012 4:46 PM
 

ALZDBA said:

Apparently the links are broken ... e.g. "Updates on QDPMA Stats Stream - Updated" points to http://www.qdpma.com/cbo/statstream.html, which gives me "Server not found".

January 4, 2013 8:54 AM
 

ALZDBA said:

Must be something in the URL on this page; the copy/pasted URL works ?-)

January 4, 2013 8:56 AM


About jchang

Reverse engineering the SQL Server Cost Based Optimizer (Query Optimizer), NUMA System Architecture, performance tools developer - SQL ExecStats, mucking with the data distribution statistics histogram - decoding STATS_STREAM, Parallel Execution plans, microprocessors, SSD, HDD, SAN, storage performance, performance modeling and prediction, database architecture, SQL Server engine
