
Adam Machanic

Adam Machanic, Boston-based SQL Server developer, shares his experiences with programming, monitoring, and performance tuning SQL Server. And the occasional battle with the query optimizer.

  • SQL Saturday 89 - Atlanta GA - Materials

    Last Saturday, September 17, I was lucky to be able to present two sessions at an excellent SQL Saturday in the Atlanta area. The day drew a large crowd and had a great speaker lineup. All in all, a huge success, and a very well-managed event. Congratulations to the organizers!

    One of the highlights for me, aside from speaking, was helping out with logistics the night before and creating a cocktail for the event. Check out Audrey Hammonds's blog for details.

    My two sessions were:

    "SQL Server Parallelism and Performance"

    Over the past five years, multi-core processors have made the jump from semi-obscure to commonplace in the data center. Today we regularly expect to see 16, 32, or 64 cores in even our lower-end servers. Are you getting everything you can out of the wealth of processing power at your disposal? Attend this session to take a detailed look at how and why SQL Server processes queries in parallel, as well as methods for controlling parallel processing via configuration options, the Resource Governor, and query-level hints. This session will enable you to immediately evaluate, understand, and improve the state of parallel processing on your servers.
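
    For a taste of what that control looks like in practice, here's a minimal sketch of two of the three mechanisms mentioned above--configuration options and query-level hints (the values and the AdventureWorks table are placeholders, not recommendations):

    -- Server-wide settings (these are advanced options)
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO

    EXEC sp_configure 'max degree of parallelism', 8
    EXEC sp_configure 'cost threshold for parallelism', 50
    RECONFIGURE
    GO

    -- Query-level override via a hint
    SELECT COUNT(*)
    FROM Sales.SalesOrderDetail
    OPTION (MAXDOP 4)
    GO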

    and

     "15 Powerful SQL Server Dynamic Management Objects"

    The Dynamic Management Objects--a set of views and functions that first shipped with SQL Server 2005--are a window into the inner workings of your SQL Server instance. Locked within is the data you need to help solve virtually any performance problem, quickly debug issues, and gain insight into what's actually happening on your server, right now. This session is a fast-paced tour of the ins, outs, whys, hows, and even pitfalls of 15 of the most important views and functions--information gleaned from heavy use of the objects in a number of environments. You will learn how to understand transaction behavior, locking, wait statistics, sessions, requests, and much more. Attend this session and you will be the master of your SQL Server instance.
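
    As a small taste, here's a sketch of the kind of query the session builds on, joining two of the objects to see what's executing right now:

    -- What is running, and what is each request waiting on?
    SELECT
        r.session_id,
        r.status,
        r.wait_type,
        r.cpu_time,
        t.[text] AS sql_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID
    GO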

    I promised to share the slides and demos from the second session, and those are attached to this post.

     

    Many thanks to everyone who attended my sessions and for all of the positive feedback I received! (I would also thank anyone who gave me negative feedback, but in this case the worst I received was that I had a typo on one of my slides.) I had a great time, and hope to return to Atlanta again sometime soon.

  • Where Can You Find Me the Rest of This Year?

    Autumn is creeping inevitably closer here in the US, and that means that speaking season is about to kick into high gear. Here's my current schedule for the remainder of the year:

     

    September 8, 17:00 GMT (online) - 24 Hours of PASS webcast: "Baseline Basics or: Who Broke the Database?"

    In this session, excerpted from my PASS Summit precon, I'll explain the whys and hows of using baselines to assist with performance tuning. If you find yourself more often than not tuning reactively rather than proactively, this session is for you. This is a free webcast, so why not join in?

     

    September 8, 18:30 EDT (Waltham, MA) - New England SQL Server Users Group: "Windowing Functions in SQL Server 2008, Denali, and Beyond"

    Two in one day! This session will kick off the 2011/2012 New England SQL Server season, and will cover what are (in my ever-so-humble opinion) the most important T-SQL enhancements in the past three versions of SQL Server: windowing function enhancements. I'll discuss what's there in today's shipping versions, and the new and incredibly powerful functionality that Denali brings to the table. If you're in the Boston area, don't miss it!
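
    To illustrate the kind of enhancement I mean, Denali's ordered window aggregates turn the classic running-total problem into a single expression (a sketch; the table and columns are hypothetical):

    SELECT
        account_id,
        transaction_date,
        amount,
        SUM(amount) OVER
        (
            PARTITION BY account_id
            ORDER BY transaction_date
            ROWS UNBOUNDED PRECEDING
        ) AS running_balance
    FROM dbo.transactions
    GO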

     

    September 17 (Atlanta, GA) - SQL Saturday #89: (Two Talks)

    After the great time I had at this year's TechEd show in Atlanta I could hardly wait to get back. Luckily, the fantastic SQL Server community in the Atlanta area scheduled this event, which gave me the perfect excuse to pack my bags for a return trip. The speaker lineup for this event is rock-solid, and I'll be contributing with two talks: my introductory dive into SQL Server parallelism, and an overview of my 15 favorite activity monitoring dynamic management objects. This should be a great event by anyone's standards, and it's free, so if you live nearby you have literally no excuse not to attend.

     

October 11, 08:30 PDT (Seattle, WA) - PASS Community Summit Pre-Conference Seminar: "No More Guessing! An Enlightened Approach to Performance Troubleshooting"

    No more guessing! It's not just a catchphrase; it's a way of life. When faced with performance problems we have a choice: we can either run around panicking, wasting everyone's time (including our own), or we can use the huge amount of information at our disposal to figure out what's actually wrong and fix it. As the calm and collected type, I prefer the second option, and so should you. If you'll be attending the PASS conference, join me on Tuesday to learn how you, too, can quickly and accurately pinpoint the root cause of your performance issues.

     

October 12-14 (Seattle, WA) - PASS Community Summit Spotlight Session: "Query Tuning Mastery: Zen and the Art of Workspace Memory"

    Your query is running, and it needs to sort some data. Or to hash some data. Or to perform a parallel operation. These things take memory, and as any SQL Server professional knows, in the world of SQL Server memory is worth much, much more than its weight in gold (even given today's hugely-inflated prices). Attend this session to learn the ins and outs of workspace memory: what it is, why it's needed, where the memory comes from, and most importantly, how to control it to make certain queries faster and other queries not have to wait as long. Workspace memory tuning is a mostly untapped performance opportunity that many DBAs can heavily benefit from learning how to leverage.
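
    As a hint at the subject matter, the memory grant DMV is one obvious place to start looking (a sketch, not the session's demo code):

    -- Who has workspace memory right now, and who is waiting for a grant?
    SELECT
        session_id,
        requested_memory_kb,
        granted_memory_kb,
        wait_time_ms
    FROM sys.dm_exec_query_memory_grants
    ORDER BY requested_memory_kb DESC
    GO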

     

     November 1-3 (Las Vegas, NV) - SQL Server Connections: (Three Talks)

    My final speaking engagement of the year will be at the always-fun SQL Server Connections show in Vegas. (It's in Vegas! How could it not be fun?!) I'll be doing three talks during the course of the show: An introductory talk on my favorite topic the past couple of years, parallelism in SQL Server; a much more advanced parallelism talk to build on that one; and a talk on the various dynamic management objects that can be used in the quest for ultimate SQL Server performance. Save a spot for me at the poker table!

     

    I'm really looking forward to these events. If you're going to be there let me know in the comments, and/or feel free to find me at any of the shows and say hi. (Buying me a drink or two wouldn't hurt either.) See you there!

  • Data Education: Food For Your Brain

    I've written here before about Data Education, the training company I recently launched, building off my previous company Boston SQL Training. Things have moved along quickly, and I and the rest of the Data Education staff are very happy to announce a jam-packed fall and winter course roster. We'll be announcing even more classes later in the year, but I wanted to give you a run-down of what we're currently offering. (Note that several of these are currently in the Early Bird stage, which means there are steep discounts already available. For any that aren't, I'd like to offer SQLBlog readers a nice shiny $400 discount. Just enter the code SQLBLOG when registering.)


    Without further ado:


    The BI Master: TEO LACHEV

    Teo will be in Boston next month (September 19-23) sharing his impressive and encyclopedic knowledge of all things SSAS and PowerPivot. Considering he's literally written the books on the subject, this is one BI class you don't want to miss. (Class to include discussion and labs on OLAP, data mining, PowerPivot apps, data analysis, and best practices, plus much more.)


    The Entity Framework Guru: JULIE LERMAN

This is one boot camp you can't afford to miss, plus world-renowned .NET expert Julie Lerman won't make you do push-ups. (I don't think.) Instead, she'll spend five days (October 3-7) telling you everything you need to know about Entity Framework 4.1, including Code First. This one will be somewhat intensive and very, very thorough. (Class to include discussion of Code First A-Z, EF 4.1 DbContext/DbSet, EF in the cloud, EF performance tricks, plus much more.)


    The PowerShell Virtuoso: ALLEN WHITE

    You'd have to live under a rock to not know that PowerShell is the thing to know these days. We're bringing in Allen White for a three-day course (November 14-16) to teach you Everything You Ever Needed to Know About PowerShell in a SQL Server World: writing scripts, managing scripts, and managing your environment. No knowledge of PowerShell required! (Class to include discussion and labs on SMO, backup and recovery, automating, and importing/exporting CSV and XML files, plus much more.)


    The T-SQL Super Star: ITZIK BEN-GAN

    Yep, we got him: Itzik Ben-Gan is coming to the East Coast (January 23-27) to present his wildly popular Advanced T-SQL Querying, Programming, and Tuning class. Itzik will talk logical query processing. He'll talk islands and gaps. He'll talk nested iterations using recursive queries. And then he’ll talk about so, so much more. There's a reason he sells out classes and fills rooms at conferences, you know. (Class to include discussion and labs on query tuning, aggregating and pivoting data, TOP and APPLY, programmable objects, plus much more.)


    Also, I wanted to point out a few other things we're doing at Data Education, all of which I think my fellow SQL fanatics might appreciate:

    • Twitter: We have a very active Twitter stream going (@dataeducation). Every weekday, we tweet the best in database blogs and articles. And sometimes we throw in a Chuck Norris joke. Just because.
    • Question of the Week: In light of the fact that a Friday is made even better by a $25 Amazon gift card, we've launched our new Question of the Week feature. Each Monday, we post a brainteaser. You get until Thursday at midnight to prove your riddle know-how. The questions have been fairly easy so far. Prepare for much trickier challenges. (Yes, that's a threat!)

     

    Enjoy, and we'll see you in class!

  • A Computer Scientist Meets T-SQL

There I was. A freshly printed bachelor's degree in Computer Science tucked under my arm, I walked into my First Real Job. I'd never touched the technologies I was going to work with--ASP 3.0 and SQL Server--but my employer knew that, and I figured I'd be able to pick things up relatively quickly. After all, I'd been programming since the 2nd grade, knew a number of languages (more or less), and had all of the academic background I'd ever need.

I was given a quick tour of the office, oriented by HR, and met the rest of my group. As a member of the small IT team, my coworkers included an Exchange administrator and a couple of help desk support/networking guys. No other developers. I was given a cubicle (shared with a finance person) and told to start creating the company's intranet. So, I did. And with no one to learn from or bounce ideas off of, I made up my own rules.

    One of the first things I was asked to create was a threaded message system so that the remote marketing and sales teams could converse with one another. I got to work on a basic (and very ugly) user interface, and then tackled the database side of the equation.

    I'd never encountered a real live database before, and scarcely even knew what one was, but I took a day or two to read "Teach Yourself SQL in 24 Hours" and figured I was good to go. Somehow I recognized the need for a self-referencing table. I created it, populated it with some test data, and quickly discovered that the simple test queries I'd written to date weren't going to work. How would I navigate the hierarchy and display the messages in a threaded fashion?

    After messing around for quite some time and applying all of my Computer Science prowess, I came up with an algorithm:

    1. Using a cursor, start iterating the rows in the table.
    2. Insert "parent" rows into a temp table.
    3. For each "parent" row, recurse, calling the same stored procedure again and starting at step 1.
    4. Finally, return all collected results.

    With its nested cursors and various work being done in temp tables, this algorithm scaled so very well that shortly after rolling the message board out to production I got to work on implementing a cache so that users wouldn't have to wait several seconds when making a web request.

    Clearly, I'd done something very wrong. And I knew it. I just didn't know enough to know how to find the right answer.

    Much later I learned the correct term for what I'd created--an adjacency list hierarchy--and I learned about other methods of modeling hierarchies, including materialized paths and nested sets. I learned that many of the lessons I'd been taught in school--where the curriculum was heavily biased toward procedural languages--didn't apply well to SQL databases.
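
    Today, of course, a single recursive CTE (available as of SQL Server 2005) walks an adjacency list with no cursors or temp tables in sight. A sketch, against a hypothetical messages table:

    WITH thread AS
    (
        -- Anchor: the root messages
        SELECT message_id, parent_id, subject, 0 AS depth
        FROM dbo.messages
        WHERE parent_id IS NULL

        UNION ALL

        -- Recursive member: each message's replies
        SELECT m.message_id, m.parent_id, m.subject, t.depth + 1
        FROM dbo.messages AS m
        INNER JOIN thread AS t ON
            m.parent_id = t.message_id
    )
    SELECT *
    FROM thread
    GO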

    And most importantly I learned how to ask (and answer!) questions on online forums. Being a team of one doesn't mean that you need to work in isolation. There is a huge community of people online who want to help you succeed. Finding these forums (and newsgroups. RIP, NNTP!) was a revelation. The ability to talk shop with people who understood what I was trying to do and how best to do it was invaluable to my learning how to be a better developer and not just a student of Computer Science.

    Many years later and I still get stuck on difficult problems, but these days I don't try to do everything in isolation. I know better than that. I reach out to my network and take advantage of some of the great minds I'm lucky enough to have access to. And you can, too. Next time you find yourself with a less than ideal solution, swallow your pride and ask for help.

    And don't forget to help someone else in return. As much as you'll learn from the people answering your questions, you'll get even more value from puzzling over the numerous problems that other people face on a day to day basis. Solving as many problems as you can--your own and those of others--is in my opinion the fastest way to truly master a given technology.

    Enjoy your journey--and always remember that you don't have to go it alone.

  • T-SQL Tuesday #21 - A Day Late and Totally Full of It

“This ugly hack is only temporary,” you think. Six months later, a coworker curses your name, sacrificing a chicken to any deity that will help expedite your getting struck down by lightning, a school bus, or both.

    Crap code. We’ve all seen it. We’ve all created it. We’re all guilty. Yes, even you. Sometimes our crap is purposeful—the ugly, “temporary” hack. Sometimes we produce crap because we simply don’t know any better. But there is no excuse good enough. As professionals, we must strive to rid ourselves of bad habits. And the only way to learn the difference is to see lots, and lots, and lots of examples.

    That’s where you come in. This month’s T-SQL Tuesday challenge: reveal your crap to the world. Why is (or was) it crap? Why did you do it? And how did you learn from your mistake?

    I’m posting a day late, and the subject matter may require a bit of thought, so I’m slightly changing the rules—temporarily, of course.

     

    Here’s what you need to do.

    • Write a blog post talking about your crap. This doesn’t mean that your post should be crap. Two wrongs, in this case, do not make a right!
    • Your post must go live some time between 00:00:00.00 GMT and 23:59:59.99 GMT on Wednesday, August 10, 2011.
    • Your post must contain the T-SQL Tuesday (or Wednesday) logo from above, and the image should link back to this blog post.
    • Make sure that a link appears in the comments section below, either via a trackback or a manual comment. I’ll take all of the posts that are reported and write a round-up.

    Even better…

    • Include a reference to T-SQL Tuesday (or Wednesday) in the title of your post
    • Tweet about your post using the hash tag #TSQL2sDay
    • Consider hosting one of these. Ask me how.

    Enjoy! Here’s hoping for a truly cathartic blog event.

  • Performance, Discounts, and an Excuse to Visit New York City

    A couple of weeks ago I announced a two-day advanced performance seminar in New York City, which will be delivered in July. This seminar will cover SQLCLR and parallelism techniques to help you take performance well beyond the levels that typical tuning exercises yield. Check out the links for more details, including a full outline.

    Thanks to the great response so far, we have decided to extend the early registration discount for a few more days. You have until the end of the day tomorrow, June 3, to take advantage of the $100 savings off of the $1050 course fee. To get the discount, use the code EARLYBIRD on the registration page.

    Finally, I would like to point out that one of the main reasons that I chose New York City for this course is that it is by far my favorite city to visit. I just noticed yesterday that a bunch of hotels are offering a free third night this summer--so I hope that you'll be able to come for the course and then stay an extra night or two to enjoy the city. Note that we also have a hotel discount available to course attendees, unrelated to the third night offer. Let me know if you would like more information on that.

    Hope to see you in New York next month!

  • Two Days of Advanced Performance Techniques - July 14-15, New York City

    I am pleased to announce that I will be delivering two days of training in New York City, July 14 and 15.

This seminar focuses on achieving "next-level" performance--going beyond that which you can gain via normal tuning methodologies. The vehicles for this performance improvement are two technologies that I've been pushing on this blog and in other venues for a long time: SQLCLR and parallelism. The seminar will be based on the in-depth materials that I used for my full-day sessions at the PASS conference in 2009 and 2010. These sessions were both quite well received, but I have tweaked and tuned the content to make it even better and more focused on the bottom-line goal of achieving maximum performance.

    Full information on the seminar is available on the Data Education web site. There is also an early bird discount currently in effect. Use the discount code "EARLYBIRD" to save $100 on the $1050 registration fee.

     

    I would like to take this opportunity to mention that Data Education is a new training venture that I've recently launched. This will be the company's second public training event (our first featured Kalen Delaney in the Boston area). The company is an evolution of Boston SQL Training, a company that I started a couple of years ago with the goal of bringing extremely high-quality SQL Server training events to the Boston area. The new name, Data Education, reflects our desire to focus beyond Boston and on a broader technology spectrum. We plan to eventually move into training on data-related programming (Entity Framework and similar), other DBMS platforms, NoSQL technologies, and wherever else the database industry moves.

    Currently, aside from my course in New York we've announced an Analysis Services and PowerPivot course featuring Teo Lachev, which will take place in the Boston area September 19-23. Several other courses will be announced shortly, so stay tuned and consider following us on Twitter (@DataEducation).

    If you've read this far, I would greatly appreciate your taking part in a quick and informal poll: in the comments section below, please let me know what geographic location would be interesting to you for an advanced SQL Server course, and what topic areas you're not seeing enough of.

     

    Thanks, everyone, and I'm looking forward to seeing you in New York!

  • TechEd 2011 - Performance Tuning and Optimization in SQL Server 2008 R2 and SQL Server Code Named "Denali"

    Thanks to everyone who took the time out of their conference experience to join Mike Wachal and me for yesterday's session on SQL Server performance tuning!

    For those who weren't there, we focused in on troubleshooting techniques, highlighting some of the key DMVs and new Extended Events features that will help with proactive diagnosis of problems. My section, in particular, was a bit of a taste of some of the sessions I've submitted for this fall's PASS summit: I showed a brief demo of troubleshooting using my No More Guessing methodology, and applied it to the problem of workspace memory contention. The sessions for PASS cover these topics in detail, so if you find this area interesting please vote here.

    The demo script I showed yesterday is attached, along with a copy of the latest versions of my Who is Active stored procedure and the SQLQueryStress tool. Feel free to leave a comment below or drop me a line via e-mail if you have any questions.

  • Twenty Nine Days of Activity Monitoring (A Month of Activity Monitoring, Part 30 of 30)

    This post is part 30 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    Pop quiz: What happens when you promise to write 30 posts on a topic, but you don’t plan properly and lose a bit of steam toward the end?

    Answer: A recap post, a day or two early! Woo hoo!

     

    The month started with a few introductory posts. A history of monitoring (from my point of view), some information on Who is Active’s design philosophy, and a few words on the Who is Active license.

    The key takeaways from these first posts? The monitoring story has been pretty poor for most of SQL Server’s history. It’s better now, but it’s still a lot of work. Who is Active was created to make things more accessible. It’s free, with a few restrictions designed to keep people who make money from monitoring from making money on Who is Active. Anyone else can and should use and enjoy.

     

    Next I got new users up to speed. My post on installation covered security considerations, followed by a post on why you may not see as much data as you're used to when you first start using Who is Active. I continued with a post on the various options supported by the procedure (plus a nod to the online help mode) and another post describing the default output columns.

    One of the more important posts of the month described how Who is Active treats active requests and sleeping sessions. This can be a confusing area for new users, since it's not always obvious what's going on unless you're looking in the right place at the output (the [status] column).

     

    Once the refresher was finished I began covering some of the basic options exposed by the procedure. Filters were one of the first things I implemented, so it was a logical place to start (the "not" filters came much later). Seeing the query text is a key part of the procedure's functionality, and that was next on my list. If you're not sure what the [sql_text] column means, this post will set the record straight.

    The ability to see the query text is nice, but so is the ability to get a query plan--and of course Who is Active supports that too. And since everything a query does is transactional, the procedure allows users to collect information about what those transactions are up to.

     

    The next section of the month was all about query processing. I started with a couple of background posts: One on how the query processor works (at a somewhat high level), and another on what blocking really means.

    Who is Active exposes two different modes that help with analysis of real-time waits. My 15th post of the month covered the default--lightweight--collection mode. My followup post covered the more extensive full waits collection mode.

     

Sometimes a query plan and wait information aren't quite enough to diagnose a performance issue. What if the query plan usually works well, but isn't performing properly in this one specific case? Who is Active has a feature to help you figure that out. And what if you need a bit more information on some of the settings that the request is using?

     

    Mining wait information yields some amazing returns. One of the additional pieces of information that you can get is the actual name of the object that's causing a block. Another thing you can see is the exact node within a query plan where a task is currently doing work. You can (and should!) also use waits to figure out whether tempdb is properly configured. There are still more tricks you can play with waits, but they'll have to wait for a future post.

     

Like wait information, lock data can also be mined. However, there is so much lock information that the real challenge is rendering it in a human-readable manner. Who is Active does that using a special custom XML format. The procedure also helps with another type of blocker analysis, in the form of finding the "lead blocker."

     

    Once you've figured out what information you're interested in, why not set up the output so that you can see the information the way you want to see it? Who is Active helps with this by allowing users to dynamically customize output columns and sort order in a few different ways. And while you could always run Who is Active interactively, that might get a bit dull. Automated data collection is a much nicer methodology in many cases.

     

    Most of the numbers shown by Who is Active are cumulative. But oftentimes it's more interesting to compare data between two snapshots. The procedure can help you do that, using its delta mode.

     

    What fun would a month of Who is Active be without an official release? v11.00 has numerous new features, several of which were perfected this month thanks to feedback I received on the various posts in the series. It's great to have such enthusiastic users! Some of these new features didn't make it into prior posts, and other existing features are a bit hidden. So I did a writeup covering the more important things that you may not have noticed while working with Who is Active.

     

    I finished the month with a discussion on security for slightly tougher situations. I hope that the module signing technique will allow Who is Active to be used in a number of places where security auditing requirements have made things difficult.

     

    And that's that. A month of activity monitoring with Who is Active. Thank you for reading! Next, this text is going to be edited, expanded in places, and put into a much more comprehensive form. Watch your RSS reader for more information.

    Enjoy!

  • Access for All! (A Month of Activity Monitoring, Part 29 of 30)

    This post is part 29 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    Early in the month I discussed basic security requirements for Who is Active. I mentioned the VIEW SERVER STATE permission and the fact that I consider it to be “a relatively low barrier to entry.”

    But what if, in your organization, it’s not? Auditing requirements being what they are, you might be required to lock things down. And granting someone full and unrestricted VIEW SERVER STATE may simply not be an option.

    Enter module signing. By securing Who is Active (or any other securable, for that matter) via inherited permissions, it’s often possible to get around auditing requirements, as long as the module itself has been reviewed. This is not at all a difficult thing to do, but in my experience most DBAs haven’t played much with signed modules. Today I’ll show you how quick and easy it can be to set things up.

    Start by creating a certificate.

    USE master
    GO

    CREATE CERTIFICATE WhoIsActive_Permissions
    ENCRYPTION BY PASSWORD = '1bigHUGEpwd4WhoIsActive!'
    WITH SUBJECT = 'Who is Active',
    EXPIRY_DATE = '9999-12-31'
    GO

    Once you have a certificate in place, you can create a login from the certificate. The goal is to grant permissions, and to do that you need a principal with which to work; a certificate does not count. A login based on the certificate uses the certificate’s cryptographic thumbprint as its identifier. These logins are sometimes referred to as “loginless logins,” but I refer to them as “proxy logins” since that’s what they’re used for: proxies for the sake of granting permissions.

    CREATE LOGIN WhoIsActive_Login
    FROM CERTIFICATE WhoIsActive_Permissions
    GO

    The login can be granted any permission that can be granted to a normal login. For example, VIEW SERVER STATE:

    GRANT VIEW SERVER STATE
    TO WhoIsActive_Login
    GO

    Once the permission has been granted, the certificate can be used to sign the module—in this case, Who is Active. When the procedure is executed, a check will be made to find associated signatures. The thumbprint of the certificates and/or keys used to sign the module will be checked for associated logins, and any permissions granted to the logins will be available within the scope of the module—meaning that the caller will temporarily gain access.

    ADD SIGNATURE TO sp_WhoIsActive
    BY CERTIFICATE WhoIsActive_Permissions
    WITH PASSWORD = '1bigHUGEpwd4WhoIsActive!'
    GO

    Getting to this step will be enough to allow anyone with EXECUTE permission on Who is Active to exercise most of its functionality. There are a couple of notes and caveats: First of all, every time you ALTER the procedure (such as when upgrading to a new version), the signature will be dropped and the procedure will have to be re-signed. You won’t have to create the certificate or the login again; you’ll just have to re-run that final statement. Second, you’ll only be able to use most of the functionality. Certain features, such as blocked object resolution mode, won’t operate properly, depending on whether the caller has access to the database in which the block is occurring. This may or may not be a problem—it depends on your environment and what users need to see—and Who is Active itself won’t throw an exception. An error message will be returned somewhere in the results, depending on what the user has tried to do.

If you would like to grant database-level permissions based on the certificate login so as to avoid these errors, that’s doable too. Just do something like:

    USE AdventureWorks
    GO

    CREATE USER WhoIsActive_User
    FOR LOGIN WhoIsActive_Login
    GO

    EXEC sp_addrolemember
        'db_datareader',
        'whoisactive_user'
    GO

This will allow Who is Active to figure out what the various blocked or locked object names are. Since the login is just a proxy, no one can actually log in and use it to get direct access to read the data, so this isn’t something I consider to be a security risk. However, keep in mind that if anyone has the password for the certificate and sufficient privileges in master, a new module could be created and signed. Keep the password secure, and make sure to audit carefully to catch any infractions before they become a risk.
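
    (One way to approach that auditing, sketched here: the sys.crypt_properties catalog view lists module signatures, so a signature from the Who is Active certificate on an unexpected module will stand out.)

    -- Run in master: which modules carry this certificate's signature?
    SELECT
        OBJECT_NAME(cp.major_id) AS signed_module,
        c.name AS certificate_name
    FROM sys.crypt_properties AS cp
    INNER JOIN sys.certificates AS c ON
        c.thumbprint = cp.thumbprint
    WHERE c.name = 'WhoIsActive_Permissions'
    GO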

    Security policy should never be a reason to limit your monitoring choices. Module signing is a powerful tool for Who is Active in addition to many other applications. I highly recommend studying it in detail in order to enhance your ability to provide high-quality, totally flexible, and completely secure solutions.

  • Who is Active's Hidden Gems (A Month of Activity Monitoring, Part 28 of 30)

    This post is part 28 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


Over the past month this series has covered a lot of ground. After writing almost 30 blog posts on the stored procedure, it’s interesting to look back and see that it’s gotten much bigger and more complex than I realized. Kind of like watching a plant grow, you don’t notice the day-to-day changes until one day you go to water it and see that it’s taken over your entire garden.

    A few things that I’ve added along the way have been especially useful, but there hasn’t been a good place to mention them thus far in the series. With just three posts left for the month, now is the time.

     

    Always Show Blockers

In a post earlier this month I talked about filters. Filters let you decide exactly what you want to see, and what you don’t want to see (“not” filters). But sometimes you have no choice: if you’re filtering so that you only see session 123, and it’s being blocked by session 456, you’ll also see information about session 456. The idea is that you should always get enough context to fully evaluate the problem at hand, even if it means that you see more information than you were intending to see the first time around. You probably would have asked for 456 next anyway.
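
    For example, using the @filter options covered in that earlier post:

    -- Ask for session 123 only; any session blocking it will be
    -- included in the output as well
    EXEC sp_WhoIsActive
        @filter = '123',
        @filter_type = 'session'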

     

    “RUNNABLE” Waits

    In @get_task_info = 2 mode, you may see waits called “RUNNABLE.” This could strike you as an oddity, given that there is no such wait type in SQL Server. I wanted to show tasks on the runnable queue, and making up a fake wait type seemed like a reasonable way of accomplishing the task. In practice, it has worked extremely well—I’ve used this feature countless times to help understand scheduler contention on a SQL Server instance.
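
    Seeing it is as simple as enabling the full task information mode:

    -- Tasks sitting on the runnable queue appear with the synthetic
    -- "RUNNABLE" wait
    EXEC sp_WhoIsActive
        @get_task_info = 2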

     

    Accurate CPU Time Deltas

    CPU time is a tricky metric. It gets handled by Who is Active’s delta mode, and has for several versions. But historically, never very well. The data simply isn’t represented in an easily-obtainable fashion in the core DMVs. Recently I decided to dig deeper into this problem and discovered that I could get better numbers from some of the thread-specific DMVs. They’re cumulative numbers, based on the lifetime of the entire thread—not too good for the usual Who is Active output. But for snapshot and delta purposes, just about perfect. Meaning that in v11.00 of Who is Active, you can see the [CPU] column show a value of 0, while the [CPU_delta] column shows a value in the thousands. It’s not a bug. It’s a feature. (It really is!)
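
    To get the more accurate numbers, combine the two relevant options (per the v11.00 release notes elsewhere in this series):

    EXEC sp_WhoIsActive
        @get_task_info = 2,
        @delta_interval = 5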

     

    Text Query Plans and the XML Demon

    SQL Server 2005 introduced query plans as XML. Management Studio knew how to render these plans graphically. And we were able to pull the plans from DMVs. Life was great. Except, perhaps, when you actually wanted to view one of these plans, and you had to save the thing out to a .SQLPLAN file, close the file, then re-open it. That’s about three steps too many for my taste, so I was overjoyed when the Management Studio team decided to wire things up the right way in SQL Server 2008. Click on a showplan XML document, see a graphical plan. Simple as that.

    Unfortunately, the XML data type has its own issues, including one particularly nasty arbitrary limitation that has to do with nesting depth. The idea is to make sure that XML indexes don’t crash and burn too often (not a big concern for me, given that I’ve never seen one used in a production environment—but I digress). The problem is that query plans are heavily nested. And to get that nice graphical plan workflow, SSMS needs the plan rendered as XML.

    In prior versions of Who is Active I gave up and returned either an error or a NULL. But in v11.00 I decided to make things a bit better. If the nesting issue occurs, Who is Active will now return the plan as XML encapsulated in some other XML in a text format, along with instructions on how to view the plan. This won’t give you a nice one-click experience, but it will give you the ability to use Who is Active to see some of the bigger plans that are causing performance issues.

     

    Service Broker Needs Love Too

One of the most interesting features of Service Broker is activation. But a vexing design choice on the part of the Service Broker team was to make activation procedures launch as system sessions. This means, among other things, that prior versions of Who is Active filtered them right out of the default view. To see them you’d have to enable @show_system_spids. And then you’d have to ignore all of the other system stuff. And you’d get woefully bad time information (no, the activation process hasn’t been running for 25 days; that’s the last time you restarted the SQL Server instance). In Who is Active v11.00 this has been fixed. Service Broker activation processes are now displayed by default along with other user activity. And I found a way to fix the timing issue, thanks to some advice on Twitter from Remus Rusanu, one of the guys who originally worked on Service Broker. So if you’re using activation and monitoring with Who is Active, life is good.

     

    Homework

    The homework section of this series is officially closed. Your assignment is to go enjoy the rest of your day.

  • Who is Active v11.00 (A Month of Activity Monitoring, Part 27 of 30)

    This post is part 27 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    Thanks to your feedback over the past month, I’ve managed to get a lot of work done on the next version of Who is Active.

    So much work, in fact, that I’ve finished a new major release.

    Click here to download Who is Active v11.00

    There are numerous enhancements in this release. In no particular order:

    • CPU deltas now use thread-based time metrics for more accurate data (use both @delta_interval and @get_task_info = 2)
    • command_type information added to [additional_info] column for active requests
    • Modified elapsed time logic to retrieve more accurate timing information for active system SPIDs
    • SQL Agent job info (job name and step name) is now included in the additional_info column (use @get_additional_info = 1)
    • If there is a lock wait, information about the blocked object (name, schema name, and ID) is now included in the additional_info column (use both @get_additional_info = 1 and @get_task_info = 2)
    • Service Broker activated tasks are now shown by default, without using @show_system_spids mode. The program_name column contains the queue_id and database_id associated with the activation procedure
    • Various numeric columns, including reads, writes, CPU, etc., have been made nullable. These will occasionally return NULL on extremely active systems, where the DMVs return data more slowly than queries start and complete
    • Query plans that cannot be rendered due to XML type limitations are now returned in an encapsulated format, with instructions, rather than sending back an error
    • Added wait information for OLEDB/linked server waits
    • Wait collection will now "downgrade" to get_task_info = 1 style data if no other information is available in get_task_info = 2 mode
    • Added header information to online help
    • Added a login_time column to the output
    • The duration for sleeping sessions is now the sleep time, rather than the time since login
    • Fixed various bugs

    I’ve written about several of these things over the past month, and there are a couple of posts left for the remainder of the month, so I won't elaborate here.

    Thank you, thank you, and thank you again to everyone who has taken the time to give me feedback and/or report bugs! Who is Active would not be what it is without you!

    I would especially like to thank Paul White. Paul has acted as my unofficial "lead QA engineer" for the past few versions. He's found some very interesting bugs in my code, in addition to pointing out some quirks in the DMVs that I wasn’t handling properly. Thanks, Paul, for the great job you’ve done!

    One other thing I would like to point out is an addition to the header: a donation link. Several people recently have asked me how to donate, and I didn’t have a good answer. Now, I do.

    Use this link to support the Who is Active project!

    While I feel strange asking for money, the truth is that I’ve invested well over a thousand hours in the stored procedure, documentation, and support (I respond to EVERY e-mail I receive from Who is Active users). If you’ve been able to improve your day-to-day performance thanks to Who is Active, it would be great if you—or even better, your employer—could help support my work on the project. I’ve listed a recommended donation of $40 per installed instance, but please donate however much you feel Who is Active is worth to you and/or your organization.

    Thanks again, and enjoy!

  • Delta Force (A Month of Activity Monitoring, Part 26 of 30)

    This post is part 26 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    The CPU is pegged. Your hard disks are fried. Memory consumption is through the roof. Who is to blame?!?

    Well that’s easy. Blame the session with the most CPU consumption. Blame the session doing the most I/O operations. Ask Who is Active what’s going on—it’ll tell you...

[Screenshot: Who is Active output showing cumulative CPU time--over 2,000,000 ms for session 52, about 5,000 ms for session 53]

    Clearly session 52 is the problem here. It’s consumed over 2,000,000 milliseconds of CPU time. Session 53, on the other hand, has consumed a paltry five seconds. No brainer, right?

    Except that it’s not.

    Session 52? It’s in a wait state. (WAITFOR, to be exact.) Not consuming any CPU time at all at the time I ran Who is Active. Session 53? Using plenty. So why the discrepancy? Does Who is Active have a major bug?

    The fact is—and this will be obvious to a lot of you, so forgive me—most of the metrics reported by Who is Active are cumulative. They’re totals from the entire session, or the entire lifetime of the request (depending on what’s going on). In the case of session 52 in the image above, that much CPU time was consumed over a five-minute period prior to the point in time when I ran Who is Active. Session 53, on the other hand, is currently consuming CPU time. But how do I figure that out?

    The answer: delta mode. This feature is something for which you can thank Jimmy May, who kept telling me about his “SQL Deltoids” script that he’d written back in the bad old days of SQL Server 2000. “If only there were a way to apply that script to the SQL Server 2005 DMVs,” he lamented one day. So I added the functionality into Who is Active. It’s quite simple, and surprisingly effective.

    To use delta mode, simply invoke Who is Active’s @delta_interval option. Setting an interval will cause the procedure to execute its main logic branch, then wait the specified amount of time—a number of seconds between 1 and 255—before running the same main logic branch again. All of the numeric values that changed during that period will be compared, and the changed numbers will be output in a series of columns named with _delta at the end.

    For example, here are the same two sessions as above, viewed in delta mode:

    EXEC sp_WhoIsActive
        @delta_interval = 5

[Screenshot: the same two sessions in delta mode--session 52 shows a CPU_delta of 0, session 53 a CPU_delta of over 5,000]

    During the five-second delta interval, session 52 consumed no CPU time. Session 53, on the other hand, consumed over 5,000 milliseconds of time. If the CPU is pegged, 52 is no longer contributing—53 is the session of interest.

    Enabling delta mode will add some or all of the following columns to the output, depending on what other options have been set:

    • physical_io_delta
    • reads_delta
    • physical_reads_delta
    • writes_delta
    • tempdb_allocations_delta
    • tempdb_current_delta
    • CPU_delta
    • context_switches_delta
    • used_memory_delta

    By leveraging these delta columns in conjunction with the @sort_order option, it’s easy to quickly see which sessions are currently consuming your valuable server resources—which is generally more interesting than seeing which sessions may have been consuming resources but are now waiting or sleeping.
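
    For example, a pattern for a pegged CPU (a sketch, using the delta columns and the @sort_order option covered earlier in the series):

    -- Sort by what was consumed during the interval, not the cumulative total
    EXEC sp_WhoIsActive
        @delta_interval = 5,
        @sort_order = '[CPU_delta] DESC'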

Note: The various sessions and requests you'll see in delta mode pertain to the information captured after the wait interval. Information about requests that were running as of the first collection but are not running as of the second is discarded. New requests that started after the first collection will be shown, but with delta values of NULL. Locks, transactions, and other optional information will also be captured only for the second collection, since no delta calculation is possible for those metrics (at least, not yet).

     

    Homework

    Delta mode, in its current form, is only really useful if you have requests that last longer than a second (at minimum—I usually do five or ten-second intervals). What kind of workload do you see on your servers? Is delta mode as it exists today something that you’re making use of (or that you’ll start making use of, now that I’ve written a post about it)?

  • Capturing the Output (A Month of Activity Monitoring, Part 25 of 30)

    This post is part 25 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    Yesterday’s post was about configuring the output based on what you want to see. Today’s post is about taking that perfect output and persisting it.

    There are many reasons that you might like to store the results of a Who is Active call to a table. Some of the real use cases that I’ve been told about by Who is Active users include:

    • Scheduled monitoring. Capturing the results of Who is Active calls at 5- or 10-minute intervals to see what's happening on the database server throughout the day (or night)
    • Using it as part of a build process, to verify that the correct things are happening in the correct order at the correct times
    • Using it as part of an exception handling system that automatically calls Who is Active every time an error occurs, to snapshot the current state of the database instance

    And there are various other use cases in addition to these. The point is that there are a number of reasons that you might want to capture the output.

    Unfortunately, it’s not as simple a task as you might think. The first time I tried to make it work, I did something like:

    CREATE TABLE #output
    (
        ...
    )

    INSERT #output
    EXEC sp_WhoIsActive

    This approach failed miserably. If you try it, as I did, you’ll see the following error message:

    Msg 8164, Level 16, State 1, Procedure sp_WhoIsActive, Line 3086
    An INSERT EXEC statement cannot be nested.

    Who is Active uses a number of INSERT EXEC statements, and they cannot be easily changed or removed, so for a while it seemed like all was lost. After a bit of brainstorming, however, I realized that I could simply build yet another INSERT EXEC into Who is Active—one that will insert into a table of your choice.

    Of course, first you need a table. And if you’ve been reading this series you’re no doubt aware that the output shape returned by Who is Active is extremely dynamic in nature, and depends on which parameters are being used. So the first option I added was a method by which you can get the output schema. Two parameters are involved: If @return_schema is set to 1, the schema shape will be returned in an OUTPUT parameter called @schema. This is best shown by way of example:

    DECLARE @s VARCHAR(MAX)

    EXEC sp_WhoIsActive
        @output_column_list = '[temp%]',
        @return_schema = 1,
        @schema = @s OUTPUT

    SELECT @s

    The idea is that you set up your Who is Active call with all of the options you’d like, then bolt on the @return_schema and @schema parameters. Here the column list is being restricted to only those columns having to do with tempdb. If you run this code, the SELECT will return the following result:

    CREATE TABLE <table_name> ( [tempdb_allocations] varchar(30) NULL,[tempdb_current] varchar(30) NULL)

    This result can be modified by replacing the “<table_name>” placeholder with the name of the table you actually want to persist the results to. Of course this can be done either manually or automatically—after the call to Who is Active, the text is sitting in a variable, so a simple call to REPLACE is all that’s needed. That call could even be followed up by a call to execute the result and create the table...

    DECLARE @s VARCHAR(MAX)

    EXEC sp_WhoIsActive
        @output_column_list = '[temp%]',
        @return_schema = 1,
        @schema = @s OUTPUT

    SET @s = REPLACE(@s, '<table_name>', 'tempdb.dbo.monitoring_output')

    EXEC(@s)

    Of course now you probably want to put something into the table. Crazy! To do this, drop the @return_schema and @schema parameters and replace them with @destination_table—the name of the table into which the results should be inserted:

    EXEC sp_WhoIsActive
        @output_column_list = '[temp%]',
        @destination_table = 'tempdb.dbo.monitoring_output'

    Now the results of the call will be inserted into the destination table. Just remember that every time you change the Who is Active options, you’ll have to re-acquire the output shape. Even a small change, such as adding an additional column to the output list, will result in a catastrophic error.

    EXEC sp_WhoIsActive
        @output_column_list = '[session_id][temp%]',
        @destination_table = 'tempdb.dbo.monitoring_output'

    Msg 213, Level 16, State 1, Line 1
    Column name or number of supplied values does not match table definition.

    How far you take this feature depends on how creative you are. Some of you have come up with elaborate schemes, but I generally keep it simple. Something that I like to do is to set up a short semi-automated process by using Management Studio’s GO [N] option. I use this when I’m doing intense debugging, and will do something like:

    DECLARE @s VARCHAR(MAX)

    EXEC sp_WhoIsActive
        @format_output = 0,
        @return_schema = 1,
        @schema = @s OUTPUT

    SET @s = REPLACE(@s, '<table_name>', 'tempdb.dbo.quick_debug')

    EXEC(@s)
    GO

    EXEC sp_WhoIsActive
        @format_output = 0,
        @destination_table = 'tempdb.dbo.quick_debug'

    WAITFOR DELAY '00:00:05'
    GO 60

    This will first create a table in tempdb, after which it will collect the results every five seconds for a five-minute period. I set @format_output to 0 in order to get rid of the text formatting so that I can more easily work with the numeric data. The results can be correlated to performance counters or other external information using the [collection_time] column, which was added to Who is Active specifically to support automated data collection.

     

    Homework

    Share your experiences with Who is Active’s data collection feature. How are you using it? Have you hit any problems or roadblocks? Any awesome success stories? A few sentences will be great, and more is fine if you’re in the mood to tell a story.

  • The Output of Your Dreams (A Month of Activity Monitoring, Part 24 of 30)

    This post is part 24 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    Early in the process of creating Who is Active it became clear that there was no way everyone was going to agree.

    With each release I received requests to move some column so that it would show up on the lefthand side, or to change the sort order, or to make some other modification that someone felt was necessary to help them more easily digest the data. The problem: it was impossible to accommodate all of these requests. So I decided to go with a self-service model.

    In order to allow users to dynamically customize the output (without touching any code), the stored procedure exposes three options: @output_column_list, @sort_order, and @format_output. Each of these is discussed in the following sections.

    @output_column_list controls not only whether or not certain columns will be shown in the output, but also the order in which the columns are displayed. The correct argument is a list of bracket-delimited column names (or partial names with wildcards). Delimiters are not necessary (use whatever delimiter you like, or none at all; they’re ignored). The key to successfully using this option is to remember that inclusion of columns in the output is additive: many columns (such as [additional_info]) are only added to the output if both the correct options are enabled for the stored procedure and the columns are included in the column list. If you start modifying the list and don’t take this into account, you may not see the columns you’re expecting when you go back later and start changing options. To keep things flexible, make sure to use wildcard columns, especially a generic wildcard ([%]) at the end.

    Using the column list can be as simple as specifying the exact columns you’re interested in:

    EXEC sp_WhoIsActive
        @output_column_list = '[tempdb_allocations][tempdb_current]'

    Notice that no delimiter is used here. Again, any extraneous text aside from the column specifiers is ignored, so the following call is equivalent:

    EXEC sp_WhoIsActive
        @output_column_list = 'this[tempdb_allocations]is[tempdb_current]ignored'

    Easier than specifying exact column names is to use wildcards that match the pattern of the columns you’re interested in:

    EXEC sp_WhoIsActive
        @output_column_list = '[tempdb%]'

    All three of these calls will yield the same output, similar to the following:

[Screenshot: output containing only the tempdb_allocations and tempdb_current columns]

    Of course, this will return only these two columns. Generally I’ll use the column list feature just to move things around so that I don’t have to do as much scrolling, and in those cases I almost always want everything else, too. That’s where the generic wildcard comes into play:

    EXEC sp_WhoIsActive
        @output_column_list = '[tempdb%][%]'

[Screenshot: full output with the tempdb columns on the lefthand side]

    Now the tempdb-specific columns appear on the lefthand side, and all of the other columns follow, in a default order.

     

    The next option we’ll cover is @sort_order. This option controls the order of the rows output by the stored procedure. Like @output_column_list, the input is a list of bracket-delimited column names. In this case, wildcards are not supported, but the keywords ASC[ENDING] and DESC[ENDING] are supported after the column name.

The following call returns data sorted by [login_name] ascending (ascending is the default, so the keyword is optional), with ties broken by [session_id] descending:

    EXEC sp_WhoIsActive
        @sort_order = '[login_name][session_id] DESC'

As before, delimiters are optional and are ignored. Please note that the current betas of Who is Active have a bug where multi-column sorts like this one are not properly honored. That issue will be fixed in the next version of Who is Active.

    Sorting is especially useful when doing any kind of comparison of the various requests currently running. I use it extensively in block leader mode ([blocked_session_count] DESC), and it is quite handy in delta mode—which will be covered in a post in a few days.

     

    The final option this post will cover is @format_output. This one is based on the fact that Who is Active is designed to leverage SSMS as a “graphical user interface” of sorts. Query text is sent back in an XML format, numbers are formatted as right-justified strings, and elapsed time is formatted in the more easily-digestible [dd hh:mm:ss.mss] form.

    If you’ve been using Who is Active for a while you’ve probably noticed the right-justified numbers:

[Screenshot: grid output with right-justified numeric columns]

    This format was suggested fairly early on by Aaron Bertrand, and is one of my favorite things about the stored procedure. It makes it much easier to pick out bigger numbers when you’re looking at a large set of data. By default, SSMS uses a non-fixed width font for grid results, so the default argument to @format_output, 1, takes this into consideration. But some people—like Aaron Bertrand—change the SSMS settings and use a fixed width font instead. If you’re one of these people you can use an argument of 2 to get nicely-formatted numbers. If you don’t change the argument you might notice that the numbers don’t seem to properly line up when you’re working with a set of data containing numbers of greatly differing size.

    Still other people don’t like the right-justified numbers or are doing collection to a table (see tomorrow's post), and so formatting can be completely disabled by using an argument of 0. It’s up to you...
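
    In summary (argument values as described above):

    -- 1 = formatted for a variable-width font (the default)
    -- 2 = formatted for a fixed-width font
    -- 0 = no formatting (best for collection to a table)
    EXEC sp_WhoIsActive
        @format_output = 2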

     

    Homework

    Today’s homework is to enjoy your Sunday. You’re a busy DBA; you’ve earned a break. Get outside and off of the computer!
