
Adam Machanic

Adam Machanic, Boston-based SQL Server developer, shares his experiences with programming, monitoring, and performance tuning SQL Server. And the occasional battle with the query optimizer.

  • SQLbits London 2012 - Demos

    Thanks to everyone who attended my sessions last Friday and Saturday at SQLbits! It was great to meet many new people, not to mention spending some time exploring one of my favorite cities, London.

    Attached are the demos for each of the two talks I delivered:

    Query Tuning Mastery: The Art and Science of Manhandling Parallelism
    As a database developer, your job boils down to one word: performance. And in today's multi-core-driven world, query performance is very much determined by how well you're taking advantage of the processing power at your disposal. Are your big queries using every available clock tick, or are they lagging behind? And if your queries are already going parallel, can they be rewritten for even greater speed? In this session you will learn how to take full advantage of parallelism, from a developer's point of view. After a quick terminology review and technology refresher the session will go deep, covering T-SQL patterns that allow certain queries to scale almost linearly across your multi-core CPUs. Alas, not all T-SQL queries can go parallel, so you will also learn to watch for those things that can restrict the query optimizer's decisions. Along the way you’ll learn to manipulate costs and row goals, challenge generally accepted tuning practices, and take complete control of your parallel queries.

    ... and ...

    Query Tuning Mastery: Workspace Memory Internals
    As SQL Server professionals, we often think of memory in vague, instance-level terms: buffer pool, procedure cache, Virtual Address Space, and so on. But certain tasks require a more in-depth focus, and query tuning is one of them. Large, complex queries need memory in which to work--workspace memory--and understanding the how's, when's, and why's of this memory can help you create queries that run in seconds rather than minutes. This session will give you an in-depth understanding of how the optimizer makes its query memory decisions, with lots of tips and tricks along the way to help you guide the process for top performance. If you work with large queries and are serious about achieving scalability and consistently great performance, you owe it to yourself to attend this session.


    Enjoy!

  • Released: Who is Active v11.11

    It's been several months since the last Who is Active fix, so I thought I'd call this one out specifically via a blog post.

    v11.11 contains a few minor fixes and enhancements, which you can read about on the download page.

    This will (I believe) be the last release that is compatible with SQL Server 2005 and 2008. v11.xx has been quite a stable release in general, with very few bugs found in the 11 months since I released it--I do not expect to need to release any more fixes.

    In the meantime, I have started work on a new version that will take advantage of several SQL Server 2012 features, as well as some SQL Server 2008 features that I was unable to previously leverage due to my efforts to keep the procedure backward-compatible with SQL Server 2005. The new version will be released in a few months, once I've had time to both implement and thoroughly test the new functionality.

    As always, I appreciate any comments or feedback.

    Enjoy!

  • SQL Server Connections Fall 2011 - Demos

    Today is the last day of the annual SQL Server Connections show in Vegas, and I've just completed my third and final talk. (Now off to find a frosty beverage or two.)

    This year I did three sessions:


    SQL302: Parallelism and Performance: Are You Getting Full Return on Your CPU Investment?

    Over the past five years, multi-core processors have made the jump from semi-obscure to commonplace in the data center. While servers with 16, 32, or even 64 cores were once an out-of-reach choice for all except the biggest databases, today we regularly expect such specifications in even our lower-end servers. So, are you getting everything you can out of the wealth of processing power at your disposal? By default, SQL Server automatically handles many of the parallel processing details, but DBAs still need to consider a number of things if they want to ensure that they’re taking full advantage. In this session, we will take a detailed look at the ins and outs of how and why SQL Server processes queries in parallel, with examples to help you identify which queries are being processed in parallel and what they’re doing. You will then learn the various methods of controlling parallel processing: SQL Server configuration options, the SQL Server 2008 Resource Governor, and query-level hints. The information you take from this session will enable you to immediately evaluate, understand, and improve the state of parallel processing on your servers. 
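
    For reference, the three levers mentioned in that abstract look roughly like this (a sketch, not one of the session demos; the numbers are purely illustrative, not recommendations):

    -- Instance level: the max degree of parallelism configuration option
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;

    -- Workload level (SQL Server 2008+): a Resource Governor cap
    ALTER WORKLOAD GROUP [default] WITH (MAX_DOP = 4);
    ALTER RESOURCE GOVERNOR RECONFIGURE;

    -- Query level: a hint that overrides the instance-wide setting
    SELECT COUNT(*)
    FROM Production.TransactionHistory
    OPTION (MAXDOP 4);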


    SQL405: Query Tuning Mastery: The Art and Science of Manhandling Parallelism

    As a database developer, your job boils down to one word: performance. And in today’s multi-core-driven world, query performance is very much determined by how well you’re taking advantage of the processing power at your disposal. Are your big queries using every available clock tick, or are they lagging behind? And if your queries are already going parallel, can they be rewritten for even greater speed? In this session you will learn how to take full advantage of parallelism from a developer’s point of view. After a quick terminology review and technology refresher the session will go deep, covering T-SQL patterns that allow certain queries to scale almost linearly across your multi-core CPUs. Alas, not all T-SQL queries can go parallel, so you will also learn to watch for those things that can restrict the query optimizer’s decisions. Along the way you’ll learn to manipulate costs and row goals, challenge generally accepted tuning practices, and take complete control of your parallel queries. 


    SQL323: What’s Really Happening on Your Server? 15 Powerful SQL Server Dynamic Management Objects

    There are two kinds of DBAs in this world: those who scratch their heads, unsure of how to find answers, and those who demand real-time, comprehensive insight. This session is for the latter type, the Type A DBAs who are serious about managing their servers as efficiently as possible. The Dynamic Management Objects – a set of views and functions that first shipped with SQL Server 2005 – are a window into the inner workings of your SQL Server instance. Locked within the objects is the information you need to help you solve virtually any performance problem, quickly debug issues as they’re occurring, and gain insight into what’s actually happening on your server, right now. This session is a fast-paced tour of the ins, outs, whys, hows, and even pitfalls of 15 of the most important views and functions – information gleaned from heavy use of the objects in a number of environments over the past five years. You will learn how to understand transaction behavior, locking, wait statistics, sessions, requests, and much more. No longer will you need to scratch your head, wondering what is slowing down your queries: You will be the master of your SQL Server instance.
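
    If you've never touched the DMOs, the flavor is easy to show. Here's about the simplest useful example (a starting point of my own choosing, not one of the attached demos): every active request, what it's waiting on, and its text.

    SELECT
        r.session_id,
        r.status,
        r.wait_type,
        r.blocking_session_id,
        t.text AS query_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID;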


    Huge thanks to everyone who decided to attend one of my talks! The decks are available on the conference DVD, and the demos are attached to this post.

    Please let me know if you have any questions about the material. Thanks again, and enjoy!

  • Thinking Big (Adventure)

    If the title of this post doesn't have you scratching your head, you may have been paying very rapt attention last time you saw me speak.

    I love the portability of AdventureWorks and the fact that anyone can download it. Since it was released I've used it almost exclusively for demos in talks I've written. However, in recent months I've been moving away from the core tables in the database. Fact is, they're just a bit too small to show performance artifacts of parallelism, spilling to tempdb, and the like -- the topics that I'm currently enamored with.

    Instead I've started using a couple of tables modeled after Production.Product and Production.TransactionHistory. These tables are called dbo.bigProduct and dbo.bigTransactionHistory, and I refer to them collectively as bigAdventure.

    The bigAdventure tables are several times larger than their AdventureWorks brethren, and allow me to easily create queries that overwhelm the 8 cores on my laptop. Which is exactly what I need to emulate the large data warehouse queries we see in the real world. So far I've been able to do most of what I need with only the two tables, but I hope to add more to the mix soon (for example, I've been working on a bigger version of Sales.CurrencyRate to help illustrate some SQLCLR techniques).
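
    The attached script is the real deal, but the general pattern behind it is nothing more exotic than multiplying the AdventureWorks rows with a cross join. A rough sketch (the actual script's columns, row counts, and indexes differ):

    SELECT
        ROW_NUMBER() OVER (ORDER BY x.n, th.TransactionID) AS TransactionID,
        th.ProductID,
        th.TransactionDate,
        th.Quantity,
        th.ActualCost
    INTO dbo.bigTransactionHistory
    FROM Production.TransactionHistory AS th
    CROSS JOIN
    (
        -- 25x the rows; adjust to taste
        SELECT TOP (25)
            ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM sys.all_objects
    ) AS x;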

    The current bigAdventure script is attached to this post. I meant to include it in the demo download for my PASS session, also posted today, but forgot to put it into the ZIP file. I thought that someone out there who didn't attend my session might want to use it, so here you are.

    Enjoy!

  • PASS Summit 2011 - Zen and the Art of Workspace Memory - Demos

    What a rush. Standing on the stage in an almost-full 1,000-person room, I (very) momentarily wondered what I'd been thinking when I submitted a 500-level talk for the biggest SQL Server conference in the world. But despite a rough start--my laptop crashed and I had to reboot it two minutes into the talk--I found my rhythm and the entire 90 minutes went by in a flash. I wish I'd been able to take 90 more!

    The scene? PASS Summit 2011. Friday, October 14, 10:15 a.m. (Room 6E, to be exact.) The last day of one of the best PASS Summits I've had the pleasure of attending.

    The topic? A fairly obscure area of SQL Server, called workspace memory. Here's the abstract for the talk:

    Query Tuning Mastery: Zen and the Art of Workspace Memory

    As SQL Server professionals, we often think of memory in vague, instance-level terms: buffer pool, procedure cache, Virtual Address Space, and so on. But certain tasks require a more in-depth focus, and query tuning is one of them. Large, complex queries need memory in which to work--workspace memory--and understanding the how's, when's, and why's of this memory can help you create queries that run in seconds rather than minutes. This session will teach you how to guide the query processor to grant enough memory for top performance, while also keeping things balanced for the sake of concurrency. You will learn advanced monitoring techniques, expert-level application of specialized query hints, and the memory internals needed to put it all together. If you work with large queries and are serious about achieving scalability and consistently great performance, you owe it to yourself to attend this session.

    If you were in the audience, I thank you for choosing my session over the many others that were running concurrently. I had a great time, and I hope you did too.

    The demos for the talk are attached to this post. Apologies, but I am not sharing the deck at this time as I'm going to be integrating it into a larger course that I hope to start delivering next year. (Through Data Education, naturally!)

    Enjoy! And as always, let me know in the comments if you have any questions.

  • SQL Saturday 89 - Atlanta GA - Materials

    Last Saturday, September 17, I was lucky to be able to present two sessions at an excellent SQL Saturday in the Atlanta area. The day drew a large crowd and had a great speaker lineup. All in all, a huge success, and a very well-managed event. Congratulations to the organizers!

    One of the highlights for me, aside from speaking, was helping out with logistics the night before and creating a cocktail for the event. Check out Audrey Hammonds's blog for details.

    My two sessions were:

    "SQL Server Parallelism and Performance"

    Over the past five years, multi-core processors have made the jump from semi-obscure to commonplace in the data center. Today we regularly expect to see 16, 32, or 64 cores in even our lower-end servers. Are you getting everything you can out of the wealth of processing power at your disposal? Attend this session to take a detailed look at how and why SQL Server processes queries in parallel, as well as methods for controlling parallel processing via configuration options, the Resource Governor, and query-level hints. This session will enable you to immediately evaluate, understand, and improve the state of parallel processing on your servers.

    and

     "15 Powerful SQL Server Dynamic Management Objects"

    The Dynamic Management Objects--a set of views and functions that first shipped with SQL Server 2005--are a window into the inner workings of your SQL Server instance. Locked within is the data you need to help solve virtually any performance problem, quickly debug issues, and gain insight into what's actually happening on your server, right now. This session is a fast-paced tour of the ins, outs, whys, hows, and even pitfalls of 15 of the most important views and functions--information gleaned from heavy use of the objects in a number of environments. You will learn how to understand transaction behavior, locking, wait statistics, sessions, requests, and much more. Attend this session and you will be the master of your SQL Server instance.

    I promised to share the slides and demos from the second session, and those are attached to this post.


    Many thanks to everyone who attended my sessions and for all of the positive feedback I received! (I would also thank anyone who gave me negative feedback, but in this case the worst I received was that I had a typo on one of my slides.) I had a great time, and hope to return to Atlanta again sometime soon.

  • Where Can You Find Me the Rest of This Year?

    Autumn is creeping inevitably closer here in the US, and that means that speaking season is about to kick into high gear. Here's my current schedule for the remainder of the year:


    September 8, 17:00 GMT (online) - 24 Hours of PASS webcast: "Baseline Basics or: Who Broke the Database?"

    In this session, excerpted from my PASS Summit precon, I'll explain the whys and hows of using baselines to assist with performance tuning. If you find yourself more often than not tuning reactively rather than proactively, this session is for you. This is a free webcast, so why not join in?


    September 8, 18:30 EDT (Waltham, MA) - New England SQL Server Users Group: "Windowing Functions in SQL Server 2008, Denali, and Beyond"

    Two in one day! This session will kick off the 2011/2012 New England SQL Server season, and will cover what are (in my ever-so-humble opinion) the most important T-SQL enhancements in the past three versions of SQL Server: windowing function enhancements. I'll discuss what's there in today's shipping versions, and the new and incredibly powerful functionality that Denali brings to the table. If you're in the Boston area, don't miss it!
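
    If you'd like a preview of the before-and-after story, it looks something like this (a sketch against AdventureWorks; the Denali syntax is as of the current CTP):

    -- Shipping today (SQL Server 2005/2008): ranking with OVER
    SELECT
        AccountNumber,
        TotalDue,
        ROW_NUMBER() OVER (ORDER BY TotalDue DESC) AS row_num
    FROM Sales.SalesOrderHeader;

    -- Denali: ordered window frames and offset functions
    SELECT
        AccountNumber,
        OrderDate,
        TotalDue,
        SUM(TotalDue) OVER
        (
            PARTITION BY AccountNumber
            ORDER BY OrderDate
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
        ) AS running_total,
        LAG(TotalDue) OVER
        (
            PARTITION BY AccountNumber
            ORDER BY OrderDate
        ) AS previous_total_due
    FROM Sales.SalesOrderHeader;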


    September 17 (Atlanta, GA) - SQL Saturday #89: (Two Talks)

    After the great time I had at this year's TechEd show in Atlanta I could hardly wait to get back. Luckily, the fantastic SQL Server community in the Atlanta area scheduled this event, which gave me the perfect excuse to pack my bags for a return trip. The speaker lineup for this event is rock-solid, and I'll be contributing with two talks: my introductory dive into SQL Server parallelism, and an overview of my 15 favorite activity monitoring dynamic management objects. This should be a great event by anyone's standards, and it's free, so if you live nearby you have literally no excuse not to attend.


    October 11,  08:30 PDT (Seattle, WA) - PASS Community Summit Pre-Conference Seminar: No More Guessing! An Enlightened Approach to Performance Troubleshooting

    No more guessing! It's not just a catchphrase; it's a way of life. When faced with performance problems we have a choice: we can either run around panicking, wasting everyone's time (including our own), or we can use the huge amount of information at our disposal to figure out what's actually wrong and fix it. As the calm and collected type, I prefer the second option, and so should you. If you'll be attending the PASS conference, join me on Tuesday to learn how you, too, can quickly and accurately pinpoint the root cause of your performance issues.


    October 12-14 (Seattle, WA) - PASS Community Summit Spotlight Session: Query Tuning Mastery: Zen and the Art of Workspace Memory

    Your query is running, and it needs to sort some data. Or to hash some data. Or to perform a parallel operation. These things take memory, and as any SQL Server professional knows, in the world of SQL Server memory is worth much, much more than its weight in gold (even given today's hugely inflated prices). Attend this session to learn the ins and outs of workspace memory: what it is, why it's needed, where the memory comes from, and most importantly, how to control it to make certain queries faster and keep other queries from waiting as long. Workspace memory tuning is a mostly untapped performance opportunity that many DBAs can benefit heavily from learning to leverage.


     November 1-3 (Las Vegas, NV) - SQL Server Connections: (Three Talks)

    My final speaking engagement of the year will be at the always-fun SQL Server Connections show in Vegas. (It's in Vegas! How could it not be fun?!) I'll be doing three talks during the course of the show: an introductory talk on my favorite topic of the past couple of years, parallelism in SQL Server; a more advanced parallelism talk that builds on it; and a talk on the various dynamic management objects that can be used in the quest for ultimate SQL Server performance. Save a spot for me at the poker table!


    I'm really looking forward to these events. If you're going to be there let me know in the comments, and/or feel free to find me at any of the shows and say hi. (Buying me a drink or two wouldn't hurt either.) See you there!

  • Data Education: Food For Your Brain

    I've written here before about Data Education, the training company I recently launched, building off my previous company, Boston SQL Training. Things have moved along quickly, and the rest of the Data Education staff and I are very happy to announce a jam-packed fall and winter course roster. We'll be announcing even more classes later in the year, but I wanted to give you a run-down of what we're currently offering. (Note that several of these are currently in the Early Bird stage, which means steep discounts are already available. For any that aren't, I'd like to offer SQLBlog readers a nice shiny $400 discount. Just enter the code SQLBLOG when registering.)


    Without further ado:


    The BI Master: TEO LACHEV

    Teo will be in Boston next month (September 19-23) sharing his impressive and encyclopedic knowledge of all things SSAS and PowerPivot. Considering he's literally written the books on the subject, this is one BI class you don't want to miss. (Class to include discussion and labs on OLAP, data mining, PowerPivot apps, data analysis, and best practices, plus much more.)


    The Entity Framework Guru: JULIE LERMAN

    This is one boot camp you can't afford to miss, plus world-renowned .NET expert Julie Lerman won't make you do push-ups. (I don't think.) Instead, she'll spend five days (October 3-7) telling you everything you need to know about Entity Framework 4.1, including Code First. This one will be somewhat intensive and very, very thorough. (Class to include discussion of Code First A-Z, EF 4.1 DbContext/DbSet, EF in the cloud, EF performance tricks, plus much more.)


    The PowerShell Virtuoso: ALLEN WHITE

    You'd have to live under a rock to not know that PowerShell is the thing to know these days. We're bringing in Allen White for a three-day course (November 14-16) to teach you Everything You Ever Needed to Know About PowerShell in a SQL Server World: writing scripts, managing scripts, and managing your environment. No knowledge of PowerShell required! (Class to include discussion and labs on SMO, backup and recovery, automating, and importing/exporting CSV and XML files, plus much more.)


    The T-SQL Super Star: ITZIK BEN-GAN

    Yep, we got him: Itzik Ben-Gan is coming to the East Coast (January 23-27) to present his wildly popular Advanced T-SQL Querying, Programming, and Tuning class. Itzik will talk logical query processing. He'll talk islands and gaps. He'll talk nested iterations using recursive queries. And then he’ll talk about so, so much more. There's a reason he sells out classes and fills rooms at conferences, you know. (Class to include discussion and labs on query tuning, aggregating and pivoting data, TOP and APPLY, programmable objects, plus much more.)


    Also, I wanted to point out a few other things we're doing at Data Education, all of which I think my fellow SQL fanatics might appreciate:

    • Twitter: We have a very active Twitter stream going (@dataeducation). Every weekday, we tweet the best in database blogs and articles. And sometimes we throw in a Chuck Norris joke. Just because.
    • Question of the Week: In light of the fact that a Friday is made even better by a $25 Amazon gift card, we've launched our new Question of the Week feature. Each Monday, we post a brainteaser. You get until Thursday at midnight to prove your riddle know-how. The questions have been fairly easy so far. Prepare for much trickier challenges. (Yes, that's a threat!)


    Enjoy, and we'll see you in class!
  • A Computer Scientist Meets T-SQL

    There I was. A freshly printed bachelor's degree in Computer Science tucked under my arm, I walked into my First Real Job. I'd never touched the technologies I was going to work with--ASP 3.0 and SQL Server--but my employer knew that, and I figured I'd be able to pick things up relatively quickly. After all, I'd been programming since the 2nd grade, knew a number of languages (more or less), and had all of the academic background I'd ever need.

    I was given a quick tour of the office, oriented by HR, and met the rest of my group. As a member of the small IT team, my coworkers included an Exchange administrator and a couple of help desk support/networking guys. No other developers. I was given a cubicle (shared with a finance person) and told to start creating the company's intranet. So, I did. And with no one to learn from or bounce ideas off of, I made up my own rules.

    One of the first things I was asked to create was a threaded message system so that the remote marketing and sales teams could converse with one another. I got to work on a basic (and very ugly) user interface, and then tackled the database side of the equation.

    I'd never encountered a real live database before, and scarcely even knew what one was, but I took a day or two to read "Teach Yourself SQL in 24 Hours" and figured I was good to go. Somehow I recognized the need for a self-referencing table. I created it, populated it with some test data, and quickly discovered that the simple test queries I'd written to date weren't going to work. How would I navigate the hierarchy and display the messages in a threaded fashion?
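
    The table itself looked roughly like this (a reconstruction from memory, in modern syntax; the original is mercifully lost):

    CREATE TABLE dbo.Messages
    (
        MessageID INT NOT NULL PRIMARY KEY,
        -- NULL parent = top-level message; this is the self-reference
        ParentMessageID INT NULL REFERENCES dbo.Messages (MessageID),
        Subject NVARCHAR(255) NOT NULL,
        MessageText NVARCHAR(MAX) NOT NULL
    );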

    After messing around for quite some time and applying all of my Computer Science prowess, I came up with an algorithm:

    1. Using a cursor, start iterating the rows in the table.
    2. Insert "parent" rows into a temp table.
    3. For each "parent" row, recurse, calling the same stored procedure again and starting at step 1.
    4. Finally, return all collected results.

    With its nested cursors and various work being done in temp tables, this algorithm scaled so very well that shortly after rolling the message board out to production I got to work on implementing a cache so that users wouldn't have to wait several seconds when making a web request.

    Clearly, I'd done something very wrong. And I knew it. I just didn't know enough to know how to find the right answer.

    Much later I learned the correct term for what I'd created--an adjacency list hierarchy--and I learned about other methods of modeling hierarchies, including materialized paths and nested sets. I learned that many of the lessons I'd been taught in school--where the curriculum was heavily biased toward procedural languages--didn't apply well to SQL databases.
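
    For what it's worth, the answer that eluded me finally shipped with SQL Server 2005: given an adjacency list like the Messages sketch above, a recursive CTE walks the entire hierarchy in a single statement--no cursors, no temp tables. A sketch:

    WITH Thread AS
    (
        -- Anchor: the top-level messages
        SELECT m.MessageID, m.ParentMessageID, m.Subject, 0 AS Depth
        FROM dbo.Messages AS m
        WHERE m.ParentMessageID IS NULL

        UNION ALL

        -- Recursive step: each level of replies
        SELECT m.MessageID, m.ParentMessageID, m.Subject, t.Depth + 1
        FROM dbo.Messages AS m
        INNER JOIN Thread AS t ON m.ParentMessageID = t.MessageID
    )
    SELECT REPLICATE('    ', Depth) + Subject AS ThreadedSubject
    FROM Thread;
    -- True threaded display order needs a path column (e.g., a materialized path)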

    And most importantly I learned how to ask (and answer!) questions on online forums. Being a team of one doesn't mean that you need to work in isolation. There is a huge community of people online who want to help you succeed. Finding these forums (and newsgroups. RIP, NNTP!) was a revelation. The ability to talk shop with people who understood what I was trying to do and how best to do it was invaluable to my learning how to be a better developer and not just a student of Computer Science.

    Many years later, I still get stuck on difficult problems, but these days I don't try to do everything in isolation. I know better than that. I reach out to my network and take advantage of some of the great minds I'm lucky enough to have access to. And you can, too. Next time you find yourself with a less than ideal solution, swallow your pride and ask for help.

    And don't forget to help someone else in return. As much as you'll learn from the people answering your questions, you'll get even more value from puzzling over the numerous problems that other people face on a day-to-day basis. Solving as many problems as you can--your own and those of others--is in my opinion the fastest way to truly master a given technology.

    Enjoy your journey--and always remember that you don't have to go it alone.

  • T-SQL Tuesday #21 - A Day Late and Totally Full of It

    “This ugly hack is only temporary,” you think. Six months later, a coworker curses your name, sacrificing a chicken to any deity that will help expedite your getting struck down by lightning, a school bus, or both.

    Crap code. We’ve all seen it. We’ve all created it. We’re all guilty. Yes, even you. Sometimes our crap is purposeful—the ugly, “temporary” hack. Sometimes we produce crap because we simply don’t know any better. But there is no excuse good enough. As professionals, we must strive to rid ourselves of bad habits. And the only way to learn the difference is to see lots, and lots, and lots of examples.

    That’s where you come in. This month’s T-SQL Tuesday challenge: reveal your crap to the world. Why is (or was) it crap? Why did you do it? And how did you learn from your mistake?

    I’m posting a day late, and the subject matter may require a bit of thought, so I’m slightly changing the rules—temporarily, of course.


    Here’s what you need to do.

    • Write a blog post talking about your crap. This doesn’t mean that your post should be crap. Two wrongs, in this case, do not make a right!
    • Your post must go live some time between 00:00:00.00 GMT and 23:59:59.99 GMT on Wednesday, August 10, 2011.
    • Your post must contain the T-SQL Tuesday (or Wednesday) logo from above, and the image should link back to this blog post.
    • Make sure that a link appears in the comments section below, either via a trackback or a manual comment. I’ll take all of the posts that are reported and write a round-up.

    Even better…

    • Include a reference to T-SQL Tuesday (or Wednesday) in the title of your post
    • Tweet about your post using the hash tag #TSQL2sDay
    • Consider hosting one of these. Ask me how.

    Enjoy! Here’s hoping for a truly cathartic blog event.

  • Performance, Discounts, and an Excuse to Visit New York City

    A couple of weeks ago I announced a two-day advanced performance seminar in New York City, which will be delivered in July. This seminar will cover SQLCLR and parallelism techniques to help you take performance well beyond the levels that typical tuning exercises yield. Check out the links for more details, including a full outline.

    Thanks to the great response so far, we have decided to extend the early registration discount for a few more days. You have until the end of the day tomorrow, June 3, to take advantage of the $100 savings off the $1050 course fee. To get the discount, use the code EARLYBIRD on the registration page.

    Finally, I would like to point out that one of the main reasons that I chose New York City for this course is that it is by far my favorite city to visit. I just noticed yesterday that a bunch of hotels are offering a free third night this summer--so I hope that you'll be able to come for the course and then stay an extra night or two to enjoy the city. Note that we also have a hotel discount available to course attendees, unrelated to the third night offer. Let me know if you would like more information on that.

    Hope to see you in New York next month!

  • Two Days of Advanced Performance Techniques - July 14-15, New York City

    I am pleased to announce that I will be delivering two days of training in New York City, July 14 and 15.

    This seminar focuses on achieving "next-level" performance--going beyond that which you can gain via normal tuning methodologies. The vehicles for this performance improvement are two technologies that I've been pushing on this blog and in other venues for a long time: SQLCLR and parallelism. The seminar will be based on the in-depth materials that I used for my full-day sessions at the PASS conferences in 2009 and 2010. These seminars were both quite well-received, but I have tweaked and tuned the content to make it even better and more focused on the bottom-line goal of achieving maximum performance.

    Full information on the seminar is available on the Data Education web site. There is also an early bird discount currently in effect. Use the discount code "EARLYBIRD" to save $100 on the $1050 registration fee.


    I would like to take this opportunity to mention that Data Education is a new training venture that I've recently launched. This will be the company's second public training event (our first featured Kalen Delaney in the Boston area). The company is an evolution of Boston SQL Training, a company that I started a couple of years ago with the goal of bringing extremely high-quality SQL Server training events to the Boston area. The new name, Data Education, reflects our desire to focus beyond Boston and on a broader technology spectrum. We plan to eventually move into training on data-related programming (Entity Framework and similar), other DBMS platforms, NoSQL technologies, and wherever else the database industry moves.

    Currently, aside from my course in New York we've announced an Analysis Services and PowerPivot course featuring Teo Lachev, which will take place in the Boston area September 19-23. Several other courses will be announced shortly, so stay tuned and consider following us on Twitter (@DataEducation).

    If you've read this far, I would greatly appreciate your taking part in a quick and informal poll: in the comments section below, please let me know what geographic location would be interesting to you for an advanced SQL Server course, and what topic areas you're not seeing enough of.


    Thanks, everyone, and I'm looking forward to seeing you in New York!

  • TechEd 2011 - Performance Tuning and Optimization in SQL Server 2008 R2 and SQL Server Code Named "Denali"

    Thanks to everyone who took the time out of their conference experience to join Mike Wachal and me for yesterday's session on SQL Server performance tuning!

    For those who weren't there, we focused in on troubleshooting techniques, highlighting some of the key DMVs and new Extended Events features that will help with proactive diagnosis of problems. My section, in particular, was a bit of a taste of some of the sessions I've submitted for this fall's PASS summit: I showed a brief demo of troubleshooting using my No More Guessing methodology, and applied it to the problem of workspace memory contention. The sessions for PASS cover these topics in detail, so if you find this area interesting please vote here.
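
    If you want to try the workspace memory piece before opening the attachment, my usual first check (a starting point only, not the whole methodology) is to look for requests that are queued up waiting for a memory grant:

    -- Requests with no grant_time are waiting for workspace memory
    SELECT
        session_id,
        requested_memory_kb,
        granted_memory_kb,
        wait_time_ms
    FROM sys.dm_exec_query_memory_grants
    WHERE grant_time IS NULL;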

    The demo script I showed yesterday is attached, along with a copy of the latest versions of my Who is Active stored procedure and the SQLQueryStress tool. Feel free to leave a comment below or drop me a line via e-mail if you have any questions.

  • Twenty Nine Days of Activity Monitoring (A Month of Activity Monitoring, Part 30 of 30)

    This post is part 30 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    Pop quiz: What happens when you promise to write 30 posts on a topic, but you don’t plan properly and lose a bit of steam toward the end?

    Answer: A recap post, a day or two early! Woo hoo!


    The month started with a few introductory posts. A history of monitoring (from my point of view), some information on Who is Active’s design philosophy, and a few words on the Who is Active license.

    The key takeaways from these first posts? The monitoring story has been pretty poor for most of SQL Server’s history. It’s better now, but it’s still a lot of work. Who is Active was created to make things more accessible. It’s free, with a few restrictions designed to keep people who make money from monitoring from making money on Who is Active. Anyone else can and should use and enjoy.


    Next I got new users up to speed. My post on installation covered security considerations, followed by a post on why you may not see as much data as you're used to when you first start using Who is Active. I continued with a post on the various options supported by the procedure (plus a nod to the online help mode) and another post describing the default output columns.

    One of the more important posts of the month described how Who is Active treats active requests and sleeping sessions. This can be a confusing area for new users, since it's not always obvious what's going on unless you're looking in the right place at the output (the [status] column).


    Once the refresher was finished I began covering some of the basic options exposed by the procedure. Filters were one of the first things I implemented, so it was a logical place to start (the "not" filters came much later). Seeing the query text is a key part of the procedure's functionality, and that was next on my list. If you're not sure what the [sql_text] column means, this post will set the record straight.

    The ability to see the query text is nice, but so is the ability to get a query plan--and of course Who is Active supports that too. And since everything a query does is transactional, the procedure allows users to collect information about what those transactions are up to.
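
    Pulling those options together, a typical call looks something like this (the parameter values are examples only):

    EXEC dbo.sp_WhoIsActive
        @filter = 'AdventureWorks',  -- only activity in this database
        @filter_type = 'database',
        @get_plans = 1,              -- include a query plan per request
        @get_transaction_info = 1;   -- include transaction details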


    The next section of the month was all about query processing. I started with a couple of background posts: One on how the query processor works (at a somewhat high level), and another on what blocking really means.

    Who is Active exposes two different modes that help with analysis of real-time waits. My 15th post of the month covered the default--lightweight--collection mode. My followup post covered the more extensive full waits collection mode.
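
    The two modes are selected with a single parameter:

    -- Default, lightweight collection mode
    EXEC dbo.sp_WhoIsActive @get_task_info = 1;

    -- Full waits collection mode: every wait for every task
    EXEC dbo.sp_WhoIsActive @get_task_info = 2;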


    Sometimes a query plan and wait information isn't quite enough to diagnose a performance issue. What if the query plan usually works well, but isn't performing properly only in this specific case? Who is Active has a feature to help you figure that out. And what if you need a bit more information on some of the settings that the request is using?


    Mining wait information yields some amazing returns. One of the additional pieces of information that you can get is the actual name of the object that's causing a block. Another thing you can see is the exact node within a query plan where a task is currently doing work. You can (and should!) also use waits to figure out whether tempdb is properly configured. There are still more tricks you can play with waits, but they'll have to wait for a future post.


    Like wait information, lock data can also be mined. However, there is so much lock information that the real challenge is rendering it in a human-readable manner. Who is Active does that using a special custom XML format. The procedure also helps with another type of blocker analysis, in the form of finding the "lead blocker."
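
    In sp_WhoIsActive terms, that combination looks like this (sort however you like):

    EXEC dbo.sp_WhoIsActive
        @get_locks = 1,           -- locks rendered in the custom XML format
        @find_block_leaders = 1,  -- populate [blocked_session_count]
        @sort_order = '[blocked_session_count] DESC';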


    Once you've figured out what information you're interested in, why not set up the output so that you can see the information the way you want to see it? Who is Active helps with this by allowing users to dynamically customize output columns and sort order in a few different ways. And while you could always run Who is Active interactively, that might get a bit dull. Automated data collection is a much nicer methodology in many cases.
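
    The collection pattern, in brief (the table name here is yours to choose):

    -- Ask the procedure for its own output schema and create the table...
    DECLARE @schema VARCHAR(MAX);
    EXEC dbo.sp_WhoIsActive
        @get_plans = 1,
        @return_schema = 1,
        @schema = @schema OUTPUT;

    SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActive_Log');
    EXEC (@schema);

    -- ...then collect into it, e.g., from a SQL Agent job. The options
    -- must match those used when generating the schema.
    EXEC dbo.sp_WhoIsActive
        @get_plans = 1,
        @destination_table = 'dbo.WhoIsActive_Log';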


    Most of the numbers shown by Who is Active are cumulative. But oftentimes it's more interesting to compare data between two snapshots. The procedure can help you do that, using its delta mode.
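
    One parameter does the trick:

    -- Snapshot, wait ten seconds, then return per-interval numbers
    EXEC dbo.sp_WhoIsActive @delta_interval = 10;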


    What fun would a month of Who is Active be without an official release? v11.00 has numerous new features, several of which were perfected this month thanks to feedback I received on the various posts in the series. It's great to have such enthusiastic users! Some of these new features didn't make it into prior posts, and other existing features are a bit hidden. So I did a writeup covering the more important things that you may not have noticed while working with Who is Active.


    I finished the month with a discussion on security for slightly tougher situations. I hope that the module signing technique will allow Who is Active to be used in a number of places where security auditing requirements have made things difficult.


    And that's that. A month of activity monitoring with Who is Active. Thank you for reading! Next, this text is going to be edited, expanded in places, and put into a much more comprehensive form. Watch your RSS reader for more information.

    Enjoy!

  • Access for All! (A Month of Activity Monitoring, Part 29 of 30)

    This post is part 29 of a 30-part series about the Who is Active stored procedure. A new post will run each day during the month of April, 2011. After April all of these posts will be edited and combined into a single document to become the basis of the Who is Active documentation.


    Early in the month I discussed basic security requirements for Who is Active. I mentioned the VIEW SERVER STATE permission and the fact that I consider it to be “a relatively low barrier to entry.”

    But what if, in your organization, it’s not? Auditing requirements being what they are, you might be required to lock things down. And granting someone full and unrestricted VIEW SERVER STATE may simply not be an option.

    Enter module signing. By securing Who is Active (or any other securable, for that matter) via inherited permissions, it’s often possible to get around auditing requirements, as long as the module itself has been reviewed. This is not at all a difficult thing to do, but in my experience most DBAs haven’t played much with signed modules. Today I’ll show you how quick and easy it can be to set things up.

    Start by creating a certificate.

    USE master
    GO

    CREATE CERTIFICATE WhoIsActive_Permissions
    ENCRYPTION BY PASSWORD = '1bigHUGEpwd4WhoIsActive!'
    WITH SUBJECT = 'Who is Active',
    EXPIRY_DATE = '9999-12-31'
    GO

    Once you have a certificate in place, you can create a login from the certificate. The goal is to grant permissions, and to do that you need a principal with which to work; a certificate does not count. A login based on the certificate uses the certificate’s cryptographic thumbprint as its identifier. These logins are sometimes referred to as “loginless logins,” but I refer to them as “proxy logins” since that’s what they’re used for: proxies for the sake of granting permissions.

    CREATE LOGIN WhoIsActive_Login
    FROM CERTIFICATE WhoIsActive_Permissions
    GO

    The login can be granted any permission that can be granted to a normal login. For example, VIEW SERVER STATE:

    GRANT VIEW SERVER STATE
    TO WhoIsActive_Login
    GO

    Once the permission has been granted, the certificate can be used to sign the module—in this case, Who is Active. When the procedure is executed, a check will be made to find associated signatures. The thumbprint of the certificates and/or keys used to sign the module will be checked for associated logins, and any permissions granted to the logins will be available within the scope of the module—meaning that the caller will temporarily gain access.

    ADD SIGNATURE TO sp_WhoIsActive
    BY CERTIFICATE WhoIsActive_Permissions
    WITH PASSWORD = '1bigHUGEpwd4WhoIsActive!'
    GO

    Getting to this step will be enough to allow anyone with EXECUTE permission on Who is Active to exercise most of its functionality. There are a couple of notes and caveats, however.

    First of all, every time you ALTER the procedure (such as when upgrading to a new version), the signature will be dropped and the procedure will have to be re-signed. You won’t have to create the certificate or the login again; you’ll just have to re-run that final ADD SIGNATURE statement.

    Second, note the word “most”: certain features, such as blocked object resolution mode, won’t operate properly, depending on whether the caller has access to the database in which the block is occurring. This may or may not be a problem—it depends on your environment and what users need to see—and Who is Active itself won’t throw an exception. An error message will be returned somewhere in the results, depending on what the user has tried to do.
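
    By the way, if you're ever unsure whether the signature is still in place (say, after an upgrade), a quick catalog query will tell you. This is my own sanity check, not something Who is Active does for you:

    USE master
    GO

    SELECT
        OBJECT_NAME(cp.major_id) AS signed_module,
        c.name AS certificate_name
    FROM sys.crypt_properties AS cp
    INNER JOIN sys.certificates AS c ON
        cp.thumbprint = c.thumbprint
    WHERE cp.major_id = OBJECT_ID('dbo.sp_WhoIsActive');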

    If you would like to grant database-level permissions based on the certificate login so as to avoid these errors, that's doable too. Just do something like:

    USE AdventureWorks
    GO

    CREATE USER WhoIsActive_User
    FOR LOGIN WhoIsActive_Login
    GO

    EXEC sp_addrolemember
        'db_datareader',
        'WhoIsActive_User'
    GO

    This will allow Who is Active to figure out what the various blocked or locked object names are. Since the login is just a proxy, no one can actually log in with it and get direct access to read the data, so this isn't something I consider to be a security risk. However, keep in mind that if anyone has the password for the certificate and sufficient privileges in master, a new module could be created and signed. Keep the password secure, and make sure to audit carefully to catch any infractions before they become a risk.

    Security policy should never be a reason to limit your monitoring choices. Module signing is a powerful tool for Who is Active in addition to many other applications. I highly recommend studying it in detail in order to enhance your ability to provide high-quality, totally flexible, and completely secure solutions.
