As I am preparing my talk for the UK SQL Server Users Group on November 16, I have been going through other presentations on plan cache management. At the PASS Conference in Denver in September there was a session on cache management, and I was surprised to find a slide indicating that there is a hard upper limit on plan cache size, and that the limit was actually decreased as of SQL Server 2005 SP2. This is not true. There is no upper limit on plan cache size, other than any upper limit you might have on total SQL Server memory. But usually we don't want to use too much memory for query plans, because it reduces the memory available for data cache. In fact, there is a situation called 'plan cache bloat' in which memory gets filled with very cheap ad hoc plans that are never reused, but take up space that could be used much more effectively for other purposes. It is to reduce this plan cache bloat that the cache management algorithm changed in SP2.
The changes do NOT enforce an upper limit. Instead, SQL Server now computes a plan cache size at which it recognizes that there is memory pressure, and at that point it starts removing plans from cache. The plans removed are the cheap plans that have not been reused, and this is a GOOD THING.
The following explanation is based on a section in Chapter 5 of Inside SQL Server 2005: Query Tuning and Optimization, used by permission of the author. :-)
When discussing memory pressure, we use the term ‘visible’ memory. Visible memory is memory that is directly addressable in the SQL Server process’s virtual address space. On a 32-bit SQL Server instance, the maximum amount of visible memory is either 2 GB or 3 GB, depending on whether you have the /3GB flag set in your boot.ini file. Memory with addresses greater than 2 GB or 3 GB is only available indirectly, through AWE-mapped memory. On a 64-bit SQL Server instance, ‘visible’ memory has no special meaning, as all the memory is directly addressable. In the discussion below, if we refer to visible target memory greater than 3 GB, keep in mind that this is only possible on a 64-bit SQL Server. The term ‘target’ memory refers to the maximum amount of memory that can be committed to the SQL Server process. Target memory is the lesser of the value you have configured for ‘max server memory’ and the total amount of physical memory available to the operating system. ‘Visible target memory’, then, is the visible portion of the target memory. Query plans can only be stored in non-AWE-mapped memory, which is why the concept of visible memory is important. You can see a value for visible memory, specified as a number of 8-KB buffers, when you run the DBCC MEMORYSTATUS command. The section called ‘Buffer Counts’ displays values for both Visible memory and Target memory.
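To make these definitions concrete, here's a small Python sketch. The function names and the GB-based interface are mine for illustration, not anything SQL Server exposes:

```python
# Illustrative sketch of the 'target' and 'visible' memory definitions
# above -- hypothetical helpers, not a SQL Server API.  Sizes are in GB.

def target_memory(max_server_memory_gb, physical_memory_gb):
    """'Target' memory: the lesser of 'max server memory' and the
    physical memory available to the operating system."""
    return min(max_server_memory_gb, physical_memory_gb)

def visible_target_memory(target_gb, is_64bit, has_3gb_switch=False):
    """'Visible' target memory: the directly addressable portion of
    target memory.  On 64-bit, all target memory is visible.  On 32-bit,
    the virtual address space caps it at 2 GB (or 3 GB with the /3GB
    boot.ini switch); anything beyond is reachable only via AWE mapping.
    """
    if is_64bit:
        return target_gb
    cap = 3 if has_3gb_switch else 2
    return min(target_gb, cap)

# A 64-bit instance with 32 GB of RAM and 'max server memory' = 28 GB:
t = target_memory(28, 32)
print(visible_target_memory(t, is_64bit=True))                        # 28
# A 32-bit instance with 8 GB of RAM and the /3GB switch set:
print(visible_target_memory(8, is_64bit=False, has_3gb_switch=True))  # 3
```

Note that on the 32-bit instance, the 5 GB above the 3 GB line can still hold data pages via AWE mapping, but not query plans.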
SQL Server defines a ‘plan cache pressure limit’ value which varies depending on the version you’re running and the amount of visible target memory. The formula for determining the plan cache pressure limit changed in SQL Server 2005, Service Pack 2. The table below shows how to determine the plan cache pressure limit in SQL Server 2000 and 2005, and indicates the change in Service Pack 2, which reduced the pressure limit for machines with larger amounts of memory. Be aware that these formulas are subject to change again in future service packs.

SQL Server 2000: 4 GB upper cap on the plan cache
SQL Server 2005 RTM and SP1: 75% of visible target memory from 0 to 8 GB, plus 50% of visible target memory from 8 GB to 64 GB, plus 25% of visible target memory over 64 GB
SQL Server 2005 SP2: 75% of visible target memory from 0 to 4 GB, plus 10% of visible target memory from 4 GB to 64 GB, plus 5% of visible target memory over 64 GB
As an example, assume we are on SQL Server 2005, Service Pack 1, on a 64-bit SQL Server instance with 28 GB of target memory (all of it visible, since this is 64-bit). The plan cache pressure limit would be 75% of 8 GB plus 50% of the visible target memory over 8 GB (50% of 20 GB), which is 6 GB + 10 GB, or 16 GB.
On SQL Server 2005, Service Pack 2, on the same 64-bit SQL Server instance with 28 GB of target memory, the plan cache pressure limit would be 75% of 4 GB plus 10% of the visible target memory over 4 GB (10% of 24 GB), which is 3 GB + 2.4 GB, or 5.4 GB.
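The two formulas, as worked through in the examples above, can be sketched in Python like this (for visible target memory up to 64 GB; the function names are mine, and sizes are in GB):

```python
# Sketch of the SQL Server 2005 plan cache pressure limit formulas
# described in the text -- not an official API.  Sizes in GB; covers
# visible target memory up to 64 GB.

def pressure_limit_sp1(visible_target_gb):
    """Pre-SP2 (RTM/SP1): 75% of visible target memory up to 8 GB,
    plus 50% of visible target memory above 8 GB."""
    return 0.75 * min(visible_target_gb, 8) + 0.50 * max(visible_target_gb - 8, 0)

def pressure_limit_sp2(visible_target_gb):
    """SP2: 75% of visible target memory up to 4 GB,
    plus 10% of visible target memory above 4 GB."""
    return 0.75 * min(visible_target_gb, 4) + 0.10 * max(visible_target_gb - 4, 0)

# The 28 GB examples from the text:
print(pressure_limit_sp1(28))  # 16.0  (6 + 10)
print(pressure_limit_sp2(28))  # 5.4   (3 + 2.4)
```

Running both functions over a range of memory sizes makes it obvious how much earlier SP2 starts reacting to pressure on machines with lots of RAM.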
There are more details about the different caches and about global vs. local memory pressure that I'm not going to cover here. But to simplify: when SQL Server detects that a cache has reached 75-80% of the plan cache pressure limit, it considers the SQL Server instance to be under memory pressure. When memory pressure is detected, all zero-cost plans are removed from cache, and the cost of all other plans is reduced by half.
For the rest of the details, you can read the Query Tuning and Optimization book. The point I want to make is that there is no hard limit, and having memory pressure kick in is a good thing because it prevents cache bloat. I have actually heard of people not upgrading to SP2 because they thought it would mean less space for plans in cache. Even under memory pressure, plans that are regularly reused will NOT be removed from cache, so all you'll be doing is getting rid of the clutter, leaving more room for the important stuff.