Yeah, yeah, yeah, let's get this out of the way right from the start:
Don't concatenate rows into delimited strings in SQL Server. Do it
in the application instead. Except if you really have to create delimited strings in SQL Server, in which case you should read on.
There was a little discussion on SQLTeam
about the best way to concatenate rows into a delimited string. I recommended a scalar UDF solution,
whereas Rob Volk recommended a solution involving a temp table.
I mentioned my dislike for the temp table solution for a couple
of reasons. First of all, it relies on a clustered index for ordering.
That will probably work in this example, but is not guaranteed to
always work and relying on indexes rather than ORDER BY for ordering is
definitely not a habit I want anyone to get into. The clustered index
as it was described in Rob's example also has another problem that I
didn't even notice until I was writing this entry. But I'll get to that
in a moment. The second reason I dislike the temp table is that I felt
it would be less efficient than the scalar UDF.
Rob didn't agree about the efficiency. And so I set out to prove him wrong...
We'll use the Authors table in Pubs. I want a comma-delimited list, per state, of the last name of each author who lives there.
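For example, the shape of the output I'm after looks like this (the names and states shown here are placeholders for illustration, not actual pubs rows):

```
State  Authors
-----  ------------------------------
CA     Adams, Baker, Clark, ...
KS     Jones
```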
First, the scalar UDF:
CREATE FUNCTION dbo.ConcatAuthors(@State CHAR(2))
RETURNS VARCHAR(8000)
AS
BEGIN
    DECLARE @Output VARCHAR(8000)
    SET @Output = ''
    SELECT @Output = CASE @Output
        WHEN '' THEN au_lname
        ELSE @Output + ', ' + au_lname
        END
    FROM Authors
    WHERE State = @State
    ORDER BY au_lname
    RETURN @Output
END
To find the list I want:
SELECT DISTINCT State, dbo.ConcatAuthors(State)
FROM Authors
ORDER BY State
... And the adaptation of Rob's temp table method... I did change
two things due to problems I discovered during testing. One, I've
altered the au_lname column to VARCHAR(8000); the column in the Authors
table is VARCHAR(40), not large enough for all of the California
authors. What if we were dealing with a much larger dataset? Second, I
added an IDENTITY column, and I'm clustering on that instead of the
actual data to get the ordering. I'm doing so because of the
VARCHAR(8000). Index rows can be a maximum of 900 bytes, so if we had
enough data to exceed that length, this method would fail.
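The 900-byte limit is easy to demonstrate; a minimal sketch (the table name is made up, and the behavior described is that of SQL Server versions with the 900-byte index key cap):

```sql
-- Creating an index on a key column that *can* exceed 900 bytes
-- only draws a warning...
CREATE TABLE #WideKey (Val VARCHAR(8000) NOT NULL PRIMARY KEY)

-- ...but inserting a row whose key actually exceeds 900 bytes fails
INSERT #WideKey VALUES (REPLICATE('x', 901))
```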
CREATE TABLE #AuthorConcat
(
    State CHAR(2) NOT NULL,
    au_lname VARCHAR(8000) NOT NULL,
    Ident INT IDENTITY(1,1) NOT NULL PRIMARY KEY
)

INSERT #AuthorConcat (State, au_lname)
SELECT State, au_lname
FROM Authors
ORDER BY State, au_lname

DECLARE @Authors VARCHAR(8000)
SET @Authors = ''
DECLARE @State CHAR(2)
SET @State = ''

UPDATE #AuthorConcat
SET @Authors = au_lname = CASE
        WHEN @State = State THEN @Authors + ', ' + au_lname
        ELSE au_lname END,
    @State = State

SELECT State, MAX(au_lname)
FROM #AuthorConcat
GROUP BY State

DROP TABLE #AuthorConcat
Clever, but more complex and harder to read than the scalar UDF
version. Output is identical, but that's not why we're here. Which one
is more efficient?
Results were tabulated using STATISTICS IO, STATISTICS TIME, and
Query Analyzer's Show Execution Plan. DBCC DROPCLEANBUFFERS and DBCC
FREEPROCCACHE were run before each test.
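For reference, each timed run looked roughly like this (a sketch reconstructed from the tools named above, not the exact batches used):

```sql
-- Start each test from a cold buffer and plan cache
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO

-- The method under test, e.g. the scalar UDF version:
SELECT DISTINCT State, dbo.ConcatAuthors(State)
FROM Authors
ORDER BY State
GO
```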
Scalar UDF Method
Total cost: 0.0492
Total Scan count: 1
Total Logical reads: 2
Total Physical reads: 2
Total time: 25 ms
Temp Table Method
Total cost: 0.2131
Total Scan count: 4
Total Logical reads: 9
Total Physical reads: 2
Total time: 88 ms
So in conclusion, neither method is incredibly taxing with the
tiny Pubs dataset, but I think I have proven that the UDF is far more efficient.
Update, February 28, 2005: Modified the adaptation of Rob Volk's
method to use a CREATE TABLE instead of SELECT INTO, as the latter is
not necessarily guaranteed to insert rows in the right order for the
sake of this example. Thanks to "PW" on SQLServerCentral for pointing
this problem out. Note that this changed the total costs very slightly
-- for the better -- but the UDF still performs better by quite a large margin.