My office is a cross-platform shop with both Oracle 10g and SQL Server 2000/5/8. Lately we've started doing some cross-training, so that all the DBAs, in theory at least, could function on either platform. The Oracle guys, I'm afraid, are way out ahead of me in this regard. In my defense, though, I have to manage more than 100 servers to their 5!
Testing the waters with Oracle has been really interesting - I like looking at different technologies for comparison. One issue that we've had great discussion over is, of course, concurrency. I have seen posts and articles by Oracle folks critiquing SQL Server for its historical locking model: a pessimistic design that, by default, locks any data being modified, which Oracle's default behavior does not. This has created a strong perception that SQL Server is just "bad" from a concurrency point of view. The real situation is a little more nuanced, though. Here's the real deal, if I understand it correctly:
- Oracle has basically always used a row versioning system, which allows SELECT operations to proceed against data that is being modified, by keeping the pre-transaction version(s) of each row and serving that up. If you've worked with SQL Server in the last five years, you'll note the ... ahem ... profound similarity to Read Committed Snapshot Isolation, introduced to SQL Server in 2005.
- In theory, the introduction of snapshot isolation ought to make it simpler to port applications between the two platforms, or to write them from scratch for both, because it brings the concurrency models much closer together.
- In practice, though, there are billions of lines of code already written in existing apps, and it's been my experience that ISVs are slow to go back, modify, and then retest their existing code to enable snapshot isolation in SQL Server. We have, for example, one application out of 50 or so in production that can use this feature, after about six years. So if you get into a friendly debate with an Oracle aficionado, some amount of backpedalling goes on: "well, SQL Server can do that too. But I can't switch that feature on, because ..."
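To make the opt-in concrete, here's a minimal sketch (database and table names are hypothetical) of what using snapshot isolation in SQL Server 2005+ actually involves. It's a two-step deal: the DBA flips a per-database switch, and then every application session has to ask for the isolation level explicitly - and that second step is exactly the code change ISVs are slow to make and test:

```sql
-- Step 1 (DBA, once per database): allow versioned transactions.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Step 2 (application code, every session that wants it):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    -- Reads the row versions as of the start of the transaction;
    -- writers modifying dbo.Orders concurrently won't block this.
    SELECT SUM(Amount) FROM dbo.Orders;
COMMIT;
```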
Coming to the point of this post: in the past, using temp tables in SQL Server, when warranted, could get you a concurrency benefit by mimicking the behavior of snapshot isolation. If you had a long series of transformations to perform, instead of holding a lock on the underlying data it might make sense to fetch it out of the source table as rapidly as possible, into a temp table, then perform work on that "snapshot" of the data. Other transactions might run and change the source table, but if your design was right, it didn't matter. It's really a kind of roll-your-own read-committed-snapshot.
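The roll-your-own pattern looks something like this sketch (table and column names are made up for illustration): copy the rows you need into a temp table as fast as possible, then do the slow work against the private copy while the source table stays unlocked:

```sql
-- Grab a point-in-time copy quickly; locks on dbo.Orders are
-- held only for the duration of this one statement.
SELECT OrderID, CustomerID, Amount
INTO #OrderSnapshot
FROM dbo.Orders
WHERE OrderDate >= '20090101';

-- The long-running transformations now touch only the private copy;
-- concurrent writers to dbo.Orders are unaffected.
UPDATE #OrderSnapshot
SET Amount = Amount * 1.1
WHERE CustomerID = 42;
```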
In the past Oracle would essentially handle that scenario as a default behavior, without the need for this workaround. SQL Server can now, too, but here's the rub: the row versioning pattern is well established in the Oracle space, while in the SQL Server ecosystem huge amounts of code and mindset have to be changed to make this new feature real. In SQL Server it’s still an option and not the default. A small difference with big consequences.
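For the record, the closest analogue to Oracle's default is the read-committed-snapshot option itself, which changes what ordinary READ COMMITTED means without any application code changes - but, as above, it's still a switch someone has to throw per database (database name hypothetical):

```sql
-- Turns ordinary READ COMMITTED reads into versioned reads, Oracle-style.
-- No application changes needed, but it must be enabled explicitly,
-- ideally while no other connections are active in the database.
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;
```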
Here's the question for you: Are you changing practices or code to implement snapshot isolation? Retraining developers? What's the worst pain point? Or have you made the switch already?