I've got a third-party app that uses a SQL Server 2005 database. The database has grown to 190GB, with 150GB of that in one table (116 million rows). I've been trying to get the green light to partition this table since June, and I finally got it this weekend.
I wasn't able to start until Friday at noon, and I had enough space on my backup drive to export the table data. (By law I have to keep two years' worth of data in that table for auditing purposes; it has less than a month over that now.) I used bcp in native mode to export the data, and it ran from Friday noon until 9am Sunday morning.
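For reference, the export was a plain bcp out in native mode, something along these lines (the server, database, table, and file names here are placeholders, not the real ones):

bcp AppDb.dbo.AuditTable out E:\Backup\AuditTable.dat -n -S MyServer -T

The -n switch writes the file in native format, -S names the server, and -T uses a trusted connection.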
Once the data was exported I dropped the table. (Until this point all of the tables were in the PRIMARY filegroup, which has three physical files.) Now I have two .ndf files of 80-90GB each that hold only 16-18GB of data; the rest is free space. I need to reclaim that space to allocate the partition files, so I had to shrink the files.
I started the shrink (DBCC SHRINKFILE - sorry Paul [Randal], I know, but I needed the space) at 9am. After 8 hours I decided it was not the correct solution.
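In case anyone's curious, the shrink was basically this, run against each of the oversized files (the logical file names and the target size in MB are illustrative):

DBCC SHRINKFILE (N'AppDb_Data2', 20000);
DBCC SHRINKFILE (N'AppDb_Data3', 20000);

Shrinking a file means moving the remaining pages from the end of the file toward the front before the file can be truncated, which is a big part of why it was still grinding away after 8 hours.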
I then built a new database to temporarily house the data so I could drop and recreate the application database with appropriate file sizes. The copy, using a combination of SELECT INTO for the small tables and SSIS data transfers for the larger ones, took over 11 hours. By then it was after 4:30am and the start of business was closing in fast. I knew I didn't have time to build the new database and get the data back in from the temporary one, so I abandoned the task. The import of the large table was causing enough performance problems that we decided (in a meeting with the business owners) to restore the full database from the backup I took before dropping the large table.
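The small-table copies were just cross-database SELECT INTO statements along these lines (the database and table names are made up for illustration):

USE StagingDb;
GO
SELECT *
INTO dbo.SmallTable
FROM AppDb.dbo.SmallTable;

SELECT INTO creates the target table on the fly and is minimally logged under the SIMPLE or BULK_LOGGED recovery model, which is why it was fine for the small tables; the larger ones went through SSIS instead.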
The problem we have with the database still exists. That table grows by over 8 million rows per month, an increase of as much as 10 percent each month. All of the tables are in the PRIMARY filegroup, so every night I have to back up data that stays constant.
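For what it's worth, the partitioning I've been pushing for is along these lines: a date-based partition function and scheme so the old, static data can live in its own filegroups and stop being part of the nightly backup. The column, filegroup, and boundary values below are hypothetical, not the app's actual schema:

-- Hypothetical sketch: yearly partitions on the audit date column
CREATE PARTITION FUNCTION pfAuditDate (datetime)
AS RANGE RIGHT FOR VALUES ('20080101', '20090101');

CREATE PARTITION SCHEME psAuditDate
AS PARTITION pfAuditDate
TO (FG_Audit2007, FG_Audit2008, FG_AuditCurrent);

-- The recreated table would then be built ON psAuditDate(AuditDate).

With the static partitions sitting in their own (eventually read-only) filegroups, the nightly backups could cover just the filegroups that actually change.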
The real point is that size makes a huge difference in how you solve a problem. Solutions that work well on databases of 1 or 2GB aren't at all appropriate for databases approaching 200GB. I have a couple of ideas on how I could have done some things differently, but of course they're based on the experience I gained attempting this maintenance.
Hindsight isn't 20/20, but it's close.