4

I have a database file that's currently 150GB, but only 75GB of it is in use - the other 75GB was freed up when I moved all of the indexes to a new data file. I'd like to reclaim at least part of the space in this data file, but when I attempt to shrink it, the operation "Executes" indefinitely and is eventually cancelled by a network interruption or something else out of my control (after a day of running). Even using the "shrink to specific size" option and asking it to trim off just 10MB never returns - it just sits there until the process is interrupted.

Is there another way that I can reclaim this space, even a little at a time?

EDIT: Somebody posted a link explaining why I shouldn't shrink my database. I understand the argument, and I still want to shrink it anyway. Disk space is at a premium on this server, and the database will not grow back into this unused space for a very long time - as I stated above, I migrated the indexes out of the data file to create this free space in the first place, so it's now simply wasted.

SqlRyan
  • 906
  • 5
  • 13
  • 22
  • I realize you want to reclaim your space, but here is a link to an article about why you should not shrink data files: http://www.sqlskills.com/BLOGS/PAUL/post/Why-you-should-not-shrink-your-data-files.aspx – jl. Oct 11 '10 at 16:33
  • Is the db FULL or SIMPLE recovery mode? How's your log space? What's the load like on the database while you're doing the shrink? – SqlACID Oct 11 '10 at 16:34
  • 1
    @SqlACID: The load is pretty heavy and pretty constant, around the clock - I suspect this is a large part of my issue. It's in SIMPLE recovery mode, and the transaction log is under 1GB, so that's not a concern. @jl: I understand the argument, and I'm not looking to completely shrink the file down to the minimum - I'm only looking to reclaim some of the 75GB. The database won't grow into that 75GB for a very long time, since I moved out the data that created this free space to begin with. – SqlRyan Oct 11 '10 at 16:54

3 Answers

6

No, using DBCC SHRINKFILE ('filename', target_size) is the right way to do it.

If you want to do it in "chunks", you can either set progressively smaller target sizes, or just let it run as long as you can before it gets cancelled.
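
As a rough sketch of the chunked approach (the logical file name IndexFile below is a placeholder - look up the real name in sys.database_files - and the target sizes are only examples, in MB):

    -- Find the logical file name, current size, and space actually used (values in MB)
    SELECT name,
           size / 128 AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sys.database_files;

    -- Shrink in steps rather than in one shot; the target size is specified in MB
    DBCC SHRINKFILE ('IndexFile', 140000);  -- down to ~137GB first
    DBCC SHRINKFILE ('IndexFile', 120000);  -- then ~117GB
    DBCC SHRINKFILE ('IndexFile', 92160);   -- final target of ~90GB, leaving headroom over the 75GB in use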

A few comments:

  • Pick a reasonable target size that leaves some margin of free space - maybe 90GB total for 75GB of data?
  • While the shrink is running, check the activity monitor to see if the SPID is being blocked. If there is an open transaction on a page at the very end of the file, then shrink won't be able to move it until that transaction commits or is rolled back.
  • Is the SPID actually making progress? (The CPU and I/O numbers should be changing - see the sketch after this list for a query that shows both this and any blocking.)
  • Shrink can sometimes take a very, very long time, but it should save its progress (meaning it moves 1 page at a time, and when it gets cancelled, all the completed page moves have already been done)
  • After cancelling the shrink, try doing a DBCC SHRINKFILE ('filename', TRUNCATEONLY). It should recover all the space it has already freed at the end of the file (see my prior point, and the sketch below).
  • If you get desperate, try restarting SQL in single-user mode, so you know that nothing else is working against the db at that time (obviously, this could be impossible on a prod server)
  • Once you are able to complete the shrink, make sure to do a full reindex on the database to eliminate the fragmentation that the shrink creates. Be aware that the rebuilds may use up some of the space you just freed.
  • If you still can't get the shrink to work, check out some of the discussion on this SO question. There are apparently some situations where shrinks might not progress.
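
To make a couple of those checks concrete, here is a minimal sketch (the file and table names are placeholders, and the WHERE filter assumes the shrink shows up as a Dbcc* command in sys.dm_exec_requests):

    -- While the shrink runs: is it blocked, and is it making progress?
    SELECT session_id, command, status, blocking_session_id,
           percent_complete, cpu_time, reads, writes
    FROM sys.dm_exec_requests
    WHERE command LIKE 'Dbcc%';

    -- After cancelling a long-running shrink: release the space already freed at the end of the file
    DBCC SHRINKFILE ('IndexFile', TRUNCATEONLY);

    -- Once the shrink finally completes: rebuild indexes to remove the fragmentation it caused
    ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;
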
BradC
  • 2,200
  • 4
  • 25
  • 35
1

We have used several approaches in our environment:

  1. If old tables can be phased out, create new tables on a brand-new filegroup and make that filegroup the database default. Over time, drop the old tables until the previous filegroup is empty, then drop the filegroup.
  2. If old tables cannot be phased out but contain historical data that can be taken offline for several hours, create a new empty table on a new filegroup. Swap the tables and copy the rows from the old table to the new one in batches. This can cause fragmentation, though.
  3. Issue DBCC SHRINKFILE with the EMPTYFILE option and SQL Server will move all the objects out of the old file into the other file(s) in the same filegroup. Then you can drop the old file. This can take a long time, though (see the sketch after this list).
  4. Recreate the clustered index (the primary key, with DROP_EXISTING) of every table on the new filegroup. This will lock each table, though (also shown in the sketch below).
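
As a rough sketch of options 3 and 4 (the file, filegroup, table, and column names here are hypothetical placeholders):

    -- Option 3: push everything out of the old file into the other file(s) in the same filegroup, then drop it
    DBCC SHRINKFILE ('OldDataFile', EMPTYFILE);
    ALTER DATABASE MyDatabase REMOVE FILE OldDataFile;

    -- Option 4: rebuild a table's clustered primary key index onto the new filegroup
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable
        ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON)
        ON [NewFilegroup];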

Good Luck

ozamora
  • 171
  • 4
0

If, rather than simply leaving the space alone (which you have specifically refused to do), you insist on attempting to shrink the database, expect it to take a very, very long time, and the resulting transaction log growth will just about cancel out any space you reclaim from the data file. As a bonus, watch your DB performance go through the floor afterward. If Paul Randal's article (which jl commented with, but I will repost here: Why you should not shrink your data files) can't convince you that shrinking is an awful idea, I'm not sure anyone can. With any luck, shrink will be removed from SQL Server (or at least changed so it works the way Paul recommends) in the next revision.

Jim B
  • 23,938
  • 4
  • 35
  • 58