12

Let's say I have a SQL Server database whose data files have been created with an initial size of 100 GB, but it only contains 10 GB of data. A database backup will then be only 10 GB in size.

I want to restore this backup to a different server (or a different database on the same server), but I don't want it to take the same disk space as the original one (100 GB), which is what happens by default.

I can't shrink the original database before taking the backup (it's a production database, and it needs that much pre-allocated space). I could shrink the restored database after the restore completes, but I would really prefer it not take up 100 GB in the meantime; besides, in this specific scenario I don't have that much free disk space, so the restore isn't going anywhere.

Is there any way I can restore the database and have it only take up as much space as the actual data it contains?

Massimo
  • I've had a similar issue. The original DB took up ~30 GB on disk (with around 1.5 GB of real data) and I wanted to restore it on my local MSSQL Express server. The obvious problem is that my local server allows a maximum of 10 GB per database, since it is the free edition. So I'd be able to do it if I could shrink the files before restoring the DB, but... – userfuser Nov 14 '16 at 13:51

3 Answers

6

No, sorry - no way. A restore recreates the files exactly as they were when the backup was taken. Shrinking has to be done either after the restore or before taking the backup.

TomTom
2

If you're tight on disk space, you can put the .bak file on a network share and restore it from there. This should work if you're running SQL Server with a domain account and give that account enough rights on the share to read the file.
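
Something along these lines should work for the restore-from-a-share option; the server, share and database names below are just placeholders:

    -- Restore directly from a .bak sitting on a network share
    -- (the SQL Server service account needs read access to the share):
    RESTORE DATABASE MyDB
        FROM DISK = N'\\fileserver\backups\MyDB.bak'
        WITH RECOVERY;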

The other option, which used to be firmly in the "are you f'ing nuts?" basket (but is only useful if you're running SQL Server 2008 R2), is that SQL Server now supports creating database files directly on a share without having to use a trace flag, and I can tell you from personal experience that it works! So you can do a RESTORE ... WITH MOVE to a share.
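
A rough sketch of the restore-to-a-share variant (SQL Server 2008 R2 or later); the logical file names and UNC paths below are placeholders, and you can read the actual logical names out of the backup with RESTORE FILELISTONLY:

    -- Find the logical file names recorded in the backup
    RESTORE FILELISTONLY FROM DISK = N'\\fileserver\backups\MyDB.bak';

    -- Restore with the data and log files placed on a share
    RESTORE DATABASE MyDB
        FROM DISK = N'\\fileserver\backups\MyDB.bak'
        WITH MOVE N'MyDB_Data' TO N'\\fileserver\sqlfiles\MyDB.mdf',
             MOVE N'MyDB_Log'  TO N'\\fileserver\sqlfiles\MyDB_log.ldf',
             RECOVERY;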

Nick Kavadias
1

Generally speaking, no. A few random ideas that may or may not be of any help to you:

  • Unless you absolutely need data from that specific backup, you could create a new (empty) database of the target size and use bulk copies (or SSIS) to push all tables from the current (live) database into your copy (see the sketch after this list).
  • There are 3rd-party tools (Redgate Compare, for example) that can help automate this kind of thing, if this is more than a one-time operation.
  • Some 3rd-party backup software (Quest Litespeed, for example) has the ability to do an "object-level recovery", which can restore individual tables or other objects to a new (empty) database. Even if the backup wasn't created using Litespeed, I believe the product should work on Native SQL backups.
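
A rough sketch of that first idea, shown here with an explicitly right-sized database and a linked-server INSERT...SELECT; bcp or SSIS would do the data movement just as well, and every name, path and size below is a placeholder:

    -- Create an empty database only slightly larger than the real data
    CREATE DATABASE MyDB_Copy
        ON (NAME = N'MyDB_Copy_Data', FILENAME = N'D:\Data\MyDB_Copy.mdf', SIZE = 12GB)
        LOG ON (NAME = N'MyDB_Copy_Log', FILENAME = N'D:\Data\MyDB_Copy_log.ldf', SIZE = 1GB);
    GO

    -- After scripting the schema into MyDB_Copy, pull the data over for each table,
    -- assuming a linked server named LIVESRV pointing at production (hypothetical):
    INSERT INTO MyDB_Copy.dbo.MyTable WITH (TABLOCK)
    SELECT * FROM LIVESRV.LiveDB.dbo.MyTable;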

Finally, I do like some "elbow room" in my production databases, too, but 90GB free of 100GB total sounds a bit extreme. The following steps might give you what you need and shouldn't impact production:

  1. Run DBCC SHRINKFILE with TRUNCATEONLY on the production data file (using its logical file name) to temporarily release any free space at the end of the file (TRUNCATEONLY is not IO-intensive and will not fragment indexes)
  2. If the log file is also big, run a DBCC SHRINKFILE on the production log file during a time of low activity, just after taking a log backup.
  3. Run your backup
  4. Do an ALTER DATABASE MODIFY FILE to re-grow the production data file back to the original size.

There shouldn't be any production impact using these steps. The only risk is if some of the data just happens to be at the very end of the 100 GB data file, in which case Step (1) won't release much, if any, space.
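
A minimal T-SQL sketch of steps 1-4; the logical file names, sizes and backup paths are placeholders:

    USE MyDB;
    GO

    -- 1. Release unused space at the end of the production data file
    DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);

    -- 2. Optionally shrink the log file too, just after a log backup
    --    (assuming the FULL recovery model; target size in MB)
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';
    DBCC SHRINKFILE (N'MyDB_Log', 1024);

    -- 3. Take the full backup while the files are small
    BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB.bak';

    -- 4. Grow the production data file back to its original size
    ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_Data', SIZE = 100GB);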

BradC
  • Thanks, I already know about this and tried to devise some workarounds. I just can't touch the original database; only the backup is made available... and it was configured that way. – Massimo Sep 22 '10 at 15:59