
What's the simplest way to shrink transaction log file on a mirrored production database?

I have to, as my disk space is running out.

I will make a full database backup before I do this, so I don't need to keep anything from the transaction log (right? I have daily full database backup, probably never need point-in-time restore, though I'll keep the option open if I can - that's all the .ldf is really for, correct?).

Solved:
OK, after doing 2 backups through SSMS (not TSQL) of just the log, creating a totally new backup set, the Shrink > Files > Log dialog in SSMS finally worked, freeing some disk space.

Not sure why 2 backups were needed, or why TSQL didn't work, and there was no difference in the reported "space available to be reclaimed" in the shrink dialog (it was at 99% for all the shrink attempts after the first backup too, but still didn't free any space), but problem solved for now. Thanks all.

MGOwen

3 Answers


This is an older question, but some things were not explained well, and this will hopefully shed some light. The "how to shrink" was mostly answered already; this explains the "why" of what you are actually doing.

Backups, specifically log backups, do several things. The data is written to disk as a backup set to be used or discarded later as needed. Each subsequent backup is added to that set when the database is in the Full recovery model. Truncating the log or starting a new backup set breaks the chain and inhibits, or in many cases negates, your ability to restore to a point in time before the chain was broken.

The data in the log is not actually deleted, nor is the log file shrunk, during a backup. The active VLFs are marked inactive, except those containing transactions that cannot yet be fully committed - they remain active. Inactive VLFs can be rewritten, making your log circular, in effect like a snake eating its own tail. A checkpoint is issued as part of the backup, which tells the DB to start writing to the beginning of the log once the current VLF fills up. If you perform a shrink at this point, you will gain back all the space at the end of the log, up to the point of the active VLF.

The reason a second backup seems to "do the trick" is because the active VLF typically fills up during this time and the log is being written from the beginning. When you take the second backup, the active VLF is written to disk as part of the backup set (or not), and the VLF is marked inactive. Since this was the tail end of the log due to the previous shrink, performing a shrink now frees up all the space to the beginning of the log, up to the currently active VLF.
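The two-backups-then-shrink sequence described above can be sketched in T-SQL. The database name, logical log file name, backup paths, and target size below are placeholders, not taken from the question - substitute your own:

```sql
-- Sketch only; [MyDb], 'MyDb_log' and the paths are placeholder names.
BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb_log1.trn';

-- Let normal activity fill the active VLF so writing wraps to the
-- front of the file, then back up the tail of the log again:
BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb_log2.trn';

-- Now the inactive VLFs at the end of the file can be released.
USE [MyDb];
DBCC SHRINKFILE (N'MyDb_log', 1024);  -- target size in MB
```

If the shrink still reclaims nothing, the active VLF is likely still at the end of the file - wait or repeat the log backup rather than shrinking in a loop.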

All of this assumes a couple of things: 1) you don't have massive VLFs that take hours or days to fill up, and 2) your database is fairly inactive and there aren't a bunch of transactions being written to the log. If either of these conditions is an issue for you, shrinking your log will also be an issue.
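You can see how many VLFs your log has, and which are still active, with `DBCC LOGINFO` - an undocumented but widely used command (the database name below is a placeholder):

```sql
-- One row per VLF in the current database's log.
-- Status = 2 means the VLF is active and cannot be reused yet;
-- Status = 0 means it is inactive and reusable.
USE [MyDb];  -- placeholder name
DBCC LOGINFO;
```

If the rows with Status = 2 are at the end of the output, a shrink cannot release much until writing wraps back to the front of the file.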

All of this is true for both unmirrored and mirrored databases. The difference is that you only need to perform the maintenance on the primary in a mirrored scenario, assuming your mirror is built identically.


Do a transaction log backup, with whatever method you feel most comfortable with.

This will cause the portions of the log that have already been committed to the database to be marked reusable (note that the backup truncates the log internally; it does not by itself shrink the file on disk). Ideally, you should create a database maintenance task to do this for you on a recurring basis for precisely this reason - to keep old transaction log records from filling up your disk.
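Before and after the log backup, you can check how full each log file actually is, to see whether the backup freed internal space (this works regardless of which backup method you used):

```sql
-- Reports, per database: log size in MB and the percentage in use.
-- A low "Log Space Used (%)" after a log backup means the file has
-- free internal space that a shrink could return to the OS.
DBCC SQLPERF (LOGSPACE);
```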

Per the other bit of your question... no, not really. Yes, they perform that function, but not only that function.

Databases are not backed up (or written to) in the traditional fashion that other files are, because the database file itself is constantly in use and constantly changing. So a single "point in time" backup would either require taking the database offline to "freeze" it in a consistent state, or result in different parts of the backup containing different data than was there when the backup started.

What transaction logs are is records of every "transaction" the database performed. Rather than writing to the database file every time a record is changed, updated, added, removed, etc., those actions are written to a separate file, a transaction log, and then committed into the database file when the SQL server determines it's safe to do so without bringing any activity to a halt. So transaction logs are, in effect, where changes to the database go before they actually become changes to the database [file].

So, if you need to get back to a given database state, or point in time, the transaction logs are "replayed." Essentially, not copying the file data, but going to the most recent point-in-time state found for the database and then doing all the same things that got the database to the specified [later] state. But, it's important to note that at any given time, your transaction logs will contain transactions that will not have been committed to the database yet. So they're more than just the ability to perform a point-in-time restore. They contain [some] changes that are being made, or will shortly be made to the database.

This is why you're forced to do a backup before purging the transaction logs - once that backup's done, the system has a point-in-time copy of the database to reference for any future restores, and is able to determine which transactions have been committed to the database, and which have not. And with that information, the system knows which obsolete transaction logs to delete for you and which not to.
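The "replay" described above looks roughly like this in T-SQL. Database name, file paths, and the `STOPAT` time are placeholders for illustration:

```sql
-- Restore the most recent full backup, but leave the database
-- unrecovered so log backups can still be applied.
RESTORE DATABASE [MyDb]
    FROM DISK = N'D:\Backups\MyDb_full.bak'
    WITH NORECOVERY;

-- Replay the transaction log up to the desired moment, then recover.
RESTORE LOG [MyDb]
    FROM DISK = N'D:\Backups\MyDb_log1.trn'
    WITH STOPAT = '2012-08-30T11:00:00', RECOVERY;
```

Every log backup in the chain between the full backup and the `STOPAT` time must be applied in order - which is why breaking the backup chain costs you point-in-time restore.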

This can, however, take some time, depending on the size of your transaction logs. If you've never done one before, brace yourself; it'll take a while.

HopelessN00b
  • If MGOwen isn't bothered about losing any historical log data it may be worth switching to simple recovery mode or performing a log backup `WITH TRUNCATE_ONLY` to truncate the log first - and then perform a full backup and regular log backups from that point forward (switching back to full recovery mode first of course) – Chris McKeown Aug 30 '12 at 07:41
  • @HopelessNOOb Thanks, this answer explains a lot; but backing up the log didn't actually free up any space. I even did Shrink afterwards (just right click Shrink, Files, then Log) and it still did nothing, though it said 99% of the log was now free to be removed. Any idea what I'm doing wrong? – MGOwen Aug 30 '12 at 11:37
  • @Chris McKeown: Thanks. If I've just done a full database backup, is there any real advantage to keeping that log data? It's no use at all, right? – MGOwen Aug 30 '12 at 11:40
  • @MGOwen You mean keeping the log data from before you performed a full backup? You could use that (in conjunction with a previous full backup) to restore to a point in time in the past. If you want to continue to be able to restore to a point in time, you need to keep backing up the transaction log which will also keep the log file size down. If you're not bothered about restoring to a point in time, switch the recovery mode to Simple and just do regular full and diff backups. – Chris McKeown Aug 30 '12 at 11:44
  • @Chris McKeown: I think I might prefer Simple, but I tried changing to simple, and it said I can't do that if I'm mirroring. – MGOwen Aug 30 '12 at 12:20
  • The Simple recovery model tells SQL Server to truncate the log after every checkpoint, and not just after a manual log backup. This means that the log is no longer guaranteed to contain all the data the mirror needs to stay in sync, so you can't use Simple. – adaptr Aug 30 '12 at 14:47
  • @adaptr Ah, so simple is irrelevant to my question, as I am mirroring. Back to my original question, then: How do I flush my backed up log (when mirroring)? – MGOwen Aug 31 '12 at 01:50
  • @MGOwen OK, so probably should have asked this to begin with, as it turns out... but how much free space is in your `.ldf` file? As in, is it possible the backup emptied most of the transaction logs, but is waiting on you to shrink it? You can check this through SQL Management Studio. (And, if possible, you might want look into creating a separate partition or drive for transaction log files - it's standard practice, if not best, practice for databases with any significant amount of transaction logging.) – HopelessN00b Aug 31 '12 at 03:51
  • OK, after doing **2 backups through SSMS** (not TSQL) of **just the log, creating a totally new backup set**, the Shrink-Files-Log dialog in SSMS finally actually worked, freeing some disk space. Not sure why 2 backups were needed, or why TSQL didn't work, and there certainly was no difference in the reported "space available to be reclaimed" in the shrink dialog, but problem solved for now. Thanks all. – MGOwen Sep 01 '12 at 00:24

The mirroring feature uses the log to keep track of things until it knows that the other server has those changes. So, no, the ldf is not just for point-in-time recovery. (It's also important for some replication schemes, but you aren't doing that.) Even TRUNCATE_ONLY will not throw away logged changes for stuff that SQL might need. The classic example is some large or long-running transaction. If you are halfway through an hour-long transaction and a DBA runs TRUNCATE_ONLY, your stuff won't be purged from the LDF. The LDF may continue to grow or experience other problems. If the DBA kills your connection, waits for the rollback to finish and then runs TRUNCATE_ONLY, then the log should free up.

Have you tried using:

```sql
SELECT log_reuse_wait_desc FROM sys.databases WHERE name = 'mydb';
```

to see why the log is so large? Microsoft documents that catalog view in the sys.databases documentation.

You can also run:

```sql
DBCC OPENTRAN;
```

This is kind of old-school, but it should show you the oldest active transaction in that database. Microsoft documents that command in the DBCC OPENTRAN documentation.

I would also make sure that there is a log backup happening on a schedule.

I would make sure to give the TRUNCATE_ONLY command a little time to work; sometimes it takes a while for SQL to start writing to a VLF towards the front of the LDF file. If the last VLF in the LDF file is the one being written to, then SQL can't shorten the file. Failing that, I'd do a full BACKUP DATABASE (with COPY_ONLY, or wait until the regular backup happens, if that isn't too far off in the future). Sometimes that seems to kick-start things, but it might just be that a backup distracts me while waiting for the current VLF to move to the front of the LDF file. After the active VLF moves towards the front of the LDF file, you should be able to use dbcc shrinkfile() and get the expected result. I'd recommend removing small chunks first, rather than trying to do it all in one shot.
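Removing small chunks might look like this - the logical file name and target sizes are placeholders, and each step should only be run after confirming the previous one made progress:

```sql
-- Shrink the log in modest steps rather than one big jump.
-- 'MyDb_log' is a placeholder logical file name (see sys.database_files).
USE [MyDb];
DBCC SHRINKFILE (N'MyDb_log', 8192);  -- first pass: down toward 8 GB
DBCC SHRINKFILE (N'MyDb_log', 4096);  -- then 4 GB, and so on
```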

Also, you want to avoid doing this on a regular basis, as getting into a repetitive shrink-autogrow-shrink-autogrow cycle can be a performance killer. (It can lead to fragmented files, and the actual growth process can take a surprising amount of time, during which no transaction can commit.) Grow your files large enough so that they don't autogrow. Autogrow should be a failsafe and not relied on.
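After a one-off shrink, pre-sizing the log and setting a fixed growth increment keeps autogrow as the failsafe it should be. The names and sizes below are placeholders - size for your actual workload:

```sql
-- Pre-size the log and use a fixed-size (not percentage) growth step.
-- [MyDb] and 'MyDb_log' are placeholder names.
ALTER DATABASE [MyDb]
    MODIFY FILE (NAME = N'MyDb_log', SIZE = 16384MB, FILEGROWTH = 1024MB);
```

A fixed increment avoids the ever-larger growth events a percentage setting produces as the file gets bigger.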

Darin Strait