
I'm trying to replicate (transactional, push) a relatively small table (50K rows), subject to several different filters that are rather expensive: each takes at least an hour to complete, since it involves a join against a 4-billion-row table in another database. I expect that the published article will have to be synchronized a few times a day.

Is this feasible, or should I look for a different solution?

  • Will there be any issue with exclusive locking of the original table during replication?
  • Currently, the Snapshot Agent status says "[0%] The process is running and is waiting for a response from the server." Is this normal for an expensive filter? Is this acceptable in practice?
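
One rough way to see what the Snapshot Agent is actually doing, beyond the [0%] status message, is to query the agent history tables in the distribution database. This is only a sketch: it assumes the default distribution database name and the standard MSsnapshot_agents / MSsnapshot_history tables.

    -- Sketch: inspect Snapshot Agent runs in the distribution database.
    -- Assumes the default database name ('distribution'); adjust to your setup.
    USE distribution;

    SELECT  a.name       AS agent_name,
            a.publication,
            h.runstatus,  -- 1=Start, 2=Succeed, 3=In progress, 4=Idle, 5=Retry, 6=Fail
            h.start_time,
            h.duration,   -- seconds
            h.comments    -- e.g. the "waiting for a response from the server" message
    FROM    dbo.MSsnapshot_agents  AS a
    JOIN    dbo.MSsnapshot_history AS h
            ON h.agent_id = a.id
    ORDER BY h.start_time DESC;
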
user35961
    I can't answer the question as asked but I do suggest looking at your indexes and creating additional ones as appropriate. With proper database design I believe you should be able to massively reduce the time required. – John Gardeniers May 09 '11 at 22:23
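
To make the indexing suggestion in the comment above concrete, here is a minimal sketch; dbo.HugeTable, JoinKey, and FilterColumn are placeholder names, not the actual schema from the question:

    -- Hypothetical sketch: index the join column on the 4-billion-row table so
    -- the filter's join can seek instead of scanning it. All names are placeholders.
    CREATE NONCLUSTERED INDEX IX_HugeTable_JoinKey
        ON dbo.HugeTable (JoinKey)
        INCLUDE (FilterColumn);  -- cover whatever columns the filter predicate reads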

1 Answer


For what it's worth, here's an excerpt from MSDN - Filtering Published Data:

Row filters in transactional publications can add significant overhead because the article filter clause is evaluated for each log row written for a published table, to determine whether the row should be replicated. Row filters in transactional publications should be avoided if each replication node can support the full data load, and the overall data set is reasonably small.

It's probably best to filter rows using quick, sargable criteria that are fully independent of other rows, tables, and databases.
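
As an illustration (publication, article, and column names below are made up), a sargable filter that references only the published table's own indexed columns can be supplied through sp_addarticle's @filter_clause parameter:

    -- Sketch: a row filter that touches only the published table itself,
    -- with no joins to other tables or databases. Names are placeholders.
    EXEC sp_addarticle
        @publication   = N'MyPublication',
        @article       = N'SmallTable',
        @source_object = N'SmallTable',
        @filter_clause = N'RegionCode = N''EU''';  -- evaluated for each logged row

Keeping the clause local and cheap matters precisely because, per the excerpt above, it is evaluated for every log row written for the published table.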

user35961