
Currently, I have a replication task that looks something like this:

{
   "continuous": true,
   "create_target": true,
   "owner": "admin",
   "source": "https://remote/db/",
   "target": "db",
   "user_ctx": {
       "roles": [
           "_admin"
       ]
   }
}
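
For reference, a document like this can be created with something along these lines (a sketch rather than the exact command; the host, port, credentials, and document ID are placeholders, assuming the document is stored in the _replicator database):

$ curl -X PUT 'http://admin:password@localhost:5984/_replicator/remote-to-local' \
       -H 'Content-Type: application/json' \
       -d '{"continuous": true, "create_target": true,
            "source": "https://remote/db/", "target": "db",
            "user_ctx": {"roles": ["_admin"]}}'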

Using plain HTTP, I see no errors in the log. Using HTTPS, replication does technically work, but a huge number of errors also show up in the logs, and I would like to fix them.

The errors look like this:

[Fri, 01 Nov 2013 22:11:49 GMT] [info] [<0.2227.0>] Retrying GET request to https://remote/db/doc?atts_since=%5B%2271-315ddf7e3d31004df5cd00846fd1cf38%22%5D&revs=true&open_revs=%5B%2275-a40b4c7d00c17cddcbef5b093bd10392%22%5D in 0.5 seconds due to error req_timedout

However, I can curl these URLs without timing out:

$ curl -k 'https://remote/db/doc?atts_since=%5B%2273-7a26ae649429b96ed01757b477af40bd%22%5D&revs=true&open_revs=%5B%2276-c9e25fe15497c1c60f65f8da3a68d57d%22%5D'
<returns a bunch of garbage (expected garbage ;)>
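
To quantify the curl side, something like this can be used to time the TLS handshake and the full response separately (same -k flag as above; the -w timing variables are standard curl ones):

$ curl -k -s -o /dev/null \
       -w 'connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' \
       'https://remote/db/doc?atts_since=%5B%2273-7a26ae649429b96ed01757b477af40bd%22%5D&revs=true&open_revs=%5B%2276-c9e25fe15497c1c60f65f8da3a68d57d%22%5D'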

And I have a very generous 120s connection_timeout set on CouchDB replication:

[Fri, 01 Nov 2013 22:13:00 GMT] [info] [<0.3359.0>] Replication `"36d8a613224f3749a73ae4423b5f9733+continuous+create_target"` is using:
    4 worker processes
    a worker batch size of 500
    20 HTTP connections
    a connection timeout of 120000 milliseconds
    10 retries per request
    socket options are: [{keepalive,true},{nodelay,true}]
    source start sequence 100321
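
These settings live in the [replicator] section of the configuration and can also be inspected and changed at runtime through the _config API. A sketch, with the host, port, credentials, and new values as placeholders (values must be JSON strings):

$ # show the current [replicator] settings
$ curl 'http://admin:password@localhost:5984/_config/replicator'
$ # raise the per-request timeout and retry count
$ curl -X PUT 'http://admin:password@localhost:5984/_config/replicator/connection_timeout' -d '"300000"'
$ curl -X PUT 'http://admin:password@localhost:5984/_config/replicator/retries_per_request' -d '"15"'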

I cannot think of a difference significant enough that curl gets a response within seconds while the CouchDB replicator times out even with a 120s timeout. What am I missing, and what else can I try to tweak?

CouchDB v1.2.0 on Ubuntu 13.04 running Linux ip-10-40-65-137 3.8.0-32-generic #47-Ubuntu SMP Tue Oct 1 22:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Mike S
