
First time question, and I am also new to attempting to configure/administer google cloud services. Please be gentle.

My employer uses Google Container Registry to store images, and on the client side we use `gcloud docker pull ...` (and `push`) commands to transfer images to and from this registry. Due to circumstances beyond our control, we sometimes need to transfer large images over a very slow network connection. This can take long enough that the OAuth bearer token (timeout: 3600s) expires during the transfer. When this happens, the next image layer that the `gcloud docker pull` command attempts fails.

We end up with several successfully pulled layers, and then see an error message something like:

Server error while fetching image layer Please login prior to pull

Is it possible to configure the timeout of the OAuth bearer token? If so, how? There is nothing obvious in the Google Developers Console.

Is there another solution to this problem I may be missing?

  • If you re-try the pull does it resume where it left off (but with a refreshed token)? If so, it seems like you could just wrap `gcloud docker pull` in a retry loop that tries a few times before giving up. – Robert Bailey Mar 02 '16 at 23:25
  • That is the leading solution right now. Of course the pull that died actually locks up too, so we would have to watch the stderr stream and detect that error. Hoping for a cleaner solution before going there. – user6005293 Mar 02 '16 at 23:52
  • That sounds icky. It seems like the gcloud wrapper should take care of credential refresh automatically. – Robert Bailey Mar 03 '16 at 01:11
  • Where do you run your containers? If on Google Cloud, then why not upload the image to a GCE VM instance and then push and pull from there? – Kamran Mar 03 '16 at 03:48
  • The containers are not running in the cloud, otherwise this would not be an issue. They are hosted on physical hardware that is (sometimes) behind a slow network. Agreed there might be an issue in how the underlying utility is implemented, but I've got a mandate to make it work "now". – user6005293 Mar 03 '16 at 16:10
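
The retry-loop idea from the comments can be sketched as a small bash wrapper. This is a hedged sketch, not an official workaround: it assumes the registry resumes from the layers already pulled on the next attempt (so a re-run with a refreshed token makes forward progress). The attempt count, sleep interval, and image name are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Retry a command a fixed number of times before giving up.
retry() {
  local max_attempts=$1
  shift
  local attempt=1
  # Re-run the command until it succeeds or we exhaust attempts.
  until "$@"; do
    if (( attempt >= max_attempts )); then
      echo "Giving up after ${attempt} attempts: $*" >&2
      return 1
    fi
    echo "Attempt ${attempt} failed; retrying..." >&2
    attempt=$(( attempt + 1 ))
    sleep 1
  done
}

# Usage (hypothetical project and image name):
# retry 5 gcloud docker pull gcr.io/my-project/my-image:latest
```

As noted in the comments, you would still need to handle the case where the failed pull hangs rather than exiting, e.g. by adding a `timeout` in front of the wrapped command.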

1 Answer


Please see the link below. If you use a service account to perform the pulls, you can avoid this:

https://cloud.google.com/container-registry/docs/auth#using_a_json_key_file

Unfortunately with the access-token based approach Docker isn't designed to allow us to refresh it upon expiration, which is one part of why we added private key support.
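
The JSON-key approach from the linked docs looks roughly like this. A hedged sketch: the key-file path and image name are placeholders, and the exact flags may differ by Docker version; `_json_key` is the literal username Container Registry expects for key-file logins.

```shell
# Log Docker in to gcr.io with a service-account JSON key, so pulls
# use a long-lived credential instead of a 3600 s OAuth access token.
docker login -u _json_key --password-stdin https://gcr.io < /path/to/keyfile.json

# Subsequent pulls can then use plain docker, no gcloud wrapper needed
# (hypothetical image name):
# docker pull gcr.io/my-project/my-image:latest
```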

– Wei