
I use Jenkins deployed with the official Helm chart, spawning Kubernetes pods on GKE, and I have the following part in my Jenkinsfile:

...
withCredentials([file(credentialsId: "${project}", variable: 'key')]) {
  withEnv(["GOOGLE_APPLICATION_CREDENTIALS=${key}"]) {
    sh("gcloud --verbosity=debug auth activate-service-account --key-file ${key} --project=${project_id}")
    sh("gcloud --verbosity=debug container clusters get-credentials ${project} --zone europe-west1-b")
...

And this randomly fails; here is the output, which is not really helpful:

+ gcloud --verbosity=debug container clusters get-credentials tastetastic --zone europe-west1-b
DEBUG: Running gcloud.container.clusters.get-credentials with Namespace(_deepest_parser=ArgumentParser(prog='gcloud.container.clusters.get-credentials', usage=None, description='Updates a kubeconfig file with appropriate credentials to point\nkubectl at a Container Engine Cluster. By default, credentials\nare written to HOME/.kube/config. You can provide an alternate\npath by setting the KUBECONFIG environment variable.\n\nSee [](https://cloud.google.com/container-engine/docs/kubectl) for\nkubectl documentation.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False), _specified_args={'verbosity': '--verbosity', 'name': 'NAME', 'zone': '--zone'}, account=None, api_version=None, authority_selector=None, authorization_token_file=None, calliope_command=<googlecloudsdk.calliope.backend.Command object at 0x7f48d48e6e10>, command_path=['gcloud', 'container', 'clusters', 'get-credentials'], configuration=None, credential_file_override=None, document=None, flatten=None, format=None, h=None, help=None, http_timeout=None, log_http=None, name='$project', project=None, quiet=None, trace_email=None, trace_log=None, trace_token=None, user_output_enabled=None, verbosity='debug', version=None, zone='europe-west1-b').
Fetching cluster endpoint and auth data.
DEBUG: unable to load default kubeconfig: [Errno 2] No such file or directory: '/home/jenkins/.kube/config'; recreating /home/jenkins/.kube/config
DEBUG: Saved kubeconfig to /home/jenkins/.kube/config
kubeconfig entry generated for $project.
INFO: Display format "default".
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code -1
Finished: FAILURE

Do you have an idea where this could come from?

How can I further debug this random failure?

And finally, what about retrying, say 5 times, to be sure to avoid any network hiccup?

It looks like what I'm doing is fine: https://github.com/NYTimes/drone-gke/blob/f23a63fd8269182c4ce1d86302e1affc505b6441/main.go#L145
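For the retry idea, Pipeline's built-in `retry` step (mentioned in the comments) can wrap just the flaky call, and adding `--log-http` alongside `--verbosity=debug` should log the HTTP requests so a real network hiccup would show up in the console. A minimal sketch of how I could combine the two in the snippet above — illustrative, not a confirmed fix for the exit code -1:

```groovy
withCredentials([file(credentialsId: "${project}", variable: 'key')]) {
  withEnv(["GOOGLE_APPLICATION_CREDENTIALS=${key}"]) {
    sh("gcloud --verbosity=debug auth activate-service-account --key-file ${key} --project=${project_id}")
    // Retry only the flaky step, up to 5 times; --log-http logs each
    // HTTP request/response so an intermittent network failure is visible.
    retry(5) {
      sh("gcloud --verbosity=debug --log-http container clusters get-credentials ${project} --zone europe-west1-b")
    }
  }
}
```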

Pierre Ozoux
  • Does it work on the command line, when logged in as the jenkins user? You can use the [`retry` step](https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-retry-code-retry-the-body-up-to-n-times) to retry a block. – Christopher Mar 14 '17 at 13:04
  • It works most of the time, it is one out of 5 times I'd say. Thanks a lot for the `retry` step, I think I'll use it, but it is not satisfying. – Pierre Ozoux Mar 14 '17 at 13:06
  • Try adding the flag `--log-http` in addition to `--verbosity=debug` to the gcloud command to log HTTP requests, which may provide more insight. – Adam Mar 25 '17 at 22:25
  • @PierreOzoux Were you able to resolve this problem? If so, please post the resolution as an answer, so to benefit the community. – N Singh Aug 21 '17 at 19:29

0 Answers