
Upon push I want to create a single Kaniko build job. It currently works, but after the job has finished it shows:

kaniko-5jbhf                  0/1     Completed   0          9m13s

Yet when I run the following, it just hangs indefinitely:

kubectl wait --timeout=-1s --for=condition=Completed pod/kaniko

My question can be summarized in two parts: 1) How can I wait for a pod/job to finish? 2) How can I remove the job after it has finished?

I have tried ttlSecondsAfterFinished, but enabling feature gates in the cluster is problematic, and there is no example of how to use it.
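For reference, this is roughly the shape of the TTL approach I was attempting; a minimal sketch, assuming the TTLAfterFinished feature gate is enabled on the kube-apiserver and kube-controller-manager (the image and args here are illustrative):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kaniko
    spec:
      # Delete the Job (and its pods) two minutes after it finishes;
      # requires the TTLAfterFinished feature gate on v1.12+.
      ttlSecondsAfterFinished: 120
      template:
        spec:
          containers:
          - name: kaniko
            image: gcr.io/kaniko-project/executor:latest
            args: ["--context=git://github.com/example/repo.git", "--no-push"]
          restartPolicy: Never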

Max0999

5 Answers


For wait to evaluate the state of a resource, you need to identify it correctly. In the second snippet, you need to provide the pod name instead of the job name: kubectl wait --timeout=-1s --for=condition=Completed pod/kaniko-5jbhf. However, the syntax seems correct for targeting the job itself as job/kaniko.

For further reference, see the kubectl wait documentation.

Now, for the Job deletion: if you don't want to use the feature gates, you can either access the API programmatically to locate and delete finished Jobs, or make them dependent on a parent object that deletes them in cascade. For Jobs specifically, the only such parent is a CronJob. The downside is that CronJobs are meant to be time-scheduled objects, which means you would have to redesign around a time-based trigger.
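Without the feature gate, a minimal wait-then-delete sketch from a CI script would look like this (assuming the Job is named kaniko, as in the question; deleting a Job cascades to its pods by default):

    # Block until the Job reports Complete, then remove it
    kubectl wait --timeout=-1s --for=condition=complete job/kaniko
    kubectl delete job/kaniko

    # Or, assuming your cluster supports the status.successful
    # field selector for Jobs, clean up every succeeded Job at once
    kubectl delete jobs --field-selector status.successful=1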

Consider that, by design, Jobs are meant to stay around after completion to preserve a record of what happened while they were running. Also, from v1.12 they are designed to be able to delete themselves (via ttlSecondsAfterFinished), so enabling the feature gate is probably the most straightforward way to achieve what you want.

yyyyahir

To wait until a resource (such as a Deployment, DaemonSet, or StatefulSet) has rolled out and all its objects are ready, run:

kubectl rollout status {Resource Type} {Resource name}

For example:

  $ kubectl rollout status deployment my-app

    Waiting for deployment "my-app" rollout to finish: 0 of 1 updated replicas are available...
    deployment "my-app" successfully rolled out

Another way is to wait for a specific pod, by id, or even better - by label. For example:

kubectl wait --for=condition=ready pod -l app=my-app

Note that if the pod was created with kubectl run my-app ... (and not with kubectl create deployment), the label would probably be run=my-app.
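If you are not sure which labels a pod actually carries, you can list them before picking a selector:

    kubectl get pods --show-labels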

Noam Manos

The indefinite waiting is probably caused by using an incorrect condition name. I had the same trouble; I eventually checked the source code and came up with the following:

$ kubectl wait --for=condition=Complete job/my-job

and

$ kubectl wait --for=condition=Failed job/my-job

The correct naming can also be found as follows:

$ kubectl explain job.status.conditions.type
KIND:     Job
VERSION:  batch/v1

FIELD:    type <string>

DESCRIPTION:
     Type of job condition, Complete or Failed.
Peter Evans

You should not wait on pods; wait works on Jobs. I am also not sure that Completed is the right condition name: complete seems to work, at least locally for me, so it should be kubectl wait --timeout=-1s --for=condition=complete job/${job_name}

And besides that, unless you really do want to wait indefinitely, note what --timeout=-1s (negative one second) means: kubectl wait treats any negative timeout as "wait for a week".

From Kubernetes docs:

    The length of time to wait before giving up. 
    Zero means check once and don't wait, negative means wait for a week.
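So if you want a bounded wait instead, pass a positive duration; this exits non-zero if the Job has not completed within five minutes (the job name is a placeholder):

    kubectl wait --for=condition=complete job/my-job --timeout=300s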
M3rr1

If you need to wait for a Job to either complete or fail, you can wait for its pod to become ready and then follow the logs until it exits:

kubectl wait --for=condition=ready pod --selector=job-name=YOUR_JOB_NAME --timeout=-1s
kubectl logs --follow job/YOUR_JOB_NAME
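Alternatively, a portable polling sketch that blocks until the Job reports either Complete or Failed (the job name is a placeholder; the jsonpath filter picks whichever condition is currently True):

    job=my-job
    while true; do
      # Evaluates to "Complete" or "Failed" once the Job sets that condition to True
      status=$(kubectl get "job/$job" -o jsonpath='{.status.conditions[?(@.status=="True")].type}')
      [ "$status" = "Complete" ] && break
      [ "$status" = "Failed" ] && { echo "job failed"; exit 1; }
      sleep 5
    done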
user11153