When running a kubectl command using the bitnami/kubectl image from inside a Kubernetes (EKS-based) cluster, I expect the command to pick up the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables and connect to the local cluster to run commands. Specifically, I am using this to run some housekeeping Kubernetes cronjobs on the cluster, but the container just errors out and ends up in a CrashLoopBackOff.
The error message from the container logs is as follows:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The localhost:8080 is particularly odd, since that address has never been in use and is not configured anywhere that I am aware of. Switching to a simple shell command allows the job to run successfully, but kubectl refuses to work. Running env inside the container confirms that the KUBERNETES_SERVICE_* variables are indeed injected and set correctly. The only recent change was moving these jobs to be managed by the Terraform kubernetes_cron_job resource rather than applying them directly from a YAML file. Each cronjob is associated with a service account with the appropriate permissions, and that is still correctly configured in the cronjob.
For reference, here is a slightly redacted version of the cronjob:
resource "kubernetes_cron_job" "test_cronjob" {
provider = kubernetes.region
metadata {
name = "test-cronjob"
namespace = "default"
}
spec {
concurrency_policy = "Allow"
failed_jobs_history_limit = 5
schedule = "*/5 * * * *"
job_template {
metadata {}
spec {
backoff_limit = 2
parallelism = 1
completions = 1
template {
metadata {}
spec {
container {
name = "kubectl"
image = "bitnami/kubectl"
command = ["/bin/sh", "-c", <<-EOT
env && echo "test";
EOT
]
}
restart_policy = "OnFailure"
service_account_name = "sa-test"
}
}
}
}
}
}
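One thing I noticed while writing this up: if I am reading the provider documentation correctly, the pod spec in the Terraform Kubernetes provider also accepts an automount_service_account_token argument, which I am not setting anywhere. On the assumption that the provider's default for it might differ from the Kubernetes default of true, setting it explicitly would look like the following sketch of the inner pod spec (container block unchanged from above):

spec {
  # Assumption: request the token mount explicitly rather than relying on the provider default
  automount_service_account_token = true
  restart_policy                  = "OnFailure"
  service_account_name            = "sa-test"
}

I have not yet confirmed whether this changes the behaviour.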