
I cannot get kubectl to authenticate with the EKS cluster my coworker created. I've followed the documentation: the AWS CLI can run aws eks commands (I'm an AWS Full Administrator), and the heptio-authenticator-aws binary is in my PATH and can generate tokens.
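For reference, this is how I verify that token generation works (using the same cluster ID that appears in my kubeconfig below); it prints an ExecCredential JSON containing a token, with no errors:

$ heptio-authenticator-aws token -i dev-qa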

When I run kubectl I get this error:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", 
GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", 
BuildDate:"2018-06-06T15:22:13Z", GoVersion:"go1.9.6", Compiler:"gc", 
Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client
to provide credentials)

Here's my ~/.kube/config file. It's the exact kubeconfig my coworker can successfully use.

apiVersion: v1
clusters:
- cluster:
    server: https://myinstance.sk1.us-east-1.eks.amazonaws.com
    certificate-authority-data: base64_cert
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "dev-qa"
        # - "-r"
        # - "<role-arn>"
spiffytech

5 Answers


I needed to add my IAM user to the mapUsers section of the ConfigMap configmap/aws-auth, per these AWS docs.

You can edit the configmap using the same AWS user that initially created the cluster.

$ kubectl edit -n kube-system configmap/aws-auth

apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
  mapAccounts: |
    - "111122223333"
spiffytech
    This answer worked in my case. If the cluster was created by an IAM user then that user gets automatically mapped into the cluster. HOWEVER ... any other IAM users have to be manually mapped/added. – user183744 Jun 21 '18 at 20:08
  • I get an error: `error: the server doesn't have a resource type "configmap"` My understanding is that you have to log in to edit the configmap, resulting in a catch-22 scenario – Marcello Romani Aug 04 '18 at 12:27
  • I know I'm late to the party but just wanted to clarify that as of 2019 EKS now creates the config file for you: aws eks --region region update-kubeconfig --name cluster_name and you can download the config map from: curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml it is all here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html – eco Mar 08 '19 at 20:57
  • 1
    `kubectl edit -n kube-system configmap/aws-auth` doesn't work. It wants `please enter username`, which is a cyclic error, as this page deals with not being able to authenticate. Therefore this solution is irrelevant. – JasonGenX Nov 13 '20 at 21:15

Unfortunately, AWS doesn't yet have a command like GKE's "gcloud container clusters get-credentials", which creates the kubectl config for you. So you need to create the kubectl config file manually.

As mentioned in the Create a kubeconfig for Amazon EKS document, you need to get two things from the cluster:

  1. Retrieve the endpoint for your cluster. Use this for the <endpoint-url> in your kubeconfig file.

    aws eks describe-cluster --name <cluster-name> --query cluster.endpoint
    
  2. Retrieve the certificateAuthority.data for your cluster. Use this for the <base64-encoded-ca-cert> in your kubeconfig file.

    aws eks describe-cluster --name <cluster-name> --query cluster.certificateAuthority.data
    
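Both queries return JSON-quoted strings by default. Passing the AWS CLI's global --output text flag strips the quotes so the values can be pasted directly into the kubeconfig:

aws eks describe-cluster --name <cluster-name> --query cluster.endpoint --output text
aws eks describe-cluster --name <cluster-name> --query cluster.certificateAuthority.data --output text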

Create the default kubectl folder if it does not already exist.

mkdir -p ~/.kube

Open your favorite text editor and paste the following kubeconfig code block into it.

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"

Replace the <endpoint-url> with the endpoint URL that was created for your cluster. Replace the <base64-encoded-ca-cert> with the certificateAuthority.data that was created for your cluster. Replace the <cluster-name> with your cluster name.

Save the file to the default kubectl folder, with your cluster name in the file name. For example, if your cluster name is devel, save the file to ~/.kube/config-devel.

Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.

export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel

(Optional) Add the configuration to your shell initialization file so that it is configured when you open a shell.

For Bash shells on macOS:

echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel' >> ~/.bash_profile

For Bash shells on Linux:

echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel' >> ~/.bashrc

Test your configuration.

kubectl get svc

Output:

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m

Note
If you receive the error "heptio-authenticator-aws": executable file not found in $PATH, then your kubectl is not configured for Amazon EKS. For more information, see Configure kubectl for Amazon EKS.
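A quick way to check whether the binary is actually reachable is a standard shell lookup (nothing EKS-specific; it prints the binary's location if found):

command -v heptio-authenticator-aws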

VAS
  • I have followed that same documentation and created my kube config file with the server and certificate data filled in, but I'm getting the error listed in my question. – spiffytech Jun 11 '18 at 13:10
    There are several closed issues on that error. Most of them were caused by errors in the configuration file. Try checking your configuration file for tabs instead of spaces; they may cause kubectl to read the file incorrectly. – VAS Jun 11 '18 at 15:53

Things have gotten a bit simpler over time. To get started on Linux (or indeed WSL) you will need to:

  1. Install the AWS CLI and configure valid AWS CLI credentials (aws configure or e.g. use AWS SSO to generate time-limited credentials on the fly)
  2. Install eksctl and kubectl
  3. Install aws-iam-authenticator
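Each of those tools has a version command you can use to confirm it is installed and on your PATH (exact output varies by version):

aws --version
eksctl version
kubectl version --client
aws-iam-authenticator version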

At this point, assuming you already have a running Kubernetes Cluster in your AWS account you can generate/update the kube configuration in $HOME/.kube/config with this one command:

aws eks update-kubeconfig --name test

Where test is your cluster name according to the AWS Console (or aws eks list-clusters).

You can now run for instance kubectl get svc without getting an error.
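If the cluster lives in a non-default region or under a named profile, update-kubeconfig accepts the usual AWS CLI selectors (the region and profile names here are placeholders):

aws eks update-kubeconfig --name test --region us-east-1 --profile my-profile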

M Jensen

Pass your AWS configuration variables inline with your command (or export them as environment variables).

Example:

AWS_PROFILE=profile_name kubectl get all
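Or, to set the profile for the whole shell session rather than per command:

export AWS_PROFILE=profile_name
kubectl get all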
getglad

I resolved this issue by fixing the base64-encoded certificate in the kubeconfig file I created. The documentation is a little confusing because it says to use the --cluster-name switch with the AWS CLI for the EKS service; for me the --name switch worked. This printed the base64 value to the CLI, and I pasted it into the kubeconfig file, saved it, and it worked.

$ AWS_ACCESS_KEY_ID=[YOUR_ID_HERE] AWS_SECRET_ACCESS_KEY=[YOUR_SECRET_HERE] aws eks describe-cluster --name staging --query cluster.certificateAuthority.data
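To sanity-check the value before pasting it in, you can decode it and confirm it looks like a PEM certificate; the first line should read -----BEGIN CERTIFICATE-----. (--output text is the AWS CLI's global flag for unquoted output; base64 -d is the GNU decode flag, and older macOS uses -D instead.)

$ aws eks describe-cluster --name staging --query cluster.certificateAuthority.data --output text | base64 -d | head -1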