
I'm setting up an AWS EKS cluster with Terraform from an EC2 instance. The setup consists of an EC2 launch configuration and an Auto Scaling group for the worker nodes. After creating the cluster, I was able to configure kubectl with aws-iam-authenticator.
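
For context, kubectl is pointed at the cluster with roughly the aws-iam-authenticator exec stanza from the EKS getting-started guide; the cluster name below is a placeholder and the rest of the kubeconfig is trimmed:

users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"

With that in place, I ran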

kubectl get nodes 

It returned

No resources found

because the worker nodes had not joined the cluster. So I created the following aws-auth-cm.yaml file

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

with the IAM role ARN of the worker node, and ran

kubectl apply -f aws-auth-cm.yaml

It returned

configmap/aws-auth created
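
The getting-started guide's next step is to watch for the nodes to register and reach Ready; in my case nothing ever showed up in:

kubectl get nodes --watch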

Then I realised that the role ARN configured in aws-auth-cm.yaml was the wrong one, so I updated the same file with the exact worker node role ARN.
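
The updated file (account ID redacted, matching the object shown in the error below) now pointed at the worker node role:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::XXXXXXXXX:role/worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes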

But this time I got a 403 when I ran kubectl apply -f aws-auth-cm.yaml again.

It returned

Error from server (Forbidden): error when retrieving current configuration of: Resource: "/v1, Resource=configmaps", GroupVersionKind: "/v1, Kind=ConfigMap" Name: "aws-auth", Namespace: "kube-system" Object: &{map["apiVersion":"v1" "data":map["mapRoles":"- rolearn: arn:aws:iam::XXXXXXXXX:role/worker-node-role\n username: system:node:{{EC2PrivateDNSName}}\n groups:\n - system:bootstrappers\n - system:nodes\n"] "kind":"ConfigMap" "metadata":map["name":"aws-auth" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]} from server for: "/home/username/aws-auth-cm.yaml": configmaps "aws-auth" is forbidden: User "system:node:ip-XXX-XX-XX-XX.ec2.internal" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

I'm not able to reconfigure the ConfigMap after this step.
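
A plain permission check with kubectl (nothing specific to my setup) shows the same thing the error above does, namely that the identity kubectl is now using cannot read ConfigMaps in kube-system:

kubectl auth can-i get configmaps -n kube-system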

I'm getting a 403 for commands like

kubectl apply
kubectl delete
kubectl edit 

for configmaps. Any help?


1 Answer


I found the reason why kubectl returned a 403 in this scenario.

As per the AWS EKS documentation, the user or role that created the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration.

When I created the aws-auth ConfigMap to join the worker nodes, I put in the ARN of the role/user that created the cluster instead of the ARN of the worker node role.

That mapping overrode the admin's system:masters group with system:bootstrappers and system:nodes in RBAC, which effectively locked out the admin. It is not recoverable this way, because the admin has lost the privileges that came with system:masters.
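
A quick way to see which IAM identity kubectl is actually presenting (and therefore which mapRoles entry it hits) is to compare the caller identity from the same shell with the ARNs in aws-auth; this is plain AWS CLI, not anything EKS specific:

aws sts get-caller-identity

If the role shown there (it appears as an assumed-role ARN when running from an EC2 instance) is the cluster creator's, and that role sits in mapRoles with only system:bootstrappers and system:nodes, the creator is mapped as a node instead of an administrator, which is exactly what happened here.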
