
We have one cluster in AKS, where we deployed the Consul Helm chart in the consul namespace. It created many CRDs.

Then, using these CRDs, it internally created one more namespace, applicationns.

When we deleted Consul, it was removed successfully.

Then, when we tried to delete applicationns, it got stuck in the Terminating state for a long time.

So we followed this link and force-deleted the namespace.

Now, when I run "kubectl get ns", the namespace no longer shows up, but:

kubectl get serviceintentions -n applicationns
NAME     SYNCED   LAST SYNCED   AGE
servi1   True     41d           42d
servi2   True     41d           42d
servi3   True     41d           42d

Please suggest how to clean these up. There are many CRDs like this, and they are not getting deleted either.

commands tried

1 Answer


Follow the steps mentioned in How to force delete a Kubernetes Namespace to clean up the namespace.
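In case a namespace gets stuck in Terminating again, the usual force-delete procedure from that document can be wrapped in a small helper. This is a sketch, assuming kubectl and jq are installed and pointed at the right cluster; the function name force_delete_namespace is ours, not from any tool:

```shell
# Hypothetical helper: force-delete a namespace stuck in Terminating by
# clearing its finalizers through the namespace's /finalize subresource.
force_delete_namespace() {
  local ns="$1"
  # Dump the namespace object, drop the spec.finalizers list,
  # and push the result straight to the finalize endpoint.
  kubectl get namespace "$ns" -o json \
    | jq 'del(.spec.finalizers)' \
    | kubectl replace --raw "/api/v1/namespaces/${ns}/finalize" -f -
}
```

Usage would be, for example, force_delete_namespace applicationns.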

After following the document, if you find that the custom resources and CRDs are still not getting deleted even though the namespace is gone, follow the steps below:

Run kubectl get crd -o jsonpath='{.items[*].metadata.finalizers}' to check whether the delete operation is deadlocked on finalizers set on the CRDs. (CRDs are cluster-scoped, so the -A flag is not needed.)
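That check can be wrapped in a small loop that prints only the CRDs that still carry finalizers. A sketch, assuming kubectl is configured against the right cluster; the "consul" name filter is an assumption and should be adjusted to match your CRD names:

```shell
# Hypothetical helper: print each matching CRD that still has finalizers set.
list_crds_with_finalizers() {
  local crd finalizers
  # CRDs are cluster-scoped, so no -n flag is needed anywhere here.
  for crd in $(kubectl get crd -o name | grep consul); do
    finalizers=$(kubectl get "$crd" -o jsonpath='{.metadata.finalizers}')
    if [ -n "$finalizers" ]; then
      echo "$crd: $finalizers"
    fi
  done
}
```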

In that case, you can perform the following (note that CRDs are cluster-scoped, so the -n flag does not apply to them):

$ kubectl patch crd <custom-resource-definition-name> -p '{"metadata":{"finalizers":[]}}' --type=merge
$ kubectl delete crd <custom-resource-definition-name>

If you are still not able to delete the CRDs after the above procedure, manually edit each CRD using the command below and remove the finalizers section, so that the CRD gets deleted directly.

$ kubectl edit crd <CRD-Name>
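Since there are many Consul CRDs, the patch-and-delete steps above can be applied in bulk. This is a sketch under the same assumptions (kubectl configured for the right cluster, CRD names containing "consul"; adjust the grep filter as needed):

```shell
# Hypothetical helper: strip finalizers from every matching CRD, then delete it.
# Deleting a CRD also removes any remaining custom resources of that type.
cleanup_consul_crds() {
  local crd
  for crd in $(kubectl get crd -o name | grep consul); do
    # Clear finalizers first so the delete is not blocked, then delete the CRD.
    kubectl patch "$crd" --type=merge -p '{"metadata":{"finalizers":[]}}'
    kubectl delete "$crd" --ignore-not-found
  done
}
```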

To do a mass delete of all resources in your current namespace context, you can execute the kubectl delete command with the all resource alias and the --all flag.

$ kubectl delete all --all

To delete all resources from a specific namespace, add the -n flag.

$ kubectl delete all --all -n <namespace-name>

To delete all resources from all namespaces, use the -A (--all-namespaces) flag.

$ kubectl delete all --all -A

Note that the all alias only covers a common subset of built-in resource types (pods, services, deployments, and so on); custom resources such as serviceintentions are not included and must be deleted by type, or removed along with their CRDs as shown above.
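If the leftover custom resources themselves (such as the serviceintentions shown in the question) refuse to delete, they usually carry their own finalizers. A sketch of clearing and deleting them per resource; the resource type and namespace come from the question, while the function name is our own:

```shell
# Hypothetical helper: remove finalizers from every resource of a given type
# in a namespace, then delete each one, e.g.:
#   force_delete_resources serviceintentions applicationns
force_delete_resources() {
  local kind="$1" ns="$2" name
  for name in $(kubectl get "$kind" -n "$ns" -o name); do
    # Clear the per-resource finalizers so deletion can complete.
    kubectl patch "$name" -n "$ns" --type=merge -p '{"metadata":{"finalizers":[]}}'
    kubectl delete "$name" -n "$ns" --ignore-not-found
  done
}
```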