We have one AKS cluster where we deployed the Consul Helm chart into the consul namespace. The chart created many CRDs. Then, using these CRDs, it internally created one more namespace, applicationns.
When we uninstalled Consul, it was deleted cleanly. But when we then tried to delete applicationns, it got stuck in the Terminating state for a long time.
So we followed this link and force-deleted the namespace.
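For context, what I ran was a sketch along these lines (assuming the usual finalize trick from such guides: dump the namespace JSON, empty spec.finalizers, and PUT it back through the /finalize subresource):

```shell
# Dump the stuck namespace, strip its finalizers, and replace it via the
# /finalize subresource so the API server drops the namespace object.
kubectl get namespace applicationns -o json \
  | jq '.spec.finalizers = []' \
  > applicationns.json
kubectl replace --raw "/api/v1/namespaces/applicationns/finalize" \
  -f applicationns.json
```
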
Now, when I run "kubectl get ns", the namespace is no longer shown, but:
kubectl get serviceintentions -n applicationns
NAME     SYNCED   LAST SYNCED   AGE
servi1   True     41d           42d
servi2   True     41d           42d
servi3   True     41d           42d
Please suggest how to clean these up. There are many custom resources like these from the other Consul CRDs as well, and deleting them does not work either.
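For reference, a plain delete just hangs. My guess (untested, and I'm not sure it is safe now that the Consul controller is gone) is that the resources still carry finalizers that nothing will ever process, so they would have to be cleared by hand:

```shell
# Straightforward delete hangs: the resources keep their finalizers, but
# the Consul controller that would honor them was removed with the chart.
kubectl delete serviceintentions --all -n applicationns --wait=false

# My guess at a workaround: empty metadata.finalizers on each resource so
# the API server can remove it without the controller. Resource names here
# are the ones from my output above.
for r in servi1 servi2 servi3; do
  kubectl patch serviceintentions "$r" -n applicationns \
    --type merge -p '{"metadata":{"finalizers":[]}}'
done
```

Is clearing finalizers like this the right approach, or is there a proper way to clean up all the Consul custom resources at once?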