
I'm in the process of upgrading a cluster to Kubernetes v1.16. There are some notes about deprecations in that update. One of those notes is that the apiVersion extensions/v1beta1 for Deployment is removed, and that one should switch to the apiVersion apps/v1 prior to upgrading.

That sounds simple enough, yet I cannot figure out how to actually effect that change.

We updated the YAML file to include the newer apiVersion; that is, the YAML file containing our Deployment starts with,

apiVersion: apps/v1
kind: Deployment

However, if we kubectl apply this to our server and then kubectl get deployment -o yaml it back to verify the change, it hasn't updated; the apiVersion reported by the server is still:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    [...]
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion": "apps/v1", ...}

I included the metadata.annotations field here; you can see that kubectl's own annotation acknowledges it applied a manifest with the correct apiVersion, yet the resource still comes back with the old version!

The linked deprecation notice also talks about using kubectl convert; we've tried that, too. AFAICT, kubectl convert only works on offline YAML representations of the resource, and even kubectl apply-ing the result of kubectl convert doesn't change the behavior above.
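For concreteness, the convert-and-apply attempt was along these lines (filenames here are placeholders):

kubectl convert -f deployment.yaml --output-version apps/v1 > deployment-apps-v1.yaml
kubectl apply -f deployment-apps-v1.yaml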

I've also tried kubectl patch to patch that specific field, but again, verifying it indicates that no actual change happens. (The output from kubectl patch also says that nothing changed.)
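The patch attempt was something like the following (the deployment name is a placeholder). The server appears to simply ignore a patch to apiVersion, which would explain the "nothing changed" output:

kubectl patch deployment my-deployment --type=json \
  -p '[{"op": "replace", "path": "/apiVersion", "value": "apps/v1"}]'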

How do I actually update the in-Kubernetes version of this resource?

(Some minor notes: this cluster is an Azure AKS instance; Azure actually flagged this resource as needing attention.)

Thanatos

2 Answers


Thank you backwards compatibility!!!

kubectl get deployment XXX is apparently ambiguous, since the server has deployments in multiple api groups. When a resource exists in multiple api groups, kubectl uses the first group listed in discovery docs published by the server which contains the resource. For backwards compatibility, that is the extensions api group.
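You can confirm this by asking the server which groups serve the resource; on a pre-1.16 server, deployments should be listed under both groups:

kubectl api-resources --api-group=extensions
kubectl api-resources --api-group=apps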

Try: kubectl get deploy.extensions XXX and kubectl get deploy.apps XXX to verify that your deployment actually exists in two api groups.
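For example (the deployment name is a placeholder; the first line of -o yaml output is the apiVersion):

kubectl get deploy.extensions my-deployment -o yaml | head -n 1   # apiVersion: extensions/v1beta1
kubectl get deploy.apps my-deployment -o yaml | head -n 1         # apiVersion: apps/v1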

https://github.com/kubernetes/kubernetes/issues/58131#issuecomment-356823588

  • Hmm. That's interesting. The comments there make me think there is still some innate version ("the simplest approach is to get/put every object after upgrades. objects that don't need migration will no-op (they won't even increment resourceVersion in etcd). objects that do need migration will persist in the new preferred storage version."); is it possible to change that? (A sketch of that get/put approach follows these comments.) – Thanatos Jun 04 '20 at 14:18
  • Your reply made me think to check other Deployments we have; I figured they had the right apiVersion, since Azure wasn't flagging them. But they also report the same old apiVersion if I query without qualifying, and report the newer version if I qualify it. So, they act the same as the problematic resource, yet Azure doesn't report them as requiring upgrade. (This also makes me think there is some innate server-side version that the resource has, and kubectl is merely showing different representations of the object.) – Thanatos Jun 04 '20 at 14:20
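A minimal sketch of the get/put approach quoted in the first comment (the deployment name is a placeholder): read the object back through the new API group and write it straight back, which should persist it at the new preferred storage version, and no-op for objects that don't need migration.

kubectl get deploy.apps my-deployment -o yaml | kubectl replace -f -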
  1. Install/configure the kubectl-convert plugin.
  2. kubectl-convert -f deprecated-version.yaml --output-version apps/v1 (apps/v1 is the target for a Deployment; other deprecated kinds take their own target group/version, e.g. networking.k8s.io/v1 for Ingress). A sketch of the workflow follows this list.
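Assuming Linux on amd64 (see the official kubectl docs for other platforms; filenames are placeholders):

# Download and install the standalone kubectl-convert plugin binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert"
chmod +x kubectl-convert
sudo mv kubectl-convert /usr/local/bin/kubectl-convert

# Convert the manifest to the apps/v1 schema, then re-apply it
kubectl convert -f deprecated-version.yaml --output-version apps/v1 > converted.yaml
kubectl apply -f converted.yaml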