The answer to this question can be found in the Deployments section on kubernetes.io.

So, why will I need the selectors as well?

The quotes below are from the documentation for Kubernetes v1.14:
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment. .spec.selector must match .spec.template.metadata.labels, or it will be rejected by the API.
In API version apps/v1, .spec.selector and .metadata.labels do not
default to .spec.template.metadata.labels if not set. So they must be
set explicitly. Also note that .spec.selector is immutable after
creation of the Deployment in apps/v1.
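To make the matching requirement concrete, here is a minimal sketch of a Deployment manifest in which .spec.selector and .spec.template.metadata.labels are both set explicitly and match; the name nginx-deployment and the app: nginx label are just illustrative values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx                # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: nginx              # must match .spec.selector above, or the API rejects it
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```

If the two label sets diverged (say, app: nginx in the selector but app: web in the template), the API server would reject the manifest. And because .spec.selector is immutable in apps/v1, it pays to pick the labels deliberately up front: changing them later means deleting and recreating the Deployment.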
A Deployment may terminate Pods whose labels match the selector if
their template is different from .spec.template or if the total number
of such Pods exceeds .spec.replicas. It brings up new Pods with
.spec.template if the number of Pods is less than the desired number.
Are Pods already being started separately, but later brought under the umbrella of a Deployment to be managed together?

Simply speaking, no:
Note: You should not create other Pods whose labels match this selector, either directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this. If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won't behave correctly.
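To illustrate what this note warns about, assume the Deployment sketched earlier is running with the selector app: nginx, and a bare Pod like the following (the name stray-pod is hypothetical) is created alongside it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stray-pod               # hypothetical name
  labels:
    app: nginx                  # same labels the Deployment selects on - don't do this
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
```

Now four Pods match the selector while .spec.replicas is 3, so the Deployment may reconcile by terminating one of them, quite possibly the Pod you wanted to keep. Two controllers with overlapping selectors end up doing this to each other continuously, which is exactly the fight described above.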