
Let's say you have a large organisation that is running its own Kubernetes cluster on bare metal.

The idea is that different business units in this organisation can get cloud resources 'on demand' and do what they want on it.

To this end you could create namespaces and give each BU their own namespace to do what they want with.
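
As a very rough sketch of that approach (the names and numbers below are just placeholders), each business unit could get its own namespace with a ResourceQuota attached so it can't starve the others:

    # namespace-bu-alpha.yaml - one namespace per business unit (names are placeholders)
    apiVersion: v1
    kind: Namespace
    metadata:
      name: bu-alpha
    ---
    # Cap what this business unit can consume inside its namespace
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: bu-alpha-quota
      namespace: bu-alpha
    spec:
      hard:
        requests.cpu: "20"
        requests.memory: 64Gi
        limits.cpu: "40"
        limits.memory: 128Gi
        pods: "200"

Applied with kubectl apply -f namespace-bu-alpha.yaml, plus RoleBindings scoped to the namespace, that gives each BU a fenced-off slice of the shared cluster.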

But what if either of the following applies:

  • They want to further split their namespace into sub-namespaces - are sub-namespaces a thing?
  • They want to run their own Kubernetes cluster, i.e. the use case might be that this organisation is developing a Kubernetes solution for someone else - so they would build it here and, once it's all built, deploy it to a fresh Kubernetes cluster on the client's site.

Is this possible?

dwjohnston

3 Answers


There are two concepts I can refer you to that take different approaches to running a Kubernetes cluster as a subordinate of another Kubernetes cluster. Neither of them is ready to use out of the box, but these articles give a good explanation of how it could be done:

Kubernetes comes with its own growing feature set for multi-tenancy use cases. However, we had the goal of offering our users a fully-managed Kubernetes without any limitations to the functionality they would get using any vanilla Kubernetes environment, including privileged access to the nodes. Further, in bigger enterprise scenarios a single Kubernetes cluster with its inbuilt isolation mechanisms is often not sufficient to satisfy compliance and security requirements. More advanced (firewalled) zoning or layered security concepts are tough to reproduce with a single installation. With namespace isolation both privileged access as well as firewalled zones can hardly be implemented without sidestepping security measures.

Now you could go and set up multiple completely separate (and federated) installations of Kubernetes. However, automating the deployment and management of these clusters would need additional tooling and complex monitoring setups. Further, we wanted to be able to spin clusters up and down on demand, scale them, update them, keep track of which clusters are available, and be able to assign them to organizations and teams flexibly. In fact this setup can be combined with a federation control plane to federate deployments to the clusters over one API endpoint.

Based on the above requirements we set out to build what we call Giantnetes - or if you’re into movies, Kubeception. At the most basic abstraction it is an outer Kubernetes cluster (the actual Giantnetes), which is used to run and manage multiple completely isolated user Kubernetes clusters.

The physical machines are bootstrapped by using our CoreOS Container Linux bootstrapping tool, Mayu. The Giantnetes components themselves are self-hosted, i.e. a kubelet is in charge of automatically bootstrapping the components that reside in a manifests folder. You could call this the first level of Kubeception.

Once the Giantnetes cluster is running we use it to schedule the user Kubernetes clusters as well as our tooling for managing and securing them.

We chose Calico as the Giantnetes network plugin to ensure security, isolation, and the right performance for all the applications running on top of Giantnetes.

Then, to create the inner Kubernetes clusters, we initiate a few pods, which configure the network bridge, create certificates and tokens, and launch virtual machines for the future cluster. To do so, we use lightweight technologies such as KVM and qemu to provision CoreOS Container Linux VMs that become the nodes of an inner Kubernetes cluster. You could call this the second level of Kubeception.

Currently this means we are starting Pods with Docker containers that in turn start VMs with KVM and qemu. However, we are looking into doing this with rkt qemu-kvm, which would result in using a rktnetes setup for our Giantnetes.

Friday, January 20, 2017 by Hector Fernandez, Software Engineer & Puja Abbassi, Developer Advocate, Giant Swarm
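
To make the two "Kubeception" levels above a bit more concrete: the "manifests folder" is the kubelet's static pod mechanism. A kubelet started with --pod-manifest-path=/etc/kubernetes/manifests will create and supervise any pod whose definition is dropped into that directory, before any API server exists. A generic (not Giant Swarm specific) static pod looks like this:

    # /etc/kubernetes/manifests/kube-apiserver.yaml - illustrative static pod
    # The kubelet watches this directory and runs the pod itself, so the
    # control plane can bootstrap before any API server exists.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: k8s.gcr.io/kube-apiserver:v1.5.2     # version tag is only an example
        command:
        - kube-apiserver
        - --etcd-servers=http://127.0.0.1:2379
        - --service-cluster-ip-range=10.96.0.0/12

The second level - pods that launch KVM/qemu virtual machines to act as nodes of the inner clusters - is not shown in the excerpt either. The general shape of such a pod is roughly the following sketch, where the image name is hypothetical and the device handling is an assumption, not Giant Swarm's actual manifest:

    # Sketch of a pod that runs a qemu/KVM virtual machine (illustrative only)
    apiVersion: v1
    kind: Pod
    metadata:
      name: inner-cluster-node-0
    spec:
      containers:
      - name: kvm
        image: example.org/qemu-kvm-runner:latest    # hypothetical image
        securityContext:
          privileged: true                           # required to use KVM and set up the network bridge
        volumeMounts:
        - name: dev-kvm
          mountPath: /dev/kvm
      volumes:
      - name: dev-kvm
        hostPath:
          path: /dev/kvm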

Coincidentally, the idea of cloud native applications brought up the pets vs. cattle discussion, where you start to consider every component of your infrastructure as a disposable part of a herd and not as an irreplaceable pet anymore. According to this new way of thinking, every component must be able to fail without an impact: servers, racks, data centers… everything. Ironically, however, many companies now treat their Kubernetes cluster like a pet and spend much time and resources on its care and well-being.

To us, this seemed very strange and not how it should be, since it contradicts the base concept of cloud native applications. Therefore, our mission was clear: We wanted Kubernetes clusters to become low-maintenance cattle: fully-managed, scalable, multitenant, and disposable at any time. Also, we wanted to have a single API for all our clusters.

The first thing to do is to set up an outer Kubernetes cluster which runs the master components of multiple separate customer clusters. Like any other Kubernetes cluster, the master cluster consists of four master components: the API server, the etcd key value store, the scheduler, and the controller. In order to prevent downtimes, we create a high availability setup with several entities for every component.

Then, to start the inner clusters, we create a namespace, generate certificates, tokens and SSH keys, and deploy the master components. Subsequently, we add an ingress to make the API server and etcd accessible from the outside. Finally, we install basic plugins like Heapster, kube-proxy, Kube-dns, and the dashboard.

15 Mar 2017 9:49am, by Sebastian Scheele
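
The manifests themselves are not part of the excerpt, but the pattern it describes - a customer's master components running as ordinary workloads in a namespace of the outer cluster, exposed through an ingress - can be sketched roughly as follows. All names, images and hostnames are placeholders, and a real setup also needs the certificates and tokens mentioned above (e.g. TLS passthrough on the ingress):

    # Namespace per customer cluster inside the outer cluster
    apiVersion: v1
    kind: Namespace
    metadata:
      name: customer-1
    ---
    # The customer's kube-apiserver runs as a normal Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: customer-1-apiserver
      namespace: customer-1
    spec:
      replicas: 2                                       # several replicas for HA, as described above
      selector:
        matchLabels:
          app: customer-1-apiserver
      template:
        metadata:
          labels:
            app: customer-1-apiserver
        spec:
          containers:
          - name: kube-apiserver
            image: k8s.gcr.io/kube-apiserver:v1.5.2     # version tag is only an example
            command:
            - kube-apiserver
            - --etcd-servers=http://customer-1-etcd:2379   # etcd is deployed the same way
            - --service-cluster-ip-range=10.96.0.0/12
            ports:
            - containerPort: 6443
    ---
    # Service in front of the API server replicas
    apiVersion: v1
    kind: Service
    metadata:
      name: customer-1-apiserver
      namespace: customer-1
    spec:
      selector:
        app: customer-1-apiserver
      ports:
      - port: 6443
        targetPort: 6443
    ---
    # Ingress that makes the inner API server reachable from outside the outer cluster
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: customer-1-apiserver
      namespace: customer-1
    spec:
      rules:
      - host: api.customer-1.example.com                # placeholder hostname
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: customer-1-apiserver
                port:
                  number: 6443

The kubelets of the customer's worker nodes (wherever those run) then point at that external endpoint instead of at a master on their own machines.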

VAS

It's not possible to run production Kubernetes clusters inside containers.

However, you can run Kubernetes nodes on virtual machines rather than bare metal. That way you can allocate resources to the business units that need them more easily than by running Kubernetes directly on bare metal.

You should also look into OpenShift, a Kubernetes distribution with additional functionality for multi-tenancy.

Michael Hampton

Check out the Gardener project, which is attempting to do this.

In short, you have a Kubernetes cluster which runs the control planes of other clusters as normal pods. Kubernetes within Kubernetes.