
I have a Kubernetes cluster set up with kubeadm on EC2 instances in AWS: the master node is a t2.large instance and the worker node is an m5.metal instance. The CNI is Flannel and the cluster runs well. To provision VMs alongside pods, I have set up KubeVirt (https://kubevirt.io/), an open-source CNCF sandbox project, and it is running fine. I followed the steps in this guide: https://kubevirt.io/quickstart_cloud/

To run my VM in the cluster, I first created an Ubuntu-based container disk using the following Dockerfile:

Dockerfile

FROM scratch
ADD --chown=107:107 bionic-server-cloudimg-arm64.img /disk/

The bionic-server-cloudimg-arm64.img image was downloaded from https://cloud-images.ubuntu.com/bionic/

The image built successfully and I pushed it to Docker Hub.
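For completeness, the build and push steps were roughly as follows (a sketch; the image name is the one referenced in vm.yaml):

```shell
# Build the container-disk image from the Dockerfile above
# and push it to Docker Hub under the name used in vm.yaml.
docker build -t arbabu/ubuntu-container-disk:v1 .
docker push arbabu/ubuntu-container-disk:v1
```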
After that, I created the VM using the following YAML file.

vm.yaml

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: arbabu/ubuntu-container-disk:v1
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
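As an aside, the userDataBase64 field is just base64-encoded cloud-init user data; a quick check (plain shell, nothing KubeVirt-specific) shows what the payload decodes to:

```shell
# Decode the cloud-init payload embedded in vm.yaml.
# It decodes to the five literal characters Hi.\n
# (a backslash followed by n, not a real newline).
echo 'SGkuXG4=' | base64 -d
```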

After I ran kubectl create -f vm.yaml, the VM was provisioned. Then I ran virtctl start testvm to start the VM.

When I run kubectl get vms and kubectl get vmis, both the VirtualMachine and the VirtualMachineInstance show as running.


The following is the output of kubectl describe vm testvm:

Name:         testvm
Namespace:    default
Labels:       <none>
Annotations:  kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version:  kubevirt.io/v1
Kind:         VirtualMachine
Metadata:
  Creation Timestamp:  2022-08-08T11:26:10Z
  Generation:          2
  Managed Fields:
    API Version:  kubevirt.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:template:
          .:
          f:metadata:
            .:
            f:labels:
              .:
              f:kubevirt.io/domain:
              f:kubevirt.io/size:
          f:spec:
            .:
            f:domain:
              .:
              f:devices:
                .:
                f:disks:
                f:interfaces:
              f:resources:
                .:
                f:requests:
                  .:
                  f:memory:
            f:networks:
            f:volumes:
    Manager:      kubectl-create
    Operation:    Update
    Time:         2022-08-08T11:26:10Z
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
      f:spec:
        f:running:
    Manager:      Go-http-client
    Operation:    Update
    Time:         2022-08-08T11:26:13Z
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:created:
        f:printableStatus:
        f:ready:
        f:volumeSnapshotStatuses:
    Manager:         Go-http-client
    Operation:       Update
    Subresource:     status
    Time:            2022-08-08T11:26:49Z
  Resource Version:  10203
  UID:               af31a90e-c26b-44a2-a7ef-3ae4d024f861
Spec:
  Running:  true
  Template:
    Metadata:
      Creation Timestamp:  <nil>
      Labels:
        kubevirt.io/domain:  testvm
        kubevirt.io/size:    small
    Spec:
      Domain:
        Devices:
          Disks:
            Disk:
              Bus:  virtio
            Name:   containerdisk
            Disk:
              Bus:  virtio
            Name:   cloudinitdisk
          Interfaces:
            Masquerade:
            Name:  default
        Machine:
          Type:  q35
        Resources:
          Requests:
            Memory:  64M
      Networks:
        Name:  default
        Pod:
      Volumes:
        Container Disk:
          Image:  arbabu/ubuntu-container-disk:v1
        Name:     containerdisk
        Cloud Init No Cloud:
          userDataBase64:  SGkuXG4=
        Name:              cloudinitdisk
Status:
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  2022-08-08T11:26:47Z
    Status:                True
    Type:                  Ready
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Status:                True
    Type:                  LiveMigratable
  Created:                 true
  Printable Status:        Running
  Ready:                   true
  Volume Snapshot Statuses:
    Enabled:  false
    Name:     containerdisk
    Reason:   Snapshot is not supported for this volumeSource type [containerdisk]
    Enabled:  false
    Name:     cloudinitdisk
    Reason:   Snapshot is not supported for this volumeSource type [cloudinitdisk]
Events:
  Type    Reason            Age    From                       Message
  ----    ------            ----   ----                       -------
  Normal  SuccessfulCreate  2m25s  virtualmachine-controller  Started the virtual machine by creating the new virtual machine instance testvm

The following is the output of kubectl describe vmis testvm:

Name:         testvm
Namespace:    default
Labels:       kubevirt.io/domain=testvm
              kubevirt.io/nodeName=workernode-01
              kubevirt.io/size=small
Annotations:  kubevirt.io/latest-observed-api-version: v1
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version:  kubevirt.io/v1
Kind:         VirtualMachineInstance
Metadata:
  Creation Timestamp:  2022-08-08T11:26:13Z
  Finalizers:
    kubevirt.io/virtualMachineControllerFinalize
    foregroundDeleteVirtualMachine
  Generation:  9
  Managed Fields:
    API Version:  kubevirt.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubevirt.io/latest-observed-api-version:
          f:kubevirt.io/storage-observed-api-version:
        f:finalizers:
          .:
          v:"kubevirt.io/virtualMachineControllerFinalize":
        f:labels:
          .:
          f:kubevirt.io/domain:
          f:kubevirt.io/nodeName:
          f:kubevirt.io/size:
        f:ownerReferences:
          .:
          k:{"uid":"af31a90e-c26b-44a2-a7ef-3ae4d024f861"}:
      f:spec:
        .:
        f:domain:
          .:
          f:devices:
            .:
            f:disks:
            f:interfaces:
          f:firmware:
            .:
            f:uuid:
          f:machine:
            .:
            f:type:
          f:resources:
            .:
            f:requests:
              .:
              f:memory:
        f:networks:
        f:volumes:
      f:status:
        .:
        f:activePods:
          .:
          f:d124681d-8f30-4577-99e7-b696b13da6ac:
        f:conditions:
        f:guestOSInfo:
        f:interfaces:
        f:launcherContainerImageVersion:
        f:migrationMethod:
        f:migrationTransport:
        f:nodeName:
        f:phase:
        f:phaseTransitionTimestamps:
        f:qosClass:
        f:runtimeUser:
        f:virtualMachineRevisionName:
        f:volumeStatus:
    Manager:    Go-http-client
    Operation:  Update
    Time:       2022-08-08T11:26:49Z
  Owner References:
    API Version:           kubevirt.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  VirtualMachine
    Name:                  testvm
    UID:                   af31a90e-c26b-44a2-a7ef-3ae4d024f861
  Resource Version:        10204
  UID:                     86553307-c1e5-4fab-93a6-dc4f1eb4e27a
Spec:
  Domain:
    Cpu:
      Cores:    1
      Model:    host-model
      Sockets:  1
      Threads:  1
    Devices:
      Disks:
        Disk:
          Bus:  virtio
        Name:   containerdisk
        Disk:
          Bus:  virtio
        Name:   cloudinitdisk
      Interfaces:
        Masquerade:
        Name:  default
    Features:
      Acpi:
        Enabled:  true
    Firmware:
      Uuid:  5a9fc181-957e-5c32-9e5a-2de5e9673531
    Machine:
      Type:  q35
    Resources:
      Requests:
        Memory:  64M
  Networks:
    Name:  default
    Pod:
  Volumes:
    Container Disk:
      Image:              arbabu/ubuntu-container-disk:v1
      Image Pull Policy:  IfNotPresent
    Name:                 containerdisk
    Cloud Init No Cloud:
      userDataBase64:  SGkuXG4=
    Name:              cloudinitdisk
Status:
  Active Pods:
    d124681d-8f30-4577-99e7-b696b13da6ac:  workernode-01
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  2022-08-08T11:26:47Z
    Status:                True
    Type:                  Ready
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Status:                True
    Type:                  LiveMigratable
  Guest OS Info:
  Interfaces:
    Info Source:  domain
    Ip Address:   10.244.1.31
    Ip Addresses:
      10.244.1.31
    Mac:                             52:54:00:0b:f8:71
    Name:                            default
  Launcher Container Image Version:  quay.io/kubevirt/virt-launcher:v0.55.0
  Migration Method:                  BlockMigration
  Migration Transport:               Unix
  Node Name:                         workernode-01
  Phase:                             Running
  Phase Transition Timestamps:
    Phase:                        Pending
    Phase Transition Timestamp:   2022-08-08T11:26:13Z
    Phase:                        Scheduling
    Phase Transition Timestamp:   2022-08-08T11:26:13Z
    Phase:                        Scheduled
    Phase Transition Timestamp:   2022-08-08T11:26:47Z
    Phase:                        Running
    Phase Transition Timestamp:   2022-08-08T11:26:49Z
  Qos Class:                      Burstable
  Runtime User:                   0
  Virtual Machine Revision Name:  revision-start-vm-af31a90e-c26b-44a2-a7ef-3ae4d024f861-2
  Volume Status:
    Name:    cloudinitdisk
    Size:    1048576
    Target:  vdb
    Name:    containerdisk
    Target:  vda
Events:
  Type    Reason            Age    From                       Message
  ----    ------            ----   ----                       -------
  Normal  SuccessfulCreate  2m45s  virtualmachine-controller  Created virtual machine pod virt-launcher-testvm-cmdk2
  Normal  Created           2m9s   virt-handler               VirtualMachineInstance defined.
  Normal  Started           2m9s   virt-handler               VirtualMachineInstance started.

The issue occurs when I try to log into the VM with virtctl console --kubeconfig=$KUBECONFIG testvm. It prints the following and then hangs:

Successfully connected to testvm console. The escape sequence is ^]

I am not able to escape from this console or log into the VM properly.

The following is the output when I run virtctl vnc testvm from another window:

{"component":"","level":"info","msg":"--proxy-only is set to false, listening on 127.0.0.1\n","pos":"vnc.go:112","timestamp":"2022-08-08T11:36:18.121104Z"}

{"component":"","level":"info","msg":"connection timeout: 1m0s","pos":"vnc.go:153","timestamp":"2022-08-08T11:36:18.121363Z"}

Error encountered: could not find remote-viewer or vncviewer binary in $PATH
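As far as I can tell, this last error only means virtctl could not find a local VNC client; presumably installing one would address it (e.g. on a Debian/Ubuntu client machine, where the virt-viewer package provides remote-viewer):

```shell
# Assumption: a Debian/Ubuntu client machine. The virt-viewer
# package supplies the remote-viewer binary that virtctl vnc
# searches for in $PATH.
sudo apt-get update
sudo apt-get install -y virt-viewer
```

My main problem, however, is the serial console hanging.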

Does anyone know how to fix this, or a workaround I could apply?

arjunbnair