
There are many QEMU-based virtual machines on my server. Some of them are not allowed to use networking, and the only way to attach to them is via a virtual serial port.

Because the serial console has drawbacks (for example, it works poorly with tmux and vim), I want to use ssh over the serial line.

Here is my attempt (QEMU maps the serial port to a Unix socket on the host machine):

#In Guest: (serial is /dev/ttyS1)
socat -d -d tcp:127.0.0.1:22 file:/dev/ttyS1,b115200

#In Host: (mapped to /var/run/qemu-server/vm.serial1)
socat -d -d tcp-l:10022 UNIX-CONNECT:/var/run/qemu-server/vm.serial1

Then I tried to run ssh -vv root@127.0.0.1 -p 10022, but it failed with:

Bad packet length 1349676916.
ssh_dispatch_run_fatal: Connection to UNKNOWN port 65535: message authentication code incorrect

But when I use

# Ctrl+C to terminate
socat STDIO,raw,echo=0,escape=0x3 UNIX-CONNECT:/var/run/qemu-server/vm.serial1

I can see

SSH-2.0-OpenSSH_7.6p1

So the tunnel is connected, but it seems it can't carry bare TCP over the serial port (perhaps due to the lack of framing and flow control).

So I tried running pppd on the serial line, bringing up an interface, and then using ssh via TCP over IP over PPP...

But there are many VMs, and managing so many interfaces created by pppd is hard because there is no automatic IP assignment. Also, dedicating an IP address just for ssh is wasteful.
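A sketch of that pppd-over-serial attempt, for reference (the addresses and options here are illustrative, not my exact commands):

# In Guest (serial is /dev/ttyS1): bring up one end of the PPP link
pppd /dev/ttyS1 115200 10.0.0.2:10.0.0.1 local noauth nodetach

# In Host: bridge the VM's serial socket to a PTY, then run the other end
socat PTY,link=/tmp/vm1.pty,raw,echo=0 UNIX-CONNECT:/var/run/qemu-server/vm.serial1 &
pppd /tmp/vm1.pty 115200 10.0.0.1:10.0.0.2 local noauth nodetach

# In Host, once the link is up (from another terminal):
ssh root@10.0.0.2

This works, but every VM needs its own ppp interface and its own pair of addresses, which is exactly the management problem described above.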

So is it possible to run a bare TCP socket over serial? Or to skip the IP layer created by pppd and open a TCP socket directly over PPP, like this:

+----------------------+
|                      |
|      TCP Socket      |
|                      |
+----------------------+
|                      |
| Data Link Layer(PPP) |
|                      |
+----------------------+

I only want to expose a TCP port (or a Unix socket) to users who want to ssh into a VM.

I tried to search for solutions, but all the answers are about transferring serial over TCP, while what I want is the reverse: how do I transfer TCP over serial?

2 Answers


You're trying to shove a bunch of protocols that have nothing to do with serial consoles over a serial console. It's not going to yield reasonable results, and the data you want is already being offered in plain text.

If you use libvirt to define and manage your VMs, this work has already been done for you. virsh console <VM Name> will connect you to a serial console of a VM, and it works just fine - you can also pipe that over SSH from the host, or point virsh at the host over a qemu+ssh:// URI if you want to do that. Similarly, most other management and orchestration systems like OpenStack or XenServer have comparable methods of connecting to pty-like VM serial devices without the need for Unix sockets.
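For example (the VM name and host are hypothetical, and the qemu+ssh URI assumes libvirt is managing the guests):

# On the host
virsh console vm1

# Or from a remote machine, tunnelled over SSH to the host's libvirt
virsh -c qemu+ssh://user@vmhost.example.com/system console vm1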

tmux doesn't work as well with serial interfaces as minicom or screen does. If you're taking a manual approach, connecting either of those to PTY or socket devices would yield satisfactory results.
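If you stick with raw QEMU sockets, a sketch of that manual approach (paths are just examples) is to expose the socket as a PTY and attach screen or minicom to it:

# In Host: turn the VM's serial socket into a PTY link
socat PTY,link=/tmp/vm.serial,raw,echo=0 UNIX-CONNECT:/var/run/qemu-server/vm.serial1 &

# Then attach a real terminal program to it
screen /tmp/vm.serial 115200
# or: minicom -D /tmp/vm.serial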

If you have many serial consoles in a manual orchestration plane like pure QEMU tends to be, using a console server would be prudent rather than connecting to each VM on its own special socket or PTY. conserver is a good one that I've worked with fairly regularly. You can set up aliases for connecting serial programs to each VM and connect to those programs through conserver via SSH; this yields a simple console <servername> from the conserver host connected via SSH to get to where you need to be.
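A minimal conserver.cf sketch for one such console (the name, socket path, and exact directives are illustrative; check the documentation for your conserver version):

# /etc/conserver.cf
default * {
    rw *;
    master localhost;
}
console vm1 {
    type exec;
    exec socat STDIO,raw,echo=0 UNIX-CONNECT:/var/run/qemu-server/vm.serial1;
}

Users then ssh to the conserver host and simply run console vm1.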

Spooler
  • What I mean is that `tmux` inside the VM doesn't work very well over the console. `virsh console` is `vt220`, while ssh is `xterm-256color`, which supports more features than the serial console. Also, the serial console can't handle authentication for multiple users of a VM. These VMs can't use networking, and ssh access to the host is restricted, so I want to give VM users a direct tunnel without logging into the host via ssh. If I could use the serial device like a Unix socket, we could expose it in many ways (e.g. via php). ssh to the VM would also give users a tunnel for testing local services, or sftp. – Komeiji Kuroko Apr 15 '18 at 17:05
  • If the VMs can't have any networking, but CAN have serial consoles to the host, then using a host-only network to them could be a reasonable thing. Then you can port-forward SSH directly to the VMs from the host without having to use the host's SSH server (a rough sketch of that follows these comments). I'm not seeing another way to get reasonable multi-user logins given the conditions. – Spooler Apr 16 '18 at 14:33
  • The host is a Proxmox server with 50+ VMs on it. We need to keep every VM separate, so they can't share the same host-only adapter, and creating that many host-only interfaces would be messy. Also, these VM users don't have permission to add an interface; they can only add/delete/start VMs and change VM settings. Not all users are allowed to ssh to the host server, either. – Komeiji Kuroko Apr 18 '18 at 13:59
  • Hm. Managing that many layer 2 networks is definitely going to suck. You could keep an extremely limited L2 network between the host and infrastructure, allowing only a layer 3 tunnel (either GRE or IPsec depending on whether you want encryption) to each VM in order to isolate their traffic via TUN interfaces. This is a common model in multi-tenant cloud provider networks, and prevents VMs from sharing a broadcast domain. – Spooler Apr 18 '18 at 14:47
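A rough sketch of the per-VM port forwarding suggested above, assuming a host-only network where one VM's sshd is reachable at 192.168.100.2 (all addresses and ports are hypothetical):

# In Host: enable forwarding and DNAT one host port per VM to that VM's sshd
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 10022 -j DNAT --to-destination 192.168.100.2:22
iptables -t nat -A POSTROUTING -p tcp -d 192.168.100.2 --dport 22 -j MASQUERADE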

This can work. I have done it using VMware with a Windows guest that was running an IPsec VPN client, which disabled all other networking, including to the VM host.

The trick that I found was that the SSH client and the SSH server both send their banner immediately on connect. If you connect /dev/ttyS1 to tcp:22 (and there is nothing listening on the other end of ttyS1), the banner goes into the bitbucket.

Similarly, if you run ssh -o "ProxyCommand=socat - /dev/ttyS0" target and you have not yet established the server-side socat to tcp:22, the client banner gets lost. Ultimately, it became a question of timing.
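Spelled out for this question's setup, that ProxyCommand form would point the ssh client straight at the VM's serial socket (a sketch, subject to exactly the same ordering caveat - the guest-side socat to tcp:22 has to be running already):

# In Host: skip the TCP listener and talk to the serial socket directly
ssh -o "ProxyCommand=socat - UNIX-CONNECT:/var/run/qemu-server/vm.serial1" root@vm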

You SHOULD be able to do what you need to do in the following sequence:

#In Host: (mapped to /var/run/qemu-server/vm.serial1)
socat -d -d UNIX-CONNECT:/var/run/qemu-server/vm.serial1 tcp-l:10022

#In Guest: (serial is /dev/ttyS1)
socat -d -d tcp:127.0.0.1:22 file:/dev/ttyS1,b115200

#In host:
ssh -vv root@127.0.0.1 -p 10022

The essential differences between what you did, and the above:

  1. In the host, I establish the socat connection to the Unix socket first, rather than waiting for the incoming TCP connection and only then opening the Unix socket. This way, when we run the socat in the guest, and the server sends its banner, socat will read and buffer the banner until the incoming TCP connection arrives.
  2. I run the socat in the guest next, so that it can send its banner to the waiting socat, to be buffered.
  3. Only then can we run the ssh client.
Rogan Dawes