
I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 (x86_64) KVM guest, which runs on a Debian Wheezy Beta 4 (x86_64) KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through Virtio network interfaces. The hypervisor is attached to a 100 Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet.

In this setup we're experiencing a clearly noticeable lag between double-clicking a channel in the client and the channel join actually happening. This occurs with many different clients, from version 1.2.3 to 1.2.4, on both Linux and Windows systems.

Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16 ms latency for both the voice and the control channel. However, the deviation of the control channel is usually much higher than that of the voice channel. In some situations the control channel is displayed with a 100 ms ping and a deviation of about 1000. It seems TCP performance is the problem here.

We had no such problems on an earlier setup which was in principle quite similar to the new one. That setup used a Debian Lenny based Xen hypervisor with a paravirtualised guest machine instead, and an earlier version of the Mumble 1.2.3 series.

The current murmurd --version reports: 1.2.3-349-g315b5f5-2.1

Update: I found this discussion where people running Mumble on virtualised systems experience exactly the same problem as I do.

What I have tried so far (without any success at all):

  • Installed and tried the Mumble server on my hypervisor system
  • Installed and tried the beta 1.2.4 Mumble server on the guest system
  • Vacuumed my SQLite database from its original size of about 1 MiB down to about 300 KiB
  • Disabled IPv6 on the system to check whether it could be the source of the problem
  • Installed a guest system with Debian Squeeze (stable) and tried Mumble there

Update: I previously stated that I had tested putting the Mumble database and log file in a tmpfs in-memory file system and that it didn't solve the problem. I made an error there: the files weren't actually stored inside the tmpfs. Now that I have actually done this, the performance problems are gone. But storing them in a tmpfs is not a real solution to my problem.
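For reference, the tmpfs test can be sketched roughly like this. The paths, the mount point, and the ini keys shown here are assumptions based on a typical Debian mumble-server package layout; adjust them to match your installation:

```shell
# Create a small in-memory file system and copy the server's
# database there (paths below are assumptions, not verified).
mkdir -p /mnt/mumble-tmpfs
mount -t tmpfs -o size=16m tmpfs /mnt/mumble-tmpfs

service mumble-server stop
cp /var/lib/mumble-server/mumble-server.sqlite /mnt/mumble-tmpfs/

# Then point murmur at the in-memory copies, e.g. in
# /etc/mumble-server.ini:
#   database=/mnt/mumble-tmpfs/mumble-server.sqlite
#   logfile=/mnt/mumble-tmpfs/mumble-server.log
service mumble-server start
```

Note that tmpfs contents are lost on reboot, which is why this is only useful as a diagnostic, not as a permanent fix.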

HopelessN00b
aef

1 Answer


I found out this is related to an I/O performance problem by putting the Mumble server's database and log file into an in-memory file system. What caused the bad I/O latency was the subject of this question. The problem was resolved by adding the nobarrier mount option, which became relevant after Linux 2.6.33 made barrier the default mount option. Notice that disabling barriers does induce a data-safety risk. Additionally, the partition was accessed via Virtio with cache set to none or writeback; performance was still bad when cache was set to writethrough.
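For anyone wanting to try the same workaround, here is a sketch. The device name and mount point are assumptions for an ext4 guest file system; remember that nobarrier trades crash safety for latency, so it only belongs on storage you can afford to lose or that has battery-backed write caching:

```shell
# Remount the guest's file system without write barriers
# (mount point is an assumption; use the one holding the DB).
mount -o remount,nobarrier /

# To make it persistent, add nobarrier to the options in
# /etc/fstab (the device name /dev/vda1 is an assumption):
#   /dev/vda1  /  ext4  defaults,nobarrier  0  1
```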

aef