
I'm trying to set up simple high availability for an Icecast master streaming server by doubling everything and putting a reverse proxy in front (i.e. I'm not talking about Icecast relays here). So, three VMs:

  • 2 identical self-contained Icecast VMs (each with a local MPD music source and a local nginx frontend for correct headers)
  • a single load balancer / reverse proxy nginx VM.

My question is: how do I configure the reverse proxy to do automatic failover in case one of the Icecast VMs goes down, with as little interruption as possible for the stream client?

Illustration:

                             /--- [ local nginx A <-> icecast master A <- mpd A]
-> [nginx reverse proxy] ---<
                             \--- [ local nginx B <-> icecast master B <- mpd B]

I first followed a simple tutorial to set up nginx as a reverse proxy, after which I could listen to the stream by opening the nginx VM's address:

upstream backend  {
  ip_hash; # try to send the same clients to the same servers
  server 1.2.3.4;
  server 1.2.3.5 max_fails=1  fail_timeout=15s;
}
server {
  location / {
    proxy_pass  http://backend;
  }
}

When I stop the Icecast service on the Icecast VM that's the current endpoint, though, the client doesn't fail over to the healthy Icecast, not even after a refresh, for some reason. I tried experimenting with various ip_hash, max_fails and fail_timeout options, different response header properties, buffer sizes etc. that other sites mentioned, but nothing worked. I feel a bit like I'm fishing in the dark here, and there should surely be some obvious solution for streaming failover, given the number of popular radio stations out there. Any advice on how best to set this up, or some good resources? Do I want 302 redirects or a real proxy pass?
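For illustration, this is roughly the kind of variant I experimented with. The `backup` flag, `proxy_buffering` and `proxy_next_upstream` lines are just my guesses at what should matter for a streamed response; none of these combinations made the client fail over:

upstream backend {
  server 1.2.3.4 max_fails=1 fail_timeout=15s;
  server 1.2.3.5 backup;  # only used once the primary is marked down
}
server {
  listen 80;
  location / {
    proxy_pass          http://backend;
    proxy_http_version  1.1;
    proxy_set_header    Connection "";
    proxy_buffering     off;  # don't buffer the audio stream
    # retry the other upstream on connection errors and 5xx responses
    proxy_next_upstream error timeout http_502 http_503 http_504;
  }
}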

I'm open to suggestions based on HAProxy if that's a better way to go too.

metakermit

1 Answer


As you discovered, reverse proxying Icecast is not a good idea. There is no real benefit, there are several downsides, and you still have a single point of failure: your frontend.

Icecast is a very stable and reliable server and typical strategies employed for serving web pages over HTTP don't necessarily apply to it.

Your effort would probably be better spent getting to know Icecast, its configuration and its limitations. That said, properly configured, Icecast will easily saturate a 1 Gbit/s connection and serve well over 20,000 concurrent listeners. Tests actually indicate that it scales well beyond that, but there might be corner cases.

After that, ask yourself: what problem, in terms of availability, am I really trying to solve? The answer will most likely not be 'reverse proxy'.

TBR
  • Yes, the Icecast server is on the same VM that downloads playlists & music via Python scripts and forwards them to mpd, which outputs to Icecast. I was trying to put a simpler nginx-only VM in front of that whole thing, cloned, in case any of the services crashed (we had cases where Icecast would not come back up after a maintenance reboot due to errors, for example). But perhaps just isolating Icecast in its own VM in front of two VMs containing all the custom code (which is more frequently updated) and setting a [fallback-mount](http://www.freebsdcluster.org/~lasse/icecast-lvs-cluster-howto/) would work. – metakermit Nov 27 '15 at 11:48
  • Sounds like a good plan. A common setup for stations that have live and playlist programming is: /live.opus falls back to /playlist.opus, which falls back to /outage.opus -- where /outage.opus would be fed by e.g. a small playlist of jingles and ezstream on the same machine as Icecast (I don't recommend falling back to a file). Also don't forget the override so that listeners get transferred smoothly back and forth; there is a config sketch after these comments. – TBR Nov 27 '15 at 12:56
  • Just implemented it and it works awesome, thanks for the help! A simple `/boplive2` mount in the frontend Icecast and another VM's mpd.conf set up with `mount "boplive2"` to feed it. Really smooth transitioning – even when the source VMs are on different continents. Yeah, we'll look into a 3rd failover level too, but more likely it's going to be something on the listener's machine (the project is not for public radio, but for in-store music systems). – metakermit Nov 27 '15 at 15:33
  • Glad to hear you got it sorted! – TBR Nov 27 '15 at 20:04
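For reference, here is a minimal sketch of the fallback chain described in the comments, as it might look in icecast.xml. The mount names follow TBR's example; everything else is illustrative rather than a tested configuration:

<!-- live programming; when the live source disconnects, listeners move to the playlist mount -->
<mount>
  <mount-name>/live.opus</mount-name>
  <fallback-mount>/playlist.opus</fallback-mount>
  <fallback-override>1</fallback-override> <!-- move listeners back when the live source returns -->
</mount>

<!-- automated playlist (fed by mpd); falls back to a local jingle loop if the source VM dies -->
<mount>
  <mount-name>/playlist.opus</mount-name>
  <fallback-mount>/outage.opus</fallback-mount>
  <fallback-override>1</fallback-override>
</mount>

The source side in mpd.conf would then contain a shout output pointing at the frontend Icecast, roughly like the following (host, password and stream settings are placeholders; the mount corresponds to the `mount "boplive2"` mentioned in the comments):

audio_output {
  type      "shout"
  encoder   "vorbis"
  name      "playlist source"
  host      "icecast.example.com"  # the frontend Icecast VM
  port      "8000"
  mount     "/boplive2"
  password  "hackme"               # the <source-password> from icecast.xml
  quality   "5.0"
  format    "44100:16:2"
}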