I have two Asterisk servers in a chain like this:
SIP client (G.729) <-SIP-> Asterisk 1 (transcoding G.729 to G.711) <-IAX2-> Asterisk 2 -> terminated on an analog line.
When no buffering was configured on either Asterisk server, the sound received on the analog line was terrible (it was perfectly OK in the other direction; I assume the SIP client has its own receive jitterbuffer).
Since the two Asterisk servers are on the same LAN, I thought a jitterbuffer would make the most sense on Asterisk 1, which faces the remote SIP client.
However, enabling buffering on Asterisk 1 made no difference to the sound quality, while enabling it on Asterisk 2 somehow did the trick.
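For reference, by "enabling buffering" I mean turning on the standard channel jitterbuffer options; the exact values below are only illustrative, not tuned:

On Asterisk 1 (SIP leg), sip.conf [general]:
jbenable=yes       ; enable the jitterbuffer on the receive side of SIP channels
jbforce=yes        ; buffer even when Asterisk thinks the endpoints could handle it
jbmaxsize=200      ; maximum buffer length in milliseconds

On Asterisk 2 (IAX2 leg), iax.conf [general]:
jitterbuffer=yes        ; enable the IAX2 jitterbuffer
forcejitterbuffer=yes   ; buffer even on bridged VoIP calls, where it is normally left to the endpoints
maxjitterbuffer=1000    ; upper limit of the buffer in milliseconds
resyncthreshold=1000    ; resync if a frame's delay jumps by more than this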
Watching the stats on Asterisk 2 confirms this:
CLI> iax2 show netstats
                  -------- LOCAL ---------------------        -------- REMOTE --------------------
Channel           RTT  Jit  Del  Lost  %  Drop    OOO  Kpkts  Jit  Del  Lost  %  Drop  OOO  Kpkts  FirstMsg  LastMsg
IAX2/ast1-5936      1   60  140    79  0    43  16712     64    0   40     0  0     0    0      0  Rx:NEW    Rx:ACK
As you can see, about 25% of the received packets are out-of-order (OOO: 16712 out of roughly 64,000 received packets), which I suppose is what the configured jitterbuffer is fixing.
So my questions are:
- How could the wrong packet order survive codec translation on Asterisk 1 and make it through to Asterisk 2?
- How can a jitterbuffer fix errors introduced a hop away (I am sure the culprit is the SIP client, CsipSimple, since other SIP clients sound much better than this one without any buffering), and why couldn't the jitterbuffer on the closer server do the same?
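If it helps, I can enable RTP debugging on Asterisk 1 and watch the RTP sequence numbers on the SIP leg to check whether the reordering is already present there (CLI syntax below is from Asterisk 1.8+; older versions use "rtp debug"):

CLI> sip set debug peer <peername>
CLI> rtp set debug on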