
We have a LAN with Cisco switches, redundant cabling and spanning tree. If I understand it correctly, when I pull out a redundant cable (that is currently "used" by the spanning tree) it takes several seconds until the spanning tree converges in reaction. How can I prevent this packet loss (assuming of course I know beforehand that the cable will be pulled)? That is, how can I make the spanning tree adapt "proactively"?

I would have guessed that an interface shutdown plus waiting a couple of seconds should suffice, but I did not dare to try that out yet. Actually, I am afraid an interface shutdown would cause the same interruption during convergence, because I suffered from such an interruption yesterday when making a supposedly harmless configuration change on some interfaces. (Edit: I just confirmed this experimentally; as expected, there was some 20 seconds of interruption after the interface shutdown. Note that I am looking for a "lossless" solution, not just "less loss".)

Hagen von Eitzen

2 Answers


It sounds like you're using classic STP instead of Rapid STP. Two options will speed up the convergence time significantly.

interface *server interface*
spanning-tree portfast

This should be applied to server interfaces. It will tell STP that there is no switch on the other side of this port, and that it is safe to skip the normal "safe" method of preventing loops. The port should move straight to forwarding.
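
If you want to confirm it took effect on a given port, something like the following should show it (the interface name here is just a placeholder, use one of your own server ports):

show spanning-tree interface GigabitEthernet0/1 portfast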

spanning-tree mode rapid-pvst

Enables the newer Rapid Per-VLAN Spanning Tree protocol, which uses messages between switches to re-converge within a couple of seconds rather than 30-45 seconds.
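
To double-check which mode a switch is actually running after the change, something along these lines should work (exact output wording can vary by platform):

show spanning-tree summary

The first line of the output should report that the switch is in rapid-pvst mode.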

You might try setting up a port-channel between your switches instead of redundant single links. This would allow all traffic to fail over to the remaining port if one is lost.
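
A rough sketch of what that could look like with LACP is below; the interface names and channel-group number are just examples, and a matching config would need to go on both switches:

interface range GigabitEthernet0/1 - 2
 switchport mode trunk
 channel-group 1 mode active

With channel-group 1 mode active, the two physical links negotiate LACP and bundle into Port-channel1, so losing one member link doesn't trigger a spanning-tree topology change; traffic just shifts to the remaining member.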

Keller G

As Keller says, definitely enable portfast on your edge-facing ports, but that's really not what you're worried about here.

If you're running classic spanning tree, then moving to rapid will reduce the outage time. Just be aware that there can be a reconvergence when you transition from classic to rapid, though generally there isn't one.

What you are looking for is the spanning-tree cost ### interface command. You just need to make the link that will be taken out of service a higher cost than the redundant link, and spanning tree will block that link and unblock the other. Or, depending on your network layout, you can run non-looped VLANs that don't depend on spanning tree for loop avoidance and/or outage recovery.

And edit to add... don't forget to remove the spanning-tree cost config after your maintenance, once the link is back up, so your network is running the way it was originally designed.
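
A minimal sketch of both steps, assuming the link you're about to pull is on GigabitEthernet0/1 (the interface name and cost value are made up; if you run per-VLAN spanning tree you may want the spanning-tree vlan ... cost form instead):

interface GigabitEthernet0/1
 spanning-tree cost 500
! ... do the maintenance, then restore the default cost ...
interface GigabitEthernet0/1
 no spanning-tree cost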

cpt_fink
  • So do I get it right that setting a spanning-tree cost will cause reconvergence, but since all old and new links are available at all times, the transition from the old tree to the new tree happens smoothly? Or is there still a (very short) interruption to avoid loops caused during the transition by some switches using the old tree and some the new? – Hagen von Eitzen Jun 07 '13 at 09:37
  • Sorry, I just tested this: The moment I increase the cost of a link or remove the cost again, I have packet loss for the usual convergence time ... – Hagen von Eitzen Jun 07 '13 at 09:55
  • Hmmm... I'll have to lab out the classic spanning-tree and see. Moving to rapid spanning-tree should fix that though, since classic STP is timer based and rapid STP is proposal/agreement based. – cpt_fink Jun 08 '13 at 04:48
  • If you use classic STP with no additional features, forcing a reconverge by changing STP cost will act exactly (from STP's point of view) the same as actually downing the link. Same steps still have to be run through on the new link to guarantee a loop-free network. Here's some trivia for you -- what additional STP feature could you enable to prevent this delay? – Keller G Jun 10 '13 at 04:08