
OK, so I have to admit I am not an expert in how Juniper network devices work, which might be the whole reason I am having trouble understanding something I am seeing on a pair of clustered SRX210Hs.

So, when I SSH to one of these clustered Juniper devices, I am presented with a standard BSD/Unix shell prompt. If I run the ifconfig command, I get a long list of interfaces, and the last two are as follows:

fab1:   encaps: ether; framing: ether
        flags=0x3/0xc000 <PRESENT|RUNNING>
        curr media: i802 80:71:1f:b9:27:70
fab1.0: flags=0xc000 <UP|MULTICAST>
        inet mtu 8996 local=30.18.0.200 dest=30.18.0.0/24 bcast=30.18.0.255

A little farther up the interface list, I also see a fab0 interface:

fab0:   encaps: ether; framing: ether
        flags=0x3/0xc000 <PRESENT|RUNNING>
        curr media: i802 80:71:1f:b9:17:b0
fab0.0: flags=0xc000 <UP|MULTICAST>
        inet mtu 8996 local=30.17.0.200 dest=30.17.0.0/24 bcast=30.17.0.255
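
For anyone who wants to pull this out quickly, here is a minimal shell sketch from that same BSD prompt. I am assuming the JUNOS build of ifconfig accepts an interface name the way stock FreeBSD's does, and that the shell's grep supports -A; adjust if yours does not:

# Show just the fabric interfaces instead of scrolling the full list
ifconfig fab0
ifconfig fab1

# Or filter the full output; -A 4 keeps the indented detail lines
# that follow each fabX / fabX.0 header line
ifconfig | grep -A 4 '^fab'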

When I look up 30.17.0.200, I get the following:

http://30.17.0.200.ipaddress.com/

IP Address: 30.17.0.200
Organization:   DoD Network Information Center
ISP/Hosting:    DoD Network Information Center
Updated:    10/01/2016 09:34 AM
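
The same registration shows up with a plain whois query from any Unix box, so this is not an artifact of that particular lookup site (the exact output depends on your whois client):

# Query the regional registry for the registrant of the block
whois 30.17.0.200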

When I dig a little deeper, I find a reference to a Juniper KB article titled

Why does the 'show interface terse' command display fabric interface IP addresses with 30.0.0.0/8 addresses?

which contains the following:

SUMMARY:

This article describes the issue of the 'show interface terse' command displaying fabric interface IP addresses with 30.0.0.0/8 addresses.

SYMPTOMS:

Fabric interface IP addresses are system-determined and not configurable. The following is an excerpt of the output of 'show interface terse':

fab0     up    up
fab0.0   up    up      inet    30.17.0.200/24
fab1     up    down
fab1.0   up    down    inet    30.18.0.200/24
fxp0     up    down

SOLUTION:

  • This is expected behavior of the system.
  • These addresses are used only for the internal communication of the cluster.
  • No routes are installed for the fab0 subnets, so they will not affect any transit traffic.

If, for whatever reason, a fabric interface was accidentally plugged into a production segment, the fabric traffic would still not be routed out, as it would not get processed at layer 2.

PURPOSE:

Implementation
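
For context on what "system-determined and not configurable" means in practice, here is a rough sketch of the fabric side of a chassis-cluster configuration. The member interfaces (ge-0/0/5 and ge-2/0/5) are placeholders, not taken from my actual config; the point is that you only bind physical ports to fab0/fab1, and no address is ever configured:

# Configuration-mode sketch; member interfaces are placeholders
set interfaces fab0 fabric-options member-interfaces ge-0/0/5
set interfaces fab1 fabric-options member-interfaces ge-2/0/5

# Note the absence of any
#   set interfaces fab0 unit 0 family inet address ...
# line; the 30.17.0.200/24 and 30.18.0.200/24 addresses seen above
# are generated internally when the fabric link comes up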

That being said, why exactly is this the expected behavior? Why would Juniper decide to use public IP addresses registered to the DoD Network Information Center to perform this function on clustered network devices? My primary concern is this: earlier this year, the DoD publicly disclosed security backdoors in Juniper products. I would not find this too odd if it were any other organization, but the fact that it was the DoD calling out Juniper for having backdoors in its products is what makes it stand out. See the following URL for more info on the backdoor that was exposed earlier in 2016:

http://motherboard.vice.com/read/department-of-defense-nudges-contractors-to-patch-juniper-backdoor

Can anyone explain the purpose of using public IP address space registered to the DoD on Juniper equipment? Why wouldn't Juniper just use a private (RFC 1918) point-to-point address range for communication between the two clustered devices? I would like a technical explanation if at all possible (I am not sure a non-technical explanation even exists for a question like this). Any Juniper experts care to chime in?

I should also mention that when I look at the packet counters for both interfaces, there is a lot of outgoing traffic but absolutely zero incoming traffic on the fab0.0 and fab1.0 interfaces. If they were being used just for failover/clustering, wouldn't there be both incoming and outgoing packets?
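
For anyone who wants to cross-check, the standard operational-mode commands for this are as follows (output omitted, since it varies with platform and cluster state):

# From the BSD shell, drop into the Junos CLI first
cli

# Then, in operational mode:
show chassis cluster interfaces    # state of the control and fabric links
show chassis cluster statistics    # probes/traffic carried over the fab links
show interfaces fab0 extensive     # per-interface packet counters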

Another question: why does the Juniper KB article refer to the public IP range as 30.0.0.0/8 when the example output clearly shows the fabric networks using /24 masks (254 usable hosts each)?

