I'm doing some reading on SNMP and have Stallings' book "SNMP, SNMPv2 and RMON" (1996).
He shows how some basic math can be used to "determine the maximum number of stations that the management station can handle when engaged in full-time polling" (p. 194, 7.5.4 "Polling Frequency").
He gives the following:
N <= T/V

where:
N = number of agents
T = desired polling interval (in seconds)
V = average time required to perform a single poll
He then gives a worked example:
"The example consists of a single LAN, where each managed device is to be polled every 15 minutes. Assuming processing times on the order of 50ms, and a network delay of about 1ms (packet size of 1,000 bytes, no significant network congestion), then V is approx. 0.202 sec. Then:
N <= (15*60) / 0.202 = approx. 4,500."
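The final step of the arithmetic, at least, is easy to reproduce. Here is a minimal Python sketch of the capacity formula with the book's numbers plugged in, taking the V = 0.202 sec figure at face value:

```python
# Stallings' polling-capacity formula: N <= T / V
T = 15 * 60   # desired polling interval: 15 minutes, in seconds
V = 0.202     # average time per poll, as given in the book (seconds)

N = T / V     # maximum number of agents the manager can poll full-time
print(f"N <= {N:.0f}")  # prints "N <= 4455", i.e. approx. 4,500
```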
I find this quite unclear. How do you get 0.202 sec from the information above?
Are there any other ways to determine how many stations can be handled by the management station?
Regards, ILMA.
I could see .102 seconds... 1ms for the packet to get to the polled station, 50ms processing time, 1ms for the return packet, and 50ms processing time on the management station. Unless somewhere before the example he states that management stations take about 150ms to process the returned data. – cpt_fink – 2013-04-07T03:53:57.120
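For comparison, a quick sketch of both breakdowns: cpt_fink's two-processing-step reading, which gives 0.102 sec, and a hypothetical four-processing-step reading (generating and parsing the packet on each end, each at ~50ms) that does land on 0.202 sec. The four-step assumption is a guess; the quoted text above does not spell it out:

```python
# cpt_fink's reading: two 50 ms processing steps plus two 1 ms network transits
v_two_step = 2 * 0.050 + 2 * 0.001
print(f"two-step V  = {v_two_step:.3f} s")   # 0.102 s

# Hypothetical four-step reading: generate and parse on each end at ~50 ms
# apiece, plus the same two 1 ms transits (an assumption, not from the book)
v_four_step = 4 * 0.050 + 2 * 0.001
print(f"four-step V = {v_four_step:.3f} s")  # 0.202 s, matching the book
```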