I have a Linux server which has its time synchronized to a GPS-based NTP appliance located close by. Ping times from the server to the appliance are circa 1ms, with very low jitter:
```
--- x.x.x.x ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99001ms
rtt min/avg/max/mdev = 0.874/0.957/1.052/0.051 ms
```
However, the NTP client estimates the accuracy of time synchronization at around 5–6 ms, which seems very high given the setup:
```
synchronised to NTP server (x.x.x.x) at stratum 2
   time correct to within 5 ms
   polling server every 16 s
```
`ntpq -p` gives the following:
```
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*x.x.x.x         .PPS.            1 u   10   16  377    0.964   -0.019   0.036
```
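For context, my rough understanding (possibly wrong) is that the "time correct to within" figure is not the measured offset but the root synchronization distance from RFC 5905, i.e. half the root delay plus the root dispersion, where dispersion grows between polls. A minimal sketch of that arithmetic, using the 0.964 ms delay from the peer line and a purely hypothetical root dispersion of 4.5 ms:

```python
def sync_distance_ms(root_delay_ms, root_disp_ms):
    """RFC 5905 synchronization distance: half the root delay
    plus the root dispersion, in milliseconds."""
    return root_delay_ms / 2.0 + root_disp_ms

# Delay taken from the ntpq -p output above; the dispersion value
# is an assumption for illustration, not a measured figure.
print(sync_distance_ms(0.964, 4.5))  # 4.982 — roughly the 5 ms reported
```

If that reading is right, the 5 ms bound would be dominated by accumulated dispersion rather than by the actual path delay.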
Two questions:
- What may be causing the NTP client to report such a large error bound for the synchronization?
- Is there any way to measure the actual accuracy of the synchronization, say to the nearest millisecond?