5

I'm looking for a simple program or script: you give it a URL and it does the following:

  • checks whether it can open a connection to that URL and reports the response time
  • checks how long it takes to load the page and reports that number
  • reports an error code or other sentinel value when a site is unresponsive
  • exits after it either fails to connect, loads the page successfully, or reaches a predetermined number of seconds specified by the user

My purpose is to integrate this functionality into Zabbix as an external program. I did a Google search but was unable to locate one.

80skeys
  • Keep in mind that tools like wget won't reflect the load time for a user (the time it takes to load in a browser). That being said, this is still a good idea to monitor as long as you don't mistake what the data is. – Kyle Brandt Jul 28 '11 at 19:46
  • Why are you looking to write this as an external program when Zabbix already does this internally? It won't tell you how long a page took to load, but you will get the speed at which it downloaded; once you baseline your measurement it's the same thing. You automagically get return code checking, and I believe there is an item for it as well. In addition, you can do this from a Zabbix proxy. – Red Tux Jul 31 '11 at 03:59

6 Answers

6

You can do what you want with a combination of the time and wget commands - e.g.:
time wget -q http://www.google.com/

time will print the time (in seconds/fractions of a second) that it took to complete the wget command, and the return code of the whole mess will be whatever wget's return code would have been (0=success, non-zero indicating various failures).

This can be further wrapped in an appropriate script to determine if the page was successfully retrieved and produce output suitable for Zabbix to pick up and use.
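A minimal sketch of such a wrapper, assuming bash and GNU wget (the script name, the 30-second default, and the -1 sentinel are my own choices, not anything Zabbix requires):

#!/bin/bash
# check_url.sh -- print page load time in seconds, or -1 on failure
# Usage: check_url.sh <url> [timeout-seconds]
url=$1
timeout=${2:-30}                # user-specified limit, default 30 s

TIMEFORMAT='%R'                 # make bash's time keyword print real seconds only
elapsed=$( { time wget -q -O /dev/null -T "$timeout" -t 1 "$url"; } 2>&1 )
status=$?

if [ "$status" -eq 0 ]; then
    echo "$elapsed"             # load time; Zabbix reads this from stdout
else
    echo "-1"                   # sentinel for a failed or unresponsive site
fi
exit "$status"

Zabbix can then graph the number on stdout and trigger on the -1 value or the exit code.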

voretaq7
2

I use the following script, based on ideas I borrowed from elsewhere. It uses curl's timing statistics (reported via -w/--write-out):

estadistica () {
    local site="$1"
    # print the URL and underline it with dashes of the same length
    echo "$site"
    echo "$site" | sed -n 's/./-/gp'
    # -o /dev/null discards the body, -s silences the progress meter,
    # -w prints the selected timing statistics after the transfer
    curl -w '
    Lookup time:\t%{time_namelookup} s
    Connect time:\t%{time_connect} s
    Pretransfer time:\t%{time_pretransfer} s
    Starttransfer time:\t%{time_starttransfer} s
    Size download:\t%{size_download} bytes
    Speed download:\t%{speed_download} bytes/s

    Total time:\t%{time_total} s
    ' -o /dev/null -s "$site"
    echo
}

for i in "$@"; do
    estadistica "$i"
done

Let's say it's named webstats; this is how it works:

~/src$ bash webstats http://serverfault.com/questions/295194/simple-program-or-script-to-check-load-time-of-web-page http://www.google.com
http://serverfault.com/questions/295194/simple-program-or-script-to-check-load-time-of-web-page
-----------------------------------------------------------------------------------------------

    Lookup time:    0,009 s
    Connect time:   0,139 s
    Pretransfer time:   0,139 s
    Starttransfer time: 0,284 s
    Size download:  37298 bytes
    Speed download: 57153,000 bytes/s

    Total time: 0,653 s

http://www.google.com
---------------------

    Lookup time:    0,084 s
    Connect time:   0,147 s
    Pretransfer time:   0,147 s
    Starttransfer time: 0,218 s
    Size download:  218 bytes
    Speed download: 1000,000 bytes/s

    Total time: 0,218 s

If something goes wrong, you can detect it (and hence tell Zabbix) because the resulting data is obviously bogus; everything comes back as zero:

~/src$ bash webstats http://thisdoesntexist
http://thisdoesntexist
----------------------

    Lookup time:    0,000 s
    Connect time:   0,000 s
    Pretransfer time:   0,000 s
    Starttransfer time: 0,000 s
    Size download:  0 bytes
    Speed download: 0,000 bytes/s

    Total time: 0,000 s
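Rather than eyeballing zeroed timings, a wrapper can check curl's exit status directly (0 means success, anything else a failure). A small sketch along those lines; the OK/ERROR output format is just my convention:

site=$1                                  # URL to test
if curl -o /dev/null -s -m 30 "$site"; then
    echo "OK"
else
    echo "ERROR: curl exit code $?"      # e.g. 6 = couldn't resolve host, 28 = timed out
fi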

Edit: curl's timeout option

Just to answer your comment: curl has a timeout option; from its man page:

   --connect-timeout <seconds>
          Maximum time in seconds that you allow  the  connection  to  the
          server  to  take.   This  only limits the connection phase, once
          curl has connected this option is of no more use. See  also  the
          -m/--max-time option.
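Combined with -m/--max-time, which caps the entire transfer, this covers the "exit after a predetermined number of seconds" requirement. For example (the 5- and 10-second limits are arbitrary):

curl -o /dev/null -s --connect-timeout 5 -m 10 -w '%{time_total}\n' http://www.google.com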
hmontoliu
  • This looks like it could do what I need. I'll give it a try and report back here. – 80skeys Jul 28 '11 at 19:39
  • Ok it provides the type of information I'm looking for, now I need a way to tell it to run for no more than x number of seconds. Once that limit is reached, the script dies with a timeout error. Any thoughts? – 80skeys Jul 28 '11 at 19:48
  • Ok on my CentOS system, there is a file called /usr/share/doc/bash-3.2/scripts/timeout which is a bash script that terminates a command after it runs for a certain number of seconds. It looks like I've got everything I need. Thanks for the help! – 80skeys Jul 28 '11 at 20:11
  • About timeout, check the Edit in my post – hmontoliu Jul 29 '11 at 13:57
2

http://phantomjs.org/ will give you a more realistic time, because it loads all of a page's resources and executes JavaScript. The syntax is pretty easy (plain JavaScript).

1

I use curl instead of wget because wget generates too many files. Example:

$ curl -o /dev/null -s -w %{time_total}\\n  http://google.com
0.084

Next, put this into an external script in your ${datadir}/zabbix/externalscripts directory and call it with the item type 'external check'.
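A minimal sketch of such a script (the filename webpage_time.sh is my own choice; -m 30 makes curl give up after 30 seconds):

#!/bin/sh
# Saved as, e.g., ${datadir}/zabbix/externalscripts/webpage_time.sh
# Usage: webpage_time.sh <url>
curl -o /dev/null -s -m 30 -w '%{time_total}\n' "$1"

The number on stdout is what the Zabbix item picks up.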

masegaloeh
adel-s
0

This: http://tools.pingdom.com/

Alternatively, it would be really easy to code a simple script to do this yourself and output the results in whatever format you need.

yrosen
  • pingdom won't work because Zabbix won't be able to call it, put in the appropriate parameters and parse the response. – 80skeys Jul 28 '11 at 19:22
0

I think the Firefox Firebug Network panel and PageSpeed are good options.

[screenshot: Firebug Network panel]

bulleric