First of all you'll need to enable cookie insertion in haproxy and assign each back end node a unique cookie value. This is usually used for session stickiness - i.e. you want someone visiting your site to always get the same back end node as long as it's available. But it can also be used to monitor individual nodes by sending the appropriate cookie. So if it's not already there, add cookie handling to your haproxy server definitions:
cookie SERVERID insert indirect nocache
server webA1 10.0.0.1:80 cookie S1
server webA2 10.0.0.2:80 cookie S2
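For context, a minimal haproxy backend section with cookie insertion could look something like this (the backend name and the `check` option are illustrative additions; adjust to your own setup):

```
backend web_nodes
    balance roundrobin
    # Insert a SERVERID cookie so clients (and our monitoring) can pin a node
    cookie SERVERID insert indirect nocache
    server webA1 10.0.0.1:80 cookie S1 check
    server webA2 10.0.0.2:80 cookie S2 check
```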
Secondly you will need to figure out what makes the most sense to check. Here you'll have to do some thinking and fiddling on your own to work out what to check and how to check it using nagios's awesome check_http. For completeness I'll give a complex example below of how you could test a POST toward a back-end web service. For this example scenario the requirements are:
- Post data should be <echo>Hello</echo>
- A successful execution will return the echo string back
- Disable any cache through HTTP headers
- Set content-type to text/xml and expect the same back
- SSL should be used
- Host name is example.com
- Port is 443
- URI is /service
- Max response time is 3 seconds
This would be taken care of by the following arguments to check_http (/usr/lib64/nagios/plugins/check_http on CentOS 6):
-P "<echo>Hello</echo>"
-r 'Hello'
-k "Cache-Control: no-cache" -k "Pragma: no-cache"
-k "Content-Type: text/xml; charset=UTF-8" -k "Accept: text/xml"
-S
-H example.com
-p 443
-u /service
-t 3
Now, all of this put together should give you a nice OK output - get this working first.
Then it's time for the custom aspects: enabling node selection through the cookie, and optionally passing in an IP you can use to override DNS, in case you for example want to check a path through a passive data center. To do this we'll write a small shell script wrapper around check_http that takes the host name of the back end node as its first parameter (for convenience, let's use what icinga considers the host name to be) and an optional second parameter overriding the IP of the server to check (bypassing the DNS lookup). This all results in a shell script looking something like this (I suggest putting it in /usr/lib64/nagios/plugins/ and chown/chmod it as per the other plugins in there):
#!/bin/bash
if [ -z "$1" ]
then
    echo "Usage: $0 host-name [haproxy-ip]"
    exit 2
fi
if [[ $# -eq 2 ]]; then
    APPEND_OPTS=" -I $2"
fi
# Map icinga/nagios host names to haproxy node names in case these differ
# and you don't want to expose them on the internetz
declare -A nodes
nodes=(["webA1"]="S1"
       ["webA2"]="S2"
       ["webB1"]="S3"
       ["webB2"]="S4")
node=${nodes["$1"]}
if [ -z "$node" ]; then
    echo "UNKNOWN: no haproxy cookie mapped for host '$1'"
    exit 3
fi
/usr/lib64/nagios/plugins/check_http -P "<echo>Hello</echo>" -r 'Hello' -k "Cache-Control: no-cache" -k "Pragma: no-cache" -k "Content-Type: text/xml; charset=UTF-8" -k "Accept: text/xml" -S -H example.com -p 443 -u /service -t 3 -k "Cookie: SERVERID=$node" $APPEND_OPTS
Note that SERVERID is the name of the cookie set in haproxy.
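If you want to sanity-check the mapping logic in isolation before wiring it into check_http, here is the lookup on its own with a guard for host names missing from the table (same hypothetical node names as above; exit code 3 is the nagios/icinga UNKNOWN state):

```shell
#!/bin/bash
# Hypothetical host-name -> haproxy cookie map, as in the wrapper above
declare -A nodes=(["webA1"]="S1" ["webA2"]="S2")

lookup() {
    local node=${nodes["$1"]}
    if [ -z "$node" ]; then
        echo "UNKNOWN: no haproxy cookie mapped for '$1'" >&2
        return 3    # nagios/icinga UNKNOWN state
    fi
    echo "$node"
}

lookup webA1    # prints S1
```

Note that associative arrays require bash 4 or later, so this won't work under plain sh.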
Once this is in place you can define your nagios check commands similar to:
#Check path through the A firewall and haproxy
define command{
    command_name    check_node_external_a
    command_line    $USER1$/check_node '$HOSTNAME$' '<A external IP>'
}
Where check_node is the name of the wrapper script and 'A external IP' is the IP used to reach the system in data center A.
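A matching service definition might then look something like this (the `generic-service` template and description are just placeholders for illustration):

```
define service{
    use                     generic-service
    host_name               webA1
    service_description     Path via data center A
    check_command           check_node_external_a
}
```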
This would have saved me a lot of time the last few days, so I hope it can send you in the right direction too.