A member of my team came up with a rather clever solution that lets monit check frequently (every minute), but once it has attempted to restart the service (which takes ~10 minutes), it waits a specified grace period before attempting another restart.
This avoids long gaps between checks, which, combined with the slow start, would have a much larger impact on customers. It works by using an intermediate script that acts as a flag to indicate monit is already acting on the last failure.
check host bamboo with address bamboo.mysite.com
    if failed
        port 443 type tcpSSL protocol http
        and status = 200
        and request /about.action
        for 3 cycles
    then exec "/bin/bash -c 'ps -ef | grep -v $$ | grep -v grep | grep restartBamboo.sh >/dev/null 2>&1; if [ $? -ne 0 ]; then /opt/monit/scripts/restartBamboo.sh; fi'"
In other words: if bamboo (a slow-starting web app) is down for 3 cycles in a row, restart it, BUT only if the restart script is not already running.
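For readability, here is roughly what that exec one-liner does, unpacked into a standalone form (same commands, just spread out and commented):

#!/bin/bash
# Check whether restartBamboo.sh is already running, ignoring this shell's
# own PID and the grep process itself.
ps -ef | grep -v $$ | grep -v grep | grep restartBamboo.sh >/dev/null 2>&1
if [ $? -ne 0 ]; then
    # No restart is in progress, so kick one off.
    /opt/monit/scripts/restartBamboo.sh
fi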
The script that is called sleeps for longer than the slowest expected start time for the service (in our case we expect it to finish in ~10 minutes, so we sleep for 15):
#!/bin/bash
# Restart bamboo via its init script, then sleep long enough to cover the
# slowest expected startup so monit does not trigger another restart.
echo "Restarting bamboo by calling init.d"
/etc/init.d/bamboo stop
echo "Stop completed, calling start"
/etc/init.d/bamboo start
echo "Done restarting bamboo, but it will run in the background for some time before it is available, so we are sleeping for 15 minutes"
sleep 900
echo "Done sleeping"