
I have been trying Tomcat clustering with mod_jk for months, and so far it has gone reasonably well, but I am facing a problem during deployment. I use the FarmDeployer to copy and deploy the WAR to the other nodes in the cluster, but most of the time the WAR is not deployed properly, leaving the page with a 404 error. Even after removing the exploded WAR directory and letting Tomcat extract the WAR again, the browser cannot render the actual site until I restart/stop the Tomcat service on that particular node (http://node-ip/myapp works once the WAR is redeployed, but http://site1.mydomain.net keeps showing the 404 page once it has rendered it). The problem also seems browser-related (I tried all the browsers), since after redeployment the page renders fine on other computers. I also tried fail_on_status, which puts any node that returns a 404 HTTP status into the error state and redirects requests to the other node, BUT in my testing I found that it puts those nodes into the error state permanently, and no requests are sent to them until a restart, even though they are back up and working (see the sketch after the workers.properties below).

workers.properties on the load balancer:

workers.tomcat_home=/usr/share/tomcat
workers.java_home=/usr/lib/jvm/java-6-openjdk
ps=/
worker.list=cluster,balancer1,status

worker.balancer1.port=8009        
worker.balancer1.host=localhost
worker.balancer1.type=ajp13
worker.balancer1.lbfactor=2
worker.balancer1.cache_timeout=20
worker.balancer1.socket_timeout=20
#worker.balancer1.fail_on_status=-404,-503

worker.web1.port=8009        
worker.web1.host=192.168.1.8
worker.web1.type=ajp13
worker.web1.lbfactor=4
worker.web1.redirect=web2
worker.web1.cache_timeout=20
worker.web1.socket_timeout=20 
#worker.web1.fail_on_status=-404,-503

worker.web2.port=8009        
worker.web2.host=192.168.1.9
worker.web2.type=ajp13
worker.web2.lbfactor=4
worker.web2.redirect=web1
worker.web2.cache_timeout=20
worker.web2.socket_timeout=20 
#worker.web2.fail_on_status=-404,-503

worker.cluster.type=lb
worker.cluster.balance_workers=web1,web2,balancer1
worker.cluster.sticky_session=True
worker.cluster.sticky_session_force=False

# Status worker for managing load balancer
worker.status.type=status
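
One hedged tweak for the "stuck in error state" symptom: the lb worker has a recover_time attribute (60 seconds by default) after which a member in the error state is probed again. I am not certain it also recovers workers that fail_on_status has tripped, so treat this as a sketch to verify against your mod_jk version, not a confirmed fix:

# Sketch: retry errored members after 60s instead of leaving
# them out of rotation until a restart. (Assumption: recover_time
# also applies to workers failed via fail_on_status -- verify.)
worker.cluster.recover_time=60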

Does anybody have any idea how to make the balancer skip a node serving 404s and instead hit the properly deployed nodes? At the very least, any configuration tips so that the actual page is rendered again after a 404, with sticky sessions enabled?

Update 1:

Apache virtual hosting on the load balancer (192.168.1.5, balancer1):

<VirtualHost *:80>
  ServerName site1.mydomain.net
  JkAutoAlias /usr/share/tomcat/webapps/myapp
  DocumentRoot /usr/share/tomcat/webapps/myapp

  JkMount / cluster
  JkMount /* cluster
  JkMount /*.jsp cluster
  JkUnMount /myapp/*.html cluster
  JkUnMount /myapp/*.jpg  cluster
  JkUnMount /myapp/*.gif  cluster
  JkUnMount /myapp/*.png  cluster
  JkUnMount /myapp/*.css  cluster

  JkUnMount /abc cluster
  JkUnMount /abc/* cluster
  JkUnMount /*.html cluster
  JkUnMount /*.jpg  cluster
  JkUnMount /*.gif  cluster
  JkUnMount /*.png  cluster
  JkUnMount /*.css  cluster

  ProxyRequests Off
  ProxyPreserveHost On
  ProxyVia On

  <Proxy balancer://ajpCluster/>
    Order deny,allow
    Allow from all
    BalancerMember ajp://192.168.1.8:8009/ route=web1 ttl=60 timeout=20 retry=10
    BalancerMember ajp://192.168.1.9:8009/ route=web2 ttl=60 timeout=20 retry=10
    BalancerMember ajp://192.168.1.5:8009/ route=balancer1 status=+H ttl=60

    ProxySet lbmethod=byrequests
    ProxySet stickysession=JSESSIONID|jsessionid
  </Proxy>

  <Location />
    ProxyPass balancer://ajpCluster/ nofailover=off
    ProxyPassReverse balancer://ajpCluster/
  </Location>
</VirtualHost>

Tomcat virtual hosting, common to all the nodes:

<Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true" deployOnStartup="true">
 <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

       </Host>

<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="false" autoDeploy="false" deployOnStartup="false">
<Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
<Context path="" docBase="/usr/share/tomcat/webapps/myapps" debug="0" reloadable="true"/>

No session replication with Tomcat clustering: disabled for now by commenting out the <Cluster> element, as it consumes a lot of memory with the nodes constantly updating and interacting with one another. For now I have load balancing and auto failover with mod_jk or proxy_ajp, BUT with the 404 error problem when myapp becomes unavailable (and available again), as described above. How is everybody handling this?
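
For reference, a minimal sketch of the element I commented out (class names are from the stock Tomcat distribution, the directories are placeholders, and the FarmWarDeployer itself only runs inside an active <Cluster>):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <!-- FarmWarDeployer copies a WAR dropped into watchDir on the
       master (watchEnabled="true") to the other cluster members -->
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="true"/>
</Cluster>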


1 Answer


The only resolution I could find so far is to deactivate the web servers during deployment, leaving only one to serve the site. Once the deployment is successful, activate the web servers again and update the remaining backend server separately after disabling it. The nodes can be activated/deactivated with the proxy balancer-manager in the virtual hosting configuration, or with the jkstatus worker of mod_jk in workers.properties, like:

Proxy:

# Balancer-manager, for monitoring
ProxyPass /balancer-manager !
<Location /balancer-manager>
    SetHandler balancer-manager

    Order deny,allow
    Allow from all
</Location>

mod_jk:

worker.list=cluster,status
................
.............
.......
# Status worker for managing load balancer
worker.status.type=status
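
For the status worker to actually be reachable, it also needs a mount in the Apache configuration; a minimal sketch (the /jkstatus path is arbitrary, and access should be restricted in production):

JkMount /jkstatus status

From the jkstatus page a balancer member can then be switched to Stopped or Disabled before a deployment and back to Active afterwards, without restarting anything.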

A lot of user intervention is required!
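
If the manual clicking becomes a burden, the tomcat-connectors documentation describes cmd/mime query parameters for the status worker, so the same steps could in principle be scripted; a hedged sketch, assuming the /jkstatus mount above:

# list all workers in a script-friendly plain-text format
curl "http://192.168.1.5/jkstatus?cmd=list&mime=txt"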
