
I recently had the opportunity to move a web application from using a Nginx proxy "loadbalancer" to an F5 loadbalancer. Unfortunately during that migration it became clear that the memcached session storage needed to move from the Nginx proxy server to "somewhere". My thinking is that I should put memcached on all 3 of the web servers (the servers that sit behind the F5 in a pool) and use php-memcache or php-memcached to save sessions. Here's the trouble:

I've tried both php-memcache and php-memcached and can't get either one to behave properly if one of the servers goes down. My latest attempt was with this configuration:

php-memcached version 2.2.0 with the following configuration settings:

session.save_handler = memcached
session.save_path    = "172.29.104.13:11211,172.29.104.14:11211"

I have nothing special in memcached.ini other than extension=memcached.so.
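
For completeness, the 2.x extension also documents a handful of session-related ini directives that I have not set, so they should all be at their defaults (the names and defaults below are my understanding of the docs; worth verifying with php -i before relying on them):

memcached.sess_locking = On
memcached.sess_lock_wait = 150
memcached.sess_prefix = "memc.sess.key."
memcached.sess_binary = Off
memcached.sess_consistent_hash = Off
memcached.sess_number_of_replicas = 0
memcached.sess_randomize_replica_read = Off
memcached.sess_remove_failed = 0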

With this configuration on both server 1 and server 2 (I removed server 3 temporarily to test), I point JMeter at the F5 VIP and start traffic. I can see memcached.log (the daemon's log) on both systems start filling, though I haven't spent time to decipher it.

Then if I stop one of the memcached daemons, traffic begins failing and the error returned is

session_start(): Write of lock failed

from the memcached server that is left running.

At the end of the day my goal is simple: a) not run memcached on a single server (a single point of failure), and b) have the cluster be resilient to the failure of a pool member.

I've also tried php-memcache but it too fails. For php-memcache the configuration looks like this:

php-memcache version 3.0.8 (beta) with the following configuration settings:

session.save_handler = memcache
session.save_path    = "tcp://172.29.104.13:11211, tcp://172.29.104.14:11211"

and in memcache.ini:

extension=memcache.so
[memcache]
memcache.dbpath="/var/lib/memcache"
memcache.maxreclevel=0
memcache.maxfiles=0
memcache.archivememlim=0
memcache.maxfilesize=0
memcache.maxratio=0
memcache.hash_strategy=consistent
memcache.allow_failover=1
memcache.session_redundancy=2

The error here is simply an invalid session token, which implies to me that the remaining server never actually had the session stored; in other words, session writes weren't being replicated.
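
A direct way to confirm that is to ask each node individually whether it holds a given session. A rough CLI sketch using the php-memcached client purely as a probe (the key naming is an assumption on my part: php-memcache appears to store sessions under the bare session id, while php-memcached prefixes them with "memc.sess.key." by default):

<?php
// Probe each memcached node directly for a session, bypassing any client-side
// hashing or failover, to see which node(s) actually hold a copy.
if (!isset($argv[1])) {
    exit("usage: php check-session.php <session-id>\n");
}
$sessionId = $argv[1];
$nodes = ['172.29.104.13', '172.29.104.14'];

foreach ($nodes as $host) {
    $client = new Memcached();               // one client per node, no pooling
    $client->addServer($host, 11211);
    // Check both key layouts: bare id (php-memcache) and prefixed (php-memcached).
    foreach ([$sessionId, 'memc.sess.key.' . $sessionId] as $key) {
        $client->get($key);
        $hit = $client->getResultCode() === Memcached::RES_SUCCESS;
        printf("%-15s %-50s %s\n", $host, $key, $hit ? 'HIT' : 'miss');
    }
}

If the surviving node reports a miss for a session created while both nodes were up, the redundant write never happened.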

I have not looked at putting session persistence back on the F5. As a last resort I could do so, and clients that were pinned to the lost member would have to reauthenticate.

Joe

1 Answer


It is much simpler to have the clients store session state for you in cookies. This means no session store at all server-side, only a few microseconds of CPU usage to decrypt and verify the cookie. As CPU is by far the most abundant resource in a datacenter, this scheme will perform much better than a lookup from memcached or any other server-side session store.

See https://github.com/ascorbic/php-stateless-cookies for one implementation; there are many others kicking around. Note the session data should be encrypted but must be authenticated via an HMAC or an AEAD cipher. Do not write this code yourself unless you are a cryptographer; use a well-vetted crypto library.

Facebook uses this technique so any server can answer any user request, even if the user session started in another data center.
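
For illustration only, here is a minimal sketch of the idea using PHP's bundled sodium extension (authenticated encryption); it is not the API of the library linked above, and key management, expiry enforcement and cookie-size limits are deliberately left out:

<?php
// Stateless session sketch: the cookie itself carries the (encrypted and
// authenticated) session data, so no server-side session store is needed.
$key = sodium_crypto_secretbox_keygen();      // in practice: one persistent 32-byte secret shared by every web server

function sealSession(array $data, string $key): string
{
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $box   = sodium_crypto_secretbox(json_encode($data), $nonce, $key);
    return base64_encode($nonce . $box);      // this string becomes the cookie value
}

function openSession(string $cookie, string $key): ?array
{
    $raw = base64_decode($cookie, true);
    if ($raw === false || strlen($raw) <= SODIUM_CRYPTO_SECRETBOX_NONCEBYTES) {
        return null;                          // malformed cookie
    }
    $nonce = substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $plain = sodium_crypto_secretbox_open(substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES), $nonce, $key);
    return $plain === false ? null : json_decode($plain, true);   // false means tampered or wrong key
}

// Any pool member holding the shared key can verify the cookie without touching shared state.
$cookie  = sealSession(['uid' => 42, 'expires' => time() + 3600], $key);
$session = openSession($cookie, $key);

Everything the sketch leaves out (expiry enforcement, key rotation, keeping the payload small) is exactly why a vetted library is the better choice.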

rmalayter
  • Again thanks for the tip, but it's not practical to change the authentication method for the application. There are already a number of clients (i.e., daemons) that would have to begin extracting cookie information and returning it, etc. – Joe Aug 15 '14 at 18:33