I'm wondering how Docker manages a Unix socket when it's shared across containers, and how that affects performance compared to just using TCP.
What I'm trying to accomplish is setting up docker-compose to build php-fpm, nginx and mysql containers, and configuring nginx to reach php-fpm with fastcgi_pass over a Unix socket instead of TCP.
This is because I'm deploying the set of containers to AWS ECS with Fargate. Since Fargate shares the same ENI across the containers in a task, I can't reference a specific container by hostname, but I can share a volume between containers in order to expose the php-fpm socket.
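To make the setup concrete, here's a minimal sketch of what I mean (service names, the `php_socket` volume, and the `/var/run/php/www.sock` path are just placeholders I picked for illustration). The compose file shares a named volume between the two containers, php-fpm is configured to listen on a socket inside that volume, and nginx points fastcgi_pass at the same path:

```yaml
# docker-compose.yml (sketch)
version: "3.8"
services:
  php-fpm:
    build: ./php-fpm
    volumes:
      - php_socket:/var/run/php   # php-fpm creates www.sock here
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - php_socket:/var/run/php   # nginx sees the same socket file
    depends_on:
      - php-fpm
volumes:
  php_socket:                     # named volume holding only the socket
```

```nginx
# nginx vhost fragment (sketch)
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php/www.sock;
}
```

In the php-fpm pool config this assumes `listen = /var/run/php/www.sock`, with `listen.owner`/`listen.group`/`listen.mode` set so the nginx worker user can open the socket.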
By the way, I know I could set up the nginx-to-php-fpm communication via TCP by using localhost on ECS, and switch it to container links under docker-compose, with the caveat that I would then need EC2-based ECS (not Fargate).
Besides learning how Docker manages shared sockets, I want to know if there are any downsides to using the Unix socket on ECS, and whether the volume driver Docker uses would impact performance.