Well, memcached itself has no permission configuration whatsoever (just the permissions of the listening socket). You simply boot the daemon, and every object sent to it is stored under its key. There is no distinction between the users or machines that send or retrieve the data, and you can even get key collisions between them.
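To make that concrete, here is a minimal sketch (assuming a memcached daemon on 127.0.0.1:11211, the default port) that speaks the plain text protocol directly over a socket. Nothing resembling a credential is ever sent, and a second client using the same key simply reads or overwrites the first one's value:

```python
import socket

def memcached_command(command: bytes, host: str = "127.0.0.1", port: int = 11211) -> bytes:
    """Send one command over the memcached text protocol and return the raw reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(command)
        return sock.recv(4096)

# "Client A" stores a value under the key "session:42".
# Protocol: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
print(memcached_command(b"set session:42 0 300 5\r\nhello\r\n"))   # b'STORED\r\n'

# "Client B" -- any other process or machine that can reach the socket --
# reads it back; no authentication is involved at any point.
print(memcached_command(b"get session:42\r\n"))                    # VALUE session:42 ... hello ... END

# "Client B" can just as easily overwrite it (a key collision from A's point of view).
print(memcached_command(b"set session:42 0 300 6\r\nstolen\r\n"))  # b'STORED\r\n'
```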
Memcached was designed to be simple and small, forcing the application layer to think about everything else. It was also originally designed to run on the same machine as the application, and that design has not changed as of 2013.
All that said, if a hosting provider gives you a socket on a different machine to connect to memcached directly, you should stop using that hosting provider straight away. That is just plain unwise. Hosting providers that offer memcached will either run a separate memcached daemon for each user (the daemon itself is tiny), put a reverse proxy (with authentication) in front of it, or build a memcached-compatible cache that does not really run memcached.
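For the "one daemon per user" approach, the only access control is the one mentioned at the top: the permissions of the listening socket. A sketch of what the client side looks like when each user gets a private daemon on a unix domain socket (the socket path, and the assumption that the daemon was started with something like `memcached -s /home/alice/memcached.sock`, are purely illustrative):

```python
import socket

# Hypothetical per-user socket path. The daemon would have been started with
# e.g. "memcached -s /home/alice/memcached.sock", so ordinary filesystem
# permissions on the socket and its directory decide who may connect at all.
SOCKET_PATH = "/home/alice/memcached.sock"

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCKET_PATH)          # fails with a permission error if the OS says no
    sock.sendall(b"set greeting 0 60 5\r\nhello\r\n")
    print(sock.recv(4096))             # b'STORED\r\n'
```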
If you look at what AWS does:
A Memcached layer is an AWS OpsWorks layer that provides a blueprint for instances that function as Memcached servers
i.e. their ElastiCache can be used as a memcached cache, but it is not a bare memcached daemon listening on an open socket. And (from the same article) you can see that access to your cache is restricted:
Custom security groups
This setting appears if you chose to not automatically associate a built-in AWS OpsWorks security group with your layers. You must specify which security group to associate with the layer. For more information, see Create a New Stack.
Therefore, using memcached as a networked solution, notably in a hosting environment, is simply unwise. But most hosting environments advertising memcached are not really exposing memcached directly; they place a layer in front of it to add permissions.
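A rough sketch of what such a "layer in front" amounts to: an authenticating proxy that only forwards traffic to a memcached bound to localhost once the client has presented a credential. This is not how any particular provider implements it; the port numbers and the token check are assumptions for illustration only.

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 21211)       # the "public" side customers connect to (assumed port)
BACKEND_ADDR = ("127.0.0.1", 11211)    # the real, unprotected memcached, reachable only locally
SECRET_TOKEN = b"s3cret"               # hypothetical per-customer credential

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    # The very first line from the client must be "auth <token>\r\n";
    # everything after that is forwarded verbatim to memcached.
    greeting = client.recv(1024)
    if greeting.strip() != b"auth " + SECRET_TOKEN:
        client.sendall(b"CLIENT_ERROR authentication required\r\n")
        client.close()
        return
    client.sendall(b"OK\r\n")
    backend = socket.create_connection(BACKEND_ADDR)
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Whether the provider does this with a proxy, a per-customer daemon, or a reimplementation, the point is the same: the access control lives in the layer they add, not in memcached itself.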