Recent versions of Docker (and any version of nvidia-docker, now the NVIDIA Container Toolkit) allow more or less direct access to the host GPU from within containers, with full access to the CUDA APIs. This is very convenient when deploying complex machine learning inference servers.
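For concreteness, here is roughly what I mean, a sketch assuming the NVIDIA Container Toolkit and an NVIDIA driver are installed on the host (the image tag is just an illustrative example):

```shell
# Run a CUDA-enabled container with access to all host GPUs.
# --gpus all is the native Docker flag (Docker 19.03+); the toolkit
# injects the driver libraries and device nodes into the container.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If everything is wired up, `nvidia-smi` inside the container reports the host GPUs, and this only works on Linux hosts as far as I know.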
However, as far as I can tell, only Linux hosts are currently supported.
Why haven't Microsoft and Apple stepped up and provided the same level of support? Put differently: what mechanism is Linux using here, and why is it apparently so hard to replicate on other operating systems?