  1. Create a JuiceFS network mount (like an NFS mount) on the host machine
juicefs mount -d redis://10.111.100.91:6379/0 /mnt/jfs-mount
2021/12/10 20:05:25.914969 juicefs[33027] <INFO>: Meta address: redis://10.111.100.91:6379/0
2021/12/10 20:05:25.916720 juicefs[33027] <WARNING>: AOF is not enabled, you may lose data if Redis is not shutdown properly.
2021/12/10 20:05:25.917140 juicefs[33027] <INFO>: Ping redis: 329.765µs
2021/12/10 20:05:25.917626 juicefs[33027] <INFO>: Data use minio://10.102.8.247:9000/test/minio/
2021/12/10 20:05:25.917812 juicefs[33027] <INFO>: Disk cache (/var/jfsCache/3680a8cc-a3f7-40a9-ac6f-fc79505bb728/): capacity (1024 MB), free ratio (10%), max pending pages (15)
2021/12/10 20:05:26.419836 juicefs[33027] <INFO>: OK, minio is ready at /mnt/jfs-mount
[root@kube-node-1 juice]#
[root@kube-node-1 juice]# df -h|grep jfs-mount
JuiceFS:minio   1.0P  8.4M  1.0P    1% /mnt/jfs-mount
  2. Bind mount it to an existing directory, and make the mount shared
[root@kube-node-1 juice]# mkdir /mnt/jfs-bind
[root@kube-node-1 juice]# mount --bind --make-shared /mnt/jfs-mount /mnt/jfs-bind
[root@kube-node-1 juice]# cat /proc/self/mountinfo |grep jfs | sed 's/ - .*//'
152 40 0:219 / /mnt/jfs-mount rw,relatime shared:117
155 40 0:219 / /mnt/jfs-bind rw,relatime shared:117
[root@kube-node-1 juice]#
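For reference, the `shared:117` tag on both lines is the propagation peer-group ID described in sharedsubtree.txt; both mounts being in the same peer group is why I expect propagation between them. A minimal Python sketch of how those mountinfo fields break down (the sample lines are the two from the output above, after the same `sed` trimming; field layout per proc(5)):

```python
# Sketch: pull mount ID, mount point, and propagation peer group out of
# /proc/self/mountinfo lines (field layout documented in proc(5)).

def parse_mountinfo(line):
    # Fields: mount-id parent-id major:minor root mount-point options [optional tags...]
    fields = line.split(" - ")[0].split()
    entry = {
        "mount_id": int(fields[0]),
        "parent_id": int(fields[1]),
        "mount_point": fields[4],
        "peer_group": None,
    }
    for tag in fields[6:]:            # optional fields, e.g. "shared:117"
        if tag.startswith("shared:"):
            entry["peer_group"] = int(tag.split(":")[1])
    return entry

# The two lines from the transcript above:
sample = """\
152 40 0:219 / /mnt/jfs-mount rw,relatime shared:117
155 40 0:219 / /mnt/jfs-bind rw,relatime shared:117"""

entries = [parse_mountinfo(line) for line in sample.splitlines()]
# Same peer group on both mounts -> they are propagation peers.
print(entries[0]["peer_group"] == entries[1]["peer_group"])
```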
  3. Kill the FUSE process, so that the mount point stops working
[root@kube-node-1 juice]# ps -ef |grep juicefs
root      33043      1  0 20:05 ?        00:00:00 juicefs mount -d redis://10.111.100.91:6379/0 /mnt/jfs-mount
root      34338 129878  0 20:06 pts/1    00:00:00 grep --color=auto juicefs
[root@kube-node-1 juice]# kill -9 33043
[root@kube-node-1 juice]# ls /mnt/jfs-mount
ls: cannot access /mnt/jfs-mount: Transport endpoint is not connected
[root@kube-node-1 juice]# ls /mnt/jfs-bind
ls: cannot access /mnt/jfs-bind: Transport endpoint is not connected
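For context, the "Transport endpoint is not connected" error above is ENOTCONN, which is what a FUSE mount reports once its daemon is gone, so mount health can be probed with a plain stat(). A minimal Python sketch (the helper name is mine, not part of any JuiceFS tooling):

```python
import errno
import os

def mount_is_healthy(path):
    """Return False if stat() on the path fails with ENOTCONN,
    which is what a dead FUSE mount (killed daemon) reports."""
    try:
        os.stat(path)
        return True
    except OSError as exc:
        if exc.errno == errno.ENOTCONN:
            return False
        raise

# A normal directory stats fine; a dead FUSE mount like the one
# above would return False here.
print(mount_is_healthy(os.getcwd()))
```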
  4. Recover the source mount point of the bind (/mnt/jfs-mount), and check the target mount point
[root@kube-node-1 juice]# umount /mnt/jfs-mount
[root@kube-node-1 juice]# juicefs mount -d redis://10.111.100.91:6379/0 /mnt/jfs-mount
2021/12/10 20:07:19.357752 juicefs[35185] <INFO>: Meta address: redis://10.111.100.91:6379/0
2021/12/10 20:07:19.359160 juicefs[35185] <WARNING>: AOF is not enabled, you may lose data if Redis is not shutdown properly.
2021/12/10 20:07:19.359528 juicefs[35185] <INFO>: Ping redis: 340.317µs
2021/12/10 20:07:19.360107 juicefs[35185] <INFO>: Data use minio://10.102.8.247:9000/test/minio/
2021/12/10 20:07:19.360264 juicefs[35185] <INFO>: Disk cache (/var/jfsCache/3680a8cc-a3f7-40a9-ac6f-fc79505bb728/): capacity (1024 MB), free ratio (10%), max pending pages (15)
2021/12/10 20:07:19.862758 juicefs[35185] <INFO>: OK, minio is ready at /mnt/jfs-mount
[root@kube-node-1 juice]# ls /mnt/jfs-bind
ls: cannot access /mnt/jfs-bind: Transport endpoint is not connected

I thought /mnt/jfs-bind would be recovered automatically, because the remount action should be propagated to the bind target mount. But this doesn't seem to match the behavior described in https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt . I wonder why?

A further question: if I want the bind mount to recover automatically when the JuiceFS source mount point is recovered, is there any way to do so?
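One userspace workaround I can imagine, in case kernel propagation really doesn't cover replacing the source mount: a small watchdog that lazily unmounts the stale bind target and re-creates the bind once the source is healthy again. This is only a sketch under that assumption; the function names are hypothetical, and the mount/umount commands are the same ones used in the steps above:

```python
import errno
import os
import subprocess

SOURCE = "/mnt/jfs-mount"   # paths from the transcript above
TARGET = "/mnt/jfs-bind"

def is_healthy(path):
    """stat() succeeds on a live mount; a dead FUSE mount raises ENOTCONN."""
    try:
        os.stat(path)
        return True
    except OSError as exc:
        if exc.errno == errno.ENOTCONN:
            return False
        raise

def plan_recovery(source_ok, target_ok):
    """Pure decision logic: which commands to run for the two health states."""
    if source_ok and not target_ok:
        return [
            ["umount", "-l", TARGET],    # lazily drop the stale bind target
            ["mount", "--bind", "--make-shared", SOURCE, TARGET],  # re-bind
        ]
    return []                            # nothing to do otherwise

def watchdog_step():
    """One iteration of the watchdog; run this periodically (needs root)."""
    for cmd in plan_recovery(is_healthy(SOURCE), is_healthy(TARGET)):
        subprocess.run(cmd, check=True)

# When the source is back but the bind target is still dead, the plan is
# to lazily unmount the target and bind it again:
print(plan_recovery(True, False))
```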

che yang
