
I am attempting to mount an AWS S3 bucket to a folder on a fresh AWS EC2 instance running Ubuntu 18.04.

I've been following the instructions at https://cloud.netapp.com/blog/amazon-s3-as-a-file-system and https://www.nakivo.com/blog/mount-amazon-s3-as-a-drive-how-to-guide/. I've also read the question "Mounting an S3 bucket onto a AWS Ubuntu instance issues".

When I run the command to mount the bucket, I don't get any errors, but the new mount doesn't appear in the list of currently mounted filesystems, e.g.:

ubuntu@ip-X.X.X.X:~$ sudo s3fs -o allow_other alextestbackup ~/s3-bucket/ -o passwd_file=~/.passwd-s3fs
ubuntu@ip-X.X.X.X:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=486512k,nr_inodes=121628,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=100208k,mode=755)
/dev/xvda1 on / type ext4 (rw,relatime,discard)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13950)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
/var/lib/snapd/snaps/snapd_14066.snap on /snap/snapd/14066 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/amazon-ssm-agent_4046.snap on /snap/amazon-ssm-agent/4046 type squashfs (ro,nodev,relatime,x-gdu.hide)
/var/lib/snapd/snaps/core18_2253.snap on /snap/core18/2253 type squashfs (ro,nodev,relatime,x-gdu.hide)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=100204k,mode=700,uid=1000,gid=1000)

I've attempted various combinations for the mount command:

sudo /usr/bin/s3fs -o allow_other alextestbackup /home/ubuntu/s3-bucket/
s3fs -o allow_other alextestbackup ~/s3-bucket/ -o passwd_file=~/.passwd-s3fs
s3fs alextestbackup ~/s3-bucket/ -o passwd_file=~/.passwd-s3fs
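When s3fs fails silently like this, one way to surface the actual error (my suggestion, not from the original post) is to run it in the foreground with debugging enabled, so messages are printed to the terminal instead of being lost when the process daemonizes. The bucket and paths below are the ones from the question:

```shell
# Run s3fs in the foreground (-f) with verbose debug output.
# Use an absolute passwd_file path: under sudo, "~" may not
# expand to the home directory you expect.
sudo s3fs alextestbackup /home/ubuntu/s3-bucket/ \
    -o passwd_file=/home/ubuntu/.passwd-s3fs \
    -o allow_other \
    -f -o dbglevel=info -o curldbg
```

Credential problems (bad keys, wrong passwd-file permissions) and S3 connectivity errors typically show up directly in this output.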

I've also tested putting incorrect credentials into .passwd-s3fs, and I don't get any warning that they are wrong.

To me it looks like the credentials are not being picked up for some reason, or there's something else I'm missing.
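One thing worth checking on the credentials front (my suggestion, not from the original post): s3fs requires the password file to contain `ACCESS_KEY_ID:SECRET_ACCESS_KEY` on a single line and to be readable only by its owner, and it will reject the file otherwise. The keys below are placeholders:

```shell
# The s3fs password file holds "ACCESS_KEY_ID:SECRET_ACCESS_KEY" on one line.
echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > ~/.passwd-s3fs  # placeholder keys
# It must be readable only by its owner, or s3fs refuses to use it.
chmod 600 ~/.passwd-s3fs
stat -c '%a' ~/.passwd-s3fs   # prints: 600
```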

Any help much appreciated, and if you need any log file info, just let me know what to post here.

Dave M

2 Answers


I stopped/started the AWS EC2 instance, and then ran:

sudo /usr/bin/s3fs -o allow_other alextestbackup /home/ubuntu/s3-bucket/

and now I see the directory mounted (last few lines of the mount output):

lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=100204k,mode=700,uid=1000,gid=1000)
s3fs on /home/ubuntu/s3-bucket type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

and I can confirm that when I write a file to the s3-bucket folder, the file appears in the AWS S3 console.

So... no idea what was happening before; perhaps a restart was simply needed after installing s3fs.


I have S3 mounted on EC2 (Ubuntu 18.04) but was never able to get it mounted using the command line. When I added an entry to the /etc/fstab file and remounted (sudo mount -a), it worked fine. I also ensured my EC2 instance had an IAM role that included the appropriate S3 read/write/delete policies.
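For reference, an /etc/fstab entry for s3fs looks roughly like the following (bucket name and mount point taken from the question above; the exact option set is an assumption and may need adjusting for your setup):

```
alextestbackup /home/ubuntu/s3-bucket fuse.s3fs _netdev,allow_other,passwd_file=/home/ubuntu/.passwd-s3fs 0 0
```

After adding the line, `sudo mount -a` applies it without a reboot. If you rely on the instance's IAM role rather than a password file, s3fs supports an `iam_role=auto` option in place of `passwd_file`.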

Srini has a good explanation in the link below (although only the fstab entry worked for me).

https://serverfault.com/a/1063745/981157

max