
I'm trying to use s3fs to mount an S3 bucket on a standard AWS Amazon Linux AMI (with all the necessary dependencies installed). However, following this tutorial, when I run `s3fs mybucketname -o allow_other myfolder` (or variations thereof), I get:

s3fs: could not determine how to establish security credentials

I've tried:

  • creating passwd-s3fs in /etc, with the format: accessKeyId:secretAccessKey
  • creating .passwd-s3fs in the home folder
  • setting AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY environment variables
  • Opening up permissions on passwd-s3fs as far as possible (I've tightened them back up since posting this question)
  • Giving the IAM user associated with this Access Key Administrator Access
  • Successfully connecting via a local client with the same Access & Secret Access Key details
  • Generally double checking everything for typos etc
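
For reference, this is the shape of the per-user credentials file s3fs looks for, including the permissions it insists on. The key and secret are placeholders, and a temp path is used here so the sketch is safe to run; the real file would be `~/.passwd-s3fs` (or `/etc/passwd-s3fs`, mode 640):

```shell
# Sketch of the per-user credentials file s3fs reads; ACCESSKEYID and
# SECRETACCESSKEY are placeholders. A temp path stands in for ~/.passwd-s3fs.
pwfile=$(mktemp)
printf 'ACCESSKEYID:SECRETACCESSKEY\n' > "$pwfile"
chmod 600 "$pwfile"   # s3fs refuses a password file readable by group/other
ls -l "$pwfile"
```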

I've a feeling I'm doing something dumb AWS side (I'm totally new to AWS), is there something specific I need to apply to the S3 bucket Permissions, Policy etc? This is driving me mad, help much appreciated!

Chris Reynolds
  • It's unlikely to be anything in IAM, bucket policy, etc... sounds like an issue on the local machine. You could try using `strace` to observe the system calls `s3fs` is making when you invoke it, and probably discover something useful from that. – Michael - sqlbot Oct 14 '14 at 03:33
  • The error "could not determine how to establish security credentials" occurs when the access key and password are not available. Maybe check the file permissions on .passwd-s3fs? –  Jul 07 '15 at 17:27
  • 1
    Hey man you got any solutions? – Dave Ranjan Apr 23 '17 at 20:14

2 Answers


I found the best way to resolve this was to use the `-o iam_role` option to pass in the name of the instance's IAM role. In addition, if you're not in the us-east-1 region, you may also need to specify the `endpoint` and `url` options.

E.g.

s3fs mybucketname myfolder -o allow_other -o iam_role=${iam_role_name} -o endpoint=${aws_region} -o url=https://s3-${aws_region}.amazonaws.com

(This is the command I used to mount an S3 bucket on an ECS container host, i.e. an EC2 instance deployed as part of an ECS cluster.)
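
To make the moving parts explicit, here's the same command assembled from variables. The role name, region, bucket, and mount point are all placeholders, and the command is echoed rather than executed so the sketch can be run anywhere (run the printed command for real on the instance itself, where the role credentials come from instance metadata):

```shell
# All values below are placeholders for your own role, region, bucket, and path.
iam_role_name="my-instance-role"
aws_region="eu-west-1"
url="https://s3-${aws_region}.amazonaws.com"   # full regional endpoint host

# Echo the assembled mount command rather than running it here:
echo s3fs mybucketname myfolder -o allow_other \
    -o iam_role="$iam_role_name" -o endpoint="$aws_region" -o url="$url"
```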

chrisbunney

Although this question is old, I ran into the same problem, so I figured I'd post the solution that worked for me in case someone else hits the same issue.

sudo apt purge s3fs -y
sudo apt update -y && sudo apt upgrade -y
sudo apt install s3fs -y

In the current user's home directory, create a file named .passwd-s3fs containing your IAM credentials in the format:

key:secret

e.g.

kjewndkjsn8387:emkwlmskld8/knsdknjnsjnsdk

# 1. Use a text editor to add your key:secret pair and save the file
vim ~/.passwd-s3fs
# 2. Restrict permissions on .passwd-s3fs (s3fs rejects a password file
#    that other users can read)
chmod 0400 ~/.passwd-s3fs
# 3. Mount the bucket (or a path within it) onto a local directory, e.g.:
s3fs my_bucket:/the_remote_path /local_path
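
A common stumbling block here is the file's format: in the simple per-user case it should hold a single `accessKeyId:secretAccessKey` pair per line (s3fs also accepts a `bucket:key:secret` form). A quick check of the simple form, using the placeholder credentials from above and a temp path:

```shell
# Check a passwd-s3fs line for the simple key:secret form (exactly one colon).
# The path and credentials are placeholders.
pwfile=$(mktemp)
printf 'kjewndkjsn8387:emkwlmskld8/knsdknjnsjnsdk\n' > "$pwfile"
if grep -qE '^[^:]+:[^:]+$' "$pwfile"; then
    echo "format looks OK"
else
    echo "expected a single key:secret pair per line"
fi
```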
Jealie