
Wednesday, August 29, 2012

S3fs - LinodeWiki

Mounting Amazon S3 as a local filesystem via FUSE

Get your AWS account information

If you haven't already, sign up with Amazon Web Services and enable S3 for your account.
On the AWS page, hover over the "Your Account" tab and select "Security Credentials". Find the section labeled "Your access keys" and make a note of your Access Key ID, then click the link labeled "Show" under "Secret Access Keys" and note that value, too. You'll need both pieces of information for the s3 commands you'll be using later.

Set up s3cmd and create a bucket

S3 stores your data in buckets, which must be created through the S3 API; s3fs can't create buckets for you. If you haven't already created a bucket in your S3 account, you can use the s3cmd program to set one up. First, configure s3cmd (the bucket-creation command follows the transcript below):
 $ s3cmd --configure
 
 Enter new values or accept defaults in brackets with Enter.
 Refer to user manual for detailed description of all options.
 
 Access key and Secret key are your identifiers for Amazon S3
 Access Key: YourID
 Secret Key: YourSecret
 
 Encryption password is used to protect your files from reading
 by unauthorized persons while in transfer to S3
 Encryption password: gpgpass
 Path to GPG program [/usr/bin/gpg]: 
 
 When using secure HTTPS protocol all communication with Amazon S3
 servers is protected from 3rd party eavesdropping. This method is
 slower than plain HTTP and can't be used if you're behind a proxy
 Use HTTPS protocol [No]: Yes
 
 New settings:
 Access Key: YourID
 Secret Key: YourSecret
 Encryption password: gpgpass
 Path to GPG program: /usr/bin/gpg
 Use HTTPS protocol: True
 HTTP Proxy server name: 
 HTTP Proxy server port: 0
 Test access with supplied credentials? [Y/n] n
 
 Save settings? [y/N] y
 Configuration saved to '/home/user/.s3cfg'
Note: your AWS Access ID and Secret will be stored in cleartext in .s3cfg. Make sure to set permissions on the file to be as restrictive as possible, and keep the file safe!
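For example, to lock the file down so only your user can read it (a minimal precaution, assuming the default location shown above):
 chmod 600 ~/.s3cfg
With s3cmd configured, create the bucket itself. "mybucket" here is just a placeholder; S3 bucket names must be globally unique:
 s3cmd mb s3://mybucket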

Compile S3FS

S3FS isn't packaged as a binary with any distribution I'm aware of, but it's relatively easy to compile. On a Debian Lenny system, you'll need a few packages to compile s3fs:
 sudo apt-get install make g++ libcurl4-openssl-dev libssl-dev libxml2-dev libfuse-dev
Grab the source off Google Code:
 wget http://s3fs.googlecode.com/files/s3fs-r177-source.tar.gz
Unpack the source and build the binary:
 tar xzvf s3fs-r177-source.tar.gz
 cd s3fs
 make
Running make may return a warning or two, but should end with "Ok!". If not, you probably missed one of the dependency libraries above.
Copy the resulting binary to somewhere in your path; I used /usr/local/bin:
 sudo cp s3fs /usr/local/bin
If you built the binary on one system and want to run it on another, the second system will still need the libcurl and FUSE libraries installed (you shouldn't need to do this on the machine where you compiled):
 sudo apt-get install fuse-utils libcurl3
Test the command by running it:
 s3fs
If you get warnings about missing libcurl or libfuse, review your steps to make sure all the dependent shared objects are installed.
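If it isn't obvious which library is missing, one general way to check (standard Linux tooling, nothing s3fs-specific) is to ask the dynamic linker which shared objects the binary can't resolve:
 ldd /usr/local/bin/s3fs | grep "not found"
Any output names a library you still need to install.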

Using S3FS

If you want a regular user to be able to mount S3 shares, add them to the fuse group so they can read and write /dev/fuse:
 sudo usermod -aG fuse username
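The examples below assume a mount point at /mnt/s3; if it doesn't exist yet, create it first:
 sudo mkdir -p /mnt/s3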
Now, as a user with fuse access, test a simple mount:
 s3fs mybucket -o accessKeyID=youraccesskey -o secretAccessKey=yoursecret -o url=https://s3.amazonaws.com /mnt/s3
You should be able to read and write files to and from /mnt/s3. If you write a file with S3FS, try confirming it with s3cmd:
 s3cmd ls s3://mybucket
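When you're done testing, unmount with the standard FUSE helper (or a plain umount as root):
 fusermount -u /mnt/s3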

Setting up automatic mounts

You can have s3fs mount your S3 shares automatically. To do this, create a file called /etc/passwd-s3fs. Make the permissions on this file as restrictive as possible: only users who will be mounting S3 filesystems should be able to read the file. I have my /etc/passwd-s3fs file owned by root, group root, with 400 permission because I only use root to mount the shares.
The format of the file is your Access Key ID and your Secret Key separated by a colon with no spaces between:
 AccessKeyID:SecretKey
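For example, to match the root-only setup described above:
 sudo chown root:root /etc/passwd-s3fs
 sudo chmod 400 /etc/passwd-s3fs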
To have the share mount when your Linode boots, add it to /etc/fstab:
 s3fs#mybucket /mnt/s3 fuse url=https://s3.amazonaws.com 0 0
Now you should be able to mount the filesystem with a regular mount command:
 sudo mount /mnt/s3
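A quick way to confirm the mount took effect (ordinary Linux tooling, nothing s3fs-specific):
 mount | grep s3fs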

More mount options

By default, FUSE locks access to a mounted filesystem down to whoever ran the mount command. So, if you mount a filesystem as user foo, only foo will be able to access it; even root can't get to it! If you want to put an S3 filesystem in /etc/fstab and have root mount it at boot, but have a regular user or group own the filesystem, you can set uid and/or gid in /etc/fstab:
 s3fs#mybucket /mnt/s3 fuse uid=500,gid=500,url=https://s3.amazonaws.com 0 0
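The uid and gid values are numeric. If you don't know a user's IDs offhand, you can look them up ("username" is a placeholder here):
 id -u username
 id -g username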
If you want everyone on your Linode to have access to the filesystem and use Unix permissions for security instead of Fuse, you can pass a special option in /etc/fstab:
 s3fs#mybucket /mnt/s3 fuse allow_other,url=https://s3.amazonaws.com 0 0
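Note that allow_other is a privileged FUSE option: when the mount is performed by a non-root user, user_allow_other must be enabled in /etc/fuse.conf first. One way to add it, assuming the line isn't already there:
 echo user_allow_other | sudo tee -a /etc/fuse.conf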

HTTPS

By default, s3cmd and s3fs use plain HTTP to talk to Amazon Web Services, so your data and request headers (including your Access Key ID) travel unencrypted. You need to protect your traffic against snooping. You'll notice that when I configured s3cmd, I said to use HTTPS, and that all of the s3fs commands above include a url option pointing at https://s3.amazonaws.com. This ensures that your transmissions to and from S3 are encrypted in transit.

Great, now what?

What good is having S3 locally mounted? Currently, I'm using it to store my MP3s and photos. I stream the music back to myself with MPD and display photos with WordPress's NextGEN Gallery plugin. I pay about $8/mo to store 30GB on S3 and shuffle lots of bits around.
I have tested S3 as a backing store for BoxBackup, and that REALLY doesn't work. BoxBackup expects storage to be locally attached and dislikes latency in its datastore.
I have also tested S3 as a backing store for Bacula, which works very well. Look for a new Wiki page later detailing how to best configure Bacula storage on S3.
Please note: if you're using CentOS, the only functional solution I have found was to compile FUSE and s3fs from source. Directions are on this page, in comment 8.
