Introduction
After reading this article, you will be able to set up an FTP server on an EC2 instance that uploads/downloads content directly to/from S3.
Background
In an effort to reduce operational overhead, I was looking for a way to leverage Amazon Web Services to create an FTP server that would use S3 as the backend for storage. The current hosting facility 'bundles' services, of which bandwidth utilization carries the highest price. Since there is no charge for inbound data transfer into S3, it makes sense to go this route. However, after spending several days looking for a detailed solution on how to correctly set this configuration up, I went off on my own and worked it out. Below are the steps I took:
Step 1: Setting up the EC2 instance
Configure the EC2 security group
Before you launch your EC2 instances, I recommend creating a new security group. Use the following settings:
TCP Port(s) Range | IP Range         | Comment
20-21             | 0.0.0.0/0        | FTP ports
15393-15592       | 0.0.0.0/0        | Passive port range
22                | <your_access_ip> | SSH port (we'll remove this later)
12345             | <your_access_ip> | Port you'll change the SSH connection to
80                | 0.0.0.0/0        | If you are running a web server
NOTE: I like to set up a non-default SSH port (in this case 12345; it must be below 65536) to use once the system is set up. Since the server will be open to the world, changing the SSH port helps lock down the system and deter some automated intrusion attempts. After finishing the setup, we'll remove port 22 from the security group.
Add any other ICMP or UDP ports you need for monitoring as well.
Launch an EC2 instance
Launch an Ubuntu 12.04 LTS instance, using the newly created security group from above.
Step 2: Mounting S3 onto the instance
We'll need some 3rd party tools to correctly mount an S3 bucket to the server. I've compiled this information from a number of sites as well as my own trial and error. I must give credit to
http://code.google.com/p/s3fs/wiki/FuseOverAmazon
and
http://michaelaldridge.info/post/12086788604/mounting-s3-within-an-ec2-instance
Without these sites, I would have never been able to get this working!
Update the apt repositories
Connect to your newly launched EC2 instance, change to the root user, and run the following commands:
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb http://us.archive.ubuntu.com/ubuntu/ lucid universe' >> /etc/apt/sources.list
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb-src http://us.archive.ubuntu.com/ubuntu/ lucid universe' >> /etc/apt/sources.list
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb http://us.archive.ubuntu.com/ubuntu/ lucid-updates universe' >> /etc/apt/sources.list
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb-src http://us.archive.ubuntu.com/ubuntu/ lucid-updates universe' >> /etc/apt/sources.list
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb http://us.archive.ubuntu.com/ubuntu/ lucid multiverse' >> /etc/apt/sources.list
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb-src http://us.archive.ubuntu.com/ubuntu/ lucid multiverse' >> /etc/apt/sources.list
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb http://us.archive.ubuntu.com/ubuntu/ lucid-updates multiverse' >> /etc/apt/sources.list
root@ip-xxx-xxx-xxx-xxx:~# echo 'deb-src http://us.archive.ubuntu.com/ubuntu/ lucid-updates multiverse' >> /etc/apt/sources.list
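If you script your setup, the eight echo commands above can be condensed into a loop. A minimal sketch (it writes to a scratch file here so you can try it safely; point SOURCES at /etc/apt/sources.list on the real server — the order of the resulting lines differs from above, which apt does not care about):

```shell
#!/bin/sh
# Sketch: generate the same eight repository lines with nested loops
# instead of eight separate echo commands.
SOURCES=./sources.list.test   # use /etc/apt/sources.list on the real server
: > "$SOURCES"                # start from an empty scratch file
for dist in lucid lucid-updates; do
  for component in universe multiverse; do
    echo "deb http://us.archive.ubuntu.com/ubuntu/ $dist $component" >> "$SOURCES"
    echo "deb-src http://us.archive.ubuntu.com/ubuntu/ $dist $component" >> "$SOURCES"
  done
done
wc -l < "$SOURCES"   # eight lines, matching the commands above
```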
Update apt
root@ip-xxx-xxx-xxx-xxx:~# sudo apt-get update
Install required dependencies
root@ip-xxx-xxx-xxx-xxx:~# sudo apt-get -y install build-essential \
libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support make
Get and install s3fs
root@ip-xxx-xxx-xxx-xxx:~# mkdir /software
root@ip-xxx-xxx-xxx-xxx:~# cd /software
root@ip-xxx-xxx-xxx-xxx:~# wget http://s3fs.googlecode.com/files/s3fs-1.61.tar.gz
root@ip-xxx-xxx-xxx-xxx:~# tar xvzf s3fs-1.61.tar.gz
root@ip-xxx-xxx-xxx-xxx:~# cd s3fs-1.61
root@ip-xxx-xxx-xxx-xxx:~# ./configure --prefix=/usr
root@ip-xxx-xxx-xxx-xxx:~# make
root@ip-xxx-xxx-xxx-xxx:~# make install
root@ip-xxx-xxx-xxx-xxx:~# touch /etc/passwd-s3fs && chmod 640 /etc/passwd-s3fs && echo 'AccessKey:SecretKey' > /etc/passwd-s3fs
Note: Replace AccessKey:SecretKey with your Amazon AWS keys accordingly.
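The order of that one-liner matters: the file is created and locked down to mode 640 before the secret key is written into it, so the credentials never sit in a world-readable file. A sketch you can run against a scratch path (the key pair below is a placeholder, not a real credential):

```shell
#!/bin/sh
# Sketch: create the s3fs credentials file with restrictive permissions
# *before* writing the keys into it. Uses a scratch path and placeholder
# keys; on the real server the path is /etc/passwd-s3fs.
CREDS=./passwd-s3fs.test
touch "$CREDS"
chmod 640 "$CREDS"
echo 'AKIAEXAMPLEKEY:exampleSecretKey' > "$CREDS"   # placeholder values
ls -l "$CREDS"
```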
Create the mount location, and mount the S3 bucket
root@ip-xxx-xxx-xxx-xxx:~# mkdir -p /mnt/ftp
root@ip-xxx-xxx-xxx-xxx:~# /usr/bin/s3fs -o allow_other -o default_acl="public-read" -o use_rrs=1 <your_bucket_to_be_mounted> /mnt/ftp
- Note 1: The bucket must exist before mounting.
- Note 2: The default_acl setting makes everything public-read. For other ACL options, please see the s3fs man page.
- Note 3: use_rrs stores everything that is uploaded in the Reduced Redundancy Storage class (which will save you some money!).
- Note 4: You may need to chmod the mounted directories so your FTP users can write to them (mine were already set, and I don't remember what I did!).
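Also note that a mount done by hand like this does not survive a reboot. Older s3fs releases documented an /etc/fstab entry of roughly this shape (a sketch using the same bucket and options as the command above; check the s3fs wiki for the exact syntax your version supports):

```
s3fs#<your_bucket_to_be_mounted> /mnt/ftp fuse allow_other,default_acl=public-read,use_rrs=1 0 0
```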
Step 3: Setting up vsftpd
Now we'll need to install and configure vsftpd (credit for this goes to
Chris Hough for his response on http://askubuntu.com/questions/239239/ubuntu-12-04-500-oops-vsftpd-refusing-to-run-with-writable-root-inside-chroot).
Update repository listing and apt
root@ip-xxx-xxx-xxx-xxx:~# add-apt-repository ppa:thefrontiergroup/vsftpd
root@ip-xxx-xxx-xxx-xxx:~# apt-get update
Install vsftpd
root@ip-xxx-xxx-xxx-xxx:~# apt-get -y install vsftpd
Configure vsftpd.conf
root@ip-xxx-xxx-xxx-xxx:~# cp /etc/vsftpd.conf /etc/vsftpd.conf.ORIGINAL (just to make a backup in case you need it)
root@ip-xxx-xxx-xxx-xxx:~# vim /etc/vsftpd.conf
Use the following settings:
listen=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
rsa_cert_file=/etc/ssl/private/vsftpd.pem
And add the following to the end of the file:
# Passive support
pasv_enable=YES
# The port range configured in the security group
pasv_min_port=15393
pasv_max_port=15592
# The public IP address of the FTP server
pasv_address=xxx.xxx.xxx.xxx
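A mismatch between the passive range in vsftpd.conf and the range opened in the security group is a classic cause of hanging directory listings, so it is worth sanity-checking. A sketch that pulls the passive settings out of a config file and verifies the range (it writes a small sample config for illustration; point CONF at /etc/vsftpd.conf on the real server):

```shell
#!/bin/sh
# Sketch: verify the passive port range in a vsftpd-style config file is
# consistent. Uses an inline sample config; set CONF=/etc/vsftpd.conf
# on the real server instead.
CONF=./vsftpd.conf.test
cat > "$CONF" <<'EOF'
pasv_enable=YES
pasv_min_port=15393
pasv_max_port=15592
EOF
MIN=$(grep '^pasv_min_port=' "$CONF" | cut -d= -f2)
MAX=$(grep '^pasv_max_port=' "$CONF" | cut -d= -f2)
if [ "$MIN" -le "$MAX" ] && [ "$MAX" -le 65535 ]; then
  echo "passive range OK: $MIN-$MAX ($((MAX - MIN + 1)) ports)"
else
  echo "passive range looks wrong: $MIN-$MAX" >&2
  exit 1
fi
```

Make sure the same range (here 15393-15592, 200 ports) is what you opened in the security group in Step 1.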
If you want to keep users jailed to their home directory (recommended), then add the following as well (newer vsftpd builds refuse to start when a chrooted user's root directory is writable; this option, provided by the PPA build installed above, allows it):
# Allow chrooted users a writable root directory
allow_writeable_chroot=YES
Restart vsftpd
root@ip-xxx-xxx-xxx-xxx:~# service vsftpd restart
Create local users to access FTP
Now that everything is setup, we need to create some users who will have access to use the FTP server. These users will be created on the local EC2 instance, but will have their home directory set to the mounted S3 bucket (or directories below the bucket). It is recommended to create sub-directories below the mounted S3 bucket, and use those as the home directory, so they will not have access to other users directories.
Create local users
root@ip-xxx-xxx-xxx-xxx:~# useradd -d /mnt/ftp/<directory> -s /sbin/nologin <ftp_user_name>
root@ip-xxx-xxx-xxx-xxx:~# passwd <ftp_user_name>
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Update /etc/shells
root@ip-xxx-xxx-xxx-xxx:~# echo '/sbin/nologin' >> /etc/shells
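One caveat: the >> append above adds a duplicate line every time it is re-run. A slightly more careful sketch that only appends /sbin/nologin when it is missing (shown against a scratch copy rather than the real /etc/shells):

```shell
#!/bin/sh
# Sketch: append /sbin/nologin to a shells file only if it is not already
# listed, so re-running the setup does not create duplicate entries.
SHELLS=./shells.test                       # use /etc/shells on the real server
printf '/bin/sh\n/bin/bash\n' > "$SHELLS"  # sample starting content
add_shell() {
  # -x matches the whole line, so /sbin/nologin2 would not count as a hit
  grep -qx "$1" "$SHELLS" || echo "$1" >> "$SHELLS"
}
add_shell /sbin/nologin
add_shell /sbin/nologin   # second call is a no-op
cat "$SHELLS"
```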
Congratulations! You can now connect to this server via FTP using any of the users/passwords created above!
Change the SSH port
To provide a bit of added security, we'll change the SSH port and remove it from the security group.
Update the sshd_config file
root@ip-xxx-xxx-xxx-xxx:~# vim /etc/ssh/sshd_config
- Change the port to the non-default port you configured in the security group.
- If required, change PasswordAuthentication to yes.
- Restart ssh:
root@ip-xxx-xxx-xxx-xxx:~# service ssh restart
- Remove the rule for port 22 access from the security group.
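Editing sshd_config by hand works fine; if you script your setup, the same change can be made with sed. A sketch against a scratch copy (12345 is the example port from the security group table; substitute your own):

```shell
#!/bin/sh
# Sketch: switch the Port directive in an sshd_config-style file.
# Operates on a scratch copy; use /etc/ssh/sshd_config on the real server,
# then restart ssh for the change to take effect.
CONF=./sshd_config.test
printf 'Port 22\nPasswordAuthentication no\n' > "$CONF"   # sample content
sed -i 's/^Port 22$/Port 12345/' "$CONF"
sed -i 's/^PasswordAuthentication no$/PasswordAuthentication yes/' "$CONF"
cat "$CONF"
```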
Points of Interest
This is my first posting to CodeProject, so be gentle! Hope this helps you out!
History
- 02/15/2012 - Initial post.