Server/device backup to S3 without requiring credentials on the server
With multiple servers dotted around, each running a different backup solution, I wanted to consolidate on one consistent approach that uses S3 for the actual storage. A solution that can also be used from any device for quick, simple backups is a useful tool.
Avoiding having access and secret keys everywhere
While researching S3 for storage, it became clear that the usual approach requires your AWSAccessKey and AWSSecretKey to be copied onto each server that needs to upload to S3.
But as Amazon states…
IMPORTANT: Your Secret Access Key is a secret, and should be known only by you and AWS. You should never include your Secret Access Key in your requests to AWS. You should never e-mail your Secret Access Key to anyone. It is important to keep your Secret Access Key confidential to protect your account.
Ideally I wanted a solution that didn’t require the AWSAccessKey and AWSSecretKey to be copied onto every server needing backup, since doing so increases the likelihood of the AWS account being compromised if any one server is compromised.
The solution is HTTP-based uploads using POST
This is designed to allow visitors to your website to upload content directly to your S3 account without going through your server and without you passing any credentials to the web browser.
It has two features that allow us to use this for server backup.
- A signature that is generated from the AWSAccessKey and AWSSecretKey
- An expiration that can be set to a long time in the future

So the solution is to create a signature that allows uploads to a bucket and doesn’t expire until well into the future, then use curl to POST to S3 using that signature.
Script to generate policy and curl command
Generating the correct policy document and curl parameters took some time, so in case anyone else would like to do this, here’s a bit of Ruby code to generate the curl command line…
require 'base64'
require 'openssl'

aws_access_key_id = '*** AWS ACCESS KEY ***'
aws_secret_key = '*** AWS SECRET KEY ***'
content_type = 'application/octet-stream'
bucket = '*** BUCKET ***'
acl = 'private'
key_prefix = '*** FOLDER ***/'

# The policy document describes exactly what the signature authorises.
# Set the expiration well into the future so the command keeps working.
policy_document = '{
  "expiration": "2012-01-01T12:00:00.000Z",
  "conditions": [
    {"bucket": "' + bucket + '"},
    {"acl": "' + acl + '"},
    ["starts-with", "$key", ""],
    ["starts-with", "$Content-Type", ""]
  ]
}'

# Base64-encode the policy, then sign it with HMAC-SHA1 using the secret key.
policy = Base64.encode64(policy_document).gsub("\n", '')
signature = Base64.encode64(
  OpenSSL::HMAC.digest(
    OpenSSL::Digest.new('sha1'),
    aws_secret_key, policy)
).gsub("\n", '')

# S3 replaces ${filename} in the key with the name of the uploaded file.
print 'curl '
print "-F 'key=#{key_prefix}${filename}' "
print "-F AWSAccessKeyId=#{aws_access_key_id} "
print "-F acl=#{acl} "
print "-F policy=#{policy} "
print "-F signature=#{signature} "
print "-F Content-Type=#{content_type} "
print "-F file=@FILENAMEHERE "
print "-F Submit=OK "
print "http://#{bucket}.s3.amazonaws.com"
print "\n"
Just fill in the variables with the required information and run the script.
The curl command line generated will then upload any file to S3. Just replace FILENAMEHERE in the command line with the required filename (leave the @ before the filename).
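Since the policy is just base64-encoded JSON, you can sanity-check what a signature authorises before handing the command to a server. A minimal sketch of decoding a policy and recomputing its signature, using made-up placeholder values rather than real credentials:

```ruby
require 'base64'
require 'json'
require 'openssl'

# Hypothetical values standing in for the script's real inputs
aws_secret_key = 'example-secret'
policy_document = {
  'expiration' => '2012-01-01T12:00:00.000Z',
  'conditions' => [
    { 'bucket' => 'example-bucket' },
    { 'acl' => 'private' },
    ['starts-with', '$key', ''],
    ['starts-with', '$Content-Type', '']
  ]
}.to_json

# Same encoding and signing steps as the generator script
policy = Base64.encode64(policy_document).gsub("\n", '')
signature = Base64.encode64(
  OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), aws_secret_key, policy)
).gsub("\n", '')

# Decoding the policy shows exactly what the signature authorises
decoded = JSON.parse(Base64.decode64(policy))
puts decoded['expiration']
puts decoded['conditions'].length
```

Anyone holding the curl command can decode the policy the same way, so the policy itself should never contain anything secret — only the signature depends on the secret key.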
Bonus: rate limiting
Also, rate limiting is possible with curl. For example, --limit-rate 20K will keep curl to roughly 20 KB/s of your bandwidth — handy for stopping the backup from saturating your connection.
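If you'd rather bake the cap into the generated command than add it by hand, the script's first print statement can emit the flag too. A small sketch (the 20K value is just an example; curl accepts suffixes like K for kilobytes/s and M for megabytes/s):

```ruby
# Prepend a bandwidth cap to the generated curl command line.
rate_limit = '20K'
command = "curl --limit-rate #{rate_limit} "
print command
# ...the remaining -F arguments follow as in the script above
```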