Amazon Simple Storage Service (Amazon S3) gives you an easy way to make files available on the internet.  They host the files for you, and your customers, friends, parents, and siblings can all download the documents.  You gotta figure they’re going to do a better job of hosting them than you would ever do.  Plus, if one of your files with instructions for downloading cute kitten photos gets linked from the NY Times, then you know that your own server won’t die from too much traffic.

How do you go about getting files from your computer to S3?  We had been manually uploading them through the S3 web interface.  It’s reasonable, but we wanted to do better.  So, we wrote a little Python 3 program that we use to put files into S3 buckets.  If the bucket doesn’t yet exist, the program will create the bucket.  If the bucket does exist, well, giddyap!

You’ll need to get the AWS SDK boto3 module into your Python installation.  You’ll also need to set up your credentials in a text file so that the SDK can log you into the whole AWS system.  We covered this a bit more in another AWS post.  Let’s look at some of the highlights, and we’ll provide the complete program below.
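
If you haven’t done the credentials part before: the SDK reads them from a plain text file at ~/.aws/credentials.  A minimal version looks something like this, with your real keys in place of the placeholders:

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```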

Once you have the SDK and credentials in place, you can create your connection to S3 pretty easily:
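
Here’s a sketch of that connection; boto3 picks up the credentials on its own:

```python
import boto3

# Connect to S3 at the resource level; credentials and the default
# region come from the files under ~/.aws.
s3 = boto3.resource('s3')
```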

Once you have an s3 instance, you can start using its methods.  And the glory begins.  You create a bucket with a straightforward call.  The fact that it throws some exceptions adds a little complication, but you can handle them easily.
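
Here’s roughly how we do it.  The bucket name below is just a stand-in, since bucket names have to be globally unique across all of AWS:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.resource('s3')
bucket_name = 'mybucket'  # hypothetical; pick your own globally unique name

try:
    s3.create_bucket(Bucket=bucket_name)
except ClientError as e:
    error_code = e.response['Error']['Code']
    if error_code == 'BucketAlreadyOwnedByYou':
        # The bucket is already ours from an earlier run -- giddyap!
        pass
    else:
        # 'BucketAlreadyExists' means somebody else owns that name.
        raise
```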

The AWS “error code” values are pretty readable (which seems funny to me for a “code”).  Anyway, they kinda make sense.  What may not jump out at you is that we are creating a bucket in our default region in AWS.  You can specify the region if you prefer:
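
Something like this, using us-west-2 purely as an example:

```python
# s3 and bucket_name are the same as in the snippet above.
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={'LocationConstraint': 'us-west-2'}
)
```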

After that portion of the program, we have created a bucket.  Now we can put our file into the bucket.
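
A sketch of the upload, with a hypothetical local file named kittens.txt:

```python
file_name = 'kittens.txt'  # hypothetical local file

# Open the file as a byte stream and let the SDK load it into the
# bucket; the ACL makes the new object readable by anyone.
with open(file_name, 'rb') as data:
    s3.Object(bucket_name, file_name).put(Body=data, ACL='public-read')
```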

Note that we’re opening up a byte stream that the SDK loads into the bucket kind of magically for us.  We are also specifying the “Access Control List” (ACL) as “public-read” so that our new document will be available to the world.  The ACL options in the documentation give you some nice flexibility to control who can access your materials.

Once you have uploaded a document to S3, it’s also helpful to know the URL to access the document.  In this case, we’re creating public documents…what’s the point unless we get the URL and share it?!?!

It’s a little cumbersome to get to the URL because we have to construct some intermediate objects.  However, we can get there.
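
One way through (a sketch; the virtual-hosted URL format below is our assumption): reach through s3.meta to the low-level client, ask it for the bucket’s region, and build the URL by hand:

```python
# The low-level client hides under s3.meta and can tell us the region.
location = s3.meta.client.get_bucket_location(Bucket=bucket_name)
region = location['LocationConstraint'] or 'us-east-1'  # None means us-east-1

url = 'https://{0}.s3.{1}.amazonaws.com/{2}'.format(bucket_name, region, file_name)
print(url)
```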

As long as we’re in there, we like to see what else is in the bucket.  It’s easy to forget the things you accumulate as time passes.  The SDK makes it pretty easy to see what we’ve got piled up in our bucket.  Using the s3 object we created earlier, we can enumerate the buckets and then enumerate the items in each bucket.  Here’s roughly how we present the contents (the formatting is just our taste):
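
```python
# Walk every bucket we own, then every object piled up inside it.
for bucket in s3.buckets.all():
    print(bucket.name)
    for obj in bucket.objects.all():
        print('  {0}  ({1} bytes)'.format(obj.key, obj.size))
```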

Finally, we can see the whole program in all its giant glory.
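
Stitched together from the pieces above, it looks roughly like this; pass your own bucket name and file name on the command line:

```python
import sys

import boto3
from botocore.exceptions import ClientError


def main(bucket_name, file_name):
    s3 = boto3.resource('s3')

    # Create the bucket if it doesn't exist yet; tolerate reruns.
    try:
        s3.create_bucket(Bucket=bucket_name)
    except ClientError as e:
        if e.response['Error']['Code'] != 'BucketAlreadyOwnedByYou':
            raise

    # Upload the file as a publicly readable object.
    with open(file_name, 'rb') as data:
        s3.Object(bucket_name, file_name).put(Body=data, ACL='public-read')

    # Work out the public URL for sharing.
    location = s3.meta.client.get_bucket_location(Bucket=bucket_name)
    region = location['LocationConstraint'] or 'us-east-1'
    print('https://{0}.s3.{1}.amazonaws.com/{2}'.format(
        bucket_name, region, file_name))

    # Show everything we've accumulated, bucket by bucket.
    for bucket in s3.buckets.all():
        print(bucket.name)
        for obj in bucket.objects.all():
            print('  {0}'.format(obj.key))


if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])
```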

Do you interact with S3 programmatically?  Do you keep a lot of documents in S3 buckets?  How do you organize them?
