Hosting a Static Website on Amazon and Azure: Part 2 - Hosting on Amazon S3

In Part 1 of this three-part series, I explained that it is simple and affordable to host an entire static website out of cloud storage services such as Amazon S3 and Windows Azure Blob Storage. I also covered what I meant by a static website (no PHP scripting), and I offered a few pointers on removing PHP from a website, especially in the areas of mobile site redirection, contact forms, and migrating a self-hosted WordPress blog.

In this entry, I would like to show how to get one's feet wet with Amazon S3 and how to actually port a now-optimized website over and host the entire site right out of Amazon S3.

The first step is pretty obvious: one will need to create an account on Amazon Web Services. An existing Amazon account is required to begin (if one doesn't have an Amazon account, it's free and takes seconds to create). Amazon Web Services then walks through the process of associating a credit card with the account and validating one's identity via a phone call, and after a few other steps, the Amazon Web Services account is ready to go.

The next place to venture into is the Amazon Web Services My Account section. It outlines all of the various Amazon web services included with one's account, and offers the ability to sign up for premium support and/or cancel select Amazon Web Services one doesn't need. In reality, unless one actually uses the services, one isn't going to be charged for them, but just to be on the safe side, since I have no intention of using the compute and database features (EC2, etc.), I went ahead and canceled them; I can always re-add them to my account in the future if I need them. From there, one should also head over to the Security Credentials page (which I will not show for security reasons) and make special note of one's Access Key ID and Secret Access Key. They will be needed shortly.

From there, one should visit the AWS Management Console and click on the Amazon S3 tab. (One can also set a particular tab as the default tab that loads when accessing the AWS Management Console; I have mine set to show the Amazon S3 tab when I log in.)

Once in the Amazon S3 Management Console, one will need to click on the Create Bucket button under the Buckets section. A bucket is a piece of one's Amazon S3 storage account that stores objects: files, folders, documents, pictures, anything one wishes to store on Amazon S3. All objects must be stored in a bucket (or buckets). One can have as many buckets as one wishes; just remember that the more storage and bandwidth used, the higher the Amazon S3 bill each month. The bucket that will serve to host the website must be named to match the site's full www domain exactly. In order to pull this off effectively, it is crucial that the bucket name follows this naming convention precisely, and yes, the "www" needs to be in the bucket name itself. For the purposes of this demonstration, I named my bucket after my own www domain. Amazon S3 is also going to ask where the bucket should be created. In most instances, those living in the US should choose US Standard; those living in other countries may wish to choose a location closer to home to reduce the latency between their location and the Amazon S3 datacenter. For the purposes of this demonstration, I set mine to US Standard.

Once the bucket has been created, one will need to click on the Properties button to pull up the properties dialog box. Under Permissions, one will need to click on either Add or Edit Bucket Policy. The Bucket Policy Editor will appear, allowing one to enter a custom bucket policy. Amazon provides a great sample bucket policy in its documentation, and all one needs to do is copy and paste the policy into the Bucket Policy Editor, replacing "example-bucket" with the name of one's Amazon S3 bucket. The custom bucket policy I applied to my website followed the same pattern.
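As an illustration, here is a minimal sketch of such a public-read policy built in Python; the "example-bucket" name is a placeholder to be replaced with one's own bucket name:

```python
import json

def public_read_policy(bucket_name):
    """Return a bucket policy (as JSON) granting anonymous read access,
    modeled on Amazon's sample public-read policy."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            # Applies to every object in the bucket
            "Resource": "arn:aws:s3:::%s/*" % bucket_name,
        }],
    }
    return json.dumps(policy, indent=2)

print(public_read_policy("example-bucket"))
```

Pasting a policy like this into the Bucket Policy Editor (with the real bucket name in place of the placeholder) makes every object in the bucket publicly readable, which is exactly what a public website needs.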

The next step is to actually upload all of the website files to the Amazon S3 bucket. One way to pull this off is by uploading them through the AWS Management Console, but for those who prefer more drag-and-drop, FTP-style uploading, there are a few other options. For Windows users, I would recommend CloudBerry Explorer for Amazon S3 from CloudBerry Labs. It's free and is as easy to use as one's favorite FTP client or dropping files on a flash drive. For Mac users, I recommend Transmit from Panic. It's both an FTP client and an Amazon S3 cloud explorer, and like CloudBerry on Windows, it has a great interface and is a breeze to use. Unlike CloudBerry Explorer, it costs about $35 (also available from the Mac App Store), but I've tested various FTP clients for the Mac in the past, and I've found none of them come close to Transmit's ease of use and performance. Even iPhone users can access Amazon S3 buckets on the go using the Cloud Services Manager iPhone app, available in a free lite version and a pro ($6) version.

Once one has the proper Amazon S3 client on their system, it's really as easy as launching the client, entering an Access Key ID and Secret Access Key, and choosing the path (usually the bucket) one wants to access. For the purposes of this demonstration, I'll show how to do this on a Mac using Transmit. One would launch Transmit, select the S3 tab, type in one's Access Key ID and Secret Access Key (for security reasons I am using a placeholder key), and in the initial path box, type in the path to one's newly created S3 bucket (in my case, the www bucket created earlier).

From there, it's a matter of dragging and dropping all the website files from one's hard drive onto the S3 bucket, the same as one would drop files onto an FTP server or a flash drive.
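For those who would rather script the upload than drag and drop, the same step can be sketched in Python. This hypothetical helper walks a local site folder and maps each file to the S3 object key it should get (mirroring the folder layout); each pair would then be handed to an S3 upload call, for example in the boto3 library:

```python
import os

def site_objects(root):
    """Yield (local_path, s3_key) pairs for every file under root,
    mirroring the local folder layout as S3 object keys."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            # S3 keys always use forward slashes, regardless of OS
            key = os.path.relpath(local_path, root).replace(os.sep, "/")
            yield local_path, key

# Hypothetical usage with boto3 (requires AWS credentials):
# s3 = boto3.client("s3")
# for path, key in site_objects("my-site/"):
#     s3.upload_file(path, "www.yourdomain.com", key)
```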

From there, one needs to return to the Amazon S3 tab of the AWS Management Console, bring up the Properties dialog again, and this time select the Website tab. One will need to check the box to enable website hosting and specify the location of the index document (usually index.html) and, optionally, the location of the error document (if one doesn't wish to use Amazon S3's default error page). Amazon S3 will then display a link in the Endpoint section that begins with the bucket name, includes the region information, and ends with the Amazon S3 website domain. Make a note of everything after the http:// and before the last / in the Endpoint link. It will be needed later.

One last thing before proceeding and leaving the AWS Management Console. Some Amazon S3 clients do not always associate the correct file types with the files they upload to Amazon S3. I noticed this when publishing my website: Transmit did not associate the correct file type with my CSS files, causing them to load incorrectly and obviously making my website look strange. One way to check that the file types are set correctly is, while in the AWS Management Console, to select a file (object) in the bucket, bring up the Properties dialog box, and under the Metadata tab, ensure the Content-Type key is set to the correct MIME type. For CSS files, the value should be text/css. The same goes for other file types: HTML should be text/html, PDF application/pdf, JPEG image/jpeg, PNG image/png, and so on. If the Amazon S3 client did not properly set the content types at upload, Amazon S3 falls back to the generic binary/octet-stream type. If that is what shows up on particular files, all one needs to do is change the value to the correct type, as in the examples above.

That's one thing I particularly enjoy about Amazon S3: changing file types is easy and can be done at the file level, whereas with the shared hosting package we were on before, file types had to be set in a .htaccess file if a file type needed changing, causing a small performance lag on our website. Additionally, many Amazon S3 clients, including Transmit, allow one to add rules in the Preferences panel that automatically tag files with the proper content types in the event the client isn't tagging them correctly by default, which again makes dealing with file types a breeze.
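The mapping from file extension to Content-Type is mechanical, so a script-based uploader can set it automatically. A small sketch using Python's standard mimetypes module, falling back to S3's generic default for unknown extensions:

```python
import mimetypes

def content_type_for(filename):
    """Guess the Content-Type metadata value for an object key,
    falling back to S3's generic default for unknown types."""
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or "binary/octet-stream"

print(content_type_for("style.css"))   # text/css
print(content_type_for("photo.jpeg"))  # image/jpeg
```

A helper like this could feed the ContentType metadata on each upload, avoiding the manual per-file fixes in the console described above.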

While looking at the Properties dialog box for an object, one may notice under the Details tab that there's a Storage option for Standard or Reduced Redundancy storage. In most cases, I would recommend leaving it set to Standard. Reduced Redundancy offers lower-priced storage (about 9-10 cents per GB per month), but with fewer redundant copies of the data. I prefer to pay a few more cents each month and get the increased redundancy.

Now that one's website is hosted in an Amazon S3 bucket, one could take the Endpoint link that was generated by Amazon S3, type it into a browser, and go straight to the newly hosted Amazon S3 website. The only drawback is that the long, drawn-out link Amazon S3 generates isn't the friendliest thing to place on a business card or give out to people. So the next step in making the site easier to reach is to link one's custom domain to the Endpoint link generated by Amazon S3. To do this, go to one's domain registrar's control panel (in this example I'll be using NameCheap; in a future blog entry I may explain how to transfer a domain name from one registrar to another, as I recently moved my domains over to NameCheap) and, under the DNS records, edit the www CNAME record to point to the Endpoint link generated by Amazon S3, leaving off the http:// and the trailing /. The value pasted into the www CNAME record will start with the bucket name, contain the region information in the middle, and end with the Amazon S3 website domain. Save the changes, and once the DNS records propagate, one will be able to access the Amazon S3-hosted website at one's own www address.
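As an illustration, with a hypothetical domain and endpoint (the exact endpoint value is shown in one's own bucket's Website tab), the record in a BIND-style zone file would look something like this:

```
; Hypothetical example: the www host becomes an alias for the
; S3 website endpoint (no http://, no trailing slash).
www   IN   CNAME   www.example.com.s3-website-us-east-1.amazonaws.com.
```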

Nowadays, though, entering the www is pretty passé, so what about those who don't wish to type it? The solution is, while in the Domain Manager or DNS records, to turn on domain forwarding for the bare domain itself and point it at the www address. This will allow anyone to enter the bare domain into the address bar and still get to the Amazon S3-hosted site without any issues. The alternative is to create an @ A record and point it at Amazon S3's IP address so that Amazon S3 hosts the bare domain itself, but this is a practice Amazon does not recommend, so forwarding the bare domain to the www domain is the recommended approach in this case.

That basically sums up all one needs to know about hosting a static website on Amazon S3! In the next and last blog entry of this series, I will explain how to perform the same actions on a different platform: Azure Blob Storage from Microsoft's Windows Azure. That will give readers a grasp of how to work with both of the top cloud storage services available. Stay tuned for more!