Static Web Site Hosting with Amazon S3 and Route 53

Web application frameworks have the current spotlight, which is why I started learning Ruby on Rails to get an idea of what the fuss was about. But a big framework isn’t always the right tool for the job. Sometimes it’s just a set of static files to be served upon request. No server-side smarts necessary.

That’s where I found myself when I wanted to put up a little web site for my #rxbb8 project. I just wanted to document the design and build process, and I had already registered the domain rxbb8.com. The HTML content was simple enough to create directly in a text editor and style with CSS from the Materialize library.

After I got a basic 1.0 version of my hand-crafted site, I uploaded the HTML (and associated images) to an Amazon S3 bucket. It only takes a few clicks to make files in an S3 bucket web-accessible via a long, cumbersome URL on an Amazon AWS domain: http://rxbb8.com.s3-website-us-west-2.amazonaws.com. Since I wanted this content to be accessible via the rxbb8.com domain I had already registered, I started reading up on the AWS service with the geek-humor name Route 53.

Route 53 is designed to handle the challenges of huge web properties, distributing workload across many computers in many regions. (No single computer could handle all global traffic for, say, netflix.com.) The challenge for a novice like me is to figure out how to pull just the one tool I need out of this huge, complex Swiss Army knife.

Fortunately this use case is popular enough for Amazon to have written a dedicated developer guide for it. Unfortunately, that page doesn’t have all the details. The writer helpfully points the reader to other reference articles, but those pages go back to discussing complex deployments, and again it takes effort to distill the simple basics out of the big feature list.

If you get distracted or lost, stay focused on this CliffsNotes version:

  1. Go into the Route 53 dashboard and create a Hosted Zone for the domain.
  2. In that Hosted Zone, AWS creates two record sets by default. One of them is of type NS; write down the name servers it lists.
  3. Go to your domain registrar and point the name servers for the domain to the AWS name servers from step 2.
  4. Create an S3 bucket for the site and enable static website hosting.
  5. Create a new Record Set in the Route 53 Hosted Zone. Set “Alias” to “Yes” and point the alias target to the S3 bucket from step 4.

Repeat steps 4 and 5 for each sub-domain that needs to be hosted. (The AWS documentation creates example.com and then repeats steps 4 and 5 for www.example.com.) A scripted sketch of steps 4 and 5 follows.
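
For anyone who prefers to script steps 4 and 5 rather than click through the console, here is a minimal sketch using the AWS SDK for JavaScript (v2). The bucket and domain names match my site, but the hosted zone IDs are placeholders: the zone ID for your domain comes from step 1, and the region-specific hosted zone ID for S3 website endpoints has to be looked up in the AWS documentation.

    // Sketch of steps 4 and 5 with the AWS SDK for JavaScript (v2).
    var AWS = require('aws-sdk');
    AWS.config.update({ region: 'us-west-2' });

    var s3 = new AWS.S3();
    var route53 = new AWS.Route53();

    // Step 4: enable static website hosting on the bucket.
    s3.putBucketWebsite({
      Bucket: 'rxbb8.com',
      WebsiteConfiguration: {
        IndexDocument: { Suffix: 'index.html' },
        ErrorDocument: { Key: 'error.html' }
      }
    }, function (err) {
      if (err) { return console.error(err); }

      // Step 5: alias the domain to the S3 website endpoint.
      route53.changeResourceRecordSets({
        HostedZoneId: 'YOUR_HOSTED_ZONE_ID', // from step 1
        ChangeBatch: {
          Changes: [{
            Action: 'UPSERT',
            ResourceRecordSet: {
              Name: 'rxbb8.com',
              Type: 'A',
              AliasTarget: {
                // Region-specific value AWS publishes for S3 website endpoints.
                HostedZoneId: 'S3_WEBSITE_HOSTED_ZONE_ID',
                DNSName: 's3-website-us-west-2.amazonaws.com',
                EvaluateTargetHealth: false
              }
            }
          }]
        }
      }, function (err) {
        if (err) { console.error(err); }
      });
    });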

And then… wait.

The update in step 3 needs time to propagate to name servers across the internet. My registrar said it could take up to 24 hours. In my case, I started getting intermittent results within 2 hours, but it took more than 12 hours before everything stabilized on the new settings.

But it was worth the effort to see version 1.0 of my created-from-scratch static web site up and running on my domain! And since it’s such a small and simple site with little traffic, it will cost me only a few pennies per month to host in this manner.

[Screenshot: version 1.0 of rxbb8.com]

Let the App… Materialize!

After I got the Google sign-in working well enough for my Rails practice web app, the first order of business was to build the basic skeleton. This was a great practice exercise: take the pieces I learned from the Ruby on Rails Tutorial sample app and build something of my own design.

The initial pass implemented basic functionality, but it didn’t look very appealing. I had focused on the Rails server-side code and left the client side as plain HTML that would have been state-of-the-art in… maybe 1992?

Let’s make it look like something that belongs in 2017.

The Rails tutorial sample app used Bootstrap to improve the appearance and functionality of the client-side interface. I decided to take this opportunity to learn something new instead of doing the same thing. Since I’m using Google Sign-In in this app, I decided to apply Google’s design concepts to the client-side appearance as well.

Web being the web, I knew I wouldn’t have to start from scratch. I knew about Google’s own Material Design Lite and thought it would be a good candidate, before I learned it had been retired in favor of its successor, Material Components for the web. One of the touted advantages was improved integration with different web platforms. Sadly, Rails was not among the platforms with ready-to-go examples.

I looked around for an existing project to help Rails apps adopt Google’s design language, and that’s when I found Materialize: a library that shares many usage patterns with Bootstrap. Its style sheets are even written in Sass, which default Rails apps support natively, making for easy integration. Somebody has already done that integration work and published it as the Ruby gem materialize-sass, so all I had to do was add a single line to my Gemfile to use Materialize in my app.

Of course I still had to put in the effort to revise all the view files in my web app to pick up Materialize styling and features. That took a few days, and the reward for this effort is a practice web app that no longer looks so amateurish.

Simple Online Digital Photo Frame

The CarrierWave Playground project was created for experimentation with image uploads. As intended, it helped me learn things about CarrierWave, such as creating versions of an image scaled to different resolutions and extracting EXIF image metadata.

Obviously the image has to be displayed to prove that the upload was successful. I hadn’t intended to spend much time on the display side of things, but I started playing with the HTML and kept going. Logic was added for the browser to report its window size so the optimal image could be sent and scaled to fit. I had a button to reload the page, and it was fairly simple to change it from “reload the current page” to “navigate to another page”. Adding a JavaScript timer to execute this navigation… and voilà! I had myself a rudimentary digital photo frame web app that loads and displays images in a sequence.

It’s fun but fairly crude. Brainstorming the possibilities, I imagine the following stages of sophistication:

Stage 1 – Basic: Where I am now, simple JavaScript that performs page navigation on a timer.
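
The entirety of the stage 1 logic amounts to something like the snippet below. The /next_photo path is an illustrative placeholder rather than my actual route.

    // Stage 1 sketch: after a fixed delay, navigate the whole page
    // to whatever route serves the next photo.
    window.setTimeout(function () {
      window.location.href = '/next_photo'; // hypothetical route
    }, 10000); // 10 seconds per photo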

Problem: the page blinks and abruptly shifts as the new page is loaded. To avoid the abrupt shift, we have to eliminate the page switch.

Stage 2 – Add AJAX: Instead of a page navigation, perform a XMLHttpRequest to the server asking for information on the next image. Load the image in the background, and once complete, perform a smooth transition (fade out/fade in/etc.) from one image to the next.
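
Here is a rough sketch of what stage 2 might look like. The /next_photo.json endpoint, its JSON shape, and the photo element ID are all hypothetical placeholders; the point is the pattern of requesting image information, preloading in the background, and transitioning smoothly.

    // Stage 2 sketch: ask the server for the next image, preload it,
    // then cross-fade instead of reloading the whole page.
    function showNextPhoto() {
      var request = new XMLHttpRequest();
      // Hypothetical endpoint returning e.g. {"url": "/photos/1234/large.jpg"}
      request.open('GET', '/next_photo.json?width=' + window.innerWidth +
                          '&height=' + window.innerHeight);
      request.onload = function () {
        var info = JSON.parse(request.responseText);
        var preload = new Image();
        preload.onload = function () {
          var frame = document.getElementById('photo'); // the visible <img>
          frame.style.opacity = 0;                      // fade out (CSS transition assumed)
          setTimeout(function () {
            frame.src = info.url;
            frame.style.opacity = 1;                    // fade back in
          }, 500);
        };
        preload.src = info.url;                         // load in the background
      };
      request.send();
    }

    setInterval(showNextPhoto, 10000);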

Problem: the visual experience is at the mercy of the web browser, which probably has an address bar and other UI on screen. Also, the user’s screen will quickly go dark due to power-saving features. To reliably solve both, I will need app-level access.

Stage 3 – Vendor-specific wrapper: Every OS platform has a way to offer web site authors an express lane into the app world. Microsoft offers the Windows App Studio. Apple has iOS web applications. Google has Android Web Apps.

Unknown: The JavaScript timer is a polling model; do we gain anything by moving to a server-push model? If so, that means…

Stage 4 – WebSocket: Photo updates are triggered by the server. Since I’m on Rails, the most obvious answer is to do this via WebSockets using Action Cable.
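
On the browser side, a stage 4 subscription might look something like the sketch below. The channel name PhotoChannel and the broadcast payload are hypothetical, and this assumes the standard Rails 5 Action Cable setup where App.cable has already been created by the generated cable.js.

    // Stage 4 sketch: let the server push the next photo over a WebSocket
    // via Action Cable instead of polling on a timer.
    App.cable.subscriptions.create('PhotoChannel', {
      received: function (data) {
        // The server broadcast is assumed to include the next image URL.
        document.getElementById('photo').src = data.url;
      }
    });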

Looking at the list, I think I can tackle Stage 2 pretty soon if not immediately.

Stages 3 and 4 are more advanced and I’ll hold off for later.

Codecademy “Learn Sass” notes

While learning Ruby on Rails, one of the things I put on my “look into this later” list was Sass. I knew it was related to CSS but didn’t know the details; I just noticed that when the Rails generator created a controller, it also created a .scss file under the stylesheets directory.

So when I received an email from Codecademy notifying me that they had a new class on Sass… the “look into this later” became “let’s look into it now”.

Unlike Ruby on Rails, Sass is not a huge, complicated system. It solves a fairly specific set of problems: CSS tends to grow unwieldy as a project grows, and Sass introduces some very nice concepts for keeping CSS information organized. After banging my head on lots of walls with Ruby on Rails, it is refreshing to tackle a smaller-scope project and be able to understand what’s going on. The Codecademy format is well suited to teaching a smaller-scope concept like Sass.

I was also mildly amused to learn that Sass is apparently written in Ruby. I don’t think it particularly matters what the implementation was, but it’s amusing to me to see Ruby applied in an entirely different way from Rails. The bonus is that, if I should try to debug or extend Sass myself, I wouldn’t be starting from scratch looking over its source code.

Being a fresh course, the Codecademy class had a few minor problems that still needed to be ironed out. The flashcard example was supposed to flip on mouse hover… it never did anything for me. Too bad, because I think the effect would have been interesting.

I haven’t gotten far enough with Rails to think about making my web app pretty, but when I do, I know how to keep my style sheets manageable with Sass.

Onward to Unity Adventures

After deciding to move on from Phaser, I looked around for other game engines that can generate web games, and I came across Unity again and again. Unity is not new to me, but I also knew its web support was via the Unity web player browser plug-in. This is a problem, as browser plug-ins have fallen into disfavor: all the major desktop browsers (Firefox, Chrome, IE) are moving away from plug-ins.

So… dead end (or at least dying). Plug-ins are not the future of the web.

Because of the web player, I dismissed every mention of Unity for web development I encountered, until I came across information that slapped me upside the head and woke me up. The web player is no longer the only way Unity targets the web: Unity now has another web-friendly build target, WebGL, with no plug-in required! Wow!

This warrants a closer look. While WebGL might still be immature and inconsistent across browsers, many interested parties share the goal of evolving and maturing the platform, and I share their high hopes. The browser plug-in model is going away. WebGL may or may not take off, but all the signs today look promising.

Unity had been on my to-do list of research topics, but it ranked lower than HTML-related technologies. (This is why I started with HTML, JavaScript, Node.js, etc.) There are many interesting areas of development I can explore with Unity: graphics, networking, user interface design, virtual reality, Android development, iOS development, the list goes on.

And now I’ve learned that I can even explore some parts of the web world with Unity. It is quite the Swiss Army knife of software development. With this latest discovery, Unity has moved to the top of my to-do list. It is time to roll up my sleeves and dig in.

This should be fun.

A quick look at Phaser

In a quest for something more substantial, I looked into the HTML5 game engine Phaser. It has a large audience and an impressive set of features. It was fun poking through some of the source to see how it does its magic, but I ultimately decided against making my own Phaser creation because it is at its core a bitmap-based system.

Bitmaps by their nature don’t scale as nicely as vectors, and I place a great deal of importance on the ability to scale well. In the modern HTML world, the customer might be running resolutions ranging anywhere from a 320 x 480 iPhone screen to a 3840 x 2160 4K desktop display.

The people behind Phaser aren’t blind to this. The core feature set includes a ScaleManager class to help with the process. There are also ways to load lower- or higher-resolution assets in response to the screen resolution used by the particular user. But it is an imperfect solution; there are still bitmap-scaling artifacts that plainly wouldn’t be an issue at all had the system been based on vector graphics.
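
For reference, wiring up the ScaleManager looks roughly like this in the Phaser 2-era API I was reading about. This is a sketch, not code from any shipping game, and the 'game-container' element ID is a placeholder.

    // Phaser 2 sketch: ask the ScaleManager to scale the game to fit the
    // window while preserving aspect ratio.
    var game = new Phaser.Game(800, 600, Phaser.AUTO, 'game-container', {
      create: function () {
        game.scale.scaleMode = Phaser.ScaleManager.SHOW_ALL; // letterbox to fit
        game.scale.pageAlignHorizontally = true;
        game.scale.pageAlignVertically = true;
      }
    });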

Most Phaser titles don’t bother with scaling at all, or only scale across a few fixed resolutions. I find myself quite annoyed when I load up a sample app, only to see it take up a tiny 320×480 section (the resolution of the original iPhone) of my desktop browser.

sigh

I’ll keep looking.

The other “cloud development”

When I set out on this adventure, I knew I wanted to eventually cover the basics of the major cloud services: write some sample services to run on Amazon Web Services, Microsoft Azure, Google Cloud, and so on.

I was surprised to stumble into an entirely different meaning of “cloud development”: writing code in a browser. I had seen the educational coding playgrounds of Codecademy, and I had seen small trial tools like JSFiddle, but I had no idea that was just the tip of the iceberg and that things could get much fancier.

I had started a project to practice my newly learned jQuery skills, using just a text editor on my computer and running the HTML straight off the file system. As soon as I learned of these web-based development environments, I wanted to try one out by moving my project over.

The first one I tried was Codenvy, whose whitepapers make quite grandiose claims about improving developer productivity. Unfortunately, the kinds of development Codenvy supports aren’t things I know anything about today. But I’ll revisit it in a few weeks to check again.

The second one I tried was Cloud 9, which does support simple HTML+CSS+JS projects like mine. Working in Cloud 9 gave me tools for static analysis and served my files off a real web server. It also integrates with GitHub, preserving my source control workflow.

After a JavaScript project of around 300 lines of code, I can comfortably say I’m quite impressed. In the areas of development-time tooling and integration experience, it far exceeded my expectations. However, there was an area of disappointment: the debugging experience was either hard to find or just wasn’t there at all.

When my little project went awry, I resorted to loading it up in a separate browser window and using the web browser’s debugger. This is on par with the simpler tools like JSFiddle and JSBin. I had hoped for better.

I’m cautiously optimistic I’ll find a tool with better debugging experience as I continue to explore.

The cross-site rabbit hole

Wow, cross-site issues are huge cans of worms!

In the absence of first-hand experience with network-centric development, my knowledge of cross-site vulnerabilities had been limited to broad descriptions in the tech press.

While educating myself in the jQuery Learning Center, I came across jQuery’s JSONP utility functions. Trying to understand the utility meant I had to look up JSONP. Trying to understand JSONP required learning what problem it was trying to solve, which dropped me into the rabbit hole of web security: starting with a web browser’s same-origin policy, then on through cross-site scripting (XSS), cross-site request forgery (CSRF), and others.

The short version: Communication across multiple web domains is a very powerful thing. And like everything that’s powerful, there are people who will use it for evil. The various browser policies are efforts to shut down such activity to protect users from evil.

Like every effort to control great powers that can be used for good or evil, both sides continue to find ways to do what they want despite the walls erected to block them.

  1. Because it is so powerful for nefarious purposes: hackers continue to find ways to circumvent cross-domain protection with clever exploitation. To keep my jQuery education grounded, Wikipedia helpfully pointed to a cross-site security issue in jQuery itself. Problems can hide anywhere.
  2. Because it is so powerful for legitimately useful purposes: developers continue to find ways to communicate across domains in relatively safer ways. (Usually until a creative hacker comes along to prove why they aren’t safe.) JSONP, which started my whole adventure, is one such method; a small sketch follows this list.
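
Here is roughly what the jQuery flavor of JSONP looks like in use. The URL is a made-up placeholder; the point is that setting dataType to 'jsonp' makes jQuery fetch the data by injecting a <script> tag and a callback instead of issuing a normal XMLHttpRequest, which is how it slips past the same-origin policy.

    // JSONP sketch: request data from another domain via a <script> tag
    // that the remote server wraps in a callback for us.
    $.ajax({
      url: 'https://api.example.com/data', // hypothetical cross-domain endpoint
      dataType: 'jsonp',                   // tells jQuery to use JSONP, not XHR
      success: function (data) {
        console.log('Received from the other domain:', data);
      }
    });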

Like everything else I’m learning about web programming, this will have to be a brief overview and I have to come back later. Right now I don’t even understand all the vocabulary yet.

I’m unsettled by this topic. The importance of network security grows with every passing day. This feels like a very fundamental area of network security, and it is a huge nasty hairball that has proven to be difficult to untangle. This can only be a recipe for more security vulnerabilities in the future.

Maybe even one that I inadvertently create.

Ugh.

JSFiddle

After completing the Codecademy jQuery class, I felt I had a decent overview but wanted more depth. I found that the jQuery project offers a jQuery Learning Center. I’ll likely write a dedicated post once I’m done, but that’s not today.

After reading a certain amount of documentation, I started feeling out of touch with the concepts: too much theory and not enough practice. I tried to create a small project to play with jQuery, but I made rookie mistakes with basic boilerplate, wasting a lot of time instead of learning.

That’s when I remembered something I saw while reading documentation for the Facebook React project: JSFiddle. I don’t have enough understanding to use React yet, but they embedded JSFiddle in their documentation so people can play with code as they go.

A fresh fiddle already has the HTML, CSS, and JavaScript all properly linked up. To use a framework or extension like jQuery, selecting it from a drop-down box activates it, with no fussing over tags, links, or URLs. Once the code snippets have been typed in, a click on “Run” renders the result in the output window. No upload to a web server necessary.

This is a huge advantage when testing small pieces of code. No need to get bogged down in repetitious boilerplate, just focus on the interesting bits. I had a great time putting the various bits of theory into practice.
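
As an example of the kind of snippet this workflow is good for, here is the sort of thing I could drop into the JavaScript pane with jQuery selected from the drop-down. The element IDs are whatever happens to be in the HTML pane; the ones below are made up.

    // With jQuery selected in the fiddle, this is all the JavaScript pane
    // needs: no <script> tags, no CDN URLs, just the code under test.
    $(function () {
      $('#greet-button').on('click', function () {
        var name = $('#name-input').val() || 'world';
        $('#output').text('Hello, ' + name + '!');
      });
    });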

A downside for me was that debugging got messy. When I’m debugging just my own HTML with the browser’s built-in debugging tools, I have only a tiny bit of code to examine and all of it is mine.

But when I launch the browser debug tools on a fiddle, the debugger puts the whole JSFiddle page under examination, even though my code is only a tiny part of the page. It gets tiresome trying to separate my item of interest from everything else going on in the JSFiddle page.

But still, JSFiddle looks like a great tool for experimentation.

Codecademy “jQuery” notes

The “jQuery” class track was my first Codecademy class where everything was new to me. It was a lot of fun!

I’m very amused that the “new HTML” has ditched a lot of the mechanisms of the “old HTML”, yet they all live together in one big bucket of HTML. There were input forms using the input fields I had known before, but they didn’t bother with the form submit at the end; entirely different script code is executed in place of the old-school form submit.

A whole lot of “new HTML” is hitched to the <div> element, which, in the old world, does nothing. (That’s not a sarcastic remark; read the official specification.) But with CSS and jQuery, the empty placeholder of a <div> becomes the starting point for visuals, actions, responses: all kinds of things impossible in old static HTML.
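
A tiny illustration of both points, assuming an input field and an empty <div> with the (made-up) IDs shown, and no <form> or submit anywhere:

    // Old-school HTML would wrap the input in a <form> and POST it on submit.
    // With jQuery, the <div> simply reacts to events and updates in place.
    $('#name-field').on('keyup', function () {
      // #greeting is an empty <div> that does nothing on its own;
      // jQuery turns it into a live-updating display.
      $('#greeting').text('Hello, ' + $(this).val());
    });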

This was also the first Codecademy class that referenced API documentation as part of the class material. I was a little annoyed that the previous classes lacked a pointer for students to get more information.

Towards the end of the class, they introduced jQuery UI, which is a separate library built on top of jQuery. I think the fact that they are two separate entities deserved more emphasis, and the class didn’t get into why they are two separate things at all.

Once I understand how the design philosophies differ between the two, I will have an idea of whether I should look in one or the other for any specific thing I might be researching. But I don’t have that yet! So I’ll have to look in both places until I do.