Let the App… Materialize!

After I got the Google sign-in working well enough for my Rails practice web app, the first order of business was to build the basic skeleton. This was a great practice exercise to take the pieces I learned in the Ruby on Rails Tutorial sample app and build something of my own design.

The initial pass implemented basic functionality but it didn’t look very appealing. I had focused on the Rails server-side code and left the client-side code as plain HTML that would have been state-of-the-art in… maybe 1992?

Let’s make it look like something that belongs in 2017.

The Rails tutorial sample app used Bootstrap to improve the appearance and functionality of the client-side interface. I decided to take this opportunity to learn something new instead of doing the same thing. Since I’m using Google Sign-In in this app, I decided to apply Google’s design concepts to my client-side appearance as well.

Web being the web, I knew I wouldn’t have to start from scratch. I knew about Google’s own Material Design Lite and thought it would be a good candidate, until I learned it had been retired in favor of its successor, Material Components for the web. One of the touted advantages of the successor was improved integration with different web platforms. Sadly, Rails was not among the platforms with ready-to-go examples.

I looked around for an existing project to help Rails projects adopt Google’s design language, and that’s when I found Materialize: a library that shares many usage patterns with Bootstrap. The style sheets are even written in Sass, which is native to default Rails apps, making for easy integration. Somebody has already done that work and published it as the Ruby gem materialize-sass, so all I had to do was add a single line to use Materialize in my app.
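For reference, the single line is just a Gemfile entry (shown here as a sketch; pin a version if you want reproducible builds):

```ruby
# Gemfile: pull Materialize into the asset pipeline via the materialize-sass gem
gem 'materialize-sass'
```

After a bundle install, the Materialize style sheets can be pulled into the app’s Sass with an @import "materialize" line, per the gem’s instructions.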

Of course I still had to put in the effort to revise all the view files in my web app to pick up Materialize styling and features. That took a few days, and the reward for this effort is a practice web app which no longer looks so amateurish.

The Cost for Security

In the seemingly never-ending bad news of security breaches, a recurring theme is “they knew how to prevent this, but they didn’t,” usually in the form of editorials condemning people as penny-pinching misers who care more about their operating costs than their customers.

The accusations may or may not be true; it’s hard to tell without the other side of the story. What’s unarguably true is that security has some cost. Performing encryption obviously takes more work than not doing any! But how expensive is that cost? Reports range wildly, anywhere from less than 5% to over 50%, and it likely depends on the specific situation involved as well.

I really had no idea of the cost until I stumbled across the topic in the course of my own Rails self-education project.

I had designed my Rails project with an eye towards security. The Google ID login token is validated against Google certificates, and the resulting ID is salted and hashed for storage. The code for this added security was deceptively minor, but it triggered a huge amount of work behind the scenes!

I started on this investigation because I noticed my Rails test suite ran quite slowly. Running the test suite for the Rails Tutorial sample app, the test framework ran through ~120 assertions per second. My own project test suite ran at a snail’s pace of ~12 assertions/second, 10% of the speed. What’s slowing things down so much? A few hours of experimentation and investigation pointed the finger at the encryption measures.

Obviously the security measures are good for the production environment and should not be altered there. However, for the purposes of development & test, I could weaken them because there would be no actual user data to protect. After I changed the code to bypass some measures and reduce the complexity of others, my test suite speed rose to the expected >100 assertions/sec.
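For the curious, the test-environment side of that change amounts to something in this spirit (a sketch, assuming bcrypt is the hashing engine doing the expensive work; not necessarily the exact code I changed):

```ruby
# config/environments/test.rb (sketch): use bcrypt's cheapest work factor in
# test, where there is no real user data to protect. Production keeps the
# default, deliberately expensive cost.
require "bcrypt"

BCrypt::Engine.cost = BCrypt::Engine::MIN_COST
```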

Granted, this is only an amateur at work, and I’m probably implementing security inefficiently and making other mistakes besides. But as a firsthand lesson that “Security Has A Cost,” a tenfold performance penalty is eye-opening.

For a small practice exercise app like mine, where I only expect a handful of users, this is not a problem. But for a high-traffic site, having to pay ten times the cost could be the difference between making and breaking a business.

While I still don’t agree with the decisions that led up to security breaches, at least now I have a better idea of the other side of the story.

Protecting User Identity

Recently, web site security breaches have been a frequent topic in mainstream news. The technology is evolving, but this chapter of technology still has quite a ways to go. Learning web frameworks gives me an opportunity to understand the mechanics from the web site developer’s perspective.

For my project I decided to use the Google Identity Platform and let Google engineers take care of identification and authentication. By configuring the Google service to retrieve only basic information, my site never sees the vast majority of personally identifiable information: it never sees the password, name, e-mail address, etc.

All my site ever receives is a string, the Google ID, which it uses to identify a user account. With the security adage of “what I don’t know, I can’t spill,” I thought this was a pretty good setup: the only thing I know is the Google ID, so I can’t spill anything else.

Which led to the next question: what’s the worst that can happen with the ID?

I had thought the ID was something Google generated for my web site, or more specifically for my site’s Client ID. I no longer believe so. A little experimentation (aided by a change of Client ID for the security issue previously documented) leads me to believe the Google ID may be global across all Google services.

This means that if a hacker manages to get a copy of my site’s database of Google IDs, they can cross-reference it against the databases of other compromised web sites, potentially assembling a larger picture out of small pieces of information.

While I can’t stop using the Google ID (I have to have something to identify a user), I can make it more difficult for a hacker to cross-reference my database. I’ve just committed a change to hash the ID before it is stored in the database, salted with a value that is unique per deployed instance of the app.
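The idea, roughly, is that the model never stores the raw ID, only a salted hash of it. A sketch of how that might look (the column name, environment variable, and method names here are hypothetical illustrations, not the app’s actual code):

```ruby
require "bcrypt"

class User < ApplicationRecord
  # GOOGLE_ID_SALT is assumed to be a bcrypt salt generated once per deployed
  # instance (e.g. with BCrypt::Engine.generate_salt) and kept out of the database.
  def self.hash_google_id(google_id)
    BCrypt::Engine.hash_secret(google_id, ENV.fetch("GOOGLE_ID_SALT"))
  end

  # Look up (or create) the account by the hashed value; the raw Google ID
  # itself never lands in the database.
  def self.from_google_id(google_id)
    find_or_create_by(hashed_google_id: hash_google_id(google_id))
  end
end
```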

Now for a hacker to penetrate the identity of my user, they must do all of the following:

  1. Obtain a copy of the database.
  2. Obtain the hashing salt used by the specific instance of the app which generated that database.
  3. Already have the user’s Google ID, since they won’t get the original ID out of my database of hashed values.

None of which are impossible, but certainly a lot more effort than it would have otherwise taken.

I think this is a worthwhile addition.

Adventures in Server-Side Authentication

The latest chapter comes courtesy of the Google Identity Platform. For my next Rails app project, I decided to venture away from the user password authentication engine outlined in the Hartl Ruby on Rails Tutorial sample app. I had seen the “Sign in with Google” button on several web sites (like Codecademy) and decided to see it from the other side: Users for my next Rails project will sign in with Google!

The client-side code was straightforward, following the directions in the Google documentation. The HTML is literally copy-and-paste; the JavaScript needed some reworking to translate into CoffeeScript for the standard Rails asset pipeline, but it wasn’t terribly hard.

The server side was less straightforward.

I started with the guide Authenticate with a Backend Server, which had links to the Google API Client Library for (almost) all of the server-side technologies, including Ruby. The guide page itself included examples of using the client library to validate the ID token in Java, Node.js, PHP, and Python. The lack of a Ruby example would prove problematic, because each flavor of the client library seems to follow different conventions and use different names for the same functionality.

The Java library has a dedicated GoogleIdTokenVerifier class for the purpose. The Node.js library has a GoogleAuth.OAuth2 class with a verifyIdToken method. The PHP library has a Google_Client class with a verifyIdToken method. And to round out the set, the Python library has oauth2client.verify_id_token.

Different, but they’re all in a similar vein of “verify”, “id”, and “token” so I searched the Ruby Google API client library documentation for those keywords in the name. After a few fruitless hours I concluded what I wanted wasn’t there.

Where to next? I went to the library’s GitHub page for clues. I had to wade through a lot of material irrelevant to the immediate task, because the large library covers the entire surface of Google services.

I thought I had hit the jackpot when I found reference to the Google Auth Library for Ruby. It’s intended to handle all authentication work for the big client library, with the target completion date of Q2 2015. (Hmm…) Surely it would be here!

It was not.

After too many wrong turns, I looked at Signet in detail. It has an OAuth2::Client class, which sounded very similar to the other libraries, but it had no “verify” method, so every time I saw a reference to Signet I decided to look elsewhere. Once I finally read into the details of Signet::OAuth2::Client, I figured out that it has a decoded_id_token method that can optionally verify the token.
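Putting the pieces together, server-side verification with Signet can look roughly like the following. This is a sketch based on my reading of the library, with certificate handling kept deliberately simple (a real app would cache Google’s certificates instead of fetching them on every request):

```ruby
require "signet/oauth_2/client"
require "net/http"
require "json"
require "openssl"
require "jwt"

GOOGLE_CERTS_URI = URI("https://www.googleapis.com/oauth2/v1/certs")

# Returns the decoded token payload when the token verifies, nil otherwise.
def decode_google_id_token(id_token, client_id)
  client = Signet::OAuth2::Client.new(client_id: client_id)
  client.update_token!(id_token: id_token)

  JSON.parse(Net::HTTP.get(GOOGLE_CERTS_URI)).each_value do |pem|
    key = OpenSSL::X509::Certificate.new(pem).public_key
    begin
      # decoded_id_token checks the signature (when given a key) and raises
      # if the token's audience does not match our client_id.
      return client.decoded_id_token(key, { algorithm: "RS256" })
    rescue JWT::VerificationError
      next # signed by one of Google's other published certificates; keep trying
    end
  end
  nil
rescue Signet::UnsafeOperationError, JWT::DecodeError
  nil
end
```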

So it had the verification feature but the keyword “verify” itself wasn’t in the name, throwing off my search multiple times.

Gah.

Nothing to do now but to take some deep breaths, clear out the pent-up frustration, and keep on working…

Simple Online Digital Photo Frame

The CarrierWave Playground project was created for experimentation with image upload. As intended, it helped me learn things about CarrierWave such as creating versions of the image scaled to different resolutions and extracting EXIF image metadata.
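As a taste of what the “versions” part looks like, here is a scaled-down uploader along the lines of what I experimented with (the version names and dimensions are my own placeholders):

```ruby
# app/uploaders/image_uploader.rb: a minimal CarrierWave uploader sketch that
# generates scaled copies of each uploaded image.
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick  # ImageMagick-backed processing

  storage :file

  # Each version is generated at upload time and stored alongside the original.
  version :large do
    process resize_to_fit: [1920, 1080]
  end

  version :thumb do
    process resize_to_fit: [320, 240]
  end
end
```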

Obviously the image has to be displayed to prove that the upload was successful. I hadn’t intended to spend much time on the display side of things, but I started playing with the HTML and kept going. Logic was added for the browser to report its window size so the optimal image could be sent and scaled to fit. I had a button to reload the page, and it was fairly simple to change it from “reload the current page” to “navigate to another page”. Adding a JavaScript timer to execute this navigation… and voila! I had myself a rudimentary digital photo frame web app that loads and displays images in sequence.

It’s fun but fairly crude. Brainstorming the possibilities, I imagine the following stages of sophistication:

Stage 1 – Basic: Where I am now, simple JavaScript that performs page navigation on a timer.

Problem: the page blinks and abruptly shifts as the new page is loaded. To avoid the abrupt shift, we have to eliminate the page switch.

Stage 2 – Add AJAX: Instead of a page navigation, perform an XMLHttpRequest to the server asking for information on the next image. Load the image in the background, and once complete, perform a smooth transition (fade out/fade in/etc.) from one image to the next.

Problem: Visual experience is at the mercy of the web browser, which probably has an address bar and other UI on screen. Also, the user’s screen will quickly go dark due to power saving features. To reliably solve both, I will need app-level access.

Stage 3 – Vendor-specific wrapper: Every OS platform has a way to allow web site authors an express lane into the app world. Microsoft offers the Windows App Studio. Apple has iOS web applications. Google has Android Web Apps.

Unknown: The JavaScript timer is a polling model; do we gain anything by moving to a server-push model? If so, that means…

Stage 4 – WebSocket: Photo updates are triggered by the server. Since I’m on Rails, the most obvious answer is to do this via WebSockets using Action Cable.
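To make Stage 4 a little more concrete, the server-push piece might eventually look something like the sketch below (purely hypothetical names; nothing like this exists in the project yet):

```ruby
# app/channels/photo_frame_channel.rb: each connected photo frame subscribes here.
class PhotoFrameChannel < ApplicationCable::Channel
  def subscribed
    stream_from "photo_frame"
  end
end

# Elsewhere on the server: whenever the next image is chosen, push its URL to
# every subscribed frame.
def push_next_image(image_url)
  ActionCable.server.broadcast("photo_frame", image_url: image_url)
end
```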

Looking at the list, I think I can tackle Stage 2 pretty soon if not immediately.

Stages 3 and 4 are more advanced and I’ll hold off for later.

EXIF fun with CarrierWave uploader

To play with the CarrierWave uploader gem I created a new Rails project just for the purpose of experimentation. I had thought about doing this as part of the Hartl Rails Tutorial sample app but ultimately decided to keep things as bare-bones as I can.

When trying to understand a new system, a debugger is a developer’s best friend. Part of this exercise is to get my feet wet using a debugger to poke around a running Rails app. The Hartl Rails Tutorial text introduced the byebug gem but only minimally covered usage. The official Rails Guides had more information which helped me get going, in addition to various users writing up their own cheat sheets.

I decided to dig into image metadata. Basic information such as width and height were made available as img[:width] and img[:height] but where are the others? With the help of byebug I found that they were available in the img.data hash. Most of the photography-specific EXIF metadata are in there as well, though somebody interested only in that subset can access it via the img.exif hash.

As an exercise I decided to try to pull out the original date. Time stamps on digital photography files have always been a headache. Most computer operating systems track “Created Date” and “Modified Date,” but those are relative to the computer and not reliable for photo organization purposes. Photo editing introduces another twist: if an image is created from a photograph, the time stamp would be the day the edit operation was made and not when the original photo was taken.

Which is how I ended up looking at EXIF “DateTime”, “DateTimeOriginal”, and “DateTimeDigitized” and taking the earliest date of the three (when present). Then I ran into another can of worms: time zones. The EXIF time stamps have no time zone information, but the Ruby DateTime object type does. Now my time stamps are interpreted as UTC when they aren’t. Since EXIF doesn’t carry time zone data (that I can find), I decided to leave that problem to be tackled another day.
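For illustration, the date-picking part can be sketched like this (method and constant names are my own; it assumes the MiniMagick-backed image object described above, and the final comment notes exactly the UTC problem I ran into):

```ruby
require "mini_magick"
require "date"

EXIF_DATE_KEYS = %w[DateTime DateTimeOriginal DateTimeDigitized].freeze

# Return the earliest of the three EXIF dates, or nil if none are present.
def original_date(path)
  img = MiniMagick::Image.open(path)
  img.exif
     .values_at(*EXIF_DATE_KEYS)                  # e.g. "2017:06:04 15:23:08"
     .reject { |raw| raw.nil? || raw.empty? }
     .map { |raw| DateTime.strptime(raw, "%Y:%m:%d %H:%M:%S") }
     .min  # EXIF carries no time zone, so these values parse as if they were UTC
end
```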

Behavior Driven Development

My new concept of the day: Behavior Driven Development. As this beginner understands the concept, the ideal is that plain-English customer requirements for the software are formalized just enough to become part of automated testing. In hindsight, it’s a perfectly logical extension of Test-Driven Development concepts, which started by treating QA demands on the software as the horse instead of the cart. I think BDD can be a pretty fantastic concept, but I haven’t seen enough to decide if I like the current state of the art in execution.

I stumbled into this entirely by accident. As a follow-up to the Rails Tutorial project, I took a closer look at one corner of the sample app. The image upload feature of the sample app used a gem called carrierwave uploader to do most of the work. In the context of the tutorial, CarrierWave was a magic black box that was pulled in and used without much explanation. I wanted to better understand the features (and limitations) of CarrierWave for use (or not) in my own projects.

As is typical of open-source projects, the documentation that exists is relatively thin and occasionally backed by the disclaimer “for more details, see source code.” I prefer better documentation up front but I thought: whatever, I’m a programmer, I can handle code spelunking. It should be a good exercise anyway.

Since I was exploring, I decided to poke my head into the first (alphabetically sorted) directory: /features/. And I was immediately puzzled by the files I read. The language was too formal to be conversational English for human beings, but too informal to be any programming language I knew. Some amount of Google-assisted research led me to the web site for Cucumber, the BDD tool used by the developers of CarrierWave.

That journey was fun, illuminating, and I haven’t even learned anything about CarrierWave itself yet!

Dipping toes in AWS via Rails Tutorial Sample App

Amazon Web Services is a big, big ball of yarn. For somebody just getting started, the list of AWS products is quite intimidating. It’s not any fault of Amazon, it’s just the nature of building out such a comprehensive system. Fortunately, Amazon is not blind to the fact people can get overwhelmed and put admirable effort into a gentle introduction via their Getting Started resources: a series of (relatively) simple guided tours through select parts of the AWS domain.

At the end of it all, though, a developer has to roll up their sleeves and dive in. The question then is: where? In previous times, I couldn’t make up my mind and got stuck. This time around, I have a starting point: Michael Hartl’s Ruby on Rails Tutorial sample app, which wants to store images on Amazon S3 (Simple Storage Service).

Let’s make it happen.

One option was to blaze the simplest, most direct path to get rolling, but I resisted. The example I found on stackoverflow.com granted the rails app full access to storage using my AWS root credentials. Functionally speaking that would work, but it is a very bad idea from a security practices standpoint.

So I took a detour through Amazon IAM (Identity and Access Management). I wanted to learn how to do a properly scoped security access scheme for the Rails sample app, rather than giving it the Golden Key to the entire kingdom. Unfortunately, since IAM is used to manage access to all AWS properties, it is a pretty big ball of yarn itself.

Eventually I found my on-ramp to AWS: a section in the S3 documentation that discussed access control via IAM. Since I control the rails app and my own AWS account, I was able to skip a lot of the cross-account management for this first learning pass, boiling it down to the basics of what I can do for myself to implement IAM best practices on S3 access for my Rails app.

After a few bumps in the exploration effort, here’s what I ended up with.


Root account: This has access to everything, so we want to use this account as little as possible to minimize the risk of compromising it. Log in to this account just long enough to activate multi-factor authentication and create an IAM user account with “AdministratorAccess” privileges. Then log out of the management console as root and log back in under the new admin account to do everything else.

Admin account: This account is still very powerful so it is still worth protecting. But if it should be compromised, it can at least be shut down without closing the whole Amazon account. (If root is compromised, all bets are off.) Use this account to set up the remaining items.

Storage bucket: While logged in as the admin account, go to the S3 service dashboard and create a new storage bucket.

Access policy: Go to the IAM dashboard and create a new S3 access policy. The “resource” of the policy is the storage bucket we just created. The “actions” we allow are the minimum set needed for the rails sample app and no more (a sketch of the resulting policy document follows the list below).

  1. PutObject – this permission allows the rails app to upload the image file itself.
  2. PutObjectAcl – this permission allows the rails app to change the access permissions on the image object, making the image publicly visible to the world. This is required for the image to be used as the source of an HTML <img> tag in the rails app.
  3. DeleteObject – when a micropost is deleted, the app needs this permission so the corresponding image can be deleted as well.
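Put together, the scoped policy is a small JSON document. Generating it from Ruby looks like this (the bucket name is a placeholder for whatever the bucket was actually called):

```ruby
require "json"

# The minimal S3 policy described above: three actions, one bucket, nothing else.
policy = {
  "Version"   => "2012-10-17",
  "Statement" => [{
    "Effect"   => "Allow",
    "Action"   => ["s3:PutObject", "s3:PutObjectAcl", "s3:DeleteObject"],
    "Resource" => "arn:aws:s3:::my-sample-app-bucket/*"
  }]
}

puts JSON.pretty_generate(policy)  # paste the output into the IAM policy editor
```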

Access group: From the IAM dashboard, create a new access group. Under the “Permissions” list of the group, attach the access policy we just created. Now any account which is a member of the group has enough access to the storage bucket to run the rails sample app.

User: From the IAM dashboard, create a new user account to be used by the rails app. Add this new user to the access group we just created, so it is part of the group and can use the access policy we created (and no more).

This new user, which we granted only a low level of access, will not need a password since we’ll never log in to the Amazon management console with it. But we will need to generate an app access key and secret key.

Once all of the above is done, we have everything we need to put into Heroku for the Rails Tutorial sample app: an S3 storage bucket name, plus the access key and secret key of the low-level user account we created to access that S3 storage bucket.
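On the Rails side, those three values typically feed a CarrierWave/fog initializer along these lines (the ENV variable names are my placeholders; they are whatever you choose when running heroku config:set):

```ruby
# config/initializers/carrier_wave.rb: a sketch of wiring the bucket name and
# keys (supplied to Heroku as config vars) into CarrierWave's S3 storage.
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      provider:              "AWS",
      aws_access_key_id:     ENV["S3_ACCESS_KEY"],
      aws_secret_access_key: ENV["S3_SECRET_KEY"]
    }
    config.fog_directory = ENV["S3_BUCKET"]
  end
end
```

The uploader itself then opts into fog storage in production only, which is roughly the arrangement the Rails Tutorial describes.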


While this is far more complex than the stackoverflow.com answer, it is more secure, and it was a good exercise for learning the major bits and pieces of the AWS access control system.

The above steps ensure that, if the Rails sample app should be compromised in any way, the hacker has only the permissions we granted to the app and no more. The hacker can put new images in the S3 bucket and make them visible, or delete existing images, but they can’t do anything else in that S3 bucket.

And most importantly, the hacker has no access to any other part of my AWS account.

Rails Tutorial (Take 2)

With all the fun and excitement around 3D printing, I’ve let my Ruby on Rails education lapse. I want to dive back in, but it’s been long enough that I felt I needed a review. Also, during my time away, the Ruby on Rails team released version 5, and Michael Hartl’s Ruby on Rails Tutorial was updated accordingly.

Independent of the Rails 5 updates, it was well worth my time to go through the book again. On the second run, I understood some things that hadn’t made sense before. It was also good to look at the first do-nothing “hello app” and the second automated-scaffold “toy app” with a little more Rails knowledge under my belt. The book is structured so that a beginning reader doesn’t have to understand the mechanics of the hello or toy apps, but readers with a bit of understanding will get something out of them.

The release notes for the update mentioned that a few sections were rearranged for better pacing and structure, and that more exercises were added for readers to check their progress. Both are incremental improvements that I appreciated, but neither was especially earth-shattering.

Action Cable, one of the big signature features of Rails 5, was not rolled into the book. Hartl is handling that in a separate tutorial, Learn Enough Action Cable to Be Dangerous, which I will go through at some point in the near future.

Towards the end of the book, Hartl introduced an optional advanced concept: using Amazon Web Services to store user image uploads. I skipped that section the first time through, and decided to dive into it this time.

I quickly found myself in a deep rabbit hole. Amazon Web Services has many moving parts designed for a wide range of audiences and it’s a challenge to get started without being overwhelmed.

Which is where I am now. Lots more exploration ahead!

“Ruby on Rails Tutorial” notes

Towards the end of the Getting Started with Rails guide, there was a link to the Ruby on Rails Tutorial. I had missed it the first time I went through the Getting Started guide, but when I reviewed the guide a second time (when things made a lot more sense), I noticed it and decided to check it out.

I’m very glad that I did!

For the most part, this book (available as a paper edition, though the author has made it freely readable online on the web site) was exactly what I had hoped to find for Ruby on Rails. The book goes through the process of making a Rails app three times, in exactly the progression I wanted:

First pass: A simple “hello world” that does nothing much but lets the student experience all the standard overhead around creating and deploying a Rails app. It’s barely a Rails lesson at all: it’s really a lesson for learning the surrounding infrastructure before digging into the meat of learning Rails.

Second pass: A simple “toy app” that does a lot… but the student is rushed through without understanding what’s going on behind the scenes. It’s really a demonstration of what’s possible via Rails helpers & scaffolding and less about learning Rails. It reminds me of what the Codecademy Ruby on Rails “class” was like: a lot of incomprehensible fancy flash. However, unlike Codecademy, which left me asking “now what?”, the Rails Tutorial follows up.

Third pass: Over 80% of the book is spent building the “sample app,” starting from a site that serves a few static pages and growing, bit by bit, into a mini clone of Twitter. It has creation and maintenance of user accounts, authentication, posting of 140-character messages, and following & unfollowing of users: all the core bits we associate with Twitter. (Which, the author asserted, was also originally built using Ruby on Rails.)

The Codecademy Rails class left me confused and feeling like I’ve wasted my time. (About a day.)

The Rails Guides left me with a basic level of comprehension of what goes on in a Rails app. My time (about half a week) was well spent, but I didn’t feel like I could build anything on my own yet.

After the Ruby on Rails Tutorial (which took me over a week), I feel like I have all the basic tools I need to start playing in the Rails playground. I have an idea of how to get started, how to experiment, how to understand what an experiment does behind the scenes, and how to debug whatever goes wrong along the way.

And with that, it’s time to practice using my new tools: Let’s build something!