A look at front-end code
It’s hardly news that modern apps are written as single-page webapps consuming a remote API. It has become good practice to write web apps as static clients consuming the same public API as any other client, perhaps with the exception of being hosted on the service’s main domain. Because we build highly dynamic webapps with interdependent experiences, we assume that the user will download the entire web application into their browser. We therefore try to minimize the size of the client code so that the user gets the fastest experience possible.
This leads us into a world where webapps resemble traditional software more closely: we can assume the client has the entire source present; we have a build process and build artefacts; and we tend to end up with some concept of versions (even if we practice some form of continuous delivery).
We’ve developed an interesting methodology for dealing with this. Here’s a quick video demo of the system; the rest of the article describes its details.
In the video, I make some changes to the styles and demonstrate how I can get a version of the app running on production with a shareable link. If you have a Cloud Analytics account, you can try Pony Analytics right here:
I also show how we gain separation between front-end and back-end code, because each can be run with the production version of the other.
Our code is split into a back-end and front-end system. The back-end system is
deployed using our Cloud Management product (we eat our own dog food). The front-end system is ‘deployed’ on every
git push to GitHub. This is not visible to users, since which of these ‘deployments’
is visible is governed by a database record on the back-end. This field propagates
to the entry point of the front-end: the
index.html. From here we load what we
call a Scoutfile, which gives us considerable flexibility: it loads any other
‘deployment’ if specified, or defaults to the ‘production’ deployment.
We wanted front-end engineers and designers to be able to run the front-end against our staging or production back-ends. This lets them work on the front-end without going through the trouble of setting up the back-end (which is admittedly painful) and of acquiring realistic datasets in their development environment.
Once we had that ability, we realised that it would be nice to have the ability to deploy and test every single commit and be able to share these deployments in our team.
We push code to GitHub. Travis CI picks up the push and builds, tests, packages and finally uploads our build artefacts to S3. We serve our build artefacts from S3, but for security reasons we have a reverse proxy set up that we load the assets through, so users always see assets loading from our IP addresses.
The production app then loads only a tiny file that checks for a
scout URL parameter.
If the parameter is present, it will load the entire front-end assets from the
S3 bucket, from the folder corresponding to the
scout parameter. It can also
be used to load assets served by a local server.
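The parameter check can be sketched like this (the original source isn’t shown here, so the function name and exact parsing are my assumptions):

```javascript
// Hypothetical sketch -- the real scoutfile's parsing may differ.
// Extracts the value of the `scout` parameter from a query string
// such as "?scout=6239b1e2...".
function getScoutParam(search) {
  var match = /[?&]scout=([^&]*)/.exec(search);
  return match ? decodeURIComponent(match[1]) : null;
}

// In the browser this would be called as getScoutParam(window.location.search).
```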
This enables a much more powerful method of communication in the team, where any change can be instantly and automatically previewed by any other team member.
To make this form of communication even easier, I wrote a Chrome extension, that adds preview buttons to GitHub Pull Requests:
When you click the preview button, you are redirected to the production application with the front-end corresponding to the latest commit in the pull request loaded.
The extension is generic, so if you wish to use a similar approach, you can get it here and change the settings so that it works with your setup.
Here I would like to show some of the key pieces of this infrastructure. This is a highly technical bit, essentially literate programming, so consider yourself warned.
index.html - The Stringent Gatekeeper
We render our single page app through our Rails app. This is not strictly necessary - our original concept simply had a global file in the root of the S3 bucket. But having Rails handle the entry point into the application means that Rails can redirect the user to the login page and can easily bootstrap the app with some initial data. More importantly, we store the Git SHA of the production version of the app in the database. This means that only people who have write access to the production database can release a new version of the application to the customer (which, given my pony example, is probably a good thing), whereas every commit can be (and indeed is) written to the production S3 bucket.
To enable this, all front-end requests go through this controller:
First we check whether a user is logged in.
We render the view, passing in some information through helper functions:
We pass in the Git SHA of the current release tag to load the correct default version of the UI.
We also pass in a hash of values that allows us to bootstrap the app, saving one HTTP request.
This renders the following view:
We use this simple method to inject into our code bootstrapping information.
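The idea can be sketched like this (the window.bootstrapData name is my assumption; the article doesn’t show the actual key):

```javascript
// The rendered view would contain an inline script along the lines of:
//   <script>window.bootstrapData = {"user": {...}, "releaseSha": "..."};</script>
// The app then reads that object at startup instead of making an extra
// HTTP request for the same data.
function getBootstrapData(win) {
  // Fall back to an empty object so the app can still boot (and fetch
  // the data itself) if the server rendered nothing.
  return win.bootstrapData || {};
}
```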
Then we set up a place for our application to render in.
Finally we inject the scoutfile. We have different endpoints for local environments, testing environments and production/staging environments.
Rails.application.config.assets_endpoint is a configuration option that in production
points to a proxy server that ensures all requests go through RightScale IP
addresses. In development it points to a different server on localhost.
Travis - The Ingenious Builder
Travis is a system that requires surprisingly little code to get very powerful things done:
Travis will run npm test by default, which we have configured to run a grunt ci task. I won’t post the code of our Gruntfile, as there is no way to simplify that file. I’ll take this opportunity to say that Grunt is terrible for building complex applications, so if you try this at home, use something else.
```yaml
language: node_js
node_js:
  - "0.10"
deploy:
  - provider: s3
```
Use an AWS IAM user that has write access only to your production S3 bucket. Travis will encrypt the secret for you, so you can check it into your repo.
```yaml
    access_key_id: MY_AWS_ACCESS_KEY_ID
    secret_access_key:
      secure: MY_ENCRYPTED_AWS_SECRET_ACCESS_KEY
    bucket: my-production-bucket
```
The last task in our grunt pipeline finds the SHA of the current commit and copies the build artefacts into a directory of that name. So for commit 6239b1e2f8a1963eff137c5ca7f7520f5fda8bc0, grunt will copy all artefacts into ./dist/release/6239b1e2f8a1963eff137c5ca7f7520f5fda8bc0, where Travis will find them and copy them to S3.
```yaml
    local-dir: dist/release
    acl: public_read
    skip_cleanup: true
    region: us-west-2
    on:
      repo: rightscale/analytics_ui
      all_branches: true
```
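The copy-to-SHA step can be sketched as follows (the task name and file layout are assumptions; only the destination-path logic reflects what the article describes):

```javascript
// Computes the destination directory for a given commit SHA.
function releaseDir(sha) {
  return 'dist/release/' + sha;
}

// Inside a Gruntfile, something along these lines would use it
// (TRAVIS_COMMIT is set by Travis for every build):
//
// grunt.registerTask('release-copy', function () {
//   var sha = process.env.TRAVIS_COMMIT;
//   grunt.file.expand({ filter: 'isFile' }, 'build/**/*').forEach(function (src) {
//     grunt.file.copy(src, releaseDir(sha) + '/' + src.replace(/^build\//, ''));
//   });
// });
```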
scoutfile.js - The Courageous Source Finder
Inspired by an influential post by Alex Sexton, the scoutfile is there to figure out which version of the app to load, and then to load it.
First, we have to figure out where to load our assets from (and we want to persist that setting, so the choice survives a refresh):
If there is a scout parameter in the url, we will try to do something with it:
If the scout param is dev, we persist the setting and return the local development endpoint.
Otherwise we assume it is either a URL or a Git SHA. If it is a URL, we need to validate its domain against a whitelist for security reasons.
If it is a Git SHA, we complete it into our default assets domain (which is in fact a proxy to our production S3 bucket).
If there is no scout param, we repeat the process with whatever is stored in localStorage.
If neither is set, we return the default.
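Putting those steps together, the resolution logic looks roughly like this (function and option names are my assumptions, and the whitelist check is simplified):

```javascript
// Resolves the assets base with the precedence described above:
// URL param first, then the stored value, then the default.
function resolveAssetsBase(scoutParam, storedScout, opts) {
  var scout = scoutParam || storedScout;
  if (!scout) return opts.defaultBase;
  if (scout === 'dev') return opts.devBase;
  if (/^[0-9a-f]{40}$/i.test(scout)) {
    // A bare Git SHA is completed into the default assets domain.
    return opts.assetsDomain + '/' + scout + '/';
  }
  // Otherwise treat it as a URL and validate its host for security.
  var host = scout.replace(/^https?:\/\//, '').split('/')[0];
  return opts.whitelist.indexOf(host) !== -1 ? scout : opts.defaultBase;
}
```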
We want to save whatever the user passed through the URL, so that refreshing the app doesn’t change the UI version, so we persist it into localStorage.
We need to support HTTPS so that a production back-end (always running with SSL) can load local assets. For that, however, we need to run our dev server on two different ports and load the assets from the appropriate one.
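Choosing the right local port can be sketched as (the port numbers are placeholders):

```javascript
// Picks the local dev server matching the page's protocol, so an
// HTTPS back-end can load assets without mixed-content errors.
function devAssetBase(protocol, httpPort, httpsPort) {
  var port = protocol === 'https:' ? httpsPort : httpPort;
  return protocol + '//localhost:' + port + '/';
}

// In the browser: devAssetBase(window.location.protocol, 9000, 9443)
```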
The default base includes the Git SHA from which the scoutfile itself was loaded. To find it, we get the
<script> element from which the scoutfile was loaded and parse the SHA out of its src attribute.
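Extracting the SHA can be sketched like this (a reconstruction; the real parsing may differ):

```javascript
// Pulls a 40-character Git SHA out of a script URL such as
// "https://assets.example.com/6239b1e2.../scoutfile.js".
function shaFromScriptSrc(src) {
  var match = /\/([0-9a-f]{40})\//i.exec(src);
  return match ? match[1] : null;
}

// In the browser, the scoutfile can find its own <script> element while
// it is being parsed, e.g. the last one in the document at that moment:
//   var scripts = document.getElementsByTagName('script');
//   var src = scripts[scripts.length - 1].src;
```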
Next we need to set up our loading infrastructure. We will be inserting our scripts after the first script in the document.
Next we want a helper function to add a tag to the HEAD element.
We then make a specialised maker function for each type of asset being loaded.
Scripts trigger the callback passed to them once they have loaded.
CSS links also have a media attribute, which is better to set explicitly.
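A sketch of those helpers, with the document passed in explicitly so the sketch stays self-contained (the real code would use the browser’s document directly; names are my assumptions):

```javascript
// Appends a created element to the document's <head>.
function appendToHead(doc, tag) {
  doc.getElementsByTagName('head')[0].appendChild(tag);
}

// Makes a <script> tag that fires `onload` once it has loaded.
function makeScript(doc, src, onload) {
  var script = doc.createElement('script');
  script.src = src;
  script.onload = onload;
  return script;
}

// Makes a stylesheet <link> tag; setting media explicitly avoids
// surprises with user-agent defaults.
function makeCss(doc, href) {
  var link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  link.media = 'all';
  return link;
}
```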
Once we have this setup, we can proceed to load our files.
Finally, we don’t want to keep every single build artefact forever (as they can be quite large), and if we send versions of the app to customers we don’t want the app to fail silently. We have therefore implemented a canary system that causes the default UI to be loaded in case the requested UI cannot be reached.
scoutfile-canary.js looks like this:
window.scoutfileCanary = true. This allows us to detect that the load was successful: if the load had failed silently, this variable would remain unset.
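The whole canary file is essentially one line, and the check on the loading side is equally small (a reconstruction based on the description above):

```javascript
// scoutfile-canary.js itself just sets the flag:
//   window.scoutfileCanary = true;

// After attempting to load the canary from the requested assets base,
// the scoutfile can check whether it actually arrived:
function canaryLanded(win) {
  return win.scoutfileCanary === true;
}
// if (!canaryLanded(window)) { /* fall back to the default UI */ }
```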
Now that we know we are ready to load everything, we go through our list of scripts and build up the callback chain recursively. This guarantees the scripts will be loaded in the correct order (if the order of scripts doesn’t matter, some performance can be gained by modifying this bit).
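The recursive callback construction can be sketched like this (loadOne stands in for the real script-tag loader so the sketch stays self-contained):

```javascript
// Loads the scripts strictly in order: each one starts only after the
// previous one's callback has fired; `done` runs after the last.
function loadInOrder(scripts, loadOne, done) {
  var next = done || function () {};
  // Build the chain back to front: the callback of script i starts script i+1.
  for (var i = scripts.length - 1; i >= 0; i--) {
    next = (function (src, after) {
      return function () { loadOne(src, after); };
    })(scripts[i], next);
  }
  next(); // kick off the first script
}
```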
CSS files can be loaded asynchronously, as their order matters less.
Finally we trigger the load: if we are in local mode, then we need to ask the server for the list of dependencies. This is a JSONP request where the response is an array of files to load.
In production our dependencies are known, as they are built into a single file.
ui_version_indicator_directive.js - The Handy Informant
Finally we have a tiny Angular directive that shows what version of the assets we’re running:
I’ve included the template inline, but better practice would be having it in a separate file. This also assumes some other common directives, such as a tooltip implementation.
Git SHAs are commonly abbreviated to 8 characters, which makes them a bit easier on the eyes.
Next we check localStorage to find what version of the UI the user is on. This is basically a reversal of the process in the scoutfile above.
Finally we create a function to get out of a customised UI, which prompts the user and then removes the relevant items from localStorage and then redirects them to a non-scoutified URL.
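The two pieces of logic inside the directive can be sketched as plain functions (the storage key name and exact behaviour are my assumptions, based on the description):

```javascript
// Returns the UI version to display: the stored scout override if one
// exists, otherwise the abbreviated default release SHA.
function displayedUiVersion(storage, defaultSha) {
  var scout = storage.getItem('scout'); // assumed key name
  return scout ? scout.slice(0, 8) : defaultSha.slice(0, 8);
}

// "Get me out of this customised UI": clear the stored override and
// reload without the scout query parameter. (The real directive first
// prompts the user to confirm.)
function resetUiVersion(storage, location) {
  storage.removeItem('scout'); // assumed key name
  location.href = location.pathname; // drops the query string, reloading the default UI
}
```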
This system allows for a great degree of flexibility, easier communication and quick customer prototypes. BTW, we’re looking for a front-end engineer to join our team if this is something that interests you.