A piano, in CSS only

GIFs are great, but CSS spinners are awesome.

Classic spinner

Equalizer spinner

Pong spinner

Performance is important, we all know that. I like performance and I like optimizing websites, but sometimes it’s easy to forget that it’s really all about the users. Don’t do any optimizations unless your users actually care about them. That said, quite often they do…

In this post, I will cover a number of tools that I often use to analyze and measure the performance of a website. The tools help us answer two questions: how fast is the site right now, and where should we start optimizing? Knowing when to use which tool is the key to quickly spotting bottlenecks. However, I will not cover tools for finding issues related to in-app performance (scrolling, animations, etc.). Instead the focus will be on tracking page load performance.

Chrome dev tools

What would we do without Chrome dev tools? Frankly, I don’t know.

I use it every day and it’s definitely the first thing I look at when tracking the performance of a site. Use the network tab to count the number of resources, measure the rendering and page load times, look at the waterfall diagram and verify that the cache headers are set properly. There is a ton of useful information available here!


Webpagetest.org

Webpagetest.org is quite a lot like the network tab in Chrome dev tools, but it allows you to run measurements from several different locations. This is really helpful for getting an idea of how the site performs from different continents all around the world. For example, Tokyo will see a longer response time than London if the server is placed in Stockholm. Since we are limited by the speed of light, geographical location really matters!

Rendering time, page load time, time for DNS lookups, waterfall diagrams, bandwidth diagrams and more can be found here. You even get a filmstrip of how the page renders, which is really helpful when optimizing the above-the-fold view.

Webpagetest makes the measurements from real devices, which is kind of cool. The drawback is that most of them are running in a Windows environment. However, a few other environments, such as Android mobile phones, are available as well. Hopefully we will see iOS and other popular devices in the near future.

If you are running Jenkins, Travis or any other CI server, you might want to automate the procedure of gathering the measurements. This can be done by using the Webpagetest API client, which is written in Node.js.

How about blocking a deploy if the start page loads too slowly? I’d love to see that kind of setup!
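One way to sketch such a deploy gate: measure with the WebPagetest API client and fail the build when a metric exceeds its budget. The client usage shown in the comments is an assumption based on the package’s documented style; the budget check itself is plain Javascript and runs offline.

```javascript
// Sketch of a performance-budget gate. The "webpagetest" npm client
// usage below is an assumption; check the package README for the
// exact API. The budget logic itself needs no network access.

function withinBudget(metrics, budget) {
    // Every budgeted metric must be present and within its limit.
    return Object.keys(budget).every(function(name) {
        return metrics[name] !== undefined && metrics[name] <= budget[name];
    });
}

// Hypothetical CI usage (requires a WebPagetest API key):
//
//   var WebPageTest = require('webpagetest');
//   var wpt = new WebPageTest('www.webpagetest.org', 'YOUR_API_KEY');
//   wpt.runTest('http://example.com/', function(err, result) {
//       var metrics = { loadTime: result.data.average.firstView.loadTime };
//       process.exit(withinBudget(metrics, { loadTime: 2000 }) ? 0 : 1);
//   });
```

Exiting with a non-zero status is all it takes for Jenkins or Travis to mark the build as failed.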

Yslow and Google PageSpeed

Yslow and Google PageSpeed are two quite similar tools. They both grade the site and provide useful tips on how to improve the performance, which makes them very handy during development. They cover a lot of different measures, including server response time, rendering performance, browser caching, image analysis and more. Both Yslow and Google PageSpeed are available as browser plugins and they are really easy to use.

During development I prefer Google PageSpeed over Yslow since I think it provides more useful information, but Yslow has other advantages. When it comes to scripting and automation, Yslow can be run using PhantomJS. It supports TAP output, which integrates beautifully with both Jenkins and Travis.

PageSpeed is unfortunately a bit harder to automate. However, there is a Google PageSpeed API which allows you to start the analysis from the command line and then collect the results later on.


Phantomas

Phantomas is a wonderful little tool that is built on top of PhantomJS. It hits your site and collects all kinds of metrics, such as the number of requests, the number of gzipped requests, time to first byte, the number of Javascript/CSS/font/image files, etc.

But not only does it collect data related to the loading of the page, it also presents some interesting numbers related to Javascript performance. A few of them are: the number of DOM queries, number of DOM inserts, global variables, duplicated DOM queries that could be cached, and more…
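To see what the “duplicated DOM queries” metric is about, here is a small illustration. The DOM is stubbed out with a call counter so it runs anywhere; the selector and element properties are hypothetical.

```javascript
// Illustration of the kind of duplicated DOM query Phantomas flags.
// querySelector is stubbed with a counter to make the duplication visible.

var calls = 0;
var fakeDocument = {
    querySelector: function(selector) {
        calls++;
        return { offsetWidth: 100, offsetHeight: 50 };
    }
};

// Flagged: the same query is executed twice.
var w = fakeDocument.querySelector('#menu').offsetWidth;
var h = fakeDocument.querySelector('#menu').offsetHeight;
// calls is now 2

// Better: query once and reuse the reference.
calls = 0;
var menu = fakeDocument.querySelector('#menu');
var w2 = menu.offsetWidth;
var h2 = menu.offsetHeight;
// calls is now 1
```

Caching the element reference halves the number of DOM lookups here, which is exactly the kind of saving Phantomas points you towards.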

It’s run from the command line, and like Yslow, Phantomas supports the TAP format, which makes it easy to integrate with your CI server.


Sitespeed.io

Have you ever wanted to analyze all the pages on your site? Sitespeed.io is your friend. Just specify a single URL and it will crawl the entire site by following anchor tags. It collects data from each page, and allows you to compare the results between the different pages. Most data is related to assets, such as the size of Javascript/CSS/images, and how they are loaded. It also calculates the median and average of all the metrics, which gives a good view of how the site performs overall.

Along with that, it keeps track of all the assets it comes across (including third-party assets). This allows you to easily spot particularly slow assets, such as third-party widgets. You also get a table that summarizes the caching of all the assets, which makes it easy to find assets that are missing proper cache headers.


Charles

Charles is a web proxy. It makes sure that all outgoing traffic passes through it, which allows you to do tons of interesting things. One of the most useful features is that it can throttle the connection to (for example) 3G speed. Very useful when testing the performance as if running from a mobile network!

There are other tools that can do this kind of throttling, such as the Network Link Conditioner, but Charles also throttles local traffic, which most other tools don’t. This is of course very useful during development when you are running against a local web server.

Yay, the new site is finally online! Up until now, blog posts were written by inserting rows manually into a database. Not very clever. I definitely blame my laziness about writing blog posts on that.

The site is now backed by Locomotive CMS. While at it, I took the opportunity to rewrite the frontend application as well. It’s now written in CanJS. I must give a shout-out to the CanJS team, I really like the framework! Small and simple, yet brilliant.

To make the site crawlable by search engines, I now share Mustache templates between the frontend and the backend. A simple Ruby application written in Sinatra renders them on the backend, while CanJS renders them on the frontend. It turned out to work quite well! What I like about Mustache is that it forces you to move all logic out of the templates and into a view instead.

The code snippets are still hosted as Gists. To embed them into the site I wrote a simple jQuery plugin: GistFetcher. It transforms your anchor tags into nice-looking code blocks.

Please ping me if you find any bugs or other weird quirks. I bet there are a few!

After reading this post, you’ll be able to set up everything needed to write and run tests in a Javascript environment. Jasmine will be used as the testing framework and the tests will be run using Grunt. Another tool, Istanbul, will be used to measure code coverage.

Write a test

Suppose there is a constructor function called MyModule that is supposed to return “Hello world” when its method myMethod() is called:

var MyModule = function() {
    this.myMethod = function() {
        return "Hello world";
    };
};

Now, let’s create a file containing a test written using Jasmine. A test for the code above could look something like this:

describe("My module", function() {
  it("answers to myMethod() correctly", function() {
    var instance = new MyModule();
    expect(instance.myMethod()).toEqual("Hello world");
  });
});

For more information about how to write tests using Jasmine, check out their website.

Install Grunt

To install Grunt, you first need to get Node.js running on your local machine. Once you have Node.js installed, simply run sudo npm install -g grunt-cli.

This will (most likely) install the grunt binary in /usr/local/bin. Make sure this directory is in your PATH.

Configure Grunt to run the tests

In your project directory, run npm init to create the file package.json for your project. This file contains information about which Node.js modules the project depends on.

To run the Jasmine tests we need the grunt package along with a grunt task that can run the tests. Install them both by running npm install grunt grunt-contrib-jasmine --save-dev

The flag --save-dev will add the package as a dependency in package.json.
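After the install, the devDependencies section of package.json should contain something like this (the project name is a placeholder and the version numbers will differ):

```json
{
  "name": "my-project",
  "devDependencies": {
    "grunt": "~0.4.1",
    "grunt-contrib-jasmine": "~0.5.1"
  }
}
```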

Next up, create a file called Gruntfile.js. This file will describe the different tasks run by Grunt. Let’s configure it to run the tests!

module.exports = function(grunt) {

    // Project configuration.
    grunt.initConfig({

        // Metadata.
        pkg: grunt.file.readJSON('package.json'),

        // Task configuration.
        jasmine: {
            all: {
                src: ['public/javascripts/src/**/*.js'],
                options: {
                    'vendor': 'public/javascripts/libs/**/*.js',
                    'specs': 'public/javascripts/spec/**/*.js'
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jasmine');
};

There are a few things you need to adjust to fit your project. The src parameter describes where your Javascript files are located. The vendor parameter is used for third-party libraries, such as jQuery. Finally, the specs parameter describes where your tests are located.

Make sure that you place the test you have written in the folder specified by the specs parameter.

Run the tests

The tests can now be run with grunt jasmine:all

To automate this you can use the Grunt task watch. The tests can then be run whenever you save a Javascript file. To install the watch task, run npm install grunt-contrib-watch --save-dev

Then, add a watch configuration in your Gruntfile.js

watch: {
    js: {
        files: ['public/javascripts/**/*.js'],
        tasks: ['jasmine:all']
    }
}

In addition, don’t forget to load the watch task. Add this line at the end of the Gruntfile: grunt.loadNpmTasks('grunt-contrib-watch');

You’re now ready to run grunt watch.

Grunt now looks for changes in your Javascript files. Whenever a change is detected, the task jasmine:all will be run.

Generate code coverage measures

Istanbul is a wonderful tool that lets you track statement, branch and function coverage in Javascript. It integrates nicely with Jasmine and Grunt. Install the Istanbul code coverage template by running

npm install grunt-template-jasmine-istanbul --save-dev

Then tweak the Jasmine-task in Gruntfile.js to include the Istanbul configuration.

istanbul: {
    src: '<%= jasmine.all.src %>',
    options: {
        vendor: '<%= jasmine.all.options.vendor %>',
        specs: '<%= jasmine.all.options.specs %>',
        template: require('grunt-template-jasmine-istanbul'),
        templateOptions: {
            coverage: 'coverage/json/coverage.json',
            report: [
                {type: 'html', options: {dir: 'coverage/html'}},
                {type: 'text-summary'}
            ]
        }
    }
}

To analyze the code and generate the code coverage, simply run grunt jasmine:istanbul.

As you can see, you’ll get the coverage presented directly in the terminal. If you want a more graphical view, open coverage/html/index.html in a browser.


Gruntfile.js now contains three different tasks:

  • jasmine:all - Runs the Jasmine tests
  • watch - Watches the Javascript files and fires jasmine:all when a file is changed
  • jasmine:istanbul - Generates code coverage using Istanbul

To make them easier to remember, we can register shorter aliases for them. Add the following to Gruntfile.js:

grunt.registerTask('test', ['jasmine:all']);
grunt.registerTask('coverage', ['jasmine:istanbul']);
grunt.registerTask('default', ['watch']);

The default task is the one that runs when grunt is invoked without any task being specified.

To summarize, the final Gruntfile.js combines the jasmine, watch and task-alias configuration from the steps above.
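Assembled from the snippets above, the complete file might look like this (the src and files globs are placeholders; adjust them to your project):

```javascript
module.exports = function(grunt) {

    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),

        jasmine: {
            all: {
                src: ['public/javascripts/src/**/*.js'],
                options: {
                    vendor: 'public/javascripts/libs/**/*.js',
                    specs: 'public/javascripts/spec/**/*.js'
                }
            },
            istanbul: {
                src: '<%= jasmine.all.src %>',
                options: {
                    vendor: '<%= jasmine.all.options.vendor %>',
                    specs: '<%= jasmine.all.options.specs %>',
                    template: require('grunt-template-jasmine-istanbul'),
                    templateOptions: {
                        coverage: 'coverage/json/coverage.json',
                        report: [
                            {type: 'html', options: {dir: 'coverage/html'}},
                            {type: 'text-summary'}
                        ]
                    }
                }
            }
        },

        watch: {
            js: {
                files: ['public/javascripts/**/*.js'],
                tasks: ['jasmine:all']
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jasmine');
    grunt.loadNpmTasks('grunt-contrib-watch');

    grunt.registerTask('test', ['jasmine:all']);
    grunt.registerTask('coverage', ['jasmine:istanbul']);
    grunt.registerTask('default', ['watch']);
};
```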

Happy testing!