Wednesday, March 12, 2014

Browser-perf 0.1.0

Measure the rendering performance of your site with browser-perf

Browser-perf is
  • a node based tool
  • that measures various performance metrics 
  • of a web page or a hybrid (cordova based) application 
  • on browsers like IE, Chrome and Firefox
  • when running real scenarios (like shopping cart checkout, or scrolling a web page)
Measuring the smoothness or responsiveness of a web page is hard without data to support it. Tools in modern browsers go a long way in helping web developers identify and fix runtime rendering and performance issues. Most of the analysis depends on rules and "wisdom" generated by the web development community over time. Browser-perf is a way to convert those rules into tools that constantly monitor such issues.
As an example, this article is a great checklist of things to watch out for when trying to develop a smooth web page. It lists some of the most common issues that make a web page janky. However, referring back to these rules during every new deployment of the web site is hard. Browser-perf converts the data from a browser into numbers based on the checklist. Some of the metrics derived from the checklist include the number of event handlers that take longer than 1/60th of a second, expensive GC events during animations, average area of paints, number of nodes calculated during layouts, etc.
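As a rough sketch of how such a rule becomes a number, the snippet below counts event handlers that exceed the 60fps frame budget in a list of timeline events. The event shape ({name, duration}) is a simplified stand-in for a real devtools timeline dump, not browser-perf's actual internal format.

```javascript
// Flag long-running event handlers from a simplified timeline dump.
var FRAME_BUDGET_MS = 1000 / 60; // ~16.7ms per frame at 60fps

function slowHandlers(events) {
  return events.filter(function (e) {
    return e.name === 'EventDispatch' && e.duration > FRAME_BUDGET_MS;
  });
}

// Example timeline fragment (illustrative event shape)
var events = [
  { name: 'EventDispatch', duration: 3.2 },
  { name: 'EventDispatch', duration: 25.0 }, // janky handler
  { name: 'GCEvent', duration: 8.1 }
];
console.log(slowHandlers(events).length); // 1
```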

Inside Browser-perf

Browser-perf runs test scenarios using Selenium and collects data from sources like the Chrome Developer Tools timeline or about:tracing. It then maps this raw data into actionable information based on ideas derived from various performance checklists.
The test scenario can range from simple page scrolls to a complex checkout workflow that the user defines. During this scenario, various data points like duration of frame painting, layout cycles, memory growth, etc. are measured and reported.
Browser-perf is extensible, and developers can add ways to collect data from more sources (like xperf in the case of IE) and also generate other types of metrics.

Using Browser-perf in your environment

Browser-perf is node based and can either be used from the command line or included as a node module as part of a build or continuous integration process. It can be run against a local setup of Selenium or simply against "cloudified" browsers. It can also be used for testing Cordova applications on Android 4.4. You can find a lot more information in the wiki pages of the project.
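For the build-integration case, here is a hedged sketch of a small gate that fails the build when reported metrics exceed a budget. The metric names and thresholds are hypothetical; the real names depend on what browser-perf reports in your setup.

```javascript
// Sketch: fail a CI step when performance metrics exceed a budget.
// Metric names below are invented for illustration.
function checkBudget(metrics, budget) {
  var failures = [];
  Object.keys(budget).forEach(function (key) {
    if (metrics[key] > budget[key]) {
      failures.push(key + ': ' + metrics[key] + ' > ' + budget[key]);
    }
  });
  return failures;
}

var failures = checkBudget(
  { meanFrameTime: 19.4, droppedFrames: 3 },   // numbers from a perf run
  { meanFrameTime: 16.7, droppedFrames: 5 }    // the budget
);
if (failures.length) {
  console.error('Perf budget exceeded:\n' + failures.join('\n'));
  // process.exit(1); // uncomment in a real CI step
}
```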
If you would like to try browser-perf, please let me know and I would be happy to help you get started.

Type of Metrics

Some of the metrics collected during a run include:
  • Mean time to render a frame
  • Paint times, and average nodes per layout cycle
  • Memory growth rate
  • Number of layers
  • Aggregation of time for all events on the Chrome timeline
  • Event handlers that take more than 16ms (1/60th of a second)
An exhaustive and growing list of metrics is available here.
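To illustrate one of these numbers: memory growth rate can be estimated as the least-squares slope of heap samples taken during a scenario. The sample format below is an assumption for illustration, not browser-perf's actual output.

```javascript
// Sketch: memory growth rate as the least-squares slope of heap samples.
// samples: [{ t: ms since scenario start, bytes: heap size }]
function growthRate(samples) {
  var n = samples.length, st = 0, sb = 0, stt = 0, stb = 0;
  samples.forEach(function (s) {
    st += s.t; sb += s.bytes; stt += s.t * s.t; stb += s.t * s.bytes;
  });
  return (n * stb - st * sb) / (n * stt - st * st); // bytes per ms
}

var samples = [
  { t: 0, bytes: 1000 },
  { t: 100, bytes: 1500 },
  { t: 200, bytes: 2000 }
];
console.log(growthRate(samples)); // 5 (bytes per ms)
```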

Next Steps

On my todo list are the following items:
  1. Enable AngularJS end-to-end tests to also measure performance. I am working on writing an adapter for Protractor. Please vote on this issue if you think this could help you.
  2. Better documentation :) 
  3. Integrate information that Windows Performance Analyzer provides for IE, to get richer data for IE
  4. Get more information from about:tracing
  5. Help a couple of sites integrate this into their build process to study how they could benefit from such information. Please contact me if you would like to try it out; see my previous blog post for more details.
Watch this space for more updates, and my work on browser performance. 

Sunday, January 5, 2014

Making Frontend Performance testing a part of Continuous Integration - PerfJankie

Real developers ship; the rest of us just introduce bugs into production systems. That is why continuous integration exists: for catching that ever elusive semicolon or that missing comma. When shipping, developers talk about database performance, application scalability and even network optimization after functionality and stability. Front end performance is usually the last in the picture, but thanks to the initiative of tools like YSlow (and then PageSpeed), people do think about it. Though the end user notices browser rendering performance issues first on any deployment, they are usually handled in a very ad hoc manner.
A quote on the internet says
During the old days of front end engineering, people may have attributed jank to a faulty, dirt-ridden mouse scroll wheel, but with touch devices and polished screens, users have started noticing that the finger slides across the screen while the page is trying to catch up.
It is painfully easy to slow down a smooth-feeling webpage in the battle to make it pretty. Would it not be great if browser performance regression testing could also become a part of continuous integration?

PerfJankie is an attempt towards that idea - a way to control how fast a developer slows down their page. Some also call it an attempt to get back at the awesome 'off-by-one-pixel' designers by 'guilting' them about that extra color in the gradient, but this is not true :)

Technical Details

Perfjankie does two main things:

Run Performance test cases

Chromium Telemetry offers a wealth of metrics for testing web pages. Converting it to Node and running the tests on 'cloudified' Saucelabs or BrowserStack browsers just makes the deal sweeter. Perfjankie delegates this to another node module called browser-perf. The metrics calculated today include first paint time, mean frame time when scrolling the page, dropped frames, load time, etc.

Save and display those results

Storing and displaying the data is the boring but important part of the module. It uses a CouchDB database (or hosted CouchDB like Cloudant (on Azure) or Iriscouch) to store the results that browser-perf hands over. This raw data in itself is not interesting. The way to convert this data into actionable information is a CouchApp that resides in the same database. The app displays the metrics in a timeline of commits, singling out the commits that slowed down the web page. It also shows how adding features may have gradually made the page heavy. Here is how the graph looks.
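As a sketch of what a per-commit result document could look like in CouchDB (the field names here are illustrative, not perfjankie's actual schema):

```javascript
// Sketch: a per-commit result document for CouchDB.
// Field names are invented for illustration.
function resultDoc(commit, browser, metrics) {
  return {
    _id: commit + '-' + browser,       // one doc per commit/browser pair
    commit: commit,
    browser: browser,
    timestamp: new Date().toISOString(),
    metrics: metrics                   // raw numbers from browser-perf
  };
}

var doc = resultDoc('a1b2c3d', 'chrome', { meanFrameTime: 14.2 });
// A plain HTTP PUT is enough to persist it:
//   PUT http://localhost:5984/perf/a1b2c3d-chrome  (body: JSON doc)
console.log(doc._id); // "a1b2c3d-chrome"
```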

Perfjankie deploys the couch app the first time the tests are run and can also check and update the app or the couchdb views if the node module is updated.
Note: Of course, the UI and usability of this dashboard are bad, as there were no designers involved - it would be awesome if you could send in a pull request making the dashboard better.

How do I use it?

Integrating this into your projects should be easy, given that this is a node module. If you already use Grunt, you can take a look at this pull request with a Grunt task. If you do not like CouchDB, you could directly use the browser-perf command line interface and store the data in your own database.
If you are still stuck, please contact me and I would be more than happy to help you.
For more information about my previous or future work with rendering performance, watch out for this space, or subscribe to my blog :)

Friday, December 27, 2013

Introducing: Karma Telemetry

Karma is a great test runner and has the capability to launch local browsers without installing separate servers or web drivers the way Selenium does. Karma is a part of the AngularJS tool chain and a lot of websites use it to write unit test cases.
However, most development workflows stop at functional testing. Performance testing usually happens when a site or a component slows down. I thought it would be interesting to include a way to check for performance regressions as a part of this testing workflow. Hence Karma-telemetry.
Karma-telemetry is a karma plugin (like karma-qunit or karma-jasmine) that runs the Chromium Telemetry's smoothness and loading benchmarks for components, widgets and parts of a web page. With web components seeming to play a big part in Angular's future, this framework would also be a way to test how well each component performs and track if any perf regressions are introduced during development.
The idea was inspired by topcoat's benchmarking website. I had done a similar analysis for bootstrap and ReactJS components a few posts ago. With this now being a Karma plugin, it can be run in a NodeJS environment without having to download the entire Chromium Source code.

To add this to your website, follow the instructions in the Readme file.

Technical Details

Chromium Telemetry is a Python based tool that connects to a running Chrome instance using the remote debugging protocol to run various tests. The loading benchmark calculates metrics like the time taken to render a page, DOM load time, etc. It looks at the window.performance.timing API and the Chrome timeline. The smoothness benchmark tries to calculate how janky the page scroll is, and the possible frames per second. Interestingly, both these benchmarks give useful information for other browsers too.
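The loading numbers can be derived from window.performance.timing. The sketch below uses a mock timing object so it runs outside a browser, but the arithmetic matches what the API exposes.

```javascript
// Sketch: deriving load metrics from window.performance.timing.
function loadMetrics(timing) {
  return {
    domContentLoaded: timing.domContentLoadedEventEnd - timing.navigationStart,
    loadTime: timing.loadEventEnd - timing.navigationStart
  };
}

// Mock timing object; the real API reports ms-since-epoch values.
var timing = {
  navigationStart: 1000,
  domContentLoadedEventEnd: 1450,
  loadEventEnd: 1900
};
console.log(loadMetrics(timing)); // { domContentLoaded: 450, loadTime: 900 }
```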
Though Telemetry claims that it can run on other browsers, some of the tests use WebKit specific APIs. As a part of karma-telemetry, I was able to:
  1. Extract telemetry into a separate node module
  2. Remove WebKit specific code so that it runs across all browsers
  3. Report test results in a way karma can consume and possibly save in a graph for regression checks. 

Inside Karma-telemetry

Karma-telemetry is very similar to other karma plugins like karma-qunit and karma-jasmine. The only difference is that it does not have a Grunt build file to construct the final qunit-adapter.js from a qunit-wrapper. Instead, I baked the wrapper into the adapter, which was much easier for testing. When loading, the index file loads karma in addition to a few more javascript files. These javascript files are responsible for performing the scroll and recording the metrics. The test metrics themselves are available as individual test results, and reporters like karma-junit-reporter can be used to save the XML file. The reporter can also be integrated into continuous integration systems like Jenkins, and alerts can be set if changes between commits are drastic.
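A hypothetical karma.conf.js fragment showing how the plugin and the JUnit reporter might be wired together; the exact framework name and option keys may differ from karma-telemetry's actual README:

```javascript
// Hypothetical karma.conf.js fragment - check the karma-telemetry
// README for the real framework name and options.
module.exports = function (config) {
  config.set({
    frameworks: ['telemetry'],          // assumed plugin name
    files: ['src/my-widget.html'],      // the component under test
    reporters: ['progress', 'junit'],   // karma-junit-reporter saves XML
    junitReporter: { outputFile: 'perf-results.xml' },
    browsers: ['Chrome', 'Firefox']
  });
};
```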

Additions to Karma

Karma-telemetry needed some additional features in Karma. I sent in several pull requests to add these features to Karma and the Karma launchers:
  • Ability to run tests in a new window instead of the usual iFrame. This can be leveraged not just by karma-telemetry but also by other frameworks that do not run inside an iFrame - pull
  • Firefox launcher - ability to add custom preferences when launching Firefox - pull
  • Sauce launcher - pass custom options to the sauce launcher and disable popups - pull and pull
  • Chrome - disable the popup blocker by default - pull
Karma tests also do not run on Windows due to path problems. I sent in a pull request fixing these path issues so that Windows developers can run tests on Karma.

Next Steps

I am working on building a node command line utility, based on Selenium, that gets similar results. The idea is to use services like Saucelabs to run the performance tests on commits and generate graphs that would alert us about regressions.

Checking for performance on every commit is hard and I think such tooling to integrate performance testing into continuous integration would help a lot of developers keep their sites fast.

Follow my blog's feed for my experiments on performance.

Tuesday, November 12, 2013

ReactJS Components and Performance

I was at the ReactJS Hackathon in Seattle last month and spent some time learning the ReactJS framework. Though I started building a simple mobile application with the framework, I changed my mind midway and instead thought I could extend the work I was doing with Chromium Telemetry to see how various React components perform as they render.
Applications using ReactJS organically structure any web page into components, and I wanted a way to watch the performance of these components evolve over commits. I was also able to pick up the different versions of this component to see how Christopher was impacting performance as he continued development on JSFiddle. To see the graph on the site, change the database from localhost to and type in the component name as "chart". Comparing the peaks to the diff from the JSFiddle changes shows how certain functions increase the render time.
Given that this was a hackathon about ReactJS, I also reworked the page to be built on ReactJS :)
I won the 'best port of existing app' category and now have an awesome Jawbone Jambox playing my music - thanks facebook :)

I am continuing to work on a way to extract the code from telemetry so that it can be run as a separate node module. I am also starting to integrate it as a Karma framework (like Qunit or Jasmine) to make it run benchmarks on IE, Chrome and Firefox. I have also been able to get results for to power the data for
The repository is here and I plan to continue working on it and hopefully sending pull requests to Karma for some changes that I would need to enable this. Watch the repository for more updates and stay tuned for my next post about how the framework turns out !!

Monday, November 4, 2013

SourceReporter - Seattle AngelHack AppHack 2013

AngelHack organized the AppHack hackathon in Seattle this weekend, on Nov 2 and 3. It was a great event and I partnered with some really talented folks to build SourceReporter - a way to democratize news reporting. We were placed second and got a chance to take home the trophy made with all the soda cans and cereals we could not eat over the weekend :)

The Problem

Twitter is increasingly becoming a great source of news. With cellphones now omnipresent, events and incidents are recorded and show up on YouTube way before they are broadcast on news channels. However, these videos are more like raw footage, and news channels usually have to edit them to bring them on par with the videos they relay.

The Solution

SourceReporter is a web application that can be loaded in mobile browsers so that people can start broadcasting immediately in real time. The "reporters" can enhance the recording using tips that are displayed while the video is being recorded. These tips could range from simple cues or sample interview questions to help the tongue-tied, to ideas on orientation or duration of each recorded segment. CNN also seems to have iReport, and this project brings those ideas together with technology.
The casual viewer is on a desktop and sees mention of some incident on Twitter. Alternatively, the viewer is passing by an incident and is curious to see what is going on. He hits the site to see a list of all active reporters in the area and can zoom into one of the reports. The interactive map has pins that show reports from an area, graded by how fresh the report is. The site cycles over the live reports, and viewers can rate and review individual reporters.
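A minimal sketch of how pins could be graded by freshness; the bucket boundaries here are invented for illustration:

```javascript
// Sketch: grade a report pin by its age. Bucket boundaries are
// invented, not what SourceReporter actually used.
function freshness(reportTime, now) {
  var ageMin = (now - reportTime) / 60000;
  if (ageMin < 5) return 'live';
  if (ageMin < 60) return 'recent';
  return 'stale';
}

var now = Date.now();
console.log(freshness(now - 2 * 60000, now));  // "live"
console.log(freshness(now - 30 * 60000, now)); // "recent"
```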
Monetization could come from advertisements shown between report segments, tips to reporters, or licensing content to news agencies. The reporters would also receive a major part of the tips and licensing fees.





Over the weekend, the entire project was built in Node and used HTML5 features. The real time communication was made possible using WebRTC and a Socket.IO server. This was the first time we had a designer, and the final product did look very professional. The maps used Bing Maps and the CSS was based on Bootstrap. The code, as typical of a hackathon, was copied and hacked together - a big mess, but hey, all the moving pieces work! We also integrated the API to fetch information about a person talking in the interview so that the reporter does not have to type in all the information on the mobile phone.
The source code is available at and also has steps to run it in the README file.

Next Steps

A lot of folks we spoke to did tell us that the idea was interesting and may be of value to them. Apart from the obvious items of fixing the code and polishing the idea, we could look at working with news agencies to help them get the content they want from citizen reporters.