Monday, July 7, 2014

Introducing Perfmonkey

Check how smoothly your website scrolls - perfmonkey.com

Perfmonkey.com is a service to monitor the rendering performance of web pages. Send an HTTP request every time your site is deployed, and Perfmonkey will run a bunch of rendering tests to ensure that your web pages never have performance regressions.
Perfmonkey.com is currently in private beta, and we are gradually adding folks to the service. Until we have a fully scalable infrastructure, I wanted to create a way for developers to run performance tests on single web pages using browser-perf and get an idea of the kind of metrics that can be measured.
Hence, I built a page that can take any website, run the performance tests on it, and show the performance report. The challenge was to build something quickly that could scale well. Since most projects use Github for source code and Travis CI for continuous integration, I based this service on top of Github and Travis.

A runner for browser-perf

The first step was to create the code that can run browser-perf. This repository simply invokes browser-perf and runs the tests specified in a config.json file. Since this runs in the cloud, the selenium server was configured to point to Sauce Labs.
This public repository was also configured to run Travis builds for every commit or pull request. This way, if someone wanted to run the tests on a site, they would just need to change the config file and send a pull request, as sketched below.
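A minimal sketch of what such a runner could look like, assuming a config.json of the shape { "url": ..., "browsers": [...] }; the actual repository's config and wiring may differ:

    // run.js - executed by the Travis build for every commit or pull request.
    // The config.json shape ({ url, browsers }) is an assumption for illustration.
    var browserPerf = require('browser-perf');
    var config = require('./config.json');

    browserPerf(config.url, function(err, metrics) {
        if (err) {
            console.error(err);
            process.exit(1); // a failed run fails the Travis build
        }
        console.log(JSON.stringify(metrics, null, 2)); // printed into the build log
    }, {
        // Selenium points to Sauce Labs; credentials come from Travis env variables
        selenium: 'http://' + process.env.SAUCE_USERNAME + ':' +
            process.env.SAUCE_ACCESS_KEY + '@ondemand.saucelabs.com/wd/hub',
        browsers: config.browsers || ['chrome']
    });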

Automating the trigger

While a pull request triggers the test run, I wanted a simpler user interface. I created a simple web page that uses Github's API to edit a file and then send a pull request. The Github API for changing files and creating pull requests is pretty simple, so everything could be done from the web page.
The only server that I needed to host was an OAuth proxy to authenticate the users. The OAuth tokens are not even saved on the server; they are just cached in the user's browser.
Given that Github APIs support cross-origin requests, I could simply use AJAX for all the requests.
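In essence, the page does something like the following sketch, using Github's v3 contents and pull request endpoints. The repository, branch, and token handling here are illustrative, and creating the branch ref is omitted for brevity:

    // Sketch of the browser-side flow using Github's v3 REST API.
    // 'token' comes from the OAuth proxy; the repo, file and branch
    // names are illustrative.
    function githubRequest(method, path, body, token, cb) {
        var xhr = new XMLHttpRequest();
        xhr.open(method, 'https://api.github.com' + path);
        xhr.setRequestHeader('Authorization', 'token ' + token);
        xhr.onload = function() { cb(JSON.parse(xhr.responseText)); };
        xhr.send(body ? JSON.stringify(body) : null);
    }

    function sendPullRequest(token, newConfig) {
        // 1. Fetch the current file to get the blob sha needed for an update
        githubRequest('GET', '/repos/user/perf-runner/contents/config.json', null, token, function(file) {
            // 2. Commit the edited config.json to a branch
            githubRequest('PUT', '/repos/user/perf-runner/contents/config.json', {
                message: 'Test a new URL',
                content: btoa(newConfig), // the API expects base64 encoded content
                sha: file.sha,
                branch: 'test-run'
            }, token, function() {
                // 3. Open the pull request, which triggers the Travis build
                githubRequest('POST', '/repos/user/perf-runner/pulls', {
                    title: 'Run perf tests',
                    head: 'test-run',
                    base: 'master'
                }, token, function(pr) {
                    console.log('Created pull request: ' + pr.html_url);
                });
            });
        });
    }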

Publishing the results

Github also provides an API to check the build status. This can be combined with the Travis API to fetch the actual build job and the build logs. Like Github, Travis also supports cross-origin requests. All I had to do was pick up the Travis logs and parse the output.
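Roughly, fetching the results looks like the sketch below. It reuses the githubRequest helper from the previous snippet, and the Travis response fields are written from memory, so treat them as indicative rather than exact:

    // Sketch: poll Github's combined status for the PR's head commit, then
    // fetch the build log from Travis. Field names may differ slightly.
    function fetchBuildLog(token, sha, cb) {
        githubRequest('GET', '/repos/user/perf-runner/commits/' + sha + '/status', null, token, function(combined) {
            if (combined.state === 'pending') { // build still running, poll again
                return setTimeout(function() { fetchBuildLog(token, sha, cb); }, 10000);
            }
            // The Travis status' target_url ends with the build id
            var buildId = combined.statuses[0].target_url.split('/').pop();
            var xhr = new XMLHttpRequest();
            xhr.open('GET', 'https://api.travis-ci.org/builds/' + buildId);
            xhr.onload = function() {
                var jobId = JSON.parse(xhr.responseText).job_ids[0];
                var log = new XMLHttpRequest();
                log.open('GET', 'https://api.travis-ci.org/jobs/' + jobId + '/log');
                log.onload = function() { cb(log.responseText); }; // parse metrics from this
                log.send();
            };
            xhr.send();
        });
    }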

Sample test runs

Here are the results for the test run on a slow page with parallax effects, and the test on the same page made faster.

Instead of returning to the Developer tools profiler every time, this tool can be added to your continuous integration system to monitor for performance regressions. You can also get pretty graphs that indicate the commits or deploys that made the site slower. Sign up for the beta of perfmonkey.com and we can help you get started with monitoring rendering performance.

Wednesday, June 25, 2014

GPU Composited CSS and browser-perf

I was at the Velocity Conference 2014, Santa Clara, speaking about "Automating Website Performance Measuring and adding it to continuous integration". The slides are available here.

Just before my presentation, I had the chance to catch up with Ariya Hidayat and we spoke about his work on GPU Composited CSS. His article talks about how CSS computations can be offloaded to the GPU and how overdoing it would simply exhaust the GPU.

He suggested that it would be interesting to see how browser-perf worked on his Codepen examples. It is clear from the developer timeline tools (as described in Ariya's article) that work is transferred to the GPU. The cooler part would be seeing whether picking up this information could be automated.

I wrote a quick script before the presentation to demonstrate this, but the projectors did not agree with my laptop, and I just had to show static slides. Here are the details of my experiments.

Experiment 1: Impact of number of color changing rectangles

This codepen page shows different numbers of rectangles on the screen, each changing color using keyframes. browser-perf recorded the metrics for 1, 10 and 100 boxes. Here is the comparison for each case:

Metric                  One      Ten      Hundred   Units
CompositeLayers         20.00    58.99    171.00    ms
CompositeLayers_count   90       113      136       count
Layers                  7        17       106       count
Paint                   0.00     2.72     277.99    ms
Paint_count             1        20       7332      count

As seen from the table, the number of layers and composite layer operations increases with the number of rectangles on the screen. The number of times paint is called and the total time spent painting also increase.

Experiment 2: Changing Color vs Changing Opacity

As seen from the experiments above, changing the color causes the GPU to redraw the texture. A simpler way to simulate the same effect would be to use two rectangles, slowly showing one while hiding the other. With their opacities changing over time, they show approximately the same effect.
Metric            Color Change   Opacity Change   Unit
CompositeLayers   33.99          10.00            ms
Paint_count       51             5                count
Layers            6              2                count
The same page was used for changing colors, while this codepen was used to try changing the opacity.
The numbers confirm that the number of paints is much lower when only the opacity changes. Similarly, the number of layers and composite layer operations is much higher when the color changes.

To summarize, changing certain properties like background color, borders, or shape makes the GPU redraw the texture, and such changes should be avoided when trying to achieve a smooth web page.

Here is the full gist with all the data and the code to re-run the experiments. Have interesting rendering performance examples and want to measure the metrics for them? Ping me and I would love to help you run browser-perf on your examples.

Tuesday, June 3, 2014

Perfslides - demo app to show that browser-perf works

Performance graphs for a website, generated using browser-perf

I have been working on browser-perf for quite some time now. With browser-perf and perfjankie, front end performance monitoring can be easily added to any continuous integration system. These tools make a lot of interesting metrics available. I always wanted a project that demonstrates how this information can be actionable and associated with code that can be fixed to improve performance.

Perfslides is a project to show how changes in code across individual commits can impact the smoothness and jank of a web page. It also doubles as the slide deck that I used at conferences where I spoke about these tools.
The project has a simple, long, scrollable web page on which the performance tests are run. The perf branch has five commits that are compared. Each commit is isolated in functionality to demonstrate a problem with jank or how it was fixed. These code snippets are from real world projects.
A graph over these five commits clearly shows the relationship between the changed code and its impact on performance.

The Commits
The five commits are as follows:
  1. The first commit is a very basic version of the site with minimal bootstrap styling. The site has unscaled images from the slides and the site itself.
  2. The second commit is a styled version of the page, with parts related to the presentation hidden. This translates to hiding many large pictures that were used for the slide deck. Other styles included aligning the text, resizing images, etc. 
  3. The third commit mocks a feature that introduces performance regressions. This code has a scroll handler that tries to position a bookmark indicating the amount scrolled. Amongst other things, it also tries to save this information in a cookie to reload the page at the same position it was scrolled to. 
  4. This commit fixes the performance regression by delegating all the expensive work to a requestAnimationFrame call, using CSS transforms and caching all jQuery elements.
  5. The final commit shows the impact of adding third-party Javascript to the site. In this case, social sharing buttons and a comments form were added.
Integrating performance tests was as simple as adding a grunt task based on perfjankie, as sketched below. Running grunt perf starts up the web server, connects to local selenium, runs the tests, and saves all the data to a local database. This data is also replicated to a Cloudant server, which shows the impact of each commit on the metrics.
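The grunt wiring amounts to something like this sketch; the option names follow perfjankie's node API as I remember it, so treat them as indicative rather than exact:

    // Gruntfile.js (sketch) - perfjankie option names are indicative, not exact
    module.exports = function(grunt) {
        grunt.registerTask('perf', function() {
            var done = this.async();
            require('perfjankie')({
                url: 'http://localhost:8080', // the page served for the tests
                suite: 'perfslides',
                selenium: 'http://localhost:4444/wd/hub', // local selenium server
                browsers: ['chrome'],
                couch: { // local database, replicated to Cloudant
                    server: 'http://localhost:5984',
                    database: 'perfjankie'
                },
                callback: done // signal grunt when the async run finishes
            });
        });
    };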

The Metrics
Some noticeable and interesting changes to the metrics are:
  • When the scroll handler is added (commit 3), the mean frame time degrades; it improves again when the work is delegated to requestAnimationFrame (commit 4).
  • The addition of CSS3 transforms in commit 4 shows up as an increase in the Layers metric.
  • The average painted area also changes across the commits.
  • Javascript execution time and navigation metrics increase when third-party code is added to the site.
  • The main difference between commits 1 and 2 is resized images; looking at the DecodeImage and ResizeImage metrics clearly indicates this.

Testing Cordova
The built folder was also copied to the www folder of a Cordova project, and similar performance tests were run. The results were very similar.

Using it for your sites
Checking the developer tools for performance regressions after every deploy is very hard. Using tools like browser-perf and perfjankie makes tracking regressions simple. You could simply add a grunt task as shown in this project, connect to hosted browsers on Sauce Labs, and save the data on Cloudant. Alternatively, you could try out this entire setup by signing up for perfmonkey.com and sending a curl request every time a new version of the site is deployed. Perfmonkey.com is a hosted service that does all this for you.
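A deploy script would ping the service with the equivalent of the following; note that perfmonkey's endpoint and payload are not public yet, so the URL and fields in this node sketch are entirely made up:

    // Entirely illustrative: the real perfmonkey endpoint and payload are not public.
    var https = require('https');
    var req = https.request({
        hostname: 'perfmonkey.com',
        path: '/api/runs', // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' }
    }, function(res) {
        console.log('Requested perf run, HTTP ' + res.statusCode);
    });
    req.end(JSON.stringify({ url: 'http://www.mysite.com' })); // hypothetical payload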

Follow the projects on github, or watch this space for more updates.

Friday, May 9, 2014

Front End Ops Conf 2014

I was at the Front End Ops conference on April 24, 2014 in San Francisco and spoke about "Adding Rendering Metrics to Browser Performance".

Here is a video of the session.

The slides for the session are available at http://nparashuram.com/perfslides

Wednesday, March 12, 2014

Browser-perf 0.1.0

Measure the rendering performance of your site with browser-perf

Browser-perf is
  • a node-based tool
  • that measures various performance metrics
  • of a web page or a hybrid (Cordova-based) application
  • on browsers like IE, Chrome and Firefox
  • when running real scenarios (like shopping cart checkout, or scrolling a web page)
Measuring the smoothness or responsiveness of a web page is hard without data to support it. Tools in modern browsers go a long way in helping web developers determine and fix runtime rendering and performance issues. Most of the analysis depends on rules and "wisdom" generated by the web development community over time. Browser-perf is a way to convert those rules into tools that constantly monitor such issues.
As an example, this article is a great checklist of things to watch out for when trying to develop a smooth web page. It lists some of the most common issues that make a web page janky. However, referring back to these rules during every new deployment of the web site is hard. Browser-perf converts the data from a browser into numbers based on such checklists. Some of the metrics derived from the checklist include the number of event handlers that take more than 1/60th of a second, expensive GC events during animations, average area of paints, number of nodes calculated during layouts, etc.
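To make that concrete, here is a toy illustration (not browser-perf's actual implementation) of turning one such rule into a number, given timeline-style records:

    // Toy illustration (not browser-perf's actual code): count event handlers
    // that blow the ~16.7ms frame budget, given timeline-style records.
    var FRAME_BUDGET_MS = 1000 / 60;

    function slowHandlerCount(records) {
        return records.filter(function(record) {
            return record.name === 'FunctionCall' &&
                record.duration > FRAME_BUDGET_MS;
        }).length;
    }

    console.log(slowHandlerCount([
        { name: 'FunctionCall', duration: 5 },  // fits in a frame
        { name: 'FunctionCall', duration: 40 }  // janky handler
    ])); // -> 1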

Inside Browser-perf

Browser-perf runs test scenarios based on selenium and collects data from sources like the chrome developer tools timeline or about:tracing. It then maps this raw data into actionable information, based on ideas derived from various performance checklists.
The test scenario can range from a simple page scroll to a complex checkout workflow that the user defines. During this scenario, various data points like the duration of frame paints, layout cycles, memory growth, etc. are measured and reported.
Browser-perf is extensible: developers can add ways to collect data from more sources (like xperf in the case of IE) and also generate other types of metrics.

Using Browser-perf in your environment

Browser-perf is node based and can either be used from the command line or included as a node module as part of a build or continuous integration process. It can be run against a local setup of selenium or simply against "cloudified" browsers. It can also be used for testing Cordova applications on Android 4.4. You can find a lot more information in the wiki pages of the project.
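As a node module, the basic shape of a call looks like this; check the wiki for the authoritative list of options:

    // Minimal usage as a node module; see the project wiki for all options.
    var browserPerf = require('browser-perf');

    browserPerf('http://yourwebsite.com', function(err, res) {
        // res is an array of metric objects, one per browser
        console.log(err || res);
    }, {
        selenium: 'http://localhost:4444/wd/hub',
        browsers: ['chrome', 'firefox']
    });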
If you would like to try browser-perf out, please do let me know and I would love to see if I can help you out. 

Type of Metrics

Some of the metrics collected when this is run include:
  • Mean time to render a frame
  • Paint times and average nodes per layout cycle
  • Memory growth rate
  • Number of layers
  • Aggregation of time for all events on the Chrome timeline
  • Event handlers that take more than 16ms (1/60th of a second)
An exhaustive and growing list of metrics is available here.

Next Steps

On my todo list are the following items:
  1. Enable AngularJS end-to-end tests to also measure performance. I am working on writing an adapter for Protractor. Please vote on this issue if you think this could help you.
  2. Better documentation :)
  3. Integrate information from the Windows Performance Analyzer to get richer data for IE
  4. Get more information from about:tracing
  5. Help a couple of sites integrate this into their build process, to study how they could benefit from such information. Please contact me if you would like to try it out; see my previous blog post for more details.
Watch this space for more updates on my work on browser performance.