I thought it would be fun to look at the two-year history of this awesome open source project from the perspective of a web developer who now thinks more about performance and jank-free rendering.
Check out the results at http://nparashuram.com/bootstrap-perf/
The results
Since most people are interested in the results, I thought I would put them before the "how this is done" part. Here are some interesting trends that I noticed from the graphs.
- Most components started off as simple CSS rules, but performance seems to drop as they grew more complex.
- The performance drop seems to stop at the 2.3.2 release, and it looks like the latest 3.0.0 release was aimed at making things faster. A lot of components in 3.0.0 are way better than their 2.3.2 counterparts.
- It looks like the developers took a second look at most components and re-wrote or re-architected them to make them better. Most components show a sudden performance increase between 2.1.* and 3.0.0.
- The base CSS has grown bigger over time and, as a result, its performance has dropped.
- Some components (CSS classes) did not exist in the early versions, and the graphs show how performance jumps when CSS classes were introduced for them.
- There are significant performance changes between the RC and final versions of 3.0.0. This could be due to incorrect CSS files that I generated, or was there something different in the final release?
- Some of my data points are completely skewed (nav, for example), and I may have to re-run the tests to get good data.
Testing Methodology
Topcoat is another great CSS framework with an emphasis on performance. The most impressive part of the framework is the integration of performance test suites into daily commits, and the way the developers catch any performance regressions they introduce.
TOPCOAT ROCKS !!!
Bootstrap Versions
Unlike Topcoat, Bootstrap has a much longer history, and collecting historical data over individual commits would be hard. Instead, I decided to pick the commits corresponding to tagged releases and enumerate them. Though the evolution of Bootstrap's build process shows the framework maturing, it was hard to automate builds that variously relied on Make, older versions of some npm components, and finally Grunt. I simply generated the Bootstrap versions manually and checked them into the bootstrap-perf repository, since they would not change anyway.
Generating the test files
The next step is generating the test files. Like Topcoat, I wanted to measure the evolution of each component separately. I wrote out most of the components listed in the example page, and the individual test pages pinned to specific versions of Bootstrap are programmatically generated. Check out the Gruntfile in the repository to see how this is done.
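The generation step can be sketched as a small pure function: one HTML page per component, pinned to one historical build of bootstrap.css. This is only a minimal illustration of the idea (the real work is a Grunt task in the bootstrap-perf repository; the function and path names here are made up for the example):

```javascript
// Sketch: render a standalone test page for one component against
// one pinned, pre-built version of bootstrap.css.
// renderTestPage and the css/<version>/ layout are illustrative,
// not the actual task configuration in the repository.
function renderTestPage(component, version, markup) {
  return [
    '<!DOCTYPE html>',
    '<html><head>',
    '<title>' + component + ' @ ' + version + '</title>',
    // each page pins one historical, manually generated bootstrap.css
    '<link rel="stylesheet" href="css/' + version + '/bootstrap.css">',
    '</head><body>',
    markup, // component markup lifted from the examples page
    '</body></html>'
  ].join('\n');
}

// e.g. a button test page for the 2.3.2 release
var page = renderTestPage('button', '2.3.2',
  '<a class="btn btn-primary" href="#">Primary</a>');
```

Looping this over every (component, version) pair yields the grid of test pages that the rest of the pipeline consumes.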
These test files are also copied over to the Chromium source where the telemetry suites are started.
Collecting the data
Once the files are copied over to the Chromium source code, the tests are run. The telemetry page_set JSONs can run the tests for all the components or for an individual component. Once the tests are run, the results are available as CSV and can be uploaded to the CouchDB server using the web page in the gh-pages branch, or online. The tests were run multiple times, and the raw data is available here.
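A page_set is essentially a JSON file listing the pages telemetry should load. A helper along these lines could assemble one per run; treat the field names and file URLs as assumptions based on the old telemetry page_set format, not the exact files in the repository:

```javascript
// Sketch: build a telemetry-style page_set JSON that lists one
// local test file per component. Field names ("description",
// "pages", "url") follow the old page_set shape and are assumptions.
function buildPageSet(components) {
  return JSON.stringify({
    description: 'bootstrap-perf component test pages',
    pages: components.map(function (c) {
      return { url: 'file:///bootstrap-perf/' + c + '.html' };
    })
  }, null, 2);
}

var pageSet = buildPageSet(['button', 'nav', 'table']);
```

Generating one page_set for all components and one per component is what makes it easy to run either the full suite or a single component's tests.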
Analyzing the data
This CouchDB view simply returns the stats for each component over the different versions. The data is drawn on the web page using jqplot. Also note that while I am saving the data on Iris Couch, I have replicated it on Cloudant to ensure the database does not die out under traffic.
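The shape of such a view is simple: map each result document to a (component, version) key, then let CouchDB's builtin `_stats` reduce produce sum/count/min/max per key. The document field names below are assumptions for illustration (see the actual view in the gh-pages branch), and the snippet simulates the map phase locally rather than talking to a real CouchDB:

```javascript
// Sketch: the map half of a CouchDB view that groups timings by
// (component, version). Document fields (component, version, time)
// are assumed names, not necessarily the real schema.
var mapFn = function (doc, emit) {
  if (doc.component && doc.version) {
    emit([doc.component, doc.version], doc.time);
  }
};
// In the design document this map would be paired with the builtin
// reduce '_stats', which returns { sum, count, min, max, sumsqr }
// per key group -- exactly the stats the graphs need.

// Local simulation over a few fake result documents:
var docs = [
  { component: 'button', version: '2.3.2', time: 14 },
  { component: 'button', version: '3.0.0', time: 9 },
  { component: 'button', version: '3.0.0', time: 11 }
];
var rows = [];
docs.forEach(function (d) {
  mapFn(d, function (key, value) {
    rows.push({ key: key, value: value });
  });
});
```

Querying the view with `group=true` would then hand the web page one stats object per component/version pair, ready to plot.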
Conclusion
Two years may seem like a long time in web-scale time, but with the tools available today, creating jank-free sites is easier than ever. I am also working on a version that could use xperf to collect similar data for IE, for both Topcoat and Bootstrap.
Side note: This is an independent analysis that I did over a weekend, and it is not authoritative. However, it would be fun to see such a performance suite become a part of the official Bootstrap continuous integration process.