Introducing: Karma Telemetry


Karma is a great test runner and can launch local browsers without installing separate servers or web drivers the way Selenium does. Karma is part of the AngularJS tool chain and a lot of websites use it to write their unit test cases.
However, most development workflows stop at functional testing. Performance testing usually happens when a site or a component slows down. I thought it would be interesting to include a way to check for performance regressions as a part of this testing workflow. Hence Karma-telemetry.
Karma-telemetry is a karma plugin (like karma-qunit or karma-jasmine) that runs the Chromium Telemetry's smoothness and loading benchmarks for components, widgets and parts of a web page. With web components seeming to play a big part in Angular's future, this framework would also be a way to test how well each component performs and track if any perf regressions are introduced during development.
The idea was inspired by topcoat's benchmarking website. I had done a similar analysis for bootstrap and ReactJS components a few posts ago. With this now being a Karma plugin, it can be run in a NodeJS environment without having to download the entire Chromium Source code.

To add this to your website, follow the instructions in the Readme file.

Technical Details

Chromium telemetry is a Python-based tool that connects to a running Chrome instance using the remote debugging protocol to run various tests. The loading benchmark calculates metrics like the time taken to render a page, DOM load time, etc. It looks at the window.performance.timing API and the Chrome timeline. The smoothness benchmark tries to calculate how janky the page scroll is, and the possible frames per second. Interestingly, both of these benchmarks also give useful information for other browsers.
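As a rough illustration (this is plain navigation timing arithmetic, not telemetry's actual code), the loading numbers come from differences between the timestamps that window.performance.timing exposes:

    // Simplified sketch of the kind of metrics a loading benchmark can derive
    // from window.performance.timing (not telemetry's actual implementation).
    var t = window.performance.timing;
    var metrics = {
      // time until the DOM was parsed and DOMContentLoaded finished
      domContentLoaded: t.domContentLoadedEventEnd - t.navigationStart,
      // time until the load event finished
      load: t.loadEventEnd - t.navigationStart
    };
    console.log(metrics);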
Though telemetry claims that it can run on other browsers, some of the tests use WebKit-specific APIs. As a part of karma-telemetry, I was able to:
  1. Extract telemetry into a separate node module
  2. Remove WebKit-specific code so that it runs across all browsers
  3. Report test results in a way karma can consume and possibly save in a graph for regression checks. 

Inside Karma-telemetry

Karma-telemetry is very similar to other karma plugins like karma-qunit and karma-jasmine. The only difference is that it does not have a Grunt build file to construct the final qunit-adapter.js from a qunit-wrapper; instead, I baked the wrapper into the adapter, which was much easier for testing. When loading, the index file loads karma in addition to a few more JavaScript files. These files are responsible for performing the scroll and recording the metrics. The metrics themselves are available as individual test results, and reporters like karma-junit-reporter can be used to save them to an XML file. The reporter can also be integrated into continuous integration systems like Jenkins, and alerts can be set if changes between commits are drastic.
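Wiring this up is regular karma configuration. A minimal sketch - the exact framework name, the way pages under test are listed, and the reporter options should be taken from the respective READMEs, not from here:

    // karma.conf.js - minimal sketch; the values below are assumptions, see the READMEs
    module.exports = function (config) {
      config.set({
        frameworks: ['telemetry'],                 // karma-telemetry, like 'qunit' or 'jasmine'
        browsers: ['Chrome', 'Firefox'],
        reporters: ['progress', 'junit'],          // karma-junit-reporter writes the XML file
        junitReporter: { outputFile: 'perf-results.xml' }
      });
    };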

Additions to Karma

Karma-telemetry needed some additional features in Karma, so I sent in several pull requests to add these features to Karma and the Karma launchers:
  • Ability to run tests in a new window instead of the usual iFrame. This can be leveraged not just by karma-telemetry but also by other frameworks that do not run inside an iFrame (see the configuration sketch at the end of this section) - pull
  • Firefox Launcher - ability to add custom preferences when launching Firefox - pull
  • Sauce Launcher - pass custom options to the sauce launcher and disable popups - pull and pull
  • Chrome - disable popup blocker by default - pull
Karma tests also do not run on Windows due to path problems, so I sent in a pull request fixing the path issues so that Windows developers can run tests on Karma.
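The new-window feature surfaces as a client option in the karma configuration. A small sketch, assuming the option kept the client.useIframe name it has in later Karma releases:

    // karma.conf.js - opt out of the iframe and run each test page in a new window
    module.exports = function (config) {
      config.set({
        client: {
          useIframe: false   // assumption: the option name as it landed in Karma
        }
      });
    };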

Next Steps

I am working on building a node command line utility, based on selenium, to get similar results. The idea is to use services like saucelabs to run the performance tests on commits and generate graphs that would alert us to regressions.

Checking for performance on every commit is hard and I think such tooling to integrate performance testing into continuous integration would help a lot of developers keep their sites fast.

Follow my blog's feed for my experiments on performance.

ReactJS Components and Performance


I was at the ReactJS Hackathon in Seattle last month and spent some time learning the ReactJS framework. Though I started building a simple mobile application with the framework, I changed my mind midway and instead thought I could extend the work I was doing with chromium telemetry to see how various React components perform as they are rendered.
Applications using ReactJS organically structure any web page into components, and I wanted a way to see the performance of these components evolve over commits. I was also able to pick up the different versions of this component to see how Christopher was impacting performance as he continued development on JSFiddle. To see the graph on the site, change the database from localhost to axemclion.iriscouch.com and type in the component name as "chart". Comparing the peaks to the diff from the JSFiddle changes shows how certain functions increase the render time.
Given that this was a hackathon about ReactJS, I also reworked the page to be built on ReactJS :)
I won the 'best port of existing app' category and now have an awesome Jawbone Jambox playing my music - thanks facebook :)

I am continuing to work on a way to extract the code from telemetry so that it can be run as a separate node module. I am also starting to integrate it as a Karma framework (like Qunit or Jasmine) to make it run benchmarks on IE, Chrome and Firefox. I have also been able to get results for topcoat.io to power the data for http://bench.topcoat.io.
The repository is here and I plan to continue working on it and hopefully send pull requests to Karma for some changes that I would need to enable this. Watch the repository for more updates and stay tuned for my next post about how the framework turns out!

SourceReporter - Seattle AngelHack AppHack 2013

AngelHack organized the AppHack Hackathon in Seattle this weekend on Nov 2 and 3. It was a great event and I partnered with some really talented folks to build SourceReporter - a way to democratize news reporting. We were placed second and got a chance to take home the trophy, made with all the soda cans and cereal we could not eat over the weekend :)

The Problem

Twitter is increasingly becoming a great source of news. With cellphones now omnipresent, events and incidents are also recorded and show up on YouTube way before they are broadcast on news channels. However, these videos are more like raw footage, and news channels usually have to edit them to bring them on par with the videos they normally relay.

The Solution

SourceReporter is a web application that can be loaded in a mobile browser so that people can start broadcasting immediately, in real time. The "reporters" can enhance the recording using tips that are displayed while the video is being recorded. These tips could range from simple cues or sample interview questions to help the tongue-tied, to ideas on the orientation or duration of each recorded segment. CNN also seems to have iReport, and this project brings those ideas together with technology.
The casual viewer is on a desktop and sees some incident mentioned on Twitter. Alternatively, the viewer is passing by an incident and is curious to see what is going on. He hits the site to see a list of all active reporters in the area and can zoom into one of the reports. The interactive map has pins that show reports from an area, graded by how fresh the report is. The site cycles over the live reports, and viewers can rate and review individual reporters.
Monetization could come from showing advertisements between report segments, tips to reporters, or licensing content to news agencies. The reporters would also receive a major part of the tips and licensing fees.

Demo


Technology

Over the weekend, the entire project was built in Node and used HTML5 features. The real-time communication was made possible using WebRTC and a Socket.IO server. This was the first time we had a designer, and the final product did look very professional. The maps used Bing Maps and the CSS was based on Bootstrap. The code, as is typical of a hackathon, was copied and hacked together - a big mess, but hey, all the moving pieces work! We also integrated the ark.com API to fetch information about the person talking in the interview so that the reporter does not have to type in all the information on the mobile phone.
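For a flavor of the real-time piece, the Socket.IO relay boils down to something like the sketch below. The event names and payloads here are hypothetical - the actual hackathon code is in the repository linked below:

    // Toy sketch of a Socket.IO relay between reporters and viewers
    // (event names are made up, not the actual SourceReporter code).
    var io = require('socket.io').listen(3000);

    io.sockets.on('connection', function (socket) {
      // a reporter announces a new live report along with its location
      socket.on('report:start', function (report) {
        socket.broadcast.emit('report:new', report);   // push the pin to every viewer's map
      });
      // viewers rate a reporter; relay the rating to everyone else
      socket.on('report:rate', function (rating) {
        socket.broadcast.emit('report:rated', rating);
      });
    });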
The source code is available at https://github.com/hverespej/MobileNewsNowReporter and also has steps to run it in the README file.

Next Steps

A lot of folks we spoke to did tell us that the idea was interesting and may be of value to them. Apart from the obvious items of fixing the code and polishing the idea, we could look at working with news agencies to help them get the content they want from citizen reporters.

Bootstrap - Evolution over two years

Last week, Bootstrap 3.0.0 was released. It has been almost two years since Bootstrap appeared in the wild, helping web developers hide their imperfect aesthetic talents. Personally, it has saved me hours at hackathons trying to design a user interface that looks presentable.
I thought it would be fun to look at the two year history of this awesome open source project from the perspective of a web developer who thinks more about performance and jankfree-ness now. 

The results

Since most people are interested in the results, I thought I would put them before the "how this is done" part. Here are some interesting trends that I noticed from the graphs.
  • Most components started off as simple CSS rules, but as they got more complex, the performance seems to drop.
  • The performance drop seems to stop at the 2.3.2 release, and it looks like the latest 3.0.0 release was aimed at making things faster. A lot of components in 3.0.0 are way better than their 2.3.2 counterparts.
  • Looks like the developers have taken a second look at most components and tried to re-write or re-architect them to make them better. Most components show a sudden performance increase between 2.1.* and 3.0.0.
  • The base CSS has grown bigger over time and hence the performance has reduced.
  • Some components (CSS classes) did not exist in the early versions, and the graphs show how the performance increases when CSS classes were introduced for them.
  • There are significant performance changes between the RC and the final versions of 3.0.0. This could be due to incorrect CSS files I generated, or was there something different in the final release?
  • Some of my data points are completely skewed (nav, for example), and I may have to re-run the tests to get good data.
I am not a statistician and my comprehension of the results could be wrong. If you think some of my interpretations are crazy, please do drop your opinions in the comments. If you are curious about how this data was generated, read on.

Testing Methodology

Topcoat is another great CSS framework with an emphasis on performance. The most impressive part of the framework is the integration of the performance test suites with daily commits, and the way the developers discover any performance regressions introduced.
 TOPCOAT ROCKS !!!
Inspired by this system, I decided to use telemetry from the Chromium repository to run similar tests over the various versions of Bootstrap. 

Bootstrap Versions

Unlike Topcoat, bootstrap has a much longer history and collecting historical data over commits would be hard. Instead, I decided to pick the commits that correspond to tagged releases and enumerate them. Though the evolution of the build process for Bootstrap shows the framework maturing, it was hard to automate builds that moved from Make, through older versions of some npm components, and finally to Grunt. I just generated the bootstrap versions manually and checked them into the bootstrap-perf repository, as they would not change anyway.

Generating the test files

The next step in testing is generating the test files. Like Topcoat, I wanted to measure the evolution of each component separately. Test pages for most of the components listed in the example page were written, and the individual test pages with specific versions of bootstrap are programmatically generated. Check out the Grunt file in the repository to see how this is done.
These test files are also copied over to the Chromium source where the telemetry suites are started.
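A condensed sketch of the generation step (file paths, component names and the template are made up here - the real task lives in the bootstrap-perf Gruntfile):

    // Sketch of a Grunt task that writes one test page per component per version
    // (paths and names are placeholders; see the repository's Gruntfile for the real thing).
    module.exports = function (grunt) {
      grunt.registerTask('generate', function () {
        var versions = ['2.3.2', '3.0.0'];
        var components = ['buttons', 'navbar', 'forms'];
        var template = grunt.file.read('templates/component.html');

        versions.forEach(function (version) {
          components.forEach(function (component) {
            var html = grunt.template.process(template, {
              data: { version: version, component: component }
            });
            grunt.file.write('pages/' + version + '/' + component + '.html', html);
          });
        });
      });
    };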

Collecting the data

Once the files are copied over to the Chromium source code, the tests are run. The telemetry page_set JSONs can run the tests for all the components, or for an individual component. Once the tests are run, the results are available as CSV and can be uploaded to the CouchDB server using the web page in the gh-pages branch, or online. The tests were run multiple times and the raw data is available here.

Analyzing the data

This couch view simply returns the stats for each component over different versions. This data is drawn on the web page using jqplot. Also note that I am saving the data on iriscouch, but to ensure that the database does not die out due to traffic, I have replicated the data on cloudant. 
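Roughly, the view is a map function keyed by component and version (a sketch - the field names below are assumptions about the document shape, not the exact view in the database):

    // Sketch of a CouchDB map function for the stats view; field names are assumed.
    function (doc) {
      if (doc.component && doc.version) {
        emit([doc.component, doc.version], {
          mean_frame_time: doc.mean_frame_time,
          load_time: doc.load_time
        });
      }
    }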

Conclusion

Two years may seem like a long time in web-scale time, but with the tools available today, creating jankfree sites is easier. I am also working on a version that could use xperf to get similar data for IE, both for topcoat and bootstrap.
Side note: This is an independent analysis that I did over a weekend and is not authoritative. However, it would be fun to see such a performance suite become a part of the official bootstrap continuous integration process.

Parking Drone - Battlehack

I was at Battlehack, a hackathon organized by Paypal on 10 and 11 Aug, 2013. I teamed up with Hakon Verespej and hacked the ARDrone to build an interesting project.

The Pitch

The hackathon was themed around "local" and "neighborhood" and the first problem we thought of was finding parking. We thought that it would be fun to use the AR Drone to fly around and find empty parking spots and hold it for you till you get there.
The AR Drone is programmable and would be launched using a phone app. The drone would fly around to the spot, identify empty spots and report back their location.


The Execution

The AR Drone is programmable and we decided to use the node-ar-drone module to control it. The phone app is a pure HTML app that sends a message to a node server to launch the drone. The node server starts the drone and moves it around.
The phone app constantly pings the server for the latest status and also allows for drone to be called back.
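The drone side of that flow is only a few node-ar-drone calls. A minimal sketch (the flight path and timings here are made up, not our actual hackathon flight plan):

    // Minimal node-ar-drone sketch; the movements and timings are placeholders.
    var arDrone = require('ar-drone');
    var client  = arDrone.createClient();

    client.takeoff();
    client
      .after(5000, function () { this.front(0.2); })   // cruise forward over the spots
      .after(5000, function () { this.stop(); })
      .after(2000, function () { this.land(); });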

On the server, OpenCV is used to pick up the camera images and analyze them to detect the "emptiness" of a parking spot. In the interest of time, we just look for Canny lines to identify empty spots. For the demo, the drone flies lower when it identifies the presence of an object under it. The source code of all that we managed in those 20 hours is available on GitHub.
We also had a small test page that let us control the drone manually and showed us the results from running the OpenCV filters.

Problems

The ARDrone 2.0 with GPS had not been released at the time of Battlehack - implementing the "search for a parking spot" was not possible without the ability to tell the drone to go to a certain location.
Getting the list of parking spots was also hard and we could not find a database that could give us such a geo-tagged map.
The biggest problem however was the stability of the drone itself. Given that we were demonstrating this indoor, it was hard to ensure that the drone would just hover at one place and not drift. The drone could not be controlled as precisely as we wanted.

The Presentation

After almost 20 hours of non-stop coding, here is what we ended up with.



On the whole, it was a fun event and I was able to work on something interesting. Hakon was a great teammate and I was amazed by his dedication to get it to work; I would love to work with him on another hackathon.

At Phonegap Day


At PhoneGap Day in Portland, Oregon

Travis, Secure Environment Variables and Continuous Integration

This is one of the long titles for my blog posts, partially because I was unable to think of something catchy, but also because this post deals with all the things mentioned in it.
Some of my projects on GitHub use Sauce Labs as a part of the continuous integration test suite that runs on Travis-CI every time I push to those projects. Sauce Labs is a service that offers testing across multiple browsers using virtual machines, and it requires a secret key to access these machines.
Since these keys have to be secret, they are defined as secure environment variables in the travis configuration file. As the documentation in Travis suggests, they are not available during pull requests (if they were, a malicious user could simply send a pull request that echoes the value - and travis would automatically run that pull request and reveal the secret).
With this security comes the inconvenience of not being able to automatically run the tests for a pull request. The two options we have in this case are to either manually download the pull request and run it on a local machine with the proper authorization, or merge it and then try running it. The first case is cumbersome and I usually fall back to the second. There have been pull requests that break the tests, and since they are already in master by then, reverting them is a pain.
I noticed that this has been happening a good number of times, and I finally decided to write a script for it. The shell script does the following:
  1. All pull requests should be made to a new branch called incoming-pr. The master branch is holy and should not be broken.
  2. When a pull request is made, basic sanity tests that do not require secure variables are executed.
  3. If the basic sanity tests pass, the pull request is accepted into the incoming-pr branch.
  4. Travis now builds the incoming-pr branch with the full test suite.
  5. If the tests on incoming-pr succeed, the branch is merged into master.
  6. If the tests fail, a backup of the branch is created under a new name.
  7. In both cases, the incoming-pr branch is deleted and created again to be level with master. This ensures that we are ready for the next pull request.
Some interesting quirks that I noticed while implementing this:
  • In step 7, when I create a new incoming-pr branch, Travis tries to build it again, and no matter what the result, the branch is deleted and created again. This leads to a loop; to break it, we run the scripts only when master and incoming-pr have differences.
  • I was initially using a temporary GitHub account to perform pushes, but nschonni indicated here that instead of using passwords, we could use GitHub personal tokens.
  • Git push always seems to print the updated branch. In this case, we need to use the quiet flag when pushing and redirect the error stream to /dev/null so that no output is printed.
Here is the Travis shell script that does all this. Before, Merge and Revert are passed as arguments to this script during before_script, after_success and after_failure respectively. 
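The actual implementation is that shell script; the same flow, sketched in Node just to make the steps concrete (the branch handling, remotes and redirection here are simplified assumptions, not the real script):

    // Node sketch mirroring the shell script's flow - not the real script.
    var execSync = require('child_process').execSync;
    var run = function (cmd) { execSync(cmd, { stdio: 'inherit' }); };

    var action = process.argv[2]; // 'Merge' on after_success, 'Revert' on after_failure

    // Only act when incoming-pr actually differs from master;
    // this breaks the rebuild loop caused by recreating the branch.
    var diff = execSync('git diff --stat master origin/incoming-pr').toString().trim();
    if (!diff) { process.exit(0); }

    if (action === 'Merge') {
      run('git checkout master');
      run('git merge origin/incoming-pr');
      run('git push --quiet origin master 2> /dev/null');   // quiet + no stderr, so nothing leaks
    } else if (action === 'Revert') {
      // keep a backup of the failing branch under a new name
      run('git push --quiet origin origin/incoming-pr:refs/heads/incoming-pr-failed 2> /dev/null');
    }

    // In both cases, delete and recreate incoming-pr so it is level with master again.
    run('git push --quiet origin :incoming-pr 2> /dev/null');
    run('git push --quiet origin master:refs/heads/incoming-pr 2> /dev/null');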

IndexedDB - Performance Comparisons: Part 2

See IndexedDB Performance Comparisons here.

In my previous post, I had written about some comparisons of various IndexedDB operations. Here is a compilation of the most common cases. Note that there are comments on each test case, and you can look at a test case and leave your thoughts (and interesting discoveries) right on that page.

  • Comparing keys is pretty much comparing different objects - numbers being the fastest and nested arrays being the slowest. [link]
  • Interestingly, in Firefox, specifying a version in indexedDB.open() is faster than not specifying a version. I guess they look up the database meta-data when it is not specified. [link]
  • The presence (or absence) of a keypath and auto-increment does not change the speed of the add operation. This is interesting, as I always thought that auto-increment or a keypath would slow down the operations since additional computation would be required. [link]
  • Adding more stores to a transaction scope does slow down read operations. However, since reads do not block each other, should adding more stores into a read transaction really matter? [link]
  • Adding more stores to a write transaction does slow it down. However, in the case of Firefox, all writes in one transaction are actually faster!! [link]
  • In Chrome, calling put is always faster than calling add. On other browsers, add is faster!! [link] No idea why.
  • Grouping all read operations in a single transaction is faster than having multiple transactions. In IE, grouping transactions is definitely faster - is this not supposed to be the general case, given that read transactions are non-blocking? [link]
  • Multiple write transactions, however, do slow things down as expected - due to contention issues. [link]
  • When using cursors, instead of reading or writing with a single cursor, opening multiple cursors is way faster. Even in the case of writes, waiting for one cursor to write sequentially is slower than multiple cursors waiting and then writing (see the sketch after this list). [link]
  • Adding indexes does not seem to slow down reads. What about multi-entry indexes where you would have to fill the index table - should that not be slower? [link]
  • Iterating with a cursor over the primary key or over an index is almost equally fast. [link]
  • Getting just the keyCursor on an index is faster than getting the entire objectCursor. [link]
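As an example, the multiple-cursor pattern corresponds to splitting the key space and opening a cursor per range inside one transaction, roughly like this sketch (the store name and key ranges are made up, and db is an already-open IDBDatabase):

    // Sketch: read an 'items' store with several cursors over disjoint key ranges
    // inside a single transaction; store name and ranges are placeholders.
    var tx = db.transaction('items', 'readonly');
    var store = tx.objectStore('items');
    var ranges = [IDBKeyRange.bound(0, 499), IDBKeyRange.bound(500, 999)];

    ranges.forEach(function (range) {
      store.openCursor(range).onsuccess = function (e) {
        var cursor = e.target.result;
        if (cursor) {
          // process cursor.value here
          cursor.continue();
        }
      };
    });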
Please do send pull requests to https://github.com/axemclion/IndexedDBShim. You can also send me suggestions about the typical scenarios that you would like to test, and I could codify them too.

IndexedDB - Performance Comparisons: Part 1

See IndexedDB Performance Comparisons here

In my previous post, I had written about the IndexedDB performance test cases that I had been working on. With the infrastructure set up, this post talks about some of the findings from the test cases. I plan to add more cases and this is the first part in a series.

Note: This post is NOT about comparing browsers, or comparing one storage technology against another. Instead, the focus here is to pick out common IndexedDB patterns and see how a developer may have to change their code for best performance. Each of the test cases has a comments section to discuss the outcome.

General Tests
The first set of tests is about the performance of comparisons based on the type of keys. The results are as expected, with integers and longs being the fastest and arrays being the slowest. Note that nested arrays are even worse.
Opening a database with and without a version seems to be almost the same - except in Firefox, where specifying a version makes it around 10% faster. This could be due to the extra time taken for looking up the table meta-data stored in a different table?
Similarly, the difference in a write operation due to the presence (or absence) of keypaths and auto-increments is not very pronounced.

Transactions and batching up read/write requests
In theory, read transactions can occur in parallel, while write transactions wait for other write transactions to finish. The tests however seem to tell a different story. If every read request is placed in its own transaction, it is much slower than queuing the requests. It looks like the time taken to create a transaction outweighs the time taken for a request to be queued and executed. The results for writing data are as expected. Note that grouping read transactions is probably a better way, instead of queuing all reads in a single transaction.
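In code, the two read patterns compared here look roughly like this (a sketch against a hypothetical 'items' store, with db an already-open database):

    // One transaction per read - every get pays the transaction setup cost.
    var keys = [1, 2, 3];
    keys.forEach(function (key) {
      db.transaction('items', 'readonly').objectStore('items').get(key);
    });

    // All reads queued on a single transaction - usually faster in these tests.
    var store = db.transaction('items', 'readonly').objectStore('items');
    keys.forEach(function (key) {
      store.get(key);
    });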

Object Stores and Transactions.
Does including multiple object stores in a transaction scope change things? The time taken to create a transaction grows with the number of stores in the transaction's scope. Also, read operations in transactions with fewer object stores in scope are faster. This is even more pronounced in write transactions, where the contention for stores increases when more stores are included in the transaction scope.

This is just the first part in the series of IndexedDB analysis. Watch this space for more tests and more results. To make the tests statistically significant, please run the tests located at http://nparashuram.com/IndexedDB/perf. Also add your comments at the end of each test, pointing out any significant surprises you might encounter.

IndexedDB - Performance Comparisons

IndexedDB Performance Comparisons - Link

Over the last month, I have been playing with various IndexedDB operations, trying to figure out performance best practices and surprises. I introduced the test harness at the HTML5 Dev Conference in San Francisco. This post talks about the way these test cases were written and the interesting observations made while writing them. I hope to discuss the actual results of the tests in a followup post.
I started the test cases with JSPerf so that I could concentrate only on writing code, without having to worry about measuring and displaying the results (JSPerf would take care of that for me). JSPerf internally uses Benchmark.js, which takes care of running my test cases a statistically significant number of times to give accurate results.
However, there were some problems with continuing to use JSPerf.
  1. The test setup for JSPerf is not asynchronous. For IndexedDB, I wanted to delete the database between each run, or at least before the entire suite started. In the case of Benchmark.js, the setup needs to be synchronous as it is added inline with the test case itself. Hence, I had to add code to the 'Preparation' HTML code, where I hid the 'Run Tests' button till the database was deleted and any seed data added. Not the best way to run tests.
  2. I was having problems with the versioning system. I was not able to figure out a way to update the code and ensure that the latest version of my cases showed up directly on the URL.
I thought it would be simpler to roll out my own version of JSPerf, given that I had been looking at the internals of Benchmark.js to figure out async setup. Here is the bootstrap based theme that lists the various test cases, all on one page, and runs each test with support for asynchronous setup. I also added additional visual details like collapsible tests, progress bars and more details about the tests themselves.
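With asynchronous setup handled by the harness, an individual case ends up as a deferred Benchmark.js test, roughly like this sketch (the store name and the db handle are placeholders seeded by the harness):

    // Sketch of a deferred Benchmark.js case; db is an already-open IDBDatabase
    // and 'items' is a placeholder store seeded before the suite starts.
    var suite = new Benchmark.Suite();

    suite.add('objectStore.get', {
      defer: true,
      fn: function (deferred) {
        var request = db.transaction('items', 'readonly')
                        .objectStore('items')
                        .get(42);
        request.onsuccess = function () { deferred.resolve(); };
      }
    });

    suite.on('complete', function () {
      console.log(this.map(String));   // per-test summary strings
    }).run({ async: true });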
The source code for the test cases is also checked into github. Watch this space for discussions about the test results, soon to follow.

IndexedDB updates

Over the weekend, I was able to update the IndexedDB tutorials and the jquery-indexeddb plugin. The IndexedDB site at http://axemclion.github.com/IndexedDB was changed from custom styling to the default but much better bootstrap theme. I made the links easier to access, and hopefully also organized the content better.
It has been almost 2 years since I first started working on the IndexedDB examples. The implementations in Firefox, IE and Chrome were different in many ways and it was simpler to have a version for each browser. With the specification becoming more stable, the browser implementations have also become more uniform. I was able to combine the examples into one set (available here) that can run on Chrome, Firefox and IE. The older versions are now archived.
I also had to change Trialtool to accommodate the fact that accessing some properties of IndexedDB requests during certain operations now throws exceptions in Firefox and IE. For example, accessing request.error when an operation actually succeeds throws an error. Since I was printing the entire request object so that it could be inspected, all examples were throwing exceptions.
Chrome has finally removed the setVersion method and now supports the onupgradeneeded method. This change is now reflected in the jquery-indexeddb plugin. The transaction modes and the cursor directions are also strings now.
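For reference, the pattern that replaces the old setVersion dance looks like this generic sketch (the database and store names are just placeholders, not the plugin's internal code):

    // Generic onupgradeneeded pattern; names are placeholders.
    var request = indexedDB.open('library', 2);

    request.onupgradeneeded = function (e) {
      var db = e.target.result;
      if (!db.objectStoreNames.contains('books')) {
        db.createObjectStore('books', { keyPath: 'isbn' });
      }
    };

    request.onsuccess = function (e) {
      var db = e.target.result;
      // transaction modes (and cursor directions) are plain strings now
      db.transaction('books', 'readwrite')
        .objectStore('books')
        .put({ isbn: 1, title: 'A book' });
    };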
Note that window.indexedDB is now immutable in Chrome and hence Chrome cannot force the IndexedDB polyfill to run. The polyfill can only run on Opera and Safari and I hope it is rendered obsolete when the browsers implement IndexedDB.

Automatic NPM publish via Travis

In my last post, I had written about updating the grunt plugins that I use to the latest version of Grunt. As this issue suggests, I try to keep updating my code, but almost always forget to update the version on the npm registry.
Since npm itself is a package that can be used programmatically, I decided to automate the process so that a new version is published to the npm registry every time a new package version is built successfully in the continuous integration system. Technically, it just had to be an npm publish from Travis. Note that since Travis does not preserve state between runs, there would be no ~/.npmrc and I would have to add the user every time. Since travis supports adding secure environment variables, I could use those to pass the values needed by adduser.
I started looking at the node-publish package as it was the easiest way to push code from a CI system. Everything seemed to work well on my local system, but publishing from Travis failed. After digging through the code of npm, npm-commands and npm-registry, I noticed that npm.registry.addUser only authenticates the user and sets the user name and password in the config. It does not set the email, which is required by the publish method. I have submitted a pull request to node-publish that adds the email to npm.config, and this fixes publish so it runs on Travis too.
Till the pull request gets accepted, I am using the raw npm methods to authenticate and publish to the registry. It does not have the version checking logic of npm-publish, but I can live with that, as versions are not overwritten and all versions that do not exist in the npm registry (new or old) do get updated. Here is the simple code to update npm packages from travis, without the hassles of ~/.npmrc.
Note that the credentials are supplied by the environment, and as mentioned earlier, should be encrypted.
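The gist of that code is something like the sketch below. The exact names and signatures of the programmatic npm calls varied between npm versions, so treat them as assumptions rather than a drop-in script:

    // Sketch: publish from CI using npm programmatically; credentials come from
    // (encrypted) environment variables. Call names/signatures are assumptions
    // for that era's npm API, not a verified script.
    var npm = require('npm');

    npm.load({}, function (err) {
      if (err) throw err;
      npm.registry.adduser(process.env.NPM_USERNAME, process.env.NPM_PASSWORD,
          process.env.NPM_EMAIL, function (err) {
        if (err) throw err;
        // adduser only sets username/password; publish also needs the email in the config
        npm.config.set('email', process.env.NPM_EMAIL);
        npm.commands.publish([], function (err) {
          if (err) throw err;
          console.log('Published to the npm registry');
        });
      });
    });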

Grunt-ification 0.4

Grunt just released a new 0.4 version and there were many changes from its previous versions. The two plugins I maintain also needed to be migrated, and I just finished porting them to the newer version. This also enabled me to port my other projects that depend on them to the latest version of Grunt.

Porting the plugins
Moving the plugins to the newer version of Grunt was the easy task - the only API change that affected them was grunt.utils becoming grunt.util. I was able to upgrade grunt-node-qunit and grunt-saucelabs. One of my projects depends on grunt-jsmin-sourcemap, and it looks like there are more things to update there (references to this.file, etc.), so that may take a little longer.

Porting the projects
Updating the projects themselves was more work. With all default plugins now moving to grunt-contrib, I had to manually change package.json to reference the individual grunt-contrib-* plugins. Including grunt-contrib as a whole is an option, but it is pretty large and takes a long time to install. Once in package.json, each plugin now has to be loaded using grunt.loadNpmTasks.

Thoughts about Grunt 0.4
In general, I think Grunt 0.4 is a great step forward. The breaking changes in the API are bad, but in my opinion, necessary. Grunt is now being increasingly used in many major products and hence should move to a 1.0 soon.

Separating grunt into grunt, grunt-cli and grunt-init was the biggest change. Grunt-init will soon be replaced by Yeoman, and that is consistent with what most developers start with for scaffolding projects. However, I am still trying to understand the logic behind separating grunt-cli and grunt. I do realize that grunt-cli has command completion too, but in almost all cases, users would install both of them together anyway. The grunt-cli is good for developer machines where grunt is run from the shell, but for CI, I would prefer running something like npm test instead of having to install grunt-cli and then running the tests.

Changing the name of the Gruntfile was a great idea - it tells us that the file is just a grunt runner file, and not the grunt plugin itself. It's even better for Windows users like me who do not have to explicitly invoke grunt.cmd or alias it to run grunt.

All plugins have been moved to grunt-contrib and that keeps grunt extensible. However, the annoyance resulting from this is having to include each plugin in package.json and then load it in Gruntfile.js using grunt.loadNpmTasks. I am sure someone will write a script to read package.json and simply load all plugins that are prefixed with grunt (a sketch of the idea follows below). Loading the whole of grunt-contrib just takes a lot of time and installs a lot of plugins.
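Such a script could be just a few lines inside the Gruntfile (a sketch, not something Grunt ships with):

    // Sketch: load every dependency in package.json whose name starts with 'grunt-',
    // instead of one grunt.loadNpmTasks line per plugin.
    module.exports = function (grunt) {
      var pkg = require('./package.json');
      Object.keys(pkg.devDependencies || {})
        .filter(function (name) { return name.indexOf('grunt-') === 0; })
        .forEach(function (name) { grunt.loadNpmTasks(name); });

      // ... task configuration goes here ...
    };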

Removing directives was another great idea - I have always found it to be confusing. Replacing it with the accepted template system makes things a lot easier to understand.

Overall, Grunt 0.4 is a great upgrade and I hope it gets to a stable 1.0 version soon.

W3Conf - Session on IndexedDB

W3Conf, San Francisco - Feb 2, 22



Session on using IndexedDB today.


HTML5DevConf 3 - Session on Using Client Side Storage Today

HTML5 Developer Conference 





Using Client Side Storage Today

A website for dev conferences - PouchDB example

Check out "The Conference
a starter website for developer conferences

We love developer conferences; the only thing we all hate at such conferences is the unpredictable wireless internet. All the information about the conference (sessions, speaker list, schedule, etc.) is on the conference website, and without internet, it needs to be printed and handed out to all the attendees.
Given that most browsers have offline capabilities in some form or the other, I worked on a project that can be used as a starting point for a developer conference website.

Functionality 
The conference website has the following functionality.

  • Responsive site to view conference information like sessions, speakers and schedule - this information is stored and retrieved from a CouchDB database.
  • Go green - no printing conference schedules. All information is stored using IndexedDB or WebSql, so that it is available when there is no internet connectivity.
  • Bookmark and plan sessions you want to attend. These are also available offline, and sync to a server when the internet connection at the conference finally works.
  • Take notes on your computer, sync them to a server, and read them later.
  • Socialize with other participants, note down their details and see what sessions are getting interesting.
  • Responsive with Bootstrap - works on your computer, tablet or phone - take any of them to the conference with you.
Demo




Technology
Under the covers, the website uses PouchDB to store all the conference data locally (in IndexedDB or WebSql) and keep it in sync with the CouchDB backend.
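A simplified sketch of that sync (the database name and URL are placeholders, and the replication options are assumptions based on the PouchDB API of the time):

    // PouchDB stores data locally (IndexedDB or WebSQL) and replicates with CouchDB.
    // The names, URL and options below are placeholders/assumptions.
    var local = new PouchDB('conference');
    var remote = 'http://example.iriscouch.com/conference';

    PouchDB.replicate(remote, local, { continuous: true });   // pull sessions, speakers, schedule
    PouchDB.replicate(local, remote, { continuous: true });   // push bookmarks and notes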
This is also an entry to the Mozilla Dev Derby, so if you like the idea, please vote for it :)
If you would like to try using this for information about your conference, please do get in touch with me, I would be more than happy to help you.