IndexedDB - Performance Comparisons: Part 2

See IndexedDB Performance Comparisons here.

In my previous post, I wrote about comparisons of various IndexedDB operations. Here is a compilation of the most common cases. Note that each test case has a comments section, and you can look at a test case and leave your thoughts (and interesting discoveries) right on that page.

  • Comparing keys is essentially comparing values of different types - numbers are the fastest and nested arrays the slowest. [link]
  • Interestingly, in Firefox, specifying a version is faster than not specifying one. I guess they look up the database meta-data when it is not specified. [link]
  • The presence (or absence) of a keypath and auto-increment does not change the speed of add operations. This is interesting, as I always thought that auto-increment or a keypath would slow down the operations, since additional computation would be required. [link]
  • Adding more stores to a transaction scope does slow down read operations. However, since reads do not block each other, should adding more stores to a read transaction really matter? [link]
  • Adding more stores to a write transaction also slows it down. However, in Firefox, putting all writes in one transaction is actually faster! [link]
  • In Chrome, calling put is always faster than calling add. On other browsers, add is faster! [link] No idea why.
  • Grouping all read operations in a single transaction is faster than having multiple transactions. In IE, grouping is especially faster - should this not be the general case, given that read transactions are non-blocking? [link]
  • Multiple write transactions, however, do slow things down as expected, due to contention issues. [link]
  • When using cursors, opening multiple cursors is much faster than reading or writing through a single cursor. Even for writes, waiting for one cursor to write sequentially is slower than having multiple cursors wait and then write. [link]
  • Adding indexes does not seem to slow down reads. What about multi-entry indexes, where the index table would have to be filled - should that not be slower? [link]
  • Iterating with a cursor over the primary key or over an index is almost equally fast. [link]
  • Getting just the keyCursor on an index is faster than getting the entire objectCursor. [link]
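To illustrate the last point, here is a minimal sketch of the two ways of iterating an index. The store name ('people') and index name ('by_name') are hypothetical; the key cursor never materializes the stored objects, which is where the speed difference comes from.

```javascript
// Count records via a key cursor: only primary keys are produced,
// so no object deserialization happens per record.
function countWithKeyCursor(db, done) {
  const index = db.transaction('people', 'readonly')
                  .objectStore('people')
                  .index('by_name');
  let count = 0;
  index.openKeyCursor().onsuccess = (e) => {
    const cursor = e.target.result;
    if (cursor) { count++; cursor.continue(); }
    else done(count);
  };
}

// Count records via a full cursor: cursor.value is materialized
// for every record, which is extra work the key cursor avoids.
function countWithObjectCursor(db, done) {
  const index = db.transaction('people', 'readonly')
                  .objectStore('people')
                  .index('by_name');
  let count = 0;
  index.openCursor().onsuccess = (e) => {
    const cursor = e.target.result;
    if (cursor) { count++; cursor.continue(); }
    else done(count);
  };
}
```

If all you need is the set of keys (or a count), `openKeyCursor` is the cheaper choice.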
Please do send pull requests. You can also send me suggestions about the typical scenarios that you would like to test, and I could codify them too.

IndexedDB - Performance Comparisons: Part 1

See IndexedDB Performance Comparisons here

In my previous post, I had written about the IndexedDB performance test cases that I had been working on. With the infrastructure set up, this post talks about some of the findings from the test cases. I plan to add more cases and this is the first part in a series.

Note: This post is NOT about comparing browsers, or comparing one storage technology against another. Instead, the focus here is to pick out common IndexedDB patterns and see how a developer may have to change their code for best performance. Each of the test cases has a comments section to discuss the outcome.

General Tests
The first set of tests is about the performance of comparisons based on the type of keys. The results are as expected, with integers and longs being the fastest and arrays being the slowest. Note that nested arrays are even worse.
Opening a database with and without a version seems to take almost the same time - except in Firefox, where specifying a version makes it around 10% faster. Could this be due to the extra time taken to look up the table meta-data stored in a different table?
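As a reference for the two patterns being compared, here is a minimal sketch of opening a database with an explicit version (the database and store names are hypothetical):

```javascript
// Open a database with an explicit version. Omitting the second
// argument opens whatever version is currently stored.
function openWithVersion(name, version, done) {
  const req = indexedDB.open(name, version);
  req.onupgradeneeded = (e) => {
    // Runs only when the stored version is lower than `version`.
    e.target.result.createObjectStore('store');
  };
  req.onsuccess = (e) => done(null, e.target.result);
  req.onerror = (e) => done(e.target.error);
}
```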
Similarly, the difference in write operations due to the presence (or absence) of keypaths and auto-increments is not very pronounced.
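The three key strategies being compared can be sketched as below; the store names are hypothetical, and the tests found write speed to be similar across all three.

```javascript
// Create one store per key strategy (inside an onupgradeneeded handler).
function createStores(db) {
  // 1. Out-of-line keys: the caller supplies the key on every add/put.
  db.createObjectStore('outOfLine');
  // 2. In-line keys: the key is read from each object's `id` property.
  db.createObjectStore('withKeyPath', { keyPath: 'id' });
  // 3. Generated keys: the store assigns an auto-incremented key.
  db.createObjectStore('withAutoIncrement', { autoIncrement: true });
}
```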

Transactions and batching up read/write requests
In theory, read transactions can occur in parallel, while write transactions wait for other write transactions to finish. The tests, however, seem to tell a different story. If every read request is placed in its own transaction, it is much slower than queuing the requests. It looks like the time taken to create a transaction outweighs the time taken for a request to be queued and executed. The results for writing data are as expected. Note that grouping reads into a few transactions is probably a better way than queuing all reads in a single transaction.
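The two read patterns compared above can be sketched as follows (the store name 'store' is hypothetical; `keys` is an array of keys to fetch):

```javascript
// One transaction per read: transaction creation dominates the cost.
function readOnePerTransaction(db, keys, done) {
  let pending = keys.length;
  keys.forEach((key) => {
    db.transaction('store').objectStore('store').get(key)
      .onsuccess = () => { if (--pending === 0) done(); };
  });
}

// All reads queued on a single transaction: requests run back to back,
// and the transaction is created only once.
function readBatched(db, keys, done) {
  const store = db.transaction('store').objectStore('store');
  let pending = keys.length;
  keys.forEach((key) => {
    store.get(key).onsuccess = () => { if (--pending === 0) done(); };
  });
}
```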

Object Stores and Transactions.
Does including multiple object stores in a transaction's scope change things? The time taken to create a transaction grows with the number of stores in its scope. Also, read operations in transactions that scope fewer object stores are faster. The effect is even more pronounced in write transactions, where the contention for stores increases as more stores are included in the transaction scope.
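A minimal sketch of the difference (store names 'a', 'b', 'c' are hypothetical):

```javascript
// Wide scope: three stores are locked even though only one is touched.
// The tests showed this costs time, especially for 'readwrite'.
function wideScope(db) {
  return db.transaction(['a', 'b', 'c'], 'readwrite').objectStore('a');
}

// Narrow scope: name only the store actually used -- cheaper to create,
// and writes contend on fewer stores.
function narrowScope(db) {
  return db.transaction('a', 'readwrite').objectStore('a');
}
```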

This is just the first part in the series of IndexedDB analyses. Watch this space for more tests and more results. To make the tests statistically significant, please run the tests. Also add your comments at the end of each test, pointing out any significant surprises you might encounter.

IndexedDB - Performance Comparisons

IndexedDB Performance Comparisons - Link

Over the last month, I have been playing with various IndexedDB operations, trying to figure out performance best practices and surprises. I introduced the test harness at the HTML5 Dev Conference in San Francisco. This post talks about the way these test cases were written, and some interesting observations made while writing them. I hope to discuss the actual results of the tests in a followup post.
I started the test cases with JSPerf so that I could concentrate only on writing code, without having to worry about measuring and displaying the results (JSPerf would take care of that for me). JSPerf internally uses Benchmark.js, which runs my test cases a statistically significant number of times to give accurate results.
However, there were some problems with continuing to use JSPerf.
  1. The test setup for JSPerf is not asynchronous. For IndexedDB, I wanted to delete the database between runs, or at least before the entire suite started. In Benchmark.js, the setup needs to be synchronous, as it is added inline with the test case itself. Hence, I had to add code in the 'Preparation' HTML section, where I hid the 'Run Tests' button until the database was deleted and any seed data added. Not the best way to run tests.
  2. I was having problems with the versioning system. I was not able to figure out a way to update the code and ensure that the latest version of my cases showed up directly at the URL.
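The asynchronous setup described in the first point can be sketched as below. The database name 'testdb' is hypothetical; the idea is simply that the suite must not start until the delete request completes.

```javascript
// Delete the database before the suite runs; deleteDatabase is
// asynchronous, so the 'Run Tests' button stays hidden until `done`.
function prepareSuite(done) {
  const req = indexedDB.deleteDatabase('testdb');
  req.onsuccess = () => done(null);        // clean slate, enable the suite
  req.onerror = (e) => done(e.target.error);
}
```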
I thought it would be simpler to roll my own version of JSPerf, given that I had already been looking at the internals of Benchmark.js to figure out async setup. Here is the Bootstrap-based theme that lists the various test cases on one page and runs each test with support for asynchronous setup. I also added visual details like collapsible tests, progress bars, and more details about the tests themselves.
The source code for the test cases is also checked into GitHub. Watch this space for discussions about the test results, soon to follow.