User Interaction Sync for ReactNative

Test ReactNative on multiple devices simultaneously - github:Maya-kai
As developers creating user interfaces, we need to ensure that the screens and controls we build work well across the wide range of devices our customers use. On the web, most well-written sites have responsive layouts that can be tested across different viewports using browser developer tools.

Testing Responsive Layouts

When testing mobile sites on real devices, tools like BrowserSync remove the need to repeat every interaction on every device by mirroring clicks, typing and scrolls across multiple screens simultaneously. This method is particularly useful when trying the app on a wide assortment of Android devices with different screen sizes, capabilities and operating system versions.
This approach is also useful for testing native mobile applications, and I have previously written about using BrowserSync with a Cordova/PhoneGap application. One of the comments on that post suggested bringing this multi-screen, mirrored-interaction testing to ReactNative apps too.
While Cordova applications are native mobile applications, they run inside a full-screen WebView, which BrowserSync can use to listen to top-level "document" or "window" events. There is no WebView, "document" or "window" in ReactNative, and listening to or simulating native events could get tricky. This podcast explains how ReactNative implements its own event sub-system, and we can tap into that model to achieve the same result.

Demo

I created Maya-Kai (மாய கை), a project that helps you test ReactNative applications across multiple devices. "Maya-Kai" literally means "Magic Hand" in Tamil, and this invisible hand copies events from one device and mirrors them on the others.



Adding it to your project is as simple as importing the package and starting it up. The project is open source on GitHub under the MIT license, and the README in the repository has more instructions on how to use it.
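To make that concrete, here is a rough sketch of what the entry-point change could look like. The package name, the start() call and the __DEV__ guard are my assumptions for illustration; the README has the actual usage.

```js
// index.ios.js / index.android.js — a minimal sketch, not the documented API.
import { AppRegistry } from 'react-native';
import App from './App';

if (__DEV__) {
  // Only mirror interactions in development builds (assumption: the package
  // exposes a start() that connects to the sync server).
  const mayaKai = require('maya-kai');
  mayaKai.start();
}

AppRegistry.registerComponent('MyApp', () => App);
```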

Architecture and internals

ReactNative's event system has a concept of event plugins. These plugins are internal and provide handlers for events like touch, change or input. When maya-kai is imported into the main index.ios.js or index.android.js, it appends an extra plugin that simply listens to all events and broadcasts them to a WebSocket server.
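As an illustration, the appended plugin might look something like the sketch below. The extractEvents hook mirrors the shape of React's internal event plugin interface, but that interface, the registration step (omitted here) and the server address are assumptions and will vary across ReactNative versions.

```js
// Conceptual sketch of a "mirror everything" event plugin.
const JSOG = require('jsog');

// Hypothetical sync server that fans events out to the other devices.
const socket = new WebSocket('ws://192.168.1.10:8080');

const MirrorEventPlugin = {
  eventTypes: {}, // produces no synthetic events of its own

  // Called by React's event system for every top-level native event.
  extractEvents(topLevelType, targetInst, nativeEvent, nativeEventTarget) {
    if (socket.readyState === 1 /* OPEN */) {
      // JSOG handles the cycles that plain JSON.stringify would choke on.
      socket.send(JSOG.stringify({ topLevelType, nativeEvent }));
    }
    return null; // let the normal plugins handle the event as usual
  },
};
```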
Here is the architecture; read the steps in the numbered order.


The event is then broadcast to all other connected devices. A listener on each of the other devices receives the event and injects it using ReactNativeEventEmitter._receiveRootNodeIDEvent. The events are serialized with a JSON Object Graph library (JSOG) rather than JSON.stringify, since they contain cycles.
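The receiving side could look roughly like this sketch. The module path for ReactNativeEventEmitter, the _receiveRootNodeIDEvent signature and the payload fields are assumptions based on older ReactNative internals, and the server URL is hypothetical.

```js
const JSOG = require('jsog');
// Path is an assumption; it has moved between ReactNative versions.
const ReactNativeEventEmitter =
  require('react-native/Libraries/ReactNative/ReactNativeEventEmitter');

const socket = new WebSocket('ws://192.168.1.10:8080');

socket.onmessage = ({ data }) => {
  // Decode the JSOG payload broadcast by the sending device.
  const { rootNodeID, topLevelType, nativeEvent } = JSOG.parse(data);

  // Inject the mirrored event into this device's React event system,
  // as if it had originated from a local touch.
  ReactNativeEventEmitter._receiveRootNodeIDEvent(rootNodeID, topLevelType, nativeEvent);
};
```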

Record and Replay

While broadcasting and mirroring live interactions was the primary use case, the events can also be recorded to a file and replayed later using the exact same mechanism. This record/replay scenario is useful for navigating to common screens during development, or even for UI integration testing. Check out the clients folder to see how simple the record and replay clients were to implement.
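A hypothetical record/replay client in Node.js might look like the sketch below; the file format, server address and timing logic are my assumptions for illustration, not necessarily what the clients folder implements.

```js
const fs = require('fs');
const WebSocket = require('ws');

const ws = new WebSocket('ws://localhost:8080');
const recording = [];
const started = Date.now();

// Record: stamp every broadcast event with a relative timestamp.
ws.on('message', (data) => {
  recording.push({ at: Date.now() - started, data: data.toString() });
});

// Stop recording on Ctrl+C and dump the session to a file.
process.on('SIGINT', () => {
  fs.writeFileSync('session.json', JSON.stringify(recording));
  process.exit(0);
});

// Replay: read a recorded session and re-broadcast each event with its
// original delay. Call this once the socket is open, e.g.
//   ws.on('open', () => replay('session.json'));
function replay(file) {
  const events = JSON.parse(fs.readFileSync(file, 'utf8'));
  events.forEach(({ at, data }) => setTimeout(() => ws.send(data), at));
}
```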

Next Steps

Most integration and UI testing for ReactNative apps today is done using Appium. However, elements are typically selected using testID, accessibility IDs or names, which is not natural to React.
ReactJS test frameworks like Enzyme, on the other hand, work at the component level. Since the events recorded with this approach are also at the component level, this could be a way to perform integration testing without relying on the Selenium JSONWire protocol. I am toying with the idea of building an integration testing framework on top of this, and would love to hear if you have suggestions or want to collaborate.
I am also trying maya-kai on every ReactNative application I can lay my hands on, to make sure it handles all events and edge cases. If you have a ReactNative app, or know of one, and want to see how it works with maya-kai, please leave a comment below and we can try it out together :)