JSON, REST, SOAP or simply innerHTML?

Hey,

This is one classic question that often gets asked while working on websites that make many AJAX requests to render a dynamic page. The AJAX calls are usually triggered by user interaction, but I have noticed many pages making AJAX requests even for the first view of the page. Though the format in which data is delivered is not a "BIG" concern for backend server developers (they have other things to worry about), this decision can potentially change the design of the front end. People may argue that, just like the backend, we could plug in adapters/converters for the data format, but I personally feel that this would only end up making the JavaScript unnecessarily heavy. Given the nature of JavaScript, adapters at the backend would be easier and quicker to write. Before trying to answer the question, it is worth looking at the rendering mechanism for each scheme, outlining the pros and cons of each approach.
SOAP-XML is too heavy, and I would prefer the backend to give me a stripped-down REST version of the data. The times when the front end requires the data type of an object (given that JavaScript is loosely typed) are rare. Most data from the server is used primarily to display things. Even if the data type is required for special front end operations (like sorting tables), assumptions can be made about the table contents when writing the sort code in JavaScript.
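As an illustrative sketch of the kind of assumption I mean (the function and column semantics here are hypothetical, not from any real page): if a column is known to hold prices, the sort code can simply coerce the cell text to a number instead of carrying type information over the wire.

// Hypothetical sketch: sort table rows by a column we *assume* holds numbers.
function sortTableByColumn(table, columnIndex) {
    var tbody = table.tBodies[0];
    var rows = [];
    for (var i = 0; i < tbody.rows.length; i++) {
        rows.push(tbody.rows[i]);
    }
    rows.sort(function (a, b) {
        // Assumption about the data: the cell text parses as a number.
        var x = parseFloat(a.cells[columnIndex].innerHTML);
        var y = parseFloat(b.cells[columnIndex].innerHTML);
        return x - y;
    });
    for (var j = 0; j < rows.length; j++) {
        tbody.appendChild(rows[j]); // re-appending an existing row moves it to the end
    }
}
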
REST is better, but its XML is nowhere near JavaScript's native data representation. The access mechanism using XPath is also slower. I would prefer the data to be JSON. After all, JSON is just a different way to write the REST payload; different syntax, but the same semantics. I usually end up writing a JSON converter for REST data anyway, hence I would prefer the backend server to have a layer that churns out data in JSON as well.
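To make the difference concrete, here is a rough sketch (the payloads and field names are made up) of reading the same field from an XML response versus a JSON response:

// Hypothetical XML response: <user><name>Alice</name></user>
// xhr.responseXML needs DOM traversal (or XPath via document.evaluate where supported).
var xmlName = xhr.responseXML.getElementsByTagName("name")[0].firstChild.nodeValue;

// Hypothetical JSON response: {"user": {"name": "Alice"}}
// eval() was the common approach at the time; a parser such as json2.js is safer.
var data = eval("(" + xhr.responseText + ")");
var jsonName = data.user.name;
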
The real contest is between JSON and innerHTML. In the case of JSON, the data is picked up by JavaScript code and inserted into the appropriate HTML elements, typically using variations of document.getElementById() and setting the required attribute. This is easy if the elements to be changed are distributed across the page. However, rendering clustered data like tables results in looping, leaving chunks of HTML inside the JavaScript code, as in the snippet below.

var htmlString = "<table>";
for (var i = 0; i < 10; i++) {
    // fromServer is assumed to be the array parsed from the JSON response
    htmlString += "<tr><td>" + fromServer[i].name + "</td></tr>";
}
htmlString += "</table>";
someElement.innerHTML = htmlString;


The HTML strings inside JavaScript soon become complex and hard to maintain, especially since changing styles inside JavaScript strings is not a great idea.
This is when I would prefer receiving HTML chunks from the server. All that is required is to read the response text and set it as innerHTML. Other operations like sorting could also be embedded into this HTML chunk, either as an inline JavaScript function (not a great idea) or as a reference to an external JS file. This approach also lets us look at results in isolation, making debugging and rendering easier. The HTML chunk can use the same stylesheet already included in the page.
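A minimal sketch of what this looks like (the URL and element id are placeholders):

// Fetch a ready-made HTML fragment and drop it into the page as-is.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/search/results.html", true); // placeholder URL
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById("results").innerHTML = xhr.responseText;
    }
};
xhr.send(null);
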
To summarize my take on this question, I believe that XML, both SOAP and REST, is a little uncomfortable due to the extra chunk of code that needs to be thrown in for rendering it. JSON is great as long as the changes it brings about are distributed across the page. The server sending me HTML is preferable when I have to change whole chunks of a page, especially when they are logical components of the page. Would love to hear comments on this.

Enhancements to the Google Search Results page

Hey,

I had earlier blogged about a greasemonkey script that allowed copy-pasting links on the Google Search Results page. Google's own script changes the href of links on the search page so that clicks on results can be counted. My script simply parses all links that have the class "l" and attaches an onmousedown handler to correct the damage done by the Google script.
However, disabling this also disables Google Web History, something that I use extensively. Also, I use the Snap Links Firefox extension to quickly open multiple links in the background; the greasemonkey script disables Web History in that case too. Hence, I modified the script a little so that Google Web History keeps working, without compromising the ability to copy links.
The script now checks which mouse button was clicked (lines 32-24), and if it was a right click, it does not change the Google URLs back to the original URLs. Also, to allow plugins like Snap Links to use the Web History feature, the links are changed to the web history links as soon as the page has loaded. This is achieved using a timeout in lines 69 to 71. Hence, whether the user clicks a link or a parser runs through the page, the hyperlink gets redirected via web history.
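For anyone curious, here is a rough sketch of the two mechanisms just described; it is not the actual script, the handler bodies are deliberately left as comments, and the structure is only an assumption about how such a script could be laid out.

// Sketch only: distinguish the mouse button, and defer a pass over the links.
function onResultMouseDown(event) {
    if (event.button === 2) {
        // Right click: leave the href pointing at the Google/web-history URL.
        return;
    }
    // Other clicks: the script's usual href fix-up would run here (omitted).
}

window.addEventListener("load", function () {
    // Shortly after load, attach the handler to result links (class "l") and
    // point them at their web-history versions so link parsers are counted too.
    window.setTimeout(function () {
        var anchors = document.getElementsByTagName("a");
        for (var i = 0; i < anchors.length; i++) {
            if (anchors[i].className === "l") {
                anchors[i].onmousedown = onResultMouseDown;
                // anchors[i].href = ...; // web-history redirect, omitted here
            }
        }
    }, 100);
}, false);
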
I now have Snap Links and other screen scrapers working perfectly fine, without compromising the ability to copy links.

Yet another ORKUT worm? - Nah

Hey,

Yesterday, a friend of mine showed me a script that claimed to exploit an SQL injection in Orkut that let people view others' hidden photos. This trick is old; nonetheless, I wanted to see if it was anything like what had occurred earlier. The previous bug was a genuine script injection hack, which one Rodrigo Lacerda exploited using Flash and JavaScript.
This one, however, was a lot lamer and did nothing of that sort. All it does is spam people on the victim's friend list and make the victim join some communities. The profile where this seems to have originated is this one; the objectionable content has since been suppressed. The dropper code is still available here. I am not sure why it is packaged as a greasemonkey script, but it is just a way to trick unsuspecting users into becoming droppers.

The Internals of the YUI Image Cropper and File Uploader

Hey,

It has been quite some time since I have written; I was a little busier than usual last week, working on a project and on YUI. I stole some time away to quickly post about two interesting new components of the YUI library that I have been working with extensively.

Image Cropper
The Image Cropper lets a user select a small part of an image, typically used for resizing in most applications, or to search for similar images, as like.com does. You can see an example here. I got a little curious to see how this is done. Contrary to what many guessed, there are not six or eight triangles with fractional opacity, leaving out the center, to get the effect. The real implementation is a lot simpler. Once the image is initialized using new YAHOO.widget.ImageCropper('yui_img'), a new div is drawn over the actual image with a black background and non-zero opacity. There is another, smaller div that represents the crop area. This div can be dragged and resized over the image, and shows the part of the image that is not masked by the black overlay. This div also has the image as its background-image. As the div is dragged around, its background-position changes to keep the background aligned with the original image. This gives the cropping effect.
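A bare-bones sketch of the masking trick described above, independent of YUI and with made-up element ids, just to show how background-position keeps the crop window "transparent":

// Assumed markup: a positioned container holding the image, a semi-transparent
// black mask over it, and a "crop" div whose background is the same image.
var image = document.getElementById("photo");       // hypothetical ids
var crop = document.getElementById("cropWindow");

crop.style.backgroundImage = "url(" + image.src + ")";

// Call this whenever the crop div is dragged to (left, top) within the image.
function alignCropBackground(left, top) {
    crop.style.left = left + "px";
    crop.style.top = top + "px";
    // Shift the background the opposite way so it lines up with the image below.
    crop.style.backgroundPosition = (-left) + "px " + (-top) + "px";
}
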

File Uploader
I had earlier written a post on file uploading. The YUI file uploader is a similar component. It uses Flash to allow multiple selection and also to display upload progress, very much like the scheme used on the Flickr image upload page. On initialization, a small Flash object is embedded into the page, which brings up a file-open dialog box. Since this is a Flash dialog, multiple files can be selected.

Apart from these, I was also looking at the YUI Layout Manager, something that was already achievable using the YUI CSS foundation. Beyond the basic positioning, the extra JavaScript allows collapsing and scrolling of panes. This is a pretty interesting component that I will be using in one of the projects I am working on. Will be posting on this later, once I have used the component inside out...

Sending Virtual Gifts as Mails..

Hey,

After an initial spike in traffic, ScrapsTimeOut has not been able to hold people's attention. I do realize that the idea drives seasonal traffic, with hits surging during occasions like Christmas or Valentine's Day. We also had initial technical glitches and the infamous Orkut captcha bug, which reduced usage.
In parallel, we also discovered that there are a lot of varied email forwards floating around the internet. Hence, converting ScrapsTimeOut into a mailable virtual gift was an obvious next step for us. All we had to do was come up with a simple HTML page that would be attached to emails and sent across to people. On clicking the "Download" button, the scrap text is submitted to a PHP page that conveniently adds a Content-Disposition: attachment; filename=mail.html header. This pops up the "Save As" dialog box for the HTML content. The user keeps the attachment on the local file system until the timer ticks down to zero. They can open the gift only at the time specified by the sender; rather like keeping a gift with you that you cannot open until a set time!
The main challenge was how the attachment displays in different mail clients. Yahoo Mail beta shows it inline, but without scripts or SWF content. In the case of Gmail, "viewing" the attachment disables some styles as well; as expected, scripts and SWF are also blocked. On rich clients like Outlook and Lotus Notes, the file needs to be explicitly downloaded. Also, people viewing the attachment may or may not be connected to the internet.
To serve all these requirements, the HTML attachment is just a plain page telling users to download and view it. An inline script tag displays an error saying the user is not connected to the internet. The page also references an external JavaScript file; when that file loads, it hides the "not-connected-to-the-internet" error and renders the final page, letting users view the gift.
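A stripped-down sketch of how such an attachment page can be layered; the ids, URL and messages here are invented for illustration and are not our actual markup:

<!-- Shown by mail clients that strip scripts: just the download instruction. -->
<p id="instructions">Please download this attachment and open it in a browser.</p>
<p id="offlineError" style="display: none;">You do not seem to be connected to the internet.</p>

<script type="text/javascript">
    // Scripts are running, so we are outside a webmail preview; show the
    // offline error until the external script proves we are online.
    document.getElementById("offlineError").style.display = "block";
</script>

<!-- Loads only when the user is online; the external file would hide the
     error (e.g. set offlineError's display back to "none") and render the gift. -->
<script type="text/javascript" src="http://example.com/gift.js"></script>
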
We have just finished testing and have launched it. Please let us know if you like this feature, or if you would like to see any other ideas implemented.