Reputations on Blog Comments - Design

Hey,

I had posted about a product that assigns online reputations to bloggers and commenters. Developed by a company called SezWho, it offers plugins for WordPress and Movable Type that accumulate people's reputation on SezWho's servers.
I was looking for something similar for Blogspot, but have not really succeeded in finding one. MyBlogLog is not really about comments, and co.mments is more about tracking. Even the SezWho page does not have an offering for Blogspot. So I thought I could come up with something that would let ratings of people who have commented on a blog show up on blogs hosted by Blogger as well.
Blogs allow users to embed scriptlets, and that is the hook that can be used to achieve this. The end result is not graphically great, but it gets the job done. I could not complete it, but here is the way to do it; I will finish it later when I can squeeze out some time.
The script basically looks for the "comments" div, digs a little deeper to find the individual comments (the element shown in line 83), weeds out anonymous comments, and sends a request to a server that returns the ratings of the commenters, if any. The user can also rate the current comment writer; the button performs a hidden iframe post after collecting the rater's unique id (either the Blogger id or the mail address). All the cross-site AJAX is done using YUI. The unique id of the comment writer is available inside the tag (line 86) and is passed as a parameter. I also saw it working across a couple of different themes. All I need now is some server space and I could get this system up and running.
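To make the flow concrete, here is a minimal sketch of what the embedded scriptlet could look like. This is not the actual code: the ratings endpoint (ratings.example.com), the showRatings callback and the profile-link pattern are assumptions for illustration, and the real script does the cross-site calls through YUI rather than a bare script tag.

// Minimal sketch of the rating scriptlet (hypothetical endpoint and callback).
function decorateComments() {
  var commentsDiv = document.getElementById("comments");
  if (!commentsDiv) { return; }

  // Dig into the comments block and collect the commenter profile ids,
  // weeding out anonymous comments (they carry no profile link).
  var links = commentsDiv.getElementsByTagName("a");
  var profileIds = [];
  for (var i = 0; i < links.length; i++) {
    var match = (links[i].href || "").match(/blogger\.com\/profile\/(\d+)/);
    if (match) { profileIds.push(match[1]); }
  }
  if (profileIds.length === 0) { return; }

  // Ask the (hypothetical) ratings server for the reputation of these ids.
  var script = document.createElement("script");
  script.src = "http://ratings.example.com/ratings?callback=showRatings&ids=" +
               profileIds.join(",");
  document.body.appendChild(script);
}

// Called back by the ratings server with { profileId: rating } pairs.
function showRatings(ratings) {
  // ... walk the comments again, render a small rating badge next to each
  // commenter, and add a "rate this commenter" button that posts through
  // a hidden iframe after collecting the rater's id.
}

decorateComments();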
So far, so good, but this system does have its problems. The entire UI addition is visible only when a single blog post is viewed. It is NOT visible when a user is writing comments, as the JavaScript that we include in the page is not available there. That would require a Greasemonkey script, but that is for later.
So the design is ready; all I am waiting for is someone to donate me some server space, and some time that I can squeeze out. I am not sure if this will really be useful, but JavaScript development sure is fun!!

CardSpace, Payments and Business

Hey,

Many people have been toying with the idea of using CardSpace as a vehicle for e-commerce payments. The main driver for this idea, I believe, is the fact that 3-D Secure resembles federation protocols like SAML or WS-Federation. Ironically, 3-D Secure and federation seem to have the same problems with user adoption. Though the technology is remarkable, the engineers don't seem to have covered the last mile of taking it to end users.
I first read about this idea in a blog post by Sid Sidner. Many others, including Ashish Jain (Ping Identity) and Kim Cameron, had linked to the post. I also found that XMLDAP has simple demos of the concept: a sample card issuer that corresponds to a bank, and a sample merchant site. The cards issued at the bank can be used as currency at the merchant site.
In this post, I try to analyze the various issues with the idea, and potential solutions.
The first problem is that the demo does not work with Windows CardSpace. Even after importing the card, it does not show up in the identity selector at the merchant.
The reason it fails is that the required claims parameter at the merchant is something like this:

<param id="requiredClaims" name="requiredClaims"
       value="http://schemas.xmlsoap.org/PaymentCard/account
              http://schemas.xmlsoap.org/PaymentCard/VV
              http://schemas.xmlsoap.org/PaymentCard/expiry
              http://schemas.xmlsoap.org/PaymentCard/trandata?price=2700EUR" />

The card that is imported has all of these claims except the last one. The URL parameter that follows the question mark in the trandata claim is dynamic and hence is not part of the claims in the card.
The best way to accommodate this would be to enhance the CardSpace standards to make the identity selector recognize the claim URL and its parameters as separate entities.
A quick fix, however, is to include this as an optional claim. In that case, the user will have to explicitly select the claim when submitting the card, and the STS can throw an error (the new version supports custom error messages) if no value is submitted.
A couple of other enhancements could be to identify this uniquely as a payment card by having an attribute that indicates the type of the card. This would be an attribute that all the merchants and banks agree upon, and it would help merchants show only the cards that they accept. Additionally, the currency (viz. dollar, rupee or euro) could be a separate URL parameter, something like price=2700&currency=rupee.
The last glaring problem that I see here is that getting the user's consent for the money transfer is not simple. The consent is buried among the other attributes, but I feel the money part alone should be highlighted.
The CardSpace standards sure are evolving, and I hope to see a simple, single standard protocol for federation. On second thoughts, why shouldn't people use OpenID (and YADIS for discovery) for online payments?

Hey,


Another crash, and I lost my Firefox. I thought it was time to re-install and re-document the extensions that I have on Firefox. I did a lot of cleanup, and interestingly, I realized that I have only 77 frequently used extensions!!
I used ExtensionDump to generate this list. What extensions do you have?? :)

Application: Firefox 2.0.0.11 (2007112718)
Operating System: WINNT (x86-msvc)



Scraps-Timeout v2.0: Is it an idea badly executed?

Hey,

I had earlier written about an application that I had been working on for Orkut. Too restless to wait for OpenSocial to make its grand appearance, we cooked up a Flash application that could run as scraps.
We also did rigorous viral marketing to start with. Initially, the traffic was good, with around 40K hits in the first week. However, the traffic seems to have dipped since. Apparently our first-level folks in the viral marketing circle were not very impressed by the idea and did not pass it on to others. We have been trying to figure out the reasons, asking many people and doing unofficial surveys, and I thought it would be interesting to share our findings.
Many people I have seen on Orkut do not seem bothered about sending random scraps to everyone. There are these small JavaScript snippets that people paste into the address bar to scrap all their friends, whether festival wishes or plain stupid messages. Taking the cue from that, if people are interested in sending messages as pictures, why not use ScrapsTimeout?
The initial hiccups (read: bugs) on the site could have deterred people, but the Meebo box that we put on the page seemed to help a lot. We worked extra hard to iron out the bugs, but I am sure they cost us some users. There were a few dedicated users who lent us their valuable time, patiently telling us about the bugs. Thanks to all of them.
The other barrier could have been having to copy-paste the code into Orkut, instead of automatically scrapping friends. I was a little averse to asking for usernames and passwords (as that is the only way to scrap friends, unless someone figures out a CSRF!!). However, we decided to give it a try and put in Gigya. The problem with Gigya is that for Orkut it only allows scrapping yourself. So we decided to write something similar to Gigya to get this done.
I have also been working on the page for the last week, adding more gifts and a preview feature for the themes, but the majority of the work was restructuring the code to clean out that one-night hackish development.
The code may not be the best I have written, but it sure is more maintainable and scalable. Adding themes and gifts is a lot easier now. We are in the final phase of testing and are planning to launch the new version very soon.
There is one bug that I am trying to nail down: on IE7, for a div with overflow:hidden, if a child has position:relative, the overflow does not seem to be clipped. Have to check it out.
We will also be pushing a little more marketing, our last attempt at this idea. I hope it works.
Please do let us know your thoughts if you liked the idea. I think that at this juncture, it is the users who will decide whether we keep working on this or move on.... :)

Cross Domain Server Requests / AJAX

Hi,

With the enforcement of the same-origin security model in browsers, many innovative mash-ups are virtually impossible. There have been hackish ways of achieving this, but quelling cross-browser anomalies warrants a library that can do it. While writing a bookmarklet this is the normal situation, as you are almost never on the page that fetches more data for the user.
This is where the YUI implementation, YAHOO.util.Get, comes in very handy. Though YUI Get is well documented, I dug into the source code and wanted to write about the internal workings of the file.
To start with, the Get library adds the external file as a script tag in the HTML document. As the GET request for the script always carries the cookies and authentication information of the domain where the script source resides, stateful sessions can be handled without any problems.
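A typical call to the utility looks roughly like the following; the feed URL and the processFeed function are made-up examples, and the snippet assumes the YUI "yahoo" and "get" modules are already on the page.

// The feed URL is a made-up example of a cross-domain file that returns
// executable JavaScript, e.g. "var feedData = {...};".
YAHOO.util.Get.script("http://data.example.com/feed.js", {
    onSuccess: function (o) {
        // The fetched script has already executed, so anything it declared
        // (feedData in this example) is now available on the window object.
        processFeed(window.feedData);
    },
    autopurge: true   // remove the script node afterwards to keep the DOM lean
});

function processFeed(data) {
    // ... make sense of the fetched data ...
}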
The tricky part is detecting the script tag's load event. In Firefox, onload works just fine. In IE, the onreadystatechange event is used, and based on the state, the onSuccess handler is fired. For Safari versions below 3.0 there is no way to detect this, and a very interesting workaround is in place. Quoting the comments in the Get source,

"...script nodes with complete reliability in these browsers, script nodes either need to invoke a function in the window once they are loaded or the implementer needs to provide a well-known property that the utility can poll for."
For CSS style sheets, Firefox does not fire onload at all, and hence the styles are simply applied as they arrive from the server. There are a lot of quirks like these, which is why the utility is still in beta. There is also a provision to delete the script node, to keep the size of the DOM in control.
One feature or service that I would like to see is a YAHOO proxy that fetches data from any URL and sends it to the client, through the Get utility, as var extVariable = []. The extVariable name should be configurable. At first inspection, this looks pretty safe, since all the data arrives as escaped strings inside the page. The script inside the page can then do the harder job of making sense of the fetched data, typically using something similar to the parseJSON method.
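If such a proxy existed, consuming it from the page might look roughly like this. The proxy URL, its query parameters and the handleRemoteData function are all hypothetical:

// Hypothetical proxy that wraps any URL's response body in a JS assignment,
// i.e. its response would be:  var extVariable = "...escaped response...";
var proxyUrl = "http://proxy.example.com/fetch" +
               "?url=" + encodeURIComponent("http://other-site.com/data.json") +
               "&var=extVariable";   // the variable name is configurable

YAHOO.util.Get.script(proxyUrl, {
    onSuccess: function () {
        // extVariable now holds the remote response as an escaped string;
        // the page script does the harder job of making sense of it,
        // typically with a parseJSON-style routine.
        handleRemoteData(window.extVariable);
    }
});

function handleRemoteData(raw) {
    // ... parse and use the data ...
}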
Another enhancement could be the possibility to POST forms to the URL. Though it may be difficult to read the response, such an abstraction sure could help in passing large amounts of data to servers on different domains.
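The POST half can already be approximated with a hidden iframe; a rough, self-contained sketch follows (the function and field names are illustrative). The response cannot be read back because of the same-origin policy, but large payloads can be pushed to a remote server this way.

// POST a set of fields to another domain through a hidden iframe.
function crossDomainPost(actionUrl, fields) {
  var iframe = document.createElement("iframe");
  iframe.name = "xdpost_target";
  iframe.style.display = "none";
  document.body.appendChild(iframe);

  var form = document.createElement("form");
  form.method = "POST";
  form.action = actionUrl;
  form.target = iframe.name;          // submit into the hidden iframe

  for (var name in fields) {
    if (fields.hasOwnProperty(name)) {
      var input = document.createElement("input");
      input.type = "hidden";
      input.name = name;
      input.value = fields[name];
      form.appendChild(input);
    }
  }

  document.body.appendChild(form);
  form.submit();
}

// Example: crossDomainPost("http://other-site.com/submit", { msg: "hello" });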
To summarize, the Get utility is a great step ahead for mash-ups and bookmarklets, and I hope it comes out of beta soon.

Writing effective bookmarklets easily

Hey,

Though browser extensions and Greasemonkey scripts are the obvious choices for extending web pages, bookmarklets still continue to be the only reliable, cross-browser way to easily add custom functionality to sites.
As bookmarklets are simply scriptlets extracted from a page, and can later be transformed into extensions or Greasemonkey scripts, they are an obvious choice for prototyping extensions to web pages. Most bookmarklets just include a remote script file into the current DOM. However, a bookmarklet becomes even more elegant if it can use libraries and external CSS. It would also be great if the entire bookmarklet functionality could be broken into files for manageability. I was working on a bookmarklet project when I wrote a generic bookmarklet that can be used as a starting point for new bookmarklets. You can find the source here.
A little explanation of the bookmarklet framework, so that you can extend it to do your functionality: the 'init' function loads all the scripts and style sheets that you specify. Alternatively, the list of files could be written into this file at build time. Line 16 checks if the script is already included, and if it is, it does not try including all the files again. A note on trying to fetch data from a site x.com when you are on a web page from y.com: due to the same-domain restriction policy, plain AJAX requests cannot be made.
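To give a flavour of it, here is a stripped-down sketch of what such an init routine could look like. This is not the linked source itself; the file URLs and the guard variable are placeholders:

// Generic bookmarklet skeleton: include the real scripts and styles,
// then let them take over. File URLs below are placeholders.
(function () {
  // Guard so that clicking the bookmarklet twice does not re-include everything.
  if (window.__myBookmarkletLoaded) { return; }
  window.__myBookmarkletLoaded = true;

  var scripts = ["http://my-host.example.com/bookmarklet-core.js"];
  var styles  = ["http://my-host.example.com/bookmarklet.css"];
  var head = document.getElementsByTagName("head")[0] || document.body;

  function init() {
    var i, node;
    for (i = 0; i < styles.length; i++) {
      node = document.createElement("link");
      node.rel = "stylesheet";
      node.type = "text/css";
      node.href = styles[i];
      head.appendChild(node);
    }
    for (i = 0; i < scripts.length; i++) {
      node = document.createElement("script");
      node.type = "text/javascript";
      node.src = scripts[i];
      head.appendChild(node);
    }
  }

  init();
})();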
Though I started writing cross-site XHR-JSON support, I found that YUI just got it out in its beta. The YUI Get utility is something you may want to take a look at. All that the utility does is add a "script" tag to the DOM, thus including the JS file fetched from x.com into the page at y.com. Additional features include cleaning such script tags out of the DOM so as not to bloat the source, onload handlers, and attaching the script tag to specific DOM elements. This is a particularly useful utility to fetch data from different domains, and even to include style sheets.
A couple of enhancements that I am working on are:
1. Check if the included scripts already exist in the page, and if they do, don't load them.
2. Currently, every loaded script gets a new onload function attached to it. That is bad on memory, and I am writing a common, singular function that listens to all the loads; a rough sketch of the direction is below.
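Here is that rough sketch of the shared handler I have in mind; the names are illustrative, not the final code:

// One shared handler for every script the bookmarklet loads, instead of a
// fresh closure per file.
var pendingScripts = {};

function onAnyScriptLoad(url) {
  if (!pendingScripts[url]) { return; }   // already handled (IE can fire twice)
  delete pendingScripts[url];
  for (var stillLoading in pendingScripts) {
    if (pendingScripts.hasOwnProperty(stillLoading)) { return; }
  }
  startBookmarklet();   // all files are in
}

function loadScript(url) {
  pendingScripts[url] = true;
  var node = document.createElement("script");
  node.src = url;
  // Firefox and friends fire onload; IE fires onreadystatechange.
  node.onload = function () { onAnyScriptLoad(url); };
  node.onreadystatechange = function () {
    if (this.readyState === "loaded" || this.readyState === "complete") {
      onAnyScriptLoad(url);
    }
  };
  document.getElementsByTagName("head")[0].appendChild(node);
}

function startBookmarklet() {
  // ... the actual bookmarklet functionality ...
}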

Please do let me know of any other features that you would want in this generic bookmarklet.
If you are looking at converting your bookmarklet to a Greasemonkey script and are having trouble testing it, you could give a shot to a Greasemonkey testing framework that I am working on.