Hey,
The moment we fail to notice that the '1' sandwiched between the 't' and the 'b' in the browser address bar is not an 'i', the phisher takes over. Creating identical-looking websites is quick and easy; I have even heard of toolkits that replicate pages automatically.
Apart from this, we rarely check the long URLs that open in the address bar when we click links in emails supposedly from our banks, warning us about credit card suspensions and the like. The idea behind phishing is simple, and the irony is that anti-phishing can be even simpler: with some conspicuous, domain-specific personalization on the web page, the fake sites could never convincingly impersonate the genuine ones.
Talking about domain-specific personalization, I was thinking of Greasemonkey, where I specify //@include www.site.com/* in the header comment. In a sense, that is also domain-specific personalization of a web page. So, can Greasemonkey scripts be used to fight phishing? Here the man-in-the-middle problem does not really come into the picture, as all the personalization is done by a script that never travels over the network. The script would simply sit in the client browser and, once it sees the domain of a site that is to be protected, change the page into the desired form. One could argue that the scheme won't work if the script itself is compromised, but the first premise of this idea is trust in the client browser. A minimal sketch of what I mean is below.
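(All the names in the sketch are placeholders: www.mybank.example stands in for the real banking domain, and the secret phrase is whatever the user picks when installing the script.)

    // ==UserScript==
    // @name        Bank page personalizer
    // @include     http://www.mybank.example/*
    // @include     https://www.mybank.example/*
    // ==/UserScript==

    // Placeholder values: use the real site's hostname and a phrase of
    // your own choosing. Neither ever travels over the network.
    var PROTECTED_HOST = "www.mybank.example";
    var SECRET_PHRASE  = "purple-elephant-42";

    // Decorate the page only on an exact hostname match. A look-alike
    // domain (say, with a '1' in place of an 'i') never matches the
    // @include lines above, so the script never even runs there.
    if (window.location.hostname === PROTECTED_HOST) {
        var seal = document.createElement("div");
        seal.textContent = "Local seal: " + SECRET_PHRASE;
        seal.style.cssText = "background:#2e7d32; color:#fff; padding:6px;" +
                             " font-weight:bold; text-align:center;";
        document.body.insertBefore(seal, document.body.firstChild);
        document.body.style.border = "4px solid #2e7d32";
    }

The alarm signal is the absence of the decoration: if a page claims to be the bank but shows no seal and no border, something is wrong.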
Having a secure seal, the way Yahoo Mail does, is a good idea, but not all sites implement one. Hence a user could protect himself from phishing by using a Greasemonkey script to add that personalization himself.
Taking the idea a step further, would it be better to have one generic Greasemonkey script that does this personalization for all protected sites, or to use something like Platypus to generate a GM script for every page we need to protect? I am still exploring this idea, and I would be glad to hear about any ideas that you may have. To show what I mean by the generic variant, here is a rough sketch:
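(Again, the hostnames, colours and phrases below are invented for illustration; a Platypus-like generator could just as well emit one table entry per protected page instead of a whole script.)

    // ==UserScript==
    // @name        Generic anti-phishing personalizer
    // @include     *
    // ==/UserScript==

    // Hypothetical table mapping each protected hostname to the
    // personalization the user chose for it.
    var PROTECTED = {
        "www.mybank.example": { color: "#2e7d32", phrase: "purple-elephant-42" },
        "mail.example.com":   { color: "#1565c0", phrase: "green-teapot-7" }
    };

    var entry = PROTECTED[window.location.hostname];
    if (entry) {
        // A fake site sits on a different hostname, finds no entry in the
        // table, and therefore never gets the seal.
        var seal = document.createElement("div");
        seal.textContent = "Local seal: " + entry.phrase;
        seal.style.cssText = "background:" + entry.color + "; color:#fff;" +
                             " padding:6px; font-weight:bold; text-align:center;";
        document.body.insertBefore(seal, document.body.firstChild);
    }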