Smart Browser, Where Art Thou?

In 1998, I got my hands on Mitchell Waldrop's book called 'Complexity'. Ever since, I've been on an amazing journey discovering one of the most profound developments in modern science. Complexity, or more formally, the study of complex systems, is about unifying themes that run through all modern scientific disciplines including physics, biology, economics, ecology, linguistics, and sociology.

John Holland, one of the fathers of complexity science, coined the term Complex Adaptive Systems to characterize ant colonies, societies, cells and ecosystems. He pointed out that the agents in these systems adapt to the surrounding environment by building models and by learning. And as Jeff Hawkins summarized in his book 'On Intelligence', human evolution has been about the progressive invention of various forms of memory. Genes, brains, language and books are all examples of this.

It is obvious that memory plays a critical role in human intellect and human interactions. Yet today our interactions with computers, and with the web in particular, are disappointingly stateless. We keep going back to the Google search box and re-entering the same queries over and over again. The computer simply has no idea what we are looking for or how to help us find it. Ah, you'd say, but how can it? Don't we need artificial intelligence for that? My claim in this article is that no, we do not. Instead, we need to take inspiration from complexity science and focus on usability and productivity.

Bookmarks are flat, the world is not
Let’s start by looking at the way we currently remember things on the web. When we find something interesting, we create a bookmark. If we are web 2.0 savvy, we tag it and send it to del.icio.us. Sounds good, right? Not really. Say we find an interesting book on Amazon, a wine to buy for a friend's birthday, or a restaurant that we’d like to visit next Valentine's Day. The moment we bookmark the site, the rich concept, a book, a wine or a restaurant, instantly disappears. What we have left is the link: a piece of text that will not be meaningful a week from now. Why not? Because we do not think in terms of links. We think in terms of concepts like books, wines and restaurants.

The links that we store as clues for dealing with the massive amount of information out there are just not enough. We need to capture and store the semantics of the thing we find interesting. If it is a book, we need to remember that it is about globalization and that it was written by Thomas Friedman. If it is a wine, we need to capture that it was a mix of Petite Syrah and Zinfandel and that it came from Bogle winery in California. If it is a restaurant, we need to remember that it serves Asian fusion cuisine near Times Square in New York.
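To make this concrete, here is a rough sketch of what such a concept-aware bookmark could look like as a data structure. The field names are purely illustrative, not taken from any real browser or product:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticBookmark:
    """A bookmark that keeps the concept, not just the link."""
    url: str
    kind: str                              # e.g. 'book', 'wine', 'restaurant'
    attributes: dict = field(default_factory=dict)
    tags: list = field(default_factory=list)

# The Thomas Friedman book example from the text, as structured data.
book = SemanticBookmark(
    url="http://www.amazon.com/some-book-page",   # placeholder URL
    kind="book",
    attributes={"author": "Thomas Friedman", "topic": "globalization"},
    tags=["globalization", "books"],
)
print(book.kind, book.attributes["author"])
```

The point is that the type ('book') and its defining attributes survive the act of bookmarking, so the bookmark still means something a week later.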

There is a minimum set of attributes that actually defines a thing in our brains. Without these attributes, the objects carry no meaning and we cannot remember them. By now, as a society, we have accumulated countless bookmarks that we will never revisit. We also lack the tools to help us clean them out. So we are trapped with collections of information that we cannot reuse.

Tagging all the way
It is important to capture the right amount of information to create meaning. It is also important to facilitate meaning via a taxonomy. This is a fancy word, but frankly, Yahoo! got it right from its early days. Things and concepts need to be organized, because people thrive on structured information. The problem, however, is that web directories are static, but our personal taxonomies aren't. Like any complex system, our understanding of the world, and our semantics, constantly changes.

The recent invention of tagging has the potential to solve the personal taxonomy problem. Tags are much more flexible than folders for managing this evolution. For example, if something was called OLE, we can retag it as ActiveX, and fundamentally it remains the same piece of information. Moving stuff from one folder to another is incredibly painful, but renaming a tag is easy.

To be successful, tagging needs to become pervasive and it needs to be built into the browser.

There is a huge productivity gain in that. When we need to find something, we can just think of a tag or a concept and instantly narrow our search space from thousands of things down to a handful. And no, it is not the same as folders and bookmarks. With folders, we are always focused on the hierarchy and must decide into which folder or sub-folder to save the link.

The truth is that it was always a losing game, because James Gosling's blog could go into a Java folder, a Blog folder, or a Movers and Shakers folder. With tagging, we can quickly find a link to Gosling's blog by thinking of any one of these concepts. This is natural, because this is how our brain organizes information, and it is so fast that we cringe at the thought of expanding yet another folder.
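The tags-versus-folders argument can be sketched in a few lines: one item carries many tags, the same item is reachable through any of them, and renaming a tag (the OLE-to-ActiveX case above) is a single cheap operation. Item names and tags here are illustrative:

```python
from collections import defaultdict

# A tiny tag index: tag -> set of item ids. Unlike a folder tree,
# an item can live under any number of tags at once.
index = defaultdict(set)

def tag(item, *tags):
    for t in tags:
        index[t].add(item)

def rename_tag(old, new):
    # Renaming is one cheap operation; the items themselves never move.
    index[new] |= index.pop(old, set())

tag("gosling-blog", "java", "blog", "movers-and-shakers")
tag("ole-docs", "OLE")

rename_tag("OLE", "ActiveX")

# Gosling's blog is findable through any of its three concepts.
print(sorted(t for t, items in index.items() if "gosling-blog" in items))
```

With a folder hierarchy, the blog would sit in exactly one place; here, every tag is an equally fast entry point.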

Got memories? Use them wisely.
Suppose now that the browser can capture information, preserve semantics, and offer rich support for tagging. Assume also that the browser lets us painlessly find things that we once liked. What else needs to be in place for a more productive online experience? We'd like the browser to use the information we've already stored to help us find relevant new information.

For example, say we bookmarked the movie 'Memento' on the Internet Movie Database site. Our smart browser created a movie bookmark, stored the director, the names of the stars, the year and the title of the movie. It also helped us tag this movie with 'Crime', 'Drama', 'Mystery' and 'Thriller' tags.

The next day, when we are perusing our movie collection, we decide to rent this movie on Netflix. Instead of going to netflix.com, typing the title of the movie and then selecting it from the list of matches, we just want to right-click on the movie in our organized collection and select 'Rent on Netflix'. Or, when we want to find more movies by the same director, Christopher Nolan, we click and select 'Find more movies by Christopher Nolan'. If we want to see a video about the star Carrie-Anne Moss, we should be able to right-click and select 'Show videos with Carrie-Anne Moss on YouTube'.

All of this is possible because the browser has the concept of a movie and its attributes built right in. This is not artificial intelligence; it is simply hard-coding. But it is the kind of hard-coding that is not wrong, because it leads to a huge productivity boost. In addition to handling the everyday concepts, the browser also helps users tag every piece of interesting content, making our browsing world wonderfully connected.
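Here is a rough sketch of what that hard-coding might amount to: a table mapping a movie's attributes to context-menu actions. The action labels come from the scenario above, but the URL templates are hypothetical, not real Netflix, IMDb, or YouTube endpoints:

```python
from urllib.parse import quote_plus

# Hypothetical templates mapping a movie attribute to a service action.
ACTIONS = {
    "title":    ("Rent on Netflix",
                 "http://www.netflix.com/Search?v1={}"),
    "director": ("Find more movies by {value}",
                 "http://www.imdb.com/find?q={}"),
    "star":     ("Show videos with {value} on YouTube",
                 "http://www.youtube.com/results?search_query={}"),
}

def context_menu(movie):
    """Build (label, url) menu entries from whatever attributes exist."""
    menu = []
    for attr, (label, url) in ACTIONS.items():
        value = movie.get(attr)
        if value:
            menu.append((label.format(value=value),
                         url.format(quote_plus(value))))
    return menu

memento = {"title": "Memento", "director": "Christopher Nolan",
           "star": "Carrie-Anne Moss"}
for label, url in context_menu(memento):
    print(label, "->", url)
```

Because the browser knows the movie's attributes, every menu entry is generated, not typed by the user.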

Because we tagged the movie 'Memento', we can now instantly search for 'Thriller' in books on Amazon, find podcasts about 'Crime' on Odeo, or see what people recently tagged as 'Drama' on del.icio.us. If we are looking at Madonna's music, we can instantly find her latest album, along with the latest pop and dance music. If we are looking at an iPod, we can instantly find other products by Apple, other iPod models, and pictures of iPods on Flickr. This is not AI; this is common sense.

Microformats and the long wait for the Semantic Web
We have been talking about this functionality for years. Sir Tim Berners-Lee has been pushing hard for a Semantic Web since the early days of the web. We need semantics on the web; HTML just does not cut it. The loss of structure and semantics caused by HTML leads to a loss of productivity and, frankly, a waste of our time. How and when do we get to the Semantic Web?

The answers are not so clear. The reality is that we have billions of HTML pages out there. Our best bet right now is microformats, which embed semantic information into HTML. Microformats are the right approach because they offer us hope and a road to the Semantic Web. It is not a straight road, however. We need more tools that help us extract semantics and annotate existing web content with microformat notation, and we need to invent microformats for everyday things like books, movies and wine.
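For a taste of how microformats work, here is a minimal sketch that pulls semantic properties out of hReview-style markup, where meaning rides along in ordinary HTML class attributes. The fragment and the two properties extracted are illustrative, not a complete hReview implementation:

```python
from html.parser import HTMLParser

# An hReview-like fragment: plain HTML, but the class attributes
# ('item fn' for the name, 'rating' for the score) carry semantics.
HTML = '''
<div class="hreview">
  <span class="item fn">Memento</span>
  <span class="rating">5</span>
</div>
'''

class MicroformatParser(HTMLParser):
    """Collect the text of elements whose classes mark semantic fields."""
    def __init__(self):
        super().__init__()
        self.props = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "fn" in classes:
            self._current = "fn"
        elif "rating" in classes:
            self._current = "rating"

    def handle_data(self, data):
        if self._current and data.strip():
            self.props[self._current] = data.strip()
            self._current = None

parser = MicroformatParser()
parser.feed(HTML)
print(parser.props)  # -> {'fn': 'Memento', 'rating': '5'}
```

Nothing about the page changes for a human reader; a semantics-aware browser, however, can recover a typed object instead of a wall of text.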

While the industry struggles to standardize, end users continue struggling to find and organize their information. In the meantime, the browser needs to step in and help. Building semantics into the browser is a good thing, and it is the right thing, because everyone already uses a browser. We do not need a centralized uber-intellect trying to answer all of our questions. Instead, we need a personalized browser on every desktop, in every cell phone and PDA, focused on saving time for the individual user. This distributed model works well in computers and in complex systems.

But we are going to get there, right?
So who is working on making the browser smart? Surprisingly, the heavyweights seem to be on the sidelines. Perhaps the bloody battles of the Netscape vs. Microsoft browser war still bring back bitter memories, or perhaps something is being built in complete secrecy. Who knows? But we do know that there are people working on the problem in the open. Here are some examples.

First, notably, there is Flock (http://www.flock.com) – the browser for you and your friends. Flock is on a mission to integrate with the best web services to help people better collaborate online. Flock helps us manage our del.icio.us and Shadows bookmarks, work with photos on Flickr and post to various blogs.

Flock's strategy for integrating web services involves a so-called topbar, which you can see in Figure 5 above. The topbar changes depending on which service the user is interacting with. For example, it can display Flickr photos, Yahoo! maps or news. The jury is still out on the usability of this, but the guys at Flock definitely get the productivity vibe. They work hard to make things simple, that's for sure.

Another example of building smarts into the browser is the Attention Recorder from Attention Trust (http://www.attentiontrust.org). This non-profit counts many prominent web 2.0 people among its members, including Michael Arrington and Stowe Boyd. The organization produced an Attention Recorder extension for Firefox, which simply records each URL you visit, timestamps it, and stores it in one of the approved services. The premise, and the promise, of the Attention Trust organization is to deliver personalization based on the user's attention. Here is a quote from their Principles:

When you pay attention to something (and when you ignore something), data is created. This "attention data" is a valuable resource that reflects your interests, your activities and your values, and it serves as a proxy for your attention.

Right now, this is still in the beta/data collection phase, but it is obviously heading in a very interesting direction.
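In the spirit of that principle, here is a rough sketch of the kind of record such a recorder might keep, a URL plus a timestamp, and one trivial use of the resulting attention data. Field names and the analysis are illustrative, not taken from the actual Attention Recorder:

```python
import time

class AttentionLog:
    """A toy attention recorder: one record per page visit."""
    def __init__(self):
        self.records = []

    def visit(self, url, ts=None):
        # Record the URL and when it was visited.
        self.records.append({"url": url,
                             "time": ts if ts is not None else time.time()})

    def most_visited(self):
        # The simplest possible 'attention' signal: a visit count.
        counts = {}
        for r in self.records:
            counts[r["url"]] = counts.get(r["url"], 0) + 1
        return max(counts, key=counts.get) if counts else None

log = AttentionLog()
log.visit("http://www.imdb.com", ts=1)
log.visit("http://www.netflix.com", ts=2)
log.visit("http://www.imdb.com", ts=3)
print(log.most_visited())  # -> http://www.imdb.com
```

Even this crude log already "reflects your interests", which is exactly the resource the Attention Trust principles describe.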

The last example is a brand new startup called adaptiveblue, of which I am the CEO. The vision of adaptiveblue is to develop new browser technologies that deliver a personalized web experience, enhance productivity and save time. Our first product, the blueorganizer Firefox extension, is now in private beta. The blueorganizer addresses most of the issues discussed in this article. You can learn more about it and try it, by visiting http://www.adaptiveblue.com.

The smart browser circa 2010
We do not know who is going to get there first. It does not matter. What matters is that we can get done what we need to get done, despite the accelerating pace of our society. Increasingly, we spend more of our time working on the web. Sooner or later, everything will become the web. So it is important that the one tool we use to interact with the web, the browser, raises the bar and helps us out.

What we need is for the browser to 'understand' what we are doing and to save us time. To do that, it will have to know about what we have looked for in the past and what we are looking for now. It will need semantics. To achieve these ambitious goals, the next generation of the browsers will need to: 

- Embed basic everyday concepts like books, movies and electronics
- Preserve semantics and not lose important information
- Encourage and simplify tagging of content
- Continuously build and update the set of user experiences
- Focus on usability and help the user do things faster
- Embrace the design-for-productivity model

How can we get this smart browser now? We need to ignite another browser war. The smart browser will be born in a battle of web giants and startups. It will be a product of imagination, struggles over standards, and long sleepless nights. But at the end of the day, like all great inventions, it will be worth it, because it will make our lives easier.

More Stories By Alex Iskold

Alex Iskold is the Founder and CEO of adaptiveblue (http://www.adaptiveblue.com), where he is developing browser personalization technology. His previous startup, Information Laboratory, created an innovative software analysis and visualization tool called Small Worlds. After Information Laboratory was acquired by IBM, Alex worked as the architect of IBM Rational software analysis tools. Before starting adaptiveblue, Alex was the Chief Architect at DataSynapse, where he developed the GridServer and FabricServer virtualization platforms. He holds an M.S. in Computer Science from New York University, where he taught an award-winning software engineering class for undergraduate students. He can be reached at alex.iskold@gmail.com.

Most Recent Comments
Alex Iskold 05/27/06 12:00:11 PM EDT

Will,

I've been following Flock pretty closely for the past 6 months - you've been doing interesting and important work. Usability is certainly critical and something that can make or break a product like this. I will keep checking it out and will send you my feedback.

Alex

Alex Iskold 05/27/06 11:56:43 AM EDT

Tony,

Thanks for your comment. It is difficult to cover everything in one article, but I think that Gmail labels are a great example of tagging. With respect to where things are stored, I also agree. The profile needs to be stored on the server. The browser needs to fetch this profile from the server and create a personalized web experience on the client side.

Alex

Will Pate 05/26/06 03:24:30 AM EDT

Hey Alex,

Thanks for mentioning Flock. Our first beta, Cardinal, will be coming out soon. Our new director of User Experience, Will Tschumy, has been spearheading some major usability improvements that you should see in the release. We're continuing to try and make things simple, as you said. If you check out the beta, we'd love to know what you think.

Cheers,

Will Pate
Community Ambassador, Flock

TonyL 05/25/06 09:52:54 PM EDT

Thanks for the good article on the future of browsers. I was wondering how come you make no mention of Gmail's great 'labels' feature, which is essentially email tags. Secondly, the Gmail tags are stored on the server and not in your browser... is that where we finally want the web (X.0) to go: all the sites we use store the tagging information, so we don't have to move our browser profile around from machine to machine?
