Real-World AJAX Book Preview: Working with Asynchronous Server Content

This content is reprinted from Real-World AJAX: Secrets of the Masters published by SYS-CON Books. To order the entire book now along with companion DVDs for the special pre-order price, click here for more information. Aimed at everyone from enterprise developers to self-taught scripters, Real-World AJAX: Secrets of the Masters is the perfect book for anyone who wants to start developing AJAX applications.

Working with Asynchronous Server Content
One advantage of learning to work with JavaScript code asynchronously is that it makes explaining the XMLHttpRequest object, arguably the cornerstone of AJAX, much easier.

Until roughly five years ago, working with the Web tended to be almost exclusively a one-way proposition - information flowed from the server to the client (the user agent or browser) and it did so only once. At least this was the way it appeared to most Web users and the vast majority of Web site designers. However, even this wasn't really the complete story. When you downloaded a Web page, the actual process was a little more complicated:

  • The initial HTML page was downloaded and as it loaded it would be automatically parsed by the browser engine.
  • If the HTML page contained images, then these images would be downloaded asynchronously under separate threads, and these threads would in turn also handle the rendering of the images to the browser page. The image rendering system would also check to make sure the images hadn't already been downloaded and made available in the browser cache.
  • If objects were embedded using either the <object> or <embed> tag, these would also be downloaded asynchronously, and would typically run under separate processes that would do their own caching and download management.
  • Finally, if iframes were used, the iframes would handle their own downloading process independent of the initial Web page.
Ultimately, each of these processes would also require the use of some form of socket - a dedicated internal network "pipe" that communicates with the server via a specified protocol. Since only a limited number of such pipes are available, the browser is typically constrained to loading only a few such items at a time (which is one of the reasons why, in a page with multiple images, only a few of them will be loading at any given moment).

Introducing the XMLHttpRequest
In 1999, Microsoft introduced a new object, XMLHttpRequest(), designed to make it possible to open up a socket under user control. Unfortunately, the name is misleading in several respects:

  • While optimized for use with incoming XML content, it can in fact be used with any text content, including binary content that has been converted into some text representation (such as binHex).
  • While it's been optimized for use with HTTP content, it can work with certain other protocols as well, depending on the implementation.
  • Finally, while it was originally designed to request content from the server, it can, in fact, be used to send rich content, including XML and encoded binary, to the server as well.
Given this, it's perhaps not surprising that it took a long time for this object to really begin to realize its potential (though Microsoft used it internally for applications for quite some time). In 2003, as part of the Mozilla revitalization project, the Mozilla team decided to implement a version of the XMLHttpRequest object in Firefox. Roughly a year after that, Google decided that with both Firefox and Internet Explorer using this technology, they could safely target their Gmail service to roughly 90% of the browser market, and very quickly Opera, Safari and Konqueror all followed suit. Finally, the most recent version of Internet Explorer (7.0) was modified so this object could be invoked without the fairly complex ActiveX shell, bringing it in line with the Mozilla implementation.

In early 2006, there was so much momentum behind the XMLHttpRequest object implementation that the W3C established a Working Group to formalize this object as a standard. The specific interface for the XMLHttpRequest object is given in Table 2.10.

The XMLHttpRequest object solves a number of problems, not the least of which is the simple one of getting XML content into a DOM Object, or even into a Web page. For instance, if you needed to load a DOM from an external file (myXMLFile.xml) on the server (assuming it's in the same folder), you could retrieve it either synchronously as:

var http = new XMLHttpRequest();
http.open("GET", "myXMLFile.xml", false);
http.send(null);
var doc = http.responseXML;

or asynchronously as:

var doc = null;
var http = new XMLHttpRequest();
http.open("GET", "myXMLFile.xml", true);
http.onreadystatechange = function(){
      if (http.readyState == 4){
         doc = http.responseXML;
      }
   };
http.send(null);
These represent the two forms (synchronous and asynchronous) of almost all XMLHttpRequest uses, and in the main they are similar - open a connection, set the appropriate parameters, send the message, then wait for completion to get the response. The send() method can of course send content (as the name implies), but for HTTP GET calls it more typically just sends a null value. (Note that Internet Explorer assumes a null if you pass no parameters, but Mozilla doesn't, so you should always include the null value explicitly in cross-platform code.)

The use of synchronous vs. asynchronous calls is important here. Synchronous calls are in-process calls, which means that the system essentially freezes until some response comes back. If a connection is established but then fails, the HTTP object could potentially hang indefinitely, which is decidedly a bad thing for your system's responsiveness.

Asynchronous calls, on the other hand, occur out-of-process, which means that while your code becomes more complex - your processing must be done in an invoked function rather than serially after the send() statement - you also have more control over things when they fail.

The control is further extended via the readyState property and the onreadystatechange event, the only event common to both Internet Explorer and Mozilla. The readyState property can hold one of four potential values, as shown in Table 2.11.

If you want to ensure that the content is completely usable, check to see that the readyState value has been set to 4, as shown in the asynchronous example above. (The synchronous example will only unblock the call once the readyState has been set to 4 implicitly, so no test is required there.)

The one thing you can't do with the events alone is determine whether the call has timed out - or handle it when it does. Fortunately, this requires only a slight amendment to the asynchronous call:

var isProcessed = false;
var timeoutValue = 5000; // timeout in five seconds
var doc = null;
var http = new XMLHttpRequest();
http.open("GET", "myXMLFile.xml", true);
http.onreadystatechange = function(){
      if (http.readyState == 4){
         doc = http.responseXML;
         isProcessed = true;
      }
   };
var timeoutToken = window.setTimeout(function(){
      if (!isProcessed){
         alert("Download has timed out!");
         http.abort();
         window.clearTimeout(timeoutToken);
      }
   }, timeoutValue);
http.send(null);

In this case, a flag is set up (isProcessed) that determines whether the request is processed in the required interval. If it hasn't been, then a notification is sent and the HTTP process is aborted - the call is cleared and the HTTP object will take no further action. Then the original setTimeout() call is also cleared, though in this case that's not completely necessary (once the setTimeout function is processed, it will clear automatically).

As with fishing, simply because you have something on the line doesn't necessarily mean you've caught a fish. It's entirely possible, for instance, that the server can't find the Web page in question (the dreaded 404 error) and has instead sent a page back detailing this information. From personal experience, you can spend hours trying to figure out why your handy AJAX widget doesn't seem to want to display content, when a simple check of the server message might reveal that it's telling you that you've mistyped the filename.

As a consequence, you should check the status code after you've retrieved the content:

if (http.status != 200){
   // The server reported a problem - surface it rather than parsing the body:
   alert("HTTP error " + http.status + ": " + http.statusText);
}
else {
   // Status 200 (OK) - it's safe to work with the response content:
   var doc = http.responseXML;
}

In general, the XMLHttpRequest system shares the same socket system as the rest of the browser. This means that when you download a resource from the Web, the browser will automatically cache this resource. This is great in those cases where the XML resources are static, but if you're dealing with a GET-based Web Service, cached content can prove to be a pain. Fortunately, you can sidestep this by setting request headers that tell the browser and server not to cache the content when it's requested - specifically, by setting the Cache-Control header to no-cache:
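A minimal sketch of that header call (the helper name getUncached is illustrative, not from the book; note that setRequestHeader() must be called after open() and before send()):

```javascript
// Sketch: fetch a resource while asking the browser and server not to serve a
// cached copy. `http` is an XMLHttpRequest (or compatible) object;
// getUncached is a hypothetical helper name.
function getUncached(http, url) {
    http.open("GET", url, true);
    // Headers can only be set after open() and before send():
    http.setRequestHeader("Cache-Control", "no-cache");
    http.send(null);
}
```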


While on the subject of Web Services, most such services (except SOAP-based ones) work by sending parameters to the server. In the case of GET-based services, you'd add the parameters to the query string. For instance, let's say that you had a Web Service that would return the exchange value between two currencies, given their abbreviated names.

If the service uses a GET-based protocol, you'd pass the parameters in the open method:

http.open("GET","http://www.currencyExchange.com/ws/convert?from=USD&to=CND&amount=10000");
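Hand-building query strings like this gets brittle once values contain spaces or reserved characters; a small helper (buildQuery is a hypothetical name, sketched here) can escape each pair with encodeURIComponent() before appending it:

```javascript
// Sketch: assemble a GET URL from a base address and a map of parameters,
// escaping every name and value along the way.
function buildQuery(base, params) {
    var pairs = [];
    for (var name in params) {
        pairs.push(encodeURIComponent(name) + "=" +
                   encodeURIComponent(params[name]));
    }
    return base + "?" + pairs.join("&");
}
```

The resulting URL can then be passed to the open() method exactly as above.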

If the content being submitted is fairly long (or if the Web Service expects it), you should use the POST method instead, sending the information as ampersand-delimited name/value pairs:
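A sketch of that POST call against the (hypothetical) exchange service from the GET example; note that the name/value pairs now travel in the body of send() rather than on the URL:

```javascript
// Sketch: submit the conversion parameters via POST. `http` is an
// XMLHttpRequest (or compatible) object; the URL is the illustrative
// service from the text.
function postConversion(http) {
    http.open("POST", "http://www.currencyExchange.com/ws/convert", true);
    // Tell the server how the body is encoded:
    http.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    // Ampersand-delimited name/value pairs go in the body, not the URL:
    http.send("from=USD&to=CND&amount=10000");
}
```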


Note that if you have an XML DOM object (xmlDom), you can set the Content-Type header to text/xml and send the DOM as XML:
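A sketch of that variant (again with a stand-in helper name and URL); both the IE and Mozilla implementations will serialize a DOM object passed to send():

```javascript
// Sketch: POST an existing DOM (xmlDom) as the request body.
// `http` is an XMLHttpRequest (or compatible) object.
function postXML(http, url, xmlDom) {
    http.open("POST", url, true);
    // Announce the body as XML so the service parses it accordingly:
    http.setRequestHeader("Content-Type", "text/xml");
    // The DOM is serialized and sent as the request body:
    http.send(xmlDom);
}
```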


It should be noted that while it's possible to use other HTTP commands beyond POST and GET (especially with the IE component), the WebDAV commands were generally not supported under Mozilla when this was written, so you should use such WebDAV extensions very carefully.

Additionally, the XMLHttpRequest is generally sandboxed in Web pages to work only with the same server as the Web page that the request was made from. This means that if you want to implement something like a news feed viewer, you either have to work outside of this context (say, in a browser extension) or use some kind of server-side redirect capability to work with specific feeds.

Finally, this chapter has assumed the use of Internet Explorer 6.0 Service Pack 2 or above for the IE implementation of the XMLHttpRequest stack, but if you're working with older versions of IE you have to invoke the object specifically as an ActiveX Control:

var http = new ActiveXObject("Microsoft.XMLHTTP");

otherwise the interfaces are identical.
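Putting the two constructors together, the usual cross-browser construction pattern looks roughly like this (a sketch; it assumes a browser-style global window object):

```javascript
// Sketch: create a request object whichever way the browser supports.
function createRequest() {
    if (window.XMLHttpRequest) {
        // Mozilla, Safari, Opera, IE 7+: the native object
        return new window.XMLHttpRequest();
    } else if (window.ActiveXObject) {
        // Older IE: fall back to the ActiveX control.
        // "Microsoft.XMLHTTP" is the classic progid; versioned
        // "Msxml2.XMLHTTP" progids also exist.
        return new window.ActiveXObject("Microsoft.XMLHTTP");
    }
    return null; // no support at all
}
```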


More Stories By Kurt Cagle

Kurt Cagle is a developer and author, with nearly 20 books to his name and several dozen articles. He writes about Web technologies, open source, Java, and .NET programming issues. He has also worked with Microsoft and others to develop white papers on these technologies. He is the owner of Cagle Communications and a co-author of Real-World AJAX: Secrets of the Masters (SYS-CON books, 2006).

