
AJAX on the Enterprise

Enterprise Matters

In Star Trek, Scotty – James Montgomery Scott – was my favorite character, perhaps inevitably. Spock was always the cool and collected uber-genius, inscrutable and forced into an emotional straitjacket, and while the parallels to the realpolitik of the time are obvious, to me Spock has always been the epitome of the pure ivory-tower researcher. Scotty, on the other hand, was the engineer, in many ways the ultimate hacker. Spock may have been able to tell you what properties of dilithium would induce warp speed, but Scotty knew exactly how to crack the damn crystals in such a way as to eke out that last 0.5 warp factor necessary to escape the baddies chasing the Enterprise.

Scotty knew about estimates – and how much you could pad an estimate to ensure that you got the correct time necessary to complete your work down to the minute. He was not above a brawl or two, but when it came right down to it, a vacation was the time it would take to get to that stack of Linux magazines from 2215 that you’ve been putting aside for the last five years and just read.

The Enterprise needed Scotty far more than it needed Kirk or Spock or even McCoy, yet he was always little more than an odd bit player, the one who was never on the bridge…unless he was repairing one of the computer panels that everyone else kept falling into every time the gravitational system failed, which usually didn’t happen because of anything that Scotty did, but because Kirk seemed to have absolutely no sense of restraint or the cost involved in replacing one of those warp nacelles. And AJAX…let me tell you about AJAX on the Enterprise…

I…oh, I’m sorry…this piece should have been about AJAX in the enterprise. Oops…Um, okay, the slides are already prepared, and I’m going to be in serious trouble if I have to redo this thing from scratch in front of such an august group of people as yourselves…so how about letting me tell you a little bit about AJAX on the Enterprise, and we’ll see if maybe, just maybe, there are a few nuggets of wisdom (or at least crystals of dilithium) that we can extract from all this when dealing with the issues of AJAX in the enterprise.

AJAX: The Five-Year Mission

In the introduction to the early Star Trek episodes, the hope of NBC (or at least Gene Roddenberry) was fairly clear – the Enterprise was on a five-year mission. Unfortunately for them, they managed to get through only three before the axe fell (and not surprisingly, when Patrick Stewart’s stentorian tones introduced The Next Generation two decades later, it had become a “continuing mission”).

However, I believe that there was something about that five-year bit that’s actually pretty important in the here and now. In the 1960s, Central Planning was as much a part of the American economy as it was the old pre-perestroika Soviet economy, and the five-year plan described what was often taken as a convenient metric for how far one could plan before things became too unpredictable.

Five years also seems to be about the lifespan that it takes for “major” technologies to go from being a good idea to becoming foundational. (Note that this differs fairly significantly from product marketing lifecycles, which seem to have about a three-year cycle from inception to obsolescence). I believe that we’re at one of those interesting transitional points where things are really changing in radical ways, the end of one “five-year mission” and the beginning of another, waiting only for Picard to make it so.

Five years ago, several very interesting things were happening, both in software and in business in general. The tech sector was collapsing, warp shields blowing left, right, and center. Now, to someone who’s weathered a few of these five-year plans, the tech sector collapsing was really nothing new – it’s an industry that’s built on promises of miracles and every so often the bill comes due. People who invest in tech hoping for outsized gains are generally deluding themselves – tech always underperforms in the short term and overperforms in the long, but in ways that few people can really imagine.

However, in spite of, or more likely because of, this collapse, people who had been hoarding their cool ideas to capitalize on the next Bay Area VC suddenly found themselves unemployed and sitting in their parents’ spare bedroom with time on their hands while they waited for some response — ANY response — from an employer. So they did what computer people always do when the next boom becomes the next bust – they began to network.

Standards groups that had been rushing to get something out the door for their member companies began to slow down and actually take some time thinking about those standards. Several good ones came out between 2000 and 2002 – XSLT, XPath, XML Schema (well, maybe not schema), XForms, XHTML, DocBook (just for a break from the 25th letter of the alphabet), SVG, ebXML, RDF, and a whole host of specialized industry-specific languages from XBRL to MathML to HumanML (yup, it’s up there in OASIS – I was a member of that working group for a while).

Meanwhile, Linus Torvalds’ pet project went from an interesting hobbyist effort to looking like a standard itself, accreting stillborn commercial products that were given new life in the long tail, reinforcing the notion believed by most programmers (and espoused quietly by Scotty himself more than once) that if you get two developers communicating with one another, you get something more than twice as good as what each can develop separately, that three tend to add value proportionately, and so forth.

In other words, those five years of “downtime” were a time of real research and development, done not in hopes of getting that next crucial patent (or the million-dollar payoff) but because the work represented real needs that had to be met, and it was to everyone’s benefit to meet them. Standards matured, projects started and worked and bloomed and died, and out of the remnants came new projects and further tinkering with standards.

One of those revenant projects was the ghost of Netscape. I’m going to speak what’s heresy here in San Jose, but Netscape failed because it wasn’t good enough. You give away a perfectly good software product for free against a competitor who has billions in the bank, and while you’ll find people cheering you on, they, and you, are idiots. You will fail. Netscape failed.

It failed largely because the only way it could compete was to change the rules of the game – for the project to divorce itself from the requirements of being a business. It became a ghost in the machine, one largely sustained by the efforts of Brendan Eich, who is one of the most brilliant and insightful people I have ever met. Brendan and others in the nascent Mozilla.org movement decided to create a browser using an HTML-like language called XUL that described the critical components necessary for an application to work, set up a fairly large set of core services written in C++, then proceeded to use JavaScript, another Eich invention, to tie the pieces together.

Ahead, Warp Factor 9

Powering the Web with JavaScript

Let me tell you a little bit about JavaScript. It’s a simple language. Now, I know how to program in Java, and C++ and C# and Smalltalk and a few others that I’ve picked up over the years, but I’ve always liked the simple languages best. JavaScript has only a rudimentary definition of type, though it’ll get more in upcoming releases. PHP is simple, and it’s rapidly becoming the primary Lingua Latina of the server world, replacing Perl, which was also simple, but a little too indeterministic. John Thompson created Lingo for Macromedia Director several years ago, and much of the spirit of that language (along with JavaScript) still lies at the heart of ActionScript.

JavaScript is the most ubiquitous procedural language in the world. There are more lines of JavaScript code in existence right now, I suspect, than there are lines of code in Microsoft Vista. It’s a language that can be picked up by a bright eight-year-old, and yet now, as a programmer you can command a six-figure salary if you know how to write it well. It’s important in great part because of its role as the glue that holds Web pages together, that powers Web applications, and makes it possible for you to boldly go where no one has gone before.

There’s a lesson in that, a lesson that emerged with HTML (and that occasionally needs to be relearned on the XML side — and even on the AJAX side). Simple is good. Indeed, perhaps more than that, simple is so friggin’ essential that your development efforts will most likely fail spectacularly if you don’t embrace that fundamental notion.

JavaScript provides a development environment that’s on nearly every browser and increasingly in many other embedded applications (you can program OpenOffice in JavaScript, for instance). In other words, it’s out there for anyone to use, and increasingly managing that JavaScript should be a part of your larger developmental efforts, because in the long run lowly JavaScript will trump Java, C++, C#, and just about everything else because of that ubiquity.

JavaScript has also been evolving pretty dramatically in the last couple of years, though this evolution may not be as evident if you’ve been working in the Internet Explorer realm. Getters and setters have become pretty much standard fare, array and object manipulation has become considerably more sophisticated and powerful, and prototype-based functional programming is borrowing advanced features from languages such as Haskell and Miranda.
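As a flavor of these newer idioms, here is a minimal sketch (the object and property names are invented for illustration) showing a getter/setter pair and declarative array manipulation:

```javascript
// A getter/setter pair lets an object enforce its own invariants
// transparently to callers; filter() replaces index-driven loops.
var ship = {
  _warp: 1,
  get warp() { return this._warp; },
  set warp(factor) {
    // The setter silently clamps to the engine's maximum.
    this._warp = Math.min(factor, 9.5);
  }
};

ship.warp = 12;                  // clamped to 9.5 by the setter

var crew = ["Kirk", "Spock", "Scotty", "McCoy"];
var engineers = crew.filter(function (name) {
  return name === "Scotty";      // declarative filtering, no index loop
});
```

Callers read and write `ship.warp` as if it were a plain property; the clamping logic stays hidden inside the object.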

However, perhaps as significant has been a wholesale rewriting of the programming methodologies surrounding JavaScript to the extent that most dedicated JavaScript developers today write code that bears surprisingly little resemblance to that of even five years ago. For starters, object-oriented programming has become far more pervasive, even if the “object-ness” tends to bear only a passing resemblance to the class/property/method structures of Java or other similar languages. Code frameworks – Prototype.js (popularized by Ruby on Rails), script.aculo.us, the Google and Yahoo! AJAX frameworks, Microsoft’s Atlas, and so forth – are emerging to handle the more common use cases, such as drag-and-drop operations, as well as spurring new interest in animated dynamic effects. XML manipulation (as will be discussed) is becoming easier, and a second graphical framework, the Canvas object, is opening up additional two-dimensional graphical capabilities.
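That “object-ness” usually means sharing behavior through prototypes rather than classes. A hedged sketch of the pattern follows; the `Component` name and the chaining idiom are illustrative, not taken from any particular framework:

```javascript
// Prototype-based objects: behavior lives on the prototype and is
// shared by every instance, rather than being declared in a class.
function Component(id) {
  this.id = id;
  this.state = {};
}

// Methods added to the prototype are visible on all instances.
Component.prototype.set = function (key, value) {
  this.state[key] = value;
  return this;                   // returning this enables chaining
};

Component.prototype.get = function (key) {
  return this.state[key];
};

var panel = new Component("status-panel");
panel.set("visible", true).set("feed", "slashdot");
```

Because the prototype can be amended at any time, frameworks routinely extend built-in objects the same way, which is both the power and the peril of the approach.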

There’s even an emerging “live” object transport format, called JSON, that’s becoming useful in low-level messaging systems. JSON uses JavaScript Object Notation (hence the acronym) to represent JavaScript objects and to transport them across pipes (using the XMLHttpRequest object, discussed shortly). JSON is a lightweight alternative to XML for certain tasks, and while it’s not going to replace XML any time soon, it works especially well in setting up object bindings – though it also represents a potential security hole that needs to be watched very carefully, as will also be discussed.
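As a quick illustration of JSON on the wire (the message content is invented), and of why the security caveat matters:

```javascript
// A JSON message as it might arrive over the Message Pump.
var wire = '{"ship": "Enterprise", "warp": 9, "crew": ["Kirk", "Scott"]}';

// JSON.parse treats the message strictly as data; nothing executes.
var obj  = JSON.parse(wire);
var back = JSON.stringify(obj);   // serialize for the return trip

// Early AJAX code often used eval("(" + wire + ")") instead, which
// "works" but runs arbitrary code if the message is attacker-controlled.
// That is the security hole referred to above -- never do it.
```

`JSON.parse`/`JSON.stringify` are now built into every mainstream engine, so there is no longer any excuse for the `eval` shortcut.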

Beam Me Up, Scotty

Messaging and the XMLHttpRequest Object

While JavaScript is the engine that drives much of the AJAX movement, JavaScript by itself isn’t enough. Two factors have changed JavaScript from something that was almost an afterthought in most business circles to one that has been gaining a lot of traction – XML and the Asynchronous HTTP Messaging object, bearing the ungainly name of XMLHttpRequest object but best thought of as the Message Pump. The Message Pump can be thought of as a radio transmitter/receiver – it can retrieve content from an external source – a file, a Web server, or in some cases a database – and it can also send messages to external sources. It was originally a part of Microsoft’s Internet Explorer – and for many years it was one of the better kept secrets in that environment, but the Mozilla folks were smart enough to recognize a good idea when they saw it (and in all honesty, the prototypes for the object were there in Netscape as well, but by then too many other fires were blazing for it to make much difference).

That message pump means that you can send information from the client to the server and back from within a Web page. Of course, you can do that anyway, but the important distinction is that with the Message Pump you’re not necessarily forced into refreshing the entire page every time you need to change some aspect of it. In programming circles, this means that state management no longer has to be done exclusively within the server, but can in fact be significantly offloaded to the client.
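A minimal sketch of that pattern follows. The `createTransport` parameter is an assumption introduced here so the function can be exercised outside a browser; in a real page you would simply pass a function returning `new XMLHttpRequest()`:

```javascript
// The "Message Pump" pattern: fetch a fragment asynchronously and hand
// it to a callback, instead of reloading the entire page.
function fetchFragment(url, onLoaded, createTransport) {
  var xhr = createTransport();
  xhr.open("GET", url, true);              // true = asynchronous
  xhr.onreadystatechange = function () {
    // readyState 4 means the request is done; only then is
    // responseText guaranteed to be complete.
    if (xhr.readyState === 4 && xhr.status === 200) {
      onLoaded(xhr.responseText);          // update one region, not the page
    }
  };
  xhr.send(null);
}
```

In a browser you would call this as `fetchFragment("/status", updatePanel, function () { return new XMLHttpRequest(); })`, with `updatePanel` dropping the returned markup into a single `div`.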

Now, to take this back to the Star Trek metaphor again (hah, you thought I’d forgotten, hadn’t you!), imagine the Enterprise without transporters (or look at the final series…or maybe not). If you want to get valuable medical equipment to the colonists at Rigel 7, you have to launch a shuttlecraft filled to the brim with equipment and fuel, have it descend into a hot steamy jungle with only the vaguest hope of finding a convenient airstrip to land on, provide armed guards for the shuttle while you unload the likely heavy equipment, then take off from an unfamiliar planet in hopes of a rendezvous with the mother ship two or three days later. This was Web programming circa 1998.

With the Message Pump and some intelligent JavaScript code you suddenly turn on those transporters (though you also keep the shuttles just in case Scotty isn’t around to press the right levers). You send down an initial support staff via transporters who check the security, make the necessary arrangements, and find out what things are specifically needed. They can then call down equipment that can be positioned to within a few dozen yards of where it needs to end up, bring in additional personnel to instruct and train the locals in the use of the equipment, and while there can beam up the few serious medical cases that need to be transported to a more complete medical center. Finally, everyone beams back up to the Enterprise and spends some well-deserved time on the Holodeck.

To get back to contemporary terminology, what this means in practice is that rather than creating a single page from fairly complex components on the server and needing to maintain this information on the server, you instead push the components onto the client, and each component in turn becomes responsible for its own interactivity with the server. The state of the application in turn either ends up residing in each component, or in a client-side “model” server which all of the other components interact with.

The distinction between these two forms is important (and I’ll get to them momentarily) but one of the immediate upshots of this is that the server can in fact become dumb – it doesn’t need, in either situation, to retain anywhere near as much state as it did before for that particular session or application. This has a number of immediate consequences:

  • The server needs to send each component only once then let the component handle the presentation layer directly rather than doing this task itself for each component every time some aspect of state changes. This makes the server-side code easier to write and more modular to maintain.

  • The server can standardize on a single given transport protocol that the various components can use, meaning that you have less need for extensive server-side development of “translators” between databases and a whole host of different presentation formats.

  • The server layer becomes thinner – more a generic conduit between the database and the client than a large set of custom presentation and content pages, and in general this translates into an ability to generate more sessions for the same resources.

From the client standpoint, however, things tend to get potentially more complex. (The third law of programming thermodynamics: complexity never disappears, it only moves around.) Browsers are perhaps more homogenous with regard to interfaces than they were five years ago (especially now that Internet Explorer 7 is on the horizon), which in turn means that the amount of specialized code necessary to target the diversity of browsers has shrunk. It hasn’t disappeared entirely yet, though the good news is that the benefits of maintaining a uniform set of interfaces appear to have sunk in with just about all of the major players.

Moreover, Google, Yahoo!, and the broader open source community have been working on standard libraries to handle the small things (such as Sarissa, an open source library that provides hooks into the XSLT transformer objects on most browsers and supplies a limited JavaScript-based transformer for the few platforms where XSLT support isn’t fully enabled), making for a homogenous generalized environment without going overboard in trying to build extensive (and potentially restrictive) frameworks. These efforts are essential, since they provide the last layer of homogenization/standardization needed to use browsers as independent platforms for applications.

Send Down a Couple More Red Shirts

AJAX and Web 2.0

This point in turn raises another and in many ways more crucial one. The AJAX movement is not about calling home without refreshing the page, and it’s not about cool widgets appearing in Web pages, displaying the latest feeds from Slashdot, or neat drag-and-drop effects, though certainly all of these have a place. Instead, the primary driving motivation of AJAX is the fundamental belief that the browser is ultimately the last platform: that the Web will not truly be universal until browsers can do everything that a standalone desktop environment can do, regardless of whether there’s a multi-colored flag, a fruit, or an aquatic fowl on the start-up screen.

This shouldn’t be a radical point, but somehow it is. Your customers, the employees at your company, and you individually spend a huge amount of time in front of Web browsers, which are in turn becoming the primary interfaces for all modalities of communication. I haven’t used a standalone e-mail application in months, most of my IM communication occurs in a browser context, and increasingly my production tools exist as extensions to my Web browser. For many people, going from a Web interface to a standalone application seems a step backwards, forcing them from their primary point of contact for news, documentation, and communication into an isolated environment where they have to run the browser in the background and click back and forth to shift between the two.

AJAX has gained momentum not because someone put a Message Pump on a browser with 15% of the market, but because this move has basically catalyzed a reaction among the other browser vendors and projects and caused the Web developer sphere to shout “Enough is enough! If certain parties won’t get their act together then we will solve this problem ourselves!”

Movements are funny things, especially in technology. No one takes them seriously at first — there are no press releases, no aging rock star singing the praises of the product; usually it’s just a handful of people who recognize that there’s a problem and that the “market” isn’t rushing to solve it because there’s no immediate money in it.

Often there’s a single event that sparks the whole thing – a programmer gets frustrated because no one can find information about physics papers written at the research center where he works and puts up a small set of tools for free, a grad student who sends out a note saying that because he can’t afford to use the university’s Unix implementation he’s writing his own free one, and would anyone else like to help…events that occur almost daily now that are only important in hindsight. People pitch in not for glory or money (because there’s seldom much of either) but because most software developers are a lot like Scotty – they do things because they need to be done and the problems are interesting enough to them to make it worthwhile.

Yet these sparks are almost invariably observable only in retrospect – and what’s more, such sparks are much like those that start a forest fire – there may be dozens or hundreds of them flickering around a campfire that go nowhere because conditions aren’t right, but if the weather has been dry for too long, if the underbrush is overgrown and primed then any one of those sparks (or many of them) may be responsible for the raging conflagration.

(This same argument, by the way, is one of the most compelling I’ve seen against software patents, as important as they may seem to CEOs and investors – good ideas can only exist in a proper context – too early and there isn’t enough technology to support the concepts, too late and the ideas become obsolete. Because software developers live in a medium of common (and commonly available) ideas, it’s very rare for a truly unique idea to actually occur in this space.)

About now conditions are ripe for AJAX to occur, if by AJAX we mean the consolidation of a baseline platform of XML, JavaScript, and connectivity support across multiple browsers, the development of a methodology for building distributed applications across the Web and agreements on the part of enough market movers to abide by common conventions to create an established framework.

It’s arguable as to whether this should be called Web 2.0. It’s a nice catch-phrase, and I’ve written a few articles myself on what Web 2.0 really means. However, I think that this tends to mask the fact that what’s really going on here is essentially a continuity with what happened in the 1990s, after taking a few steps back to rejig some of the basics…most notably XML.

The argument has been made elsewhere, which I’ll repeat here, that the end of the dot-com era occurred because we pushed the prototype phase of the Web too far and thought it was complete. I’m a software developer. I practice a form of development that likely wouldn’t be out of place at any of your companies – I start with an idea, a model of where I want to go, and build it much like I would a sculptor’s maquette from clay — add a module here, rewrite a part of an API over there, building something up in pieces until I get about as far as I possibly can. However, and this is the important part, this maquette exists only as a prototype for me to understand what I need to do in the final product. Functionally it’s a mess – the API may not be consistent from one class or structure to the next, the XML may be hideously non-optimal for either performance or updating, the documentation consists mainly of // To Be Written. It will work, indeed, it might work quite well because I’ve been doing this gig for a while, but it will be almost impossible to maintain, and passing it off to another programmer at this point will frankly be an invitation for him or her to rewrite it.

However, that maquette is important to the sculptor, just as the prototype is important to a software developer. It helps both of them shape their final vision, and having completed the prototype, the developer can then go in and rebuild it right, ensuring that there’s fundamental integrity between and within the components, that the application is able to integrate properly, and that the resulting product is not only functional but (this is the critical thing) maintainable.

Your applications will start to become obsolete the moment the programmers stop working on them, because the business cases that the software was intended to solve will change – both in response to changes in the business environment and because you’ve solved the immediate business cases, which in turn opens up possibilities that weren’t open before. What this means is that your applications will spend far more time in a phase of incipient obsolescence than they did in development, which means in turn that they should be designed to age well.

Given all that, we’ve developed the prototype with Web 1.0, and like all too many products out there, the prototype was shipped. Web 2.0 is not a new Web, it’s what happens after engineers take the crash test dummy from the 100 MPH collision with a tanker truck and examine what’s left. AJAX is, as a consequence, a natural evolution of that.

Entering the Holodeck

XML Messaging/XML Presentation

The issue of XML is perhaps fundamental to this whole discussion. XML is more than just a replacement for HTML, and after a decade of XML being out there I’m not going to spend any time digging into what exactly it is. If you don’t know, ask your programmers. If your programmers don’t know…fire them. Seriously. All of your data will eventually be moving around in XML streams of one sort or another if it doesn’t already, your databases are likely increasingly speaking XQuery as well as SQL (and there are MANY MANY benefits to that), chances are that your middleware is increasingly tasked with transporting and manipulating XML, and of course your client applications are increasingly assuming one or another XML dialect to render content. That of course is not even beginning to talk about the XML services that are out there, the fact that in your verticals your customers, business partners, and competitors are already working with industry-specific XML schemas and will be expecting you to be too. If your programmers don’t know the basics of working with XML then chances are pretty good they’ll be a liability real soon now.

Keep in mind that XML is fundamentally a mechanism for abstraction. It’s not a product – it’s not even, technically speaking, a language. It’s simply a set of conventions for structuring data in particular ways and providing means to identify compositional elements in that data. I remember one client nervously viewing the medical landscape and getting alarmed that a particular hospital group was going to XML. I was personally ecstatic – it meant that the application I had developed for the client would be able to work more easily there than with those groups that were still dealing with patient records on paper (or even in SQL databases). It’s key to this whole Web 2.0 thing – the free flow of information requires a common structural language, and XML, for all its warts, is it.

However, that doesn’t mean that XML by itself is the answer, and more importantly doesn’t mean that XML itself hasn’t been changing to reflect the evolution of the Web. In particular, there are several key aspects of XML that will likely loom large in the AJAX world, and you should be looking very carefully at these as you evaluate technological investment for the next few years.

On Impulse Power

Objective XML (E4X)

This is an effort on the part of a number of different language developers to create a simplified mechanism for working with XML. JavaScript’s version is called ECMAScript for XML (E4X), PHP calls it SimpleXML, Java has XJ, and Microsoft’s .NET platform has LINQ. In nearly all cases, the intent is to undo some of the verbosity introduced with the W3C’s Document Object Model (DOM) API and make XML manipulation as easy as object manipulation is in most languages.
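To give a flavor of what E4X looked like (the element names are invented; E4X shipped in Gecko-era engines but has since been removed from mainstream JavaScript, so treat this as historical syntax rather than something to deploy):

```javascript
// Historical E4X syntax: XML as a native data type, no DOM calls.
var ship = <ship class="Constitution">
             <name>Enterprise</name>
             <registry>NCC-1701</registry>
           </ship>;

var name = ship.name;               // dotted access instead of
                                    // getElementsByTagName()
ship.captain = "James T. Kirk";     // assignment appends a <captain> element
```

Whatever the fate of E4X itself, the contrast with the equivalent DOM code is exactly the verbosity these efforts set out to eliminate.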

I’m Receiving a Message on a Subspace Frequency

Atom and XML Syndication

Syndication is for more than just blogs. Incredible amounts of information in your system, from red-shirt security types that are expendable to planets that serve Earl Grey tea, can be thought of as lists that can be presented as syndicated information. Atom is an XML format designed to be a good mechanism for presenting lists of content, and it comes with its own (openly specified) publishing protocol.
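A minimal, hypothetical Atom feed illustrating the idea; every element shown is part of the standard Atom vocabulary, though the content and URLs are of course invented:

```xml
<!-- A hypothetical Atom feed: any list in your system can look like this. -->
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Away Team Roster</title>
  <id>urn:example:roster</id>
  <updated>2006-06-01T12:00:00Z</updated>
  <entry>
    <title>Ensign Ricky (Security)</title>
    <id>urn:example:crew/ricky</id>
    <updated>2006-06-01T12:00:00Z</updated>
    <link rel="alternate" href="http://example.com/crew/ricky"/>
    <summary>Expendable; do not beam down first.</summary>
  </entry>
</feed>
```

Any generic Atom viewer can render this list without knowing anything about away teams, which is precisely the componentization point made below.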

The fact that each Atom entry can also contain a veritable forest of links of varying types and semantics also makes it a good lightweight alternative to RDF and other relational formats, especially once people start migrating to XQuery-enabled databases.

For instance, consider as an example a set of schematic diagrams (of, say, starships, just to keep in theme here), with each ship being one entry in an Atom feed. Each schematic in turn contains a breakdown of the schematic by section, and each section in turn contains a list of callouts that point to specific items of interest in that section. If each of these lists consists of entries defined with appropriate linkage structures, then this “application” essentially becomes simply a matter of pulling in external “news feeds” that contain enough data to describe particular nodes in a graph while at the same time providing unique links capable of pointing to subordinate “feeds.” Certainly such information can be expressed as RDF as well, but the fixed commonality of Atom feeds means that there’s typically enough to populate generalized components without requiring that “semantics” seep into the equation.

What’s perhaps more compelling about such syndicated feeds is that the system for displaying them assumes that such information changes over time, that the hyperlinked lists are themselves ephemeral and have some form of time or thematic relevance. Obviously, news in general fits this bill well, but so does weather information, availability of computer systems, lists of students in a given course, and so forth. An Atom list is fundamentally a cohesive “editorial” unit, with all items in the list tied together by some relevant criterion.

One of the critical issues inherent in deploying Web Services has been the question of determining how to designate list or array content. If you think of an Atom feed as an array in which each entry has a minimal set of “metadata” that can provide some context for the links contained in the feed then you can do such things as build tools that will display Atom without needing to know what the specific “payload” is, which in turn makes it much easier to componentize such viewers. This is discussed more in the final section of this article about bindings and components.

Computer? … Yes, Dear…

XQuery and XML Databases

Every era in computing has defined its own paradigms for reading and updating data. If you’re converting a relational database into XML, then sending XML back up to the server and spending time with the DOM converting it into SQL, XQuery is for you. XQuery is a lightweight (and non-XML) language for manipulating XML, based in large part on the XPath 2.0 specifications that are going gold this month.

I’ve written two books and perhaps a dozen articles and blog postings on XQuery. They were, admittedly, too far ahead of the curve – the specification for an XML-oriented query language has been underway since before 2000 and even today the formal specification strictly handles only the query (not the update) side of data management. However, one of the most interesting facets of XML databases has been the fact that a number of different mechanisms for handling updates have been tried, and the most elegant of them seem to tie into the notion of performing such updates in the same query space as used for getting XML requests in the first place.

I think it’s fair to talk about XQuery and XML databases in the same breath. The two are fundamentally tied together, and are further tied to the notion of data provider abstraction. A significant amount of the work involved in putting together a Web application of any complexity involves a translation layer to communicate between the database and the Web client. For the most part, such middle-tier services involve using some sort of data abstraction service such as ODBC, ADO, Spring, etc. to read from or write to specific fields in a database, typically using a language such as C# or PHP to handle this work.

Unfortunately, such code is remarkably fragile and verbose; it deals with information at the atomic level even when the information comes in (or needs to be produced) at a more abstract, aggregate level; and all too often it is spread out over several different functions or Web services, making maintenance costly and cumbersome.

XQuery shifts the processing of such queries (and potentially updates as well) out of the server language and into XQuery scripts.
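As a sketch of what that shift looks like (the document URI and element names here are hypothetical), a single FLWOR expression can pull an aggregate view straight from stored XML – work that would otherwise be scattered across middle-tier code:

```xquery
(: Hypothetical: summarize the orders over $100 in a stored document. :)
for $order in doc("orders.xml")//order
where xs:decimal($order/total) gt 100.0
order by xs:decimal($order/total) descending
return
  <summary customer="{$order/customer/name}">
    {$order/total}
  </summary>
```

The query reads the XML, filters it, sorts it, and produces new XML in one declarative pass, with no intermediate SQL or DOM plumbing.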

XML databases are becoming both fast and robust, and there are some interesting update extensions proposed (and integrated into open source projects such as eXist and Sleepycat’s Berkeley DB XML) that handle the update side of XML data query in a clean and seamless way. From personal experience, such databases can cut your development time significantly in the Web application space.

The Flux in Matter/Anti-Matter Converters

XSLT and Transformations

XSLT is old hat in the XML world, but it’s becoming a major part of both the client and server side of AJAX programming because most browsers now support some form of XSLT processor. For those that don’t (and as a way of smoothing over the slight variations in existing XSLT APIs), the open source Sarissa library provides cross-browser JavaScript access to XSLT, invoking native processors where available for performance.

XSLT in the hands of a good programmer is a wonder tool, especially if you can use XSLT 2.0, which also goes gold this month. It provides a means of performing exhaustive transformations from one form of XML into another, can read from multiple XML streams and produce multiple forms of output, can easily be subclassed to handle variations in formats, and works incredibly well even in bindings (which I’ll talk about shortly).
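A minimal example of that transformation style – the input vocabulary here is invented, and the stylesheet sticks to XSLT 1.0 so it runs in any browser-hosted processor – turns a list of products into an XHTML table:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml">
  <!-- Match the hypothetical <products> root and emit an XHTML table. -->
  <xsl:template match="/products">
    <table>
      <xsl:apply-templates select="product"/>
    </table>
  </xsl:template>
  <!-- One table row per product element. -->
  <xsl:template match="product">
    <tr>
      <td><xsl:value-of select="@name"/></td>
      <td><xsl:value-of select="price"/></td>
    </tr>
  </xsl:template>
</xsl:stylesheet>
```

The XSLT 2.0 features mentioned above – multiple outputs, richer type handling – build on exactly this template structure.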

One additional facet of eXist that I like is the ability to perform XSLT transformations from within an XQuery and then continue processing the results in the same query (including passing the transformation onto a conditional pipeline of other transformations). I can’t stress enough how important XSLT is even now, and how it will perhaps be the dominant mechanism for manipulating XML in the future.

Out of Space-Dock

The Move to XHTML

The distinction may seem minor – XHTML is, for the most part, simply an expression of HTML using XML rules rather than the older SGML rules – but the effects are profound. By shifting to XHTML, you gain all the manipulative tools of XML, including the ability to create arbitrary tags that can be transformed or otherwise bound, the ability to incorporate other namespaces (from the graphically oriented SVG to MathML to RDF for metadata to XForms), and the means to validate such XHTML content quickly and easily.

What’s more, you can incorporate XHTML fragments into transport formats such as Atom, or as secondary documentation in many other formats. Finally, even browsers that don’t formally recognize XHTML (such as Internet Explorer) can still treat XHTML as valid HTML with a minor change in the response header.
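A minimal sketch of that header decision (the helper function name is invented): a server-side script can inspect the browser’s Accept header and fall back to plain HTML labeling for browsers that don’t advertise XHTML support.

```javascript
// Hypothetical helper: pick the Content-Type for an XHTML page based on
// what the browser claims to accept. Browsers that understand true XHTML
// advertise application/xhtml+xml in their Accept header; Internet
// Explorer does not, so it receives the same markup labeled text/html.
function pickContentType(acceptHeader) {
  if (/application\/xhtml\+xml/.test(acceptHeader || "")) {
    return "application/xhtml+xml";
  }
  return "text/html";
}
```

The same XHTML bytes go out either way; only the label changes, which is what lets one document serve both worlds.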

More Stories By Kurt Cagle

Kurt Cagle is a developer and author, with nearly 20 books to his name and several dozen articles. He writes about Web technologies, open source, Java, and .NET programming issues. He has also worked with Microsoft and others to develop white papers on these technologies. He is the owner of Cagle Communications and a co-author of Real-World AJAX: Secrets of the Masters (SYS-CON books, 2006).
