An Introduction to Client Latency

Speed as perceived by the end user is driven by multiple factors, including how fast results are returned

What is client latency?
Let's face it, users are impatient. They demand a blazingly fast experience and accept no substitutes. Google performed a study in 2010 showing that when a site responds slowly, visitors spend less time there.

Speed as perceived by the end user is driven by multiple factors, including how fast results are returned and how long it takes a browser to display the content.

So while the effects of poor performance are obvious, it makes one wonder about the relationship between client latency and the "perception of speed". After all, the user can trigger many state-change events (page load, submitting a form, interacting with a visualization, etc.), and all of these events have an associated latency to the client. However, are certain types of latency more noticeable to the user than others?

Let's look at all the different ways latency can creep in throughout the "request-to-render" cycle.

Time To First Lag
The first place for potential client latency is the infamous Time To First Byte. From a user perspective, this is the worst kind of client latency, as it leaves the user with the dreaded "white screen of death".

"White Screen of Death" in Google Chrome

Time To First Byte (TTFB) is the duration between a user making an HTTP request and the first byte of the page being received by the browser. The following is a typical scenario for TTFB:

Two seconds to first byte ... but at least everything else is fast!

  • User enters a URL in browser and hits enter
  • The browser parses the URL
  • The browser performs a DNS lookup (Domain => IP address)
  • The browser opens a TCP connection and sends the HTTP Request
  • (Wait on the network)
  • The browser starts receiving the HTTP Response (first byte received)
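
If you want to put a rough number on this for your own pages, the browser's Navigation Timing API (available in most modern browsers) exposes timestamps for the steps above. A minimal sketch, assuming the script runs after the response has started arriving:

// Rough TTFB measurement using the Navigation Timing API.
// responseStart marks the arrival of the first response byte;
// navigationStart marks the start of the navigation.
var timing = window.performance && window.performance.timing;
if (timing) {
  var ttfb = timing.responseStart - timing.navigationStart;
  console.log('Time To First Byte: ' + ttfb + ' ms');
}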

As you can see, there are many places in this segment where latency can rear its ugly head. The DNS lookup may fail, the web server may be under heavy load and queueing requests, network congestion may be causing packet loss, or soft errors from cosmic rays may be flipping bits.

This type of latency has a negative effect on the user, who gets stuck looking at a loading/waiting animation. This becomes a true test of the user's patience, as a long TTFB can eventually lead the user to abandon the HTTP Request and close the browser tab.

Client Latency and the DOM
Now that the HTTP Response is pouring in, the browser engine can start doing what it does best...display content! But before the browser can render (or paint) content on the screen, there is still more work to be done!

NOTE: Not all browser engines are created equal! Consult your local open source browser source code repository for detailed inner workings.

As the browser engine receives HTML, it begins constructing the HTML DOM tree. While parsing the HTML and building the DOM tree, the browser looks for any assets (or sub-resources) so it can initiate a download of their content.

So now that the HTTP Response has arrived, let's take a closer look at the latency associated with CSS and JavaScript sub-resources and how it affects the user experience.

CSS
CSS-related latency gives the user the impression of a "broken page". This comes in two forms: a "flash of unstyled content" (FOUC), where the page appears unstyled for a short period and then flickers into the right design, and a stylesheet that never loads, in which case the DOM content simply remains unstyled. While the latter isn't ideal, it's manageable because the content is still available to the user, just in a degraded state.

Let's look at the effect of latency on styling and rendering the DOM.


  • The browser is parsing the HTML in the HTTP Response
  • When the browser locates a link tag, it initiates a non-blocking download of the external CSS stylesheet
  • (Wait on network)
  • Once the download completes, the browser engine begins to parse the CSS
  • While parsing CSS, the browser engine is building all the CSS style rules and begins matching DOM elements to CSS styles
  • Once complete, the browser engine then applies the style rules to the DOM nodes by constructing a render tree
  • This builds a layout, which the browser then renders or paints to the screen
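
The network wait in the steps above is measurable from the page itself. A small sketch using the Resource Timing API (supported in most modern browsers) that logs how long each stylesheet download actually took:

// Log the download duration of each resource initiated by a link tag
// (i.e. external stylesheets), using the Resource Timing API.
var resources = window.performance ? window.performance.getEntriesByType('resource') : [];
resources.filter(function (entry) {
  return entry.initiatorType === 'link';
}).forEach(function (entry) {
  console.log(entry.name + ' took ' + Math.round(entry.duration) + ' ms');
});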

Latency in this segment is highly visible to the user, as it's the last hurdle to overcome before we can actually display content. The first potential bottleneck is the placement of the stylesheet tag. We want the stylesheet to be downloaded as soon as possible, so we can progressively render the page. Thus, use your HEAD and Put Stylesheets at the Top. The user-visible effect of not following this is a "flash of unstyled content" or a "white screen of death" (depending on the browser).
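
If you want to see when your users actually stop staring at a blank screen, the Paint Timing API (in browsers that support it) reports first paint and first contentful paint. A minimal sketch:

// Log first-paint and first-contentful-paint, where supported.
var paints = (window.performance && window.performance.getEntriesByType)
  ? window.performance.getEntriesByType('paint')
  : [];
paints.forEach(function (entry) {
  console.log(entry.name + ': ' + Math.round(entry.startTime) + ' ms');
});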

Our next stop on the latency express is the network. We always want to use an external stylesheet, but this requires an extra download. So we want this download to be fast, optimized for the user's location, and highly reliable. If you haven't been living under a rock, then you know to Use a Content Delivery Network (CDN). The user-visible effect of not following this is slower loading of styled content: your web server has to handle extra requests to serve the assets (increased load), and the download can be slower for users in geographically distant locations.

Finally, how you write your CSS is the last chokepoint. Inefficiently written CSS causes the browser to take longer to build a complete render tree, so the user-visible effect is a slower page render. Fortunately, this is easily avoidable if you just Write efficient CSS.

Following CSS best practices will not only improve your user experience but also provide your users with the appearance that your pages are loading faster.

JavaScript
JavaScript-related latency can have several different effects on the user experience: clicking links that don't seem to do anything, stalled loading of the page, or a "laggy" feeling when scrolling through a page.

There are many places where client latency can appear with JavaScript, so let's take a highly simplified look at how the browser deals with JavaScript.

  • The browser is parsing the HTML in the HTTP Response
  • When the browser locates a script tag, it initiates a blocking download of the external Javascript file
  • (Wait on network)
  • The browser parses the Javascript file
  • The browser executes the Javascript file
  • (The browser is no longer blocked)
  • Once complete, if the Javascript made changes to the DOM, this forces the construction of a new render tree.
  • This builds a layout, which the browser then renders or paints to the screen

You read that correctly: the browser blocks when downloading JavaScript files. This can be a real deal breaker for users and can stop a progressively loading page dead in its tracks. This blocking is a result of the potential use of document.write, which can write HTML or JavaScript to the document. This means the user has to wait patiently for the JavaScript to download, parse, and run before any progress can be made with the page rendering. To avoid this type of client latency, you want to Put Scripts at the Bottom or use async or defer script tags.
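
One non-blocking alternative worth sketching (the script path here is just a placeholder) is injecting the script dynamically, since dynamically inserted scripts don't block the HTML parser:

// Dynamically injected scripts do not block the HTML parser.
// '/js/app.js' is a placeholder path for your own script.
var script = document.createElement('script');
script.src = '/js/app.js';
script.async = true; // run as soon as it arrives, without preserving order
document.head.appendChild(script);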

All sub-resources have been downloaded, the page has been rendered, and now the user gets to actually do something. As the user navigates around your web application, JavaScript executes as they trigger state-changing events.

Inefficient or slow-executing JavaScript has the effect of reducing the browser's render/paint rate. This can cause "jank", defined as "problematic blocking of a software application's user interface due to slow operations." Without going into great detail, it's important to note that this type of latency is highly noticeable to the user. Understanding how to properly use your developer tools is key in the battle against jank. Visit jankfree.org for more information.
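
To sketch one common mitigation (the selector and threshold here are made up for illustration), expensive visual work triggered by scrolling can be funneled through requestAnimationFrame so it runs at most once per frame:

// Coalesce scroll-driven visual work into animation frames instead of
// running it on every scroll event. The selector and threshold are placeholders.
var ticking = false;
window.addEventListener('scroll', function () {
  if (ticking) { return; }
  ticking = true;
  window.requestAnimationFrame(function () {
    var header = document.querySelector('.site-header');
    if (header) {
      header.classList.toggle('compact', window.scrollY > 100);
    }
    ticking = false;
  });
});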

JavaScript that heavily manipulates the DOM is prone to reduced performance and increased client latency due to the triggering of excessive reflows. Excessive reflows will make the user's browser stutter, which is quite noticeable. Every time you change the DOM, you trigger a reflow, which forces the construction of a new render tree and then a re-paint of the screen. The DOM is slow, so it's best to take a batch approach when dealing with it: do all your DOM reads first, then do all your DOM writes (and minimize the number of DOM writes).

// Bad! Interleaving reads and writes forces extra reflows: each write
// invalidates the layout, so the next read makes the browser recalculate it.
var height = $('.container').height();
$('.container').height(height + 100);
var width = $('.container').width();
$('.container').width(width + 100);

// Good! Batch the reads first, then the writes, so the layout is only
// recalculated once.
var height = $('.container').height();
var width = $('.container').width();
$('.container').height(height + 100);
$('.container').width(width + 100);

Finally, client latency can also appear when you make an XMLHttpRequest in JavaScript. The user yet again has to wait on the network; however, clever UX tricks (i.e. loading and transition animations) can help make this wait feel less noticeable.
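
As a minimal sketch of that idea (the element ids and URL are hypothetical), show an indicator while the request is in flight and hide it when the response lands:

// Show a loading indicator while the request is in flight so the wait
// feels shorter. The element ids and URL are placeholders.
var spinner = document.getElementById('loading-spinner');
spinner.style.display = 'block';

var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/report');
xhr.onload = function () {
  spinner.style.display = 'none';
  document.getElementById('results').textContent = xhr.responseText;
};
xhr.onerror = function () {
  spinner.style.display = 'none';
};
xhr.send();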

Conclusion
At the end of the day, the performance of your web application directly affects the user experience. Latency is a given in any networked application, so understanding how to mitigate its effects on your application will help improve performance and the overall user experience.

More Stories By Dan Riti

Dan Riti is a software developer at AppNeta with a passion for Python, JavaScript, and music with a lot of bass.
