An Introduction to Client Latency

Speed as perceived by the end user is driven by multiple factors, including how fast results are returned

What is client latency?
Let's face it, users are impatient. They demand a blazingly fast experience and accept no substitutes. A 2010 Google study showed that when a site responds slowly, visitors spend less time there.

Speed as perceived by the end user is driven by multiple factors, including how fast results are returned and how long it takes a browser to display the
content.

So while the effects of poor performance are obvious, it makes one wonder about the relationship between client latency and the "perception of speed". After all, the user can trigger many state change events (loading a page, submitting a form, interacting with a visualization, etc.) and all of these events have an associated latency to the client. However, are certain types of latency more noticeable to the user than others?

Let's look at all the different ways latency can creep in throughout the "request-to-render" cycle.

Time To First Lag
The first place for potential client latency is the infamous Time To First Byte. From a user perspective, this is the worst kind of client latency, as it leaves the user with the dreaded "white screen of death".


"White Screen of Death" in Google Chrome

Time To First Byte (TTFB) is the duration between a user making an HTTP request and the first byte of the page being received by the browser. The following is a typical scenario for TTFB:

Two seconds to first byte ... but at least everything else is fast!

  • User enters a URL in browser and hits enter
  • The browser parses the URL
  • The browser performs a DNS lookup (Domain => IP address)
  • The browser opens a TCP connection and sends the HTTP Request
  • (Wait on the network)
  • The browser starts receiving the HTTP Response (first byte received)
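
The TTFB itself is easy to compute from the browser's Navigation Timing data. A minimal sketch, using made-up millisecond values for illustration (in a browser you would pass window.performance.timing instead of the sample object):

```javascript
// Sketch: computing Time To First Byte from Navigation Timing-style values.
// In a real page, pass window.performance.timing; the sample object below
// uses illustrative millisecond values.
function timeToFirstByte(timing) {
  // First response byte received minus the start of navigation
  return timing.responseStart - timing.navigationStart;
}

var sample = { navigationStart: 0, responseStart: 2000 };
console.log(timeToFirstByte(sample) + ' ms to first byte'); // "2000 ms to first byte"
```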

As you can see, there are many places in this segment where latency can rear its ugly head. The DNS lookup may fail, the web server may be under heavy load and queueing requests, network congestion may be causing packet loss, or soft errors may occur due to cosmic rays flipping bits.

This type of latency has a negative effect on the user, who gets stuck looking at a loading/waiting animation. This becomes a true test of the user's patience, as a long TTFB can eventually lead the user to abandon the HTTP request and close the browser tab.

Client Latency and the DOM
Now that the HTTP Response is pouring in, the browser engine can start doing what it does best...display content! But before the browser can render (or paint) content on the screen, there is still more work to be done!

NOTE: Not all browser engines are created equal! Consult your local open source browser source code repository for detailed inner workings.

As the browser engine receives HTML, it begins constructing the HTML DOM tree. While parsing the HTML and building the DOM tree, the browser is looking for any
assets (or sub-resources) so it can initiate a download of their content.

So now that the HTTP Response has arrived, let's take a further look into the associated latency for both CSS and JavaScript sub-resources and how it affects the user experience.

CSS
CSS-related latency gives the user the impression of a "broken page". This comes in two forms: a "flash of unstyled content" (FOUC), where the page appears unstyled for a short period and then flickers into the right design; and the case where the stylesheet never loads at all, leaving the DOM content unstyled. While this isn't ideal, it's manageable because content is still available to the user, just in a degraded state.

Let's look at the effect of latency on styling and rendering the DOM.


  • The browser is parsing the HTML in the HTTP Response
  • When the browser locates a link tag, it initiates a non-blocking download of the external CSS stylesheet
  • (Wait on network)
  • Once the download completes, the browser engine begins to parse the CSS
  • While parsing CSS, the browser engine is building all the CSS style rules and begins matching DOM elements to CSS styles
  • Once complete, the browser engine applies the style rules to the DOM nodes by constructing a render tree
  • This builds a layout, which the browser then renders or paints to the screen
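
To get a feel for how much of this segment is spent waiting on the stylesheet download, you can inspect Resource Timing data. A rough sketch over entries shaped like the output of performance.getEntriesByType('resource'); the sample values below are made up for illustration:

```javascript
// Sketch: finding when the last external stylesheet finished downloading,
// from Resource Timing-shaped entries. In a browser you would pass
// performance.getEntriesByType('resource'); these are sample values in ms.
function lastStylesheetDone(entries) {
  return entries
    .filter(function (e) { return e.initiatorType === 'link'; })
    .reduce(function (latest, e) { return Math.max(latest, e.responseEnd); }, 0);
}

var sample = [
  { initiatorType: 'link', responseEnd: 340 },    // main.css
  { initiatorType: 'script', responseEnd: 120 }   // app.js
];
console.log(lastStylesheetDone(sample) + ' ms'); // "340 ms"
```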

Latency in this segment is highly visible to the user, as it's the last hurdle to overcome before we can actually display content to the user. The first potential bottleneck is the placement of the stylesheet tag. We want the stylesheet to be downloaded as soon as possible, so we can progressively render the page. Thus, use your HEAD and Put stylesheets at the Top. The user-visible effect of not following this is a "flash of unstyled content" or a "white screen of death" (depending on the browser).

Our next stop on the latency express is the network. We always want to use an external stylesheet, however this requires an extra download. So we want this download to be fast, optimized for user location, and highly reliable. Well, if you haven't been living under a rock, then you know to Use a Content Delivery Network (CDN). The user-visible effect of not following this is slower loading of styled content: your web server has to handle extra requests to serve the assets (increased load), and this can be slower for users in geographically distant locations.

Finally, we have how you write your CSS as the last chokepoint. Inefficiently written CSS causes the browser to take longer to build a complete render tree, thus the user-visible effect is a slower loading page render. Fortunately, this is easily avoidable if you just Write efficient CSS.

Following CSS best practices will not only improve your user experience but also provide your users with the appearance that your pages are loading faster.

JavaScript
JavaScript related latency can have differing effects on the user experience. Clicking links that don't seem to do anything, stalled loading of the page, or a "laggy" feeling when scrolling through a page.

There are many places where client latency can appear with JavaScript, so let's take a highly simplified look at how the browser deals with JavaScript.

  • The browser is parsing the HTML in the HTTP Response
  • When the browser locates a script tag, it initiates a blocking download of the external JavaScript file
  • (Wait on network)
  • The browser parses the JavaScript file
  • The browser executes the JavaScript file
  • (The browser is no longer blocked)
  • Once complete, if the JavaScript made changes to the DOM, this forces the construction of a new render tree
  • This builds a layout, which the browser then renders or paints to the screen

You read that correctly: the browser blocks while downloading JavaScript files. This can be a real deal breaker for users and can cause a progressively loading page to stop dead in its tracks. The blocking is a result of the potential use of document.write, which can write HTML or JavaScript to the document. This means the user has to wait patiently for the JavaScript to download, parse, and run before any progress can be made with the page rendering. To avoid this type of client latency, you want to Put Scripts at the Bottom or use async or defer script tags.
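
Besides the async/defer attributes, another way to sidestep the blocking download is to inject the script tag dynamically. A sketch, assuming a browser environment; the '/js/app.js' path is a placeholder:

```javascript
// Sketch: loading JavaScript without blocking the HTML parser, by injecting
// the script tag dynamically. Assumes a browser; '/js/app.js' is a
// placeholder path.
function loadScriptAsync(src) {
  var s = document.createElement('script');
  s.src = src;
  s.async = true;                 // run as soon as it arrives, don't block parsing
  document.head.appendChild(s);
  return s;
}

// Usage (browser only):
// loadScriptAsync('/js/app.js');
```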

All sub-resources have been downloaded, the page has been rendered, and now the user gets to actually do something. As the user navigates around your web application, JavaScript executes as the user triggers state-changing events.

Inefficient or slow-executing JavaScript has the effect of reducing the browser's render/paint rate. This can cause "jank", which is defined as "problematic blocking of a software application's user interface due to slow operations." Without going into great detail, it's important to note that this type of latency is highly noticeable to the user. Understanding how to properly use your developer tools is key in the battle against jank. Visit jankfree.org for more information.
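
One rough way to quantify jank is to look at the gaps between requestAnimationFrame timestamps: at 60fps, frames should arrive roughly every 16.7ms, so a much larger gap means the main thread was blocked. A sketch over sample timestamps (in a real page you would collect these from a requestAnimationFrame loop):

```javascript
// Sketch: counting "long frames" in a list of requestAnimationFrame
// timestamps (in ms). A gap well above ~16.7ms means frames were dropped.
function countLongFrames(timestamps, thresholdMs) {
  var count = 0;
  for (var i = 1; i < timestamps.length; i++) {
    if (timestamps[i] - timestamps[i - 1] > thresholdMs) count++;
  }
  return count;
}

// Smooth frames at 0, 16, 32ms, then a 120ms stall caused by slow JavaScript:
console.log(countLongFrames([0, 16, 32, 152, 168], 50)); // 1
```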

JavaScript that heavily manipulates the DOM is prone to reduced performance and increased client latency due to the triggering of excessive reflows. Excessive reflows will make the user's browser stutter, which is quite noticeable. Every time you change the DOM, you trigger a reflow, which forces the construction of a new render tree and then a re-paint of the screen. The DOM is slow, so it's best to take a batch approach when dealing with it: do all your DOM reads first, then do all your DOM writes second (and minimize the number of DOM writes).

// Bad! Interleaving reads and writes: each write invalidates the layout,
// so the following read forces an extra reflow.
var height = $('.container').height();  // read
$('.container').height(height + 100);   // write (layout invalidated)
var width = $('.container').width();    // read (forces a reflow)
$('.container').width(width + 100);     // write

// Good! Batch all the reads first, then do all the writes.
var height = $('.container').height();  // read
var width = $('.container').width();    // read
$('.container').height(height + 100);   // write
$('.container').width(width + 100);     // write

Finally, client latency can also appear when you make an XMLHttpRequest in JavaScript. The user yet again has to wait on the network; however, clever UX tricks (e.g., loading and transition animations) can make this latency feel less noticeable to the user.
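
A common pattern is to show the animation just before sending the request and hide it in the response handlers. A sketch, assuming a browser environment; showSpinner and hideSpinner are hypothetical helpers that toggle a loading element:

```javascript
// Sketch: masking XMLHttpRequest latency with a loading animation.
// Assumes a browser; showSpinner/hideSpinner are hypothetical helpers
// that show and hide a loading element.
function getWithSpinner(url, onDone) {
  showSpinner();                       // immediate feedback while we wait
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    hideSpinner();
    onDone(xhr.responseText);
  };
  xhr.onerror = function () {
    hideSpinner();                     // never leave the spinner stuck on error
  };
  xhr.send();
}

// Usage (browser only):
// getWithSpinner('/api/data', function (body) { console.log(body); });
```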

Conclusion
At the end of the day, the performance of your web application directly affects the user experience. Latency is a given in any networking application, so understanding how to mitigate its effects on your application will help improve performance and your overall user experience.

More Stories By Dan Riti

Dan Riti is a software developer with AppNeta with a passion for Python, JavaScript and music with a lot of bass.
