Performance as Key to Success

How Online News Portals could do better

What factors make you think a web page is good or not? What keeps you on one page longer than on others? On the one hand it is the content of the page and whether that content is of interest to you. On the other it is the speed with which you can navigate through the individual pages. High-speed internet and performance-optimized pages make our day-to-day browsing easier when we access our emails, tweets and the latest updates on sports or news. With all the advances in Web Performance Optimization (WPO) in recent years we have been spoiled by those sites that follow the Best Practices and have boosted their web site experience. No wonder we start losing our patience with a site that doesn't respond as fast as we've come to expect.

For this very reason Google modified its ranking algorithm to also include web performance as one of its metrics. The speed of a web page – the time from entering the URL in the browser until the page has been fully loaded – has become one factor that determines whether your site shows up at the top or the bottom of a search result. A good reason to pick "fast by default" as the main theme of this year's Velocity 2010 Conference. Steve Souders – the driving force behind Web Performance Optimization – hosted the conference, and it made clear how significant an impact Web Site Performance has on end-user behavior and thus on the success of a web site.
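
If you want to check this load-time metric on your own pages, here is a minimal sketch in TypeScript that uses the standard Navigation Timing API available in modern browsers (an illustration of the metric itself, not a feature of any of the tools discussed below):

```typescript
// Minimal sketch: report "time to first byte" and "fully loaded" time for the
// current page using the Navigation Timing API. Runs in the browser; the
// numbers logged here are the same kind of metric discussed in the article.
function reportPageLoadTime(): void {
  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  // Time from the start of navigation until the load event has finished.
  const fullyLoadedMs = nav.loadEventEnd - nav.startTime;
  // Time until the first byte of the HTML response arrived.
  const firstByteMs = nav.responseStart - nav.startTime;

  console.log(`Time to first byte: ${firstByteMs.toFixed(0)} ms`);
  console.log(`Fully loaded:       ${fullyLoadedMs.toFixed(0)} ms`);
}

// Wait for the load event so loadEventEnd is already populated.
window.addEventListener("load", () => setTimeout(reportPageLoadTime, 0));
```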

At last year's Velocity the first results on the Business Impact of Performance were presented to the public. Microsoft, Yahoo and Google presented the results of internal tests showing that slower pages have a direct impact on banner clicks – and therefore on generated revenue. Conversely, Shopzilla presented the results of a web site overhaul, showing how a faster web site positively impacts the number of users, the time spent on the site, the number of clicks and the generated revenue.

Who is faster? Who is better? Bild.de or Spiegel.de?

Similar to my analysis of the FIFA World Cup Website, the Golf Masters and the web site for the Winter Olympic Games in Vancouver, I wanted to take a closer look at the two biggest German news portals: Bild.de and Spiegel.de.

Based on Alexa, these are the top two news portals in Germany. The two portals probably have very different reader communities, as both publishers have a different way to "present" news. My analysis focuses on how well these pages follow the Best Practices on Web Performance Optimization – therefore the actual content doesn't matter; it is all about how that content gets delivered. Both pages deliver high-volume, multimedia content. Both pages make money with online ads, and therefore it is in their interest that many users visit their site, stay as long as possible and click as many ads as possible. We've learned from Google, Bing and others that speed is a critical factor in keeping users on the page – now let's see how these two sites do:

There are three great and free tools available that we can use for an analysis like this: dynaTrace AJAX Edition, Yahoo YSlow and Google Page Speed.

All three tools analyze individual web sites based on the Best Practices from dynaTrace, Yahoo and Google and provide a nice overview of whether the rules discussed in these documents were met or not.

The Test scenario
On both sites – www.bild.de and www.spiegel.de – I looked at two individual pages: the start page and the overview page for politics. For my testing it is important to start with a cleared browser cache in order to analyze the page as a first-time visitor. It is recommended to run the same test again – this time with a primed browser cache – and then compare the timings.

The Result
Using any of the three mentioned tools is rather simple. dynaTrace AJAX allows me to record a complete browser session, which includes all 4 pages I am testing. The following screenshot shows the Performance Report that opens when double-clicking the recorded session, showing the four tested pages:

dynaTrace Performance Report shows all 4 pages in performance comparison

We can make the following interesting observations:

  1. Both sites have a First Impression Time of less than 2 seconds. This means that it takes less than 2 seconds until the user gets a first visual impression of the website – which is considered acceptable.
  2. The Fully Loaded Time of both start pages is very high – both take about 14 seconds to fully load. The main reason for this is the amount of multimedia content (mainly images).
  3. Bild.de requires 289 HTTP Requests to fully load – that's 113 more than Spiegel.de.
  4. Bild.de uses JavaScript and XHR (XmlHttpRequests) to dynamically load content.
  5. The full page size of bild.de is 4.6MB. That is about 2.5 times the size of Spiegel.de, which "only" weighs 1.8MB (a small sketch of how such metrics can be gathered in the browser follows this list).
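
The following TypeScript sketch shows how the request count and page weight above can be approximated in any modern browser with the Resource Timing API; it is only an illustration, not part of the dynaTrace report, and the byte count stays at zero for cross-origin resources that do not send a Timing-Allow-Origin header:

```typescript
// Rough sketch: count HTTP requests and sum up transferred bytes for the current
// page using the Resource Timing API. transferSize is 0 for cross-origin
// resources without a Timing-Allow-Origin header, so the total is a lower bound.
function summarizePageWeight(): void {
  const resources = performance.getEntriesByType(
    "resource"
  ) as PerformanceResourceTiming[];

  const requestCount = resources.length + 1; // +1 for the initial HTML document
  const totalBytes = resources.reduce((sum, r) => sum + r.transferSize, 0);

  console.log(`HTTP requests: ${requestCount}`);
  console.log(`Transferred:   ${(totalBytes / (1024 * 1024)).toFixed(1)} MB`);
}

window.addEventListener("load", () => setTimeout(summarizePageWeight, 0));
```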

Lifetime of a Web Page
One of the nicest features of dynaTrace AJAX is the Timeline View. This view shows all activities (HTTP Requests, JavaScript execution, Rendering, XHR Requests and page events) that happen on a single page in chronological order. We can also see all requests split up by domain, which makes it easy to spot which domains serve a lot of content or which domains serve their content very slowly. Especially on pages that include external 3rd party content, e.g. ad banners, it is interesting to see how this type of content slows down your page. The latest version of Google's Page Speed has a new feature that allows you to focus on either your own or external content when analyzing page performance.
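
A per-domain breakdown similar to the Timeline View can also be approximated in plain browser code. The following TypeScript sketch (an illustration, not a dynaTrace feature) groups all downloaded resources by host and sums up the time spent on each:

```typescript
// Sketch: group all resources of the current page by host and report how many
// requests each host served and how much time those downloads took in total.
function breakdownByDomain(): Map<string, { count: number; totalMs: number }> {
  const byDomain = new Map<string, { count: number; totalMs: number }>();
  const resources = performance.getEntriesByType(
    "resource"
  ) as PerformanceResourceTiming[];

  for (const r of resources) {
    const host = new URL(r.name).hostname;
    const entry = byDomain.get(host) ?? { count: 0, totalMs: 0 };
    entry.count += 1;
    entry.totalMs += r.duration;
    byDomain.set(host, entry);
  }
  return byDomain;
}

// Usage: spot the domains (e.g. image or ad hosts) that serve the most content.
for (const [host, stats] of breakdownByDomain()) {
  console.log(`${host}: ${stats.count} requests, ${stats.totalMs.toFixed(0)} ms`);
}
```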

The following two screenshots show the dynaTrace Timeline for the start page of Bild.de and Spiegel.de:

Bild.de loads most of its images from a dedicated image domain. Rendering activity seems to be very high as well

Spiegel.de serves all images from its primary domain. It also shows hardly any JavaScript or rendering activity

There are some fundamental differences between the two start pages:

  • Bild.de uses multiple domains to deliver multimedia content, e.g. bilder.bild.de or newscase.bild.de. Splitting content across multiple domains is a Best Practice called Domain Sharding (see the sketch after this list). It has the advantage of letting the browser use more physical network connections to download more content in parallel
  • Spiegel.de delivers most of its multimedia content from the primary www.spiegel.de domain, which is the bottleneck of their deployment. A browser has a limited number of physical network connections to a single domain, e.g. IE 7 uses 2 connections. If there are more than 2 resources to be downloaded from one domain, they get queued up and have to wait for a connection to become available. Domain Sharding solves this problem by splitting the content across multiple domains
  • Both pages have individual resources that take extraordinarily long to load, e.g. the initial HTML page and CSS files on spiegel.de or 2 Flash components on bild.de
  • Bild.de shows constantly high rendering activity – caused by animated images as well as the large number of images on that page
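
To make the Domain Sharding idea concrete, here is a small TypeScript sketch that deterministically maps each image path to one of several shard hostnames. The shard names are made up for illustration and are not domains either site actually uses; the important property is that the mapping is stable, so a given image always ends up on the same shard and stays cacheable:

```typescript
// Sketch of Domain Sharding: spread image URLs across several (hypothetical)
// shard hostnames so the browser can open more parallel connections.
const SHARDS = ["img1.example.de", "img2.example.de"];

function shardUrl(path: string): string {
  // Simple stable hash over the path so the same image always hits the same shard.
  let hash = 0;
  for (const ch of path) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  const host = SHARDS[hash % SHARDS.length];
  return `http://${host}${path}`;
}

// Usage: the same path always resolves to the same shard domain.
console.log(shardUrl("/fotos/top-story-620.jpg"));
```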

Key Performance Indicators
In addition to the performance metrics shown in the dynaTrace AJAX Performance Report overview, the Key Performance Indicator (KPI) tab shows a set of additional important KPIs:

Key Performance Indicators for Bild.de - showing high load time, server-time, wait time and number of requests

As mentioned earlier, bild.de requires many HTTP Roundtrips (289) in order to load the full page. The number one rule in Web Performance Optimization is to minimize HTTP Roundtrips, and 289 roundtrips from browser to server are definitely too many. Every roundtrip needs to wait for a free physical network connection (read more on The Two HTTP Connection Limit Issue). Every roundtrip also includes the overhead of network latency between browser and server as well as the overhead of the HTTP Protocol itself (HTTP Headers very often contribute a large percentage of the total roundtrip size).

The KPI report also shows how many resources (images, CSS, …) are cached and how many have to be retrieved from the server every time. The first table shows that 80 resources do not use any browser cache headers. That means these resources have to be re-downloaded on every subsequent visit by the same user.

Usage of the Browser Cache
The Browser Caching Tab on the dynaTrace AJAX Performance Report performs a detailed analysis of HTTP Cache Headers on every downloaded resource. It is recommended to read Best Practices on Browser Caching which explains available browser caching options.

Bild.de and Spiegel.de don't make optimal use of the browser cache. The following illustration shows a list of all resources that have no cache setting, a cache setting with a date in the past, or a very short cache expiration date:

Many resources on the page will not be cached at all or have a very short expiration date

Obviously it doesn't make sense to cache every image on a page. Images that have a short lifetime should not fill up the local browser cache unnecessarily. If we take a closer look at some of the images on these pages, it seems that at least some of them could be cached for a longer time, as they won't change that frequently. The following section of the website shows images that are only cached for a short time (< 48 hours):

Some examples of images that could be cached longer as they won't change that frequently

These logos will probably not change every 2 days – therefore it makes sense to specify a Far-Future Expiration Header. This reduces the number of roundtrips for revisiting users.
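
As an illustration of how such a far-future header could be configured, here is a hedged sketch assuming a Node.js/Express server, which is certainly not the actual stack of either news portal; the paths and cache lifetimes are purely illustrative:

```typescript
// Sketch (assumed Node.js/Express stack): serve rarely changing images such as
// logos with a far-future cache header so revisiting users do not re-download
// them, while short-lived editorial images keep a short expiration.
import express from "express";

const app = express();

// Logos and other rarely changing images: cache for one year.
app.use(
  "/static/logos",
  express.static("public/logos", {
    maxAge: "365d",   // results in Cache-Control: max-age=31536000
    immutable: true,  // adds "immutable" for browsers that understand it
  })
);

// Short-lived teaser images: a 48-hour expiration like the one seen above.
app.use("/static/teasers", express.static("public/teasers", { maxAge: "48h" }));

app.listen(8080);
```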

Reducing HTTP Roundtrips
I have talked about this several times now – the top rule is to reduce network roundtrips. Besides making use of the browser's cache there are other ways to reduce the roundtrips for every user (not just for revisiting ones). dynaTrace AJAX has a Network tab that shows those roundtrips that are considered "unnecessary":

Seven unnecessary HTTP Redirects, CSS, JavaScript and Images on this page

HTTP Redirects are used to implement some important use cases, e.g. authentication, short memorable URLs or end-user monitoring. Too often, though, HTTP Redirects are caused by incorrect configuration settings on the web server. Avoiding unnecessary redirects is a great way to improve web site performance. On the start page of spiegel.de we have 5 redirects – bild.de has 7. A redirect means an additional HTTP Roundtrip, as the browser needs to follow the redirect and request another URL in order to get to the originally requested resource.
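
Redirect chains are easy to spot with a small script as well. The following TypeScript sketch assumes Node 18+ (where the built-in fetch exposes the Location header when redirects are handled manually, which browsers do not allow) and simply counts how many hops a URL takes before the final resource is reached; every hop is one extra roundtrip:

```typescript
// Sketch (Node 18+): follow redirects manually and count the hops for a URL.
async function countRedirects(startUrl: string): Promise<number> {
  let url = startUrl;
  let hops = 0;

  while (hops < 10) {
    const res = await fetch(url, { redirect: "manual" });
    const location = res.headers.get("location");
    if (res.status < 300 || res.status >= 400 || !location) {
      break; // final response reached, no further redirect
    }
    url = new URL(location, url).toString(); // resolve relative Location headers
    hops += 1;
  }
  return hops;
}

countRedirects("http://spiegel.de/").then((hops) =>
  console.log(`Redirect hops: ${hops}`)
);
```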

Great savings can also be achieved on CSS, JavaScript and image resources. All three types can potentially be merged into fewer resources. The Best Practice on Network Requests and Roundtrips covers CSS and JavaScript merging and compression. It also covers CSS Sprites, a technique that combines multiple images into a single resource and uses CSS styles to show the individual images at the correct location on the page. Check out the Best Practices on CSS Sprites.
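
As a simple illustration of the merging idea (not of how either site actually builds its pages), the following TypeScript/Node sketch concatenates a few hypothetical JavaScript files into a single bundle so that one request replaces several; a real build step would also minify the result:

```typescript
// Sketch of a trivial build step: merge several small JavaScript files into one
// bundle to save HTTP roundtrips. File names are hypothetical.
import { readFileSync, writeFileSync } from "node:fs";

const parts = ["js/ticker.js", "js/gallery.js", "js/navigation.js"];

const bundle = parts
  .map((file) => `/* ${file} */\n` + readFileSync(file, "utf8"))
  .join("\n;\n"); // a ';' between files guards against missing semicolons

writeFileSync("js/bundle.js", bundle);
console.log(`Merged ${parts.length} files into js/bundle.js`);
```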

Reducing the number of resources not only brings the advantage of fewer roundtrips. It also means that fewer resources need to be downloaded from the same domain, which reduces waiting time (remember the limit of 2 connections per domain on IE7 mentioned earlier in this blog?).

On all pages of the two sites (Bild and Spiegel) we can observe too many JavaScript, CSS and image resources. Merging all of these files is not always technically feasible, but it looks like at least some of them could be merged, which would greatly improve load time.

Dynamic content
Even though the majority of the content on a news site is static for at least a while (new news doesn't arrive every second), there is enough content that needs to be generated dynamically for every user. Examples of this are any type of ticker information, weather data or personalized ads.

dynaTrace identifies dynamic content based on the rules defined in the Best Practice on Server-Side Performance. All these requests are listed on the Server-Side tab of the Performance Report. The Server-Time is also known as Time-To-First-Byte: the time from the last byte of the HTTP request being sent until the first byte of the response being received. It also includes the network latency between the browser and the server – it is, however, the closest you can get to the server time without actually measuring on the server. The following illustration shows the dynamic resources of bild.de:
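
The same Time-To-First-Byte idea can be approximated per resource with the browser's Resource Timing API. The following TypeScript sketch is an illustration only, not the dynaTrace measurement; for cross-origin resources the required timestamps are only exposed when a Timing-Allow-Origin header is present:

```typescript
// Sketch: list the resources of the current page with the slowest server time,
// approximated as the gap between sending the request and receiving the first
// byte of the response (Time-To-First-Byte).
function slowestServerTimes(limit = 5): void {
  const resources = performance.getEntriesByType(
    "resource"
  ) as PerformanceResourceTiming[];

  const slowest = resources
    .filter((r) => r.requestStart > 0 && r.responseStart > 0)
    .map((r) => ({ url: r.name, ttfbMs: r.responseStart - r.requestStart }))
    .sort((a, b) => b.ttfbMs - a.ttfbMs)
    .slice(0, limit);

  for (const entry of slowest) {
    console.log(`${entry.ttfbMs.toFixed(0)} ms  ${entry.url}`);
  }
}

window.addEventListener("load", () => setTimeout(() => slowestServerTimes(), 0));
```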

dynaTrace shows slow-running server-side requests - both on bild.de as well as on Ad-Service domains

The slowest requests are the stock information requests, the initial page request itself and the weather information. We also see some requests to an external ad service and requests to a web-tracking service (not visible in the screenshot above). Server-side performance mainly becomes an issue for dynamically generated content – and usually only in a scenario where many users want to access data from the server at the same time. In this case it becomes an even bigger problem because it prevents a larger number of users from actually clicking on those ads that generate revenue.

Top 10 Performance Problems taken from Zappos, Monster, Thomson and Co lists the typical server-side performance problems and shows how to prevent them.

Conclusion: Who delivers the fastest News?
In our tests, neither site does particularly well. Both sites deliver lots of content without following the Best Practices on Browser Caching or Network Roundtrips. The fully loaded time of both start pages is very high at about 14 seconds each (obviously this also depends on your connection speed). Both sites deliver an acceptable first visual impression (< 2 seconds).

dynaTrace, PageSpeed and YSlow allow uploading performance results to ShowSlow. ShowSlow is an Open Source platform that can be used as a performance repository for web metrics. ShowSlow.com hosts a public instance of the ShowSlow server and allows you to upload and compare results:

Performance comparison between the sites using all available Web Performance Tools

The differences in ranking and grading come from the fact that every tool puts its focus on different rules. dynaTrace focuses on load time first and then on rules such as browser caching or network roundtrips. YSlow and PageSpeed focus more on the Best Practice rules. A good mix of tools is therefore recommended – especially because no single tool supports every browser anyway.

Now – who is the winner of this analysis? Let's hope it is us – the readers of these online news portals. With the results of this analysis, with the help of the tools and with the help of people like Steve Souders, Web Site Performance is put in the spotlight, which will ultimately lead to better and faster websites:

Performance == More Users == More Revenue

Related reading:

  1. How better Caching helps Frankfurt’s Airport Website to handle additional load caused by the Volcano Along with so many others I am stranded in Europe...
  2. Hands-On Guide: Verifying FIFA World Cup Web Site against Performance Best Practices Whether you call it Football, Futbol, Fussball, Futebol, Calcio or...
  3. Video on Common Performance Antipatterns online Last...
  4. Thomson Reuters on How they deliver High Performance Online Services We have had some great Webinars with our customers in...
  5. How to analyze and speed up content-rich web sites like www.utah.travel in minutes One of my daily activities is checking interesting blog posts...

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
