Performance as Key to Success

How Online News Portals could do better

What factors make you think a web page is good or not? What keeps you on one page longer than on others? On the one hand it is the content on the page and whether this content is of interest to you. On the other it is the speed with which you can navigate through the individual pages. High-speed internet and performance-optimized pages make our day-to-day browsing easier when we access our emails, tweets, or the latest updates on sports and news. With all the changes of recent years in Web Performance Optimization (WPO), we have been spoiled by those sites that follow all these Best Practices and have boosted their web site experience. No wonder we start losing our patience with a site that doesn’t respond as fast as we’ve come to expect.

For this very reason Google modified their ranking algorithm to also include web performance as one of their metrics. The speed of a web page – the time from entering the URL in the browser until the page has fully loaded – has become one factor that determines whether your site shows up at the top or the bottom of a search result. A good reason to pick “fast by default” as the main theme for this year’s Velocity 2010 Conference. Steve Souders – the driving force behind Web Performance Optimization – hosted the conference, and it became clear what a significant impact Web Site Performance has on end-user behavior and thus on the success of a web site.

At last year’s Velocity the first results on the Business Impact of Performance were presented to the public. Microsoft, Yahoo and Google presented results of internal tests showing that slower pages have a direct impact on banner clicks – and therefore on generated revenue. Conversely, Shopzilla presented the results of a web site overhaul, showing how a faster web site positively impacts the number of users, time spent on the site, number of clicks and generated revenue.

Who is faster? Who is better? Bild.de or Spiegel.de?

Similar to my analyses of the FIFA World Cup website, the Golf Masters and the web site for the Winter Olympic Games in Vancouver, I wanted to take a closer look at the two biggest German news portals: Bild.de and Spiegel.de.

Based on Alexa, these are the top two news portals in Germany. The two portals probably have very different reader communities, as each publisher has its own way to “present” news. My analysis focuses on how well these pages follow the Best Practices on Web Performance Optimization – the actual content therefore doesn’t matter, as it is all about how the content is delivered. Both pages deliver high-volume multimedia content. Both pages make money with online ads, and therefore it is in their interest that many users visit their site, stay as long as possible and click as many ads as possible. We’ve learned from Google, Bing, etc. that speed is a critical factor in keeping users on the page – now let’s see how these two sites do:

There are three great and free tools available that we can use for an analysis like this: dynaTrace AJAX Edition, Yahoo’s YSlow and Google’s PageSpeed.

All three tools analyze individual web sites based on the Best Practices from dynaTrace, Yahoo and Google and provide a nice overview of whether the rules discussed in these documents are met.

The Test Scenario
On both sites – www.bild.de and www.spiegel.de – I looked at two individual pages: the start page and the overview page for politics. It is important for my testing to start with a cleared browser cache in order to analyze the page as a first-time visitor. It is recommended to run the same test again – this time with a primed browser cache – and then compare the timings.

The Result
Using any of the three mentioned tools is rather simple. dynaTrace AJAX allows me to record a complete browser session, which includes all 4 pages I am testing. The following screenshot shows the Performance Report that opens when double-clicking the recorded session, listing the four tested pages:

dynaTrace Performance Report shows all 4 pages in performance comparison

We can make the following interesting observations:

  1. Both sites have a First Impression Time of less than 2 seconds. This means it takes less than 2 seconds until the user gets a first visual impression of the website – which is considered acceptable.
  2. The Fully Loaded Time of both start pages is very high – both take about 14 seconds to fully load. The main reason for this is the amount of multimedia content (mainly images).
  3. Bild.de requires 289 HTTP Requests to fully load – that’s 113 more than Spiegel.de.
  4. Bild.de uses JavaScript and XHR (XMLHttpRequest) to dynamically load content.
  5. The full page size of bild.de is 4.6MB. That is about 2.5 times more than Spiegel.de, which “only” weighs in at 1.8MB. A back-of-envelope calculation after this list shows what this size alone costs in load time.
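
As a quick sanity check on observation 5, the following sketch estimates the raw transfer time of a 4.6MB page. The 6 Mbit/s DSL bandwidth is an assumed, typical value for illustration, not something measured on either site:

```java
// Back-of-envelope estimate: raw transfer time for the observed page size.
// The bandwidth value is an assumption, not a measurement.
public class TransferEstimate {
    public static void main(String[] args) {
        double pageSizeMB = 4.6;     // full page size observed for bild.de
        double bandwidthMbit = 6.0;  // assumed downstream DSL bandwidth
        double seconds = pageSizeMB * 8 / bandwidthMbit;
        System.out.printf("Raw transfer time: ~%.1f seconds%n", seconds);
        // ~6 seconds for the bytes alone; roundtrip latency and connection
        // limits account for much of the remaining fully-loaded time.
    }
}
```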

Lifetime of a Web Page
One of the nicest features of dynaTrace AJAX is the Timeline View. This view shows all activities (HTTP Requests, JavaScript execution, Rendering, XHR Requests and page events) that happen on a single page, in chronological order. We can also see all requests split up by domain, which makes it easy to spot which domains serve a lot of content or serve their content very slowly. Especially on pages that include external 3rd party content, e.g. ad banners, it is interesting to see how this type of content slows down your page. The latest version of Google’s PageSpeed has a new feature that allows you to focus on your own or on external content when analyzing page performance.

The following two screenshots show the dynaTrace Timeline for the start page of Bild.de and Spiegel.de:

Bild.de loads most of its images from a dedicated image domain. Rendering activity also seems to be very high

Spiegel.de serves all images from its primary domain. It also hardly uses JavaScript or excessive rendering

There are some fundamental differences between the two start pages:

  • Bild.de uses multiple domains to deliver multimedia content, e.g. bilder.bild.de or newscase.bild.de. Splitting content across multiple domains is a Best Practice called Domain Sharding. It has the advantage of letting the browser use more physical network connections to download more content in parallel. A minimal sketch of this technique follows after this list.
  • Spiegel.de delivers most of its multimedia content from the primary www.spiegel.de domain, which is the bottleneck of their deployment. A browser has a limited number of physical network connections to a single domain, e.g. IE7 uses 2 connections. If there are more than 2 resources to be downloaded from a domain, they get queued up and have to wait for a connection to become available. Domain Sharding solves this problem by splitting content across multiple domains.
  • Both pages have individual resources that take extraordinarily long to load, e.g. the initial HTML page and CSS files on spiegel.de or 2 Flash components on bild.de.
  • Bild.de shows constantly high rendering activity – caused by animated images as well as the large number of images on that page.
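
To make the Domain Sharding idea concrete, here is a minimal sketch of one way to implement it. The shard hostnames and resource paths are made up for illustration; this is not how bild.de actually assigns its domains:

```java
// Minimal Domain Sharding sketch (hypothetical hostnames): map each resource
// path deterministically to one of several content domains so the browser can
// open more parallel connections, while identical URLs keep the cache warm.
public class DomainShard {
    private static final String[] SHARDS = {
        "http://bilder1.example.com", "http://bilder2.example.com"
    };

    public static String shardedUrl(String path) {
        // Hash the path instead of picking randomly, so every page view
        // references the same URL for the same image and caching still works.
        int shard = Math.floorMod(path.hashCode(), SHARDS.length);
        return SHARDS[shard] + path;
    }

    public static void main(String[] args) {
        System.out.println(shardedUrl("/fotos/top-teaser.jpg"));
        System.out.println(shardedUrl("/logos/sport.gif"));
    }
}
```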

Key Performance Indicators
In addition to the performance metrics shown in the dynaTrace AJAX Performance Report Overview, the Key Performance Indicator (KPI) tab shows a set of additional important KPIs:

Key Performance Indicators for Bild.de - showing high load time, server-time, wait time and number of requests

As mentioned earlier, bild.de requires many HTTP Roundtrips (289) in order to load the full page. The number one rule in Web Performance Optimization is to minimize HTTP Roundtrips, and 289 roundtrips from browser to server is definitely too many. Every roundtrip has to wait for a free physical network connection (read more on The Two HTTP Connection Limit Issue). Every roundtrip also includes the overhead of network latency between browser and server as well as the overhead of the HTTP Protocol itself (HTTP Headers very often contribute a large percentage of the total roundtrip size).
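
To get a feeling for what 289 roundtrips can cost, here is a deliberately simplified estimate. The 50ms latency and the single-domain assumption are mine; real pages shard domains and overlap transfers, so read this as an order of magnitude only:

```java
// Deliberately simplified latency estimate: assumes all requests go to one
// domain over IE7's 2 connections and that latency dominates transfer time.
public class RoundtripEstimate {
    public static void main(String[] args) {
        int requests = 289;            // observed HTTP requests for bild.de
        int connectionsPerDomain = 2;  // IE7 per-domain connection limit
        double latencyMs = 50.0;       // assumed roundtrip latency
        double waves = Math.ceil(requests / (double) connectionsPerDomain);
        System.out.printf("Latency overhead alone: ~%.1f seconds%n",
                waves * latencyMs / 1000.0);  // ~7.3 seconds
    }
}
```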

The KPI Report also shows how many resources (images, CSS, …) are being cached and how many have to be retrieved from the server every time. The first table shows that 80 resources do not use any browser cache headers. That means these resources have to be re-downloaded on every subsequent visit by the same user.

Usage of the Browser Cache
The Browser Caching tab of the dynaTrace AJAX Performance Report performs a detailed analysis of the HTTP Cache Headers on every downloaded resource. It is recommended to read the Best Practices on Browser Caching, which explain the available browser caching options.

Neither Bild.de nor Spiegel.de makes optimal use of the browser cache. The following illustration shows a list of all resources that have no cache setting, a cache setting with a date in the past, or a very short cache expiration date:

Many resources on the page will not be cached at all or have a very short expiration date

Obviously it doesn’t make sense to cache every image on a page. Images with a short lifetime should not fill up the local browser cache unnecessarily. Taking a closer look at some of the images on these pages, it seems that at least some of them could be cached for longer, as they won’t change that frequently. The following section of the website shows images that are cached only briefly (< 48 hours):

Some examples of images that could be cached longer as they won't change that frequently

These logos will probably not change every 2 days – it therefore makes sense to specify a Far-Future Expiration Header, which reduces the number of roundtrips for revisiting users.
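
As an illustration of how such a header could be set on the server side, here is a minimal servlet-filter sketch. The filter and the URL pattern it would be mapped to (e.g. a logo directory) are my assumptions for illustration, not how these sites are actually configured:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter, mapped e.g. to /logos/* in web.xml: adds a far-future
// expiration so returning visitors load these images from the browser cache.
public class FarFutureExpiresFilter implements Filter {
    private static final long ONE_YEAR_MS = 365L * 24 * 60 * 60 * 1000;

    public void init(FilterConfig config) throws ServletException {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_MS);
        response.setHeader("Cache-Control", "public, max-age=31536000");
        chain.doFilter(req, res);
    }
}
```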

Reducing HTTP Roundtrips
I have mentioned it several times now – the top rule is to reduce network roundtrips. Besides making use of the browser’s cache, there are other ways to reduce the roundtrips for every user (not just revisiting ones). dynaTrace AJAX has a Network tab that shows those roundtrips that are considered “unnecessary”:

Seven unnecessary HTTP Redirects, CSS, JavaScript and Images on this page

HTTP Redirects allow implementing some important use cases, e.g. authentication, short memorable URLs or end-user monitoring. Too often, though, HTTP Redirects are caused by wrong configuration settings on the web server. Avoiding unnecessary redirects is a great way to improve web site performance. On the start page of spiegel.de we have 5 redirects – bild.de has 7. Each redirect means an additional HTTP Roundtrip, as the browser needs to follow the redirect and request another URL in order to get to the originally requested resource.
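
If you want to check your own pages for redirect chains without a full tracing tool, a few lines of Java are enough. The URL below is a placeholder; point it at a page you own:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Counts the redirect hops a URL goes through before the final resource
// is served. Each hop printed here is one extra HTTP roundtrip.
public class RedirectCounter {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://www.example.com/");  // placeholder URL
        int hops = 0;
        while (hops < 10) {  // safety cap against redirect loops
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setInstanceFollowRedirects(false);
            int status = conn.getResponseCode();
            if (status < 300 || status >= 400) break;  // not a redirect
            url = new URL(url, conn.getHeaderField("Location"));  // resolve relative targets
            hops++;
            System.out.println("Redirect " + hops + " -> " + url);
        }
        System.out.println("Total redirects: " + hops);
    }
}
```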

Great savings can also be achieved on CSS, JavaScript and image resources. All 3 types can potentially be merged into fewer resources. The Best Practice on Network Requests and Roundtrips talks about CSS and JavaScript merging and compression. It also covers CSS Sprites, a technique that combines multiple images into a single resource and uses CSS styles to show the individual images in the correct location on the page. Check out the Best Practices on CSS Sprites.
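
To show what merging amounts to in its simplest form, here is a build-time sketch that concatenates several stylesheets into one file, turning several roundtrips into a single one. The file names are made up for illustration:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Build-time sketch (hypothetical file names): concatenate several CSS files
// into one resource so the browser needs a single roundtrip instead of three.
public class CssMerger {
    public static void main(String[] args) throws IOException {
        List<String> parts = List.of("reset.css", "layout.css", "teaser.css");
        try (BufferedWriter out = Files.newBufferedWriter(Paths.get("site-all.css"))) {
            for (String part : parts) {
                out.write("/* --- " + part + " --- */\n");  // keep origin visible
                out.write(Files.readString(Paths.get(part)));
                out.write("\n");
            }
        }
        System.out.println("Merged " + parts.size() + " files into site-all.css");
    }
}
```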

Reducing resources not only brings the advantage of fewer roundtrips. It also means that fewer resources need to be downloaded from the same domain, which reduces waiting time (remember the limit of 2 connections per domain on IE7 mentioned earlier in this blog?).

On all pages of the two sites (Bild and Spiegel) we can observe too many JavaScript, CSS and image resources. Merging all of these files is not always technically feasible, but it looks like at least some of them could be merged, which would greatly improve load time.

Dynamic content
Even though the majority of the content on a news site is static for at least a while (news doesn’t arrive every second), there is enough content that needs to be generated dynamically for every user. Examples include any type of ticker information, weather data or personalized ads.

dynaTrace identifies dynamic content based on the rules defined in the Best Practice on Server-Side Performance. All these requests are listed on the Server-Side tab of the Performance Report. The Server-Time metric is also known as Time-To-First-Byte: the time from the last byte of the HTTP Request being sent until the first byte of the response is received. This includes the network latency between the browser and the server – it is, however, the closest you can get to the server time without actually measuring on the server. The following illustration shows the dynamic resources of bild.de:

dynaTrace shows slow running server-side requests - both on bild.de as well as Ad-Service domains

The slowest requests are the stock information requests, the initial page request itself and the weather information. We also see some requests to an external ad service and to a web-tracking service (not visible in the screenshot above). Server-side performance mainly becomes an issue for dynamically generated content – and usually only when many users access data from the server at the same time. In this case it becomes an even bigger problem, because it prevents a larger number of users from actually clicking on those ads that generate revenue.
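
For a quick approximation of the Time-To-First-Byte described above without any tooling, something like the following works. The URL is a placeholder, and the measurement also includes DNS lookup and connection setup, so it slightly overstates the pure server time:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Rough client-side Time-To-First-Byte: measures from sending the request
// until the first response byte arrives (includes DNS and connect time).
public class FirstByteTimer {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.com/");  // placeholder URL
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {  // sends the request
            in.read();  // blocks until the first byte of the response arrives
            double ms = (System.nanoTime() - start) / 1_000_000.0;
            System.out.printf("Approximate time to first byte: %.1f ms%n", ms);
        }
    }
}
```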

Top 10 Performance Problems taken from Zappos, Monster, Thomson and Co lists the typical server-side performance problems and shows how to prevent them.

Conclusion: Who delivers the fastest News?
In our tests, neither site does particularly well. Both sites deliver lots of content without following the Best Practices on Browser Caching or Network Roundtrips. The fully-loaded time of both pages is very high at about 14 seconds each (this obviously also depends on your connection speed). Both sites deliver an acceptable first visual impression (< 2 seconds).

dynaTrace, PageSpeed and YSlow all allow uploading performance results to ShowSlow. ShowSlow is an Open Source platform that can be used as a performance repository for web metrics. ShowSlow.com hosts a public instance of the ShowSlow server and lets you upload and compare results:

Performance comparison between the sites using all available Web Performance Tools

The differences in ranking and grading come from the fact that every tool puts its focus on different rules. dynaTrace focuses on load time first and then on rules such as browser caching or network roundtrips. YSlow and PageSpeed focus more on the Best Practice rules. A good mix of tools is therefore recommended – especially because no single tool supports every browser anyway.

Now – who is the winner of this analysis? Let’s hope it is us – the readers of these online news portals. With the results of this analysis, with the help of the tools and with the help of people like Steve Souders, Web Site Performance is put in the spotlight and will ultimately lead to better and faster websites:

Performance == More Users == More Revenue

Related reading:

  1. How better Caching helps Frankfurt’s Airport Website to handle additional load caused by the Volcano
  2. Hands-On Guide: Verifying FIFA World Cup Web Site against Performance Best Practices
  3. Video on Common Performance Antipatterns online
  4. Thomson Reuters on How they deliver High Performance Online Services
  5. How to analyze and speed up content rich web sites like www.utah.travel in minutes
More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team. In his role he influences the Compuware APM product strategy and works closely with customers in implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.
