
Performance as Key to Success

How Online News Portals could do better

What factors make you think a web page is good or not? What keeps you on one page longer than on others? On the one hand it is the content on the page and whether that content is of interest to you. On the other it is the speed with which you can navigate through the individual pages. High-speed internet and performance-optimized pages make our day-to-day browsing easier when we access our emails, tweets, or the latest updates on sports and news. With all the changes in recent years in Web Performance Optimization (WPO), we have been spoiled by the sites that follow all these Best Practices and have boosted their web site experience. No wonder we start losing our patience with a site that doesn’t respond as fast as we’ve come to expect.

For this reason Google modified its ranking algorithm to include web performance as one of its metrics. The speed of a web page – the time from entering the URL in the browser until the page has fully loaded – has become one factor that determines whether your site shows up at the top or the bottom of a search result. A good reason to pick “fast by default” as the main theme for this year’s Velocity 2010 Conference. Steve Souders – the driving force behind Web Performance Optimization – hosted this conference, and it made clear the significant impact web site performance has on end-user behavior and thus on the success of a web site.

At last year’s Velocity the first results on the business impact of performance were presented to the public. Microsoft, Yahoo and Google presented results of internal tests showing that slower pages have a direct impact on banner clicks – and therefore on generated revenue. Conversely, Shopzilla presented the results of a web site overhaul showing how a faster web site positively impacts the number of users, the time spent on the site, the number of clicks and the generated revenue.

Who is faster? Who is better? Bild.de or Spiegel.de?

Similar to my analysis of the FIFA World Cup Website, the Golf Masters or the web site for the Winter Olympic Games in Vancouver, I wanted to take a closer look at the two biggest German news portals: Bild.de and Spiegel.de.

Based on Alexa, these are the top two news portals in Germany. The two portals probably serve very different reader communities, as the publishers “present” news in very different ways. My analysis focuses on how well these pages follow the Best Practices on Web Performance Optimization – the actual content therefore doesn’t matter, as it is all about how the content is delivered. Both pages deliver high-volume multimedia content. Both pages make money with online ads, and therefore it is in their interest that many users visit their site, stay on it as long as possible and click as many ads as possible. We’ve learned from Google, Bing, etc. that speed is a critical factor in keeping users on the page – now let’s see how these two sites do:

There are three great and free tools available that we can use for an analysis like this:

  • dynaTrace AJAX Edition
  • Yahoo YSlow
  • Google Page Speed

All three tools analyze individual web sites based on the Best Practices from dynaTrace, Yahoo and Google and provide a nice overview of whether the rules discussed in these documents were met or not.

The Test Scenario
On both sites – www.bild.de and www.spiegel.de – I looked at two individual pages: the start page and the overview page for politics. It is important for my testing to start with a cleared browser cache in order to analyze the pages as a first-time visitor would see them. It is recommended to run the same test again – this time with a primed browser cache – and then compare the timings.

The Result
Using any of the three mentioned tools is rather simple. dynaTrace AJAX allows me to record a complete browser session, which includes all 4 pages I am testing. The following screenshot shows the Performance Report that opens when double-clicking the recorded session; it lists the four tested pages:

dynaTrace Performance Report shows all 4 pages in performance comparison

We can make the following interesting observations:

  1. Both sites have a First Impression Time of less than 2 seconds. This means it takes less than 2 seconds until the user gets a first visual impression of the web site – which is considered acceptable.
  2. The Fully Loaded Time of both start pages is very high – both take about 14 seconds to fully load (a rough way to capture such a timing in the browser is sketched after this list). The main reason for this is the amount of multimedia content (mainly images).
  3. Bild.de requires 289 HTTP Requests to fully load – that’s 113 more than Spiegel.de.
  4. Bild.de uses JavaScript and XHR (XmlHttpRequests) to dynamically load content.
  5. The full page size of Bild.de is 4.6MB. That is 2.5 times more than Spiegel.de, which “only” weighs in at 1.8MB.
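
As a rough illustration of how such a “fully loaded” timing can be captured, the following TypeScript snippet (a minimal sketch of the general idea, not how dynaTrace AJAX actually measures its metrics) records the time from when an inline script at the top of the page runs until the window load event fires:

    // Place this script as early as possible in the page <head>.
    // It approximates the "fully loaded" time as the span between
    // script start and the window "load" event - a rough stand-in
    // for the dynaTrace metric, not its actual definition.
    const approximateStart: number = Date.now();

    window.addEventListener('load', () => {
      const fullyLoadedMs = Date.now() - approximateStart;
      console.log('Fully loaded after ~' + fullyLoadedMs + ' ms');
    });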

Lifetime of a Web Page
One of the nicest features of dynaTrace AJAX is the Timeline View. This view shows all activities (HTTP Requests, JavaScript execution, Rendering, XHR Requests and page events) that happen on a single page in chronological order. We can also see all requests split up by domain, which makes it easy to spot which domains serve a lot of content or which domains serve their content very slowly. Especially on pages that include external 3rd-party content, e.g. ad banners, it is interesting to see how this type of content slows down the page. Google’s PageSpeed has a new feature in its latest version that allows you to focus on either your own or external content when analyzing page performance.
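
Conceptually, this per-domain breakdown is simple to reproduce: given the list of request URLs of a page (for example from an exported session), group them by hostname. Here is a minimal TypeScript sketch with hypothetical input data:

    // Group request URLs by hostname to see which domains serve the
    // most resources - a simplified version of the Timeline's
    // per-domain breakdown. The URLs below are made up for illustration.
    const requestUrls: string[] = [
      'http://www.bild.de/',
      'http://bilder.bild.de/fotos/teaser-01.jpg',
      'http://bilder.bild.de/fotos/teaser-02.jpg',
      'http://ad.example.com/banner.js',
    ];

    const byDomain = new Map<string, number>();
    for (const url of requestUrls) {
      const host = new URL(url).hostname;
      byDomain.set(host, (byDomain.get(host) ?? 0) + 1);
    }

    for (const [host, count] of byDomain) {
      console.log(host + ': ' + count + ' request(s)');
    }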

The following two screenshots show the dynaTrace Timeline for the start page of Bild.de and Spiegel.de:

Bild.de loads most of its images from a dedicated image domain. Rendering activity seems to be very high as well

Spiegel.de serves all images from its primary domain. It also hardly uses JavaScript and shows little rendering activity

There are some fundamental differences between the two start pages:

  • Bild.de uses multiple domains to deliver multimedia content, e.g. bilder.bild.de or newscase.bild.de. Splitting content across multiple domains is a Best Practice called Domain Sharding. It has the advantage of letting the browser use more physical network connections to download more content in parallel (a minimal sketch of the technique follows this list)
  • Spiegel.de delivers most of its multimedia content from the primary www.spiegel.de domain, which is the bottleneck of their deployment. A browser has a limited number of physical network connections to a single domain, e.g. IE 7 uses 2 connections. If there are more than 2 resources to be downloaded from one domain, they get queued up and have to wait for a connection to become available. Domain Sharding solves this problem by splitting content across multiple domains
  • Both pages have individual resources that take extraordinarily long to load, e.g. the initial HTML page and CSS files on spiegel.de or 2 Flash components on bild.de
  • Bild.de shows constantly high rendering activity – caused by animated images as well as the large number of images on the page
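
To make Domain Sharding more concrete, here is a minimal sketch in TypeScript. The shard hostnames are hypothetical, and a real deployment must map each resource to the same shard on every page view so the browser cache stays effective:

    import { createHash } from 'crypto';

    // Hypothetical shard hostnames - a real site would use its own
    // image domains (in the style of bilder.bild.de).
    const SHARDS = ['img1.example.com', 'img2.example.com', 'img3.example.com'];

    // Deterministically map a resource path to one shard so that the
    // same image is always requested from the same hostname.
    function shardUrl(path: string): string {
      const digest = createHash('md5').update(path).digest();
      return 'http://' + SHARDS[digest[0] % SHARDS.length] + path;
    }

    console.log(shardUrl('/fotos/politik/teaser-01.jpg'));

The deterministic hash is the important design choice here: if a resource moved between shards from one page view to the next, the browser would re-download it even though it already sits in the cache under another hostname.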

Key Performance Indicators
In addition to the performance metrics shown in the dynaTrace AJAX Performance Report Overview, the Key Performance Indicator (KPI) tab shows a set of additional important KPIs:

Key Performance Indicators for Bild.de - showing high load time, server-time, wait time and number of requests

As mentioned earlier, Bild.de requires many HTTP roundtrips (289) in order to load the full page. The number one rule in Web Performance Optimization is to minimize HTTP roundtrips, and 289 roundtrips from browser to server is definitely too many. Every roundtrip has to wait for a free physical network connection (read more on The Two HTTP Connection Limit Issue). Every roundtrip also includes the overhead of network latency between browser and server as well as the overhead of the HTTP protocol itself (HTTP headers very often contribute a large percentage of the total roundtrip size).

The KPI report also shows how many resources (images, CSS, …) are cached and how many have to be retrieved from the server every time. The first table shows that 80 resources do not use any browser cache headers. That means these resources have to be re-downloaded on every subsequent visit by the same user.
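
To illustrate what such an analysis looks for, here is a minimal Node.js/TypeScript sketch (my own illustration, not how dynaTrace implements it) that requests a resource and flags it when neither an Expires nor a Cache-Control max-age header is present:

    import * as http from 'http';

    // Fetch a resource and report whether the response carries cache
    // headers. Resources without them are re-downloaded on every visit.
    function auditCacheHeaders(url: string): void {
      http.get(url, (res) => {
        res.resume(); // discard the body; only the headers matter here
        const expires = res.headers['expires'];
        const cacheControl = res.headers['cache-control'];
        if (!expires && !(cacheControl && cacheControl.includes('max-age'))) {
          console.log('NOT cacheable: ' + url);
        } else {
          console.log('cacheable: ' + url + ' (' + (expires ?? cacheControl) + ')');
        }
      });
    }

    auditCacheHeaders('http://www.spiegel.de/favicon.ico');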

Usage of the Browser Cache
The Browser Caching Tab on the dynaTrace AJAX Performance Report performs a detailed analysis of HTTP Cache Headers on every downloaded resource. It is recommended to read Best Practices on Browser Caching which explains available browser caching options.

Bild.de and Spiegel.de don’t make optimal use of the browser cache. The following illustration shows a list of all resources that have no cache setting, a cache setting with a past date or a very short cache expiration date:

Many resources on the page will not be cached at all or have a very short expiration date

Obviously it doesn’t make sense to cache every image on a page. Images with a short lifetime should not fill up the local browser cache unnecessarily. A closer look at some of the images on these pages, however, suggests that at least some of them could be cached longer, as they won’t change that frequently. The following section of the website shows images that are cached only briefly (< 48 hours):

Some examples of images that could be cached longer as they won't change that frequently

These logos will probably not change every 2 days – it therefore makes sense to specify a Far-Future Expires header. This reduces the number of roundtrips for revisiting users.
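
On the server side, setting such a header costs next to nothing. The following Node.js/TypeScript sketch shows a hypothetical static-file handler (not taken from either site) that attaches a far-future Expires header and a matching Cache-Control header to an image response:

    import * as http from 'http';

    const ONE_YEAR_SECONDS = 365 * 24 * 60 * 60;

    // Hypothetical handler: serve a logo with a far-future expiration
    // so revisiting users get it from the browser cache instead of
    // spending another roundtrip on it.
    http.createServer((req, res) => {
      const expires = new Date(Date.now() + ONE_YEAR_SECONDS * 1000);
      res.writeHead(200, {
        'Content-Type': 'image/png',
        'Cache-Control': 'public, max-age=' + ONE_YEAR_SECONDS,
        'Expires': expires.toUTCString(),
      });
      res.end(); // image bytes omitted in this sketch
    }).listen(8080);

The usual companion to a far-future header is cache busting: when the logo does change, it is published under a new file name so that clients pick up the new version.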

Reducing HTTP Roundtrips
I have talked about this several times now – the top rule is to reduce network roundtrips. Besides making use of the browser’s cache, there are other ways to reduce roundtrips for every user (not just revisiting ones). dynaTrace AJAX has a Network tab that shows those roundtrips that are considered “unnecessary”:

Seven unnecessary HTTP Redirects, CSS, JavaScript and Images on this page

HTTP redirects enable some important use cases, e.g. authentication, short memorable URLs or end-user monitoring. Too often, though, HTTP redirects are caused by incorrect configuration settings on the web server. Avoiding unnecessary redirects is a great way to improve web site performance: each redirect costs an additional HTTP roundtrip, as the browser has to follow it and request another URL to get to the originally requested resource. On the start page of spiegel.de we have 5 redirects – bild.de has 7.
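
The cost is easy to make visible. This Node.js/TypeScript sketch (an illustration only; it assumes absolute Location headers for brevity) follows redirects manually and counts the extra roundtrips:

    import * as http from 'http';

    // Follow HTTP redirects one hop at a time and count them. Each
    // 301/302 forces the browser into an additional roundtrip before
    // it gets the content it originally asked for.
    function fetchCountingRedirects(url: string, hops: number = 0): void {
      http.get(url, (res) => {
        res.resume();
        const location = res.headers['location'];
        if (res.statusCode && res.statusCode >= 300 && res.statusCode < 400 && location) {
          console.log('redirect #' + (hops + 1) + ': ' + url + ' -> ' + location);
          fetchCountingRedirects(location, hops + 1);
        } else {
          console.log('final response ' + res.statusCode + ' after ' + hops + ' redirect(s)');
        }
      });
    }

    fetchCountingRedirects('http://bild.de/');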

Great savings can also be achieved on CSS, JavaScript and image resources. All 3 types can potentially be merged into fewer resources. The Best Practice on Network Requests and Roundtrips covers CSS and JavaScript merging and compression. It also covers CSS Sprites – a technique that combines multiple images into a single resource and uses CSS styles to show each individual image at the correct location on the page. Check out the Best Practices on CSS Sprites.
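
As a simple illustration of resource merging, here is a hypothetical build-step sketch in TypeScript (the file names are made up) that concatenates several stylesheets into one, turning three roundtrips into a single request:

    import { readFileSync, writeFileSync } from 'fs';

    // Hypothetical build step: merge individual stylesheets into one
    // bundle so the page needs a single CSS roundtrip instead of three.
    const cssFiles = ['reset.css', 'layout.css', 'teaser.css'];

    const bundle = cssFiles
      .map((file) => '/* --- ' + file + ' --- */\n' + readFileSync(file, 'utf8'))
      .join('\n');

    writeFileSync('bundle.css', bundle);
    console.log('wrote bundle.css from ' + cssFiles.length + ' files');

CSS Sprites apply the same idea to images: many small icons travel in one file, and CSS background positioning selects the right one.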

Reducing the number of resources not only brings the advantage of fewer roundtrips. It also means that fewer resources need to be downloaded from the same domain, which reduces waiting time (remember the limit of 2 connections per domain on IE 7 mentioned earlier in this post?).

On all pages of the two sites (Bild and Spiegel) we can observe too many JavaScript, CSS and image resources. Merging all of these files is not always technically feasible, but it looks like at least some of them could be merged, which would greatly improve load time.

Dynamic Content
Even though the majority of the content on a news site stays static for at least a while (news doesn’t arrive every second), there is enough content that needs to be generated dynamically for every user. Examples are any type of ticker information, weather data or personalized ads.

dynaTrace identifies dynamic content based on the rules defined in the Best Practice on Server-Side Performance. All these requests are listed on the Server-Side tab of the Performance Report. The Server-Time is also known as Time-To-First-Byte: the time from the last byte of the HTTP request being sent until the first byte of the response is received. It includes the network latency between browser and server – but it is the closest you can get to the real server time without actually measuring on the server. The following illustration shows the dynamic resources of bild.de:

dynaTrace shows slow running server-side requests - both on bild.de as well as Ad-Service domains

The slowest requests are the stock information requests, the initial page request itself and the weather information. We also see some requests to an external ad service and requests to a web-tracking service (not visible in the screenshot above). Server-side performance mainly becomes an issue for dynamically generated content – and usually only in scenarios where many users request data from the server at once. In this case it becomes an even bigger problem because it prevents a larger number of users from actually clicking on the ads that generate revenue.
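
As an aside, the Time-To-First-Byte measurement described above can be approximated from the client side. The following Node.js/TypeScript sketch (my own illustration, not dynaTrace’s implementation) times the span between sending a request and receiving the first chunk of the response:

    import * as http from 'http';

    // Approximate Time-To-First-Byte: the time from sending the request
    // until the first bytes of the response arrive. As noted above,
    // this includes the network latency between client and server.
    function measureTtfb(url: string): void {
      const start = Date.now();
      http.get(url, (res) => {
        res.once('data', () => {
          console.log('TTFB for ' + url + ': ' + (Date.now() - start) + ' ms');
          res.resume(); // drain the rest of the body
        });
      });
    }

    measureTtfb('http://www.bild.de/');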

Top 10 Performance Problems taken from Zappos, Monster, Thomson and Co lists the typical server-side performance problems and shows how to prevent them.

Conclusion: Who delivers the fastest News?
In our tests, neither site does particularly well. Both sites deliver lots of content without following the Best Practices on Browser Caching or Network Roundtrips. The fully-loaded time of both start pages is very high at about 14 seconds each (obviously this also depends on your connection speed). Both sites deliver an acceptable first visual impression (< 2 seconds).

dynaTrace, PageSpeed and YSlow all allow uploading performance results to ShowSlow. ShowSlow is an Open Source platform that can be used as a performance repository for web metrics. ShowSlow.com hosts a public instance of the ShowSlow server and allows you to upload and compare results:

Performance comparison between the sites using all available Web Performance Tools

The differences in ranking and grading arise because each tool puts its focus on different rules. dynaTrace focuses on load time first and then on rules such as browser caching or network roundtrips. YSlow and PageSpeed focus more on the Best Practice rules. A good mix of tools is therefore recommended – especially because no single tool supports every browser anyway.

Now – who is the winner of this analysis? Let’s hope it is us – the readers of these online news portals. With the results of this analysis, with the help of the tools and with the help of people like Steve Souders, web site performance is put in the spotlight, which will ultimately lead to better and faster websites:

Performance == More Users == More Revenue

Related reading:

  1. How better Caching helps Frankfurt’s Airport Website to handle additional load caused by the Volcano
  2. Hands-On Guide: Verifying FIFA World Cup Web Site against Performance Best Practices
  3. Video on Common Performance Antipatterns online
  4. Thomson Reuters on How they deliver High Performance Online Services
  5. How to analyze and speed up content-rich web sites like www.utah.travel in minutes

More Stories By Andreas Grabner

Andreas has over a decade of experience as an architect and developer, and currently works as a senior performance architect and technology strategist for dynaTrace Software, where he influences product strategy and works closely with customers in implementing performance management solutions across the application life cycle. He is a regular speaker at software conferences, writes for a number of technology publications, and blogs at http://blog.dynatrace.com
