
The Super Bowl Effect on Website Performance

Synthetic monitoring of a website

Whether or not you are a fan of U.S. football, it was hard to avoid this huge sporting event on February 5. In addition to the actual game, the Super Bowl commercials - besides being very expensive to air - usually drive a lot of load on the websites of the companies that run them. The question is whether the millions of dollars spent really drive consumers to these websites and get them to do business there.

Since the top brands that advertised won't share their actual conversion rates, we can instead look at the end-user experience and performance of their websites while the ads aired. By analyzing data gathered through continuous synthetic monitoring combined with deep-dive browser diagnostics, we can see whether their web applications were actually able to handle the load or left too many users with a bad experience.

Use Case: What Was the User Experience at One of the Top Brands on Super Bowl Day?
In order to avoid any debate about whether the numbers we present are good or bad, or whether this company did a good or a terrible job, we present all of this data without naming the company. The purpose of this exercise is to show how to perform this type of analysis.

Synthetic Monitoring of a Website
To monitor the performance of the site we set up a Gomez Synthetic Monitor that is executed on a schedule. The monitor not only measured the performance of the initial home page but also walked through several main use-case scenarios on that page, e.g., searching for a product. These tests were executed from multiple U.S. locations, over low- and high-bandwidth connections, and with both the Internet Explorer and Firefox browsers. In addition we captured individual performance sessions using dynaTrace Browser Diagnostics.
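
Before diving into the results, here is a toy sketch of the idea behind such a synthetic monitor: fetch a page on a schedule and record how long the response takes. To be clear, this is just a minimal illustration in Node.js and not how the Gomez platform works - Gomez additionally replays full browser use cases from many locations, connection speeds and browsers. The URL is a placeholder, not the monitored site.

// Minimal synthetic check: request a page on a schedule and log the response time.
// Illustrative only - a real synthetic monitor replays full use cases in real browsers.
var http = require('http');

function checkPage(url) {
  var start = Date.now();
  http.get(url, function (res) {
    res.on('data', function () {});   // consume the body so we time the full download
    res.on('end', function () {
      console.log(url + ' -> HTTP ' + res.statusCode +
                  ' in ' + (Date.now() - start) + 'ms');
    });
  }).on('error', function (err) {
    console.log(url + ' -> failed: ' + err.message);
  });
}

// Run the check every five minutes, as a scheduled monitor would
setInterval(function () { checkPage('http://www.example.com/'); }, 5 * 60 * 1000);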

The following screenshot shows the response time of the home page of this website, measured from our Last Mile platform with a Firefox browser, in the days before the Super Bowl as well as during the game. The average page load time was around 9 seconds, with the exception of the timeframe during the Super Bowl, when it jumped up to 30 seconds:

Synthetic monitoring shows a peak of up to 30s to load the home page during the Super Bowl

Analyzing Page Load Time from Monitoring Data
There were two factors that drove this spike:

  1. Higher load on their application due to the commercial aired during the Super Bowl
  2. Additional content on their page that doubled the page size

Factor 1: Higher Load
That's of course intended, and a success for their marketing campaign. What you want to make sure of is that you can handle the additional expected load: use CDNs (Content Delivery Networks) to deliver static content to your end users, and provision additional resources on your web and application servers for the extra load. To be prepared, it's highly recommended that you do some up-front large-scale load testing. For more on that, you can read my recent blog post To Load Test or Not to Load Test: That Is Not the Question.

Factor 2: Additional Content
The average size of the home page jumped from 776KB to 1,497KB - double the page size. The reason is the additional images and content displayed on the home page when it was accessed during the Super Bowl. The Gomez Monitor as well as the dynaTrace Browser Diagnostics sessions provide detailed resource analysis that immediately highlights the additional images, stylesheets and JavaScript files. The following shows some of the additional Super Bowl-related content, including size and download time:

Additional content downloaded during the Super Bowl resulting in up to 3 times higher page load times

The additional content in this case may have been necessary for the company's marketing campaign. The question is whether it could have been optimized so it didn't require downloading 40 additional resources with a total size of more than 700KB.
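
One common optimization for this kind of campaign content is to defer below-the-fold images instead of downloading everything up front. The following is a minimal lazy-loading sketch - an illustration of the idea, not anything taken from the analyzed site: images are declared with a hypothetical data-src attribute instead of src, so the browser skips them during the initial page load, and a small script swaps the real URL in afterwards.

// Lazy-loading sketch: images declared as <img data-src="..."> are not fetched
// during the initial page load; once the page has loaded we promote data-src to src.
// Illustration of the optimization idea only, not the site's actual code.
window.addEventListener('load', function () {
  var images = document.querySelectorAll('img[data-src]');
  for (var i = 0; i < images.length; i++) {
    images[i].src = images[i].getAttribute('data-src');
  }
});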

Deep Dive Browser Diagnostics
Looking at the dynaTrace Browser Diagnostics data, we can make several interesting observations.

Observation 1: Load Impact of Additional Resources
We can see the actual impact of the additional resources that were downloaded during the game. Two of these resources took a very long time and had a major impact on page load time. The background image was delivered by the company's own web server rather than a CDN, which results in more traffic on their web servers and less than optimal performance for end users who are far away from their data centers. Another interesting aspect was an additional Facebook Like button that took 600ms to download and execute.

dynaTrace Timeline View showing the impact of additional resources on the page load time
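
For slow third-party widgets like the Like button, one widely used mitigation is to load the widget script asynchronously so it doesn't block the rest of the page. The snippet below shows that generic pattern - it mirrors the async loader that social widgets commonly recommend and is not taken from the analyzed page; the script URL is illustrative.

// Load a third-party widget script asynchronously so it doesn't block page load.
// Generic async-loader pattern; the script URL is illustrative.
(function () {
  var js = document.createElement('script');
  js.src = 'http://connect.facebook.net/en_US/all.js';
  js.async = true;
  var first = document.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(js, first);   // inject before the first script tag
}());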

Observation 2: Page Transition Impact of CDN User Tracking Logic
Some tracking tools send their data in the onbeforeunload event handler. Modern browsers, however, don't allow onbeforeunload handlers to run for long or to reliably send data. The workaround is to put in an "artificial" JavaScript loop that spins for 750ms to ensure the browser sends the AJAX request with the tracking data before it navigates to the next page. We can see this behavior on this page as well:

JavaScript Tracking Code from a CDN Provider adding a 750ms loop to ensure tracking data gets sent before navigating to the next page
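
Reconstructed from the behavior visible in the session, the workaround looks roughly like the sketch below. Note that sendTrackingData is a placeholder for whatever function fires the asynchronous tracking request; this is not the CDN provider's actual code.

// Busy-wait workaround as described above: block in onbeforeunload for ~750ms
// so the asynchronous tracking request has time to leave the browser.
// sendTrackingData is a hypothetical helper, not the provider's real function.
window.onbeforeunload = function () {
  sendTrackingData();                  // fire the async tracking request
  var start = Date.now();
  while (Date.now() - start < 750) {
    // intentionally spin: keep the page alive until the request is on the wire
  }
};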

Conclusion
When you are responsible for a website that is going to see high user volume during a marketing campaign, you want to:

  1. Make sure the additional content for that marketing campaign is optimized, e.g., serve it from CDNs and follow the Best Practices for Web Performance Optimization
  2. Make sure you test your application with the campaign-specific features under realistic load
  3. Analyze the impact of third-party tracking tools and other widgets you put on your page (see the sketch after this list)
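
For point 3, you don't need a full monitoring product to get a first impression: modern browsers expose their own load milestones through the Navigation Timing API. The following is a minimal sketch of that approach; tools like Gomez and dynaTrace of course provide far deeper per-resource breakdowns.

// Read the browser's own page load milestones via the Navigation Timing API.
// Minimal sketch for a first impression; APM tools give per-resource detail.
window.addEventListener('load', function () {
  setTimeout(function () {             // defer so loadEventEnd is populated
    var t = window.performance.timing;
    console.log('Network + server: ' + (t.responseEnd - t.navigationStart) + 'ms');
    console.log('DOM processing:   ' + (t.domComplete - t.responseStart) + 'ms');
    console.log('Total page load:  ' + (t.loadEventEnd - t.navigationStart) + 'ms');
  }, 0);
});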

More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team, where he influences the Compuware APM product strategy and works closely with customers implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.

