Deep Insight and Collaboration in the Cloud: A Customer Story

The true power of a continuous, holistic APM approach in a cloud-based environment

Recently, one of our customers, let's call them PointInFact, had a very typical problem. After deploying a new version of their software, some user requests degraded horribly: requests that should have taken half a second took up to a minute. Interestingly, the PointInFact team runs a multi-tenant SaaS solution in the AWS Cloud and relies heavily on cloud services. This reliance makes User Experience Management and fault domain isolation very challenging.

Back Story: Application Running in the AWS Cloud
PointInFact runs a SaaS service. Internally, this results in a multi-tenant setup where each customer gets their own instance of the application they subscribe to. All of these applications and services are hosted in Amazon's EC2 Cloud, where they dynamically create new application environments and offload some functionality to AWS by using the provided services. As a SaaS business, customer satisfaction is very important to them; consequently, they monitor all applications and services centrally, from both an end-user and a server-side perspective, with Compuware APM.
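
To make the setup more concrete, here is a hypothetical sketch, using the AWS SDK for Java, of what provisioning a dedicated per-tenant application instance on EC2 might look like. The AMI, instance type, and tagging scheme are placeholders and not PointInFact's actual provisioning code.

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.CreateTagsRequest;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;
    import com.amazonaws.services.ec2.model.RunInstancesResult;
    import com.amazonaws.services.ec2.model.Tag;

    public class TenantProvisioner {

        private final AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        /** Launches a dedicated application instance for a new tenant. */
        public String provisionTenant(String tenantId) {
            RunInstancesRequest request = new RunInstancesRequest()
                    .withImageId("ami-12345678")     // placeholder AMI with the application pre-baked
                    .withInstanceType("m3.large")    // placeholder instance size
                    .withMinCount(1)
                    .withMaxCount(1);

            RunInstancesResult result = ec2.runInstances(request);
            String instanceId = result.getReservation().getInstances().get(0).getInstanceId();

            // Tag the instance so monitoring and billing can be grouped per tenant.
            ec2.createTags(new CreateTagsRequest()
                    .withResources(instanceId)
                    .withTags(new Tag("tenant", tenantId)));

            return instanceId;
        }
    }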

Performance Degradation
After one deployment, the APM solution informed the operations team that user experience was degrading. A look at the geographical distribution showed them that this was not a localized phenomenon but a worldwide one.

Worldwide distribution of End User Experience

Notice all the red circles in the above screenshot; each red circle indicates frustrated users. One particularly interesting fact in this dashboard is that the average web page response time (upper-right corner) remains stable and well below the one-second mark. This means that the system in general is still running fine and is not in a general meltdown. However, it also shows why it is not a good idea to rely on averages for monitoring, and why server-side response times alone are not enough: your end-users are at the edge, around the world, and not sitting next to one of Amazon's data centers.
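
As a small illustration (not the customer's dashboard logic), the following Java snippet shows how a handful of one-minute page loads can hide behind an average that still looks healthy, while a simple per-request threshold exposes the frustrated users.

    import java.util.Arrays;

    public class AverageHidesOutliers {
        public static void main(String[] args) {
            // 1,000 page loads: most around 0.5 s, a handful degraded to ~60 s.
            double[] seconds = new double[1000];
            Arrays.fill(seconds, 0.5);
            for (int i = 0; i < 5; i++) {
                seconds[i] = 60.0; // the frustrated users
            }

            double mean = Arrays.stream(seconds).average().orElse(0);
            double max = Arrays.stream(seconds).max().orElse(0);
            long frustrated = Arrays.stream(seconds).filter(s -> s > 10.0).count();

            System.out.printf("mean = %.2f s (still under one second)%n", mean); // ~0.80 s
            System.out.printf("max = %.2f s, frustrated requests = %d%n", max, frustrated);
        }
    }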

The next thing the operations team did was look at the application flow. They were hoping for something big to show up immediately, but nothing much out of the ordinary appeared.

Complete Application Flow

This is not really surprising; they were looking at an application flow overview of about half a million transactions - the averaging effect at work.

The interesting takeaway, however, was that although user experience suffered across the board, it could not be attributed to a general meltdown of the environment. It was time to look at specific transaction types and their baselines.
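
For readers who have not worked with baselining, the sketch below shows one simple way to flag a violation per transaction type. The transaction names and the mean-plus-three-standard-deviations rule are illustrative assumptions, not the algorithm the APM product actually uses.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class BaselineChecker {

        /** Baseline = mean + 3 * standard deviation of historical response times (in seconds). */
        static double baseline(List<Double> history) {
            double mean = history.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double variance = history.stream()
                    .mapToDouble(t -> (t - mean) * (t - mean))
                    .average().orElse(0);
            return mean + 3 * Math.sqrt(variance);
        }

        public static void main(String[] args) {
            // Hypothetical historical response times per transaction type.
            Map<String, List<Double>> historyByType = new HashMap<>();
            historyByType.put("checkout", List.of(0.4, 0.5, 0.45, 0.55));
            historyByType.put("documentRequest", List.of(0.6, 0.7, 0.65, 0.6));

            // Current measurements per transaction type.
            Map<String, Double> current = Map.of("checkout", 0.5, "documentRequest", 45.0);

            current.forEach((type, value) -> {
                double limit = baseline(historyByType.get(type));
                if (value > limit) {
                    System.out.printf("%s violates its baseline: %.2f s > %.2f s%n", type, value, limit);
                }
            });
        }
    }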

High-level Performance dashboard that shows a response time violation

In the dashboard above, the highlighted upper-right chart shows that one particular service call in the application was off the charts. The dashboard also shows that, at the same time, the CPU (lower-left corner) of one of their servers was exhausted. Were the two events related even though they occurred on different hosts? A detailed look at the offending request type revealed something very interesting.

Detailed Performance Analysis of the offending request showing that most of the CPU is spent in XML and XSLT processing

The highlighted chart in the middle shows the CPU distribution of the offending service calls. CPU was spiking and the root cause could be attributed to XML processing and subsequent XSL transformations. This is indicated by the yellow and blue bars that represent XML and XSL processing, respectively. This was the reason for the CPU exhaustion noticed earlier.
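
For context, per-request XML/XSLT work of this kind typically looks like the following JAXP sketch; the class and file names are placeholders rather than the customer's code. Re-reading the stylesheet and re-running the transformation on every request is exactly the kind of CPU-bound work a cache in front of this call is meant to avoid.

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.File;
    import java.io.StringWriter;

    public class DocumentRenderer {

        /** Parses the stylesheet and document and runs the transformation - all CPU-bound. */
        public String render(File xmlDocument, File stylesheet) throws Exception {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(stylesheet));

            StringWriter out = new StringWriter();
            transformer.transform(new StreamSource(xmlDocument), new StreamResult(out));
            return out.toString();
        }
    }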

However, having determined the likely root cause of the slowdown, the PointInFact team took a step back and asked which users and documents were impacted.

This shows which End Users are impacted by the performance issue - focus on the last page load taking ~360s

This was very important for two reasons. First, it allowed them to be proactive with their own users who experienced slowdowns. Second, it further isolated the real problem area.

The Performance Bottleneck That Should Not Be
Now that the trigger for the slowdown was revealed, the performance team looked into the root cause. When looking at the Transaction Flow for the impacted business transactions, two things stood out.

The Application Flow for the offending requests shows that most time is spent in the service in the lower-right corner

One can see that most of the response time is spent in the document request service (lower-right corner). In addition, they knew from the previous dashboard that the application tier consumed a lot of CPU in XML/XSLT processing. The conclusion for the performance team was clear: caching did not work.

To understand this, we need to know that the document requests and subsequent transformations should only happen once per document. After that, all follow-up requests should take the result from the cache. PointInFact leverages the memcached-compliant AWS ElastiCache for this purpose. What the analysis revealed was that the same document was transformed many times; hence, caching was not working!
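
The intended flow is a classic cache-aside pattern. Here is a minimal sketch, assuming the memcached-compatible spymemcached client (the article does not say which cache client library PointInFact actually used); the endpoint, key scheme, and TTL are placeholders.

    import net.spy.memcached.MemcachedClient;
    import java.net.InetSocketAddress;

    public class CachedDocumentService {

        private final MemcachedClient cache;

        public CachedDocumentService(String cacheHost, int cachePort) throws Exception {
            // cacheHost/cachePort would point at the ElastiCache endpoint.
            this.cache = new MemcachedClient(new InetSocketAddress(cacheHost, cachePort));
        }

        public String getRenderedDocument(String documentId) throws Exception {
            String key = "doc:" + documentId;

            // 1. Check the cache first.
            Object cached = cache.get(key);
            if (cached != null) {
                return (String) cached;
            }

            // 2. Cache miss: do the expensive XML/XSLT transformation once ...
            String rendered = transformDocument(documentId);

            // 3. ... and store the result so follow-up requests for the same document
            //    are served from ElastiCache instead of being transformed again.
            cache.set(key, 3600, rendered);

            return rendered;
        }

        // Placeholder for the CPU-heavy XML/XSLT work sketched earlier.
        private String transformDocument(String documentId) {
            return "<html>rendered " + documentId + "</html>";
        }
    }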

The obvious conclusion was that there was a problem with ElastiCache. As this was a third-party component, the customer needed more information before approaching Amazon with support requests. Thanks to their APM strategy, they had sufficient insight into their usage of ElastiCache in production. This turned out to be very good, because opening a support ticket for ElastiCache would not only have been time-consuming, it would also have been futile, as we shall see.

Do or Do Not Cache, There Is No Try...
In an attempt to get more information about the caching problem, the customer identified the real root cause. While each of the offending document requests was doing a cache lookup upfront, none of them put the result in the cache afterwards. There was no problem with the cache; it simply was not being used.
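
Against the sketch above, a hypothetical reconstruction of the broken code path looks like this: the lookup is there, but the write-back never happens, so every request pays the full transformation cost. The names are illustrative, not taken from the customer's code.

    // Uses the same cache and transformDocument(...) as the sketch above.
    public String getRenderedDocumentBroken(String documentId) throws Exception {
        String key = "doc:" + documentId;

        Object cached = cache.get(key);   // the lookup happens on every request ...
        if (cached != null) {
            return (String) cached;       // ... but this branch is never taken
        }

        // Cache miss: the document is transformed yet again ...
        String rendered = transformDocument(documentId);

        // ... and returned without a cache.set(...), so the next request misses too.
        return rendered;
    }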

At this point, the development team started looking into it at the code level and identified a problem with the cache client library they were using. That cache client was also a third-party component, but now they had something tangible to share with its maintainers. Long story short, the bug was fixed upstream, and one deployment later the problem was solved to everybody's satisfaction.

Conclusion
To me, this story shows the true power of a continuous, holistic APM approach in a cloud-based environment.

  • The customer was able to identify a problem in production that had a big end-user impact, although the average transaction was still considered fast.
  • The operations team could identify exactly which users were impacted and be proactive in their customer support.
  • More importantly, the R&D team was able to identify the real root cause in one third-party component while avoiding a lengthy, and ultimately futile, back-and-forth with another third-party vendor (Amazon ElastiCache).

Finally, PointInFact was able to track down the root cause with sufficient depth to provide the responsible third party with a fix proposal, giving them a faster turnaround on a permanent solution. And all of this in a public, globally distributed, multi-tenant cloud application.

More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to Compuware APM dynaTrace, he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the center of excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage big data solutions and how these technologies impact and change the application landscape.
