Escape Application Performance "Groundhog Day" with Culture Change

When the problem of performance is addressed across the lifecycle, true change can be made

I love movies. There is just something about them that can teach us a lot about life. One of my favorites is "Groundhog Day." The premise of the movie is that the main character Phil, played by Bill Murray, is trapped living the same day over and over again. This is what it can be like for IT when it comes to managing application performance. When a problem is detected in production, a "Groundhog Day" process is put into action to try to address that issue. Once the problem is addressed, everyone resets and waits for the next problem to occur. In the end, the same process takes place over and over, and the team never really makes an impact on the performance of the application.

To escape "Groundhog Day," there are some key practices you can implement to build a team and transition the management of your applications' performance from firefighting to being truly proactive. Those key practices are:

  • Getting some separation with the performance team
  • Picking the right skill set
  • Having the right tools in place

Each of these practices changes the culture of application performance management (APM) from reactive to proactive. Together they address some of the pitfalls organizations face when trying to deal with performance problems.

To illustrate these concepts, let's look at how one of Compuware APM's customers, Raiffeisen Bank in Hungary, was able to change its culture of performance and escape its own "Groundhog Day." By implementing these practices, the company transitioned from variable performance in production with sporadic unplanned downtime to a more agile operation, deploying multiple releases a week with zero downtime.

Different Day - Same Results
Once Phil discovered that he was trapped in Punxsutawney, he attempted to escape his fate by repeating his actions over and over. Most companies do the same thing when it comes to APM. I have discussed this scenario before. At Raiffeisen this was a common occurrence surrounding its portal application.

The teams at Raiffeisen were constantly fighting the same battles. It didn't matter whether it was a peak traffic load or some random set of circumstances; the issue was always the same. The portal application's performance would degrade and the operations team had to spring into action. In some cases the only indication of a performance problem was end users calling in to report that they were having trouble.

Figure: Poor performance before the formation of a solid performance team

When a problem was detected, the operations team's process was reactive. First, the team would capture all the data relating to the performance of the portal: all the logs, dumps, and hardware statistics associated with it. That data then had to be manually correlated and communicated to all the team members. The process was highly resource-intensive in both time and manpower.
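
To make the cost of that manual step concrete, here is a minimal sketch of the kind of correlation a modern APM platform, or even a small script, automates: merging timestamped entries from several sources into one timeline around an incident window. The file names and timestamp format below are hypothetical illustrations, not anything from the Raiffeisen environment.

from datetime import datetime, timedelta

# Hypothetical data sources the team might have collected by hand.
SOURCES = {
    "portal_app": "portal_app.log",    # application log
    "jvm_gc": "jvm_gc.log",            # JVM garbage-collection log
    "host_stats": "host_metrics.log",  # CPU/memory samples
}

def parse_line(line):
    """Expect lines like '2012-03-05 14:02:17 <message>'; return (timestamp, message) or None."""
    try:
        ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        return ts, line[20:].rstrip()
    except ValueError:
        return None

def correlate(incident_time, window_minutes=10):
    """Collect entries from every source within +/- window_minutes of the incident."""
    lo = incident_time - timedelta(minutes=window_minutes)
    hi = incident_time + timedelta(minutes=window_minutes)
    timeline = []
    for source, path in SOURCES.items():
        with open(path) as f:
            for line in f:
                parsed = parse_line(line)
                if parsed and lo <= parsed[0] <= hi:
                    timeline.append((parsed[0], source, parsed[1]))
    # One chronological view instead of three separate files to read side by side.
    return sorted(timeline)

if __name__ == "__main__":
    for ts, source, msg in correlate(datetime(2012, 3, 5, 14, 2)):
        print(f"{ts}  [{source:10}]  {msg}")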

Even these measures did not guarantee that the performance issues would be identified in time to rectify them. Without understanding the root cause of the performance bottlenecks, the team was left with few options. In many cases the only way to address the issues in a timely fashion was a drastic one: restarting the application cluster. This was not an acceptable solution.

This is the "Groundhog Day" event that I am talking about. No matter what the team did the result was always the same. This is the constant fire drill that IT shops have been fighting about for years. The constant cycle of wait for the fire, find the fire, put out the fire as best as possible is an everyday occurrence for a lot of IT shops. Many APM tools that you might have implemented (legacy tools that can't copy any more and ‘point' solutions that only address one aspect of APM) in the past are usually only designed to cope with this cycle, not break it.

Continuous Improvement: The Only Way Out
Just as Phil was only able to escape his "day" after he began to continuously improve himself and those around him, the team at Raiffeisen realized the same thing about their own "Groundhog Day." They came to understand that what had to change was the way they were managing the performance of the portal application. The company made a conscious decision to fundamentally change the way it did APM as a team.

Get Some Separation
The first thing the company did was spin off a group into its own team, dedicated solely to managing application performance. This is a key difference between performance initiatives that succeed and those that fail. Most companies assign the task of performance to a single phase of the lifecycle, which limits the impact that team can have across the lifecycle as a whole. For example, if the team is attached to testing, it has very little influence over the architecture of the application, and no visibility beyond what the tools currently in place provide.

Being separate gives the team the visibility to look at the whole process and to hunt for bottlenecks wherever they occur in the lifecycle. This single act had a lasting impact at Raiffeisen. The team had the ability to scrutinize the application at every level, and when problems were discovered it had the visibility to make actionable recommendations.

Skills
The makeup of this team is as important as where it is positioned in the application lifecycle. When building the team to oversee the performance of the portal application, Raiffeisen was looking for the right people. Each member was hand-selected; this was not a task assigned wholesale to an existing team or group of teams.

Selecting members who are good detectives and subject-matter experts is better than selecting the leads of the different application support groups. Picking people who have never worked on the application means they come with no unhelpful prejudices or preconceptions: they are a blank slate, with no familiarity with the application that could lead them to overlook a problem.

Raiffeisen was very specific about how this team was to be built. Members were selected based on their skills, not their knowledge of the application. No member of the team had any previous knowledge of the portal environment, nor had any been involved in the development or design phases. This was very important when it came to scrutinizing the application: the whole application was suspect until proven otherwise.

Right Tools for the Job
As discussed earlier, most performance tools that are in place are only good for fighting fires, not preventing them. To complete the transformation from reactive to proactive you need a solution that lets you manage performance across the entire application lifecycle. Having the integrations and the ability to plug into the release process is crucial when selecting an APM solution. Any solution that cannot do this is only upgrading the fire extinguisher.
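
What "plugging into the release process" can look like in practice is a performance gate in the build pipeline. The sketch below is a generic illustration, not Compuware APM's actual API: a small script a CI job could run after a load test, comparing measured response times (read from a hypothetical results file) against agreed baselines and failing the build on a regression, so problems are caught before production rather than fought there.

import json
import sys

# Hypothetical baselines per transaction, in milliseconds, plus allowed drift.
BASELINE_MS = {"login": 3000, "account_overview": 1500}
TOLERANCE = 1.10  # allow 10% over baseline before failing

def check(results_path):
    """Return a list of (transaction, measured_ms, baseline_ms) that breach the gate."""
    with open(results_path) as f:
        measured = json.load(f)  # e.g. {"login": 2800, "account_overview": 1700}
    failures = []
    for transaction, limit in BASELINE_MS.items():
        actual = measured.get(transaction)
        if actual is None or actual > limit * TOLERANCE:
            failures.append((transaction, actual, limit))
    return failures

if __name__ == "__main__":
    results_file = sys.argv[1] if len(sys.argv) > 1 else "load_test_results.json"
    failures = check(results_file)
    for transaction, actual, limit in failures:
        print(f"FAIL {transaction}: {actual} ms (baseline {limit} ms)")
    sys.exit(1 if failures else 0)  # a non-zero exit code blocks the release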

This is why Raiffeisen selected Compuware APM, and it was the turning point for the performance of the portal application. With the solution in place, the newly formed team broke the firefighting cycle and started transforming the way Raiffeisen managed application performance. Teams easily shared data with each other, and they could clearly see which actions were needed to improve the application's performance.

Fast Turn Around
The last "day" for Phil was after he was able to see how his life impacted those around him. With that he was able to escape Puxatawney and move on. With Raiffeisen that "day" ended two months after starting the project. The team was able to make a huge impact and change how the application was performing.

At the start of 2012 the performance team was created and Compuware APM was implemented. One month later the new application performance troubleshooting team had completed a full analysis of the portal application and created custom dashboards and views for every team involved with the portal: developers, testers, and operations managers. Developers then began implementing changes to the application based on the troubleshooting team's findings. In March of 2012 those changes were tested and released into the production environment. Since the end of March there have been no unplanned outages, and updates have become an almost daily occurrence. The team can also find problems over 30x faster than before and has improved the average login transaction from 10 seconds to 3 seconds.

When the problem of performance is addressed across the lifecycle, true, lasting change can be made in the way performance is managed. That is why managing performance is not just about having the newest tool in production; a tool alone simply repeats the same cycle, even if it makes each "day" seem easier than the last. A company must realize that it is the combination of software, cultural change, and process that creates a lasting effect.

More Stories By Stephen Wilson

Stephen Wilson is a 15-year IT professional who currently holds the Subject Matter Expert role for Compuware APM within the Field Technology Sales organization. His role puts him in front of customers and their challenges on a daily basis. His background includes both development and operations, and this insight into the challenges faced by developers as well as by operations teams allows him to act as a trusted advisor to his customers. His perspective on client needs and goals lends credibility to the need for performance not just at one level but across the entire lifecycle.
