
Managing the Performance of Cloud-Based Applications

Here are a few important considerations for choosing an APM solution that works in the cloud

In the last post I covered several architectural techniques you can use to build a highly scalable, failure-resistant application in the cloud. However, these architectural changes - along with the inherent unreliability of the cloud - introduce new problems for application performance management. Many organizations rely on logging, profilers, and legacy application performance monitoring (APM) solutions to monitor and manage performance in the data center, but those strategies and tools simply aren't enough once you move into the cloud. Here are a few important considerations for choosing an APM solution that works in the cloud.

Business Transactions
Many monitoring solutions check for server availability and alert users when a server goes down. In the cloud, however, servers come and go all the time, so alerting on availability alone produces a lot of false positives. In addition, many of the server-level metrics that APM tools and server monitoring tools report are no longer as relevant as they were on a vertically scaled system. For example, what does 90% CPU utilization mean for the behavior of your cloud application? Does it indicate an impending performance problem that needs to be addressed, or simply that more servers need to be added to that tier? The same goes for other metrics, such as physical memory usage, JVM memory usage, thread usage, database connection pool usage, and so on. These are all good indicators of the health of a single server, but when servers can come and go they are no longer the best approximation of the performance of your application as a whole.

Instead, it's best to understand performance in terms of Business Transactions. A Business Transaction is essentially a user request - for an eCommerce application, "Check out" and "Add to Cart" might be two important Business Transactions. Each Business Transaction includes all of the downstream activity that occurs until the end user receives a response (and perhaps more, if your application uses asynchronous communication). For example, an application may define a service that validates the request, stores data in a database, and then publishes a message to a topic. A JMS listener might receive that message from the topic, call an external service, and then store the data in a Hadoop cluster. All of these activities need to be grouped into a single Business Transaction so that you can understand how every part of your system affects your end users.
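
As a rough illustration of how those hops get grouped, here is a minimal Java sketch (not taken from the original post) in which an in-memory queue stands in for the JMS topic and a correlation ID carried in the message headers ties the web-service tier and the listener tier into one "Check out" Business Transaction; the class, field, and header names are made up for the example.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Sketch: one Business Transaction spanning a web-service tier and an
 * asynchronous listener tier. The correlation ID attached to the queued
 * message is what lets a monitoring agent group both hops under "Check out".
 * The in-memory queue stands in for a JMS topic; all names are illustrative.
 */
public class BusinessTransactionSketch {

    // Stand-in for a JMS topic: each message carries a payload plus headers.
    record Message(String payload, Map<String, String> headers) {}

    static final BlockingQueue<Message> TOPIC = new LinkedBlockingQueue<>();

    // Tier 1: the web service validates the request and publishes downstream.
    static void handleCheckout(String order) throws InterruptedException {
        String txId = UUID.randomUUID().toString();          // Business Transaction id
        long start = System.nanoTime();
        // ... validate request, store data in the database ...
        TOPIC.put(new Message(order, Map.of("btx-id", txId, "btx-name", "Check out")));
        report("Check out", "web-service-tier", txId, System.nanoTime() - start);
    }

    // Tier 2: the listener picks up the message and continues the same transaction.
    static void listen() throws InterruptedException {
        Message msg = TOPIC.take();
        String txId = msg.headers().get("btx-id");            // same transaction, new tier
        long start = System.nanoTime();
        // ... call the external service, write to the Hadoop cluster ...
        report(msg.headers().get("btx-name"), "listener-tier", txId, System.nanoTime() - start);
    }

    // A real agent would report this to an APM backend; here we just print it.
    static void report(String btxName, String tier, String txId, long nanos) {
        System.out.printf("%s | %s | %s | %.2f ms%n", btxName, tier, txId, nanos / 1e6);
    }

    public static void main(String[] args) throws InterruptedException {
        handleCheckout("order-42");
        listen();
    }
}
```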


With the various tiers tracked as part of each Business Transaction, the next step is to measure performance at the tier level. Knowing when a Business Transaction as a whole is behaving abnormally is important, but it is equally important to detect performance anomalies at the tier level. If the response time of a Business Transaction as a whole is slower by one standard deviation (which is acceptable) but one of its tiers is slower by three standard deviations, you may have a problem developing, even though it hasn't affected your end users yet. Chances are the tier's problem will grow into a systemic problem that causes multiple Business Transactions to suffer.

Returning to the example above, let's say the web service behaves well but the topic listener is significantly slower than usual. The listener has not yet caused a problem in the Business Transaction itself, but it has slowed down enough to cause concern, so there may be an issue that needs to be addressed. Business Transactions therefore need to be evaluated both as a whole and at the tier level in order to catch performance issues before they reach your end users. The only way to effectively monitor the performance of an application in a dynamic environment is to capture metrics at both the Business Transaction level and the tier level.
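
Here is a minimal sketch of that tier-level check, assuming per-tier baselines (mean and standard deviation) are already available from historical data; the tier names and numbers below are illustrative only.

```java
import java.util.Map;

/**
 * Sketch: evaluate a Business Transaction both as a whole and per tier.
 * Baseline means and standard deviations are assumed to come from
 * historical data; the figures below are made up.
 */
public class TierAnomalyCheck {

    record Baseline(double meanMs, double stdDevMs) {
        double deviations(double observedMs) {
            return (observedMs - meanMs) / stdDevMs;
        }
    }

    public static void main(String[] args) {
        Baseline wholeTransaction = new Baseline(1750, 125);
        Map<String, Baseline> tierBaselines = Map.of(
                "web-service-tier", new Baseline(400, 50),
                "listener-tier", new Baseline(900, 80));

        double observedTotalMs = 1850;                       // ~0.8 sigma: acceptable
        Map<String, Double> observedTierMs = Map.of(
                "web-service-tier", 410.0,
                "listener-tier", 1200.0);                    // ~3.75 sigma: a problem brewing

        double total = wholeTransaction.deviations(observedTotalMs);
        System.out.printf("Transaction as a whole: %.1f standard deviations%n", total);

        // Even when the overall response time looks fine, flag any tier that
        // has drifted three or more standard deviations from its own baseline.
        observedTierMs.forEach((tier, ms) -> {
            double dev = tierBaselines.get(tier).deviations(ms);
            if (dev >= 3.0) {
                System.out.printf("WARN %s is %.1f standard deviations slow%n", tier, dev);
            }
        });
    }
}
```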

Baselines
One of the most important reasons organizations move to the cloud is the ability to scale applications up and down rapidly as load changes. If the load on your application fluctuates dramatically over the course of a day, week, or year, the cloud lets you scale your application infrastructure efficiently to meet that load. However, most application monitoring tools are not equipped to handle such dramatic shifts in load or performance. Monitoring tools that rely on static thresholds for alerting and data collection will create alert storms when load increases and miss real problems when it decreases. You need to be able to understand what normal performance is for a given time of day, day of the week, or time of year, which is best done by baselining the performance of your application over time.

Baselining your application essentially means collecting data about how your application (or a specific Business Transaction) performs at any given time. Having this data allows you, or your APM solution, to determine whether the way your application is performing right now is normal or a sign of a problem. Baselines can be defined on a per-hour basis over a period of time - for example, for the past 30 days, how has Checkout performed from 9:00am to 10:00am? In this configuration, the response time of a specific Business Transaction is compared to the average response time for that Business Transaction over the past 30 days, between the hours of 9:00am and 10:00am. If the response time deviates from that average by more than a chosen threshold, such as two standard deviations, the monitoring system should raise an alert. To make this concrete, consider an example.

Suppose the average response time for this Business Transaction, captured over the past 30 days between 9:00am and 10:00am, is about 1.75 seconds, with two standard deviations spanning roughly 1.5 to 2 seconds. Every incoming occurrence of this Business Transaction during that hour is compared against the 1.75-second average, and if its response time exceeds two standard deviations above that norm (2 seconds), an alert is raised.
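
A minimal sketch of this kind of hourly baselining follows, assuming response times are simply bucketed by hour of day and tracked with a running mean and standard deviation; a real APM backend would also handle the 30-day retention window, persistence, and seasonality, which are omitted here.

```java
import java.time.LocalDateTime;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of per-hour baselining: response times are bucketed by hour of day,
 * a rolling mean and standard deviation are kept per bucket (Welford's
 * algorithm), and a new sample raises an alert when it exceeds the hourly
 * mean by more than two standard deviations.
 */
public class HourlyBaseline {

    // Running statistics for one hour-of-day bucket.
    static final class Stats {
        long n;
        double mean;
        double m2;   // sum of squared deviations, for variance

        void add(double x) {
            n++;
            double delta = x - mean;
            mean += delta / n;
            m2 += delta * (x - mean);
        }

        double stdDev() {
            return n > 1 ? Math.sqrt(m2 / (n - 1)) : 0.0;
        }
    }

    private final Map<Integer, Stats> byHour = new HashMap<>();

    // Record a sample and report whether it breaches the 2-sigma threshold.
    boolean isAnomalous(LocalDateTime when, double responseTimeMs) {
        Stats stats = byHour.computeIfAbsent(when.getHour(), h -> new Stats());
        boolean anomalous = stats.n > 1
                && responseTimeMs > stats.mean + 2 * stats.stdDev();
        stats.add(responseTimeMs);
        return anomalous;
    }

    public static void main(String[] args) {
        HourlyBaseline checkout = new HourlyBaseline();
        LocalDateTime nineAm = LocalDateTime.of(2014, 6, 2, 9, 15);

        // Seed the 9:00-10:00 bucket with typical samples around 1.75 seconds.
        for (double ms : new double[]{1700, 1750, 1800, 1725, 1775}) {
            checkout.isAnomalous(nineAm, ms);
        }

        System.out.println(checkout.isAnomalous(nineAm, 1780));  // within normal range
        System.out.println(checkout.isAnomalous(nineAm, 2300));  // past two sigma -> true
    }
}
```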

What happens if the behavior of your users differs from day to day or month to month? Your monitoring solution should be configurable enough to handle this. Banking applications probably see spikes in load twice a month when most people get paid, and eCommerce applications are inundated on Black Friday. By baselining the performance of these applications over the year, an APM tool can anticipate this load and expect slower performance during these periods. Make sure your APM tool is configurable or intelligent enough to understand what "normal" behavior looks like for your application.

Dynamic Application Mapping
Many monitoring solutions today require manual configuration to instrument and monitor a new server. If servers are appearing and disappearing all the time, however, that manual step leaves blind spots while you update the tool to reflect the new environment, and it quickly becomes untenable as your application scales. A cloud-ready monitoring tool must automatically detect and map the application in real time, so you always have an up-to-date picture of what your application looks like. For agent-based monitoring solutions, this can be accomplished by deploying the agent alongside your application so that new nodes are automatically instrumented by your APM solution of choice.
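
As a loose illustration of the idea (not how any particular APM agent actually works), the sketch below has each new node announce itself when the JVM starts and deregister when it stops, so the application map can stay current without manual configuration; the controller URL, tier property, and payload are hypothetical.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.UUID;

/**
 * Sketch of the "agent ships with the app" idea: every new node announces
 * itself at JVM startup and says goodbye at shutdown, so the application map
 * stays current without manual configuration. The controller endpoint and
 * payload format are hypothetical; a real APM agent handles registration and
 * bytecode instrumentation for you.
 */
public class SelfRegisteringNode {

    private static final String CONTROLLER_URL = "https://apm.example.com/nodes"; // hypothetical

    public static void main(String[] args) throws UnknownHostException {
        String nodeId = UUID.randomUUID().toString();
        String host = InetAddress.getLocalHost().getHostName();
        String tier = System.getProperty("app.tier", "web-service-tier");

        // A real agent would POST this to the controller; we only print it here.
        System.out.printf("register -> %s {node=%s, host=%s, tier=%s}%n",
                CONTROLLER_URL, nodeId, host, tier);

        // Deregister on shutdown so terminated cloud instances drop off the map
        // instead of showing up as "down" servers and triggering false alerts.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.printf("deregister -> %s {node=%s}%n", CONTROLLER_URL, nodeId)));

        // ... application work would happen here ...
    }
}
```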

The post Managing the Performance of Cloud-Based Applications, written by Dustin Whittle, appeared first on the Application Performance Monitoring Blog from AppDynamics.

