What Customers Expect in a New-Generation APM (2.0) Solution

In the last blog, I discussed the challenges with an APM 1.0 solution. 

 

As an application owner or a member of an application support team, you want to:

  • Exceed service levels and avoid costly, reputation-damaging application failures through improved visibility into the end-user experience
  • Ensure reliable, high-performing applications by detecting problems faster and prioritizing issues based on service levels and impacted users
  • Improve time to market with new applications, features, and technologies, such as virtualization, acceleration, and cloud-based services

 

APM 2.0 products enable you to manage application performance, leading with real-user activity monitoring. Here are some of the top capabilities they provide to help you achieve your business objectives.

Visibility into real users and end-user-driven diagnostics

  • APM 2.0 solutions provide visibility into end-to-end application performance as experienced by real end users, and help application support focus on the critical issues affecting those users.

 

The dashboard shown in Figure 1, as an example, provides visibility into application performance as experienced by users in real time.

 

[Figure 1: Real-time view of application performance as experienced by users]
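A dashboard like the one in Figure 1 is typically fed by aggregating timing beacons from real user sessions. As a minimal sketch of that aggregation (all field names and the nearest-rank percentile policy are illustrative assumptions, not any specific product's data model):

```python
from statistics import median

def percentile(values, pct):
    """Return the pct-th percentile of values (nearest-rank method)."""
    ordered = sorted(values)
    idx = min(len(ordered) - 1, int(round(pct / 100 * len(ordered) + 0.5)) - 1)
    return ordered[max(idx, 0)]

def summarize_beacons(beacons):
    """Group real-user beacons by page and compute latency stats (ms)."""
    by_page = {}
    for b in beacons:
        by_page.setdefault(b["page"], []).append(b["response_ms"])
    return {
        page: {
            "count": len(times),
            "median_ms": median(times),
            "p95_ms": percentile(times, 95),
        }
        for page, times in by_page.items()
    }

# Hypothetical beacons collected from two user sessions
beacons = [
    {"page": "/checkout", "response_ms": 420},
    {"page": "/checkout", "response_ms": 1800},
    {"page": "/home", "response_ms": 150},
]
summary = summarize_beacons(beacons)
```

A real RUM pipeline would do this continuously over a sliding window; the point is that the dashboard view is derived from per-page statistics over real-user timings, not synthetic probes.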

 

 

  • As an application owner, you probably care about which users are impacted, which pages they are navigating, and what kinds of errors they are getting. You want your APM product to improve MTTR by identifying what is causing the latency or failure, e.g. the network, the load balancer, an ADN such as Akamai, SSL, or the application tier itself. Figure 2 shows a specific user session and the pages the user navigated, and identifies the application tier as the cause.

[Figure 2: A specific user session, the pages navigated, and the identified culprit tier]

  • The “details” link in Figure 2 allows application support personnel to drill down further into which application tier is the culprit for the slow or failed transaction, in the context of that specific user. This allows them to trace an end-user request down to the line of code.
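Isolating the culprit tier, as described above, boils down to attributing a request's total latency to its segments and flagging the dominant one. A toy illustration, with hypothetical segment names and timings:

```python
def slowest_segment(timings):
    """Given per-segment latencies (ms) for one request, return the
    segment that contributed the most time and its share of the total."""
    total = sum(timings.values())
    segment = max(timings, key=timings.get)
    return segment, timings[segment] / total

# Hypothetical breakdown for one slow end-user request
timings = {
    "network": 80,
    "load_balancer": 20,
    "cdn": 40,           # e.g. an ADN such as Akamai
    "ssl_handshake": 60,
    "application": 1800,
}
culprit, share = slowest_segment(timings)
# Here the application tier dominates the request's latency
```

In this example the application tier accounts for 90% of the 2,000 ms total, which is exactly the kind of verdict the session view in Figure 2 surfaces without manual digging.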

Ease of use and superior time-to-value

You want a product that is simple for your application support and operations team to use.

  • A modern APM solution does not require manual definition of instrumentation policies.
  • It should not require manual changes, such as JavaScript injection, to gain visibility into the end user.
  • APM 2.0 tools provide the ability to drill down from the end user into deep-dive diagnostics, and to drill up from deep-dive data to the impacted user and the transaction context, without manual correlation or jumping between consoles.
  • Agent installation is typically a 5-10 minute process with modern APM deep-dive tools.
  • An APM 2.0 deep-dive solution automatically detects application servers, business transactions, frameworks, and so on.
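The drill-down/drill-up correlation mentioned above typically relies on a shared request identifier stitched through both the front-end beacon and the back-end trace. A simplified sketch, with hypothetical field names:

```python
def correlate(user_sessions, backend_traces):
    """Join front-end sessions to back-end traces on request_id so a
    slow user action can be drilled into, and a slow trace can be
    drilled 'up' to the user it impacted."""
    traces_by_id = {t["request_id"]: t for t in backend_traces}
    joined = []
    for s in user_sessions:
        trace = traces_by_id.get(s["request_id"])
        if trace:
            joined.append({
                "user": s["user"],
                "page": s["page"],
                "browser_ms": s["browser_ms"],
                "server_ms": trace["server_ms"],
                "slow_tier": trace["slow_tier"],
            })
    return joined

# Hypothetical data: one user session matched to its server-side trace
sessions = [{"request_id": "r1", "user": "alice",
             "page": "/checkout", "browser_ms": 2300}]
traces = [{"request_id": "r1", "server_ms": 2100, "slow_tier": "database"}]
linked = correlate(sessions, traces)
```

Because the join key travels with the request, neither the operator nor the developer has to manually match timestamps across two consoles.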

 

Figure 3 shows a specific user transaction request and its latencies by tier. It also shows the SQL statements executed and their latencies.

 

[Figure 3: A user transaction request with latencies by tier and SQL latencies]

 

Suitable for production deployment

  • The real user monitoring tool should be non-invasive in nature and should not add overhead to application response time.
  • You should be able to deploy an always-on, deep-dive monitoring and diagnostic solution for your production enterprise and cloud-based applications.
  • It should work in an agile environment without requiring new instrumentation policies to be configured for each application release.
  • It should scale to the thousands of application servers you want to manage in a large production deployment.
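Keeping an always-on, deep-dive agent's overhead bounded in production is commonly done by sampling rather than tracing every request. A minimal rate-limited sampler sketch (the window size and limit are illustrative, not any product's defaults):

```python
class RateLimitedSampler:
    """Trace at most max_per_window requests per time window so the
    agent's overhead stays bounded even under heavy production load."""

    def __init__(self, max_per_window, window_seconds=1.0):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self.window_start = 0.0
        self.count = 0

    def should_trace(self, now):
        # Reset the counter when a new window begins
        if now - self.window_start >= self.window_seconds:
            self.window_start = now
            self.count = 0
        if self.count < self.max_per_window:
            self.count += 1
            return True
        return False

sampler = RateLimitedSampler(max_per_window=2)
decisions = [sampler.should_trace(t) for t in (0.0, 0.1, 0.2, 1.5)]
# First two requests in the window are traced, the third is skipped,
# and the window resets for the request at t=1.5
```

Production agents layer smarter policies on top (e.g. always trace requests that are already slow), but the principle is the same: diagnostic depth without tracing 100% of traffic.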

 

Operations-ready and enabling DevOps collaboration

APM 1.0 products were originally built for developers, so they were not very intuitive for operations use. APM 2.0 products are operations-friendly, and you would also expect some of them to enable DevOps collaboration through intelligent escalation to development.

  • Most application support personnel do not understand which frameworks or application technologies an application uses. The majority of deep-dive tools in the market jump straight from a transaction view to the line of code, and thus provide little value to the operations team.

 

For example, Figure 4 shows a transaction broken down by the specific technologies it uses. It also provides baselines for the different tiers, along with system resource usage per tier, to support intelligent decisions. Figure 3 shows an application flow map for a specific transaction and the time spent in each SQL call or remote web service call, without having to drill down to the line of code.

[Figure 4: Transaction breakdown by technology, with per-tier baselines and resource usage]
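Baselining of the kind Figure 4 implies can be reduced to comparing a tier's current latency against its historical norm. A simplified check (the three-sigma threshold is an illustrative choice, not a product default):

```python
from statistics import mean, stdev

def deviates(history_ms, current_ms, sigmas=3.0):
    """Flag current_ms if it is more than `sigmas` standard deviations
    above the historical mean for this tier."""
    baseline = mean(history_ms)
    spread = stdev(history_ms)
    return current_ms > baseline + sigmas * spread

# Hypothetical recent latencies (ms) for one application tier
history = [100, 105, 98, 102, 101, 99, 103]
flag_slow = deviates(history, 400)   # well above the baseline
flag_ok = deviates(history, 104)     # within normal variation
```

Baselines like this are what let an operations team judge "is this tier slow for *this* application?" without knowing the frameworks underneath it.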

 

  • There are many instances where the operations team needs to escalate problems to developers. The tool should allow application support personnel to escalate to Tier 3/development for diagnostics by sending a direct link to the diagnostic instance. However, in many organizations developers do not have access to the production environment; as shown in Figure 5, the solution from BMC allows exporting the diagnostic data (the call tree with latencies, parameters, etc.) in HTML format.

 

[Figure 5: Diagnostic call tree with latencies and parameters exported as HTML]
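Exporting a call tree with latencies to HTML, as in the Figure 5 example, amounts to a recursive render of the captured method calls. A stripped-down illustration; the data shape here is an assumption, not BMC's actual export format:

```python
from html import escape

def render_call_tree(node, depth=0):
    """Render one call-tree node (and its children) as nested HTML
    list items carrying the method name and latency."""
    indent = "  " * depth
    html = (f"{indent}<li>{escape(node['method'])} "
            f"({node['latency_ms']} ms)")
    if node.get("children"):
        html += "<ul>\n"
        html += "\n".join(render_call_tree(c, depth + 1)
                          for c in node["children"])
        html += f"\n{indent}</ul>"
    return html + "</li>"

# Hypothetical diagnostic instance: a servlet call and its slow child
tree = {
    "method": "CheckoutServlet.doPost",
    "latency_ms": 2100,
    "children": [
        {"method": "OrderDao.save", "latency_ms": 1900, "children": []},
    ],
}
page = "<ul>\n" + render_call_tree(tree) + "\n</ul>"
```

The value of the HTML form is that a developer with no production access can still read the full call tree in a browser, exactly as the escalating operator saw it.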

Adaptive to virtualization and cloud environments

The new APM 2.0 products are purpose-built and architected for cloud and virtualized environments. 

  • APM 2.0 product components and agents are designed to communicate over a firewall-friendly protocol, and their traffic can be encrypted and secured.
  • They support virtualized and dynamic environments without generating a lot of false alerts.
  • They support modern cloud frameworks and Big Data platforms such as Hadoop.
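A firewall-friendly agent protocol usually means outbound-only HTTPS: a single encrypted connection on port 443 that most firewalls already allow. As a sketch of what that might look like (the endpoint and payload are hypothetical):

```python
import json
from urllib import request

def build_report(endpoint, metrics):
    """Package agent metrics as a JSON POST to an HTTPS endpoint.
    Only an outbound request is needed, so no inbound ports have to
    be opened on the agent host, and TLS encrypts the data in transit."""
    body = json.dumps(metrics).encode("utf-8")
    return request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_report("https://apm.example.com/ingest",
                   {"tier": "app", "avg_ms": 240})
# req would be sent with request.urlopen(req) from inside the firewall
```

This is why cloud-hosted APM back ends can collect from agents in private data centers without any special network configuration.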

 

Conclusion

APM 2.0 solutions provide the functionality you need to manage your applications, helping you exceed business expectations and increase customer loyalty. These tools improve time to market, and they help you understand how application performance affects user behavior, and how that behavior impacts the bottom line. You can leverage an APM 2.0 solution like BMC Application Performance Management to improve your application performance and thus meet your business objectives.


More Stories By Debu Panda

Debu Panda is a Director of Product Management at Oracle Corporation. He is lead author of the EJB 3 in Action (Manning Publications) and Middleware Management (Packt). He has more than 20 years of experience in the IT industry and has published numerous articles on enterprise Java technologies and has presented at many conferences. Debu maintains an active blog on enterprise Java at http://debupanda.blogspot.com.
