
Losing Sleep Over Monitoring Complex Distributed Java Apps?

Analytics, metrics and modeling to the rescue

When IT people think about application performance monitoring, they are usually thinking about which metrics to monitor. Resource metrics include CPU utilization, disk queue length, and thread pool size; performance metrics include application response time, responses per interval of time, and concurrent invocations of an application.

"Modeling" is probably not the first term that comes to mind when considering application performance monitoring. But, in fact, "modeling" is exactly what a "domain expert" does when he decides how application components are related with one another, and which metrics matter in gauging application performance.

The problem for IT organizations is how to extract this type of "institutional knowledge" from a handful of experts and make it accessible and relevant to more people in IT Operations and Application Support. Whether you are talking about a complex approach like UML diagrams or something easier to grasp like calculating workload for your monitored elements, a model is simply an abstraction of best practices that makes application performance easier to understand.

Gartner underscores the importance of modeling in its analysis of the APM market. Its Magic Quadrant for Application Performance Monitoring discusses five functional dimensions, one of them being "runtime application architecture discovery, modeling, and display." This is the discovery of the hardware and software components of an application and the communication paths connecting these components together. Put even more simply, one of the key criteria for a good APM solution is to discover and create an accurate model.

Let's go through a brief example of why application modeling is so important for performance monitoring, and why Netuitive put so much effort into it in our recent Netuitive 6.0 release.

A typical Java application runs on an application server such as Tomcat, JBoss, WebSphere, or WebLogic. Because the application is distinct from the application server and JVM, it makes sense to model these as separate components.

The application has performance metrics such as response time and responses per time interval. The application server has JVM resource metrics such as CPU utilization and thread pool size.
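
To make the separation concrete, here is a minimal Java sketch of the two components as separate monitored entities, each owning only the metrics that belong to it. The class and metric names are illustrative, not Netuitive's actual data model.

    import java.util.HashMap;
    import java.util.Map;

    // A generic monitored entity that owns a named set of metrics.
    class MonitoredEntity {
        final String name;
        final Map<String, Double> metrics = new HashMap<>();

        MonitoredEntity(String name) { this.name = name; }

        void record(String metric, double value) { metrics.put(metric, value); }
    }

    public class SeparateModels {
        public static void main(String[] args) {
            // Performance metrics live on the application...
            MonitoredEntity app = new MonitoredEntity("order-service");
            app.record("responseTimeMs", 120.0);
            app.record("responsesPerMinute", 450.0);

            // ...while resource metrics live on the application server / JVM.
            MonitoredEntity appServer = new MonitoredEntity("tomcat-node-1");
            appServer.record("cpuUtilizationPct", 85.0);
            appServer.record("threadPoolSize", 200.0);
        }
    }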

Traditional "monolithic" models of performance combine metrics for an application and its application server into a single entity. But this monolithic approach makes it more difficult to model a scenario where multiple applications run on the same application server.

The monolithic approach is also not as intuitive if you want to quickly see if there is a problem with an application. It is straightforward to mark an application as "red" if its response time is increasing and to mark an application server as "red" if CPU utilization is high. But if resource and performance metrics are combined together, do you mark an application as red if CPU utilization is high? It isn't clear. High CPU utilization may not necessarily affect application performance, but you still want to know about it from a resource utilization perspective.
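
With separate entities, each status rule is unambiguous. A hypothetical sketch, with thresholds invented purely for illustration:

    enum Health { GREEN, RED }

    class HealthRules {
        // Rising response time marks the application red...
        static Health applicationHealth(double responseTimeMs, double baselineMs) {
            return responseTimeMs > baselineMs * 1.5 ? Health.RED : Health.GREEN;
        }

        // ...while high CPU marks the application server red. Neither rule
        // has to guess how the other entity's metrics should be interpreted.
        static Health serverHealth(double cpuUtilizationPct) {
            return cpuUtilizationPct > 90.0 ? Health.RED : Health.GREEN;
        }
    }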

But a "monolithic" model is no longer appropriate for today's distributed enterprise applications. A modern Java application runs on multiple application servers in a clustered architecture. The cluster provides increased scalability and redundancy as more cluster nodes are added.

The most typical way to model an application cluster is as a cluster entity that contains multiple application servers.

This model focuses primarily on infrastructure, where one can determine if resources are evenly distributed among cluster nodes.
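
As a rough illustration, such a cluster entity might boil down to a container of per-node resource readings plus an evenness check. The names and the balance heuristic below are assumptions for the example, not any product's actual logic.

    import java.util.List;

    class ServerCluster {
        final List<Double> nodeCpuPct; // one CPU reading per cluster node

        ServerCluster(List<Double> nodeCpuPct) { this.nodeCpuPct = nodeCpuPct; }

        // Crude evenness check: flag the cluster if any node strays too far
        // from the mean utilization.
        boolean isBalanced(double tolerancePct) {
            double mean = nodeCpuPct.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
            return nodeCpuPct.stream().allMatch(c -> Math.abs(c - mean) <= tolerancePct);
        }
    }

For example, readings of 40%, 45%, and 90% CPU would fail a 10-point tolerance check, pointing to a load-balancing problem rather than an application problem.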

You can also adopt a more "application-centric" model by creating a cluster that contains only the applications.

This model provides more visibility into total application throughput and average response time. It focuses mainly on application performance throughout the entire cluster.
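
A comparable sketch for the application-centric view, again with illustrative names, shows how cluster-level throughput and response time fall out of simple aggregation over the application instances:

    import java.util.List;

    class AppCluster {
        final List<Double> throughputPerNode;     // responses per minute, per instance
        final List<Double> responseTimeMsPerNode; // average response time, per instance

        AppCluster(List<Double> throughputPerNode, List<Double> responseTimeMsPerNode) {
            this.throughputPerNode = throughputPerNode;
            this.responseTimeMsPerNode = responseTimeMsPerNode;
        }

        // Total application throughput is the sum across all instances.
        double totalThroughput() {
            return throughputPerNode.stream().mapToDouble(Double::doubleValue).sum();
        }

        // Response time is averaged; unweighted here for simplicity, though a
        // throughput-weighted average would be more faithful.
        double averageResponseTimeMs() {
            return responseTimeMsPerNode.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        }
    }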

The bottom line is that a good model is essential for understanding and evaluating application performance. Today's distributed enterprise-class Java applications are more complex than ever, and depending on the "institutional knowledge" of a handful of application support experts is risky. Predictive IT analytics have now advanced to the point of eliminating this risk by condensing modeling best practices into templates that define which metrics matter, and by distilling the analysis of those metrics into composite health and workload indices.
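
As a loose illustration of what a composite index means, the sketch below normalizes each metric against an expected range and combines the results with weights. The ranges and weights are invented for the example; an analytics product would derive them from learned baselines rather than hard-code them.

    class CompositeIndex {
        // Normalize a metric against an expected range, clamped to [0, 1].
        static double normalize(double value, double low, double high) {
            return Math.max(0.0, Math.min(1.0, (value - low) / (high - low)));
        }

        // Combine normalized metrics into a single 0-to-1 health score.
        static double healthIndex(double responseTimeMs, double errorRatePct) {
            double rt  = normalize(responseTimeMs, 50.0, 500.0); // assumed range
            double err = normalize(errorRatePct, 0.0, 5.0);      // assumed range
            return 1.0 - (0.7 * rt + 0.3 * err);                 // assumed weights
        }
    }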

To learn more about how this all works, check out our white paper on monitoring distributed Java applications.

More Stories By Richard Park

Richard Park is Director of Product Management at Netuitive. He currently leads Netuitive's efforts to integrate with application performance and cloud monitoring solutions. He has nearly 20 years of experience in network security, database programming, and systems engineering. Some past jobs include product management at Sourcefire and Computer Associates, network engineering and security at Booz Allen Hamilton, and systems engineering at UUNET Technologies (now part of Verizon). Richard has an MS in Computer Science from Johns Hopkins, an MBA from Harvard Business School, and a BA in Social Studies from Harvard University.
