Eleven Tips to Becoming a Better Performance Engineer

How to conduct performance testing

The ability to conduct effective performance testing has become a highly desired skillset within the IT industry. Unfortunately, these highly sought-after skills are consistently in short supply. "Front-end testers" can work with a tool to create a realistic load, and although this is an important skillset, creating the load is just the beginning of any performance project. It is the ability to understand load patterns and tune the environment that makes a "performance engineer" worth their weight in gold.

Performance engineers require data analysis skills such as reading resource usage patterns, modeling, capacity planning, and tuning in order to detect, isolate, and alleviate saturation points within a deployment. Performance testing generates concurrency conditions and exposes resource competition at the server level. When that competition causes a resource (such as a thread pool) to become over-utilized, the resource becomes a bottleneck, or saturation point. Performance engineers need to first understand the underlying architectures and develop a sense of where to look for potential scalability issues. Many of these "senses," or skills, come from experience: working in many multi-tier environments and successfully tuning bottlenecks. Here are some tips to help you make the challenging but rewarding transition from front-end tester to performance engineer.

Wisdom, Determination, Patience, and Communication
Who said there isn't a whole lot of psychology in technology? ;) Whether you are determining the current capacity of a deployment or recreating a production problem, it's often a very complex task: so many moving parts within the infrastructure, so many numbers to analyze from so many sources, raw test results to turn into understandable formats, so many people to keep in the loop, so much technical coordination... I could go on and on. It's your professional soft skills that will keep you on the right course. It requires determination to peel back the layers of the onion and investigate each tier of the deployment. It requires the wisdom to spot trends instead of pursuing the tangents of anomalies. It requires the dedication to keep an eye on many different metrics and isolate resource saturation. And it requires the patience to reproduce scenarios in order to draw conclusions based on evidence. And you need to accomplish all of this while being an excellent communicator!

Methodical Approach - The Constant
Spend your time wisely in the beginning and set up the most realistic test scenarios. Then "set" the performance scenario in stone. This means do not change even the most minute detail of your test case: all transaction flows, all mixtures, all think times, all behaviors, with no variations at this point. This is the "constant" in your experiment, and it is the only way you can reproduce and compare results. Any deviation within the test scenario will result in different throughput, which in turn affects resource patterns. Not following this tip will surely lead you on a collision course with Analysis Paralysis!
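To make the "constant" concrete, here is a minimal Java sketch of a scenario spec frozen into immutable constants; the transaction names, mix percentages, and think time are hypothetical illustrations, not values from this article.

    import java.util.Map;

    // The entire workload definition lives in one immutable spec,
    // so every test run is comparable to every other.
    public final class LoadScenario {
        // Transaction mix: name -> share of total load in percent (sums to 100).
        static final Map<String, Integer> TRANSACTION_MIX = Map.of(
                "login", 10,
                "browseCatalog", 60,
                "checkout", 30);

        // Think time between steps, in milliseconds. Never vary this between runs.
        static final long THINK_TIME_MS = 5_000;

        // Arrival rate: virtual users added per second.
        static final int ARRIVAL_RATE_PER_SEC = 3;

        private LoadScenario() { } // constants only; never instantiated
    }

Freezing the scenario in code (or in your load tool's saved configuration) makes accidental drift between runs much harder.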

Architectural Diagram - Identify Potential Bottlenecks by Visualization
Make sure you ask for and receive an architectural diagram of the entire deployment. Map business transactions to the resources they utilize within the environment. Make sure you understand all the transaction flows, from the front-end load balancers down to the shared database. Study the deployment and hook up precise monitors, leaving no blind spots. Visualize where contention or bottlenecks COULD occur; every resource in the environment must be monitored for signs of saturation. In reality, identifying where to look for bottlenecks is the more difficult task; alleviating them is the easy (and most rewarding) part. But without an architectural map, your journey can easily end in the frustration of getting lost in the dark.
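As a complement to the diagram, it can help to capture the transaction-to-tier mapping in a machine-readable form. Below is a hedged Java sketch; the tier and transaction names are illustrative assumptions, not taken from any particular deployment.

    import java.util.List;
    import java.util.Map;

    public final class DeploymentMap {
        // Each business transaction mapped to every tier it flows through.
        static final Map<String, List<String>> TRANSACTION_TIERS = Map.of(
                "login",         List.of("loadBalancer", "webServer", "appServer", "database"),
                "browseCatalog", List.of("loadBalancer", "webServer", "appServer", "cache", "database"),
                "checkout",      List.of("loadBalancer", "webServer", "appServer", "paymentGateway", "database"));

        public static void main(String[] args) {
            // Every tier that appears here needs a monitor; anything missing is a blind spot.
            TRANSACTION_TIERS.forEach((txn, tiers) ->
                    System.out.println(txn + " flows through: " + String.join(" -> ", tiers)));
        }
    }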

Tuning Hardware and Software Level Bottlenecks
"Tuning is an Art". "Tuning is a Science". Which is it? Hardware servers are restricted by the physical resources (disk io/memory, cpu). Software servers are much more configurable and this is where expertise in needed for tuning. Performance engineers must understand the workings of a "server" in thread pools, caching policies, memory allocations, connection pooling, etc. Tuning is a balancing act. It's the situation where you tune the software servers in order to take full advantage of hardware resources, without causing a flood. Simply opening up all the gates isn't going to help when the backend is saturated with requests. Tuning must be conservative, weighing all the benefits as well as the consequences.

Proof: Reproducible Results
Typically, a seasoned performance engineer will tune a layer of the environment only when the results are reproducible. Always use trends instead of points in time; mere spikes are not cause for architectural changes. As a rule of thumb, reproduce a result three times before you make a change. Sometimes this takes a while, so be prepared to be patient. For example, if you are emulating a production login rate of 3 users per second but the performance deterioration doesn't occur until you have 2,000 active users, it will take a while to see it. Making an unnecessary change simply muddies the waters; keep them clear and recreate those exact conditions.
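A minimal sketch of the "reproduce three times" rule in Java: only treat a result as actionable when three runs agree within a tolerance. The 5% tolerance and the sample numbers are assumptions for illustration.

    public final class Reproducibility {
        // True when all runs fall within the given relative tolerance of each other.
        static boolean reproducible(double[] throughputs, double tolerance) {
            if (throughputs.length < 3) return false; // rule of thumb: three runs minimum
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double t : throughputs) {
                min = Math.min(min, t);
                max = Math.max(max, t);
            }
            return (max - min) / max <= tolerance;
        }

        public static void main(String[] args) {
            double[] runs = {412.0, 405.5, 409.8}; // requests/sec from three identical tests
            System.out.println("Safe to tune: " + reproducible(runs, 0.05));
        }
    }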

Tune the First Occurring Bottleneck
Make sure you tune the layer which showed contention earliest in the performance test, not the first bottleneck you happen to identify. When monitoring a large, complex system, there are many counters to keep in your sights. Don't jump the gun and tune a thread pool the moment you see it saturate; that could be a symptom of the problem, not the root cause. Correlate (graphing makes this easiest) the point in time when performance degraded with the first saturation within the environment. Understandably, there is a ton of information to look at; keep it simple by watching free resources as percentages (free threads, free cache, free file descriptors), which lets you spot a bottleneck more quickly. When a free resource runs low, there's a possible bottleneck. Understanding resource utilization and free resources allows you to recognize a bottleneck before it affects end-user response time. In other words, watch as each resource becomes utilized; when the free percentage gets low, keep that resource on your radar as a possible cause of performance degradation.
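One way to automate "first occurring" is to record, for each monitored counter, the earliest time its free percentage dropped below a threshold, then tune the tier that saturated first. This Java sketch uses made-up counter names, sample times, and a 10% threshold purely for illustration.

    import java.util.Map;
    import java.util.TreeMap;

    public final class FirstBottleneck {
        public static void main(String[] args) {
            // Per counter: seconds into the test -> free-resource percentage.
            Map<String, TreeMap<Integer, Double>> freePct = Map.of(
                    "appServer.freeThreads",    new TreeMap<>(Map.of(60, 80.0, 120, 40.0, 180, 5.0)),
                    "database.freeConnections", new TreeMap<>(Map.of(60, 90.0, 120, 8.0, 180, 2.0)));

            String earliestCounter = null;
            int earliestTime = Integer.MAX_VALUE;
            for (var counter : freePct.entrySet()) {
                for (var sample : counter.getValue().entrySet()) {
                    if (sample.getValue() < 10.0 && sample.getKey() < earliestTime) {
                        earliestTime = sample.getKey();
                        earliestCounter = counter.getKey();
                    }
                }
            }
            // Here the connection pool ran low at t=120s, before the thread pool at t=180s:
            // the thread pool is the symptom, the connection pool is the suspect.
            System.out.println("First saturation: " + earliestCounter + " at t=" + earliestTime + "s");
        }
    }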

Iterative Tuning Process
Tuning is an iterative process. Know that once you have alleviated one bottleneck, you will surely encounter another. But do not fret: all aspects of a server are limited, and since nothing is infinite, you will eventually reach the end. Tuning manipulates the gates; requests that can't get a resource are queued and must wait to be serviced. Tuning is a process you must repeat until the workload reaches target capacity with acceptable response times.
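The loop itself can be expressed as a sketch; the target numbers and the stubbed test run below are illustrative placeholders for your own tooling, not part of any real framework.

    public final class TuningLoop {
        record TestResult(int users, double p95Ms, String firstSaturated) { }

        static final int TARGET_USERS = 2_000;
        static final double MAX_P95_MS = 2_000.0;

        public static void main(String[] args) {
            int iteration = 0;
            while (true) {
                TestResult result = runFixedScenario(++iteration); // the unchanging "constant"
                if (result.users() >= TARGET_USERS && result.p95Ms() <= MAX_P95_MS) {
                    System.out.println("Target capacity reached after " + iteration + " iterations");
                    break;
                }
                // Tune exactly one thing per iteration so cause and effect stay clear.
                System.out.println("Iteration " + iteration + ": tuning " + result.firstSaturated());
            }
        }

        // Stub standing in for a real load-test run; replace with your tooling.
        static TestResult runFixedScenario(int iteration) {
            return iteration < 3
                    ? new TestResult(800 * iteration, 3_500.0, "database.connectionPool")
                    : new TestResult(2_100, 1_800.0, "none");
        }
    }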

Validation
Validate, validate, validate. Just as important as recreating and tuning based upon proof is validating that the tuning change had the desired effect. Did it indeed impact scalability in a positive way? Often, performance engineers test out theories, and sometimes the validation stage will cause a change to be reverted. It's OK that not every change makes it to production. The key is to use a scientific approach in which you prove the result as well as the requirement.
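A minimal validation sketch in Java: compare reproduced baseline runs against reproduced post-change runs, and revert if the change didn't help. The throughput numbers are invented for illustration, not measured results.

    public final class ValidateChange {
        static double mean(double[] xs) {
            double sum = 0;
            for (double x : xs) sum += x;
            return sum / xs.length;
        }

        public static void main(String[] args) {
            double[] baselineTps = {410.2, 408.7, 412.9}; // three reproduced baseline runs
            double[] tunedTps    = {455.1, 452.4, 458.0}; // three reproduced runs after the change

            double improvement = (mean(tunedTps) - mean(baselineTps)) / mean(baselineTps);
            if (improvement > 0) {
                System.out.printf("Keep the change: throughput up %.1f%%%n", improvement * 100);
            } else {
                System.out.println("Revert: the change did not have the desired effect");
            }
        }
    }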

I hope you gleaned some pearls of wisdom.

Creating the load and emulating the production workload is a means to an end: you obviously need to create the load before you can plan capacity or understand the scalability of the deployment. But it is the skills in performance analysis that are most valuable. The performance engineer who walks into a project, takes the lead, wastes no time in learning the environment, creates and/or executes realistic tests, identifies current capacity, isolates and alleviates bottlenecks, documents results, mentors the juniors, and clearly and effectively communicates with everyone from developers up to the CIO/CTO is truly a GOLD MINE.

Becoming a true performance engineer is no easy task, but it's well worth the effort!

More Stories By Rebecca Clinard

Rebecca Clinard is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, she worked as a web application performance engineer for Bowstreet, Fidelity Investments, Bottomline Technologies, and Timberland, companies spanning the retail, financial services, insurance, and manufacturing industries. Her expertise lies in creating realistic load tests and performance tuning multi-tier deployments. She has been orchestrating and conducting performance tests since 2001. Clinard graduated from the University of New Hampshire with a BS and also holds a UNIX certificate from Worcester Polytechnic Institute.
