
Performing Under Pressure | Part 2

Collecting and visualizing load-test performance data

In part 1 of this article, we covered writing web app load tests using multi-mechanize. This post picks up where part 1 left off and discusses how to gather interesting and actionable performance data from a load test, using (of course) Traceview as an example.

The big problem we had after writing load tests was that the timing data gathered by multi-mechanize is inherently external to the application. It can tell us the response times of requests while the app is under load, but it can't identify bottlenecks or configuration problems. So we need to gather a bit more data about how the internals of our web application respond to the workload.
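As a refresher from part 1, a multi-mechanize test script is roughly the sketch below: all it can measure is wall-clock time wrapped around an HTTP request, from outside the app. (The hostname and timer name here are placeholders.)

    import time
    import requests

    class Transaction(object):
        def __init__(self):
            self.custom_timers = {}

        def run(self):
            start = time.time()
            # From out here we can only observe total response time;
            # nothing tells us where inside the stack that time went.
            requests.get('http://reddit.local/')
            self.custom_timers['Front_Page'] = time.time() - start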

For this article, I'll be using Traceview's instrumentation, which is installable from OS-native packages and, in the case of Reddit, takes care of instrumenting nginx, pylons, SQL queries, memcache calls, and Cassandra calls automatically.

Test 1: ramp-up read threads

So, this test is going to run for 30 minutes and generate steadily increasing read-oriented loads on various pages. Like a cooking show, I've taken all the waiting out, so let's skip straight to the results!
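(For reference, the multi-mechanize config.cfg driving a ramp-up like this might look as follows; run_time and rampup match the 30 minutes described above, but the thread count and script name are assumptions.)

    [global]
    run_time = 1800
    rampup = 1800
    results_ts_interval = 10

    [user_group-1]
    threads = 30
    script = read_pages.py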

What we're looking at here is the performance of the default open-source Reddit install under a steadily increasing read load, broken down by layer of the stack:

[Figure: average response time by layer of the stack as the read load ramps up]

At first, it performs like a champ. But as the number of concurrent users rises over time, we see that requests slow down. In fact, it looks like we are spending a lot of time per request in nginx.

We also have access to machine metrics here (blue bar at bottom), so I've pulled up the load on the box. Our machine is bored: the maximum load it reaches is only 1.06, yet it's serving slowly! This is a sign that we might not have enough worker threads in our application layer.

In fact, the default Reddit install only sets up a single uwsgi worker. So, let's fix that and move on.
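The change amounts to one line in the uwsgi config; this is a sketch, since the exact file name and location depend on your install:

    [uwsgi]
    ; ...existing settings unchanged...
    processes = 10

Here's what it looks like with 10 uwsgi processes, same workload: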

[Figure: the same read workload with 10 uwsgi processes]

It seems that we've traded our uwsgi queuing problems for an overloaded machine, but at least it's fully utilizing the hardware now, and our throughput is much greater!

Test 2: ramp-up write threads

This test will vote and submit comments on a particular thread with increasing numbers of logged-in users. Ok, go!
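(While it runs: each virtual user does roughly what's sketched below. The endpoints, ids, and credentials are illustrative placeholders rather than reddit's exact API, and a real script also needs the login details covered in part 1.)

    import time
    import requests

    class Transaction(object):
        def __init__(self):
            # Each virtual user logs in once and reuses the session cookie.
            self.session = requests.Session()
            self.session.post('http://reddit.local/post/login',
                              data={'user': 'loadtest', 'passwd': 'secret'})
            self.custom_timers = {}

        def run(self):
            start = time.time()
            # Cast a vote on a fixed link; endpoint and id are placeholders.
            self.session.post('http://reddit.local/api/vote',
                              data={'id': 't3_placeholder', 'dir': '1'})
            self.custom_timers['Vote_POST'] = time.time() - start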

[Figure: response times under increasing write load]

One really interesting thing is that we can see two distinct trends in the data: one band slows down faster than the other. Selecting them for comparison, we can see that the slower band is for rendering the comments, while the faster one is the POST requests for commenting/voting:

[Figure: comment-page renders compared with comment/vote POST requests]

We might have expected to see contention for the database (in this case, postgres). However, by pushing the limits with our load tests, we figured out that the actual limiting factor will be cores on our app servers (or, in this case, server) before we have to worry about the database. Here's what the breakdown by layer of the stack looked like; note that we're spending almost no time in our database calls (measured through sqlalchemy and the Cassandra client):

[Figure: breakdown by layer of the stack for the write test, showing minimal database time]

Where to go from here:

  • Performance testing is not only valuable to ensure that a new Web app meets projected demand; it can also be part of your CI system to detect performance regressions during everyday development. Here's a screencast about getting performance tests running in Jenkins.
  • If your website is particularly AJAX-heavy, you may also want load tests that simulate a browser more faithfully and execute JavaScript, in order to create the exact load patterns real users will. This makes testing significantly more resource-intensive, since it requires spinning up headless browsers, but it can be accomplished using selenium or a hosted selenium service (see the sketch after this list).
  • Tracelytics performance monitoring and analysis isn't only for load tests; most of our customers run our lightweight instrumentation in production as well as development environments.
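A minimal sketch of the browser-driven approach, using selenium's Python bindings (the URL is a placeholder, and you'd want a headless browser for load generation at scale):

    from selenium import webdriver

    # Any WebDriver-backed browser works; headless ones keep load generation cheap.
    driver = webdriver.Firefox()
    try:
        # The page's JavaScript runs just as it would for a real user,
        # so AJAX-driven requests hit the server as part of the page load.
        driver.get('http://reddit.local/')
        print(driver.title)
    finally:
        driver.quit()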

Related Articles

Performing Under Pressure, Pt. 1: Load Testing with Multi-Mechanize

Profiling Python Performance Using lineprof, statprof and cProfile

Solving Two of the Most Common Performance Mistakes

More Stories By Dan Kuebrich

Dan Kuebrich is a web performance geek, currently working on Application Performance Management at AppNeta. He was previously a founder of Tracelytics (acquired by AppNeta), and before that worked on AmieStreet/Songza.com.
