Performing Under Pressure | Part 1

Load-Testing with Multi-Mechanize

Many types of performance problems can result from the load created by concurrent users of web applications, and all too often these scalability bottlenecks go undetected until the application has been deployed in production. Load-testing, the generation of simulated user requests, is a great way to catch these types of issues before they get out of hand. I recently presented on load testing with Canonical's Corey Goldberg at the Boston Python Meetup and thought the topic deserved blog discussion as well.


In this two-part series, I'll walk through generating load using the Python multi-mechanize load-testing framework, then collect and analyze data about app performance using Tracelytics.

Also, a request: there's mechanize documentation available, but I unfortunately haven't found any full documentation of the Python mechanize API online. Post a comment if you know where to find it!

Meet the app: Reddit
The web app that I'll be using for all the examples in this post is an open-source Reddit instance running on a single EC2 node. You don't need to understand how it works in order to enjoy this post, but if you do want to play along, there is a super-easy install script that sets up the whole stack.

Generating the data
Performance testing can start off simple: hit pages in your app, and monitor how long they take to load. You can automate this using something like the mechanize library in Python, or even something lower-level like httplib/urllib2. This is a good start, but today we're looking for concurrency as well.

Enter multi-mechanize. Multi-mechanize takes simple request or transaction simulation scripts you've written and fires them repeatedly from many threads simultaneously in configurable patterns. It's as easy as writing a few scripts that simulate users doing different actions on your website (login, browse, submit comment, etc.) and then writing a short config that tells multi-mechanize how to play them back.

Since our site is Reddit, I'm going to write a few scripts that read discussion threads, one that posts comments, and one that votes on comments. I'll walk through writing two of the scripts: a simple read-only request, and a more complicated one that logs in and submits a comment. The rest are available in the full source of the load tests on GitHub.

A simple mechanize script
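A minimal version of such a script might look like this; the URL and timer name are illustrative, so adjust them to your own setup:

    import time
    import mechanize


    class Transaction(object):
        def __init__(self):
            # Called once per worker thread.
            self.custom_timers = {}

        def run(self):
            # Called repeatedly by multi-mechanize to generate load.
            browser = mechanize.Browser()
            browser.set_handle_robots(False)

            start = time.time()
            response = browser.open('http://localhost:8001/')  # our reddit instance
            body = response.read()
            self.custom_timers['frontpage'] = time.time() - start

            # Make sure a valid page came back.
            assert response.code == 200, 'Bad response: HTTP %s' % response.code
            assert 'reddit' in body.lower(), 'Unexpected page content'


    if __name__ == '__main__':
        trans = Transaction()
        trans.run()
        print trans.custom_timers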

First, the test is wrapped in a Transaction class; this is how multi-mechanize will run each of your scripts. __init__() is called once per worker thread, then run() will be invoked repeatedly to generate the requests of your transaction. For development and debugging, it's easier to just run the scripts individually, so the __main__ block at the end provides that functionality.

All of the work here is happening in the run() method. A mechanize browser is instantiated, and our simple request for the front page of Reddit is performed. Finally, we make sure that a valid page was returned.

There's one more thing: custom timers. Multi-mechanize can collect timing information about the requests it performs. If you store that information in the correctly-named dict (custom_timers), it will be able to generate charts of the data later.

A more complicated script
Now, let's take a look at a slightly more complicated script. This one posts a comment on a particular story, so it'll have to take the following actions:

  • Log in as a user
  • Open thread page on Reddit
  • Post comment

It's a bit longer, so I've broken it up according to the bullets above with accompanying explanation.  The full version of the example can be found here:  https://gist.github.com/1529242

The first thing that's happening here is familiar: pulling up a page in our mechanize browser. In order to login, though, we need to start interacting with forms on the page, and this means tweaking some of the default mechanize browser settings.  We need to set three attributes on our browser: follow 30x redirects (lets the login redirect back), specify the Referrer page header (validation for comment post), and ignore robots.txt rules (Reddit doesn't like robots playing human).
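In mechanize terms, those three settings look like this:

    browser = mechanize.Browser()
    browser.set_handle_redirect(True)   # follow 30x redirects (lets the login redirect back)
    browser.set_handle_referer(True)    # send the Referer header (comment post validation)
    browser.set_handle_robots(False)    # ignore robots.txt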

After that, it's on to forms. The mechanize browser's interface to forms is pretty simple: you can list all the forms on the page with browser.forms, select a form to interact with using select_form, and then manipulate the fields of the form using the browser.form object.

select_form  can take a variety of selection predicates, most of which revolve around using attributes such as the form's CSS ID. Our example, Reddit, doesn't have much identifying information associated with forms, so I've used numeric selection to grab them. The login form happens to be form 1.
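Putting that together, the login step might look like the following sketch; the field names ('user' and 'passwd') are assumptions about the login form, so check browser.forms against your own instance:

    browser.open('http://localhost:8001/')
    browser.select_form(nr=1)               # the login form happens to be form 1
    browser.form['user'] = self.username    # field names are illustrative
    browser.form['passwd'] = self.password
    browser.submit()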

Pretty straightforward: head to the thread page now that we're logged in.  Now we want to actually submit the comment.  Here's the heavy lifting:

Comment submission is a little bit different because it works via AJAX. The mechanize browser doesn't process JavaScript, meaning that we'll have to take things into our own hands here. So, we inspect two forms to grab the state information that JavaScript on the page would use to submit the form, and we construct our own request manually. (Form 0 provides the 'uh' value in a hidden field; form 12 is the top-level comment submit form.)
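A sketch of that manual request; the endpoint and parameter names here are assumptions about Reddit's comment API, not verified details:

    import urllib

    # Grab the state that the page's JavaScript would normally send along.
    browser.select_form(nr=0)
    uh = browser.form['uh']                  # modhash from a hidden field
    browser.select_form(nr=12)               # top-level comment submit form
    thing_id = browser.form['thing_id']      # id of the thing being commented on

    # Build the POST body ourselves and send it to the comment endpoint.
    params = urllib.urlencode({
        'uh': uh,
        'thing_id': thing_id,
        'text': 'load test comment',
    })
    response = browser.open('http://localhost:8001/api/comment', params)
    assert response.code == 200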

In this simple example, user credentials are provided in __init__(). However, a more realistic example might involve many different users logging in. In the code on GitHub, I've written a user pool implementation that takes care of this problem by instantiating a pool of logged-in users for each script (then, each invocation of run() can check out a different user).
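The GitHub version has the details, but the basic idea is a thread-safe pool of pre-authenticated browsers. A simplified sketch, where the login() helper stands in for the login code above:

    import Queue

    class UserPool(object):
        def __init__(self, credentials):
            # credentials: a list of (username, password) tuples
            self.pool = Queue.Queue()
            for username, password in credentials:
                self.pool.put(login(username, password))  # login() returns a logged-in browser

        def checkout(self):
            return self.pool.get()

        def checkin(self, browser):
            self.pool.put(browser)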

(Debugging note for those playing along: if comments are not showing up in the thread but are showing up in the user's profile, that means that some of the background jobs may not be running correctly. The site re-caches the comment tree asynchronously after posts.)

Running the full load test
After writing a few individual mechanize scripts, the final step is putting them all together with a multi-mechanize config. Multi-mechanize organizes load tests in terms of "projects," which are represented by subdirectories of a directory called projects. Each project contains a config file and a directory called test_scripts, which holds your individual load test scripts. It should look like this:
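For a project named reddit, with scripts along the lines of this post's examples (the script names are illustrative), the layout would be roughly:

    projects/
        reddit/
            config.cfg
            test_scripts/
                read_thread.py
                post_comment.py
                vote.py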

The config file specifies how long the load test should run for, whether it should ramp up the amount of pain or keep it constant,  a few output and statistics settings, and of course the number of threads and scripts you want to run. Here's an example config:
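Something along these lines; the thread counts and durations are illustrative:

    [global]
    run_time = 1800
    rampup = 1800
    results_ts_interval = 30
    console_logging = off

    [user_group-1]
    threads = 30
    script = read_thread.py

    [user_group-2]
    threads = 4
    script = post_comment.py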

Runtime sets the duration of the test, in seconds. Ramp up, if nonzero, tells multi-mechanize to linearly increase the number of threads up to the specified numbers over the ramp-up period.

And here's how to invoke them, finally:
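Depending on how you installed multi-mechanize, that's either the bundled runner script or the multimech-run console command; for example:

    python multi-mechanize.py reddit
    # or, with a pip-installed release:
    multimech-run reddit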

Learning from our load tests
Multi-mechanize collects statistics about timing information that you provide in your tests (custom_timers) and dumps the output in a results subdirectory of your project. This can easily be plotted in your favorite graphics package.  Here's an example of the average times from a read load increasing over 30 minutes:

Ok, so it's getting slower, but why? These timers treat the application like a black box: they'll show you that it can be slow, but you won't know why or which layers of the stack are slow. In the next article, we'll talk about how to gather actionable data from your load tests.

Related Articles

Performing under pressure, pt. 2: Collecting and visualizing load-test performance data

Python and Gevent

Tracing Python - An API

More Stories By Dan Kuebrich

Dan Kuebrich is a web performance geek, currently working on Application Performance Management at AppNeta. He was previously a founder of Tracelytics (acquired by AppNeta), and before that worked on AmieStreet/Songza.com.
