In this post I’m focusing on some aspects of testing HTTP endpoints in Azure using loader.io. This applies to both websites and APIs.
Performance testing versus load testing – What’s the difference?
First, let’s be clear about our goals. The type of artificial load we’ll be generating can be used for both performance testing and load testing, but we’ll go about it very differently depending on which we are trying to achieve.
Performance testing for HTTP endpoints generally involves working against a set of baseline expectations for concurrent users and response time. We measure our current state, and carefully isolate different layers of the application to identify bottlenecks.
Ideally we automate our performance tests so that we can quickly identify both regressions and improvements for each build.
When doing performance testing, artificial traffic is important, but it typically needs to be scaled up carefully to identify the point at which bottlenecks occur.
Load testing is about endurance – we want to know what happens when we push our system to its limits. This is an opportunity to expose bugs and find intermittent issues, like a small piece of code that isn’t thread safe and only periodically fails.
In this scenario we care much more about generating a large amount of traffic, and maintaining it as long as needed. Prior performance testing may provide benchmarks to help us target our load properly.
In both cases it is important to think about opportunities to extrapolate, and to verify those assumptions. If you are testing a service hosted in an Azure Virtual Machine, can you test two machines and extrapolate the values to four, eight, or sixteen? Verify that you actually scale linearly before you assume that you’ll be able to solve the problem with more front-ends later. This means understanding your bottlenecks. Nginx serving static files is easy to make assumptions about – but multi-tier systems with dependencies on external APIs may behave unpredictably as they scale.
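The linearity check above is just arithmetic, and it’s worth scripting so it runs after every test round. This is a minimal sketch; the req/s figures are made-up placeholders, and the 10% tolerance is an assumption you should tune for your own service:

```shell
# Sanity-check linear scaling before assuming more front-ends will help.
# All req/s figures below are hypothetical placeholders.
BASELINE_RPS=1500      # measured with 1 VM
DOUBLED_RPS=2900       # measured with 2 VMs
EXPECTED=$(( BASELINE_RPS * 2 ))
# Flag anything more than ~10% below perfectly linear scaling.
THRESHOLD=$(( EXPECTED * 90 / 100 ))
if [ "$DOUBLED_RPS" -ge "$THRESHOLD" ]; then
  SCALING=linear
else
  SCALING=sub-linear
fi
echo "2 VMs: $DOUBLED_RPS req/s ($SCALING; perfectly linear would be $EXPECTED)"
```

If the doubled measurement falls outside the tolerance, dig into the bottleneck before scaling out further.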
You have a huge number of options when it comes to generating artificial load, ranging from a simple shell script to robust systems that integrate instrumentation with load generation. One simple, straightforward option is loader.io.
The folks over at SendGrid have released this great tool and integrated it into the Azure Marketplace, so you can start using it for free without creating a new account. You access it directly from your Azure Portal.
This marketplace item isn’t available in the preview portal, so for now navigate to the legacy portal at http://manage.windowsazure.com.
You’ll want to hit the big “New” button at the bottom of the page and choose the “Marketplace – Preview” button.
Under “App Services” you can scroll down to loader.io and select it.
When creating the service you’ll get a few simple options. First, you’ll notice that the plan offered is free. Later you may be able to choose more premium plans directly in the marketplace, but for now you have to upgrade after the service is provisioned. The description claims 50,000 connections, but in practice you’ll be able to generate 10,000 concurrent connections. You get an additional 5,000 for each person you share loader.io with, up to 50,000.
Pay attention to the region. If you are testing an Azure Virtual Machine or Cloud Service you may just want to co-locate with it.
After you create the service you’ll find it provisioning in the “Marketplace” section of the portal.
Once you get into the service, find the “Manage” button.
You’ll be asked to provide the domain for the service you want to test, and then you have to verify ownership. The folks at SendGrid don’t want to help you carry out any DoS attacks.
On the Ubuntu Azure Virtual Machine I am testing, this was as simple as connecting via SSH and creating a file in my www directory, something like this:
echo loaderio-xxxxx > loaderio-xxxxx.html
You’ll need to find the best option for your system. If you are testing something like an ASP.NET MVC or Web API project, you may just want to add a new action to your default controller with the appropriate name and return value. You can also add a new TXT DNS record if you already have custom DNS set up for your service.
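Expanding the one-liner above into a sketch: the token is a placeholder (loader.io shows you the real one during verification), and the web root path is an assumption that varies by server:

```shell
# Hypothetical verification token; substitute the one loader.io gives you.
TOKEN=loaderio-abc123
# Stand-in for your server's document root (e.g. /var/www/html on many setups).
WEBROOT=$(mktemp -d)
# The file name and its contents must both be the token.
echo "$TOKEN" > "$WEBROOT/$TOKEN.html"
cat "$WEBROOT/$TOKEN.html"
```

Once the file is reachable at `http://yourdomain/loaderio-xxxxx.html`, the verification step in the loader.io UI should pass.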
Creating a test
Once you get everything set up and verified, you’ll need to create your first test.
You have three options for the Test Type, and they do very different things.
| Test type | Behavior | When to use |
| --- | --- | --- |
| Clients per test | Client count is split across the test duration. 100 clients over 50 seconds yields two client connections per second. | Scenarios where you want unique clients to come in for the duration of the test and reach a fixed total request count. Typically a performance test. |
| Clients per second | Almost identical to clients per test; the difference is how you specify the test parameters. | When you want to explicitly define the number of unique clients per second. Typically a performance test. |
| Maintain client load | Clients continuously make requests. You can specify a linear ramp by providing start and end client counts. The total number of requests made depends on your response times. | Ideal for a load or stress test. The ramp options also help you quickly find the point at which performance begins to degrade. |
You can get more details about the test types here.
Let’s do a “Maintain client load” test that ramps clients from 1000 to 5000 to see if we can identify when our response times begin to degrade.
You’ll also need to add the URLs you want the clients to make requests to. You can specify as many as you would like, and choose the verb, protocol, and path. Choose an assortment of URLs that will provide a good test of your site or API functionality (or focus on very specific endpoints for more detailed analysis).
Looking at the results
Loader.io provides some great graphs as the test runs which you can analyze upon completion.
Here we can very easily see where my service hit its breaking point. Everything looks nice and flat until we get to around 1600 client connections, when suddenly my response times spike to nearly 1 second. You’ll notice the test never reaches 5000 client connections – the error count spiked and the test ended.
This gives me some great information – based on this test I can hypothesize that my endpoint can serve around 1500 connections reliably, and I can perform additional load tests at a fixed 1500 connection count to confirm that. I can also double my VM count and see if I scale linearly to 3000 connections; this lets me build a picture of how my service will scale.
My favorite part of loader.io is the webhooks – they are simple and effective for automation.
Simply do an HTTP POST against the webhook and the test will run. You could do this from custom code, a script, or almost any build system. Or just use curl:
curl -X POST https://api.loader.io/v2/tests/14eed142a0276528b6cab1758e94f2c3/token/feadfbf9aec09f5d9062a95fcb3c343a/run
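Since the webhook is just a URL built from a test ID and an API token, it drops easily into a CI step. A minimal sketch using the IDs from the curl example above (substitute your own):

```shell
# Build the loader.io run webhook URL from its parts (IDs from the example above;
# use your own test ID and token).
TEST_ID=14eed142a0276528b6cab1758e94f2c3
API_TOKEN=feadfbf9aec09f5d9062a95fcb3c343a
RUN_URL="https://api.loader.io/v2/tests/${TEST_ID}/token/${API_TOKEN}/run"
echo "$RUN_URL"
# In a build step you would actually fire it (network call, so commented out here):
# curl -X POST "$RUN_URL"
```

Keep the token out of source control – most build systems can inject it as a secret environment variable.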
The notification field lets you provide your own hook, which they will POST to when the work is complete. Fully automate both your testing and gathering of results on each build or check-in!
That is all! You can sign up for loader.io here.