
Testing XML Web Services (Basic)

21 Feb 2002
An introduction to testing XML web services

Introduction

This whitepaper introduces the challenges of testing web services. It is aimed at developers, testers, and project managers who do not have detailed technical knowledge of web services or the way they work. Particular emphasis is placed on load testing web services.

What are web services?

In the past, if you wanted to book a holiday over the Internet you would browse to a travel agent's web site, select a holiday and book it. If you were lucky, you'd be able to book car hire and possibly even check on the weather. Behind the scenes, the web server (the computer and software that the web site lives on) would be accessing data in a proprietary format in some sort of database, or would be communicating, again in a proprietary fashion, to back-office systems to get information about your holiday.

This is all set to change with the concept of web services. A web service is a system which sits somewhere on a network and all it ever does is service specific requests from clients. These clients aren't real people browsing the internet from their desks; they're other computers. When you book a holiday, you'll still browse to your travel agent's web site but behind the scenes something different will happen. To find flight details, rather than access proprietary data in a database, the web server will talk to a web service somewhere else on the Internet. All this web service will do, day in, day out, is respond to requests for flight information from travel agents' web sites. What is more, these web services will be able to publicise their existence and communicate with web sites, and other web services, in an open, public, format. The travel agent's web site will access different web services for flight, accommodation and weather information.

Since the web service is on the Internet, and since it communicates using standard, published protocols, anybody can communicate with it. If you're running a conference and want people to visit, you can have a 'fly here' page on your web site. With 5 minutes' coding, you'll be able to let people view flight availability to your conference and book flights online, without leaving your site.
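
As a rough illustration of that "5 minutes' coding", the sketch below queries a hypothetical flight-availability web service over HTTP and lists the flights it returns. The URL, parameters and XML layout are invented for the example; a real service would publish its own interface.

    from urllib.request import urlopen
    from urllib.parse import urlencode
    import xml.etree.ElementTree as ET

    def flights_to(city, date):
        # Build the query string and call the (hypothetical) flight service.
        params = urlencode({"destination": city, "date": date})
        with urlopen("http://example.com/flightservice?" + params) as response:
            document = ET.parse(response)
        # Assume the reply contains <flight number="..." departs="..." seats="..."/> elements.
        return [(f.get("number"), f.get("departs"), f.get("seats"))
                for f in document.iter("flight")]

    for number, departs, seats in flights_to("Cambridge", "2002-03-01"):
        print(number, departs, seats)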

Figure 1 - A travel agent's web service

What are the challenges in testing web services?
Like all software, web services need to be tested. Like web sites, they are often highly visible, so they need to be very robust.

There are two broad types of web services - web services used in an intranet and web services used on the internet. Intranet web services are used internally by organizations but not exposed to the general public. For example, an intranet web service might be responsible for handling vacation requests from employees. The company's intranet web site would then access this web service and employees could request vacations. Managers could authorize the vacations and colleagues could check when other employees were on holiday. Human resources could then write a simple application in Visual Basic to make sure that helpdesk staff don't take all their vacation at the same time, without having detailed knowledge of how or where this information is kept. An example of an internet web service is the holiday web service described in the previous section.

Testing intranet and internet web services presents subtly different problems. With an intranet web service you, as an organization, are likely to have control over who accesses your web service. Since it is on an internal network, only internal users can have access to it, so you have a theoretical maximum number of users. Similarly, you can make certain assumptions about security. With an internet web service, anybody can access it. This means that there are additional scalability and security considerations.

Another challenge in testing web services is that they are completely UI-less; they do not display a user interface that can be tested. This means that they are hard to test manually, but are an ideal candidate for automated testing. A consequence of this is that some programming skills are almost certainly needed by testers who need to test web services. A web service is not the sort of application you can test by key-bashing.

Different sorts of testing

As with any other application, there are different sorts of testing you can carry out on web services:

Proof of concept testing

Web services are a new type of software. Because of this, you will need to understand whether the architecture you have chosen for your web service is the correct one. There are many different choices to be made - which tool vendor to use, which programming language and which database backend, for example. If you can clarify and resolve these issues before you start developing, or early on in the development lifecycle, then you will save a lot of time and money further down the line. Because most of the questions you need to resolve will concern scalability issues (will the architecture we're using really cope with 1,000 simultaneous users?), a proof of concept test is normally a cut-down load test (see below). There's no need to run it on powerful hardware, or to get exact answers; your aim is to answer the question "am I going in the right direction?".

Functional testing

This is to ensure that the functionality of the web service is as expected. If you have a web service that divides two numbers, does it give the expected result? If you pass in a 0 as the denominator, does it handle this correctly? Does your web service implement security and authentication as it is meant to? Does it support all the communications protocols it is meant to? Because your web service can be accessed by clients that you can't control, what happens if they make requests you aren't expecting? Bounds testing and error checking are especially important.
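
Because there is no user interface, functional tests like these are written as code. The sketch below shows what automated checks for a hypothetical 'divide two numbers' web service might look like; the call_divide helper stands in for whatever client code actually sends the request (a raw SOAP call, a generated proxy class, and so on).

    import unittest

    def call_divide(numerator, denominator):
        # Placeholder: send the request to the web service and return (result, fault).
        # The transport details (SOAP envelope, proxy class, ...) are not shown here.
        raise NotImplementedError

    class DivideServiceTests(unittest.TestCase):
        def test_simple_division(self):
            result, fault = call_divide(10, 2)
            self.assertIsNone(fault)
            self.assertEqual(result, 5)

        def test_divide_by_zero_returns_fault(self):
            # Bounds and error checking: a zero denominator should produce a
            # well-formed fault, not a crash or a nonsense value.
            result, fault = call_divide(10, 0)
            self.assertIsNotNone(fault)

    if __name__ == "__main__":
        unittest.main()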

Regression testing

A regression test is normally a cut-down version of a functional test. Its aim is to ensure that the web service is still working between builds or releases. It assumes that some area of functionality was working in the past, and its job is to check that it still does. If your development team has changed the method that divides two numbers, does that method still work? Does the method that multiplies two numbers still work? Is the performance still acceptable? Since regression testing is, by its nature, a repetitive task, it must be automated. This is true for traditional applications and web sites; it is even truer for UI-less web services.
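
A regression run is the sort of thing you schedule against every build. The sketch below, which reuses the hypothetical call_divide helper from the functional test above, replays a handful of known-good cases and also flags calls that have become noticeably slower than they were in the previous release.

    import time

    BASELINE = [
        # (numerator, denominator, expected result, response time seen in the last release)
        (10, 2, 5, 0.05),
        (9, 3, 3, 0.05),
    ]

    def run_regression(call_divide):
        for numerator, denominator, expected, previous_time in BASELINE:
            start = time.perf_counter()
            result, fault = call_divide(numerator, denominator)
            elapsed = time.perf_counter() - start
            still_correct = fault is None and result == expected
            still_fast = elapsed < previous_time * 2    # crude threshold; tune to taste
            status = "PASS" if (still_correct and still_fast) else "FAIL"
            print(f"divide({numerator}, {denominator}): {status} ({elapsed * 1000:.0f} ms)")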

Load / stress testing

This white paper will concentrate on load and stress testing. The aim of load / stress testing is to find out how your web service scales as the number of clients accessing it increases. You have carried out functional and regression testing, so you know that your web service will cope with a single user. What you need to know now is whether it will cope with 10, 100 or 1,000 users, or how many users it will cope with. If you double the number of users, do response times stay the same? If you double the number of servers running your web service, does its capacity double? The following section of this white paper goes into more detail about these issues.
Monitoring

Once your web service is live and being used by real clients, you will need to keep an eye on it. Is it still working? Are response times adequate? At what times of day is it busiest? It is essential to monitor the web service.
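
Monitoring can be as simple as a script that makes a cheap request every few minutes and records the outcome. The sketch below assumes the service exposes some inexpensive request you can safely repeat (the URL is invented for the example); the resulting log answers "is it still working?", "are response times adequate?" and, over a day, "when is it busiest?".

    import time
    from datetime import datetime
    from urllib.error import URLError
    from urllib.request import urlopen

    MONITOR_URL = "http://example.com/flightservice/ping"    # hypothetical cheap request

    def monitor(interval_seconds=300):
        while True:
            start = time.perf_counter()
            try:
                with urlopen(MONITOR_URL, timeout=10) as response:
                    response.read()
                status = "OK"
            except URLError as error:
                status = f"DOWN ({error.reason})"
            elapsed = time.perf_counter() - start
            print(f"{datetime.now().isoformat()} {status} {elapsed * 1000:.0f} ms")
            time.sleep(interval_seconds)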

Load / stress testing

This whitepaper concentrates on load and stress testing a web service. It is important to note that this, almost by definition, must be an automated task. You cannot feasibly employ 1000 people to simulate 1000 clients accessing your web service.

To produce objective results, it is important to carry out the testing in a controlled environment. If you want to know how your web service responds as you simulate more and more users, then this means that you must keep all other factors (hardware and networking, for example) constant. This means that you should not carry out a load test on a live system over the Internet. Apart from the problem that bandwidth would pose, this is not a controlled environment.

You might decide that you simply cannot carry out the test in-house. If you are writing a large-scale web service that needs to cope with 10,000 requests a second then the odds are you do not have the necessary hardware in-house to do this. You will probably need to consider hiring a third party consultancy, or using a third party scalability lab to carry out this test.

The ultimate aim of load testing is to reassure you, and to let you confirm: "My web service will respond acceptably for up to x clients making y requests a second".

Before you can start, you need to decide who the clients will be. Will they be other web services, other web sites or other sorts of applications? Do you have any idea of how many potential clients your web service will have?

Once you have answered that, you need to ascertain what these clients will be doing, and how often. You may have different sorts of clients doing different things; 10% of your clients might be booking flights while 90% might be simply checking flight availability.

If you are expecting to have 100,000 clients, but these clients will only ever make one request a day, then that is not too bad (about 1 request a second). If, however, 50% of these requests will happen between 9 and 10am, then that's a different story (about 14 requests a second). Another possibility is that you will only have 100 clients, but that these clients will be making 10 requests a second, each and every second of the day (1,000 requests a second in total). In summary, it is important to know your clients.
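
Written out, the back-of-an-envelope arithmetic behind those figures looks like this:

    seconds_per_day = 24 * 60 * 60

    print(100_000 / seconds_per_day)    # 100,000 requests spread over a day: ~1.2 a second
    print(50_000 / 3600)                # half of them squeezed into one hour: ~14 a second
    print(100 * 10)                     # 100 clients at 10 requests a second each: 1,000 a second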

You might expect, on average, to be able to service 100 requests a second, but what happens in the event of a massive peak? Will your web service cope, slow down, or crash? Or will it simply refuse to service the 101st request? These are all things you need to know, and test, before you release your web service.
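
To make this concrete, the sketch below shows the bare bones of what a load-testing tool does: it paces simulated clients at a target rate, applies the 90/10 mix of availability checks and bookings described above, and records every response time. The check_availability and book_flight functions stand in for whatever requests your real tool would replay.

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    def timed_request(make_request):
        start = time.perf_counter()
        make_request()
        return time.perf_counter() - start

    def run_load_test(check_availability, book_flight,
                      requests_per_second=100, duration_seconds=60, workers=200):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = []
            for _ in range(requests_per_second * duration_seconds):
                # 90% of simulated clients check availability, 10% book a flight.
                request = book_flight if random.random() < 0.10 else check_availability
                futures.append(pool.submit(timed_request, request))
                time.sleep(1.0 / requests_per_second)    # pace the requests
            timings = sorted(future.result() for future in futures)
        print("median response time:", timings[len(timings) // 2])
        print("worst response time: ", timings[-1])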

You must also know how the clients will be accessing the web service. The odds are they will be making SOAP requests. In this case, you should make sure that your web testing tool supports this protocol. Some testing tools on the market record scripts by browsing to the web page representing the web service and then recording the HTTP GET and POST requests that the browser makes. Although similar, this is not the same as using the SOAP requests that your clients will almost certainly be making.
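
For reference, a SOAP request is an XML envelope carried in an HTTP POST with a SOAPAction header, which is not the same thing as the GET request a browser makes when you view the service's test page. The sketch below sends one by hand; the namespace, method name and SOAPAction value are invented for the example.

    from urllib.request import Request, urlopen

    SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <CheckAvailability xmlns="http://example.com/flightservice">
          <Destination>Cambridge</Destination>
          <Date>2002-03-01</Date>
        </CheckAvailability>
      </soap:Body>
    </soap:Envelope>"""

    request = Request(
        "http://example.com/flightservice.asmx",
        data=SOAP_BODY.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "http://example.com/flightservice/CheckAvailability",
        },
    )
    print(urlopen(request).read().decode("utf-8"))    # the response is another SOAP envelope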

The next question to answer is "what is acceptable?". Your service level agreements might specify that you need to respond to 95% of requests within 1 second; in this case you know what is acceptable. In any case, it is important to realize that a web service is not the same as a web site, so acceptable response times will be different. While it may be acceptable to keep response times for a web site under 10 seconds, that web site may be making 10 calls to different web services, so it will expect each web service to return information within 1 second.
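
Checking an agreement of the "95% of requests within 1 second" kind is then a matter of looking at the right percentile of the response times you collect; a small helper such as the one below is enough for a rough check.

    def meets_sla(response_times, percentile=95, limit_seconds=1.0):
        # Roughly: is the 95th-percentile response time within the agreed limit?
        ordered = sorted(response_times)
        index = int(len(ordered) * percentile / 100) - 1
        return ordered[max(index, 0)] <= limit_seconds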

While you are carrying out your test, the tool you are using should be able to identify how the following values change as you increase the number of clients. These are all measurements of how the client is experiencing your web service:

  • Time to connect: This is the time it takes to make a connection from the client to the web service. This should be as low as possible.
  • Time to first byte: This is the time it takes for the client to start receiving data back from the web service. If the web service needs to do a lot of thinking for each request, then this time could be significant.
  • Time to last byte: This is the time it takes for the client to receive the last byte of information back from the service. If the service needs to return a large amount of data (if it is returning maps, or images, for example), then this could be significant.
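
Measured by hand with nothing more than a standard HTTP client, the three timings look roughly like the sketch below (a load-testing tool will report the same numbers for you; the host and path are invented for the example, and all three timings are taken from the start of the call).

    import time
    from http.client import HTTPConnection

    def measure(host="example.com", path="/flightservice.asmx"):
        connection = HTTPConnection(host, 80, timeout=30)

        start = time.perf_counter()
        connection.connect()
        time_to_connect = time.perf_counter() - start

        connection.request("GET", path)
        response = connection.getresponse()
        response.read(1)                              # wait for the first byte of the reply
        time_to_first_byte = time.perf_counter() - start

        while response.read(8192):                    # drain the rest of the reply
            pass
        time_to_last_byte = time.perf_counter() - start

        connection.close()
        return time_to_connect, time_to_first_byte, time_to_last_byte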

Although the absolute values of these metrics are important (you want to keep these numbers as low as possible), the way they change as you increase the load on the web service is more important. Ideally you want these metrics to remain constant. If they increase linearly, then you're in for a shock. Suppose that with 1 user the time to last byte is a blazingly fast 10 milliseconds, with 10 users it is a tenth of a second, and with 100 users it is 1 second. That's still reasonable. With 1,000 users it's going to be 10 seconds, which is probably unacceptable. If your web service needs to cope with 10,000 users then requests are going to take more than a minute. That's clearly not acceptable.

It's more likely that you'll find that your web service scales (i.e. the response time for requests remains constant) up to a certain number of virtual users, and then stops scaling. To troubleshoot this, you need to keep an eye on the performance of the server the service is running on. You need to know what is causing this change in behaviour; is the CPU saturated, is the disk thrashing, or is the network card causing the bottleneck? These are all possible causes of performance problems. This detailed analysis is outside the scope of this white paper.
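
One way to keep that eye on the server is to sample its resource counters while the load test runs, so a sudden change in response times can be lined up against CPU, disk and network activity. The sketch below assumes the third-party psutil package is installed on the server; most operating systems also ship their own performance monitors, which do the same job.

    import psutil

    def watch_server(interval_seconds=5):
        while True:
            # cpu_percent blocks for the interval, so this loop samples continuously.
            cpu = psutil.cpu_percent(interval=interval_seconds)
            disk = psutil.disk_io_counters()          # cumulative counters; take differences
            net = psutil.net_io_counters()            # between samples to see rates
            print(f"cpu {cpu:5.1f}%  "
                  f"disk read/write {disk.read_bytes}/{disk.write_bytes}  "
                  f"net sent/recv {net.bytes_sent}/{net.bytes_recv}")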

Once your system has hit a bottleneck, you need to know if there is a way round the problem. If you add another processor to the server, will that double its capacity? If you add another server, will that double its capacity? If you have a web service that scales in this way, then you know that you will be able to cope with extra demand by throwing more and more hardware at the problem. If your web service doesn't scale in this way, then it won't perform no matter how much money you spend on expensive hardware.

Figure 2 - Web services with different scalability properties

In the above illustration, Web Service 1 scales well until about 120 requests a second (the time to last byte remains constant), but then stops scaling. Web Service 2 scales poorly - as the number of requests a second increases, the time to last byte increases linearly.

Conclusion

Testing web services presents a number of unique challenges which can only be overcome by the use of an automated testing tool. It is important to check the capacity and scalability of your web service at the design stage, as well as to carry out a full-scale load test before launch.

This whitepaper has only scratched the surface of web service testing, but hopefully has given you some useful ideas of what is involved.

About the author

This white paper was written by Neil Davidson, Technical Director of Red Gate Software Ltd. Red Gate's latest product is the Advanced .NET Testing System (ANTS), a tool for proof of concept and load testing .NET web sites and services. Visit www.red-gate.com/ants.htm for more details.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
