Tuesday, March 15, 2011

Performance Testing

One of my users recently asked me about the process of taking an application from build to live. The steps we take include:

* Functional Testing
* End-to-End Testing
* Performance Testing
* User Acceptance
* Cutover Planning, including training, communications, and the technical details of transitioning from one application to another.

I was asked to explain the difference between end-to-end testing and performance testing.

End-to-end testing is done to make sure the application code does what is expected in terms of function.  For example, if you look up a patient result, is it presented accurately?  End-to-end testing could theoretically be done by one person entering one type of transaction after another.
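
A minimal sketch of what one such check might look like, assuming a hypothetical patient-results endpoint that returns JSON (the URL and field names here are invented for illustration):

```python
import json
import urllib.request

# Hypothetical test-system URL -- substitute your application's real endpoint.
BASE_URL = "http://test-server.example.org"

def check_patient_result(patient_id, expected_value):
    """Fetch a patient result and verify it matches what was entered."""
    url = f"{BASE_URL}/patients/{patient_id}/results"
    with urllib.request.urlopen(url) as resp:
        result = json.load(resp)
    return result.get("value") == expected_value

# One tester, one transaction type at a time: enter a known value upstream,
# then confirm the application presents it accurately.
assert check_patient_result("MRN-0001", "Hemoglobin 13.5 g/dL")
```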

Performance testing places the application under "load" to see if there are bottlenecks with the server, database, storage, and middleware.  The purpose is to avoid slow performance after go live.

Many times you can do performance tests using simulated input.  Two typical software tools for doing this are HP's LoadRunner and Micro Focus' SilkPerformer.
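
The commercial tools do far more, but the heart of simulated input is small enough to sketch.  Assuming the same hypothetical endpoint as above, a pool of concurrent threads and some timing code give a rough load test:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoint under test.
URL = "http://test-server.example.org/patients/MRN-0001/results"

def timed_request(_):
    """Issue one request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

# Simulate 50 concurrent users issuing 1,000 requests in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(timed_request, range(1000)))

print(f"median latency:  {latencies[len(latencies) // 2]:.3f}s")
print(f"95th percentile: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```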

Some vendors recommend manual load testing, i.e., put all the staff on the new system and do a day of work to see if infrastructure performance suffers.

Although manual testing is often the easiest thing to do, it may not find bottlenecks in transactional performance.  Each transaction type and software module creates a different load on the infrastructure.  Some transactions have minimal impact while others cause significant strain.  Doing load testing right requires a representative mixture of transactions including automated interfaces, data entry, reports, and others.
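
One way to build that mixture is to have the load generator draw each simulated transaction from a weighted mix.  The transaction names and weights below are invented for illustration; a real mix would be derived from production usage statistics:

```python
import random

# Illustrative mix -- these categories and weights are made up for the example.
TRANSACTION_MIX = {
    "result_lookup":     0.40,  # frequent, lightweight reads
    "order_entry":       0.25,  # writes with validation logic
    "interface_message": 0.30,  # automated inbound feeds
    "report_generation": 0.05,  # rare but expensive queries
}

def next_transaction():
    """Pick the next simulated transaction according to the weighted mix."""
    kinds = list(TRANSACTION_MIX)
    weights = list(TRANSACTION_MIX.values())
    return random.choices(kinds, weights=weights, k=1)[0]

# Drive the load generator with a realistic blend rather than one
# transaction type repeated over and over.
print([next_transaction() for _ in range(10)])
```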

Our approach is generally a combination of manual and automated performance testing.  We pre-load the databases with years of data.  We use automated load testing tools to simulate heavy web site use.  We run scripts that emulate interface activity.  In the context of this real-world simulation, we then let the users exercise the software fully.
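
The interface-emulation piece might look like the sketch below, which pushes simplified HL7-style messages at a steady rate.  The host, port, and message content are placeholders; a real script would replay de-identified production traffic:

```python
import socket
import time

# Placeholder interface engine address and a simplified HL7-style message.
HOST, PORT = "interface-test.example.org", 6661
MESSAGE = b"MSH|^~\\&|LAB|HOSP|EHR|HOSP|20110315||ORU^R01|00001|T|2.3\r"
RATE_PER_SECOND = 20  # sustained message rate to hold on the interface

with socket.create_connection((HOST, PORT)) as sock:
    for _ in range(10_000):
        sock.sendall(MESSAGE)
        time.sleep(1.0 / RATE_PER_SECOND)  # pace the feed at a steady rate
```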

Of course, even such comprehensive testing can miss software flaws, such as queries against unindexed database tables or processes that become a rate-limiting step in application performance.  Thus, it's also important to have tools (such as OPNET) that diagnose problems if slowdowns occur after go live, and to have a strong working relationship with your software vendors so they can rapidly correct any flaws that appear once an application is in full production.
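
The unindexed table problem is easy to demonstrate.  In this SQLite sketch, the query planner falls back to a full table scan until an index is created; production databases offer analogous EXPLAIN facilities:

```python
import sqlite3

# In-memory SQLite stand-in for the unindexed-table problem.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (patient_id TEXT, value TEXT)")

def plan(sql):
    """Ask the query planner how it would execute the statement."""
    return [row[-1] for row in db.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT value FROM results WHERE patient_id = 'MRN-0001'"
print(plan(query))  # full table scan -- slow once years of data are loaded

db.execute("CREATE INDEX idx_results_patient ON results (patient_id)")
print(plan(query))  # now a fast index search
```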

4 comments:

  1. I have been reading Dr. Halamka's blog for about a year now, and after reading this morning's post about Performance Testing, I just wanted to make my admiration for the subject matter of this blog known: I get SO much inspiration from this blog, both personally and professionally. Reading these posts is something that I TRULY anticipate each week. I am an IT professional working for a health care IT employer - these posts consistently provide ideas, topics of conversation, and validation of my career choice. Thanks for taking the time to share your experiences!

  2. I might include some aspects of boundary condition testing and failure mode analysis/testing. Assumptions, especially in more complex and integrated systems, are by nature harder to anticipate. The result is that they tend to be more often overlooked and under-analyzed.

  3. "have a strong working relationship with your software vendors so they can rapidly correct any flaws" -JH.

    Therein lies the rub. Finding the right vendor who truly provides great customer service and quick, actionable corrective steps is the key, isn't it? Sadly, that isn't what we tend to see in the HIT realm. There are a few stars out there, though.

  4. As you allude to in the last paragraph, it is impossible to catch all the bugs before going live. That's why I believe real-time monitoring and agile deployment practices are seeing such strong adoption, particularly in the software-as-a-service space.

    If your processes are set up with agile operations in mind, you can deploy code, watch in real time whether your app behaves as expected, and if not, roll back to the last known functional state.

    To achieve this flexibility and control, you need
    a) to ensure predictable behavior in your production environment, and
    b) the correct monitoring tools such as Librato, New Relic, Cloudkick, etc. (disclosure: I work for Librato) to visualize code performance and availability.

    In the end, a strong relationship with your ISVs is important, but the data to back up your sense of what is going wrong and the ability to act quickly are absolutely crucial.
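
    To illustrate, that deploy, watch, roll-back loop can be sketched in a few lines; the deploy command and health endpoint below are hypothetical stand-ins for real tooling:

    ```python
    import subprocess
    import time
    import urllib.request

    # Hypothetical deploy command and health endpoint -- stand-ins for
    # whatever your deployment tooling and monitoring actually provide.
    HEALTH_URL = "http://app.example.org/health"
    WATCH_SECONDS = 300

    def healthy():
        """Return True if the app answers its health check."""
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    subprocess.run(["./deploy.sh", "new-release"], check=True)

    # Watch in real time; on regression, return to the last known good state.
    deadline = time.time() + WATCH_SECONDS
    while time.time() < deadline:
        if not healthy():
            subprocess.run(["./deploy.sh", "last-known-good"], check=True)
            break
        time.sleep(10)
    ```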
