Abstract: The discipline of performance testing has had difficulty keeping up with Agile software development and deployment processes. Many people still see performance testing as a single experiment, run against a completely assembled, code-frozen, production-resourced system, with the “accuracy” of simulation and environment considered critical to the value of the data the test provides. This clashes directly with Agile principles of embracing continuous change, frequent delivery, and regular feedback.
Performance and scalability often become significant concerns only once users are on the system, at which point they can trigger expensive refactoring. Critical design decisions could be made sooner and more cheaply with timely performance feedback earlier in the project. How do we provide actionable, timely information about performance and reliability when the software is not yet (or will never be) complete, when the system is not yet assembled, or when the software will be deployed in more than one environment? I will deconstruct “realism” in performance simulation, discuss how to test performance more cheaply so that we can test more often, and suggest concrete strategies and techniques.
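As one illustration of cheap, frequent performance feedback, a small latency check against a single endpoint can run on every build, long before the full system is assembled. This is a minimal sketch, not a tool or method from the talk: the endpoint URL, sample size, and latency budget are all illustrative assumptions.

```python
# Minimal per-build latency check (sketch). ENDPOINT, SAMPLES, and the
# p95 budget are hypothetical values chosen for illustration.
import statistics
import time
import urllib.request

ENDPOINT = "http://localhost:8080/api/health"  # hypothetical service under test
SAMPLES = 20
P95_BUDGET_SECONDS = 0.250  # illustrative latency budget


def measure_once(url: str) -> float:
    """Time a single request, reading the full response body."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as response:
        response.read()
    return time.perf_counter() - start


def main() -> None:
    latencies = sorted(measure_once(ENDPOINT) for _ in range(SAMPLES))
    p95 = latencies[int(0.95 * (SAMPLES - 1))]
    median = statistics.median(latencies)
    print(f"median={median * 1000:.1f} ms  p95={p95 * 1000:.1f} ms")
    # Fail the build when the trend regresses past the budget: the point is
    # frequent, comparable feedback, not a "realistic" production simulation.
    assert p95 <= P95_BUDGET_SECONDS, f"p95 {p95:.3f}s exceeds budget"


if __name__ == "__main__":
    main()
```

Run against a stable environment on every build, the trend in these numbers matters more than their absolute “realism.”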
Learning Outcomes:
- What are the risks we are testing for?
- What is “Realistic”? Why do we need it?
- Agile Performance Testing Techniques
- Testing with Available Hardware
- Performance Testing Incomplete Systems
- Strategies for Reporting Performance Feedback