| Design activity | Tests to prepare |
|---|---|
| Requirements | Acceptance tests |
| System requirements | System tests |
| Global design | Integration tests |
| Detailed design | Unit tests |
Objectives
Testing shows the presence of bugs, not their absence.
Two main objectives:
- Validation: check that the software fulfils its requirements
- Verification: identify erroneous behavior
Determine if it is fit for purpose:
- Purpose: is it safety-critical? If so, go through formal validation
- Expectations: do users expect it to be polished?
- Marketing: time-to-market and prices
Staged Testing
Run unit tests, then component tests, then system tests (scenario-based user testing).
Unit Testing
- Every feature should be testable and tested
- Each piece of code should be self-sufficient
- Use fake/simulated inputs
- Avoid human/third-party interactions
Methods should also guard against misuse:
- Regression tests
- Explicit verification of pre/post-conditions at component boundaries (see the sketch below)
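A minimal sketch of explicit pre/post-condition checks at a component boundary; the `withdraw` operation is a hypothetical example:

```python
def withdraw(account: dict, amount: float) -> dict:
    """Hypothetical operation used to illustrate boundary checks."""
    # Pre-conditions: reject misuse explicitly at the component boundary.
    assert amount > 0, "amount must be positive"
    assert amount <= account["balance"], "insufficient funds"
    new_account = {**account, "balance": account["balance"] - amount}
    # Post-condition: the balance decreased by exactly `amount`.
    assert new_account["balance"] == account["balance"] - amount
    return new_account
```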
Be skeptical:
- Identify input domains (equivalence classes) whose values should have the same effect
- Consider edge cases at the boundaries of those domains (see the sketch below)
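A minimal pytest-style sketch testing one representative value per domain plus the boundary values; the `grade` function and its threshold of 50 are assumptions:

```python
import pytest

def grade(score: int) -> str:
    """Hypothetical function under test: pass/fail with a threshold of 50."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

@pytest.mark.parametrize("score, expected", [
    (0, "fail"),    # lower boundary of the valid range
    (25, "fail"),   # representative of the failing domain
    (49, "fail"),   # just below the pass boundary
    (50, "pass"),   # pass boundary
    (75, "pass"),   # representative of the passing domain
    (100, "pass"),  # upper boundary of the valid range
])
def test_grade_per_domain(score, expected):
    assert grade(score) == expected

def test_grade_rejects_values_outside_the_domain():
    with pytest.raises(ValueError):
        grade(101)  # just outside the valid range
```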
Component Testing
Various types of interfaces:
- Application behavior through operations such as method calls
- Shared memory between processes
- Messages being passed through some communication medium
Be skeptical of input data:
- Make components fail deliberately and check that failures are contained instead of cascading between components
- Stress testing (e.g. message overflow)
- If an expected call order exists, try calling operations in a different order (see the sketch below)
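A minimal sketch of a component-level misuse test; the `Connection` component, its `ProtocolError` and the connect-before-send order are assumptions:

```python
import pytest

class ProtocolError(Exception):
    """Raised when the component is used out of its expected call order."""

class Connection:
    """Hypothetical component under test."""

    def __init__(self) -> None:
        self.is_open = False

    def connect(self) -> None:
        self.is_open = True

    def send(self, message: str) -> None:
        if not self.is_open:
            # Fail locally with an explicit error instead of corrupting state,
            # so the failure does not cascade into neighbouring components.
            raise ProtocolError("send() called before connect()")

def test_out_of_order_call_fails_cleanly():
    conn = Connection()
    with pytest.raises(ProtocolError):
        conn.send("hello")  # deliberately violates the expected call order
```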
System Testing
- Integrate third party components/systems
- Should be performed by dedicated testers, or at least not only by developers
Scenario-based testing:
- Main usages (full interaction flows)
- Tests should touch all layers (e.g. the GUI)
Trace and record test executions in some structured way:
- Input values, expected output, observed output
- Include metadata like who, when, issue IDs
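A minimal sketch of recording each test execution as one structured entry; the record fields and the JSON-lines file name are assumptions:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class TestRecord:
    test_id: str
    input_values: dict
    expected_output: str
    observed_output: str
    passed: bool
    tester: str           # who ran the test
    executed_at: str      # when it was run
    issue_ids: list = field(default_factory=list)  # related issue IDs

def log_record(record: TestRecord, path: str = "test_log.jsonl") -> None:
    # One JSON line per execution keeps the trace machine-readable.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_record(TestRecord(
    test_id="login-happy-path",
    input_values={"user": "alice", "password": "***"},
    expected_output="dashboard shown",
    observed_output="dashboard shown",
    passed=True,
    tester="qa-team",
    executed_at=datetime.now(timezone.utc).isoformat(),
))
```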
Agile Testing
For each commit run:
- Automated unit tests
- Automated story tests
Peer reviews should also be done before merging.
Any successful build becomes a release candidate.
Run manual tests on the candidate at the end of each sprint.
Run automated performance tests before each release.
Assertion clauses:
- Check parameter values
- Ensure invariants are actually invariants
- Useful for regression testing
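A minimal sketch of assertion clauses that check parameter values and re-check a class invariant; the `BoundedCounter` class is a hypothetical example:

```python
class BoundedCounter:
    """Hypothetical class used to illustrate assertion clauses."""

    def __init__(self, limit: int) -> None:
        assert limit > 0, "limit must be positive"   # parameter check
        self.limit = limit
        self.count = 0

    def _invariant(self) -> bool:
        # The property every public operation must preserve.
        return 0 <= self.count <= self.limit

    def increment(self, step: int = 1) -> None:
        assert step > 0, "step must be positive"     # parameter check
        self.count = min(self.count + step, self.limit)
        # Re-checking the invariant after each change doubles as a cheap
        # regression check when the class is modified later.
        assert self._invariant()
```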
Guideline-based testing targets common programming mistakes (e.g. null values) and ensures tests cover these aspects.
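A minimal sketch of a guideline-based test for the "null values" guideline; the `normalise_name` function is a hypothetical example:

```python
import pytest

def normalise_name(name):
    """Hypothetical function under test."""
    if name is None:
        raise ValueError("name must not be None")
    return name.strip().title()

# Guideline: every public operation is exercised with None and empty input.
def test_none_is_rejected_explicitly():
    with pytest.raises(ValueError):
        normalise_name(None)

def test_empty_string_is_handled():
    assert normalise_name("") == ""
```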
Acceptance Test-Driven Development
User stories accompanied by acceptance criteria.
To automate the tests:
- Define application interfaces; isolate the UI
- Use dependency injection (inversion of control)
- Provide minimal fake implementations (stubs) for dependencies such as databases
- Split asynchronous scenarios into synchronous ones
- Fake time where possible (e.g. latency, scheduling)
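A minimal sketch of dependency injection with a stubbed data source and a fake clock; all names (`GreetingService`, `StubUserDatabase`, `FakeClock`) are assumptions for illustration:

```python
from dataclasses import dataclass

class StubUserDatabase:
    """Stub standing in for a real database dependency."""
    def __init__(self, users: dict) -> None:
        self._users = users
    def find(self, user_id: int) -> str:
        return self._users[user_id]

class FakeClock:
    """Fake time source so tests never sleep or depend on wall-clock time."""
    def __init__(self, now: float = 0.0) -> None:
        self.now = now
    def time(self) -> float:
        return self.now

@dataclass
class GreetingService:
    database: object  # injected, so tests can pass stubs/fakes
    clock: object     # injected time source

    def greet(self, user_id: int) -> str:
        user = self.database.find(user_id)
        hour = int(self.clock.time() // 3600) % 24
        period = "morning" if hour < 12 else "afternoon"
        return f"Good {period}, {user}!"

def test_greeting_with_injected_stub_and_fake_clock():
    service = GreetingService(
        database=StubUserDatabase({1: "Alice"}),
        clock=FakeClock(now=9 * 3600),  # 09:00 as a fake timestamp
    )
    assert service.greet(1) == "Good morning, Alice!"
```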
Automated Acceptance Tests
Use the same path as the users; playback tools like Selenium test directly on the GUI, but may be fragile and time-consuming.
Hence, decoupling the UI from business logic is useful.
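A minimal sketch of a Selenium-based GUI test; the URL, the element IDs and the use of a local Chrome driver are assumptions:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_through_the_gui():
    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
    try:
        driver.get("https://example.test/login")  # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("alice")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # Assert on what the user actually sees, not on internal state.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```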
Acceptance criteria can be tested using Cucumber.
Sprint Reviews
Prepare and plan sprint reviews:
- Ensure the tests for all acceptance criteria are running
- Acts as a last-minute verification (smoke test)
- May need to rework some stories
- Prefer rolling back to the previous candidate rather than fixing bugs
Capacity, Load and Stress Testing
Quality requirements are often difficult to test (e.g. maintainability and auditability are not directly testable).
Capacity-focused requirements can be tested in a semi-automated fashion: response-time requirements can be expressed as user stories or as required system features.
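A minimal sketch of a response-time requirement expressed as an executable test; the operation and the 200 ms threshold are assumptions:

```python
import time

def handle_request(query: str) -> str:
    """Hypothetical operation whose response time is being measured."""
    return query.upper()

def test_response_time_below_200ms():
    # The capacity requirement, phrased as a measurable acceptance criterion.
    start = time.perf_counter()
    handle_request("status")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.200, f"response took {elapsed * 1000:.1f} ms"
```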