Topic 24: Software Validation, Verification, Debugging, and Testing
Synopsis
- "...generic¤
name given to those checking processes which ensure that software conforms
to its specification and meets the needs of the customer..." (Sommerville)
- Those two goals are not the same!
- "Are we building the product right?"
- Does it meet the design specification?
- Need to check at all stages of development
- That is:
- ...does the implementation satisfy the design?
- ...is the documentation (of what actually happens) consistent with the
design (of what should happen)?
- ...does the testing ensure that the system satisfies the design?
- "Are we building the right product?"
- Does it meet the requirements specification?
- Again, need to check this at all phases of development (but this time,
  test against the requirements document)
- That is:
- ...does the design satisfy the requirements?
- ...does the implementation satisfy the requirements?
- ...is the documentation consistent with the requirements?
- ...does the testing ensure that the system satisfies the requirements?
- The objective of testing is to find errors
- A good test is one that has a high probability of finding an error
- A successful test is one that finds a new error
- This view (of Sommerville's) is a complete reversal of what we normally
  consider testing to be
- Unit testing (each module, separately; see the sketch after this list)
- Integration testing (interactions of modules)
- Acceptance testing (interactions with users)
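A minimal unit-test sketch in C++ (the clamp function and its test values
are invented for illustration, not taken from the course notes):

    #include <cassert>

    // Hypothetical module under test: clamps a value into [lo, hi].
    int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    // Unit test: exercises this single module in isolation, before
    // any integration with the rest of the system.
    int main()
    {
        assert(clamp(5, 0, 10) == 5);     // value within range
        assert(clamp(-3, 0, 10) == 0);    // below lower bound
        assert(clamp(42, 0, 10) == 10);   // above upper bound
        return 0;
    }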
- Logical errors are inversely proportional to execution probability
  (the less often a path is executed, the more likely it harbours an
  undiscovered error)
- Program flow is often counter-intuitive (not according to our assumptions)
- Typographical errors occur randomly
- "White-box" testing (early)
- "Black-box" testing (late)
- Execution path testing (use McCabe's metric to generate a minimal set of
  "testing trajectories"; see the sketch after this list)
- Conditional testing (focus on validity and coverage of each conditional
in the program)
- Data flow testing (focus on a datum and its transformations)
- Loop testing (4 kinds: simple, concatenated, nested, unstructured)
- Tests component interactions within program
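A sketch of execution path testing: a function with two predicates has
McCabe's cyclomatic complexity V(G) = 2 + 1 = 3, so a basis set of three
independent paths exercises every branch. The sign function below is a
made-up example:

    #include <cassert>

    // Two predicates, so V(G) = 2 + 1 = 3: three independent
    // execution paths form a basis set for testing.
    int sign(int x)
    {
        if (x > 0) return 1;      // path 1
        if (x < 0) return -1;     // path 2
        return 0;                 // path 3
    }

    int main()
    {
        assert(sign(7)  ==  1);   // exercises path 1
        assert(sign(-7) == -1);   // exercises path 2
        assert(sign(0)  ==  0);   // exercises path 3
        return 0;
    }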
- Equivalence partitioning (generates valid and invalid test cases covering
  a range of valid and invalid input values; see the sketch after this list)
- Boundary value analysis (concentrates on data at the edges of validity -
  the likely spot for errors)
- Cause-and-effect techniques (list the range of possible inputs and their
  effects - i.e. outputs)
- Comparison testing (compare system performance with an existing system,
  or prototypes, or alternative implementations)
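A minimal sketch combining equivalence partitioning and boundary value
analysis, assuming a hypothetical validPercentage function whose valid
input range is 0 to 100:

    #include <cassert>

    // Hypothetical module under test: accepts percentages in [0, 100].
    bool validPercentage(int p)
    {
        return p >= 0 && p <= 100;
    }

    int main()
    {
        // Equivalence partitioning: one representative per class.
        assert(!validPercentage(-50));    // invalid class: below range
        assert( validPercentage(50));     // valid class: within range
        assert(!validPercentage(150));    // invalid class: above range

        // Boundary value analysis: values at and just beyond the edges.
        assert(!validPercentage(-1));
        assert( validPercentage(0));
        assert( validPercentage(100));
        assert(!validPercentage(101));
        return 0;
    }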
- The essential step is to define what you expect to happen for a given
test case (i.e. what constitutes an error)
- Avoid testing your own code (like proofreading)
- Test the invalid and unexpected more thoroughly than the valid and expected
- When you get "stuck", stop and think (and maybe sleep on it)
- Keep your test cases around (to retest later versions - regression
  testing; see the sketch below)
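One way to keep test cases around is a table of inputs paired with expected
outputs, re-run against each new version; the square function here is
invented for illustration:

    #include <cstdio>

    // Hypothetical function under regression test.
    int square(int x) { return x * x; }

    // Retained test cases: each input is paired with its expected
    // output - an explicit definition of what constitutes an error.
    struct Case { int input; int expected; };

    int main()
    {
        const Case cases[] = { {0, 0}, {1, 1}, {-3, 9}, {12, 144} };
        int failures = 0;
        for (const Case& c : cases) {
            if (square(c.input) != c.expected) {
                std::printf("FAIL: square(%d) != %d\n", c.input, c.expected);
                ++failures;
            }
        }
        return failures;    // non-zero exit signals a regression
    }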
- Debugging is a consequence of successful testing
- Remove and record the error
- Still an art (because errors almost always present symptomatically, not
deductively)
- The cause and effect of an error may be remote in space and/or time
- The symptoms may arise (or temporarily vanish!) due to interactions of
two or more errors
- The problem may be caused by timing issues, not processing issues
- There are psychological issues to contend with (feelings of failure or
guilt, pressure to solve the problem, frustration)
- Brute force (instrumenting, walk-through, assembler dumps; see the
  sketch after this list)
- Backtracking (work backwards from the point of error, tracing possible
paths to it)
- Cause elimination (binary search approach)
- Examination/hypothesis/testing cycle
- Work logically to eliminate possible causes
- Try the most likely hypotheses first
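A sketch of the brute-force approach, instrumenting a suspect loop with
trace output (the average function and its off-by-one bug are invented for
illustration):

    #include <cstdio>

    // Hypothetical buggy function: the loop starts at 1, so it
    // silently skips values[0].
    double average(const double* values, int n)
    {
        double sum = 0.0;
        for (int i = 1; i < n; ++i) {    // BUG: should start at i = 0
            sum += values[i];
            // Instrumentation: trace each iteration of the suspect loop.
            std::fprintf(stderr, "i=%d value=%g sum=%g\n", i, values[i], sum);
        }
        return sum / n;
    }

    int main()
    {
        const double data[] = { 2.0, 4.0, 6.0 };
        // Define the expectation first: the average should be 4.
        std::printf("average = %g (expected 4)\n", average(data, 3));
        return 0;
    }

The trace shows only two iterations for three data values, localising the
error to the loop's starting index.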
- Pressman: Chapters 17, 18 and 23
- Sommerville: Chapters 19 to 21
This material is part of the CSE2305 - Object-Oriented
Software Engineering course.
Copyright © Jon McCormack & Damian Conway, 1998–2005. All rights
reserved.