Regression Testing Optimization Strategies
Retest-all Strategy: execute every test case after each change
Test Case Prioritization: order test cases according to a chosen surrogate property (see the sketch after this list)
-> hope to maximize early fault detection
Test Suite Minimization: reduce unnecessary effort by permanently removing redundant tests
Regression Test Selection: execute only relevant tests
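A minimal sketch of prioritization using the greedy "additional coverage" heuristic; the test names and per-test coverage sets are hypothetical illustration data:
```python
# Greedy "additional coverage" prioritization: repeatedly pick the test
# that covers the most statements not yet covered by earlier picks.
def prioritize(coverage: dict[str, set[int]]) -> list[str]:
    remaining = dict(coverage)
    covered: set[int] = set()
    order: list[str] = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical per-test statement coverage (the surrogate property).
coverage = {
    "test_login": {1, 2, 3, 4},
    "test_logout": {3, 4},
    "test_checkout": {5, 6, 7},
}
print(prioritize(coverage))  # ['test_login', 'test_checkout', 'test_logout']
```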
Assumptions for mutation testing
Competent Programmer Hypothesis:
Most software faults are introduced due to small syntactic errors
Coupling Hypothesis:
A test suite that detects all simple bugs will also detect more complex bugs
-> generalizability
Some empirical evidence supports this, but programmers also make logical mistakes!
Approaches for Regression Test Optimization
dynamic program analysis:
focus on execution traces of test cases (e.g. coverage per test case)
different hypotheses for different techniques
minimization: tests with similar execution traces -> redundant (see the sketch after this block)
prioritization: tests that cover larger parts of the code are preferred
selection: relevant, if it executes the modified code
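A minimal sketch of the minimization hypothesis above, treating hypothetical per-test coverage sets as execution traces (greedy set cover):
```python
# Greedy minimization: keep only the tests needed to preserve total coverage;
# a test whose trace adds nothing new is treated as redundant and dropped.
def minimize(coverage: dict[str, set[int]]) -> list[str]:
    goal = set().union(*coverage.values())
    candidates = dict(coverage)
    kept: list[str] = []
    covered: set[int] = set()
    while covered != goal:
        best = max(candidates, key=lambda t: len(candidates[t] - covered))
        kept.append(best)
        covered |= candidates.pop(best)
    return kept

coverage = {
    "test_a": {1, 2, 3},
    "test_b": {2, 3},   # similar trace, subsumed by test_a -> redundant
    "test_c": {4, 5},
}
print(minimize(coverage))  # ['test_a', 'test_c']
```
Unlike prioritization, which keeps every test, minimization permanently drops the redundant ones.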
static program analysis:
approximate behaviour by operating on a static representation of the program
less precise, but cheaper
predictive techniques:
based on e.g. execution history, code authorship, …
Mutation Testing (Test-Ending Criterion)
Apply small syntactic changes to the program -> mutant
Run the test suite on each mutant and check whether it is detected
-> If a test fails on the mutant -> mutant killed
Mutation score: #killed mutants / #non-equivalent mutants
-> Problem: equivalence undecidable
Usually applied with unit tests
-> Low mutation score: indicates a deficiency of the test suite
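A toy illustration of this workflow, with a hypothetical function, two mutants, and one unit test:
```python
def original(a, b):
    return a + b

def mutant_sub(a, b):
    return a - b          # "+" mutated to "-" (small syntactic change)

def mutant_mul(a, b):
    return a * b          # "+" mutated to "*"

def test_add(impl) -> bool:
    """True if the test passes for the given implementation."""
    return impl(2, 2) == 4

assert test_add(original)  # the suite passes on the original program
mutants = [mutant_sub, mutant_mul]
killed = sum(1 for m in mutants if not test_add(m))
print(f"mutation score: {killed}/{len(mutants)}")  # 1/2
# mutant_mul survives because 2 * 2 == 4 as well -> the low score exposes a
# weak test; adding e.g. impl(2, 3) == 5 would kill it.
```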
Flaky Tests
A test that both fails and passes on separate occasions, even though controlled environmental factors remain constant -> inconsistent behavior.
Root Causes of flaky tests
async wait: the test makes an asynchronous call but does not properly wait for the result (see the sketch after this list)
concurrency: threads interacting in an undesirable way
test order dependency: tests relying on a certain order of execution
network: the network is hard to control -> prone to flakiness
randomness: non-determinism of the algorithm (especially common in ML projects)
others (I/O, hardware, time, …)
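A minimal sketch of the async-wait cause and its fix; names and timings are hypothetical (run with pytest):
```python
import random
import threading
import time

def slow_computation(result: dict) -> None:
    time.sleep(random.uniform(0.001, 0.02))  # duration varies, as in real systems
    result["value"] = 42

def test_flaky() -> None:
    result: dict = {}
    threading.Thread(target=slow_computation, args=(result,)).start()
    time.sleep(0.01)             # flaky: fixed wait, sometimes too short
    assert result.get("value") == 42

def test_stable() -> None:
    result: dict = {}
    worker = threading.Thread(target=slow_computation, args=(result,))
    worker.start()
    worker.join(timeout=1.0)     # fix: actually wait for completion (bounded)
    assert result.get("value") == 42
```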
Example properties for Test Case Prioritization (TCP)
Value-based: early fault-detection capability, coverage
Cost-based: execution time, setup costs
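One way to combine the two families is ordering by estimated value per unit cost, as in this sketch (all numbers are hypothetical):
```python
# Cost-cognizant TCP: prefer tests with high expected value per unit cost.
tests = {
    "test_payment": {"value": 8, "cost": 2.0},   # value: e.g. past fault detections
    "test_ui_smoke": {"value": 3, "cost": 0.5},  # cost: e.g. runtime + setup (s)
    "test_report": {"value": 4, "cost": 8.0},
}
order = sorted(tests, key=lambda t: tests[t]["value"] / tests[t]["cost"], reverse=True)
print(order)  # ['test_ui_smoke', 'test_payment', 'test_report']
```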
Types of abstraction
Abstraction by encapsulation
hide details but keep them accessible
frequently used in architecture & design
Abstraction by omission
details are not known
enables focusing on certain aspects
frequently used in requirements engineering
Different roles of models of SUT and models of environment for testing
Model of SUT:
provides oracle
structure can be used for automated test case generation
Model of the environment:
restricts possible inputs to system under test
acts as test selection criterion
can also describe typical interaction patterns
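A minimal sketch of both roles, assuming a hypothetical login state machine:
```python
# Model of the SUT: expected transitions -> serves as the oracle, and its
# structure can drive test generation (e.g. cover every transition).
SUT_MODEL = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "logout"): "logged_out",
    ("logged_in", "view"): "logged_in",
}

# Model of the environment: which inputs can occur in each state -> restricts
# generated inputs and acts as a test selection criterion.
ENV_MODEL = {
    "logged_out": ["login"],
    "logged_in": ["logout", "view"],
}

def run_trace(start: str, events: list[str]) -> str:
    state = start
    for event in events:
        assert event in ENV_MODEL[state], f"{event!r} impossible in {state!r}"
        state = SUT_MODEL[(state, event)]  # oracle: the expected next state
    return state

# A typical interaction pattern allowed by the environment model.
assert run_trace("logged_out", ["login", "view", "logout"]) == "logged_out"
```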
Test selection criteria at level of code and at level of models
Test case selection criteria for code:
specification-based (functional) criteria
code-based (structural) criteria
Test case selection criteria for models:
model coverage criteria
requirements-based criteria (if we have requirements models)
Average Percentage of Fault Detection (APFD) Formula
APFD = 1 - (TF_1 + TF_2 + … + TF_n) / (n · m) + 1 / (2m)
TF_i: rank order position of the first test case that reveals the i-th fault
n: total number of faults
m: total number of test cases
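A short worked computation of the formula (the fault positions are hypothetical):
```python
# APFD for an ordering of m = 5 tests that reveals n = 3 faults.
# tf[i] = rank position (1-based) of the first test revealing fault i.
def apfd(tf: list[int], m: int) -> float:
    n = len(tf)
    return 1 - sum(tf) / (n * m) + 1 / (2 * m)

print(apfd([1, 2, 4], m=5))  # 1 - 7/15 + 1/10 ≈ 0.633
```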