Subtopic: adequate test set
  Quote: the adequacy of test data may be based on the program or on its specification [»weyuEJ6_1988]
  Quote: an adequate test set executes every statement of a program [»weyuEJ6_1988]
  Quote: for every program, there exists an adequate test set [»weyuEJ6_1988]
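A minimal sketch of the statement-adequacy criterion quoted above, assuming an invented classify() function: the three assertions together execute every statement, which makes the set adequate in this sense without proving the code correct.

    # Statement-adequate test set for a hypothetical classify() function:
    # every statement is executed by at least one of the assertions below.
    def classify(x):
        if x < 0:
            return "negative"
        if x == 0:
            return "zero"
        return "positive"

    def test_classify_statement_adequate():
        assert classify(-5) == "negative"   # first return
        assert classify(0) == "zero"        # second return
        assert classify(7) == "positive"    # final return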
Subtopic: data-driven test
  Quote: turn unit tests into parameterized, data-driven unit tests (PUT); generate inputs by symbolic execution and constraint solving [»tillN7_2006]
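A sketch of the data-driven half of this idea using pytest's parameterized tests; the absolute() function and its input table are invented, and the symbolic-execution step that would generate such a table automatically is not shown.

    # Parameterized, data-driven unit test: the test body is written once and
    # the (input, expected) pairs are supplied as data.  In the quoted approach
    # the pairs would come from symbolic execution and constraint solving.
    import pytest

    def absolute(x):
        return -x if x < 0 else x

    @pytest.mark.parametrize("value, expected", [(-3, 3), (0, 0), (5, 5)])
    def test_absolute(value, expected):
        assert absolute(value) == expected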
Subtopic: white box testing
  Quote: by examining program structure can select statistically significant test cases; not so if treat program as a black box [»huanJC_1977]
  Quote: generate test cases by selecting operations randomly according to the operational profile and input states randomly within its domain [»musaJD3_1993]
  Quote: in testing a hardware multiply unit, one must use its internal structure; exhaustive testing takes too long [»dijkEW_1972, OK]
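A sketch of operational-profile selection as in the musaJD3_1993 quote above: each operation is chosen with the probability it has in expected use, then an input is drawn from that operation's domain.  The profile figures and the integer input domain are invented.

    # Random test-case selection driven by an operational profile.
    import random

    operational_profile = {"lookup": 0.70, "insert": 0.25, "delete": 0.05}

    def random_test_case(rng=random):
        operation = rng.choices(list(operational_profile),
                                weights=list(operational_profile.values()))[0]
        input_state = rng.randint(0, 999)   # stand-in for the operation's input domain
        return operation, input_state

    cases = [random_test_case() for _ in range(100)]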
Subtopic: random testing
  Quote: SPIN's simulation mode finds the most bugs; random choice at each turn; users rarely find rare bugs
  Quote: wynot minimized the random input that crashed a program; crash identification by stack trace [»zellA2_2002]
Subtopic: test input domains
  Quote: specify input domain for a software system by its symbolic input attribute decomposition (SIAD); for discipline, test plan, and statistical sampling [»choCK_1987]
  Quote: do not use partitioning for test-case selection; use partitioning when narrowly defined with a high probability of error [»hamlD12_1990]
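A sketch of partition-based selection under the caveat in the hamlD12_1990 quote: one representative is taken from each narrowly defined, error-prone subdomain.  The normalize() function and the partitions are invented.

    # One representative input per named subdomain of the input space.
    def normalize(s):
        return " ".join(s.split())

    partitions = {
        "empty string": "",
        "single word": "abc",
        "embedded runs of blanks": "a   b",
        "leading and trailing blanks": "  ab  ",
    }

    def test_normalize_by_partition():
        for name, value in partitions.items():
            result = normalize(value)
            assert result == result.strip(), name   # no leading/trailing blanks
            assert "  " not in result, name         # no double blanks remain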
Subtopic: test boundaries
  Quote: tests should exercise all requirements for input values which are likely to cause failure; e.g., boundary conditions [»postRM5_1987]
  Quote: probable errors by testing boundaries +/- one, normal, abnormal, and troublesome
  Quote: push an algorithm animation with pathological data; e.g., regular polygons revealed a subtle bug [»browMH12_1992]
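A sketch of boundary-plus-and-minus-one testing for an invented is_adult() rule: each boundary is probed at the value itself and one step to either side, together with a normal and an abnormal case.

    # Boundary, normal, and abnormal cases for a hypothetical age rule.
    def is_adult(age):
        return age >= 18

    def test_is_adult_boundaries():
        cases = [(17, False), (18, True), (19, True),   # boundary +/- one
                 (30, True),                            # normal case
                 (-1, False)]                           # abnormal case
        for age, expected in cases:
            assert is_adult(age) == expected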
Subtopic: testing as case analysis
  Quote: testing a program consists of case analysis, i.e., testing all possible cases [»goodJB_1977]
  Quote: test data selection by a condition-table of logically possible combinations of conditions [»goodJB_1977]
  Quote: need one test case per 10-15 LOC; can design 100 test cases per day [»yamaT11_1998]
  Quote: design test cases to cover all the features; use a matrix-based test sheet and eliminate redundant cases [»yamaT11_1998]
  Quote: a matrix-based test sheet works well because it redesigns the software as a decision table
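A sketch of a matrix-based test sheet read as a decision table: each row is one logically possible combination of conditions with its expected outcome, and the test walks every row.  The shipping-fee rule and its table are invented.

    # Decision-table ("matrix-based test sheet") for a hypothetical fee rule.
    def shipping_fee(member, order_total):
        if member or order_total >= 50:
            return 0
        return 5

    #                 member  order_total  expected fee
    decision_table = [(True,   10,          0),
                      (True,   60,          0),
                      (False,  10,          5),
                      (False,  60,          0)]

    def test_shipping_fee_decision_table():
        for member, total, expected in decision_table:
            assert shipping_fee(member, total) == expected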
Subtopic: test predicates
  Quote: select test data to exercise test predicates relevant to program correctness [»goodJB_1977]
  Quote: test predicate conditions from program specifications or implementation; each one added to condition-table [»goodJB_1977]
Subtopic: path testing
  Quote: path testing should exercise paths in such a way that errors are detectable; structure-based tests are insufficient [»goodJB_1977]
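A sketch of why covering a path is not enough: double() below is deliberately wrong (it squares instead of doubling), yet the first test covers the path and still passes; the second test takes the same path with data chosen so the error shows.  Both functions are invented, and the second test fails on purpose against the buggy code.

    # Path coverage alone versus path coverage with error-revealing data.
    def double(x):
        return x * x              # bug: should be x + x

    def test_path_only():
        assert double(2) == 4     # covers the path but cannot detect the bug

    def test_path_with_revealing_data():
        assert double(3) == 6     # same path; this data exposes the fault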
Subtopic: minimal test case and bug predicates
  Quote: automated testing should include automated test case simplification [»zellA2_2002]
  Quote: isolation better than simplification for test cases; e.g., two test cases, one that works and the other doesn't [»zellA2_2002]
  Quote: use input grammar for faster identification of 1-minimal, failing inputs [»zellA2_2002]
  Quote: Delta Debugging to produce a minimal, failing test case; e.g., sequence of browser commands, bad HTML for Mozilla [»zellA2_2002]
  Quote: ddmin identifies a failing test case in which removing any single change no longer fails; quadratic algorithm [»zellA2_2002]
  Quote: dd faster than ddmin; finds a 1-minimal difference with quadratic worst case [»zellA2_2002]
  Quote: use sampling and statistical regression to identify predicates that are correlated with program failure [»liblB6_2003]
  Quote: use sampling to identify predicates that are always true for a bug; called deterministic bugs [»liblB6_2003]
  Quote: for deterministic bugs, discard irrelevant predicates by universal falsehood, lack of failing coverage, lack of failing example, successful counterexample [»liblB6_2003]
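A compact sketch of the minimizing idea behind the ddmin quotes above: split the failing input into chunks, try each chunk and each complement, keep whatever smaller input still fails, and refine the granularity until removing any single element makes the failure disappear.  This is a simplification of Zeller's algorithm, and the crash predicate is invented.

    # Simplified delta-debugging minimization (in the spirit of ddmin).
    def ddmin(failing_input, fails):
        data, n = list(failing_input), 2
        while len(data) >= 2:
            chunk = max(len(data) // n, 1)
            subsets = [data[i:i + chunk] for i in range(0, len(data), chunk)]
            reduced = False
            for i, subset in enumerate(subsets):
                complement = [x for j, s in enumerate(subsets) if j != i for x in s]
                if fails(subset):                  # a single chunk already fails
                    data, n, reduced = subset, 2, True
                    break
                if fails(complement):              # failure persists without this chunk
                    data, n, reduced = complement, max(n - 1, 2), True
                    break
            if not reduced:
                if n >= len(data):                 # already 1-minimal
                    break
                n = min(n * 2, len(data))          # increase granularity and retry
        return data

    # Hypothetical failure: the "crash" needs both '<' and '>' in the input.
    crash = lambda s: "<" in s and ">" in s
    print(ddmin("a<b>cdef", crash))                # 1-minimal failing input: ['<', '>']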
Subtopic: automated test generation from model
  Quote: CADES generates tests from a formal holon description and a design information system database [»pearDJ7_1973]
  Quote: TOPD's checker generates test cases from the model and determines if the procedure's result matches the model's result [»hendP9_1975]
  Quote: the TOPD tester exhaustively executes the model of a procedure; result is state vectors for each valid execution path [»hendP9_1975]
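A sketch in the spirit of the TOPD quotes, not of TOPD itself: generated cases are run against both an executable model and the implementation, and any disagreement is flagged.  The saturating counter and its model are invented.

    # Model-based checking: implementation result must match the model's result.
    def increment_impl(value, limit):
        return value + 1 if value < limit else limit   # implementation under test

    def increment_model(value, limit):
        return min(value + 1, limit)                   # executable model

    def generated_cases():
        return [(v, limit) for limit in range(4) for v in range(limit + 1)]

    def test_impl_matches_model():
        for value, limit in generated_cases():
            assert increment_impl(value, limit) == increment_model(value, limit)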
Subtopic: exhaustive testing
  Quote: exhaustive test of binary search by all successes, all failures, and boundary conditions [»bentJ10_1983, OK]
  Quote: correct code by using sound principles during design, verification by analysis, and exhaustive testing of small cases [»bentJ10_1983]
  Quote: exhaustive test of binary search for n=0 to 10 catches empty array, one, two, three, powers of two, one away from power of two [»bentJ10_1983]
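A sketch of the exhaustive small-case test described in the bentJ10_1983 quotes: for every array size from 0 to 10, search for every element that is present (all successes) and for every gap value that is absent (all failures).  The binary_search() implementation is written here for illustration.

    # Exhaustive test of binary search over all sizes n = 0..10.
    def binary_search(xs, target):
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            elif xs[mid] > target:
                hi = mid - 1
            else:
                return mid
        return -1

    def test_binary_search_exhaustively():
        for n in range(11):
            xs = [2 * i for i in range(n)]                # sorted, even values
            for i, value in enumerate(xs):
                assert binary_search(xs, value) == i      # every success
            for missing in range(-1, 2 * n + 1, 2):       # every absent odd value
                assert binary_search(xs, missing) == -1   # every failure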
Subtopic: manual execution
  Quote: check sheets should test every instruction of the program and all points of difficulty [»turiA3_1951]
  Quote: stepping through a program and comparing the results to a check sheet is the quickest way to find errors [»turiA3_1951]
 
Related Topics
Topic: automated testing (25 items)
Topic: testing by voting or N-version (10 items)
 Topic: testing testing (13 items)
 Topic: statistical testing based on a usage profile (27 items)
Topic: symbolic execution (9 items)