
Group: testing

Summary

Testing is an important phase of software development, and it is itself split into three phases. The first is testing of newly implemented programs by the programmers themselves, working from program listings. Many errors are found and many modifications are made; patching used to be useful at this stage. The first phase ends with an exhaustive test of the program code. The second phase is an independent test by a quality assurance team working from the requirements specification. Hopefully few errors are found while exercising the program through test data or a terminal. Requirements testing is time consuming because many features and combinations of features must be tested. Maintenance testing is the final phase. A program undergoes many changes during its lifespan, and each change should be tested for internal consistency and external validity. Ideally most maintenance testing is automated by a script-driven exerciser.
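
The script-driven exerciser mentioned above can be illustrated with a minimal sketch (the test-case layout and function names are invented for illustration, not taken from any cited source): stored inputs are replayed against the program and compared with expected outputs.

# Minimal sketch of a script-driven regression exerciser.
# Each test case is a (name, input, expected output) triple; the program under
# test is modeled as a plain Python function.
def program_under_test(text: str) -> str:
    # stand-in for the real program; here it just normalizes whitespace
    return " ".join(text.split())

TEST_CASES = [
    ("empty input", "", ""),
    ("single word", "testing", "testing"),
    ("extra spaces", "  a   b ", "a b"),
]

def run_regression_suite() -> bool:
    failures = 0
    for name, given, expected in TEST_CASES:
        actual = program_under_test(given)
        if actual != expected:
            failures += 1
            print(f"FAIL {name}: expected {expected!r}, got {actual!r}")
    print(f"{len(TEST_CASES) - failures}/{len(TEST_CASES)} cases passed")
    return failures == 0

if __name__ == "__main__":
    raise SystemExit(0 if run_regression_suite() else 1)

Such a script can be rerun after every maintenance change to check internal consistency before release.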

Despite thorough testing, software reliability falls below normal engineering standards. Several reasons can be identified: (1) Engineering builds on many known, previously tested components, while software is built up from a base language. (2) Engineering components have been used in many working projects, while software components such as procedures are usually used in a single project. (3) Engineering components have safety margins and give warnings of failure; software components either work or fail outright, with no warning. (4) Engineering components can be maintained on site, while software components are often not maintainable by the user. (5) Engineering components work in many different environments, while software components are usually limited to one. (cbb 5/80)

Subtopic: importance of testing

Quote: test every logical path in a program at least once; uncovers most coding errors; customers should require a certified test [»roycWW8_1970]
Quote: all software gets tested, either by you or your users; better to test now [»huntA_2000]
Quote: a reliable system must be tested extensively under realistic conditions; a strategic defense system can not be tested [»parnDL12_1985]
Quote: it is only through modifications that software becomes reliable [»parnDL12_1985]
Quote: only testing reveals discrepancies between a model and the real situation; also errors in proofs [»parnDL6_1990]
Quote: an untested program does not work [»stroB_1991]
Quote: in practice, can not design and verify a program to work the first time
Quote: NASA's Genesis spacecraft failed to deploy its drogue parachutes because the original design specified the wrong orientation for all four accelerometer switches [»nislE5_2005]

Subtopic: testing vs. debugging

Quote: testing detects errors by their effects; debugging searches for the cause of errors [»clarLA8_1983]

Subtopic: testing vs. verification

Quote: functional verification leaves 2-5 fixes/Kloc for later phases compared to 10-30 fixes/Kloc from unit testing [»cobbRH11_1990]
Quote: can simplify problem if formal verification is used for finding bugs instead of proving correctness; e.g., check cache-coherence protocol with four processors, one cache line, and two data values [»dillDL4_1996]
Quote: testing remains important with formal methods; bugs will be found in the requirements and their refinement [»boweJP4_1995]
Quote: derived test cases from formal specification; tested each precondition and postcondition; also white-box testing; only three faults during operation; acceptance testing found cosmetic faults [»tretJ9_2001]
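
A hedged sketch of deriving test cases from a specification, in the spirit of the Tretmans quote above (the isqrt specification and code are invented for illustration): each precondition and postcondition becomes its own test case.

# Specification (informal rendering, invented for illustration):
#   isqrt(n):  precondition   n >= 0
#              postcondition  r*r <= n < (r+1)*(r+1)
def isqrt(n: int) -> int:
    if n < 0:
        raise ValueError("precondition violated: n >= 0 required")
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def test_precondition_rejected():
    try:
        isqrt(-1)
        assert False, "negative input must be rejected"
    except ValueError:
        pass

def test_postcondition_holds():
    for n in (0, 1, 2, 3, 15, 16, 17, 1000):   # includes boundary values
        r = isqrt(n)
        assert r * r <= n < (r + 1) * (r + 1)

if __name__ == "__main__":
    test_precondition_rejected()
    test_postcondition_holds()
    print("specification-derived tests passed")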

Subtopic: testing vs. type checking

Quote: a simple testing strategy captures virtually all errors that a static type system would capture

Subtopic: testing vs. inspections

Quote: code inspections are two to four times more effective than formal designer testing and system testing [»russGW1_1991]
Quote: testing is better than code inspection for errors due to execution, timing, traffic, load, system interactions [»russGW1_1991]
Quote: catch 20% of bugs in code inspection; half of these from designing the test cases, half from the inspection [»yamaT11_1998]
Quote: Apache developers do not use QA testing; review code before or after checkin; most core developers review all changes [»mockA7_2002]

Subtopic: testing vs. reuse

Quote: library subroutines are known to be correct; reduces errors from the fallible human element [»wheeDJ6_1949]

Subtopic: testing and error handling

Quote: use error handling to reduce testing for rare conditions and control flow interruptions [»buhrPA9_2000]

Subtopic: trial and error

Quote: programming is a trial and error craft [»parnDL12_1985]
Quote: trial and error testing is inadequate since programs are not continuous; must prove that programs meet requirements [»dijkEW_1982]
Quote: unit verification by debugging may cause design faults when the fixed modules are combined [»cobbRH11_1990]
Quote: in most cases, the cause of a software product failure was introduced while fixing another failure [»cobbRH11_1990]

Subtopic: test case reduction

Quote: Delta Debugging to produce a minimal, failing test case; e.g., sequence of browser commands, bad html for Mozilla [»zellA2_2002]
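
The idea behind Delta Debugging can be sketched as follows (a simplified greedy variant, not Zeller's full ddmin algorithm; the failure predicate is hypothetical): repeatedly drop chunks of a failing input and keep any smaller input that still fails.

# Simplified input minimization in the spirit of Delta Debugging.
from typing import Callable, Sequence

def minimize(failing_input: Sequence, still_fails: Callable[[Sequence], bool]) -> Sequence:
    assert still_fails(failing_input), "starting input must reproduce the failure"
    current = list(failing_input)
    chunk = max(1, len(current) // 2)
    while chunk >= 1:
        reduced = False
        i = 0
        while i < len(current):
            candidate = current[:i] + current[i + chunk:]   # drop one chunk
            if candidate and still_fails(candidate):
                current = candidate        # smaller input still fails: keep it
                reduced = True
            else:
                i += chunk                 # chunk was needed: move past it
        if not reduced:
            chunk //= 2                    # try finer-grained removal
    return current

if __name__ == "__main__":
    # Hypothetical bug: the failure triggers whenever 'x' and 'y' co-occur.
    bug = lambda s: "x" in s and "y" in s
    print("".join(minimize(list("aaxbbbycc"), bug)))   # prints the minimal failing input "xy"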

Subtopic: test documentation

Quote: Hitachi catches 99.8% of bugs; documents test cases before debugging and testing [»yamaT11_1998]
Quote: Roast combines test cases with component API documentation; readable test cases with expected output; e.g., valueCheck for asserts and execMonitor for thrown errors [»hoffD7_2000]

Subtopic: testability

Quote: need design for testability since seldom execute most intraprocedural, acyclic paths [»ballT7_2000]
Quote: use debug hooks and system monitoring to help debug microprocessors; millions or billions of cycles before a bug manifests itself [»colwRP_2006]

Subtopic: continuous testing

Quote: code a little, test a little; test early, often, and automatically; bugs easier to fix [»huntA_2000]
Quote: run 10 test cases per day during code inspection, and 25 per day during machine debugging
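
As a hedged sketch of "code a little, test a little", a small test can live next to the code and run automatically on every build with Python's standard unittest runner (the add function is invented for illustration).

# A tiny unit test that can run automatically, e.g. via `python -m unittest`.
import unittest

def add(a: float, b: float) -> float:
    return a + b

class TestAdd(unittest.TestCase):
    def test_small_values(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_values(self):
        self.assertEqual(add(-2, 2), 0)

if __name__ == "__main__":
    unittest.main()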

Subtopic: testing as quality assurance

Quote: provide reference tests for abstractions; confirm that users implement the contract [»cwalK_2006]
Quote: testing should provide evidence of correctness; errors indicate a failure that may require restarting the testing process [»hennMA3_1984]
Quote: Cleanroom rejects testing's fundamental purpose: to find bugs [»beizB3_1997]
Quote: testing is facilitated in a controlled environment which is separate from the development environment [»mcgoMJ7_1983]
Quote: a tester reviews already tested code; programmers should not produce untested, slipshod code [»maguS_1993]
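
A hedged sketch of a reference test for an abstraction, as in the first quote of this subtopic (the Stack contract and class names are invented for illustration): the contract is written once as a shared test class, and each implementation is checked by subclassing it.

import unittest

class ListStack:
    """A sample implementation of the stack abstraction."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def empty(self):
        return not self._items

class StackContract:
    """Reference tests for the abstraction; mix into a TestCase per implementation."""
    make_stack = None        # set by each concrete test class

    def test_new_stack_is_empty(self):
        self.assertTrue(self.make_stack().empty())

    def test_pop_returns_last_push(self):
        s = self.make_stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

class TestListStack(StackContract, unittest.TestCase):
    make_stack = ListStack   # every implementation gets the same contract tests

if __name__ == "__main__":
    unittest.main()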

Subtopic: developers vs. users and QA

Quote: at Hitachi, development and QA use different test cases [»yamaT11_1998]
Quote: while top 1% of developers contributed 90% of added lines, only 3 core developers in top 15 problem reporters; system testing by Apache users [»mockA7_2002]
Quote: Microsoft probably has one to two testers for every developer in the company
Quote: choice of tester was a larger factor in test effectiveness than was the choice of test technique [»lautL6_1989]

Subtopic: unit vs. acceptance test

Quote: a unit test specifies module behavior while an acceptance test specifies feature behavior
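
A hedged illustration of the distinction above (the checkout feature and its helper are hypothetical): the unit test specifies one module's behavior, while the acceptance test specifies the feature as a user would see it.

# Unit test: behavior of a single module (the tax calculation).
def sales_tax(amount: float, rate: float = 0.08) -> float:
    return round(amount * rate, 2)

def test_sales_tax_unit():
    assert sales_tax(100.0) == 8.0

# Acceptance test: behavior of the checkout feature as a whole
# (items in, total with tax out), stated in customer terms.
def checkout(prices: list) -> float:
    subtotal = sum(prices)
    return round(subtotal + sales_tax(subtotal), 2)

def test_checkout_acceptance():
    assert checkout([40.0, 60.0]) == 108.0

if __name__ == "__main__":
    test_sales_tax_unit()
    test_checkout_acceptance()
    print("all tests passed")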

Subtopic: test coverage vs. usage

Quote: coverage test everything with a high embarrassment factor; e.g., every opcode and addressing mode [»colwRP_2006]
Quote: traditional coverage testing finds errors in random order, usage testing finds them in failure-rate order, 20x better for MTTF [»lingRC5_1993]
Quote: need design for testability since seldom execute most intraprocedural, acyclic paths [»ballT7_2000]
Quote: users have the right to expect that all parts of a program were adequately tested, unless notified of environmental restrictions [»weyuEJ12_1986]
Quote: axiom, for every program there is an adequate test set that is not exhaustive [»weyuEJ12_1986]
Quote: an adequate test of a program is not necessarily adequate for its components
Quote: test every logical path in a program at least once; uncovers most coding errors; customers should require a certified test [»roycWW8_1970]
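
Usage-based (statistical) testing, as contrasted with coverage testing in the Linger quote above, can be sketched as follows (the usage profile and operations are invented): test cases are drawn at random in proportion to an operational usage profile, so the most heavily used operations are exercised most often.

import random

# Hypothetical usage profile: operation -> fraction of real-world use.
USAGE_PROFILE = {"open": 0.55, "save": 0.30, "export": 0.10, "import": 0.05}

def run_operation(name: str) -> bool:
    # Stand-in for invoking the system under test; pretend 'export' is buggy.
    return name != "export"

def usage_based_test(n_cases: int = 1000, seed: int = 1) -> dict:
    rng = random.Random(seed)
    ops, weights = zip(*USAGE_PROFILE.items())
    failures = {}
    for _ in range(n_cases):
        op = rng.choices(ops, weights=weights)[0]   # sample by usage frequency
        if not run_operation(op):
            failures[op] = failures.get(op, 0) + 1
    return failures

if __name__ == "__main__":
    # e.g., 'export' fails in roughly 10% of the 1000 sampled operations
    print(usage_based_test())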

Subtopic: test cases at Hitachi

Quote: Hitachi catches 99.8% of bugs; documents test cases before debugging and testing [»yamaT11_1998]
Quote: need one test case per 10-15 LOC; can design 100 test cases per day [»yamaT11_1998]
Quote: run 10 test cases per day during code inspection, and 25 per day during machine debugging
Quote: catch 20% of bugs in code inspection; half of these from designing the test cases, half from the inspection [»yamaT11_1998]
Quote: loops are the most error-prone structure; test 0, 1, 2, average, max-1, max, and max+1 iterations [»yamaT11_1998]
Quote: 10% of test cases for boundary and limits, 15% for errors, and 15% for different platforms and performance [»yamaT11_1998]
Quote: run basic tests for 48 hours; catches memory leaks, deadlock, and time-outs [»yamaT11_1998]
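
The loop-testing heuristic quoted above (0, 1, 2, average, max-1, max, and max+1 iterations) can be illustrated with a hedged sketch; the bounded buffer and its capacity are invented for illustration.

MAX_ITEMS = 100   # hypothetical capacity of the buffer under test

def fill_buffer(items):
    """Append items into a bounded buffer; excess items are rejected."""
    buffer = []
    for item in items:
        if len(buffer) >= MAX_ITEMS:
            raise OverflowError("buffer full")
        buffer.append(item)
    return buffer

def test_loop_boundaries():
    # 0, 1, 2, average, max-1, and max iterations must all succeed.
    for n in (0, 1, 2, MAX_ITEMS // 2, MAX_ITEMS - 1, MAX_ITEMS):
        assert len(fill_buffer(range(n))) == n
    # max+1 iterations must be rejected.
    try:
        fill_buffer(range(MAX_ITEMS + 1))
        assert False, "expected OverflowError"
    except OverflowError:
        pass

if __name__ == "__main__":
    test_loop_boundaries()
    print("loop boundary tests passed")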

Subtopic: test cases

Quote: the error-based approach to testing develops tests for each possible class of errors [»howdWE1_1990]
Quote: informal verification usually consists of executing a test case by hand [»bentJ10_1983]
Quote: a programmer may consider specific examples and then try to guarantee that the program works similarly for the whole input domain [»gellM5_1978]
Quote: create tests before coding so that programmers will design out the most-probable errors [»postRM5_1987]
Quote: test subroutines with short programs constructed for the purpose [»wilkMV_1951]
Quote: axiom, for every program there is an adequate test set that is not exhaustive [»weyuEJ12_1986]

Subtopic: testing object-oriented code

Quote: if add or modify a subclass must retest all inherited methods; they have a new context [»perrDE1_1990]
Quote: encapsulation and inheritance require more testing because methods have more execution contexts; antidecomposition
Quote: with white box inheritance, need to retest all inherited methods and all superclasses [»edwaSH2_1997]
Quote: with representation inheritance, do not need to retest the superclass; as long as its representation invariant and abstraction relation are correctly maintained
Quote: an adequate test of a program is not necessarily adequate for its components
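
A hedged toy example of why inherited methods need retesting in a subclass (the account classes are invented for illustration): the inherited method total is textually unchanged, yet it calls an overridden method, so it behaves differently in the subclass's context.

# Tests of Account.total establish nothing about SavingsAccount.total,
# because total() calls the overridden interest() method.
class Account:
    def __init__(self, balance: float):
        self.balance = balance
    def interest(self) -> float:
        return 0.0
    def total(self) -> float:          # inherited as-is by subclasses
        return self.balance + self.interest()

class SavingsAccount(Account):
    def interest(self) -> float:       # override changes the context of total()
        return self.balance * 0.05

assert Account(100).total() == 100.0          # passes for the superclass
assert SavingsAccount(100).total() == 105.0   # total() must be retested here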

Subtopic: testing hardware

Quote: analog systems can be reliable because they are continuous; small input changes cause small output changes [»parnDL12_1985]
Quote: computer hardware is reliable because its repetitive structure does not need exhaustive testing [»parnDL12_1985]

Subtopic: axioms for testing

Quote: an adequate test of a program is not necessarily adequate for its components
Quote: for every program, there exists an adequate test set [»weyuEJ6_1988]
Quote: antidecomposition of testing; even though a program is adequately tested, its components may not be adequately tested [»weyuEJ6_1988]
Quote: axiom, for every program there is an adequate test set that is not exhaustive [»weyuEJ12_1986]

Subtopic: safety vs. cost

Quote: in a mature engineering discipline, 'reliable' never means 'perfect' [»demiRA_1977]
Quote: testing is the most expensive phase of software development [»roycWW8_1970]
Quote: testing has the greatest risk in cost and schedule; testing occurs last when backup alternatives are least available
Quote: reliability is only one of many desired properties; high reliability may cost too much [»gammRC10_1974]
Quote: the trade-off in structural engineering is between sufficient safety and economy [»abraP12_1986]
Quote: in designing a system, allow some unreliability to occur; perfection is too expensive [»akscRM5_1984]
Quote: constant tension between improving a design and providing stability and continuity [»lampBW10_1983]
Quote: only through failure can engineers advance the state of the art; success alone gives insufficient information [»abraP12_1986]
QuoteRef: boehBW5_1973 ;;52 45-50% of software effort went into check out and testing (6 projects)

Subtopic: desk checking

Quote: algebraic notation makes programs easy to read and mentally check; reduces programming error and increases productivity [»glenAE2_1953]
Quote: use desk checking to verify that subroutines occupy distinct locations, specifications are satisfied, overwrites do not occur, and unpreserved registers are invalid [»wilkMV_1951]

Subtopic: difficulty of testing

Quote: nothing can prove the absence of bugs; verification can not prevent failures from unforeseen causes [»abraP12_1986]
Quote: software reliability is poor because of design faults, radically new systems, and discontinuous input-to-output mappings [»littB11_1993]
Quote: no matter what is done, small mistakes with large consequences will still occur; prolonged field testing is necessary for a payment system [»andeRJ5_1996]
Quote: functional testing does not identify security flaws; need public, expert evaluation [»schnB_2000]

Group: testing

Topic: automated testing (25 items)
Topic: automated tests of specifications and designs (12 items)
Topic: consistency testing (60 items)
Topic: incremental testing (26 items)
Topic: programming without errors (28 items)
Topic: execution profile (43 items)
Topic: execution with program stubs (5 items)
Topic: model checker (49 items)
Topic: performance testing (8 items)
Topic: software review (80 items)
Topic: symbolic execution (9 items)
Topic: test data selection (39 items)
Topic: test hardware (5 items)
Topic: test scripts (13 items)
Topic: testing by program mutation (18 items)
Topic: testing by voting or N-version (10 items)
Topic: testing testing (13 items)
Topic: statistical testing based on a usage profile (27 items)

Related Topics

Group: debugging   (10 topics, 333 quotes)
Group: exception handling   (12 topics, 314 quotes)
Group: program proving   (10 topics, 311 quotes)
Group: requirement specification   (11 topics, 307 quotes)
Group: security   (23 topics, 874 quotes)
Group: software engineering   (18 topics, 470 quotes)
Group: software maintenance   (14 topics, 368 quotes)
Group: testing and evaluating user interfaces   (9 topics, 262 quotes)
Group: type checking   (12 topics, 392 quotes)

Topic: debugging by reading code (11 items)
Topic: debugging by usage rules (41 items)
Topic: error safe systems (76 items)
Topic: error messages (37 items)
Topic: handling complexity (60 items)
Topic: hardware vs. software (15 items)
Topic: management of large software projects (63 items)
Topic: program proving is infeasible (47 items)
Topic: quality assurance (22 items)
Topic: software lifecycle (13 items)
Topic: software maintenance and testing of distributed systems (16 items)
Topic: user-centered design (65 items)


Updated barberCB 3/05
Copyright © 2002-2008 by C. Bradford Barber. All rights reserved.
Thesa is a trademark of C. Bradford Barber.