1. Heuristic Test Strategy Model - taken from Michael Bolton's Rapid Software Testing notes
  2. Project Environment
    1. Mission
      1. Who are your customers? Whose opinions matter? Who benefits or suffers from the work you do?
      2. Do you have contact and communication with your customers? Maybe they can help you test.
      3. Maybe your customers have strong ideas about what tests you should create and run.
      4. Maybe they have conflicting expectations. You may have to help identify and resolve those.
    2. Information
      1. Whom to consult about the project
      2. Documentation
        1. Available?
        2. Up-to-date?
        3. User stories?
      3. Product history
      4. Patterns of customer complaints
      5. Comparable products
      6. Do you need to familiarize yourself with the product more, before you will know how to test it?
    3. Developer relations
      1. Feedback on test strategy
      2. Any features of the product that the developer is overconfident or underconfident about
    4. Test Team
      1. Do you know who will be testing?
      2. Are there people not on the “test team” that might be able to help?
        1. People who’ve tested similar products before and might have advice?
        2. Programmers?
        3. Application Specialists who talk to users directly?
      3. Do you have enough people with the right skills to fulfill a reasonable test strategy?
      4. Is any training needed? Is any available?
      5. Are there particular test techniques that the team has special skill or motivation to perform?
    5. Equipment and Tools
      1. Hardware: Do you have all the equipment you need to execute the tests? Is it set up and ready to go?
      2. Automation: Are any test automation tools needed? Are they available?
      3. Probes: Are any tools needed to aid in the observation of the product under test?
      4. Matrices & Checklists: Are any documents needed to track or record the progress of testing?
    6. Schedule
      1. Test Design: How much time do you have? Are there tests that are better created later rather than sooner?
      2. Test Execution: When will tests be executed? Are some tests executed repeatedly, say, for regression purposes?
      3. Development: When will builds be available for testing, features added, code frozen, etc.?
      4. Documentation: When will the user documentation be available for review?
    7. Item under test
      1. Scope: What parts of the product are and are not within the scope of your testing responsibility?
      2. Availability: Do you have the product to test?
      3. Volatility: Is the product constantly changing? What will be the need for retesting?
      4. New Stuff: What has recently been changed or added in the product?
      5. Testability: Is the product functional and reliable enough that you can effectively test it?
      6. Future Releases: What part of your tests, if any, must be designed to apply to future releases of the product?
    8. Deliverables
      1. Media: How will you record and communicate your reports?
      2. Content: What sort of reports will you have to make? Will you share your working notes, or just the end results?
  3. Oracles
    1. Familiar problems
      1. The system is not consistent with the pattern of any familiar problem.
    2. Explainability
      1. The system is consistent with our ability to describe it clearly.
    3. World
      1. The system is consistent with things that we recognize in the world.
    4. History
      1. The present version of the system is consistent with past versions of it.
    5. Image
      1. The system is consistent with an image that the organization wants to project.
    6. Comparable product
      1. The system is consistent with comparable systems.
    7. Claims
      1. The system is consistent with what important people say it's supposed to be.
    8. Users' desires
      1. The system is consistent with what users want.
    9. Purpose
      1. The system is consistent with its purposes, both explicit and implicit.
    10. Product
      1. Each element of the system is consistent with comparable elements in the same system.
    11. Standards and Statutes
      1. The system is consistent with applicable laws, or relevant implicit or explicit standards.
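Several of these consistency oracles can be mechanized. Below is a minimal Python sketch of the "comparable product" oracle, under stated assumptions: `fast_sum` is a hypothetical product function, and Python's built-in `sum` stands in for a trusted comparable implementation.

```python
# Hypothetical product function: an "optimized" sum we want to test.
def fast_sum(numbers):
    total = 0.0
    for n in numbers:
        total += n
    return total

# Oracle: the built-in sum() acts as a trusted comparable product.
def check_against_oracle(inputs):
    """Return a list of (input, got, expected) mismatches."""
    failures = []
    for data in inputs:
        got = fast_sum(data)
        expected = sum(data)
        if got != expected:
            failures.append((data, got, expected))
    return failures
```

A mismatch is a signal to investigate, not automatically a bug: the oracle itself may be wrong, which is why oracle reliability and authority matter.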
  4. Test Techniques
    1. Function Testing
      1. Test what it can do
        1. Identify things that the product can do (functions and sub-functions).
        2. Determine how you’d know if a function was capable of working.
        3. Test each function, one at a time.
        4. See that each function does what it’s supposed to do and not what it isn’t supposed to do.
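The steps above can be sketched in Python. The function under test here, `clamp`, is a hypothetical example, not something from the source:

```python
# Hypothetical function under test: clamp a value into a range.
def clamp(value, low, high):
    """Return value limited to the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# Exercise the function in isolation and confirm it does what it
# should -- and nothing it shouldn't.
def test_clamp():
    assert clamp(5, 0, 10) == 5      # in range: unchanged
    assert clamp(-3, 0, 10) == 0     # below range: pinned to low
    assert clamp(42, 0, 10) == 10    # above range: pinned to high
    assert clamp(0, 0, 10) == 0      # boundary: still valid
```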
    2. Domain Testing
      1. Divide and conquer the data
        1. Look for any data processed by the product. Look at outputs as well as inputs.
        2. Decide which particular data to test with.
          1. boundary values
          2. typical values
          3. convenient values
          4. invalid values
          5. best representatives
        3. Consider combinations of data worth testing together.
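One way to sketch the data-partitioning step: pick representatives of each equivalence class (boundary, typical, invalid) and run them all through the function under test. The validator `is_valid_age` and its 0-130 range are hypothetical illustrations:

```python
# Hypothetical input-validation function: accept integer ages 0..130.
def is_valid_age(age):
    return isinstance(age, int) and 0 <= age <= 130

# Partition the input domain and choose representatives of each class.
boundary_values = [0, 130]               # edges of the valid range
typical_values  = [7, 42, 99]            # ordinary valid members
invalid_values  = [-1, 131, 3.5, "40"]   # just outside, or wrong type

for v in boundary_values + typical_values:
    assert is_valid_age(v), v
for v in invalid_values:
    assert not is_valid_age(v), v
```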
    3. Stress Testing
      1. Overwhelm the product
        1. Look for sub-systems and functions that are vulnerable to being overloaded or “broken” in the presence of challenging data or constrained resources.
        2. Identify data and resources related to those sub-systems and functions.
        3. Select or generate challenging data or resource-constraint conditions to test with:
          1. large or complex data structures
          2. high loads
          3. long test runs
          4. many test cases
          5. low memory conditions
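A small sketch of the "large or complex data" idea, using a hypothetical `sorted_unique` as the function under stress: feed it a large, highly repetitive data set and check that its invariants still hold at scale.

```python
import random

# Hypothetical function under stress: a de-duplicating sorter.
def sorted_unique(items):
    return sorted(set(items))

# Challenge it with a million repetitive values.
random.seed(0)
big_input = [random.randint(0, 1000) for _ in range(1_000_000)]
result = sorted_unique(big_input)

assert result == sorted(result)           # still ordered
assert len(result) == len(set(result))    # still duplicate-free
assert set(result) == set(big_input)      # nothing lost or invented
```

Real stress tests also watch resource consumption (memory, handles, time) while the load is applied, not just the final answer.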
    4. Flow Testing
      1. Do one thing after another
        1. Define test procedures or high level cases that incorporate multiple activities connected end-to-end.
        2. Don’t reset the system between tests.
        3. Vary timing and sequencing, and try parallel threads.
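A minimal flow-testing sketch, with a hypothetical stateful `Session` class: several activities run end-to-end with no reset between steps, so state carried over from earlier steps stays in play.

```python
# Hypothetical stateful component: a session that records events.
class Session:
    def __init__(self):
        self.events = []
    def record(self, event):
        self.events.append(event)
    def undo(self):
        if self.events:
            self.events.pop()

# One continuous flow, no reset between steps.
s = Session()
s.record("login")
s.record("edit")
s.undo()              # backtrack mid-flow
s.record("save")
s.record("logout")
assert s.events == ["login", "save", "logout"]
```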
    5. Scenario Testing
      1. Test to a compelling story
        1. Think about everything going on around the product.
        2. Design tests that involve meaningful and complex interactions with the product.
        3. Personas
          1. Individual contributors
          2. Analysts
          3. Managers
          4. System admins
        4. Activity patterns
          1. Tug of war; contention. Multiple users resetting the same values on the same objects.
          2. Interruptions; aborts; backtracking. Unfinished activities are a normal occurrence in work environments that are full of distractions.
          3. Object lifecycle. Create some entity, such as a task or project or view, change it, evolve it, then delete it.
          4. Long period activities. Transactions that take a long time to play out, or involve events that occur predictably, but infrequently, such as system maintenance.
          5. Function interactions. Make the features of the product work together.
          6. Personas. Imagine stereotypical users and design scenarios from their viewpoint.
          7. Mirror the competition. Do things that duplicate the behaviors or effects of competing products.
          8. Learning curve. Do things more likely to be done by people just learning the product.
          9. Oops. Make realistic mistakes. Screw up in ways that distracted, busy people do.
          10. Industrial Data. Use high complexity project data.
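The "object lifecycle" pattern above can be sketched as a scenario test. `TaskStore` is a hypothetical in-memory repository invented for illustration; the point is walking one entity through its whole life and checking each stage:

```python
# Hypothetical store for the object-lifecycle pattern:
# create an entity, change it, then delete it.
class TaskStore:
    def __init__(self):
        self._tasks = {}
        self._next_id = 1
    def create(self, title):
        task_id = self._next_id
        self._next_id += 1
        self._tasks[task_id] = {"title": title, "done": False}
        return task_id
    def update(self, task_id, **fields):
        self._tasks[task_id].update(fields)
    def delete(self, task_id):
        del self._tasks[task_id]
    def get(self, task_id):
        return self._tasks.get(task_id)

# Walk one entity through create -> change -> delete.
store = TaskStore()
tid = store.create("write report")
assert store.get(tid) == {"title": "write report", "done": False}
store.update(tid, done=True)
assert store.get(tid)["done"] is True
store.delete(tid)
assert store.get(tid) is None
```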
    6. Claims Testing
      1. Verify every claim
        1. Identify reference materials that include claims about the product (implicit or explicit).
        2. Analyze individual claims, and clarify vague claims.
        3. Verify that each claim about the product is true.
        4. If you’re testing from an explicit specification, expect it and the product to be brought into alignment.
    7. User Testing
      1. Involve the users
        1. Identify categories and roles of users.
        2. Determine what each category of user will do (use cases), how they will do it, and what they value.
        3. Get real user data, or bring real users in to test.
        4. Otherwise, systematically simulate a user (be careful—it’s easy to think you’re like a user even when you’re not).
        5. Powerful user testing is that which involves a variety of users and user roles, not just one.
    8. Risk Testing
      1. Imagine a problem, then look for it.
        1. What kinds of problems could the product have?
        2. Which kinds matter most? Focus on those.
        3. How would you detect them if they were there?
        4. Make a list of interesting problems and design tests specifically to reveal them.
        5. It may help to consult experts, design documentation, past bug reports, or apply risk heuristics.
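To make the "imagine a problem, then look for it" loop concrete: suppose the imagined risk is a crash on empty input. The hypothetical `average` function below shows tests aimed squarely at that risk, alongside a sanity check on the normal path:

```python
# Imagined risk: division by zero when the data set is empty.
# Hypothetical function with a guard we want to probe.
def average(values):
    if not values:
        return None      # chosen behavior for the empty case
    return sum(values) / len(values)

# Tests designed specifically to reveal the imagined problem.
assert average([]) is None          # empty input must not crash
assert average([2, 4]) == 3.0       # sanity check on the normal path
assert average([0, 0, 0]) == 0.0    # all-zero data is not "empty"
```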
    9. Automatic Checking
      1. Run a million different tests
        1. Look for opportunities to automatically generate a lot of tests.
        2. Develop an automated, high speed evaluation mechanism.
        3. Write a program to generate, execute, and evaluate the tests.
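The three steps above can be sketched in a few lines of Python. The function under test, `midpoint`, is hypothetical; the evaluation mechanism here is a simple property the result must satisfy (the midpoint lies between its arguments):

```python
import random

# Hypothetical function under test: integer midpoint.
def midpoint(a, b):
    return (a + b) // 2

# Generate many random cases, execute them, and evaluate each
# result automatically against a property it must satisfy.
random.seed(1)
failures = []
for _ in range(100_000):
    a = random.randint(-10**9, 10**9)
    b = random.randint(-10**9, 10**9)
    m = midpoint(a, b)
    lo, hi = min(a, b), max(a, b)
    if not (lo <= m <= hi):
        failures.append((a, b, m))

assert failures == []
```

Property-based testing tools automate exactly this pattern, adding input shrinking and richer data generators.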
  5. Quality Criteria
    1. Operational Criteria
      1. Capability
        1. Can it perform the required functions?
      2. Reliability
        1. Will it work well and resist failure in all required situations?
          1. Data Integrity: the data in the system is protected from loss or corruption.
          2. Error handling: the product resists failure in the case of errors, is graceful when it fails, and recovers readily.
          3. Safety: the product will not fail in such a way as to harm life or property.
      3. Usability
        1. How easy is it for a real user to use the product?
          1. Learnability: the operation of the product can be rapidly mastered by the intended user.
          2. Operability: the product can be operated with minimum effort and fuss.
          3. Accessibility: the product meets relevant accessibility standards and works with O/S accessibility features.
      4. Security
        1. How well is the product protected against unauthorized use or intrusion?
          1. Security holes: the ways in which the system cannot enforce security (e.g. social engineering vulnerabilities)
          2. Authorization: the rights that are granted to authenticated users at varying privilege levels.
          3. Authentication: the ways in which the system verifies that a user is who she says she is.
          4. Privacy: the ways in which customer or employee data is protected from unauthorized people.
      5. Scalability
        1. How well does the deployment of the product scale up or down?
      6. Performance
        1. How speedy and responsive is it?
      7. Installability
        1. How easily can it be installed onto its target platform(s)?
          1. Upgrades: Can new modules or versions be added easily? Do they respect the existing configuration?
          2. Uninstallation: When the product is uninstalled, is it removed cleanly?
          3. Configuration: What parts of the system are affected by installation? Where are files and resources stored?
          4. System requirements: Does the product recognize if some necessary component is missing or insufficient?
      8. Compatibility
        1. How well does it work with external components & configurations?
          1. Resource Usage: the product doesn’t unnecessarily hog memory, storage, or other system resources.
          2. Backward Compatibility: the product works with earlier versions of itself.
          3. Hardware Compatibility: the product works with particular hardware components and configurations.
          4. Operating System Compatibility: the product works with a particular operating system.
          5. Application Compatibility: the product works in conjunction with other software products.
    2. Development Criteria
      1. Supportability
        1. How economical will it be to provide support to users of the product?
      2. Testability
        1. How effectively can the product be tested?
          1. Controllability
          2. Observability
          3. Availability
          4. Simplicity
          5. Stability
          6. Information
        2. Types
          1. Project-related
            1. Change Control. Frequent and disruptive change requires retesting and invalidates our existing product knowledge. Careful change control helps the product to evolve in testable stages.
            2. Information Availability. We get all information we want or need to test well.
            3. Tool Availability. We are provided all tools we want or need to test well.
            4. Test Item Availability. We can access and interact with all relevant versions of the product.
            5. Sandboxing. We are free to do any testing worth doing (perhaps including mutation or destructive testing), without fear of disrupting users, other testers, or the development process.
            6. Environmental Controllability. We can control all potentially relevant experimental variables in the environment surrounding our tests.
            7. Time. Having too little time destroys testability. We require time to think, prepare, and cope with surprises.
          2. Value-related
            1. Oracle Availability. We need ways to detect each kind of problem that is worth looking for. A well-written specification is one example of such an oracle, but there are lots of other kinds of oracles (including people and tools) that may help.
            2. Oracle Authority. We benefit from oracles that identify problems that will be considered important.
            3. Oracle Reliability. We benefit from oracles that can be trusted to work over time and in many conditions.
            4. Oracle Precision. We benefit from oracles that facilitate identification of specific problems.
            5. Oracle Inexpensiveness. We benefit from oracles that don’t require much cost or effort to acquire or operate.
            6. User Stability & Unity. The less users change and the less variety and discord among users, the easier the testing.
            7. User Familiarity. The more we understand and identify with users, the easier it is to test for them.
            8. User Availability. The more we can talk to and observe users, the easier it is to test for them.
            9. User Data Availability. The more access we have to natural data, the easier it is to test.
            10. User Environment Availability. Access to natural usage environments improves testing.
            11. User Environment Stability & Unity. The less user environments and platforms change and the fewer of them there are, the easier it is to test.
          3. Epistemic
            1. Prior Knowledge of Quality. If we already know a lot about a product, we don’t need to do as much testing.
            2. Tolerance for Failure. The less quality required, or the more risk that can be taken, the less testing is needed.
          4. Subjective
            1. Product Knowledge. Knowing a lot about the product, including how it works internally, profoundly improves our ability to test it. If we don't know about the product, testing with an exploratory approach helps us to learn rapidly.
            2. Technical Knowledge. Ability to program, knowledge of underlying technology and applicable tools, and an understanding of the dynamics of software development generally, though not in every sense, makes testing easier for us.
            3. Domain Knowledge. The more we know about the users and their problems, the better we can test.
            4. Testing Skill. Our ability to test in general obviously makes testing easier. Relevant aspects of testing skill include experiment design, modeling, product element factoring, critical thinking, and test framing.
            5. Engagement. Testing is easier when a tester is closer to the development process, communicating and collaborating well with the rest of the team. When testers are held away from development, test efficiency suffers terribly.
            6. Helpers. Testing is easier when we have help. A “helper” is anyone who does not consider himself responsible for testing the product, and yet does testing or performs some useful service for the responsible testers.
            7. Test Strategy. A well-designed test strategy may profoundly reduce the cost and effort of testing.
          5. Intrinsic
            1. Observability. To test, we must be able to see the product. Ideally we want a completely transparent product, where every fact about its states and behavior, including the history of those facts, is readily available to us.
            2. Controllability. To test, we must be able to visit the behavior of the product. Ideally we can provide any possible input and invoke any possible state, combination of states, or sequence of states on demand, easily and immediately.
            3. Algorithmic Simplicity. To test, we must be able to visit and assess the relationships between inputs and outputs. The more complex and sensitive the behavior of the product, the more we will need to look at.
            4. Unbugginess. Bugs slow down testing because we must stop and report them, or work around them, or in the case of blocking bugs, wait until they get fixed. It’s easiest to test when there are no bugs.
            5. Smallness. The less there is of a product, the less we have to look at and the less chance of bugs due to interactions among product components.
            6. Decomposability. When different parts of a product can be separated from each other, we have an easier time focusing our testing, investigating bugs, and retesting after changes.
            7. Similarity (to known and trusted technology). The more a product is like other products we already know, the easier it is to test it. If the product shares substantial code with a trusted product, or is based on a trusted framework, that’s especially good.
      3. Maintainability
        1. How economical is it to build, fix or enhance the product?
      4. Portability
        1. How economical will it be to port or reuse the technology elsewhere?
      5. Localizability
        1. How economical will it be to adapt the product for other places?
        2. Regulations: Are there different regulatory or reporting requirements over state or national borders?
        3. Language: Can the product adapt easily to longer messages, right-to-left, or ideogrammatic script?
        4. Money: Must the product be able to support multiple currencies? Currency exchange?
        5. Social or cultural differences: Might the customer find cultural references confusing or insulting?
  6. Product Elements
    1. Structure
      1. Everything that comprises the physical product.
        1. Collateral: anything beyond software and hardware that is also part of the product, such as paper documents, web links and content, packaging, license agreements, etc.
        2. Non-executable files: any files other than multimedia or programs, like text files, sample data, or help files.
        3. Hardware: any hardware component that is integral to the product.
        4. Interfaces: points of connection and communication between sub-systems.
        5. Code: the code structures that comprise the product, from executables to individual routines.
    2. Functions
      1. Everything that the product does.
        1. Testability: any functions provided to help test the product, such as diagnostics, log files, asserts, test menus, etc.
        2. Interactions: any interactions or interfaces between functions within the product.
        3. Error Handling: any functions that detect and recover from errors, including all error messages.
        4. Multimedia: sounds, bitmaps, videos, or any graphical display embedded in the product.
        5. Startup/Shutdown: each method and interface for invocation and initialization as well as exiting the product.
        6. Time Related
          1. time-out settings
          2. daily or month-end reports
          3. nightly batch jobs
          4. time zones
          5. business holidays
          6. interest calculations
          7. terms and warranty periods
          8. chronograph functions.
        7. Calculation: any arithmetic function or arithmetic operations embedded in other functions.
        8. Application: any function that defines or distinguishes the product or fulfills core requirements.
        9. Transformations: functions that modify or transform something (e.g. setting fonts, inserting clip art, withdrawing money from account).
        10. System Interface: any functions that exchange data with something other than the user, such as with other programs, hard disk, network, printer, etc.
        11. User Interface: any functions that mediate the exchange of data with the user (e.g. navigation, display, data entry).
    3. Data
      1. Everything that the product processes.
        1. Input: any data that is processed by the product.
        2. Output: any data that results from processing by the product.
        3. Preset: any data that is supplied as part of the product, or otherwise built into it, such as prefabricated databases, default values, etc.
        4. Persistent: any data that is stored internally and expected to persist over multiple operations. This includes modes or states of the product, such as options settings, view modes, contents of documents, etc.
        5. Sequences/Combinations: any ordering or permutation of data, e.g. word order, sorted vs. unsorted data, order of tests.
        6. Big/Little: variations in the size and aggregation of data.
        7. Noise: any data or state that is invalid, corrupted, or produced in an uncontrolled or incorrect fashion.
        8. Lifecycle: transformations over the lifetime of a data entity as it is created, accessed, modified, and deleted.
    4. Interfaces
      1. User Interfaces: any element that mediates the exchange of data with the user (e.g. navigation, display, data entry).
      2. System Interfaces: any element that exchanges data with something other than the user, such as with other programs, hard disk, network, printer, etc.
      3. API: Any programmatic interfaces or tools intended to allow the development of new applications using this product.
      4. Import/export: any functions that package data for use by a different product, or interpret data from a different product.
    5. Platform
      1. Everything on which the product depends (and that is outside your project).
        1. Internal Components: libraries and other components that are embedded in your product but are produced outside your project. Since you don’t control them, you must determine what to do in case they fail.
        2. External Software: software components and configurations that are not a part of the shipping product, but are required (or optional) in order for the product to work: operating systems, concurrently executing applications, drivers, fonts, etc.
        3. External Hardware: hardware components and configurations that are not part of the shipping product, but are required (or optional) in order for the product to work: CPUs, memory, keyboards, peripheral boards, etc.
    6. Operations
      1. How the product will be used.
        1. Extreme Use: challenging patterns and sequences of input that are consistent with the intended use of the product.
        2. Disfavored Use: patterns of input produced by ignorant, mistaken, careless or malicious use.
        3. Common Use: patterns and sequences of input that the product will typically encounter. This varies by user.
        4. Environment: the physical environment in which the product operates, including such elements as noise, light, and distractions.
        5. Users: the attributes of the various kinds of users.
    7. Time
      1. Any relationship between the product and time.
        1. Concurrency: more than one thing happening at once (multi-user, time-sharing, threads, and semaphores, shared data).
        2. Changing Rates: speeding up and slowing down (spikes, bursts, hangs, bottlenecks, interruptions).
        3. Fast/Slow: testing with “fast” or “slow” input; fastest and slowest; combinations of fast and slow.
        4. Input/Output: when input is provided, when output created, and any timing relationships (delays, intervals, etc.) among them.
        5. Time zones
        6. Units of measurement scale