Potential financial, ecological, social losses or liability
Civil or criminal legal sanctions
Safety concerns
Fines, loss of license
Lack of reasonable workarounds
Visibility of feature
Visibility of failure leading to negative publicity and potential image damage
Loss of customers
Mitigating risk
Depth-first approach
Priority order
Breadth-first approach
Testing across all areas
Defects
Test cases
Traceability
Confidence
Talking with other testers
Insourced
Outsourced
Distributed
Centralized
Advanced tester
Chosen a career path in testing by passing ISTQB Foundation Level
Demonstrated theoretical and practical skills
Experienced in testing projects
Types of systems
Systems of systems
High levels of complexity
The time and effort needed to localize a defect
More integration testing may be required
Higher management overhead
Lack of overall control
Safety critical systems
Performing explicit safety analysis as part of the risk management
Performing testing according to a predefined SDLC model, such as the V-model
Conducting failover and recovery tests to ensure that software architectures are correctly designed and implemented
Performing reliability testing to demonstrate low failure rates and high levels of availability
Taking measures to ensure that safety and security requirements are fully implemented
Showing that faults are correctly handled
Demonstrating that specific levels of test coverage have been achieved
Creating full test documentation with complete traceability between requirements and test cases
Retaining test data, results, or test environments
Food and drug industry
Space industry
Aircraft industry
Real-time and embedded systems
Specific testing techniques
Specify and perform dynamic analysis with tools
Testing infrastructure must be provided that allows embedded software to be executed and results obtained
Simulators and emulators may need to be developed and tested to be used during testing
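For illustration, a minimal host-based simulator might look like the sketch below; the temperature sensor and fan controller are hypothetical stand-ins for real target hardware and embedded logic.

```python
# Minimal sketch of a simulator used for host-based testing of embedded
# logic. SimulatedSensor and fan_controller are hypothetical examples.

class SimulatedSensor:
    """Stands in for the real sensor driver during host-based testing."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self):
        return next(self._readings)

def fan_controller(sensor, threshold=30.0):
    """Embedded logic under test: switch the fan on above the threshold."""
    return sensor.read_celsius() > threshold

sensor = SimulatedSensor([25.0, 35.0])
assert fan_controller(sensor) is False  # 25.0 C -> fan stays off
assert fan_controller(sensor) is True   # 35.0 C -> fan switches on
```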
Can fulfill the role of test analyst in a project
Never stop learning and improving
Better chances of becoming and staying employed
Technical test analyst role
Understand the technical issues and concepts in applying test automation
Recognize and classify the typical risks associated with performance, security, reliability, portability and maintainability
Create test plans that detail the planning, design and execution of tests for mitigating performance, security, reliability, portability and maintainability risks
Select and apply appropriate structural design techniques to ensure that tests provide an adequate level of confidence, based on code coverage and design coverage
Effectively participate in technical reviews with developers and software architects, applying knowledge of typical mistakes made in code and architecture
Recognize risks in code and software architecture and create test plan elements to mitigate those risks through dynamic analysis
Propose improvements to the security, maintainability and testability of code by applying static analysis
Outline the cost and benefit to be expected from introducing particular types of test automation
Select appropriate tools to automate technical testing tasks
Functionality
Security
Reliability
Maturity (robustness)
Fault tolerance
Recoverability
Compliance
Efficiency
Performance
Resource utilization
Compliance
Maintainability
Analyzability
Changeability
Stability
Testability
Compliance
Portability
Adaptability
Installability
Co-existence
Replaceability
Compliance
Test analyst role
Apply appropriate techniques to achieve the defined testing goals
Prepare and execute all necessary testing activities
Judge when testing criteria have been fulfilled
Report on progress in a concise and thorough manner
Support evaluations and reviews with evidence from testing
Implement the tools appropriate to performing the testing tasks
Structure the testing tasks required to implement the test strategy
Provide the appropriate level of documentation relevant to the testing activities
Determine the appropriate types of functional testing to be performed
Assume responsibilities for the usability testing for a given project
Select and apply appropriate testing techniques to ensure that tests provide an adequate level of confidence, based on defined coverage criteria
Determine the proper prioritization of the testing activities based on the information provided by the risk analysis
Perform the appropriate testing activities based on the SDLC being used
Effectively participate in formal and informal reviews with stakeholders, applying knowledge of typical mistakes made in work products
Design and implement a defect classification scheme
Apply tools to support an efficient testing process
Support the test manager in creating appropriate testing strategies
Perform analysis on a system in sufficient detail to permit appropriate test conditions to be identified
Test process
Planning, monitoring and control
The analyst provides information to the test manager
Project and product risk
The test manager is responsible for risk management
Monitoring - tracking the status of the project
Control - initiating change as needed
Analysis
Review test basis
Risk analysis
Design
Concrete or logical test cases
Define the objective
Determine the level of detail
What the test cases should do
Pick your target test level
Review your work products
Implementation
Organizing the tests
Deciding the level of detail
Automating the automatable
Setting up the environment
Implementing the approach
Execution
Order of execution
Logging
Evaluating exit criteria and reporting
Test closure activities
Test process activity
Fundamental test process activity
Test planning
Test control
Test analysis and design
Test environment implementation
System test execution
Evaluating exit criteria and reporting
Closure activities
V-Model testing activity
Concurrent with project planning
Throughout the project from start to completion
Concurrent with requirements specification, high-level design and low-level design
Started during system design, done during coding and component testing, concluded prior to system testing
Starts when entry criteria are met and component and integration testing are completed; continues until the exit criteria are met
Throughout the testing, more frequent as the end of project approaches
System test is concluded, exit criteria are met, may be postponed until after all testing is completed
Iterative model testing activity
At the beginning of each iteration
Done by iteration, trends tracked for the entire project
Per items designed for the particular iteration
Limited to what is needed for iteration
Starts after component testing is completed, combined with integration testing, entry criteria may not be used
Throughout the testing, at the end of each iteration and project
At the end of the project, all iterations are completed, may be postponed until all testing is completed
Involvement in SDLC
Requirements engineering and management
Reviewing the requirements
Participating in review meetings
Clarifying requirements
Verifying testability
Project management
Providing input to the schedule and specific task milestones
Configuration and change management
Reviewing release notes
Conducting build verification testing
Noting versions for defect and test case execution reporting
Software development
Planning and designing test cases
Coordinating tasks and deliverables
Software maintenance
Managing defects
Tracking defect turnaround time
Creating new test cases
Technical support
Providing accurate documentation regarding known defects and workarounds
Technical documentation
Providing input to the writers
Providing review services
Usability and accessibility testing
Usability testing
Effectiveness
Capability of the software product to enable users to achieve specified goals with accuracy and completeness in a specified context
Efficiency
Capability of the product to enable users to expend appropriate amounts of resources in relation to the effectiveness achieved in a specified context of use
Satisfaction
Ability to satisfy the user in a particular context of use
Formative
Helping develop the interface during design
Detection and removal of defects
Summative
Identify usability problems after implementation
Testing of requirements
Accessibility testing
Test process
Planning issues
Test design
Designing for the user
Considerations for usability tests
Verification
Did we build the product right
Validation
Did we build the right product
Syntax
The structure or grammar of the interface
Semantics
Reasonable and meaningful messages and output
Information transfer
Specifying usability tests
Inspecting, evaluating, reviewing
Interacting with prototypes
Verifying and validating the implementation
Conducting surveys and questionnaires
SUMI (Software Usability Measurement Inventory)
Brief questionnaire filled in by the user
Software questionnaire from user perspective
WAMMI (Website Analysis and MeasureMent Inventory)
Standardized publicly available usability survey
Web questionnaire from user perspective
ISO 9126
Attractiveness
Learnability
Operability
Understandability
Compliance
Functional testing
Accuracy
What the software should do
Suitability
Verify that a set of functions is appropriate for its set of intended specified tasks
Interoperability
Verify that the SUT will function correctly in all the intended target environments
Compliance
Test tools
Test design tools
Help us to create test cases
Data tools
Analyze requirements and generate data to test
Anonymizing the data
Creating data from a set of input parameters
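A minimal sketch of both tasks, assuming hypothetical field names and value sets:

```python
# Sketch of two typical test data tool tasks: anonymizing records and
# generating data from a set of input parameters. All names are illustrative.
import hashlib
import itertools

def anonymize(record):
    """Replace a personally identifying field with a stable pseudonym."""
    digest = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    return {**record, "name": f"user_{digest}"}

print(anonymize({"name": "Alice Smith", "balance": 100}))

# Generate test data from input parameters (full cartesian product here).
currencies = ["EUR", "USD"]
amounts = [0, 1, 9999]
for currency, amount in itertools.product(currencies, amounts):
    print({"currency": currency, "amount": amount})
```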
Database tools
Test execution tools
Reduce the cost of repeated executions of the same tests
Better coverage of the software than would be possible with only manual testing
Execution of the same tests in many environments or configurations with no additional development effort
The ability to test facets of software that would be impossible to test with only manual testing
Data-driven automation
Data
Scripts
Keyword-driven automation
Keywords (Action words)
Data
Scripts
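The two approaches can be contrasted in a short sketch; the login action, the data rows and the keyword engine below are all hypothetical:

```python
# Sketch contrasting data-driven and keyword-driven automation.

def login(user, password):  # the action both styles drive (illustrative)
    return (user, password) == ("alice", "s3cret")

# Data-driven: one fixed script, many data rows.
rows = [("alice", "s3cret", True), ("bob", "wrong", False)]
for user, password, expected in rows:
    assert login(user, password) == expected

# Keyword-driven: tests are sequences of (keyword, args, expected)
# interpreted by a small engine, so non-programmers can compose them.
keywords = {"login": login}

def run(test):
    for keyword, args, expected in test:
        assert keywords[keyword](*args) == expected

run([("login", ("alice", "s3cret"), True),
     ("login", ("bob", "wrong"), False)])
print("all keyword tests passed")
```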
Principal points of automation
Automation won't solve all testing problems
A test automation project is like any other development project
No point in buying an expensive automation tool if we won't use its capabilities
Automation fails for many reasons (bad organization, politics, unrealistic expectations, no management backing)
Good automation requires strong technical skills and domain knowledge
First design tests, then find tools to support them
Avoid automating tests that are human-centric
Risks
Automating bad tests
When the software changes, the automation must change
Automation can't catch all defects
Checklist what to automate
How often do we need to execute the test case
Are there procedural aspects that can easily be automated
Is partial automation a better approach
Do we have the required details to enable test automation
Do we have an automation concept; should we automate the smoke tests
Should we automate regression testing
How much change are we expecting
What are objectives of test automation (lower cost)
Benefits
Test execution time should be more predictable
Regression testing will be faster and more reliable
The status of the team should improve
Test automation can help when repetition of regression testing is needed
Some testing is only possible with automation
Test automation is more cost effective than doing testing manually
Testing techniques
Specification-based techniques
Equivalence partitioning
Grouping the test conditions into partitions that will be handled the same way
Used for data handling
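A minimal sketch, assuming a hypothetical age field that accepts 18-65; one representative value per partition is enough because all members are assumed to be handled identically:

```python
# Equivalence partitioning sketch for a hypothetical age field (valid: 18-65).

partitions = {
    "invalid_below": range(0, 18),    # any value < 18 handled the same way
    "valid":         range(18, 66),   # accepted values
    "invalid_above": range(66, 120),  # any value > 65 handled the same way
}

def representative(partition):
    """Pick one mid-partition value to stand for the whole partition."""
    values = list(partition)
    return values[len(values) // 2]

for name, partition in partitions.items():
    print(name, "->", representative(partition))
```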
Boundary value analysis
Defining and testing for the boundaries of the partitions
Displacement or omission of boundaries and occasional extra boundary
Two-value boundary testing
Three-value boundary testing
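For the same hypothetical 18-65 range, the two-value and three-value boundary sets would be:

```python
# Boundary value analysis sketch for the hypothetical 18-65 range.
LOWER, UPPER = 18, 65

# Two-value boundary testing: each boundary and its nearest invalid neighbor.
two_value = [LOWER - 1, LOWER, UPPER, UPPER + 1]

# Three-value boundary testing: just below, on, and just above each boundary.
three_value = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

print("2-value:", two_value)    # [17, 18, 65, 66]
print("3-value:", three_value)  # [17, 18, 19, 64, 65, 66]
```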
Decision tables
Defining and testing for combinations of conditions using a tabular model
Incorrect processing resulting from combination of interacting conditions
Most frequently used condition goes first
Number of columns (2^n for n boolean conditions)
Collapsing decision table - looking for conditions that result in the same action
Collapsing decision table - replace conditions which don't affect the outcome with a dash and remove the duplicate columns
Used when conditions that exist at a given moment in time for a single transaction are sufficient by themselves to determine the actions
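A sketch of a full and a collapsed table for a hypothetical loan approval rule with two boolean conditions:

```python
# Decision table sketch: conditions (has_income, good_credit), action "approve".
import itertools

def approve(has_income, good_credit):
    return has_income and good_credit  # the rule under test (illustrative)

# Full table: one column per combination of conditions (2^n columns).
for has_income, good_credit in itertools.product([True, False], repeat=2):
    print(f"income={has_income} credit={good_credit} "
          f"-> approve={approve(has_income, good_credit)}")

# Collapsed table: when has_income is False, credit doesn't affect the
# outcome, so its two columns merge and the condition becomes "-".
collapsed = [
    ("T", "T", "approve"),
    ("T", "F", "reject"),
    ("F", "-", "reject"),  # credit is irrelevant here
]
for column in collapsed:
    print(column)
```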
Cause-effect graphing
Defining and testing for combinations of conditions using a graphical model
Incorrect processing resulting from combination of interacting conditions
Combinations of conditions that cause an effect (causality)
Combinations of conditions that exclude a particular result (not)
Combinations of conditions that have to be true to cause a particular result (and)
Alternative combinations that can be true to cause a particular result (or)
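The causality operators can be illustrated by encoding a hypothetical rule as a boolean expression and enumerating all cause combinations:

```python
# Cause-effect sketch: causes c1..c3 and the rule itself are hypothetical.
import itertools

def effect(c1, c2, c3):
    # and: c1 and c2 together cause the effect; or: c3 alone is sufficient;
    # not: c1 combined with c3 suppresses the effect.
    return ((c1 and c2) or c3) and not (c1 and c3)

# Enumerate all cause combinations to derive test cases from the graph.
for causes in itertools.product([False, True], repeat=3):
    print(causes, "->", effect(*causes))
```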
State transition testing
Identifying all the valid states and transitions that must be tested
Incorrect processing in the current state based on previous processing; incorrect or unsupported transitions; states without exits; missing states and transitions
0-switch coverage
1-switch coverage
N-1 switch coverage (Chow's coverage measure)
State transition table (Start state, Event, Effect, End state, Transition)
State transition table with 1-switch coverage (Test case, Start state, Switch state, End state)
Current state, event/condition, action, new state
We must refer to what conditions have existed in the past
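A sketch of a state transition table and the resulting 0-switch and 1-switch test sets, using a hypothetical login workflow:

```python
# State transition sketch. Keys are (current state, event);
# values are (action, new state). The login workflow is hypothetical.
transitions = {
    ("logged_out", "login_ok"):   ("open session",  "logged_in"),
    ("logged_out", "login_fail"): ("show error",    "logged_out"),
    ("logged_in",  "logout"):     ("close session", "logged_out"),
    ("logged_in",  "timeout"):    ("expire",        "logged_out"),
}

# 0-switch coverage: every single transition is exercised once.
zero_switch = list(transitions)

# 1-switch coverage: every valid pair of consecutive transitions.
one_switch = [
    (t1, t2)
    for t1 in transitions for t2 in transitions
    if transitions[t1][1] == t2[0]  # end state of t1 is start state of t2
]
print(len(zero_switch), "single transitions,", len(one_switch), "pairs")
```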
Combinatorial testing
Determining the combinations of configurations to be tested
Incorrect handling of combinations and discovery of combinations that interact when they should not
Pairwise
Orthogonal Arrays
Every parameter is compared with every parameter in the neighboring column
All-pairs
All-pairs possible
All-pairs for m and n
All triples possible
Each option represented in at least one configuration
Classification trees
Singleton coverage
Two-wise (pairs) coverage
Every pairing of each option
Three-wise (triples) coverage
Advantage is visualization
Input parameter model
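A greedy all-pairs sketch with illustrative browser/OS/locale parameters; pairwise coverage needs noticeably fewer configurations than the full cartesian product:

```python
# Greedy pairwise (all-pairs) sketch: keep choosing the configuration that
# covers the most uncovered value pairs. Parameters are illustrative.
import itertools

params = {
    "browser": ["chrome", "firefox", "safari"],
    "os":      ["windows", "linux"],
    "locale":  ["en", "de"],
}
names = list(params)

def pairs(config):
    """All (parameter, value) pairs contained in one configuration."""
    return set(itertools.combinations(zip(names, config), 2))

uncovered = set()
for combo in itertools.product(*params.values()):
    uncovered |= pairs(combo)

chosen = []
while uncovered:
    best = max(itertools.product(*params.values()),
               key=lambda c: len(pairs(c) & uncovered))
    chosen.append(best)
    uncovered -= pairs(best)

print(len(chosen), "configurations instead of",
      len(list(itertools.product(*params.values()))))
```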
Use case testing
Determining usage scenarios and testing accordingly
Use cases - workflows are independent of each other
User story testing
Determining small pieces of functionality for implementation and testing when using an Agile approach
Failure to provide defined functionality
Acceptance criteria
A user story is a concisely expressed use case
Domain analysis
A combination of equivalence partitioning, boundary value analysis, and decision tables used to define tests for simple or complex sets of values from multiple variables
OFF - value that is just off the boundary of the partition
Minimum coverage is at least one test case defined for each condition
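A sketch of ON/OFF/IN/OUT points for a single hypothetical boundary x >= 18:

```python
# Domain analysis sketch for the hypothetical condition x >= 18.
BOUNDARY = 18

points = [
    ("ON",  BOUNDARY),      # on the boundary itself
    ("OFF", BOUNDARY - 1),  # just off the boundary, in the other partition
    ("IN",  40),            # well inside the valid domain
    ("OUT", 5),             # well outside the valid domain
]

for label, x in points:
    print(f"{label:3} x={x:3} valid={x >= BOUNDARY}")
```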
Defect-based testing techniques
Beizer's defect taxonomies
Requirements
Requirements incorrect
Requirements logic faulty
Requirements incomplete
Requirements not verifiable
Presentation and documentation
Requirements changes
Features and Functionality
Feature or function incorrect
Feature incomplete
Functional case incomplete
Domain defects
User messages and diagnostics
Exception conditions mishandled
Structural defects
Defects in control flow and structure
Processing
Data
Data definition and structure
Data access and handling
Implementation and coding
Coding and typing faults
Violation of style guidelines and standards
Poor documentation
Integration
Internal interfaces
External interfaces, timing, throughput
System and software architecture
Operating system calls and use
Software architecture
Recovery and accountability
Performance
Incorrect diagnostics and exceptions
Partitions and overlays
Environment
Test definition and execution
Test design defects
Test execution defects
Poor test documentation
Incomplete test cases
IEEE Std 1044-1993
Correct input not accepted
Wrong input accepted
Description incorrect or missing
Parameters incomplete or missing
Taxonomy may serve as checklist to be used during testing without the subsequent creation of detailed test cases
Experience-based testing techniques
Error guessing
Guessing errors based on experience and knowledge and testing for those errors
Defects that might have been missed with specification-based testing and have been found in a defect taxonomy or are guessed by the tester
A fault attack is a structured approach to error guessing - enumerate a list of possible defects and design tests that attack these defects
Checklist-based testing
Defining a high-level reminder checklist of the features and characteristics to be covered in testing and then testing to that list
Defects that have been missed by more formal techniques and can be found by varying the test pre-conditions, the test data used or the general approach
Exploratory testing
Simultaneously learning about the software; planning, designing and executing the tests; and documenting the results
Serious defects that are apparent while testing scenarios rather than targeting specific functional capabilities
Reviews
Planning
Understanding the review process, training the reviewers, getting management support
Kick-off
Having the initial meetings so that everyone understands what they are supposed to do
Individual preparation
Read the work product and prepare comments, or just provide reactive comments
Review meeting
Conducting the meeting; possible outcomes: no changes or only minor changes needed; changes are required but a further review isn't necessary; major changes are required and a further review is necessary
Rework
The author makes the required changes to the work product after the review
Follow-up
A re-review of changes may be required; look at the efficiency of the review and gather suggestions for improvement
Checklist for reviews
Checklist for requirements reviews
Is each requirement testable
Are there specific acceptance criteria associated with each requirement
Is there a calling structure specified for the use cases
Is there a unique identification for each stated requirement
Does each requirement have a version assigned to it
Is there traceability from each requirement to its source (higher-level requirement or business requirement)
Is there traceability between the stated requirements and the use cases
Is each requirement clear
Is each requirement unambiguous
Does each requirement contain only a single item of testable functionality
Checklist for use case reviews
Is the main path clearly defined
Are all alternative paths (scenarios) identified, complete with error handling
Are the user interface messages defined
Is there only one main path or does the use case definition combine multiple cases into one
Is each path testable
Does this use case call other use cases
Is this use case called by other use cases
What is the expected frequency of use for this use case
What are the types of users who will use this use case
Checklist for usability reviews
Is each field and its function clearly defined
Are all error messages defined
Are all user prompts defined and consistent
Is the tab order of the fields defined
Are there keyboard alternatives to mouse actions
Are there shortcut keys defined for the user
Are there dependencies between the fields
Is there a screen layout
Does the screen layout match the specified requirements
Is there an indicator for the user that appears when the system is processing
Does the screen meet the minimum mouse click requirement
Does the navigation flow logically for the user based on use case information
Does the screen meet any requirements for learnability
Is there any help text available for the user
Is there any hover message available to the user
Will the user consider the user interface to be attractive
Is the use of colors consistent with other applications and with organization standards
Are sound effects used appropriately and are they configurable
Does the screen meet localization requirements
Can the user determine what to do
Will the user be able to remember what to do
Are there usability standards that must be met
Are there accessibility requirements that must be met
Checklist for user story reviews
Is the story appropriate for the target iteration/sprint
Are the acceptance criteria defined and testable
Is the functionality clearly defined
Are there any dependencies between this story and others
Is the story prioritized
Does the story contain a single item of functionality
Is a framework or harness required for this story
Who will provide the harness
Checklist for success
Follow the defined review process
Keep good metrics regarding time spent, defects found, costs saved and efficiency gained
Review documents as soon as it's efficient to do so
Use a checklist when conducting the review and record metrics while the review is in progress
Use different type of reviews on the same work item if needed
Focus on the most important problems
Ensure that adequate time is allocated for preparation, conducting and rework for the review
Time and budget shouldn't be allocated based on the number of defects found
Make sure the right people are reviewing the right work, and that everyone both reviews and is reviewed
Reviews should be conducted in a positive, blame-free and constructive environment
Keep a focus on continuous improvement
How to make reviews effective
Right work product
Conducting review at the right time in the project
Effective review based on the type selected
People with knowledge and experience
A trained team that is receptive to the review process
Defects found in review are tracked and resolved
The test manager is responsible for coordinating training, sustaining an effective review program, and planning and follow-up activities
Conduct reviews as soon as we have a document which describes the project requirements
Decision makers, project stakeholders and customers are involved; managers aren't
Single most effective way to improve software quality
Defect
Failure
Metric and reporting
Defect density analysis
More testing effort on defect clusters
Found vs. Fixed metric
Do we have an efficient bug life cycle
Convergence metrics
Open vs closed issues should converge
Phase containment diagram
Where problems are being introduced and where they are found
Is our defect information objective
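A sketch of how found-vs-fixed and convergence can be tracked, using an illustrative weekly defect log:

```python
# Convergence sketch: cumulative found vs. fixed defects per week.
# The counts are illustrative, not real project data.
found = [12, 20, 26, 29, 30]  # cumulative defects found
fixed = [5, 14, 22, 28, 30]   # cumulative defects fixed

for week, (n_found, n_fixed) in enumerate(zip(found, fixed), start=1):
    print(f"week {week}: found={n_found} fixed={n_fixed} "
          f"open={n_found - n_fixed}")
# An open count trending to zero shows found and fixed converging,
# one sign that the product may be approaching release readiness.
```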
Root cause of defects
Unclear requirements
Missing requirements
Wrong requirements
Incorrect design implementation
Incorrect interface implementation
Code logic error
Calculation error
Hardware error
Interface error
Invalid data
Error
Incident
Defect which doesn't require a fix
Invalid configuration
Defect which does require a fix
New
Invalid
Deferred
Opened
Submitted
Build
QA
Verified
Closed
Archived
Defect report
Accurate
Complete
Objective
Concise
Classification information
The activity that was occurring when the defect was found
The phase in which the defect was introduced
The phase in which defect was detected
The ability of the tester to reproduce the defect
The root cause of the problem
The work product in which the mistake was made that caused the defect