T-76.5613 Software testing and quality assurance

The sole purpose of this resource is to prepare students for the exam of the course T-76.5613 Software testing and quality assurance, which can be taken at Helsinki University of Technology. The exam consists mostly of lecture definitions and questions, which this resource tries to provide answers to.

In the ideal situation, reading this page, instead of the far too long course book, would be more than enough to pass the course exam. So if you are a student taking this course, please contribute!

Lecture definitions

Term: Definition:
Lecture 1: Introduction to Software Quality and Quality Assurance
Software quality
  • The degree to which a system, component or process meets specified requirements.
  • The degree to which a system, component, or process meets customer or user needs or expectations.
Software quality assurance
  • Planned processes that provide confidence in a product's suitability for its intended purpose.
  • It is a set of activities intended to ensure that products satisfy customer requirements.
  • Create good and applicable methods and practices for achieving good enough quality level.
Software testing Testing is the execution of programs with the intent of finding defects.
Good enough quality
  • There are sufficient benefits.
  • No critical problems.
  • The benefits outweigh the problems.
  • Further improvements would be more harmful than helpful.
Lecture 2: Testers and Testing terminology
Verification Verification ensures that the software correctly implements the specification.
Validation Validation ensures that the software meets the customer requirements.
Black-box testing The software being tested is considered as a black box and there is no knowledge of the internal structure.
White-box testing Testing that is based on knowing the inner structure of the software and the program logic.
Functional testing Testing used to evaluate a system with its specified functional requirements.
Non-functional testing Testing to see how the system performs against its specified non-functional requirements, such as reliability and usability.
Dynamic quality practices Testing that executes code. Traditional testing methods.
Static quality practices Practices that do not execute code, such as reviews, inspections and static analysis.
Scripted testing Test case based, where each test case is pre-documented in detail with step-by-step instructions.
Non-scripted testing Exploratory testing without detailed test case descriptions.
Test oracle A document or a piece of software that allows the tester to decide if the test was passed or failed.
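As a concrete illustration of a test oracle, a trusted reference implementation can decide whether a test passed or failed. A minimal sketch in Python; `my_sort` and `check_against_oracle` are hypothetical names, with Python's built-in `sorted` playing the role of the oracle:

```python
# Sketch: using a trusted reference implementation as a test oracle.
# my_sort is a hypothetical function under test; Python's built-in
# sorted() acts as the oracle that decides pass/fail.

def my_sort(items):
    """Implementation under test (a simple insertion sort)."""
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] <= item:
            i += 1
        result.insert(i, item)
    return result

def check_against_oracle(test_input):
    """The oracle (sorted) tells us the expected result."""
    expected = sorted(test_input)   # oracle
    actual = my_sort(test_input)    # system under test
    return actual == expected

assert check_against_oracle([3, 1, 2])
assert check_against_oracle([])
assert check_against_oracle([5, 5, -1])
```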
Reliability The ability of software to perform its required functions under stated conditions for a specified period of time.
Maintainability The effort needed to make changes into the software.
Testability Effort needed to test a software system to ensure it performs its intended functions.
Defect severity Severity of the consequences caused by a software fault.
Defect priority The order in which the found defects are fixed.
Regression testing
  • Re-running tests that have been run before, after changes have been made to the software.
  • To get confidence that everything still works and to reveal any unanticipated side effects.
Testing techniques
  • A testing technique is a definitive procedure that produces a test result.
  • Methods or ways of applying defect detection strategies.
Test case
  • A test case is an input with an expected result.
  • Normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps to follow, input, output, expected result, and actual result.
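The fields listed above can be captured in a lightweight data structure. This is only an illustrative sketch with made-up field names, not a standardized template:

```python
from dataclasses import dataclass

# Illustrative sketch of the test case fields listed above.
# Field names are our own invention, not a standardized schema.

@dataclass
class TestCase:
    identifier: str           # unique identifier
    requirement_ref: str      # reference to the design specification
    preconditions: list       # state required before execution
    steps: list               # series of steps to follow
    test_input: object        # input given to the system
    expected_result: object   # what should happen
    actual_result: object = None  # filled in during execution

tc = TestCase(
    identifier="TC-001",
    requirement_ref="REQ-LOGIN-3",
    preconditions=["user account exists"],
    steps=["open login page", "enter credentials", "submit"],
    test_input={"user": "alice", "password": "secret"},
    expected_result="user is logged in",
)
tc.actual_result = "user is logged in"
assert tc.actual_result == tc.expected_result  # the test passes
```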
Test level
  • Group of test activities that focus on a certain level of the software under test.
  • Examples: Unit testing, Integration testing, System testing, Acceptance testing.
  • Can be seen as level of detail and abstraction.
Test type
  • Group of test activities that evaluate a system concerning some quality characteristics.
  • Examples: Functional testing, Performance testing, Usability testing.
Test phase Temporal parts of the testing process that follow each other sequentially, with or without overlapping each other.
Unit testing A unit (a basic building block) is the smallest testable part of an application.
Integration testing Individual software modules are combined and tested as a group. Communication between units.
System testing Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
Acceptance testing Final stage of validation where the customer or end-user is usually involved.
Alpha testing Testing that is usually done before release to the general public.
Beta testing Beta versions are released to a limited audience outside of the programming team. Testing is performed in the user's environment.
Lecture 3: White-box testing
Control-flow testing Control flow refers to the order in which the individual statements, instructions or function calls of an imperative or functional program are executed or evaluated.
Data-flow testing
  • Data-flow testing looks at the life-cycle of a particular piece of data (variable) in an application.
  • By looking for patterns of data usage, risky areas of code can be found and more test cases can be applied.
Statement coverage
  • Percentage of executable statements (nodes in a flow graph) exercised by a test suite.
  • Statement coverage is 100% if each program statement is executed at least once by some test case.
Decision (branch) coverage Percentage of decision outcomes (edges in a flow graph) exercised by a test suite.
Condition coverage
  • Testing if each boolean sub-expression has evaluated both to true and false.
  • Example: (a<0 || b>0) => (true, false) and (false, true)
Decision/condition (multi-condition) coverage
  • Execution of all sub-predicate boolean value combinations.
  • Example: (a<0 || b>0) => (true, true), (true, false), (false, true) and (false, false)
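A minimal sketch of the difference between the two criteria, using the sub-conditions a<0 and b>0 from the examples above (the concrete test values are our own):

```python
# Sketch: condition coverage vs. multi-condition coverage for (a<0 or b>0).

def predicate(a, b):
    return a < 0 or b > 0

# Condition coverage: each sub-condition evaluates both true and false.
# Two cases suffice: (a<0, b>0) = (True, False) and (False, True).
condition_suite = [(-1, 0), (0, 1)]

# Multi-condition coverage: every combination of sub-condition values.
multi_condition_suite = [(-1, 1), (-1, 0), (0, 1), (0, 0)]

# Note: this condition-coverage suite never makes the whole decision
# evaluate to False, so it misses one decision outcome.
assert {predicate(a, b) for a, b in condition_suite} == {True}

# The multi-condition suite exercises both decision outcomes as well.
assert {predicate(a, b) for a, b in multi_condition_suite} == {True, False}
```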
Lecture 4: Black-box testing techniques
Equivalence partitioning Dividing test inputs into partitions, where each partition contains similar inputs. Testing only one input of each partition leads to a lower number of test cases.
Boundary value analysis Values are chosen that lie along boundaries of the input domain.
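A minimal sketch combining the two techniques for a hypothetical validator that accepts ages from 18 to 65 (the range and function name are made up for illustration):

```python
# Sketch: equivalence partitioning and boundary value analysis for a
# hypothetical validator that accepts ages in the range 18..65.

def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence partitions: below range, inside range, above range.
# One representative per partition keeps the test count low.
partition_representatives = {10: False, 40: True, 90: False}

# Boundary value analysis: values on and next to the boundaries,
# where off-by-one defects typically hide.
boundary_values = {17: False, 18: True, 19: True,
                   64: True, 65: True, 66: False}

for age, expected in {**partition_representatives, **boundary_values}.items():
    assert is_valid_age(age) == expected
```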
Cause and effect graphing
  • A directed graph that maps a set of causes to a set of effects.
  • Allows the tester to combine conditions.
Classification tree testing Can be used to visualise equivalence classes (in a hierarchical way) and to select test cases.
Function testing Testing one function (feature) at a time.
State transition testing State transition testing focuses on the testing of transitions from one state (e.g., open, closed) of an object (e.g., an account) to another state.
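The open/closed account example above can be sketched as a small transition table; the events and the table itself are our own illustration:

```python
# Sketch: state transition testing of a tiny account state machine.
# States follow the open/closed example above; events are hypothetical.

TRANSITIONS = {
    ("closed", "open_account"): "open",
    ("open", "close_account"): "closed",
    ("open", "deposit"): "open",
}

def next_state(state, event):
    """Return the next state, or raise on an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Valid transitions: each row of the table is exercised once.
assert next_state("closed", "open_account") == "open"
assert next_state("open", "deposit") == "open"
assert next_state("open", "close_account") == "closed"

# Invalid transition: depositing into a closed account must be rejected.
try:
    next_state("closed", "deposit")
    assert False, "expected an error"
except ValueError:
    pass
```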
Specification testing
  • Testing based on some sort of specification document.
  • If there is ambiguity in the document, it is more probable that there is a defect.
Failure mode analysis Error guessing based on experience and previous or typical defects.
Scenario testing
  • Testing scenarios are complicated and realistic stories of real usage of the system.
  • The goal is to focus on business needs and realistic situations.
Soap-opera testing Extreme scenario testing. Substitute all input values with extreme values.
Combination testing
  • An algorithm for selecting combinations.
  • Makes it possible to test interactions.
  • Too many combinations. Need for systematic techniques.
Pair-wise testing Every possible pair of interesting values of any two parameters is covered. Reduces combinations radically.
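A minimal sketch of what pair-wise coverage means: with three two-valued parameters, four hand-picked tests cover every pair of values that the eight exhaustive combinations would (the parameter names and values are made up):

```python
from itertools import combinations, product

# Sketch: verifying that a small test suite achieves pair-wise coverage
# of three two-valued parameters. The suite itself is hand-picked.

values = {"os": ["linux", "windows"],
          "browser": ["firefox", "chrome"],
          "lang": ["en", "fi"]}

# 4 tests instead of the 2*2*2 = 8 exhaustive combinations.
suite = [("linux", "firefox", "en"),
         ("linux", "chrome", "fi"),
         ("windows", "firefox", "fi"),
         ("windows", "chrome", "en")]

def covers_all_pairs(suite, values):
    """Check that every value pair of any two parameters appears."""
    names = list(values)
    for (i, p), (j, q) in combinations(enumerate(names), 2):
        wanted = set(product(values[p], values[q]))
        seen = {(test[i], test[j]) for test in suite}
        if wanted - seen:
            return False
    return True

assert covers_all_pairs(suite, values)
assert len(suite) < 2 * 2 * 2  # radically fewer than exhaustive
```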
Decision table testing
  • For modelling complicated business rules.
  • Different combinations lead to different actions.
  • Combinations are visualized. This can reveal unspecified issues.
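A minimal sketch of a decision table as code, for a hypothetical discount rule: each combination of conditions maps to one action, and each rule becomes one test case.

```python
# Sketch: a decision table for a hypothetical discount rule.
# Conditions: (is_member, order_over_100) -> action (discount percent).

DECISION_TABLE = {
    (True,  True):  15,   # member with a large order
    (True,  False): 10,   # member with a small order
    (False, True):   5,   # non-member with a large order
    (False, False):  0,   # non-member with a small order
}

def discount(is_member, order_total):
    return DECISION_TABLE[(is_member, order_total > 100)]

# Each rule (column) of the table becomes one test case.
assert discount(True, 200) == 15
assert discount(True, 50) == 10
assert discount(False, 200) == 5
assert discount(False, 50) == 0
```

Enumerating every condition combination explicitly is what can reveal unspecified cases: a combination missing from the table fails loudly instead of silently doing something unintended.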
Testing heuristics A way of making an educated guess. Can be used to generate ideas for good tests.
Lecture 5: Test Planning
Test Plan
  • Prescribes the scope, approach, resources and schedule of testing activities.
  • Includes features to test and tasks to perform.
Test Policy A company or product level test plan.
Test Strategy
  • Describes how testing is performed and how quality goals are achieved.
  • Includes test levels, test phases, test strategies and test completion criteria.
  • Plans how defect reporting and communication for testing is done.
Detailed Test Plan A more detailed version of the test plan?
Item pass/fail criteria
  • Completion criteria for the test plan.
  • Examples:
    • All test cases completed.
    • Code coverage tool indicates all code covered.
Test completion criteria (stop-test criteria) Criteria of when to stop testing activities.
Release criteria Criteria of when the product is in such a condition that it can be released. Usually not decided by testers.
Lecture 6: Test Case Design and Test Reporting
Test Case
  • A set of inputs, execution conditions and expected results.
  • In order to exercise a program path or to verify compliance with a requirement.
Test Summary Report
  • An IEEE Standard that includes:
    1. Identifier.
    2. Summary.
    3. Variances.
      • Variances to the planned activities.
    4. Comprehensiveness assessment
      • Against the criteria specified in the plan.
      • Reasoning for untested features.
    5. Summary of results
      • Success of testing.
      • All resolved and unresolved defects.
    6. Evaluation
    7. Summary of activities
      • Testing activities and events.
    8. Approvals
      • Who approves the report.
Testing Dashboard A dashboard with coverage and quality for each functional area.
Risk-based testing
  1. Risk based test design techniques.
    • Analyse product risks and use that information to design good test cases.
  2. Risk based test management.
    • Estimate risks for each feature.
    • Prioritize testing according to these estimates.
Lecture 7: Exploratory testing
Exploratory Testing
  • Testing without predefined test cases.
  • Manual testing based on:
    • Knowledge of the tester.
    • Skills of the tester.
  • Exploring the software.
  • Allows tester to adjust plans.
  • Minimize time spent on (pre)documentation.
Session-Based Test Management
  • Enables planning and tracking exploratory testing.
  • Includes a charter that answers:
    • What? Why? How? What problems?
    • And possibly: Tools? Risks? Documents?
    • Allows reviewable results.
  • Done in sessions of ~90 minutes
    • Gets the testing done and allows flexible reporting.
  • Debriefing
Lecture 9: Reviews and Inspections
Formal Inspection
  • A meeting with defined roles and trained participants:
    • Author, reader, moderator, recorder and inspectors.
  • Complete focus on revealing defects.
  • No discussion on solutions or alternatives.
  • Formal entry and exit criteria.
  • Carefully measured and tracked.
Audit An independent evaluation to check conformance of software products and processes. (ISO 9001, CMM)
Scenario based reading Limits the attention of the reader to a certain area.
Joint Application Design (JAD) A workshop where knowledge workers and IT specialists meet, sometimes for several days, to define and review the business requirements for the system.
Individual checking Searching for defects alone.
Additional definitions found from old exams
Design by Contract Prescribes that software designers should define formal, precise and verifiable interface specifications for software components.

Lecture 1 Questions: Introduction to Software QA

Describe Garvin's five viewpoints to product quality and explain how these viewpoints apply to software quality.

Compare ISO 9126 quality model and McCall quality models for software quality?

Describe different reasons that cause defects or lead to low quality software.

Explain what the statement "software is not linear" means in the context of software defects and their consequences. Give two examples of this.

What is quality assurance and how is it related to software testing?

Describe and compare different definitions of software testing that have been presented. How do these definitions differ in terms of the objectives of testing?

Describe the main challenges of software testing and reasons why we cannot expect any 'silver bullet' solutions to these challenges in the near future.

Testing is based on risks. Explain different ways of prioritizing testing. How is prioritizing applied in testing and how can it be used to manage risks?

Lecture 2 Questions: Software testers and testing terminology

Describe typical characteristics and skills of a good software tester. Why are professional testers needed? How can testers help developers achieve better quality?

Describe the V-model of testing and explain how testing differs at different test levels.

Describe the purpose and main difference of Performance testing, Stress testing and Recovery testing.

Lecture 3 Questions: White box testing

Describe branch coverage and condition coverage testing. What can you state about the relative strength of the two criteria? Is one of them stronger than the other?

How can coverage testing tools be used in testing? What kinds of conclusions can you draw based on the results of statement coverage analysis?

Give examples of defect types that structural (coverage) testing techniques are likely to reveal and five examples of defect types that cannot necessarily be revealed with structural techniques. Explain why.

Describe the basic idea of mutation testing. What are the strengths and weaknesses of mutation testing?

Lecture 4 Questions: Black-box testing techniques

Compare function testing and scenario testing techniques. For what kinds of purposes are these two techniques good? What are the shortcomings of the two techniques? How could function testing and scenario testing be used to complement each other?

List and describe briefly different ways of creating good scenarios for testing.

Describe at least five testing heuristics and explain why they are good rules of thumb and how they help testing.

What is Soap Opera Testing? Why is soap opera testing not the same as performing equivalence partitioning and boundary value analysis using extreme values?

Describe different coverage criteria for combination testing strategies. How do these criteria differ in their ability to reveal defects and cover functionality systematically?

Describe the basic idea of decision table testing and the two different approaches to applying it.

Lecture 5 Questions: Test Planning

List and describe at least five different functions that a test plan can be used for.

Describe six essential topics of test planning. What decisions have to be made concerning each of these topics?

  1. Why: Overall test objectives.
    • Quality goals.
  2. What?: What will and won't be tested.
    • Prioritize. Provide reasoning.
    • Analyze the product to make reasonable decisions.
  3. How?: Test strategy.
    • How testing is performed:
    • Test techniques, test levels, test phases.
    • Tools and automation.
    • Processes:
      • How test cases are created and documented.
      • How defect reporting is done.
  4. Who?: Resource requirements.
    • People.
      • Plan responsibilities.
    • Equipment.
    • Office space.
    • Tools and documents.
    • Outsourcing.
  5. When: Test tasks and schedule.
    • Connect testing with overall project schedule.
  6. What if?: Risks and issues.
    • Risk management of the test project, not the product.

How does estimating testing effort differ from estimating other development efforts? Why might planning relative testing schedules be a good idea?

How is test prioritization different from implementation or requirements prioritization? Why can we not skip all low-priority tests when time is running out?

What defines a good test plan? Present some test plan heuristics (6 well-explained heuristics will produce six points in the exam).

Lecture 6 Questions: Test case design and Test Reporting

Describe the difference between designing test ideas or conditions and designing test cases. How does this difference affect test documentation?

Why is it important to plan testing in the early phases of a software development project? Why could it be a good idea to delay detailed test design and test case design to later phases, near the actual implementation?

Give reasons why test case descriptions are needed and describe different qualities of good tests or test cases.

What issues affect the needed level of detail in test case descriptions? In what kinds of situations are very detailed test case descriptions needed? What reasons can be used to motivate using less detailed, high level test case descriptions?

How can defect reports be made more useful and understandable? What kinds of aspects should you pay attention to when writing defect reports?

What is essential information in test reporting for management? How does management utilize the information that testing provides? Why is a list of passed and failed test cases with defect counts not sufficient for reporting test results?

Lecture 7 Questions: Exploratory testing

What are the five differences that distinguish exploratory testing from test-case based (or scripted) testing?

What benefits can be achieved using the exploratory testing (ET) approach? What are the most important challenges of using ET? In what kinds of situations would ET be a good approach?

Describe the main idea of Session-Based Test Management (SBTM). How are the needs for test planning, tracking and reporting handled in SBTM?

Give reasons that support the hypothesis that Exploratory Testing could be more efficient than test-case-based testing in revealing defects.

Lecture 9 Questions: Software Reviews and Inspections

Describe and compare reviews and dynamic testing (applicability, benefits, shortcomings, defect types)

Present and describe briefly the four dimensions of inspections.

Explain the different types of reviews and compare their similarities and differences.

Describe the costs, problems, and alternatives of reviews.

Lecture 10 Article Questions: Static Code Analysis and Code Reviews

Describe the taxonomy of code review defects for both functional and evolvability defects, and describe the types of defects actually found in code reviews. (Article: "What types of defects are really discovered in code reviews?")

What is static code analysis and what can be said about the pros and cons of static code analysis for defect detection? (Articles: "Predicting Software Defect Density: A Case Study on Automated Static Code Analysis" and "Using static analysis to find bugs")

Describe the Cleanroom process model. What are the benefits of the Cleanroom model? How has the model been criticized?

Lecture 11 Article Questions: Test Automation

Describe data-driven and keyword-driven test automation. How do they differ from regular test automation techniques, what are their benefits and shortcomings?

Software lifecycle V-model and testing tools. What kinds of tools are available for different phases, and how do they improve software quality?


What problems are associated with automating testing? What kinds of false beliefs and assumptions do people make about test automation?

Lecture 12 Article Questions: Agile Testing

What kinds of challenges does the agile development approach place on software testing? Describe contradictions between the principles of agile software development and traditional testing and quality assurance.

Agile Principle: Challenge:
Frequent deliveries of valuable software
  • Short time for testing in each cycle
  • Testing cannot exceed the deadline
Responding to change even late in the development Testing cannot be based on completed specifications
Relying on face-to-face communication Getting developers and business people actively involved in testing
Working software is the primary measure of progress Quality information is required early and frequently throughout development
Simplicity is essential Testing practices easily get dropped for simplicity's sake


Testing principle: Contradicting practices in agile methods:
Independency of testing
  • Developers write tests for their own code
  • The tester is one of the developers or a rotating role in the development team
Testing requires specific skills
  • Developers do the testing as part of the development
  • The customer has a very important and collaborative role and a lot of responsibility for the resulting quality
Oracle problem Relying on automated tests to reveal defects
Destructive attitude Developers concentrate on constructive QA practices, i.e., building quality into the product and showing that features work
Evaluating achieved quality Confidence in quality through tracking conformance to a set of good practices

Read the experiences of David Talby et al. presented in their article "Agile Software Testing in a Large-Scale Project". Describe how they tackled the following areas in a large-scale agile development project: Test design and execution, Working with professional testers, Activity planning and Defect management

Additional questions from old exams

Describe the relationship of equivalence partitioning (EP), boundary value analysis (BVA) and cause-and-effect graphing (CEG). What are the differences between the three techniques? Can the techniques be used together to complement each other? Why or why not?


Describe the basic idea of pair-wise testing. What kinds of testing problems is pair-wise testing good for and why does it work? Describe also what shortcomings or problems you should pay attention to when applying pair-wise testing.


This article is issued from Wikiversity - version of the Thursday, May 07, 2015. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.