keyword | definition |
---|---|
coverage | The degree to which specified coverage items have been determined or have been exercised by a test suite, expressed as a percentage |
coverage item | An attribute or combination of attributes that is derived from one or more test conditions by using a test technique that enables the measurement of the thoroughness of the test execution |
debugging | The process of finding, analyzing and removing the causes of failures in software |
error | A human action that produces an incorrect result |
failure | An event in which a component or system does not perform a required function within specified limits |
quality | The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations |
quality assurance | Part of quality management focused on providing confidence that quality requirements will be fulfilled |
root cause | A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed |
test analysis | The activity that identifies test conditions by analyzing the test basis |
test basis | The body of knowledge used as the basis for test analysis and design |
test case | A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions |
test completion | The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders |
test condition | An aspect of the test basis that is relevant in order to achieve specific test objectives |
test control | A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned |
test data | Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases |
test design | The activity of deriving and specifying test cases from test conditions |
test execution | The process of running a test on the component or system under test, producing actual result(s) |
test implementation | The activity that prepares the testware needed for test execution based on test analysis and design |
test monitoring | A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status, and reporting status to stakeholders |
test object | The component or system to be tested |
test objective | A reason or purpose for designing and executing a test |
test plan | Documentation describing the test objectives to be achieved and the means and the schedules for achieving them, organized to coordinate testing activities |
test planning | The activity of establishing or updating a test plan |
test policy | A high-level document describing the principles, approach and major objectives of the organization regarding testing |
test procedure | A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution |
actual result | The behavior produced/observed when a component or system is tested |
expected result | The predicted observable behavior of a component or system executing under specified conditions, based on its specification or another source |
result | (also: outcome, test outcome, test result) The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. |
testing | The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects |
testware | Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing |
validation | Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled |
verification | Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled |
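To make several of these terms concrete, here is a minimal, hypothetical pytest-style sketch (not from the glossary; the cart example and all names are made up): the fixture establishes a precondition with test data, the test case executes the test object, and the actual result is compared against the expected result.

```python
# Hypothetical sketch tying glossary terms together (pytest assumed).
import pytest

def cart_total(prices):                # the test object (component under test)
    return sum(prices)

@pytest.fixture
def cart_prices():
    return [10.0, 2.5]                 # precondition / test data: a cart with two items

def test_cart_total(cart_prices):      # the test case
    expected = 12.5                    # expected result, derived from the specification
    actual = cart_total(cart_prices)   # test execution produces the actual result
    assert actual == expected          # pass/fail: compare actual vs. expected result
```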
Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and, in extreme cases, even injury or death.
verification: checking whether the system meets specified requirements
validation: checking whether the system meets users' and other stakeholders' needs in its operational environment.
Evaluating work products such as requirements, user stories, designs, and code
Triggering failures and finding defects
Ensuring required coverage for a test object
Reducing the level of risk of inadequate software quality
Verifying whether specified requirements have been fulfilled
Verifying that a test object complies with contractual, legal, and regulatory requirements
Providing information to stakeholders to allow them to make informed decisions
Building confidence in the quality of the test object
Validating whether the test object is complete and works as expected by the stakeholders
context: the work products, the test level, risks, the SDLC (software development lifecycle), and business contexts such as corporate structure, competitive considerations, or time to market.
LO: Exemplify why testing is necessary
cost-effective means of detecting defects
confidence to move to the next stage of the SDLC, such as the release of software
indirect representation of users (direct participation of users is costly and requires suitable users to be available)
compliance with contractual or legal requirements and regulatory standards
LO: Recall the relation between testing and quality assurance
Testing is a form of quality control (QC)
QA works on the basis that if a good process is followed correctly, then it will generate a good product.
In QA, test results provide feedback on how well the development and test processes are performing
LO: Distinguish between root cause, error, defect, and failure
Error: a human mistake; errors produce defects.
Defect (fault, bug): a defect may result in a failure.
Failure: the system fails to do what it should do, or does something it shouldn't (see the sketch below).
Root cause: identified through root cause analysis when a failure occurs or a defect is identified.
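A tiny, hypothetical illustration of the chain: the programmer's error introduces a defect into the code, and executing the defective code causes a failure.

```python
def last_item(items):
    # error: the programmer forgot that list indices start at 0,
    # which introduced a defect into the code
    return items[len(items)]   # defect: should be items[len(items) - 1]

last_item([1, 2, 3])           # failure: raises IndexError instead of returning 3
```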
Even if no defects are found, testing cannot prove the correctness of the test object.
Exhaustive testing is impossible: use test techniques, test case prioritization, and risk-based testing (see the sketch below).
Predicted defect clusters and actual defect clusters are an important input to risk-based testing.
Pesticide paradox: tests repeated over and over become less effective at detecting new defects.
The system should fulfill the specified requirements, but should also fulfill users' needs and expectations, help achieve the customer's business goals, and be competitive with other systems.
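A minimal sketch of risk-based test prioritization (all IDs and scores here are hypothetical): each test condition gets a risk score of likelihood times impact, and tests covering the highest-risk conditions are executed first.

```python
# Hypothetical risk-based prioritization: risk = likelihood x impact.
conditions = [
    {"id": "TC-01", "likelihood": 0.7, "impact": 5},   # known defect cluster
    {"id": "TC-02", "likelihood": 0.2, "impact": 4},
    {"id": "TC-03", "likelihood": 0.5, "impact": 2},
]
for c in conditions:
    c["risk"] = c["likelihood"] * c["impact"]

# Execute the highest-risk tests first.
execution_order = sorted(conditions, key=lambda c: c["risk"], reverse=True)
print([c["id"] for c in execution_order])   # ['TC-01', 'TC-03', 'TC-02']
```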
LO: Summarize the different test activities and tasks
These testing activities usually need to be tailored to the system and the project
Test planning: consists of defining the test objectives and then selecting an approach that best achieves the objectives within constraints imposed by the overall context.
Test monitoring and control: Test monitoring involves the ongoing checking of all test activities and the comparison of actual progress against the plan. Test control involves taking the actions necessary to meet the objectives of testing.
Test analysis: includes analyzing the test basis to identify testable features and to define and prioritize associated test conditions, together with the related risks and risk levels. The test basis and the test objects are also evaluated to identify defects they may contain and to assess their testability. Test analysis is often supported by the use of test techniques. Test analysis answers the question "what to test?".
Test design: includes elaborating the test conditions into test cases and other testware (e.g. test charters). This activity often involves the identification of coverage items, which serve as a guide to specify test case inputs. Test design is supported by the use of test techniques. Test design also includes defining the test data requirements, designing the test environment and identifying any other required infrastructure and tools. Test design answers the question "how to test?" (see the sketch after these activities).
Test execution: includes running the tests in accordance with the test execution schedule (test runs). Test execution may be manual or automated. Test execution can take many forms, including continuous testing or pair testing sessions. Actual test results are compared with the expected results. The test results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to report the anomalies based on the failures observed.
Test completion: activities usually occur at project milestones (e.g. release, end of iteration, test level completion). For any unresolved defects, change requests or product backlog items are created. Any testware that may be useful in the future is identified and archived or handed over to the appropriate teams. The test environment is shut down to an agreed state. The test activities are analyzed to identify lessons learned and improvements for future iterations, releases, or projects. A test completion report is created and communicated to the stakeholders.
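A hypothetical test design sketch (all names made up): the test condition "the age field accepts values from 18 to 65" is elaborated into coverage items (equivalence partitions) and then into concrete test cases with test data, here using pytest's parametrization.

```python
# Hypothetical sketch: from test condition to coverage items to test cases.
import pytest

def is_valid_age(age):             # test object (assumed behavior: 18-65 inclusive)
    return 18 <= age <= 65

# One representative value per partition: below / inside / above the valid range.
@pytest.mark.parametrize("age,expected", [
    (17, False),    # coverage item: invalid partition below the range
    (40, True),     # coverage item: valid partition
    (66, False),    # coverage item: invalid partition above the range
])
def test_age_validation(age, expected):
    assert is_valid_age(age) == expected
```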
LO: Explain the impact of context on the test process
Testing is funded by stakeholders.
Stakeholders (needs, expectations, requirements, willingness to cooperate)
Team members (skills, knowledge, level of experience, availability, training needs)
Business domain (criticality of the test object, identified risks, market needs, specific legal regulations)
Technical factors (type of software, product architecture, technology used)
Project constraints (scope, time, budget, resources)
Organizational factors (organizational structure, existing policies, practices used)
Software development lifecycle (engineering practices, development methods)
Tools (availability, usability, compliance)
Context impacts the test strategy, the test techniques used, the degree of test automation, the required level of coverage, the level of detail of test documentation, and reporting.
LO: Differentiate the testware that supports the test activities
Proper configuration management ensures consistency and integrity of work products.
Test planning: test plan, test schedule, risk register, entry and exit criteria. A risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation.
Test monitoring and control: test progress reports, documentation of control directives and risk information
Test analysis: (prioritized) test conditions (e.g. acceptance criteria) and defect reports regarding defects in the test basis (if not fixed directly)
Test design: (prioritized) test cases, test charters, coverage items, test data requirements and test environment requirements
Test implementation: test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements (stubs, drivers, simulators, service virtualization, test harness); see the stub sketch after this list
Test execution: test logs, defect reports
Test completion: test completion report, action items for improving subsequent projects or iterations, documented lessons learned, and change requests (product backlog items)
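A hypothetical sketch of a test environment element: a stub stands in for a real payment service so the component under test can be exercised in isolation, and the test itself plays the role of the driver. The payment example and all names are made up.

```python
# Hypothetical test implementation sketch: stub + driver.
class PaymentServiceStub:
    """Stub: returns canned answers instead of calling the real service."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def checkout(payment_service, amount):   # component under test
    result = payment_service.charge(amount)
    return result["status"] == "approved"

def test_checkout_with_stub():           # the test acts as the driver
    assert checkout(PaymentServiceStub(), 9.99)
```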
LO: Explain the value of maintaining traceability
effective test monitoring and control
Traceability between the test basis elements, the associated testware (test conditions, risks, test cases), the test results, and the detected defects.
Coverage evaluation (if measurable coverage criteria are defined in the test basis).
Such coverage criteria can act as KPIs (key performance indicators) for the test objectives.
Tracing test cases to requirements can verify that the requirements are covered by test cases (see the sketch below).
Tracing test results to risks can be used to evaluate the level of residual risk in a test object.
Analyzing the impact of changes.
Facilitates test audits.
Helps meet IT governance criteria.
Makes test progress and completion reports easier to understand, by including the status of test basis elements.
Helps communicate the technical aspects of testing to stakeholders in an understandable way.
Provides information to assess product quality, process capability, and project progress against business goals.
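A minimal traceability sketch with hypothetical IDs: a simple mapping from requirements to test cases makes gaps visible and makes requirements coverage measurable and reportable.

```python
# Hypothetical traceability matrix: requirement -> covering test cases.
traceability = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],                  # gap: requirement not yet covered
}

covered = [req for req, tcs in traceability.items() if tcs]
coverage = 100 * len(covered) / len(traceability)
print(f"requirements coverage: {coverage:.0f}%")   # requirements coverage: 67%
```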
LO: Compare the different testing roles
Test management: takes overall responsibility for the test process, the test team, and leadership of the test activities; covers test planning, test monitoring and control, and test completion. In Agile software development, some test management tasks may be handled by the Agile team, while tasks spanning multiple teams may be performed by a test manager outside the development team.
Testing: takes overall responsibility for the engineering (technical) aspect of testing; covers test analysis, test design, test implementation, and test execution.
The test management role may be performed by a team leader, a development manager, or a test manager.
One person may take on the test management and testing roles at the same time.
LO: Give examples of the generic skills required for testing
Testing knowledge (to increase the effectiveness of testing, e.g. by using test techniques)
Being methodical (to identify defects, especially the ones that are difficult to find)
Creativity (to increase the effectiveness of testing)
Technical knowledge (to increase the efficiency of testing, e.g. by using appropriate test tools)
Testers are often the bearers of bad news, so good communication skills are needed to report findings constructively.
Confirmation bias can make it difficult to accept information that disagrees with currently held beliefs.
LO: Recall the advantages of the whole team approach
The whole team approach originates from Extreme Programming.
Any team member with the necessary knowledge and skills can perform any task.
It improves team dynamics, communication, interaction, and collaboration, and creates synergy by allowing the various skill sets within the team to be leveraged for the benefit of the project.
Testers work closely with business representatives to help them create suitable acceptance tests.
Testers work with developers to agree on the test strategy and to decide on test automation approaches.
The whole team approach may not be appropriate for safety-critical systems, where a high level of test independence is needed.
LO: Distinguish the benefits and drawbacks of independence of testing
Benefit: differences between the author's and the testers' cognitive biases mean independent testers are likely to find different defects.
Familiarity: the author or developer can still effectively find many defects in their own code (no independence).
Degrees of independence: developers performing component and component integration testing,
an independent test team performing system and system integration testing,
business representatives performing acceptance testing.
Benefit: independent testers bring different backgrounds, technical perspectives, and biases, and are not bound by the author's assumptions.
Drawback: independence can lead to isolation from the development team and an adversarial relationship, with poor collaboration and communication.