ISTQB CTFL 1. Fundamentals of Testing

Ho Saint · August 18, 2024

Keywords

| keyword | definition |
| --- | --- |
| coverage | The degree to which specified coverage items have been determined or have been exercised by a test suite, expressed as a percentage |
| coverage item | An attribute or combination of attributes that is derived from one or more test conditions by using a test technique that enables the measurement of the thoroughness of the test execution |
| debugging | The process of finding, analyzing and removing the causes of failures in software |
| error | A human action that produces an incorrect result |
| failure | An event in which a component or system does not perform a required function within specified limits |
| quality | The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations |
| quality assurance | Part of quality management focused on providing confidence that quality requirements will be fulfilled |
| root cause | A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed |
| test analysis | The activity that identifies test conditions by analyzing the test basis |
| test basis | The body of knowledge used as the basis for test analysis and design |
| test case | A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions |
| test completion | The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders |
| test condition | An aspect of the test basis that is relevant in order to achieve specific test objectives |
| test control | A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned |
| test data | Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases |
| test design | The activity of deriving and specifying test cases from test conditions |
| test execution | The process of running a test on the component or system under test, producing actual result(s) |
| test implementation | The activity that prepares the testware needed for test execution based on test analysis and design |
| test monitoring | A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status, and reporting status to stakeholders |
| test object | The component or system to be tested |
| test objective | A reason or purpose for designing and executing a test |
| test plan | Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities |
| test planning | The activity of establishing or updating a test plan |
| test policy | A high-level document describing the principles, approach and major objectives of the organization regarding testing |
| test procedure | A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution |
| actual result | The behavior produced/observed when a component or system is tested |
| expected result | The predicted observable behavior of a component or system executing under specified conditions, based on its specification or another source |
| result (outcome, test outcome, test result) | The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. |
| testing | The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects |
| testware | Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing |
| validation | Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled |
| verification | Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled |

What is Testing?

  • Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and, in extreme cases, even injury or death.

  • verification: checking whether the system meets specified requirements

  • validation: checking whether the system meets users' and other stakeholders' needs in its operational environment.

Test Objectives

  • LO: Identify typical test objectives

Typical Objectives

  • Evaluating work products such as requirements, user stories, designs, and code

  • Triggering failures and finding defects

  • Ensuring required coverage for a test object

  • Reducing the level of risk of inadequate software quality

  • Verifying whether specified requirements have been fulfilled

  • Verifying that a test object complies with contractual, legal, and regulatory requirements

  • Providing information to stakeholders to allow them to make informed decisions

  • Building confidence in the quality of the test object

  • Validating whether the test object is complete and works as expected by the stakeholders

  • These objectives vary with the context: the work product being tested, the test level, risks, the SDLC (Software Development Lifecycle), and the business context, such as corporate structure, competitive considerations, or time to market.

Testing and Debugging

  • LO: Differentiate testing from debugging

Testing: can trigger failures (dynamic testing) or find defects directly in the test object (static testing)

Debugging: finding the causes of failures (defects), analyzing them, and removing them (a minimal sketch follows the list below)

  • Reproduction of a failure
  • Diagnosis (finding the root cause)
  • Fixing the cause
  • When static testing identifies a defect, debugging is concerned with removing it (no reproduction or diagnosis is needed)
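
A minimal sketch of this workflow in Python, assuming an invented average function with a planted defect:

```python
# Sketch of the debugging workflow (invented example, not from the syllabus).

def average(values):
    # Defect: divides by a hard-coded 2 instead of len(values).
    return sum(values) / 2

def test_average_of_three_values():
    # 1. Reproduction: this test reliably triggers the failure.
    assert average([1, 2, 3]) == 2  # fails: actual result 3.0, expected 2

# 2. Diagnosis: reading or stepping through average() reveals the root cause,
#    the hard-coded divisor.
# 3. Fixing: replace 2 with len(values) and re-run the test to confirm the
#    defect has been removed.
```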

Why is Testing Necessary?

Testing's Contributions to Success

  • LO: Exemplify why testing is necessary

  • a cost-effective means of detecting defects

  • confidence for moving to the next stage of the SDLC, such as the release of software

  • indirect representation of users (direct participation of users is costly and requires the availability of suitable users)

  • compliance with contractual or legal requirements and regulatory standards

Testing and Quality Assurance (QA)

  • LO: Recall the relation between testing and quality assurance

  • Testing is a form of quality control (QC)

  • QA works on the basis that if a good process is followed correctly, then it will generate a good product.

  • In QA, test results provide feedback on how well the development and test processes are performing

Errors, Defects, Failures, and Root Causes

  • LO: Distinguish between root cause, error, defect, and failure

  • Error: a human mistake; errors produce defects.

  • Defect (fault, bug): defects can result in failures.

  • Failure: the system fails to do what it should do, or does something it should not.

  • Root cause: identified through root cause analysis when a failure occurs or a defect is found.

Testing Principles

  • LO: Explain the seven testing principles
  1. Testing shows the presence, not the absence, of defects.
	Even if no defects are found, testing cannot prove that the test object is correct.
  2. Exhaustive testing is impossible.
	Use test techniques, test case prioritization, and risk-based testing instead (a rough arithmetic sketch follows this list).
  3. Early testing saves time and money.
  4. Defects cluster together.
	Predicted and actual defect clusters are an important input to risk-based testing.
  5. Tests wear out.
	Pesticide paradox: tests repeated over and over stop finding new defects.
  6. Testing is context dependent.
  7. Absence-of-defects fallacy.
	The system should not only fulfill the specified requirements, but also fulfill users' needs and expectations, help achieve the customer's business goals, and be competitive with other systems.
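
As a back-of-the-envelope sketch of principle 2 (the input space and execution rate are assumptions, not syllabus material):

```python
# Rough illustration: exhaustive testing of even a tiny interface is infeasible.
# Assumes a function taking two independent 32-bit integer inputs and an
# (optimistic) execution rate of one million test cases per second.
combinations = 2**32 * 2**32            # every possible input pair: 2^64
rate_per_second = 1_000_000             # assumed test execution rate
seconds_per_year = 60 * 60 * 24 * 365

years = combinations / rate_per_second / seconds_per_year
print(f"{combinations:.3e} input combinations -> about {years:,.0f} years")
# ~585,000 years, which is why test techniques, test case prioritization and
# risk-based testing are used instead.
```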

Test Activities, Testware and Test Roles

  • Which test activities are included in this test process, how they are implemented, and when they occur is normally decided as part of the test planning for the specific situation

Test Activities and Tasks

  • LO: Summarize the different test activities and tasks

  • These testing activities usually need to be tailored to the system and the project

  • Test planning: consists of defining the test objectives and then selecting an approach that best achieves the objectives within constraints imposed by the overall context.

  • Test monitoring and control: Test monitoring involves the ongoing checking of all test activities and the comparison of actual progress against the plan. Test control involves taking the actions necessary to meet the objectives of testing.

  • Test analysis: includes analyzing the test basis to identify testable features and to define and prioritize associated test conditions, together with the related risks and risk levels. The test basis and the test objects are also evaluated to identify defects they may contain and to assess their testability. Test analysis is often supported by the use of test techniques.

  • Test design: includes elaborating the test conditions into test cases and other testware (e.g. test charters). This activity often involves the identification of coverage items, which serve as a guide to specify test case inputs. Test design is supported by the use of test techniques. Test design also includes defining the test data requirements, designing the test environment and identifying any other required infrastructure and tools. Test design answers the question "how to test?" (a small sketch follows this list).

  • Test execution: includes running the tests in accordance with the test execution schedule (test runs). Test execution may be manual or automated. Test execution can take many forms, including continuous testing or pair testing sessions. Actual test results are compared with the expected results. The test results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to report the anomalies based on the failures observed.

  • Test completion: activities usually occur at project milestones (e.g. release, end of iteration, test level completion); for any unresolved defects, change requests or product backlog items are created. Any testware that may be useful in the future is identified and archived or handed over to the appropriate teams. The test environment is shut down to an agreed state. The test activities are analyzed to identify lessons learned and improvements for future iterations, releases, or projects. A test completion report is created and communicated to the stakeholders.
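
A small sketch of how test design and execution fit together, assuming an invented apply_discount function: equivalence partitions act as coverage items, and each test case pairs inputs with an expected result.

```python
# Sketch: test design and execution for a hypothetical test object.
# apply_discount is invented for this illustration: orders of 100 or more get 10% off.
import pytest

def apply_discount(total: float) -> float:
    return total * 0.9 if total >= 100 else total

# Coverage items from equivalence partitioning and boundary value analysis.
# Each tuple is one test case: (input, expected result).
@pytest.mark.parametrize("total, expected", [
    (99.0, 99.0),    # partition: no discount
    (100.0, 90.0),   # boundary value: discount starts here
    (150.0, 135.0),  # partition: discount applies
])
def test_apply_discount(total, expected):
    # Test execution: the actual result is compared with the expected result.
    assert apply_discount(total) == expected
```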

Test Process in Context

  • LO: Explain the impact of context on the test process

  • Testing is funded by stakeholders.

  • Stakeholders (needs, expectations, requirements, willingness to cooperate)

  • Team members (skills, knowledge, level of experience, availability, training needs)

  • Business domain (criticality of the test object, identified risks, market needs, specific legal regulations)

  • Technical factors (type of software, product architecture, technology used)

  • Project constraints (scope, time, budget, resources)

  • Organizational factors (organizational structure, existing policies, practices used)

  • Software development lifecycle (engineering practices, development methods)

  • Tools (availability, usability, compliance)

Context impacts the test strategy, test techniques, degree of test automation, required level of coverage, level of detail of test documentation, and reporting.

Testware

  • LO: Differentiate the testware that supports the test activities

  • Proper configuration management ensures consistency and integrity of work products.

  • Test planning: test plan, test schedule, risk register, entry and exit criteria. A risk register is a list of risks together with risk likelihood, risk impact and information about risk mitigation.

  • Test monitoring and control: test progress reports, documentation of control directives and risk information

  • Test analysis: (prioritized) test conditions (e.g. acceptance criteria) and defect reports regarding defects in the test basis (if not fixed directly)

  • Test design: (prioritized) test cases, test charters, coverage items, test data requirements and test environment requirements

  • Test implementation: test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements such as stubs, drivers, simulators and service virtualization, as well as the test harness (a minimal stub sketch follows this list)

  • Test execution: test logs, defect reports

  • Test completion: test completion report, action items for improving subsequent projects or iterations, documented lessons learned, and change requests (e.g. product backlog items)
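
A minimal sketch of one such test environment element, a stub; PaymentGatewayStub and checkout are invented names for this illustration.

```python
# Sketch: a stub used as a test environment element during test implementation.
class PaymentGatewayStub:
    """Stands in for a real payment service that may be unavailable in the test environment."""

    def charge(self, amount: float) -> bool:
        # Always succeeds so the test object can be exercised in isolation.
        return True

def checkout(cart_total: float, gateway) -> str:
    # Test object: depends on a payment gateway supplied from outside.
    return "order confirmed" if gateway.charge(cart_total) else "payment failed"

def test_checkout_confirms_order_when_payment_succeeds():
    assert checkout(42.0, PaymentGatewayStub()) == "order confirmed"
```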

Traceability between the Test Basis and Testware

  • LO: Explain the value of maintaining traceability

  • effective test monitoring and control

  • traceability between test basis elements, the associated testware (test conditions, risks, test cases), test results, and detected defects

  • coverage evaluation (if measurable coverage criteria are defined in the test basis)

  • measurable coverage criteria can act as key performance indicators (KPIs) for the test objectives

  • tracing test cases to requirements verifies that the requirements are covered by test cases (see the sketch after this list)

  • tracing test results to risks helps evaluate the level of residual risk in a test object

  • impact of changes

  • facilitates test audits

  • IT governance criteria

  • makes test progress and completion reports easier to understand by including the status of test basis elements

  • communicating the technical aspects of testing to stakeholders in understandable terms

  • assessing product quality, process capability, and project progress against business goals
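
A toy sketch of such traceability, with invented requirement and test case identifiers, showing how requirements coverage can be derived from it:

```python
# Sketch: a tiny traceability matrix from requirements to test cases.
# Coverage is reported as a percentage, as in the glossary definition above.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],  # requirement covered by two test cases
    "REQ-002": ["TC-03"],
    "REQ-003": [],                  # gap: not yet covered by any test case
}

covered = sum(1 for test_cases in traceability.values() if test_cases)
coverage = 100 * covered / len(traceability)
print(f"Requirements coverage: {coverage:.1f}%")  # Requirements coverage: 66.7%
```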

Roles in Testing

  • LO: compare the different testing roles

  • Test management: takes overall responsibility for the test process, the test team and the leadership of the test activities; mainly focused on test planning, test monitoring and control, and test completion. In Agile software development, some of these tasks may be handled by the Agile team, while tasks that span multiple teams may be performed by a test manager outside the development team.

  • Testing: takes overall responsibility for the engineering (technical) aspect of testing; mainly focused on test analysis, test design, test implementation and test execution.

The test management role can be taken on by a team leader, a development manager or a test manager; one person can take on the test management and the testing role at the same time.

Essential Skills and Good Practices in Testing

Generic Skills Required for Testing

  • LO: Give examples of the generic skills required for testing

  • Testing knowledge (effectiveness, test techniques)

  • being methodical (to identify defects, especially the ones that are difficult to find)

  • creativity (to increase effectiveness of testing)

  • Technical knowledge (to increase efficiency of testing, e.g. by using appropriate test tools)

  • communication skills: testers are often seen as bearers of bad news when reporting defects

  • confirmation bias can make it difficult for others to accept information that disagrees with their currently held beliefs

Whole Team Approach

  • LO: Recall the advantages of the whole team approach

  • The approach comes from Extreme Programming (XP).

  • Any team member with the necessary knowledge and skills can perform any task.

  • Improves team dynamics, communication, interaction and collaboration, and creates synergy by allowing the various skill sets within the team to be leveraged for the benefit of the project.

  • Testers work closely with business representatives to help create suitable acceptance tests.

  • Testers work with developers on the test strategy and decide on test automation approaches.

  • The whole team approach may not be appropriate for safety-critical systems, where a high level of test independence is required.

Independence of Testing

  • LO: Distinguish the benefits and drawbacks of independence of testing

  • differences between authors' and testers' cognitive biases

  • familiarity (the author or developer can effectively find many defects in their own code)

  • developers perform component and component integration testing

  • an independent test team performs system and system integration testing

  • business representatives perform acceptance testing

  • independent testers bring different backgrounds, technical perspectives and biases, and can challenge the authors' assumptions

  • drawback: independence can lead to isolation from the development team or an adversarial relationship
