Review Question Set IV
- Does the software design testing phase ensure that a program is totally
error free? Explain your answer.
No. The objective of testing is reliable software. It is always possible
that undetected errors may exist even after the most comprehensive and
rigorous testing is performed.
- Explain the purpose of white box testing.
White box testing is based on the direct examination of the internal
logical structure of the software. It uses knowledge of the program structure
to develop efficient and effective tests of the program's functionality. Logic
paths through the software are tested by providing test cases that exercise
specific sets of sequence, if-then-else, do-while and do-until constructs.
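As a small illustration, consider a hypothetical function (not from the text) with one if-then-else and one loop; white box test cases are chosen so that every logic path is exercised:

```python
def classify_total(values):
    """Hypothetical function: sums a list and labels the result."""
    total = 0
    i = 0
    while i < len(values):   # loop construct to exercise
        total += values[i]
        i += 1
    if total > 100:          # if-then-else paths to exercise
        return "large"
    else:
        return "small"

# White box test cases chosen to cover each logic path:
assert classify_total([]) == "small"          # loop body never executes
assert classify_total([50, 60]) == "large"    # loop executes; true branch
assert classify_total([10, 20]) == "small"    # loop executes; false branch
```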
- Explain the purpose of black box testing.
Black box testing demonstrates that software functions are operational,
that output is correctly produced from input, and that databases are properly
accessed and updated. It requires knowledge of the user requirements to
conduct such tests. Black box test cases consist of sets of input conditions,
both valid and intentionally invalid, that fully exercise all the functional
requirements of a program. Black box testing is focused on uncovering errors
that occur in implementing user requirements and systems design specifications.
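For example, given only a hypothetical requirement such as "orders of 100 or more get a 10% discount, smaller orders get none, and negative totals are rejected" (an invented spec, not one from the text), black box tests are derived purely from inputs and expected outputs, never from the code inside:

```python
def compute_discount(order_total):
    """Hypothetical implementation under test; its internals are not examined."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.10 if order_total >= 100 else 0.0

# Black box test cases: derived from the stated requirement only.
assert compute_discount(50) == 0.0        # valid input, below threshold
assert compute_discount(100) == 10.0      # valid input, at threshold
try:
    compute_discount(-1)                  # intentionally invalid input
    raise AssertionError("expected rejection of negative total")
except ValueError:
    pass
```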
- Briefly describe the reasons to test the following commands: SELECT,
OPEN/CLOSE, COPY REPLACING, IF, PERFORM UNTIL or PERFORM WHILE, and CALL.
The reasons for testing are:
- SELECT - It can be checked to see if the program processes the proper and authorised files.
- OPEN/CLOSE - This type of command makes a file available or unavailable for processing. Multiple OPEN/CLOSE commands in a program could mean a file is being made available for unauthorised processing.
- COPY REPLACING - This command changes the definition of data items copied into a program from a source library. It should be examined to see that the changes give the right results.
- IF - It could be used to execute an unauthorised or erroneous section of code when a certain condition is true or false.
- PERFORM UNTIL and PERFORM WHILE - The loop should be examined to make sure that it is activated the proper number of times.
- CALL - The module called may be the wrong one, or it may be unauthorised.
- Define an equivalence class. Give 2 examples of equivalence classes.
Two input values are in the same equivalence class if they are handled by
the program in the same way. There are 2 types of equivalence classes: valid
and invalid. Full testing requires the application of both valid and invalid
tests. For example, suppose that the acceptable input for a data field is a
number between 1 and 50. It is a waste of effort to test 45, 38, 12 and 16,
because they all reside in the same valid equivalence class. It is more
efficient to test just one number between 1 and 50, test the end points 1
and 50 (boundary testing), and test invalid cases, such as 0, negative
numbers or a number greater than 50.
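A sketch of this 1-to-50 example in Python (the validation function is invented, but the test values follow the partitioning described above):

```python
def accept(value):
    """Hypothetical validator: the field accepts whole numbers from 1 to 50."""
    return isinstance(value, int) and 1 <= value <= 50

# One representative from the valid equivalence class is enough:
assert accept(38)
# Boundary tests at the end points:
assert accept(1) and accept(50)
# Representatives from the invalid equivalence classes:
assert not accept(0)      # just below the lower bound
assert not accept(-7)     # negative number
assert not accept(51)     # just above the upper bound
```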
- What elements should a test case matrix include?
A test case matrix records and documents the test objectives, the expected
results of each test, the test cases conducted and the actual results of the tests.
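A minimal sketch of one such matrix row as a record structure (the field values are invented for illustration):

```python
test_case_matrix = [
    {
        "objective": "reject out-of-range input",   # test objective
        "test_case": "enter 51 in the 1-50 field",  # test case conducted
        "expected": "error message displayed",      # expected result
        "actual": "error message displayed",        # actual result
    },
]
```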
- What elements should an Error Report contain?
An Error Report should contain the following elements:
- Error report number
- Program name
- Report type, one of the following:
  - Suggestion
  - Design error
  - Coding error
  - Documentation error
  - Query
- Severity, one of the following:
  - Minor
  - Serious
  - Fatal
- Any attachments, with a description of each
- A note on whether the error can be reproduced
- What the error was and how it can be reproduced
- Suggested fix
- Name of the tester and the date of the test
- Name of the coding team member assigned to the error, and the date of assignment
- The assigned member's resolution code, one of the following:
  - Fixed
  - Can't reproduce it
  - Can't be fixed
  - Disagree with suggestion
  - Withdrawn by tester
  - Works according to specifications
- Resolution certification, which includes:
  - Resolved by whom (coder's and tester's names)
  - Date
  - Approval by project manager and date
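These elements might be captured as a record type; a minimal sketch (the field names are invented, not from the text):

```python
from dataclasses import dataclass, field

@dataclass
class ErrorReport:
    report_number: int
    program_name: str
    report_type: str           # Suggestion, Design error, Coding error, Documentation error, Query
    severity: str              # Minor, Serious, Fatal
    reproducible: bool         # whether the error can be reproduced
    description: str           # what the error was and how to reproduce it
    suggested_fix: str
    tester: str
    test_date: str
    assigned_to: str = ""      # coding team member and date of assignment
    resolution_code: str = ""  # Fixed, Can't reproduce it, Can't be fixed, ...
    attachments: list = field(default_factory=list)  # with descriptions
```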
- Name the error report types and their severities.
Report type can be one of the following:
- Suggestion
- Design error
- Coding error
- Documentation error
- Query
The severity can be one of the following:
- Minor
- Serious
- Fatal
- List 6 ways to resolve error types.
Errors can be resolved in one of the following ways:
- Fixed
- Can't reproduce it
- Can't be fixed
- Disagree with suggestion
- Withdrawn by tester
- Works according to specifications
- If a tester is unsure of what triggered an error, what should he or she
do?
He or she should write down everything that occurred just before the error
was triggered. Even good guesses should be noted. If any or all of these items
could be helpful to the person assigned to resolve the error, they should be
attached to the error report.
- Define a regression test and explain its purpose.
After a program error has been fixed, the tests that uncovered the error in
the first place should be repeated. This is a regression test. This retesting
may discover new program errors that were masked by errors found in the first
test. Added variations on the initial tests, to make sure that the fix works,
are also part of the regression test.
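As a sketch, suppose a divide-by-zero bug was just fixed in a hypothetical average() function (both the function and the bug are invented); the regression test repeats the case that exposed the bug and adds variations around it:

```python
def average(values):
    """Hypothetical function whose divide-by-zero bug on empty input was just fixed."""
    return sum(values) / len(values) if values else 0.0

# Regression test: repeat the test that uncovered the original error ...
assert average([]) == 0.0
# ... plus added variations to confirm the fix and flush out masked errors.
assert average([4]) == 4.0
assert average([2, 4, 6]) == 4.0
```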
- What's the purpose of a seeding model? Explain how it works and its
relationship to helping estimate software reliability.
Error seeding, or bebugging, is when a program is randomly seeded with a
number of known artificial errors that represent the kinds of errors typically
encountered. Such errors are unknown to the testers. The program is then run
with the test cases. The error seeding method assumes that program reliability
is related to the number of errors removed from it. After both real and seeded
errors are detected during a test run, the number of remaining real errors is
approximated by the formula:
 remaining no. of real errors        no. of real errors detected
------------------------------  =  -----------------------------
remaining no. of seeded errors      no. of seeded errors detected
The proportion of errors not detected helps determine the quality of the
test cases and the general testing process, which in turn helps estimate
software reliability. For the seeding method to give a fair approximation of
the remaining number of real errors, that is, of the level of program
reliability, the seeded errors must be similar to the real errors.
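A worked example of the formula (the counts are invented): if 20 errors are seeded and a test run detects 16 of them along with 40 real errors, the 4 remaining seeded errors suggest that about 10 real errors remain:

```python
seeded_total = 20        # artificial errors planted in the program
seeded_detected = 16     # seeded errors found by the test run
real_detected = 40       # real errors found by the same run

remaining_seeded = seeded_total - seeded_detected
# remaining real / remaining seeded = real detected / seeded detected
remaining_real = remaining_seeded * real_detected / seeded_detected
print(remaining_real)    # 10.0 real errors estimated to remain
```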
- Define risk-driven testing. What is its purpose?
In most programs it is impossible to test all possibilities, so it makes
sense to concentrate on the areas that present the greatest risk. Software
modules should be plotted on a risk grid based on their probability of error
and level of impact.
On such a module risk rating grid, the modules with the highest risk, for
example a module A plotted in the top corner, need to receive most of the
testing effort, because such a module has the highest probability of an error
occurring and the strongest impact on the program.
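One simple way to sketch such a grid is to score each module by probability of error times impact and rank the modules; the names and ratings below are invented:

```python
# (module, probability of error, impact), each rated 1 (low) to 3 (high)
modules = [("A", 3, 3), ("B", 1, 3), ("C", 2, 1)]

# Rank modules by risk = probability x impact; test the riskiest first.
by_risk = sorted(modules, key=lambda m: m[1] * m[2], reverse=True)
for name, prob, impact in by_risk:
    print(name, prob * impact)   # A 9, B 3, C 2
```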
- Define error density.
Twenty percent of the modules may account for eighty percent of the errors.
This uneven error density is present in most programs. It implies that the
probability of the existence of more errors in a module is proportional to
the number of errors already detected in that module.
- Differentiate between module testing and integration
testing.
Integration testing is performed by combining modules in steps. While
module testing concentrates on specific modules, integration testing is
performed on a hierarchy of modules, especially on interfaces between modules.
- Define a stub module and describe how it works. Define a driver module and
describe how it works.
A module under test that invokes other modules and transmits data to them
requires stub modules to model this relationship. A stub takes the place of a
called module that hasn't been coded yet.
A driver module calls the module under test and passes it test data. Stubs
and drivers link modules to enable them to run in an environment close to the
real one of the future system.
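A minimal sketch in Python (all names are invented): the stub stands in for an uncoded called module, and the driver calls the module under test with test data:

```python
def lookup_rate_stub(region):
    """Stub: replaces the rate-lookup module that hasn't been coded yet."""
    return 0.05   # canned answer, just enough for the caller to run

def compute_tax(amount, lookup_rate):
    """Module under test: invokes a lower module and transmits data to it."""
    return amount * lookup_rate("default")

def driver():
    """Driver: calls the module under test and passes it test data."""
    result = compute_tax(200.0, lookup_rate_stub)
    assert result == 10.0
    print("compute_tax passed with stubbed rate lookup")

driver()
```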
- What are the 3 components of systems testing? Briefly describe the purpose
of each.
Systems testing is the process of testing the integrated software in the
context of the total system that it supports. The 3 components of systems
testing are:
- Recovery Testing
Recovery testing forces the software to fail in various ways and verifies
that complete recovery is properly performed.
- Security Testing
Test cases are conducted (by Tiger Teams) to verify that proper controls
have been designed and installed in the system to protect it from a number of
risks.
- Stress Testing
This type of testing executes a system in a manner that demands resources
in abnormal quantity, frequency or volume. Stress testing validates the
program's ability to handle large volumes of transactions in a timely manner.
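A minimal stress-test sketch, assuming a hypothetical process_transaction function standing in for the real system: it submits an abnormally large batch of transactions concurrently and checks that they all complete in time:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_transaction(n):
    """Hypothetical transaction handler standing in for the real system."""
    return n * 2

start = time.time()
with ThreadPoolExecutor(max_workers=50) as pool:
    # Abnormal volume: far more transactions than normal operation sees.
    results = list(pool.map(process_transaction, range(100_000)))
elapsed = time.time() - start

assert len(results) == 100_000
print(f"processed {len(results)} transactions in {elapsed:.2f}s")
```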
- What is the purpose of acceptance testing? Who are the 2 key players in
acceptance testing?
Acceptance testing evaluates the new system to determine if it meets user
requirements under operating conditions.
The 2 key players in acceptance testing are:
- the test group
- the user test group, composed of all or a sample of the people who will
work with the system under development.
- Define alpha and beta tests.
Alpha testing is conducted in a natural setting, that is, a real-world
operating environment, with systems professionals in attendance, usually as
observers, recording errors and usage problems.
Beta testing is similar to alpha testing, except that no systems
professionals are present during user acceptance testing. The user test group
records all problems, real or imagined, encountered during beta testing and
reports them to the systems people periodically.