Review Question Set IV

  1. Does the software design testing phase ensure that a program is totally error free? Explain your answer.

    No. The objective of testing is reliable software, not error-free software. It is always possible that undetected errors exist even after the most comprehensive and rigorous testing has been performed.

  2. Explain the purpose of whitebox testing.

    White box testing is based on the direct examination of the internal logical structure of the software. It uses knowledge of the program structure to develop efficient and effective tests of the program's functionality. Logic paths through the software are tested by providing test cases that exercise specific sets of sequence, if-then-else, do-while and do-until constructs.
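    The idea of exercising each logic path can be sketched as follows. The function and its inputs are hypothetical; the point is that the test cases are chosen from knowledge of the internal branch structure, not from the requirements alone.

```python
# Hypothetical function containing one if-then-else construct,
# giving two logic paths through the code.
def classify(balance):
    if balance < 0:
        return "overdrawn"
    else:
        return "in credit"

# White box test cases chosen so that each path is exercised once.
assert classify(-1) == "overdrawn"    # true branch
assert classify(100) == "in credit"   # false branch
```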

  3. Explain the purpose of black box testing.

    Black box testing demonstrates that software functions are operational, that output is correctly produced from input, and that databases are properly accessed and updated. It requires knowledge of the user requirements to conduct such tests. Black box test cases consist of sets of input conditions, either intentionally valid or invalid, that fully exercise all functional requirements of a program. Black box testing is focused on uncovering errors that occur in implementing user requirements and systems design specifications.

  4. Briefly describe the reasons to test the following commands: SELECT, OPEN/CLOSE, COPY REPLACING, IF, PERFORM UNTIL or PERFORM WHILE and CALL.

    Reasons for testing are :

    1. SELECT - It can be checked to see if the program processes proper and authorised files.
    2. OPEN/CLOSE - This type of command makes a file available or unavailable for processing. Multiple OPEN/CLOSE commands in a program could mean a file is being made available for unauthorised processing.
    3. COPY REPLACING - This command changes the definition of data items copied into a program from a source library. It should be examined to see that the changes give the right results.
    4. IF - It could be used to execute an unauthorised or erroneous section of code when a certain condition is true or false.
    5. PERFORM UNTIL and PERFORM WHILE - The loop should be examined to make sure that it is activated the proper number of times.
    6. CALL - The module called may be the wrong one, or it may be unauthorised.


  5. Define an equivalence class. Give 2 examples of equivalence class.

    Two input values are in the same equivalence class if they are handled by the program in the same way. There are 2 types of equivalence classes : valid and invalid. Full testing requires the application of both valid and invalid tests. For example, suppose that the acceptable input for a data field is a number between 1 and 50. It is a waste of effort to test 45, 38, 12 and 16, because they all reside in the same valid equivalence class. It is more efficient to test just 1 number between 1 and 50, test the end points 1 and 50 (boundary testing) and test invalid cases, such as 0, a negative number or a number greater than 50.
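    The 1-to-50 example above can be sketched as a small set of tests. The validator function is hypothetical; the test values are one representative of the valid class, the two boundary values, and representatives of the invalid classes on each side.

```python
def accept(value):
    """Hypothetical validator: accepts whole numbers from 1 to 50 inclusive."""
    return 1 <= value <= 50

# One representative of the valid equivalence class is enough:
assert accept(25)
# Boundary testing of the end points:
assert accept(1) and accept(50)
# Representatives of the invalid classes below and above the range:
assert not accept(0)
assert not accept(-7)
assert not accept(51)
```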

  6. What elements should a test case matrix include?

    A test case matrix records and documents the test objectives, the expected results from the test, the test cases conducted and the actual results from the tests.

  7. What elements should an Error Report contain?

    Elements that an Error Report should contain include :

    1. Error report number
    2. Program name
    3. Report type, one of the following :

      1. Suggestion
      2. Design error
      3. Coding error
      4. Documentation error
      5. Query

    4. Severity, one of the following :

      1. Minor
      2. Serious
      3. Fatal

    5. Any attachment with a description of the attachment
    6. Note whether the error can be reproduced
    7. What the error was and how it can be reproduced
    8. Suggested fix
    9. Name of the tester and the date of the test
    10. Assigned (to coding team) member name and date
    11. Assigned member Resolution code, one of the following :

      1. Fixed
      2. Can't reproduce it
      3. Can't be fixed
      4. Disagree with suggestion
      5. Withdrawn by tester
      6. Works according to specifications

    12. Resolution certification which includes :

      1. Resolved by whom - coder's and tester's names
      2. Date

    13. Approval by project manager and date
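    A subset of these fields can be sketched as a record type. All field names and the enumerated values below follow the list above but are otherwise assumptions, not a standard schema; a real error-tracking form would carry the full set of fields.

```python
from dataclasses import dataclass, field

# Enumerations taken from the report-type and severity lists above.
REPORT_TYPES = {"suggestion", "design error", "coding error",
                "documentation error", "query"}
SEVERITIES = {"minor", "serious", "fatal"}

@dataclass
class ErrorReport:
    """Minimal sketch of an error report (hypothetical field names)."""
    number: int
    program_name: str
    report_type: str
    severity: str
    reproducible: bool
    description: str        # what the error was and how to reproduce it
    suggested_fix: str = ""
    tester: str = ""
    attachments: list = field(default_factory=list)

    def __post_init__(self):
        # Reject values outside the enumerations listed above.
        if self.report_type not in REPORT_TYPES:
            raise ValueError(f"unknown report type: {self.report_type}")
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")
```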


  8. Name error report types and their severity.

    Report type can be one of the following :

    1. Suggestion
    2. Design error
    3. Coding error
    4. Documentation error
    5. Query


    The severity can be one of the following :

    1. Minor
    2. Serious
    3. Fatal


  9. List 6 ways to resolve error types.

    Error types can be resolved in one of the following ways :

    1. Fixed
    2. Can't reproduce it
    3. Can't be fixed
    4. Disagree with suggestion
    5. Withdrawn by tester
    6. Works according to specifications


  10. If a tester is unsure of what triggered an error, what should he or she do?

    He or she should write down everything that occurred just before the error was triggered. Even good guesses should be noted. If any or all of these items could be helpful to the person assigned to resolve the error, they should be attached to the error report.

  11. Define a regression test and explain its purpose.

    After a program error has been fixed, the test that uncovered the error in the first place should be repeated. This is a regression test. This retesting may discover new program errors that were masked by errors found in the first test. Added variations on the initial test, to make sure that the fix works, are part of the regression test.
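    A regression test for a hypothetical bug fix can be sketched as follows: the original failing case is repeated, and variations are added around it. The function and the assumed bug (the discount rate applied twice) are illustrative only.

```python
def discount(price, rate):
    # Fixed version of a hypothetical function whose original bug
    # (assumed here) applied the discount rate twice.
    return price * (1 - rate)

# The test case that uncovered the error in the first place:
assert discount(100, 0.2) == 80.0
# Added variations to confirm the fix works across the input range:
assert discount(100, 0.0) == 100.0
assert discount(0, 0.5) == 0.0
```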

  12. What's the purpose of a seeding model? Explain how it works and its relationship to helping estimate software reliability.

    Error seeding, or bebugging, randomly seeds a program with a number of known artificial errors that represent the kinds of errors typically encountered. These seeded errors are unknown to the testers. The program is run with the test cases. The error seeding method assumes that program reliability is related to the number of errors removed from it. After both real and seeded errors are detected during a test run, the number of remaining real errors is approximated by the formula :

    remaining no. of real errors     no. of real errors detected
    ------------------------------ = -----------------------------
    remaining no. of seeded errors   no. of seeded errors detected

    The proportion of errors not detected helps determine the quality of test cases and the general testing process, which in turn helps estimate software reliability. For the seeding method to give a fair approximation of the remaining number of real errors; that is, the level of program reliability, the seeded errors must be similar to the real errors.
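    The estimate above can be computed directly. Rearranging the proportion gives: remaining real errors = remaining seeded errors × (real errors detected / seeded errors detected). The function name and the example figures are illustrative.

```python
def estimate_remaining_errors(seeded_total, seeded_found, real_found):
    """Estimate remaining real errors from the seeding proportion.

    remaining real / remaining seeded = real found / seeded found
    """
    if seeded_found == 0:
        raise ValueError("no seeded errors detected; estimate is undefined")
    remaining_seeded = seeded_total - seeded_found
    return remaining_seeded * real_found / seeded_found

# Example: 20 errors seeded, 16 of them found, alongside 40 real errors
# found. Remaining seeded = 4, so estimated remaining real = 4 * 40 / 16.
print(estimate_remaining_errors(20, 16, 40))  # 10.0
```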

  13. Define risk-driven testing. What is its purpose?

    In most programs it is impossible to test all possibilities, so it makes sense to concentrate on the areas that present the greatest risk. Software modules should be plotted on a risk grid based on their probability of error and level of impact.

    From the module risk rating grid, the modules with the highest risk (for example, module A) should receive most of the testing effort, because they have the highest probability of an error occurring and the strongest impact on the program.
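    The risk grid can be sketched as a ranking: each module gets a probability-of-error rating and an impact rating, and testing effort follows the product. The modules and the 1-to-3 scales here are hypothetical.

```python
# Hypothetical risk grid: module -> (probability of error, impact),
# each rated on an assumed 1-to-3 scale.
modules = {"A": (3, 3), "B": (1, 3), "C": (3, 1), "D": (1, 1)}

# Rank modules by probability * impact, highest risk first; the top
# of the list should receive the most testing effort.
ranked = sorted(modules,
                key=lambda m: modules[m][0] * modules[m][1],
                reverse=True)
print(ranked[0])  # the highest-risk module
```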

  14. Define error density.

    Twenty percent of the modules may account for eighty percent of the errors. This error density pattern is present in most programs. Because of it, the probability that more errors exist in a module is proportional to the number of errors already detected in that module.
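    With per-module error counts in hand, this suggests ranking modules by errors already found and checking how concentrated the errors are. The counts below are invented to illustrate the twenty/eighty pattern: one module out of five holds eighty percent of the errors.

```python
# Hypothetical error counts per module from testing so far.
counts = {"A": 40, "B": 3, "C": 2, "D": 1, "E": 4}

# Rank modules by errors already detected, since more errors are
# likely wherever many have been found already.
ranked = sorted(counts, key=counts.get, reverse=True)

# Share of all errors contributed by the single worst module:
share = counts[ranked[0]] / sum(counts.values())
print(ranked[0], share)  # A 0.8 -> 20% of modules hold 80% of errors
```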

  15. Differentiate between module testing and integration testing.

    Integration testing is performed by combining modules in steps. While module testing concentrates on specific modules, integration testing is performed on a hierarchy of modules, especially on interfaces between modules.

  16. Define a stub module and describe how it works. Define a driver module and describe how it works.

    A module under test that invokes other modules and transmits data to them requires stub modules to model this relationship. A stub takes the place of a called module that hasn't been coded yet.

    Driver modules call the module under test and pass it test data. Stubs and drivers link modules so they can run in an environment close to the real one of the future.
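    A minimal sketch of this arrangement, with all names hypothetical: the stub stands in for a not-yet-coded module below the module under test, and the driver sits above it, supplying test data.

```python
def lookup_rate(region):
    # Stub: takes the place of a called module that hasn't been
    # coded yet, returning a fixed, plausible value.
    return 0.05

def compute_tax(amount, region):
    # The module under test; it invokes the (stubbed) lower module.
    return amount * lookup_rate(region)

def driver():
    # Driver: calls the module under test and passes it test data,
    # then checks the result against the expected value.
    result = compute_tax(200, "north")
    assert result == 10.0
    return result
```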

  17. What are the 3 components of systems testing? Briefly describe the purpose of each.

    Systems testing is the process of testing the integrated software in the context of the total system that it supports. The 3 components of systems testing are :

    1. Recovery Testing

      This test forces the software to fail in various ways and verifies that complete recovery is properly performed.
    2. Security Testing

      Test cases are conducted (by Tiger Teams) to verify that proper controls have been designed and installed in the system to protect it from a number of risks.
    3. Stress Testing

      This type of testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume. Stress testing validates the program's ability to handle large volumes of transactions in a timely manner.


  18. What is the purpose of acceptance testing? Who are the 2 key players in acceptance testing?

    Acceptance testing evaluates the new system to determine if it meets user requirements under operating conditions.
    The 2 key players in acceptance testing are :

    1. the test group
    2. the user test group, composed of all or a sample of the people who will work with the system under development.


  19. Define alpha and beta tests.

    Alpha testing is conducted in a natural setting, that is, a real-world operating environment, with systems professionals in attendance, usually as observers, recording errors and usage problems.

    Beta testing is similar to alpha testing, except that no systems professionals are present during user acceptance testing. The user test group records all problems, real or imagined, encountered during beta testing and reports them to the systems people periodically.