Software Verification and Validation

December 30, 2011

Software Verification: a disciplined approach to evaluating whether a software product fulfills its requirements. It is also called static testing. Verification is done by systematically reading the contents of a software product with the intention of finding defects.

Methods of verification:

  1. Walk-through: An informal process, initiated by the author of the software product, in which a team helps locate defects and suggests improvements.
  2. Inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher-level documentation. It is the most formal review technique and is therefore always based on a documented procedure. Put another way, it is a word-by-word checking of a software product with the intention of:
     1. Locating defects
     2. Confirming traceability to the relevant requirements
     3. Checking conformance to the relevant standards and conventions
  3. Review: A formal examination. It involves five major roles:
     1. Author: constructs the work product, e.g. a programmer.
     2. Moderator: ensures the discussion proceeds along productive lines.
     3. Reader: leads the inspection, reading small logical units aloud and paraphrasing where required.
     4. Recorder: documents all defects that arise from the inspection meeting.
     5. Inspector: analyzes the work product and detects defects in it.

Types of Reviews:

In-Process Review:

  1. Assesses progress toward the product requirements.
  2. Performed during a specific period of the development cycle, e.g. the design period.
  3. Limited to a segment of the product.
  4. Used to find defects in both the work product and the work process.
  5. Catches defects early, when they are less costly to correct.

Milestone Review:

  1. Normally done at the end of an SDLC phase, when the author feels the product is error-free and can go to the next phase.
  2. Usually conducted by a manager.
  3. Main purpose is to decide whether the product can go to the next phase.
  4. Includes checking whether suitable inspections have been done.
  5. After the review, the product is "base-lined".

Post-Implementation Review:

  1. Also known as a "postmortem".
  2. Reviews the product, comparing planned against actual development results and checking compliance with the requirements.
  3. Used to improve the software development process.
  4. Conducted in a formal format.
  5. Typically conducted three to six months after implementation.

 

Classes of Reviews: 

Informal Review:

  1. Also called a peer review.
  2. Generally a one-on-one meeting between the author of a work product and a peer.
  3. Initiated as a request for input.
  4. Has no agenda.
  5. Results are not formally reported.
  6. Occurs as needed throughout each phase.

Semiformal Review:

  1. Facilitated by the author.
  2. A presentation is made, with comments at the end or throughout.
  3. Issues raised are captured and published.
  4. Possible solutions for defects are not discussed.
  5. Occurs one or more times during a phase.

Formal Review:

  1. Facilitated by a moderator.
  2. The moderator is assisted by a recorder.
  3. Defects are recorded and assigned.
  4. The meeting is planned.
  5. Materials are distributed beforehand.
  6. Participants are prepared.
  7. Defects found are tracked through the defect-tracking system.

Software Validation:

It is a disciplined approach to evaluating whether the final, as-built software product fulfills its specific intended use. It is also called dynamic testing. It is necessary to demonstrate not just that the software does what it is supposed to do, but also that it does not do what it is not supposed to do.

Software Validation: Levels of Testing:

[Figure: levels of testing]

Software Validation: Types of Testing

1. Unit Testing

  • A unit is the smallest piece of software that can be tested in isolation.
  • Unit testing is the procedure used to validate that an individual unit of source code works properly (see the sketch after this list).
  • Approaches:
    1. Black Box
    2. White Box
 2. Integration Testing

  • Starts at the module level, when various modules are integrated with each other to form a sub-system or the whole system.
  • More stress is given to the interfaces between the modules.
  • Focuses on the design and construction of the software architecture.
  • Four basic approaches to testing while integrating modules:
    1. Bottom Up
    2. Top Down
    3. Critical Part First
    4. Big Bang

i. Bottom up Integration Testing:

The program is combined and tested from the bottom of the tree to the top. If the higher-level modules are not available at the time of testing, a module driver (component driver) is used to stand in for them. Sub-trees are then put together and tested until the whole tree is covered. It is a very common and effective approach for object-oriented designs.

Component driver: a calling module for low-level components, developed for temporary use (see the sketch after this list).

  • Special code written to aid the integration
  • A routine that calls a particular component and passes a test case to it
  • Must take care of the driver's interface with the component under test
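
As a sketch of bottom-up integration (all names here are hypothetical): suppose a low-level tax_cents component is finished, but the higher-level billing module that will eventually call it is not. A throwaway driver plays the caller's role:

    def tax_cents(amount_cents, rate_percent):
        # Hypothetical low-level component, already unit-tested in isolation.
        return amount_cents * rate_percent // 100

    def driver():
        # Temporary component driver: stands in for the not-yet-available
        # higher-level module and passes test cases to the component.
        cases = [((10000, 8), 800), ((1999, 5), 99)]
        for (amount, rate), expected in cases:
            actual = tax_cents(amount, rate)
            assert actual == expected, (amount, rate, actual, expected)
        print("all driver cases passed")

    if __name__ == "__main__":
        driver()

Once the real higher-level module is ready, the driver is thrown away and the two modules are integrated and tested together.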

ii. Top down Integration Testing:

The program is combined and tested from the top of the tree downwards. If lower-level modules are not ready at the time of testing, stubs (dummy modules) are used to stand in for them.

Stubs: A component being tested may call another component that has not yet been built. A special-purpose program called a stub is used to simulate the activity of the missing component (see the sketch below).
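
Mirroring the driver sketch above (again with hypothetical names), a stub lets the high-level invoice logic be tested before the low-level tax component exists: the stub simply returns canned answers.

    def tax_cents_stub(amount_cents, rate_percent):
        # Stub: simulates the missing low-level tax component by
        # returning a canned answer instead of computing one.
        return 800

    def invoice_total_cents(amount_cents, rate_percent, tax_fn=tax_cents_stub):
        # High-level component under test; it calls the (stubbed)
        # low-level component through tax_fn.
        return amount_cents + tax_fn(amount_cents, rate_percent)

    assert invoice_total_cents(10000, 8) == 10800
    print("high-level logic verified against the stub")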

iii. Critical Part First: In this approach, the critical part of the program is designed, implemented, and tested first. This is important for time-critical systems, where the performance of the critical part determines the performance of the whole system.

iv. Big Bang Testing: A common approach in non-process-oriented organizations, in which all the modules are integrated at once. The maximum number of people and resources is put into building the product. The name comes from the theory that the universe was created in a single huge explosion of infinite energy.

 3. System Testing

  • System testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements (a sketch follows this list).
  • Validates that the system meets its functional and non-functional requirements.
  • Also intended to test up to and beyond the bounds defined in the software/hardware requirements specifications.
  • Final phase of testing before delivery.
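
As a rough sketch of the idea: a system test drives the finished program from the outside, exactly as a user would, and checks its observable behaviour against a specified requirement. The tiny command-line "system" below is hypothetical and is written to a temporary file only so the example is self-contained:

    import os
    import subprocess
    import sys
    import tempfile

    # Hypothetical whole system: a one-line billing CLI. A real system
    # test would target the actual, fully integrated application.
    APP = "import sys; a, r = int(sys.argv[1]), int(sys.argv[2]); print(a + a*r//100)"

    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(APP)
        path = f.name

    try:
        # Drive the system as a black box and check its observable output.
        result = subprocess.run([sys.executable, path, "10000", "8"],
                                capture_output=True, text=True)
        assert result.returncode == 0
        assert result.stdout.strip() == "10800"
        print("system-level requirement satisfied")
    finally:
        os.remove(path)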

 4. Acceptance Testing

  • Final stage of testing before the system is accepted for operational use.
  • Done with data supplied by the client.
  • Validates user needs (functional) and system performance (non-functional).

 5. Alpha Testing

  • Tested at the developer's site by the customer.
  • The developer "looks over the shoulder", recording errors and usage problems.
  • Tests are conducted in a controlled environment.

  6. Beta Testing

  • Conducted at one or more customer sites by the end users of the software.
  • A live application environment that cannot be controlled by the developer.
  • The customer records all problems encountered and reports them to the developer at regular intervals.