Tuesday 24 April 2012

Basics of Software Testing


Software Testing:
1. The process of operating a product under specified conditions, observing and recording the results, and evaluating some aspect of the product.
2. The process of executing a program or system with the intent of finding defects/bugs/problems.
3. The process of establishing confidence that a product/application does what it is supposed to do.

Verification:
"The process of evaluating a product/application/component to evaluate whether the output of the development phase satisfies the conditions imposed at the start of that phase.
Verification is a Quality control process that is used to check whether or not a product complies with regulations, specifications, or conditions imposed at the start of a development phase. This is often an internal process."

Validation:
"The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified user requirements.
Validation is Quality assurance process of establishing evidence that provides a high degree of assurance that a product accomplishes its intended requirements."

Quality Assurance (QA):
"A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
A planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and products or services conform to specified requirements."

Quality Control (QC):
The process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected.

White Box:
In this type of testing we use an internal perspective of the system/product to design test cases based on the internal structure. It requires programming skills to identify all flows and paths through the software. The test designer uses test case inputs to exercise flows/paths through the code and determines the appropriate outputs.

Gray Box:
Gray box testing is a software testing method that uses a combination of black box testing and white box testing. It is not purely black box testing, because the tester has knowledge of some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. For the remaining part, a black box approach is taken: inputs are applied to the software under test and the outputs are observed.

Black Box:
Black box testing takes an external perspective of the test object to design and execute the test cases. In this testing technique one need not know the test object's internal structure. These tests can be functional or non-functional, however these are usually functional. The test designer selects valid and invalid inputs and determines the correct output.
Below are some of the important types of testing.

Functional Testing:

Functional testing is performed to verify whether the product/application meets the intended specifications and the functional requirements mentioned in the documentation.
Functional tests are written from a user's perspective. These tests confirm that the system does what the users expect it to do.
Both positive and negative test cases are performed to verify that the product/application responds correctly. Functional testing is critically important for the product's success, since it is the customer's first opportunity to be disappointed.

Structural Testing:

In structural testing, a white box testing approach is taken, with a focus on the internal mechanism of a system or component.
 - Types of structural testing (illustrated in the sketch below):
– Branch testing
– Path testing
– Statement testing

System Testing:

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
System testing is performed on the entire system against the Functional Requirement Specification (FRS) and/or the System Requirement Specification (SRS).

Integration Testing:

Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
It is also defined as testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.

Unit Testing:

Testing of individual software components or groups of related components.
Also: testing conducted to evaluate whether systems or components pass data and control correctly to one another.
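
A minimal unit-test sketch using Python's built-in unittest module; add() is a hypothetical component standing in for the real unit under test:

import unittest

def add(a, b):
    """Stand-in for the individual component under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()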

Regression Testing:

When a defect found during verification is fixed, we need to verify that:
1) the fix was done correctly, and
2) the fix doesn't break anything else.
This is called regression testing.
Regression testing is performed to ensure that the reported errors are indeed fixed, and that the fixes made to the application do not cause new errors to occur.
It is the selective testing of a system or component to verify that modifications have not caused unintended effects.

Retesting:

Retesting means executing the same test case after the bug has been fixed, to confirm that the fix actually works.

Negative Testing:

Negative testing is testing the application beyond and below its limits.
For example, if the requirement is a name field (characters only), we can (see the sketch after this list):
1) Enter numbers.
2) Enter special/ASCII symbol characters.
3) Enter numbers first and then some characters.
4) If the name has a minimum or maximum length, enter values outside those limits.
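
A small Python sketch of these negative cases against a hypothetical name field (the alphabetic-only rule and 10-character limit are assumptions made for illustration):

import re

MAX_LEN = 10  # assumed maximum length for the name field

def is_valid_name(name):
    # Accept only 1 to MAX_LEN alphabetic characters.
    return bool(re.fullmatch(r"[A-Za-z]{1,%d}" % MAX_LEN, name))

# Inputs that should be rejected (the negative cases from the list above).
negative_inputs = [
    "12345",              # numbers only
    "N@me!",              # special/symbol characters
    "123abc",             # numbers followed by characters
    "a" * (MAX_LEN + 1),  # beyond the maximum length
]

for value in negative_inputs:
    assert not is_valid_name(value), f"should have been rejected: {value!r}"

assert is_valid_name("Ramesh")  # one positive case for contrast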

Performance Testing:

Performance testing exercises the product/application with respect to its various time-critical functionalities; it benchmarks those functionalities with respect to time. It is performed on a production-sized setup.
Performance tests determine the end-to-end timing (benchmarking) of various time-critical business processes and transactions while the system is under low load but with a production-sized database. This sets the 'best possible' performance expectation under a given infrastructure configuration.
It can also serve to validate and verify other quality attributes of the system, such as scalability (measurable or quantifiable), reliability and resource usage.
Under performance testing we define the essential system-level performance requirements that will ensure the robustness of the system. These requirements are defined in terms of key behaviors of the system and the stress conditions under which the system must continue to exhibit those key behaviors. A simple timing sketch follows the examples below.
Some examples of performance parameters (in a patient monitoring system - a healthcare product) are:
1. Real-time parameter numeric values match the physiological inputs
2. Physiological input changes cause parameter numeric and/or waveform modifications on the display within xx seconds.
3. The system shall transmit the numeric values frequently enough to attain an update rate of x seconds or shorter at a viewing device.
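
A rough Python timing sketch of the benchmarking idea described above; process_waveform() and the 2-second target are placeholders, not values from any real specification:

import time

def process_waveform():
    """Stand-in for the real time-critical operation being benchmarked."""
    time.sleep(0.1)

TARGET_SECONDS = 2.0  # assumed performance target

start = time.perf_counter()
process_waveform()
elapsed = time.perf_counter() - start

print(f"elapsed = {elapsed:.3f}s (target <= {TARGET_SECONDS}s)")
assert elapsed <= TARGET_SECONDS, "performance target missed"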




Stress Testing:

1. Stress Tests determine the load under which a system fails, and how it fails.
2. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how.
3. A graceful degradation under load leading to non-catastrophic failure is the desired result.
Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.
Some examples of stress parameters (in a patient monitoring system - a healthcare product) are:
1. Patient admitted for 72 hours, with all 72 hours of data available for all the parameters (trends).
2. Repeated admit/discharge (patient connection and disconnection).
3. Continuous printing.
4. Continuous alarming condition.

Load Testing:

Load testing is the activity in which the anticipated load is applied to the system and then increased gradually, checking when performance starts to degrade.
Load tests are end-to-end performance tests under anticipated production load. The primary objective of this test is to determine the response times for various time-critical transactions and business processes.
Some of the key measurements provided for a web-based application include (see the sketch after this list):
1. How many simultaneous users can the web site support?
2. How many simultaneous transactions can the web site support?
3. Page load timing under various traffic conditions.
4. Identifying the bottlenecks.
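
A simplified Python sketch of the load-testing idea: a number of simulated users run the same transaction concurrently and response times are collected. simulate_transaction() is a stand-in; a real load test would drive the actual application (or use a dedicated load-testing tool):

import time
from concurrent.futures import ThreadPoolExecutor

def simulate_transaction(user_id):
    """Stand-in for one user's request/transaction; returns its duration."""
    start = time.perf_counter()
    time.sleep(0.05)  # placeholder for the real request
    return time.perf_counter() - start

SIMULTANEOUS_USERS = 50  # assumed anticipated load

with ThreadPoolExecutor(max_workers=SIMULTANEOUS_USERS) as pool:
    timings = list(pool.map(simulate_transaction, range(SIMULTANEOUS_USERS)))

print(f"users={SIMULTANEOUS_USERS} "
      f"avg={sum(timings)/len(timings):.3f}s max={max(timings):.3f}s")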

Adhoc Testing:

Ad hoc testing is a commonly used term for software testing performed without planning or documentation.
The tests are intended to be run only once, unless a defect is discovered.

Exploratory Testing:

Exploratory testing is a method of manual testing that is described as simultaneous learning, design and execution.

Exhaustive Testing:

Testing which covers all combinations of input values and preconditions for an element of the software under test.

Sanity Testing:

A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep: a few functions/parameters are tested, and all of their main features are checked.
It can also be performed on the overall application (all features) initially, to check whether the application is acceptable in terms of availability and usability.
Sanity testing is usually done by a test engineer.

Smoke Testing:

In the software industry, smoke testing is a wide and shallow approach in which all areas of the application are tested without going too deep.
Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke.
When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing is usually done by developers or white box engineers.

Soak Testing:

Soak testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use.
For example, in software testing, a system may behave exactly as expected when tested for 1 hour. However, when it is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.
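
A Python sketch of a soak-test loop that repeats a scenario for a fixed duration while sampling memory with tracemalloc; run_scenario() and the 10-second duration are placeholders (real soak runs last hours or days):

import time
import tracemalloc

def run_scenario():
    """Stand-in for the real workload repeated during the soak run."""
    _ = [i * i for i in range(10_000)]

SOAK_SECONDS = 10  # placeholder duration for illustration
tracemalloc.start()
deadline = time.time() + SOAK_SECONDS

while time.time() < deadline:
    run_scenario()
    current, peak = tracemalloc.get_traced_memory()
    # A steadily growing 'current' value over many iterations hints at a leak.
    print(f"current={current/1024:.1f} KiB peak={peak/1024:.1f} KiB")
    time.sleep(1)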

Compatibility Testing:

Compatibility testing is done to check that the system/application is compatible with its working environment.
For example, if it is a web-based application then browser compatibility is tested.
If it is an installable application/product then operating system compatibility is tested.
Compatibility testing verifies that your product functions correctly on a wide variety of hardware, software, and network configurations. Tests are run on a matrix of platform hardware configurations including high-end, core-market, and low-end machines.

Alpha Testing:

Testing performed by actual customers at the developer’s site.

Beta Testing:

Testing performed by actual customers at their site (the customer's site).




Acceptance Testing:

Formal testing conducted to enable a user, customer or other authorized entity to determine whether to accept a system or component.

Static Testing:

Testing that intends to find defects/bugs without executing the software or the code is called static testing. Examples: reviews, walkthroughs, CIP (Code Inspection Procedure).
Static testing is a form of software testing where the software isn't actually executed. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and/or manually reviewing the code or document to find errors. This type of testing can be done by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.

Dynamic testing:

Dynamic testing is testing performed by executing the software; functional testing is a common example.


Bug Life Cycle

Some terminologies in the Defect / Bug life cycle.

Bug:

An issue found before the product is released to the market (found before production).

Defect:

An issue found after the product is released to the market (found after production).



Defect States:

States:
-Open
-Fixed
-Verified
-Closed
-Double/Duplicate
-Rejected (works as per requirements, won't fix, works for me)
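
A Python sketch of the states listed above as a simple transition table; the allowed transitions are illustrative assumptions, since the exact workflow differs between defect-tracking tools:

# Allowed transitions between the defect states listed above (assumed).
ALLOWED_TRANSITIONS = {
    "Open":      {"Fixed", "Duplicate", "Rejected"},
    "Fixed":     {"Verified", "Open"},   # reopened if the fix fails retest
    "Verified":  {"Closed"},
    "Duplicate": {"Closed"},
    "Rejected":  {"Closed", "Open"},     # may be reopened after discussion
    "Closed":    set(),
}

def move(defect, new_state):
    if new_state not in ALLOWED_TRANSITIONS[defect["state"]]:
        raise ValueError(f"cannot move {defect['state']} -> {new_state}")
    defect["state"] = new_state

bug = {"id": "BUG-101", "state": "Open"}
move(bug, "Fixed")
move(bug, "Verified")
move(bug, "Closed")
print(bug)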


Severity:

Severity refers to the level of impact the defect has on the users. It is set by the tester. Below are some common classes.
Unacceptable Risk/Blocker: A crash, or a risk to the patient/user, operator, service personnel, etc. Creates a safety risk.
Design NC/Critical: Non-conformance to design input (product specification).
Improvement Opportunity: Observations that do not represent a design defect or belong to any of the previous classes:
- Enhancements
- Productivity opportunities
However, this classification varies from company to company.

Priority:

Priority refers to the "urgency" with which the incident has to be fixed. It is set by the lead.
In most cases a critical incident will also be of high priority.

Defect example for each severity:

Unacceptable Risk - In a patient monitoring system, the product/application crashes, or an incorrect parameter value is shown (e.g. a heart rate of 80 is displayed instead of 60).
Design NC - Patient information not displayed under User Information.
Minor Issue - IP address textbox should be grayed out when DHCP is selected.
Improvement Opportunity - Some cosmetic defect that is negligible, e.g. when a sub-menu is opened the borders of the sub-menu are cut off; or a documentation issue.

What to do when a defect is not reproducible:

1. Test on another setup.
2. Take pictures and video the first time the defect occurs.
3. Save and Attach the Log file.



Test design Techniques

Some general software documentation terms.

USR:

User Requirements Specification - Contains User (Customer) requirements received from the user/client.

PS:

Product Specification - derived from the USR such that it can be implemented in the product. A high-level product requirement.

DRS:

Design Requirement Specifications - Related to design. Design requirements. For hardware components.

SRS:

Software Requirement Specifications - Low level requirements derived from PS. For Software related components.

SVP:

Software Verification Procedure written from SRS. These are the actual test cases.

DVP:

Design Verification Procedure written from DRS. More design oriented test cases.

Some of the Test Design Techniques are as below,

Test Design Technique 1 - Fault Tree analysis

Fault tree analysis (FTA) is useful both in designing new products/services (test cases for new components) and in dealing with identified problems in existing products/services. It is a failure analysis in which the system is analyzed using Boolean logic.
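
A tiny Python sketch of the Boolean-logic idea behind a fault tree: basic events are combined through AND/OR gates to decide whether the top-level failure occurs. The events and gates below are invented for illustration:

def or_gate(*inputs):
    return any(inputs)

def and_gate(*inputs):
    return all(inputs)

# Basic events (True = the event has occurred).
power_supply_fails = False
battery_flat = True
sensor_fault = True
software_crash = False

# Top event: the monitor stops displaying data.
no_power = and_gate(power_supply_fails, battery_flat)
no_data = or_gate(sensor_fault, software_crash)
top_event = or_gate(no_power, no_data)

print("Top-level failure occurs:", top_event)  # True, via the sensor fault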

Test Design Technique 2 - Boundary value analysis

Boundary value analysis is a software test design technique in which tests are designed to include representatives of boundary values; the test cases are developed around the boundary conditions. A common example: if a text box (such as a username field) supports 10 characters, we can write test cases with 0, 1, 5, 10 and more than 10 characters, concentrating on the values at and just around the boundaries (such as 9 and 11).
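
A Python sketch of the username example above, exercising the values at and around the boundaries (the validation rule itself is an assumption made for illustration):

def username_is_valid(name):
    # Assumed rule: 1 to 10 characters are accepted.
    return 1 <= len(name) <= 10

# Lengths clustered around the boundaries, plus one interior value.
cases = {0: False, 1: True, 5: True, 9: True, 10: True, 11: False}

for length, expected in cases.items():
    result = username_is_valid("a" * length)
    assert result == expected, f"length {length}: expected {expected}"
print("all boundary cases passed")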

Test Design Technique 3 - Equivalence partitioning

Equivalence partitioning is a software test design technique that divides the input data of a software unit into partitions of data from which test cases can be derived.
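
A Python sketch of equivalence partitioning for an assumed "ages 18 to 60 are accepted" rule: the input space is split into three partitions and one representative value is tested per partition:

def accepts_age(age):
    # Assumed rule used only for this illustration.
    return 18 <= age <= 60

partitions = {
    "below valid range": (10, False),  # representative of ages < 18
    "valid range":       (35, True),   # representative of 18-60
    "above valid range": (70, False),  # representative of ages > 60
}

for name, (value, expected) in partitions.items():
    assert accepts_age(value) == expected, name
print("one representative per partition covers the rule")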

Test Design Steps - Test case writing steps

1. Read and analyze the requirement.
2. Write the related prerequisites and information steps if required (e.g. if some setting should already have been done, or some browser should have been selected).
3. Write the procedure (steps to perform a connection, configuration, etc.). This will contain the majority of the steps to reproduce if this test case fails.
4. Write a step to capture the tester input/record. This is used for objective evidence.
5. Write the verify step (usually the expected result). A sketch mapping these steps to a test function follows.
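
A sketch that maps the five steps above onto a single automated Python test; the settings, values and helper names are hypothetical:

import unittest

class TestDeviceConfiguration(unittest.TestCase):
    def test_ip_address_is_applied(self):
        # Steps 1-2. Prerequisite / information: static addressing selected.
        settings = {"addressing": "static"}

        # Step 3. Procedure: perform the configuration steps.
        settings["ip_address"] = "192.168.1.10"

        # Step 4. Tester input/record: capture objective evidence.
        print("configured settings:", settings)

        # Step 5. Verify step: the expected result.
        self.assertEqual(settings["ip_address"], "192.168.1.10")

if __name__ == "__main__":
    unittest.main()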

What is a Test Case? – It is a document which specifies the test inputs, events and expected results developed for a particular objective, in order to exercise a particular program path or to verify compliance with a specific requirement, based on the test specification.
Areas for Test Design - Below are some of the areas in test design.
o Deriving Test Cases from Use Cases
o Deriving Test Cases from Supplementary Specifications
    - Deriving Test Cases for Performance Tests
    - Deriving Test Cases for Security / Access Tests
    - Deriving Test Cases for Configuration Tests
    - Deriving Test Cases for Installation Tests
    - Deriving Test Cases for other Non-Functional Tests
o Deriving Test Cases for Unit Tests
    - White-box tests
    - Black-box tests
o Deriving Test Cases for Product Acceptance Tests
o Building Test Cases for Regression Tests

Test case content:

The contents of a test case are,
* Prerequisites
* Procedures
* Information if required
* Tester input/record
* Verify step

Please refer to the Software Test Templates area for a Test Case Template.
Types of Test Cases:

Test cases are often categorized or classified by the type of test / requirement for test they are associated with, and will vary accordingly. Best practice is to develop at least two test cases for each requirement for test:
1. A test case to demonstrate the requirement has been achieved, often referred to as a positive test case.
2. Another test case, reflecting an unacceptable, abnormal or unexpected condition or data, to demonstrate that the requirement is only achieved under the desired condition, referred to as a negative test case.



Software Testing Process


One of the main processes involved in Software Testing is the preparation of the Test Plan.

Test Plan

The contents of a Test Plan would include the following:
- Purpose.
- Scope.
- References.
- Resources Required.
   1. Roles and responsibilities
   2. Test Environment and Tools
- Test Schedule.
- Types of Testing involved.
- Entry/Exit criteria.
- Defect Tracking.
- Issues/Risks/Assumptions/Mitigations.
- Deviations.

Please refer to the Software Test Templates area for a Test Plan Template.

Process involved in Test Case Design:

1. Review of requirements.
2. Provide review comments for the requirements.
3. Fix the review comments.
4. Baseline the requirements document.
5. Prepare test cases with respect to the baselined requirements documents.
6. Send the test cases for review.
7. Fix the comments for the test cases.
8. Baseline the test case document.
9. If there are any updates to the requirements, update the requirements document.
10. Send the updated requirements document for review.
11. Fix any comments, if received.
12. Baseline the requirements document.
13. Update the test case document with respect to the latest baselined requirements document.

Please refer to the Software Test Templates area for a Test Case Template.

Traceability matrix:

A traceability matrix is a matrix that associates the requirements with their work products (test cases). It can also be used to associate use cases with requirements.
The advantage of traceability is to ensure the completeness of requirements coverage.
Every test case should be associated with a requirement, and every requirement should have one or more associated test cases.
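
A small Python sketch of a traceability check: every requirement should have at least one test case and every test case should trace back to a requirement. The IDs are invented for illustration:

# Requirement -> test cases mapping (illustrative data).
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],            # gap: requirement with no test case
}
all_test_cases = {"TC-001", "TC-002", "TC-003", "TC-004"}

uncovered_requirements = [r for r, tcs in traceability.items() if not tcs]
traced_test_cases = {tc for tcs in traceability.values() for tc in tcs}
orphan_test_cases = all_test_cases - traced_test_cases

print("requirements without tests:", uncovered_requirements)   # ['REQ-003']
print("test cases without a requirement:", orphan_test_cases)  # {'TC-004'}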




Testers Role

The various Roles and Responsibilities of a Tester or Senior Software Tester

Requirement analysis - in case of doubt, check with global counterparts or a clinical specialist.
Test design - always clarify; never assume and write.
Review of other test design documents.
Approving test design documents.
Test environment setup.
Test results gathering.
Evaluation of any tools if required - e.g. QTP, the Valgrind memory profiling tool.
Mentoring new joiners and helping them ramp up.

Finding Defects

Testers need to identify two types of defects:
Variance from Specifications – A defect from the perspective of the builder of the product.
Variance from what is Desired – A defect from a user (or customer) perspective.

Testing Constraints

Anything that inhibits the tester’s ability to fulfill their responsibilities is a constraint.
Constraints include:
Limited schedule and budget
Lacking or poorly written requirements
Changes in technology
Limited tester skills

The various Roles and Responsibilities of a Software Test Lead

The role of the Test Lead is to effectively lead the testing team. To fulfill this role the Lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles. In practice, this means that the Lead must manage and implement or maintain an effective testing process. This involves creating a test infrastructure that supports robust communication and a cost-effective testing framework.
The Test Lead is responsible for:
* Defining and implementing the role testing plays within the organizational structure.
* Defining the scope of testing within the context of each release / delivery.
* Deploying and managing the appropriate testing framework to meet the testing mandate.
* Implementing and evolving appropriate measurements and metrics.
o To be applied against the Product under test.
o To be applied against the Testing Team.
* Planning, deploying, and managing the testing effort for any given engagement / release.
* Managing and growing Testing assets required for meeting the testing mandate:
o Team Members
o Testing Tools
o Testing Process
* Retaining skilled testing personnel.

The various Roles and Responsibilities of a Software Test Manager

* Manage and deliver testing projects with multi-disciplinary teams while respecting deadlines.
* Optimize and increase testing team productivity by devising innovative solutions or improving existing processes.
* Experience identifying, recruiting and retaining strong technical members.
* Bring a client-based approach to all aspects of software testing and client interactions when required.
* Develop and manage organizational structure to provide efficient and productive operations.
* Provide issue resolution to all direct reportees/subordinates, escalating issues when necessary with appropriate substantiation and suggestions for resolution.
* Provide required expertise to all prioritization, methodology, and development initiatives.
* Assist direct reports in setting organization goals and objectives and provide annual reviews with reporting of results to HR and the executive team.
* Work with Product Management and other teams to meet organization initiatives.
* Promote customer orientation through organizing and motivating development personnel.
The Test Manager must understand how testing fits into the organizational structure, in other words, clearly define its role within the organization.

Challenges in People Relationships testing

The top ten people challenges have been identified as:
Training in testing
Relationship building with developers
Using tools
Getting managers to understand testing
Communicating with users about testing
Making the necessary time for testing
Testing “over the wall” software
Trying to hit a moving target
Fighting a lose-lose situation
Having to say “no”
*According to the book “Surviving the Top Ten Challenges of Software Testing, A People-Oriented Approach” by William Perry and Randall Rice




Software Test Management

Software Test Management involves a set of activities for managing a software testing cycle. It is the practice of organizing and controlling the process and activities required for the testing effort.
The goals of Software Test Management are to plan, develop, execute, and assess all testing activities within application/product development. This includes coordinating the efforts of all those involved in the testing cycle, tracking dependencies and relationships among test assets and, most importantly, defining, measuring, and tracking quality goals.
Software Test Management can be broken into different phases: organization, planning, authoring, execution, and reporting.
Test artifact and resource organization is a clearly necessary part of test management. This requires organizing and maintaining an inventory of items to test, along with the various things used to perform the testing. This addresses how teams track dependencies and relationships among test assets. The most common types of test assets that need to be managed are:
* Test scripts
* Test data
* Test software
* Test hardware
Test planning is the overall set of tasks that address the questions of why, what, where, and when to test. The reason why a given test is created is called a test motivator (for example, a specific requirement must be validated). What should be tested is broken down into many test cases for a project. Where to test is answered by determining and documenting the needed software and hardware configurations. When to test is resolved by assigning the testing to iterations (or cycles, or time periods).
Test authoring is a process of capturing the specific steps required to complete a given test. This addresses the question of how something will be tested. This is where somewhat abstract test cases are developed into more detailed test steps, which in turn will become test scripts (either manual or automated).
Test execution entails running the tests by assembling sequences of test scripts into a suite of tests. This is a continuation of answering the question of how something will be tested (more specifically, how the testing will be conducted).
Test reporting is how the various results of the testing effort are analyzed and communicated. This is used to determine the current status of project testing, as well as the overall level of quality of the application or system.
The testing effort will produce a great deal of information. From this information, metrics can be extracted that define, measure, and track quality goals for the project. These quality metrics then need to be passed to whatever communication mechanism is used for the rest of the project metrics.
A very common type of data produced by testing, one which is often a source for quality metrics, is defects. Defects are not static, but change over time. In addition, multiple defects are often related to one another. Effective defect tracking is crucial to both testing and development teams.



Software Test Estimation



Estimation must be based on previous projects: all estimation should be based on previous projects.
Estimation must be recorded: all decisions should be recorded. This is important because if the requirements change for any reason, the records help the testing team estimate again.
Software test estimation shall always be based on the software requirements: all estimation should be based on what will be tested. The software requirements shall be read and understood by the testing team as well as the development team; without the testing team's participation, no serious estimation can be made.
Estimation must be verified: two spreadsheets can be created for recording the estimations and compared at the end. If the estimation deviates from the recorded ones, a re-estimation should be made.
Software test estimation must be supported by tools: tools such as a spreadsheet containing metrics can automatically calculate the cost and duration of each testing phase.
Software test estimation shall be based on expert judgment: experienced resources can more easily estimate how long the testing will take.

Estimation for Test execution

It can be based on the following points (a rough sketch follows the list):
* Number of Man Days available for testing
* Number of test cases to be executed
* Complexity of test cases
* Based on previous cycle.
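
A rough Python sketch of an execution estimate based on the points above; the counts, complexity weights and productivity figures are illustrative assumptions, not values from any standard:

# Assumed test-case counts by complexity and assumed execution times.
test_cases = {"simple": 200, "medium": 120, "complex": 40}
minutes_per_case = {"simple": 10, "medium": 20, "complex": 45}
hours_per_day = 8
testers = 4

total_minutes = sum(test_cases[c] * minutes_per_case[c] for c in test_cases)
person_days = total_minutes / 60 / hours_per_day
calendar_days = person_days / testers

print(f"effort ~= {person_days:.1f} person-days, "
      f"~{calendar_days:.1f} calendar days with {testers} testers")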

Functional Point Analysis

A method for Estimation
One of the initial design criteria for function points was to provide a mechanism that both software developers and users could utilize to define functional requirements. It was determined that the best way to gain an understanding of the users' needs was to approach their problem from the perspective of how they view the results an automated system produces. Therefore, one of the primary goals of Function Point Analysis is to evaluate a system's capabilities from a user's point of view. To achieve this goal, the analysis is based upon the various ways users interact with computerized systems. From a user's perspective a system assists them in doing their job by providing five (5) basic functions. Two of these address the data requirements of an end user and are referred to as Data Functions. The remaining three address the user's need to access data and are referred to as Transactional Functions.
The Five Components of Function Points
Data Functions
* Internal Logical Files
* External Interface Files
Transactional Functions
* External Inputs
* External Outputs
* External Inquiries
In addition to the five functional components described above there are two adjustment factors that need to be considered in Function Point Analysis.
Functional Complexity - The first adjustment factor considers the functional complexity of each unique function. Functional complexity is determined by the combination of data groupings and data elements of a particular function. The number of data elements and unique groupings are counted and compared to a complexity matrix that rates the function as low, average or high complexity. Each of the five functional components (ILF, EIF, EI, EO and EQ) has its own complexity matrix.
Value Adjustment Factor - The unadjusted function point count is multiplied by the second adjustment factor, called the Value Adjustment Factor. This factor considers the system's technical and operational characteristics.
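
A Python sketch of an unadjusted function point count. The weight table uses commonly cited IFPUG values (verify them against the current standard before relying on them) and the component counts are invented:

# Commonly cited weights per component: (low, average, high).
WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}
COMPLEXITY_INDEX = {"low": 0, "average": 1, "high": 2}

# Illustrative counts: (component, complexity, how many were identified).
counted = [
    ("EI", "average", 6),
    ("EO", "high", 2),
    ("EQ", "low", 4),
    ("ILF", "average", 3),
    ("EIF", "low", 1),
]

ufp = sum(WEIGHTS[comp][COMPLEXITY_INDEX[cx]] * n for comp, cx, n in counted)
print("Unadjusted Function Points:", ufp)
# The UFP would then be multiplied by the Value Adjustment Factor.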


 Software Test Metrics

Software test metrics are used in decision making; they are derived from raw test data. What cannot be measured cannot be managed, so test metrics are used in test management and help in showing the progress of testing.
Some of the Software Test Metrics are as below,

What is a Test Summary

It is a document summarizing testing activities and results, and it contains an evaluation of the test items.

Requirements Volatility

Formula = {(No. of requirements added + No. of requirements deleted + No. of requirements modified) / No. of initial approved requirements} * 100
Unit Of measure = Percentage

Review Efficiency

Components - No. of Critical, Major & Minor review defects
- Effort spent on review in hours
- Weightage Factors for defects:
- Critical = 1; Major = 0.4; Minor = 0.1
Formula = (No. of weighted review defects/ Effort spent on reviews)
Unit Of measure = Defects per person hour

Productivity in Test Execution

Formula = (No. of test cases executed / Time spent in test execution)
Unit Of measure = Test Cases per person per day
Here the time is the cumulative time of all the resources. Example - 1000 test cases were executed in a cycle by 4 resources:
If resource 1 executed 300 test cases in 2 days,
resource 2 executed 400 test cases in 3 days,
resource 3 executed 75 test cases in 1 day, and
resource 4 executed 225 test cases in 4 days,
then the cumulative time spent executing the 1000 test cases is 10 man-days,
and the Productivity in Test Execution = 1000/10 = 100.
So the productivity in test execution is 100 test cases per person per day.
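
The same calculation in a short Python sketch, reproducing the worked example above:

# (test cases executed, days spent) per resource, from the example above.
executions = [
    (300, 2),   # resource 1
    (400, 3),   # resource 2
    (75,  1),   # resource 3
    (225, 4),   # resource 4
]

total_cases = sum(cases for cases, _ in executions)
total_days = sum(days for _, days in executions)
productivity = total_cases / total_days

print(f"{total_cases} test cases / {total_days} man-days = "
      f"{productivity:.0f} test cases per person per day")   # 100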

Defect Rejection Ratio

Formula = (No. of defects rejected / Total no. of defects raised) * 100
Unit of Measure = Percentage

Defect Fix Rejection Ratio

Formula = (No. of defect fixes rejected / No. of defects fixed) * 100
Unit of Measure = Percentage

Delivered Defect Density

Components - No. of Critical, Major & Minor review defects
- Weightage Factors for defects:
- Critical = 1; Major = 0.4; Minor = 0.1
Formula = [(No of weighted defects found during Validation/customer review + acceptance testing)/ (Size of the work product)]
Unit Of measure = Defects for the work product / Cycle.

Outstanding defect ratio

Formula = (Total number of open defects/Total number of defects found) * 100
Unit Of measure = Percentage

COQ (Cost of Quality)

Formula = [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project effort] * 100
Unit Of measure = Percentage


Some metrics related to the entire Test cycle.

Schedule Variance

Schedule variance wrt Latest Baselines
Formula = {(Actual End Date - Latest Baselined End Date)/(Latest Baselined End Date - Latest Baselined Start Date)} *100
Unit Of measure = Percentage
Schedule variance wrt Original Baselines
Formula = {(Actual End Date - Original Baselined End Date)/(Original Baselined End Date - Original Baselined Start Date)} *100
Unit Of measure = Percentage

Effort variance

Effort variance wrt Original Baseline
Formula = {(Actual Effort - Estimated Effort) / (Estimated Effort)}* 100
Unit Of measure = Percentage
Effort variance wrt Last Revised Baseline
Formula = {(Actual Effort - Revised Effort) / (Revised Effort)} * 100
Unit Of measure = Percentage
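
A short Python sketch of the schedule-variance formula using Python's date type; the dates are invented for illustration:

from datetime import date

baselined_start = date(2012, 3, 1)
baselined_end = date(2012, 3, 31)
actual_end = date(2012, 4, 6)

planned_days = (baselined_end - baselined_start).days
slip_days = (actual_end - baselined_end).days
schedule_variance = slip_days / planned_days * 100

print(f"schedule variance = {schedule_variance:.1f}%")   # 6/30 days = 20.0%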



Software Test Release Metrics

Some of the Software test release related metrics are as below. However they vary from company to company and project to project.

Test Cases executed

General Guidelines:
1. All the test cases should be executed at least once (100% test case execution).
2. Passed test cases >= 98% (this number can vary).

Effort Distribution

General Guidelines:
1. Sufficient effort has been spent on all the phases, components/modules of the software/product under test.
2. This needs to be quantified as (Effort spent per module / Effort spent on all modules) * 100.
Example: effort also needs to be quantified for each phase, such as requirements analysis, design (test case design), execution, etc.

Open Defects with priority

General Guidelines:
1. If we plot a graph with number of open defects against time, it should show a downward trend.
2. There should not be any open show stoppers/blockers before release. So 0 blockers in open state.
3. Close to 0 critical/major defects before release. In practice this is rarely 0, as some fixes will be postponed to the next release as long as they are acceptable.






Software Testing Tools

The software testing tools can be classified into the following broad categories.
Test Management Tools
White Box Testing Tools
Performance Testing Tools
Automation Testing Tools

Test Management Tools:

Some of the objectives of a Test Management Tool are as below. However all of these characteristics may not be available in one single tool. So the team may end up using multiple tools, with each tool focusing on a set of key areas.
* To manage Requirements.
* To Manage Manual Test Cases, Suites and Scripts.
* To Manage Automated Test Scripts.
* To manage test execution and the various execution activities (recording results, etc.).
* To be able to generate various reports with regard to status, execution, etc.
* To Manage defects. In other words a defect tracking tool.
* Configuration management / version control (for example, for controlling and sharing the Test Plan, Test Reports, Test Status, etc.).
Some of the tools which can be used, along with their key areas of expertise, are as below,
Uses of Telelogic/IBM DOORS
1. Used for writing requirements/test cases.
2. Baselining functionality is available.
3. Documents can be exported to Microsoft Excel/Word.
4. Traceability matrix implemented in DOORS, so requirements can be mapped to test cases and vice versa.
Uses of HP Quality Center
1. Used for writing requirements/test cases.
2. Used for baselining of documents.
3. For exporting of documents.
4. Traceability matrix, so requirements can be mapped to test cases and vice versa.
Some of the defect tracking tools are as below,
* IBM Lotus Notes
* Bugzilla (Open Source/Free)
Some of the other Test management tools are as below,
* Bugzilla Testopia
* qaManager
* TestLink
Configuration management Tool.
Roger Pressman, in his book Software Engineering: A Practitioner's Approach, states that CM "is a set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made."
In Software Testing, configuration management plays the role of tracking and controlling changes in the various test components (example - for Controlling and Sharing the Test Plan, Test Reports, Test Status, etc.). Configuration management practices include revision control and the establishment of baselines.
Some of the configuration management tools are,
* IBM Clearcase
* CVS
* Microsoft VSS


White Box Testing Tools

White Box Testing Tools:

Some of the aspects and characteristics which are required in a White Box Testing Tools are,
To check the Code Coverage
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing.
Some of the tools available in this space are,
Tools for C / C++
* IBM Rational Pure Coverage
* Cantata++
* Insure++
* BullseyeCoverage
Tools for C# .NET
* NCover
* Testwell CTC++
* Semantic Designs Test Coverage
* TestDriven.NET
* Visual Studio 2010
* PartCover (Open Source)
* OpenCover (Open Source)
Tools for COBOL
* Semantic Designs Test Coverage
Tools for Java
* Clover
* Cobertura
* Jtest
* Serenity
* Testwell CTC++
* Semantic Designs Test Coverage
Tools for Perl
* Devel::Cover
Tools for PHP
* PHPUnit with Xdebug
* Semantic Designs Test Coverage
To check Coding Standards
A comprehensive list of tools for checking coding standards can be found under "List of tools to check coding standards".
To check the Code Complexity
Code complexity is a measure of the number of linearly-independent paths through a program module and is calculated by counting the number of decision points found in the code (if, else, do, while, throw, catch, return, break, etc.).
Some of the free tools available for checking the code complexity are as below,
* devMetrics by Anticipating minds.
* Reflector Add-In.
To check Memory Leaks
A memory leak happens when an application/program has utilized memory but is unable to release it back to the operating system. A memory leak can reduce the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become utilized and all or part of the system or device stops working correctly, the application fails, or the system crashes.
Some of the free tools available for checking the memory leak issues are as below,
* Valgrind
* Mpatrol
* Memwatch

CAPA in Software Testing

CAPA stands for Corrective Action and Preventive Action. It is also a regulatory requirement: both the FDA and ISO require an active CAPA program as an essential element of a quality system.
CAPA also helps with customer satisfaction: the ability to correct existing problems or implement controls to prevent potential problems is equally important for continued customer satisfaction.
Quality Issues which are not caught/fixed soon enough have their own financial impact.
Corrective Action is the process of reacting to an existing product problem, customer issue or other nonconformity and fixing it.
Preventive Action is a process for detecting potential problems or nonconformances and resolving them.


Action plan for Defect Slippage:
1. Increase regression testing with respect to all the functions.
2. Adopt some validation test cases during regression, i.e. testing with respect to real-time scenarios.
3. Ensure that different kinds of devices are used during testing. Usually, across the different rounds of testing, the same device types are used by the tester; this happens when the device type to be selected is at the user's discretion and the user is more comfortable using the same type of device. This results in not all device types being tested.
Here "device" relates to cross compatibility.
4. Perform RCA (Root Cause Analysis) on the defects found after testing and identify the areas where most defects arise. Concentrate on those areas.
5. Check domain understanding among the team from time to time. Regular domain understanding sessions would help in increased awareness.
6. Set up a reliability system (system which can be used to check performance and reliability). Ensure that the Test Environment is connected to this system. Regularly send the crash logs to the development team to analyze if any new defects were responsible for the crash.


Action plan for Productivity Increase:
1. Regular domain knowledge training and brown bag sessions with respect to testing to increase knowledge and help with easier testing.
2. Assign testers a correct/good mix of documents, for example complex test case execution together with simple test case execution. This averages out the test productivity across easy and hard work. Ensure that this kind of mix is consistently implemented across all the members of the team.
3. Constant motivation and feedback.



Testing Infrastructure


The testing infrastructure consists of the testing activities, events, tasks and processes that immediately support automated, as well as manual, software testing. The stronger the infrastructure the more it provides for stability, continuity and reliability of the automated testing process.
The testing infrastructure includes:
• Test Plan
• Test cases
• Baseline test data
• A process to refresh or roll back baseline data
• Dedicated test environment, i.e. stable back end and front end
• Dedicated test lab
• Integration group and process
• Test case database, to track and update both automated and manual tests
• A way to prioritize, or rank, test cases per test cycle
• Coverage analysis metrics and process
• Defect tracking database
• Risk management metrics/process (McCabe tools if possible)
• Version control system
• Configuration management process
• A method/process/tool for tracing requirement to test cases
• Metrics to measure improvement
The testing infrastructure serves many purposes, such as:
• Having a place to run automated tests, in unattended mode, on a regular basis
• A dedicated test environment to prevent conflicts between on-going manual and automated testing
• A process for tracking results of test cases, both those that pass or fail
• A way of reporting test coverage levels
• Ensuring that expected results remain consistent across test runs
• A test lab with dedicated machines for automation enables a single automation test suite to conduct multi-user and stress testing
It is important to remember that it is not necessary to have all the infrastructure components in place in the beginning to be successful. Prioritize the list, add components gradually, over time, so the current culture has time to adapt and integrate the changes. Experience has proven it takes an entire year to integrate one major process, plus one or two minor components into the culture.
Again, based on experience, start by creating a dedicated test environment and standardizing test plans and test cases. This, along with a well-structured automated testing system will go a long way toward succeeding in automation.