Friday 24 February 2012

ISTQB Foundation level Sample Paper 1

Paper 1


1. Software testing activities should start

a. as soon as the code is written
b. during the design stage
c. when the requirements have been formally documented
d. as soon as possible in the development life cycle

2. Faults found by users are due to:

a. Poor quality software
b. Poor software and poor testing
c. bad luck
d. insufficient time for testing

3.What is the main reason for testing software before releasing it?

a. to show that the system will work after release
b. to decide when the software is of sufficient quality to release
c. to find as many bugs as possible before release
d. to give information for a risk based decision about release

4. Which of the following statements is NOT true?

a. performance testing can be done during unit testing as well as during the testing of the whole system
b. the acceptance test does not necessarily include a regression test
c. verification activities should not involve testers (reviews, inspections, etc.)
d. test environments should be as similar to production environments as possible

5. When reporting faults found to developers, testers should be:

a. as polite, constructive and helpful as possible
b. firm about insisting that a bug is not a “feature” if it should be fixed
c. diplomatic, sensitive to the way they may react to criticism
d. All of the above

6. In which order should tests be run?

a. the most important tests first
b. the most difficult tests first (to allow maximum time for fixing)
c. the easiest tests first (to give initial confidence)
d. the order they are thought of

7. The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why?

a. the documentation is poor, so it takes longer to find out what the software is doing.
b. wages are rising
c. the fault has been built into more documentation, code, tests, etc.
d. none of the above

8. Which of the following is NOT true? The black box tester

a. should be able to understand a functional specification or requirements document
b. should be able to understand the source code.
c. is highly motivated to find faults
d. is creative in finding the system’s weaknesses

9. A test design technique is

a. a process for selecting test cases
b. a process for determining expected outputs
c. a way to measure the quality of software
d. a way to measure in a test plan what has to be done

10. Testware (test cases, test data set)

a. needs configuration management just like requirements, design and code
b. should be newly constructed for each new version of the software
c. is needed only until the software is released into production or use
d. does not need to be documented and commented, as it does not form part of the released software system

11. An incident logging system

a. only records defects
b. is of limited value
c. is a valuable source of project information during testing if it contains all incidents
d. should be used only by the test team.

12. Increasing the quality of the software, by better development methods, will affect the time needed for testing (the test phases) by:

a. reducing test time
b. no change
c. increasing test time
d. can’t say

13. Coverage measurement

a. is nothing to do with testing
b. is a partial measure of test thoroughness
c. branch coverage should be mandatory for all software
d. can only be applied at unit or module testing, not at system testing

14. When should you stop testing?

a. when time for testing has run out.
b. when all planned tests have been run
c. when the test completion criteria have been met
d. when no faults have been found by the tests run


15. Which of the following is true?

a. Component testing should be black box, system testing should be white box.
b. if you find a lot of bugs in testing, you should not be very confident about the quality of the software
c. the fewer bugs you find, the better your testing was
d. the more tests you run, the more bugs you will find.

16. What is the important criterion in deciding what testing technique to use?

a. how well you know a particular technique
b. the objective of the test
c. how appropriate the technique is for testing the application
d. whether there is a tool to support the technique

17. If the pseudocode below were a programming language, how many tests are required to achieve 100% statement coverage?

1. If x=3 then
2.    Display_messageX;
3.    If y=2 then
4.       Display_messageY;
5.    Else
6.       Display_messageZ;
7. Else
8.    Display_messageZ;

a. 1
b. 2
c. 3
d. 4



18. Using the same code example as question 17, how many tests are required to achieve 100% branch/decision coverage?

a. 1
b. 2
c. 3
d. 4

19. Which of the following is NOT a type of non-functional test?

a. State-Transition
b. Usability
c. Performance
d. Security

20. Which of the following tools would you use to detect a memory leak?

a. State analysis
b. Coverage analysis
c. Dynamic analysis
d. Memory analysis

21. Which of the following is NOT a standard related to testing?

a. IEEE 829
b. IEEE 610
c. BS7925-1
d. BS7925-2




22. Which of the following is the component test standard?


a. IEEE 829
b. IEEE 610
c. BS7925-1
d. BS7925-2

23. Which of the following statements is true?

a. Faults in program specifications are the most expensive to fix.
b. Faults in code are the most expensive to fix.
c. Faults in requirements are the most expensive to fix
d. Faults in designs are the most expensive to fix.

24. Which of the following is NOT an integration strategy?

a. Design based
b. Big-bang
c. Bottom-up
d. Top-down

25. Which of the following is a black box design technique?

a. statement testing
b. equivalence partitioning
c. error- guessing
d. usability testing

26. A program with high cyclomatic complexity is most likely to be:

a. Large
b. Small
c. Difficult to write
d. Difficult to test

27. Which of the following is a static test?

a. code inspection
b. coverage analysis
c. usability assessment
d. installation test

28. Which of the following is the odd one out?

a. white box
b. glass box
c. structural
d. functional

29. A program validates a numeric field as follows:

values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected


Which of the following input values cover all of the equivalence partitions?

a. 10,11,21
b. 3,20,21
c. 3,10,22
d. 10,21,22


30. Using the same specifications as question 29, which of the following covers the MOST boundary values?

a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21

Answers:

1. d
2. b
3. d
4. c
5. d
6. a
7. c
8. b
9. a
10. a
11. c
12. a
13. b
14. c
15. b
16. b
17. c
18. c
19. a
20. c
21. b
22. d
23. c
24. a
25. b
26. d
27. a
28. d
29. c
30. b
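A worked example may make answers 17 and 18 clearer. The sketch below renders the pseudocode as Python (the function and message names are illustrative, not part of the paper); the three inputs shown execute every statement and exercise every branch outcome, which is why both answers are 3.

def display(x, y):
    # Python rendering of the question 17 pseudocode; names are illustrative.
    if x == 3:
        print("messageX")      # pseudocode line 2
        if y == 2:
            print("messageY")  # pseudocode line 4
        else:
            print("messageZ")  # pseudocode line 6
    else:
        print("messageZ")      # pseudocode line 8

# Three tests execute every statement and every branch outcome:
display(3, 2)   # x=3, y=2  -> lines 1, 2, 3, 4
display(3, 7)   # x=3, y!=2 -> lines 1, 2, 3, 5, 6
display(5, 2)   # x!=3      -> lines 1, 7, 8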



Types of Software testing

Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.



White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.



Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.
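As a minimal illustration, not tied to any particular project, a unit test for a hypothetical add function using Python's built-in unittest framework might look like this:

import unittest

def add(a, b):
    # Hypothetical unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)
    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()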


Incremental integration testing – A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to be tested separately. Done by programmers or by testers.



Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.



Functional testing – Black-box type testing geared to the functional requirements of an application; it ignores the internal parts and focuses on whether the output is as per the requirements.



System testing – The entire system is tested as per the requirements. Black-box type testing that is based on the overall requirements specification and covers all combined parts of the system.


End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.


Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing and the build or application is sent back to be fixed.


Regression testing – Retesting the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.



Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.


Load testing – A type of performance testing to check system behaviour under load. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
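A minimal load-generation sketch is shown below; the handle_request function is a stand-in, and a real load test would drive the deployed system, usually through a dedicated tool:

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Stand-in for one request to the system under load.
    time.sleep(0.01)   # simulated service time
    return i

def run_load(concurrent_users, requests):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(handle_request, range(requests)))
    elapsed = time.perf_counter() - start
    print("%4d users: %8.1f requests/sec" % (concurrent_users, requests / elapsed))

# Step the load up to see where throughput stops scaling or response time degrades.
for users in (1, 10, 50, 100):
    run_load(users, 500)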



Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, for example feeding data volumes beyond storage capacity, running complex database queries, or giving continuous input to the system or database.


Performance testing – A term often used interchangeably with ‘stress’ and ‘load’ testing. Checks whether the system meets performance requirements. Different performance and load tools are used for this.


Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever a user may get stuck? Basically, system navigation is checked in this testing.



Install/uninstall testing - Tests full, partial, or upgrade install/uninstall processes on different operating systems and under different hardware and software environments.



Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.



Security testing – Can the system be penetrated by any hacking approach? Testing how well the system protects against unauthorized internal or external access, and checking whether the system and database are safe from external attacks.



Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.



Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.



Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development. Minor design changes may still be made as a result of such testing.


Beta testing – Testing typically done by end-users or others. The final testing before releasing the application for commercial purposes.

Tuesday 14 February 2012

ISTQB - What and Why

What is ISTQB? ISTQB stands for the International Software Testing Qualifications Board. It was officially founded as a non-profit organisation in 2002, there are currently 18 approved national boards, and it is an internationally recognised certification program.

If you’re a professional tester, test manager, quality assurance staff member, or programmer responsible for testing your own code, you have already discovered that, far from being trivial and straightforward, testing is hard. There’s a lot to know. In a nutshell, any tester certification program worth your consideration should confirm, through objective, carefully designed examinations, your professional capabilities. Not only does the ISTQB program do so, it is also practical and real-world focused. We address only concepts that you can apply to your work. We support your career path by providing levels of certification that correspond to your experience and roles. Further, we promote and advance software testing as a profession, not merely an ancillary role on a software development team.

So far, the ISTQB program might sound like other tester certification programs you’ve heard of. Here are a couple of unique characteristics. First, the ISTQB syllabi are developed by working groups composed of worldwide experts in the field of software testing, including practitioners.

Unfortunately, most practitioners tend to carry out testing as if it were 1976, not 2006. Common practices lag best practices by around 30 years. ISTQB certification is about raising common practices to the level of best practices.

Suppose you are a tester on a project to develop a new system at your company. Somebody e-mails you some screen prototypes with notes that describe the input and output ranges for each field, the actions taken based on particular inputs, and the possible states associated with key objects managed by the system. Would you know how to start designing tests for such a system? ISTQB certified testers do.

Suppose you are a programmer on the same project. You are using a newly purchased tool to help generate and execute unit tests on your code. It reports the statement, branch, condition, and multicondition decision coverage achieved by the tests, and flags constructs that were not tested. Would you know how to create additional tests for the uncovered constructs? ISTQB certified testers do.

Suppose you are a test manager on this project. After four weeks of testing, the project manager asks you, "Based on testing so far, what are the remaining risks to the quality of the system?" Would you know how to do risk-based test status reporting? ISTQB certified testers do.

ISTQB certified testers know how to do these things and more because they have mastered the topics laid out in one or more of the ISTQB syllabi. They feel confident that they have mastered these topics, and can prove to others that they have, because they have passed one or more of the ISTQB recognized examinations, rigorously developed to check each examinee’s abilities to recall, understand, and apply key testing concepts.

In this article, I will explain how the ISTQB certification program works. You’ll become familiar with the Foundation and Advanced syllabi, and you’ll know where to find online copies of each so you can learn more. I’ll tell you how you can prepare for the exam, laying out options from self-guided self-study to attending courses. I’ll discuss the exams and what to expect when taking them. www.istqb.org

Wednesday 8 February 2012

Testing concepts glossary

Acceptance Testing:Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.
Accessibility Testing:Verifying that a product is accessible to people with disabilities (for example, visual, hearing or cognitive impairments).
Ad Hoc Testing:A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. 
Agile Testing:Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
Application Binary Interface (ABI):A specification defining requirements for portability of applications in binary forms across different system platforms and environments.
Application Programming Interface (API):A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ):The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing:
  • Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
  • The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
B
Backus-Naur Form:A metalanguage used to formally describe the syntax of a language.
Basic Block:A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing:A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set:The set of tests derived using basis path testing
Baseline:The point at which some deliverable produced during the software engineering process is put under formal change control.
Benchmark Testing:Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing:Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing:Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing:Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing:An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing:Test which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).
Boundary Value Analysis:In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
Branch Testing:Testing in which all branches in the program source code are tested at least once.
Breadth Testing:A test suite that exercises the full functionality of a product but does not test features in detail.
Bug:A fault in a program which causes the program to perform in an unintended or unanticipated manner.

C
CAST:Computer Aided Software Testing.
Capture/Replay Tool:A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM:The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph:A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
Code Complete:Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage:An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection:A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough:A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding:The generation of source code.
Compatibility Testing:Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component:A minimal software item for which a separate specification is available.
Component Testing:See Unit testing
Concurrency Testing:Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing:The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing:The context-driven school of software testing is flavor of Agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing:Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity:A measure of the logical complexity of an algorithm, used in white-box testing.
D
Data Dictionary:A database that contains definitions of all data items defined during analysis.
Data Flow Diagram:A modeling notation that represents a functional decomposition of a system.
Data Driven Testing:Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated testing
Debugging:The process of finding and removing the causes of software failures.
Defect:Nonconformance to requirements or functional / program specification
Dependency Testing:Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing:A test that exercises a feature of a product in full detail.
Dynamic Testing:Testing software through executing it.
E
Emulator:A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing:Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing:Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class:A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence Partitioning:A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Error:A mistake in the system under test; usually but not always a coding mistake on the part of the developer.
Exhaustive Testing:Testing which covers all combinations of input values and preconditions for an element of the software under test.
F
Functional Decomposition:A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification:A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing:See also Black Box testing
  • Testing the features and operational behavior of a product to ensure they correspond to its specifications.
  • Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
G
Glass Box Testing:A synonym for White box testing
Gorilla Testing:Testing one particular module, functionality heavily.
Gray Box Testing:A combination of Black box and White box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
 H
High Order Tests:Black-box tests conducted once the software has been integrated.
I
Independent Test Group (ITG):A group of people whose primary responsibility is software testing.
Inspection:A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing:Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing:Confirms that the application under test installs, upgrades and uninstalls correctly across the supported range of hardware, software and operating system environments.
J
K
L
Load Testing:See Performance testing
Localization Testing:Testing that software has been correctly adapted for a specific locality (language, regional formats and other local conventions).
Loop Testing:A white box testing technique that exercises program loops.
M
Metric:A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Monkey Testing:Testing a system or an Application on the fly, i.e just few tests here and there to ensure the system or an application does not crash out.
Mutation Testing:Testing in which defects are deliberately introduced into copies of the application (mutants) to check whether the existing tests detect them.
N
Negative Testing:Testing aimed at showing software does not work. Also known as "test to fail".

N+1 Testing:A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors.
O
P
Path Testing:Testing in which all paths in the program source code are tested at least once.
Performance Testing:Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing:Testing aimed at showing software works. Also known as "test to pass".
Q
Quality Assurance:All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit:A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle:A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control:The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management:That aspect of the overall management function that determines and implements the quality policy.
Quality Policy:The overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System:The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
R
Race Condition:A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
Ramp Testing:Continuously raising an input signal until the system breaks down.
Recovery Testing:Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing:Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate:A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
S
Sanity Testing:Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke testing
Scalability Testing:Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing:Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing:A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing:Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification:A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing:A set of activities conducted with the intent of finding errors in software.
Static Analysis:Analysis of a program carried out without executing the program.
Static Analyzer:A tool that carries out static analysis.
Static Testing:Analysis of a program carried out without executing the program.
Storage Testing:Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing:Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing:Testing based on an analysis of internal workings and structure of a piece of software. See also white box testing
System Testing:Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
T
Testability:The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing:
  • The process of exercising software to verify that it satisfies specified requirements and to detect errors.
  • The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
  • The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Automation:See Automated testing
Test Bed:An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case:
  • Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc.
  • A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development:Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
Test Driver:A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment:The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test First Design:Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Harness:A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan:A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Test Procedure:A document providing detailed instructions for the execution of one or more Test cases
Test Scenario:Definition of a set of Test cases or test scripts and the sequence in which they are to be executed.
Test Script:Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification:A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite:A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools:Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing:A variation of top down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing:An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Total Quality Management:A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix:A document showing the relationship between Test Requirements and Test Cases.
U
Usability Testing:Testing the ease with which users can learn and use a product.
Use Case:The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
User Acceptance Testing:A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing:Testing of individual software components.
V
Validation:The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.
Verification:The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing:Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
W
Walkthrough:A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
White Box Testing:Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch testing, Structural testing and Glass box testing. Contrast with Black box testing.
Workflow Testing:Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Testing Concepts

 

There is a plethora of testing methods and testing techniques, serving multiple purposes in different life cycle phases. Classified by purpose, software testing can be divided into: correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing can be classified into the following categories: requirements phase testing, design phase testing, program phase testing, evaluating test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as follows: unit testing, component testing, integration testing, and system testing.

Correctness testing


Correctness is the minimum requirement of software, the essential purpose of testing. Correctness testing will need some type of oracle to tell the right behavior from the wrong one. The tester may or may not know the inside details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a white-box point of view or a black-box point of view can be taken in testing software. We must note that the black-box and white-box ideas are not limited to correctness testing only.

Black-box testing

The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output driven, or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing, a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification. No implementation details of the code are considered.

It is obvious that the more we have covered in the input space, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to test the input space exhaustively. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want; they usually can only tell whether a prototype is, or is not, what they want after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software.

Research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of the common techniques. If we have partitioned the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting representative values in each domain. Boundary values are of special interest: experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not. Boundary value analysis requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered.
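To make this concrete, here is a hedged sketch, using the numeric-field rule from question 29 of the sample paper above (accept values from 10 to 21 inclusive), of how partitioning and boundary value analysis translate into test cases; the accept function itself is hypothetical:

def accept(value):
    # Hypothetical validation rule from question 29: accept 10..21 inclusive.
    return 10 <= value <= 21

# One representative value per equivalence partition:
partition_tests = [
    (3, False),    # partition: value < 10 (reject)
    (15, True),    # partition: 10 <= value <= 21 (accept)
    (30, False),   # partition: value >= 22 (reject)
]

# Boundary value analysis: values just inside and just outside each boundary.
boundary_tests = [
    (9, False), (10, True),    # lower boundary
    (21, True), (22, False),   # upper boundary
]

for value, expected in partition_tests + boundary_tests:
    assert accept(value) == expected, "unexpected result for %d" % value
print("all partition and boundary tests passed")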

Good partitioning requires knowledge of the software structure. A good testing plan will not only contain black-box testing, but also white-box approaches, and combinations of the two.


White-box testing


In contrast to black-box testing, software is viewed as a white box (or glass box) in white-box testing, since the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as the programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing or design-based testing.

Many techniques are available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch (branch coverage), or covering all possible combinations of true and false condition predicates (multiple condition coverage).
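The difference between these criteria can be seen on a small example. The sketch below is purely illustrative: a function with one compound condition, and the test sets that would satisfy statement, branch and multiple condition coverage respectively.

def apply_discount(total, is_member):
    # Illustrative function: one decision with a compound condition and no else part.
    if is_member and total > 100:
        total = total * 0.9
    return total

# Statement coverage: a single test executes every statement.
statement_tests = [(150, True)]
# Branch coverage: the decision must evaluate both True and False, so two tests.
branch_tests = [(150, True), (50, True)]
# Multiple condition coverage: every combination of the two condition outcomes.
multiple_condition_tests = [(150, True), (50, True), (150, False), (50, False)]

for total, is_member in multiple_condition_tests:
    apply_discount(total, is_member)   # results would be checked against the specification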

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software onto a directed graph. Test cases are then selected so that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code, code that is of no use or never gets executed, which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is computationally expensive to use.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many testing strategies mentioned above may not be safely classified as black-box testing or white-box testing; the same is true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all of the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad: it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.
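Returning to mutation testing for a moment, here is a minimal sketch of the idea, using hand-written mutants rather than tool-generated ones: each mutant contains a single seeded fault, and the test suite is scored by how many mutants it kills.

def original(a, b):
    # Program under test: returns the larger of two numbers.
    return max(a, b)

# Hand-written mutants, each containing a single seeded fault.
mutants = {
    "min_instead_of_max": lambda a, b: min(a, b),
    "always_first_arg":   lambda a, b: a,
    "off_by_one":         lambda a, b: max(a, b) + 1,
}

# A small test suite: (inputs, expected output).
test_suite = [((1, 2), 2), ((5, 3), 5), ((4, 4), 4)]

killed = 0
for name, mutant in mutants.items():
    if any(mutant(*args) != expected for args, expected in test_suite):
        killed += 1                      # at least one test fails, so the mutant is killed
    else:
        print("mutant survived:", name)  # a surviving mutant points at a weak test suite
print("mutation score: %d/%d" % (killed, len(mutants)))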

We may be reluctant to consider random testing a testing technique, since the test case selection is simple and straightforward: test cases are randomly chosen. Yet studies indicate that random testing is more cost effective for many programs, that some very subtle errors can be discovered at low cost, and that it is not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
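As a hedged illustration of random testing against a trusted oracle, the sketch below generates random inputs for a hypothetical my_sort function and checks each result against Python's built-in sorted:

import random

def my_sort(values):
    # Hypothetical implementation under test (imagine a hand-rolled sort here).
    return sorted(values)

random.seed(0)                       # fixed seed so a failing run can be reproduced
for _ in range(1000):
    # Randomly chosen test case: random length and random contents.
    case = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert my_sort(case) == sorted(case), "failure on input %r" % case
print("1000 random test cases passed")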

Performance testing


Not all software systems have explicit performance specifications, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. The term "performance bugs" is sometimes used to refer to design problems in software that cause the system performance to degrade.

Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes resource usage, throughput, stimulus-response time, and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth, CPU cycles, disk space, disk access operations, and memory usage. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, etc. The typical method of doing performance testing is using a benchmark: a program, workload or trace designed to be representative of typical system usage.
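A minimal benchmarking sketch follows; the operation being timed is a stand-in, and a real performance test would drive the actual system with a representative workload:

import statistics
import time

def operation():
    # Stand-in for the system operation whose performance is being measured.
    sum(i * i for i in range(10000))

response_times = []
start = time.perf_counter()
for _ in range(200):                 # a representative workload of 200 requests
    t0 = time.perf_counter()
    operation()
    response_times.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print("throughput: %.1f operations/sec" % (200 / elapsed))
print("mean response time: %.2f ms" % (statistics.mean(response_times) * 1000))
print("slowest response:   %.2f ms" % (max(response_times) * 1000))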

Reliability testing


Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method for measuring software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be used to analyze the data, estimate the present reliability and predict future reliability. Based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. The risk of using software can also be assessed based on reliability information. Some researchers advocate that the primary goal of testing should be to measure the dependability of tested software.
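As a hedged illustration of the simplest possible estimator: if the test runs are drawn from the operational profile and f failures are observed in n runs, the estimated probability that a randomly selected run succeeds is 1 - f/n (real reliability growth models are considerably more sophisticated):

def estimate_reliability(failures, runs):
    # Naive single-sample estimate: probability that a randomly selected run succeeds.
    if runs == 0:
        raise ValueError("need at least one test run")
    return 1.0 - failures / runs

# Example: 3 failures observed in 1000 runs sampled from the operational profile.
print("estimated reliability per run: %.3f" % estimate_reliability(3, 1000))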

There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. Robustness testing and stress testing are variants of reliability testing based on this simple criterion.

The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. Robustness testing differs from correctness testing in that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, so robustness testing can be made more portable and scalable than correctness testing. This research has drawn more and more interest recently, much of it using commercial operating systems as its target.
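A minimal robustness-testing sketch: the oracle only watches for unexpected crashes (a timeout would additionally catch hangs), and functional correctness of the result is deliberately ignored. The parse_age function and the choice of exceptional inputs are illustrative.

def parse_age(text):
    # Hypothetical component under test.
    return int(text.strip())

exceptional_inputs = ["", "   ", "abc", None, "9" * 10000, "-1", "\x00", "42\n"]

failures = []
for value in exceptional_inputs:
    try:
        parse_age(value)                       # the result itself is not checked
    except ValueError:
        pass                                   # rejecting bad input gracefully is acceptable
    except Exception as exc:                   # anything else counts as a robustness failure
        failures.append((value, type(exc).__name__))

print("robustness failures:", failures)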

Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised with loads at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.

Security testing

Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe.

Many critical software applications and services have integrated security measures against malicious attacks. The purpose of security testing of these systems includes identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.

Testing automation


Software testing can be very costly, and automation is a good way to cut down time and cost. However, software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straightforward: in order to automate the process, we have to have some way to generate oracles from the specification, and to generate test cases to test the target software against the oracles to decide their correctness. Today we still don't have a full-scale system that has achieved this goal. In general, a significant amount of human intervention is still needed in testing; the degree of automation remains at the automated test script level.

The problem is lessened in reliability testing and performance testing. In robustness testing, the simple specification and oracle ("doesn't crash, doesn't hang") suffices. Similar simple metrics can also be used in stress testing.

When to stop testing?


Testing is potentially endless. We cannot test until all the defects are unearthed and removed -- that is simply impossible. At some point, we have to stop testing and ship the software. The question is when.

Realistically, testing is a trade-off between budget, time and quality, and it is driven by profit models. The pessimistic, and unfortunately most often used, approach is to stop testing whenever some or all of the allocated resources (time, budget, or test cases) are exhausted. The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continuing testing cannot justify the testing cost. This usually requires the use of reliability models to evaluate and predict the reliability of the software under test. Each evaluation requires repeated runs of the following cycle: failure data gathering -- modeling -- prediction. This method does not fit well for ultra-dependable systems, however, because the real field failure data will take too long to accumulate.
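As a hedged sketch of the optimistic stopping rule: keep gathering failure data, measure the failure intensity over a recent window of test runs, and stop when it drops to the level the reliability requirement allows. The window size and threshold below are illustrative.

def should_stop(failure_log, window=1000, max_failures_per_window=1):
    # failure_log is a list of 0/1 outcomes per executed test (1 = failure), most recent last.
    # Stop once the failure intensity in the most recent window meets the target.
    if len(failure_log) < window:
        return False                       # not enough data to judge yet
    return sum(failure_log[-window:]) <= max_failures_per_window

# Example: 5000 executed tests in which failures become rarer over time.
log = [1 if i % 400 == 0 and i < 3000 else 0 for i in range(5000)]
print("stop testing?", should_stop(log))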

Alternatives to testing


Software testing is more and more considered a problematic route toward better quality. Using testing to locate and correct software defects can be an endless process; bugs cannot be completely ruled out. Just as the complexity barrier indicates, chances are that testing and fixing problems may not necessarily improve the quality and reliability of the software. Sometimes fixing a problem may introduce much more severe problems into the system, as happened after bug fixes such as the telephone outage in California and the eastern seaboard in 1991: the disaster happened after changing 3 lines of code in the signaling system.

In a narrower view, many testing techniques may have flaws. Take coverage testing, for example: is code coverage or branch coverage in testing really related to software quality? There is no definite proof. So-called "human testing", including inspections, walkthroughs and reviews, has long been suggested as a possible alternative to traditional testing methods; some advocate inspection as a cost-effective alternative to unit testing. Experimental results suggest that code reading by stepwise abstraction is at least as effective as on-line functional and structural testing in terms of the number and cost of faults observed.

Using formal methods to "prove" the correctness of software is also an attractive research direction. But this method cannot surmount the complexity barrier either: for relatively simple software it works well, but it does not scale to the complex, full-fledged large software systems that are more error-prone.

In a broader view, we may start to question the ultimate purpose of testing: why do we need more effective testing methods anyway, if finding defects and removing them does not necessarily lead to better quality? An analogy is the car manufacturing process. In the craftsmanship epoch, we built cars and hacked away at the problems and defects. But such methods were washed away by the tide of pipelined manufacturing and good quality engineering processes, which make the car defect-free in the manufacturing phase. This suggests that engineering the design process (as in clean-room software engineering) so that the product has fewer defects may be more effective than engineering the testing process, with testing used mainly for quality monitoring and management, or "design for testability". This is the leap for software from craftsmanship to engineering.

Sunday 5 February 2012

Testing as a Career!!!

Testing Knowledge Sharing Forum

Hi All,


I am Deepak, a software engineer. I have been in the testing field for about 7 years. I wanted to share some thoughts about testing and taking it up as a career.

As graduates, when we pass out of college, we normally aspire to become software engineers. If anybody asks what our plans are, the first answer from us would be: I will join a software company, earn good money, get an onsite opportunity, blah blah... Hang on!! What do you mean by that? Software engineering is a very confusing term. Let me come back to this topic a little later.


Some of the studious students are very fortunate. They get selected on campus and get fat offers from the big shops (Infy, Yahoo, Google etc.). We envy such students... we should also have got good marks...

It all happens after we get the pinch on our hand... till then we are in a happy scenario. As engineers, we set our goal to complete engineering and come out of it... But is that what engineers are meant for?

Think about it!!! Engineers are said to be junior scientists. Are we sure that we can call ourselves the same? Yes, of course!!!! You can... Guys, please pat yourselves; it's true that we have worked to our potential to complete all the semesters, with so many subjects which we hardly care about now... Thanks to our education system.


Let me come back to the Software Engineering term!!!

Our aspiration would be to enter the software industry. But are we really sure what that industry is about? What exactly do we want to become in this industry? What are the opportunities available? It’s a huge industry with a huge set of options available.


For instance, a person who is good at embedded systems and C programming should look for opportunities in the embedded field. A person who is strong in Oracle should look at database management technologies, and so on and so forth. Hang on!!! Is it really possible for us to judge nowadays what to choose? The most common scenario today is: he/she clears an interview in some ABC company and takes the opportunity given by that company. That’s not wrong at all... opportunities come our way and we pick them up.


But can we do a small re-think, to know what options are available in the market? Where would I best fit into the market? What are my skill sets? Do I need to sharpen them further? Do I need to attend formal training to sharpen my skill sets, or can I do it myself?


These are some of the very important questions we should ask ourselves... Trust me, you will have a better go later...



Since I have a testing background, I am writing about testing as a career: the opportunities available and the skill sets required.


The main intention of writing this blog is to help someone who is really in need of guidance, if he or she is looking to pursue a career in software testing.



What is testing? => Testing the product to see whether it complies with the requirements or not.

Why was testing not so popular 10-15 years ago?

10-15 years ago testing was very much present, but it was done in an informal way.

However, today the scenario is completely different. Testing has become a very important aspect of any industry.

Even I was searching for this answer for a long time. I would like to share some of my understanding here:

In today's scenario the end user is very quality savvy. The moment they see that a product is of bad quality, they will look for a change. The main reasons are financial independence, the different options available in the market, and competitive pricing of products.


I would take the example of Nokia: once said to be the leading mobile manufacturer, it is struggling today because of the Android market. Android phones have become much cheaper and have better quality. The more competitive the market, the more quality is demanded; if not, the big shops will lose the market, as there are lots of players. Consumerism is one of the revolutions we have seen in the last 5-10 years. Hence, testing has become an integral and very important function in any industry.


Moving on, what are the skill sets required for taking up testing as a career?

The most important skill required is that you need to be very good at analytical thinking. Think rationally! Be a quick learner, and of course programming will be useful additional knowledge.


Most testers would have ended up in testing because they were offered it, or because they were bad at programming, etc.... But today that's not the case. Testing has become a strong career path. Most companies spend a lot of money on this function. It's no longer a supporting function; it's a very important and essential function in any industry.

Especially in safety-critical systems (to name some: medical equipment, aviation, etc.), even a small mistake can cause a disaster.


QUALITY is measured by testing!!!!!


If you have the right skill sets… you can do wonders in this field.



Deepak