Saturday 15 September 2012

WinRunner



What's New in WinRunner 7.5?

Automatic Recovery
The Recovery Manager provides an easy-to-use wizard that guides you through the process of defining a recovery scenario. You can specify one or more operations that enable the test run to continue after an exception event occurs. This functionality is especially useful during unattended test runs, when errors or crashes could interrupt the testing process until manual intervention occurs.
Silent Installation
Now you can install WinRunner in an unattended mode using previously recorded installation preferences. This feature is especially beneficial for those who use enterprise software management products or any automated software distribution mechanisms.
Enhanced Integration with TestDirector
WinRunner works with both TestDirector 6.0, which is client/server-based, and TestDirector 7.x, which is Web-based. When reporting defects from WinRunner’s test results window, basic information about the test and any checkpoints can be automatically populated in TestDirector’s defect form. WinRunner now supports version control, which enables updating and revising test scripts while maintaining old versions of each test.
Support for Terminal Servers
Support for Citrix and Microsoft Terminal Servers makes it possible to open several client windows and run WinRunner in each one as a single user. This can also be combined with LoadRunner to run multiple WinRunner Vusers.
Support for More Environments
WinRunner 7.5 includes support for Internet Explorer 6.x and Netscape 6.x, Windows XP and Sybase's PowerBuilder 8, in addition to 30+ environments already supported by WinRunner 7.
WinRunner provides the most powerful, productive and cost-effective solution for verifying enterprise application functionality. For more information on WinRunner, contact a Mercury Interactive local representative for pricing, evaluation, and distribution information.
WinRunner(Features & Benefits)

Test functionality using multiple data combinations in a single test
WinRunner's DataDriver Wizard eliminates programming to automate testing for large volumes of data. This saves testers significant amounts of time preparing scripts and allows for more thorough testing.
Significantly increase power and flexibility of tests without any programming
The Function Generator presents a quick and error-free way to design tests and enhance scripts without any programming knowledge. Testers can simply point at a GUI object, and WinRunner will examine it, determine its class and suggest an appropriate function to be used.
Use multiple verification types to ensure sound functionality
WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database, allowing testers to compare expected and actual outcomes and identify potential problems with numerous GUI objects and their functionality.
Verify data integrity in your back-end database
Built-in Database Verification confirms values stored in the database and ensures transaction accuracy and the data integrity of records that have been updated, deleted and added.
View, store and verify at a glance every attribute of tested objects
WinRunner’s GUI Spy automatically identifies, records and displays the properties of standard GUI objects, ActiveX controls, as well as Java objects and methods. This ensures that every object in the user interface is recognized by the script and can be tested.
Maintain tests and build reusable scripts
The GUI map provides a centralized object repository, allowing testers to verify and modify any tested object. These changes are then automatically propagated to all appropriate scripts, eliminating the need to build new scripts each time the application is modified.
Test multiple environments with a single application
WinRunner supports more than 30 environments, including Web, Java, Visual Basic, etc. In addition, it provides targeted solutions for such leading ERP/CRM applications as SAP, Siebel, PeopleSoft and a number of others

NAVIGATIONAL STEPS FOR WINRUNNER LAB-EXERCISES

Using Rapid Test Script wizard
• Start->Programs->WinRunner->WinRunner 
• Select the Rapid Test Script Wizard (or Create->Rapid Test Script Wizard) 
• Click the Next button on the wizard's Welcome screen 
• Select the hand icon, click on the application window, and click the Next button 
• Select the tests and click Next button 
• Select Navigation controls and Click Next button 
• Set the Learning Flow(Express or Comprehensive) and click Learn button 
• Select start application YES or NO, then click Next button 
• Save the Startup script and GUI map files, click Next button 
• Save the selected tests, click Next button 
• Click Ok button 
• The scripts will be generated; then run them: Run->Run from Top 
• View the results of each script and select Tools->Text Report in the WinRunner Test Results window. 
Using GUI-Map Configuration Tool:
• Open an application. 
• Select Tools->GUI Map Configuration; a window pops up. 
• Click the Add button; click on the hand icon. 
• Click on the object to be configured. A user-defined class for that object is added to the list. 
• Select the user-defined class you added and press the 'Configure' button. 
• Mapped to Class: select a corresponding standard class from the combo box. 
• Move the required properties from Available Properties to Learned Properties by selecting the Insert button. 
• Select the Selector and recording methods. 
• Click the OK button. 
• Now you will observe WinRunner identifying the configured objects. 
Using Record-ContextSensitive mode:
• Create->Record context Sensitive 
• Select start->program files->Accessories->Calculator 
• Do some action on the application. 
• Stop recording 
• Run->Run from Top; press 'OK'. A sketch of the kind of script this recording produces is shown below. 
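A rough sketch of the kind of TSL this recording produces (a minimal sketch; the logical object names come from your GUI map and may differ):
set_window ("Calculator", 5);   # activate the Calculator window
button_press ("2");             # click the "2" button
button_press ("+");
button_press ("3");
button_press ("=");
In Context Sensitive mode WinRunner records operations on objects by their logical names, so the script stays readable and keeps working after small layout changes.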
Using Record-Analog Mode:
• Create->Insert Function->from function generator 
• Function name: select 'invoke_application' from the combo box. 
• Click Args button; File: mspaint. 
• Click on ‘paste’ button; Click on ‘Execute’ button to open the application; Finally click on ‘Close’. 
• Create->Record-Analog. 
• Draw some picture in the paintbrush file. 
• Stop Recording 
• Run->Run from Top; press 'OK'. The statement pasted in step 2 looks like the sketch below. 
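For reference, the statement the Function Generator pastes for mspaint looks roughly like the line below (a sketch; the file path and show-mode constant may differ on your machine). Analog recording itself then captures raw mouse movement rather than objects:
invoke_application ("mspaint", "", "", SW_SHOW);   # file, command option, working directory, show mode
# The analog portion of the script is filled with low-level statements such as
# move_locator_track(1); and click("Left"); instead of object-based calls.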
GUI CHECK POINTS-Single Property Check:
• Create->Insert Function->Function Generator (Function name: invoke_application; File: flight1a) 
• Click on 'Paste' and click on 'Execute', then close the window. 
• Create->Record Context sensitive. 
• Do some operations & stop recording. 
• Create->GUI Check Point->For single Property. 
• Click on some button whose property to be checked. 
• Click on paste. 
• Now close the Flight1A application; Run->Run from Top. 
• Press 'OK'; it displays the results window. 
• Double-click on the result statement. It shows the expected value and actual value window. The pasted checkpoint statement looks like the sketch below. 
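A rough sketch of the single-property checkpoint statement that gets pasted when you click a button (the object name and expected value here are only illustrative):
set_window ("Flight Reservation", 10);
button_check_info ("Insert Order", "enabled", 0);   # verify that the button is currently disabled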
GUI CHECK POINTS-For Object/Window Property:
• Create->Insert Function->Function Generator (Function name: invoke_application; File: flight1a) 
• Click on 'Paste' and click on 'Execute', then close the window. 
• Create->Record Context sensitive. 
• Do some operations & stop recording. 
• Create->GUI Check Point->Object/Window Property. 
• Click on some button whose property to be checked. 
• Click on paste. 
• Now close the Flight1A application; Run->Run from Top. 
• Press 'OK'; it displays the results window. 
• Double-click on the result statement. It shows the expected value and actual value window. 
GUI CHECK POINTS-For Multiple Objects:
• Create->Insert Function->Function Generator (Function name: invoke_application; File: flight1a) 
• Click on 'Paste' and click on 'Execute', then close the window. 
• Create->Record Context Sensitive. 
• Do some operations & stop recording. 
• Create->GUI Check Point->For Multiple Objects. 
• Click on some button whose property to be checked. 
• Click on Add button. 
• Click on a few objects & right-click to quit. 
• Select each object & select the corresponding properties to be checked for that object; click 'OK'. 
• Run->Run from Top. It displays the results. The checkpoint statements look like the sketch below. 
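A rough sketch of the statements these checkpoints paste into the script; the checklist (.ckl) and expected-results names are generated automatically by WinRunner and will differ in your test:
set_window ("Flight Reservation", 10);
obj_check_gui ("Insert Order", "list1.ckl", "gui1", 1);         # checks the selected properties of one object
win_check_gui ("Flight Reservation", "list2.ckl", "gui2", 1);   # checks several objects in the window at once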
BITMAP CHECK POINT:

For object/window. 
• Create->Insert Function->Function Generator (Function name: invoke_application; File: flight1a) 
• Click on 'Paste' and click on 'Execute', then close the window. 
• Create->Record Context sensitive. 
• Enter the Username, Password & click ‘OK’ button 
• Open the Order in Flight Reservation Application 
• Select File->Fax Order & enter the Fax Number and Signature 
• Press ‘Cancel’ button. 
• Create->Stop Recording. 
• Then open Fax Order in Flight Reservation Application 
• Create->Bitmap Checkpoint->For Object/Window. 
• Run->Run from Top. 
• The test fails and you can see the difference. 
For Screen Area: 
• Open new Paint Brush file; 
• Create->Bitmap Checkpoint->From Screen Area. 
• The Paint file pops up; select an image area with the crosshair pointer. 
• Make a slight modification in the Paint file (you can also run on the same Paint file). 
• Run->Run from Top. 
• The test fails and you can see the difference between the images. The pasted statements look like the sketch below. 
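A rough sketch of the bitmap checkpoint statements that get pasted (the image names are generated by WinRunner; the screen-area form adds the x, y, width and height of the marked rectangle, and the coordinates below are made up):
obj_check_bitmap ("Fax Number:", "Img1", 10);                           # object/window bitmap check
win_check_bitmap ("untitled - Paint", "Img2", 10, 25, 25, 200, 150);    # screen-area bitmap check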
DATABASE CHECK POINTS

Using Default check(for MS-Access only)
• Create->Database Check Point->Default check 
• Select the Specify SQL Statement check box 
• Click Next button 
• Click Create button 
• Type New DSN name and Click New button 
• Then select the driver for which you want to set up a database & double-click that driver 
• Then select the Browse button, retype the same DSN name and click the Save button. 
• Click the Next button & click the Finish button 
• Select the Database button & set the path of your database 
• Click the 'OK' button & then click 'OK' in your DSN window 
• Type the SQL query in the SQL box 
• Then click the Finish button. (Note: the same process applies for a Custom Check Point.) The pasted statement looks like the sketch below. 
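A rough sketch of the statement a default database checkpoint pastes; the checklist (.cdl) and expected-results names are generated automatically:
db_check ("list1.cdl", "dbvf1");   # compare the current query results with the captured expected results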
Runtime Record Check Point.
• Repeat above 10 steps. 
• Type a query joining two related tables in the SQL box. Ex: select Orders.Order_Number, Flights.Flight_Number from Orders, Flights where Flights.Flight_Number = Orders.Flight_Number. 
• Select the Finish button 
• Select the hand icon button & select the Order No in your application 
• Click the Next button. 
• Select the hand icon button & select the Flight No in your application 
• Click the Next button 
• Select any one of the following check boxes: 1. One match record 2. One or more match records 3. No match record 
• Select the Finish button; the script will be generated. The pasted statement looks like the sketch below. 
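A rough sketch of the runtime record checkpoint statement the wizard generates (the checklist name is generated automatically; record_num receives the number of matching records):
db_record_check ("list1.cvr", DVR_ONE_OR_MORE_MATCH, record_num);   # pass if one or more database records match the selected fields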
Synchronization Point

For Obj/Win Properties:
• Open start->Programs->Win Runner->Sample applications->Flight1A. 
• Open winrunner window 
• Create->Record - Context Sensitive 
• Insert information for a new order & click on the "Insert Order" button 
• After inserting, click on the "Delete" button 
• Stop recording & save the file. 
• Run->Run from top: Gives your results. 
Without Synchronization:
• Settings->General Options->click on the "Run" tab. Set the "Timeout for checkpoints and CS statements" value to 10000. Follow steps 1 to 7 above; the test run displays an error message that the "Delete" button is disabled. 
With Synchronization:
• Keep the Timeout value at 1000 only 
• Go to the test script file and place the insertion point after the "Insert Order" button_press statement. 
• Create->Synchronization->For Obj/Window Property 
• Click on the "Delete Order" button & select the enabled property; click on "Paste". 
• It inserts the synchronization statement, which looks like the sketch below. 
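A rough sketch of the synchronization statement, assuming the button's logical name is "Delete Order"; it waits up to 10 seconds for the button to become enabled before the run continues:
obj_wait_info ("Delete Order", "enabled", 1, 10);   # wait until the enabled property becomes 1 (true)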
For Obj/Win Bitmap:
• Create-> Record Context Sensitive. 
• Insert information for new order & click on "Insert order" button 
• Stop recording & save the file. 
• Go to the TSL script and place the insertion point just before the data is inserted into the "Date of Flight" field. 
• Create->Synchronization->For Obj/Win Bitmap is selected. 
• (Make sure the Flight Reservation form is empty) click on the "Date of Flight" text box 
• Run->Run from Top; the results are displayed. (Note: keep the "Timeout value" at 1000.) The inserted statement looks like the sketch below. 
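A rough sketch of the bitmap synchronization statement (the image name is generated by WinRunner when you capture the empty "Date of Flight" box):
obj_wait_bitmap ("Date of Flight:", "Img1", 10);   # wait up to 10 seconds for the captured bitmap to appear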
Get Text: From Screen Area:
(Note: checking whether the number of tickets sold increases whenever an order is created.) 
• Open Flight1A; Analysis->Graphs (keep it open) 
• Create->get text->from screen area 
• Capture the No. of tickets sold; right-click & close the graph 
• Now insert a new order and open the graph (Analysis->Graphs) 
• Go to the WinRunner window, Create->Get Text->From Screen Area 
• Capture the No. of tickets sold and right-click; close the graph 
• Save the script file 
• Add the following script: if (text2 != text1) tl_step("text comparison", 0, "ticket count updated"); else tl_step("text comparison", 1, "ticket count was not updated"); 
• Run->Run from Top to see the results. The full capture-and-compare flow is sketched below. 
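Putting the two captures and the comparison together, the script ends up along these lines (a sketch; the window name, coordinates and messages are what WinRunner fills in when you mark the screen area and may differ):
set_window ("Graph", 5);
win_get_text ("Graph", text1, 50, 60, 120, 80);   # tickets sold before the new order
# ... insert the new order and reopen the graph ...
set_window ("Graph", 5);
win_get_text ("Graph", text2, 50, 60, 120, 80);   # tickets sold after the new order
if (text2 != text1)
    tl_step ("text comparison", 0, "ticket count updated");
else
    tl_step ("text comparison", 1, "ticket count was not updated");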
Get Text: For Object/Window:
• Open a "Calc" application in two windows (assuming the two are two different versions) 
• Create->get text->for Obj/Window 
• Click on some button in one window 
• Stop recording 
• Repeat steps 1 to 4 to capture the text of the same object from the other "Calc" application. 
• Add the following TSL (note: the recorded statements use "text"; change it to text1 and text2 respectively): if (text1 == text2) report_msg("correct: " & text1); else report_msg("incorrect: " & text2); 
• Run & see the results 
Using GUI-Spy:

Using the GUI Spy, you can view and verify the properties of any GUI object in the selected application.
• Tools->Gui Spy… 
• Select Spy On ( select Object or Window) 
• Select Hand icon Button 
• Point the Object or window & Press Ctrl_L + F3. 
• You can view and verify the properties. 
Using Virtual Object Wizard:

Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name 
• tools->Virtual Object Wizard. 
• Click Next Button 
• Select standard class object for the virtual object Ex: class:Push_button 
• Click Next button 
• Click Mark Object button 
• Drag the cursor to mark the area of the virtual object. 
• Click Next button 
• Assign the logical name; this name will appear in the test script when you record on the object. 
• Select Yes or No check box 
• Click Finish button 
• Go to winrunner window & Create->Start Recording. 
• Do some operations 
• Stop Recording 
Using Gui Map Editor:

Using the GUI Map Editor, you can view and modify the properties of any GUI object in the selected application. To modify an object's logical name in a GUI map file:
• Tools->GUI Map Editor 
• Select Learn button 
• Select the application. A WinRunner message box asks "do you want to learn all objects within the window"; select the 'Yes' button. 
• Select a particular object and select the Modify button 
• Change the Logical Name& click ‘OK’ Button 
• Save the File 
To find an object in a GUI map file:
• Choose Tools > GUI Map Editor. 
• Choose View > GUI Files. 
• Choose File > Open to load the GUI map file. 
• Click Find. The mouse pointer turns into a pointing hand. 
• Click the object in the application being tested. The object is highlighted in the GUI map file. 
To highlight an object in a Application:
• Choose Tools > GUI Map Editor. 
• Choose View > GUI Files. 
• Choose File > Open to load the GUI map file. 
• Select the object in the GUI map file 
• Click Show. The object is highlighted in the Application. 
Data Driver Wizard
• Start->Programs->WinRunner->Sample Applications->Flight 1A 
• Open Flight Reservation Application 
• Go to Winrunner window 
• Create->Start recording 
• Select file->new order, insert the fields; Click the Insert Order 
• Tools->Data Table; enter different customer names in one column and numbers of tickets in another column. 
• By default the two column names are Noname1 and Noname2. 
• Tools->Data Driver Wizard 
• Click Next button &select the data table 
• Select Parameterize the test; select Line by Line check box 
• Click Next Button 
• Parameterize each specific value with the corresponding column name of the table; repeat for all values 
• Finally, click the Finish button. 
• Run->Run from Top; 
• View the results. The parameterized loop that the wizard builds looks like the sketch below. 
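A rough sketch of the data-driven loop the DataDriver Wizard builds around the recorded steps (the table path, the field names and the column names Noname1/Noname2 follow the steps above but may differ in your test):
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                              # move to the next data row
    set_window ("Flight Reservation", 10);
    edit_set ("Name:", ddt_val (table, "Noname1"));      # parameterized customer name
    edit_set ("Tickets:", ddt_val (table, "Noname2"));   # parameterized number of tickets
    button_press ("Insert Order");
}
ddt_close (table);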
Merge the GUI Files:

Manual Merge
• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button. 
• Select the Manual Merge. Manual Merge enables you to manually add GUI objects from the source to target files. 
• To specify the Target GUI map file, click the Browse button & select the GUI map file 
• To specify the Source GUI map file, click the Add button & select the source GUI map file. 
• Click the 'OK' button 
• The GUI Map File Manual Merge tool opens; select objects and move them from the Source file to the Target file 
• Close the GUI Map File Manual Merge Tool 
Auto Merge
• Tools->Merge GUI Map Files. A WinRunner message box informs you that all open GUI maps will be closed and all unsaved changes will be discarded; click the 'OK' button. 
• Select Auto Merge as the Merge Type. With Auto Merge, the source GUI map files are merged automatically when there are no conflicts. 
• To specify the Target GUI map file, click the Browse button & select the GUI map file 
• To specify the Source GUI map file, click the Add button & select the source GUI map file. 
• Click the 'OK' button. A message confirms the merge. 
Manually Retrieve the Records from the Database
• db_connect("query1", "DSN=Flight32");                       # open a database session using the Flight32 DSN 
• db_execute_query("query1", "select * from Orders", rec);    # run the query; rec returns the number of records 
• db_get_field_value("query1", "#0", "#0");                   # value of the first field in the first record 
• db_get_headers("query1", field_num, headers);               # number of columns and their names 
• db_get_row("query1", 5, row_con);                           # contents of row 5 
• db_write_records("query1", "c:\\str.txt", TRUE, 10);        # write up to 10 records (with headers) to a file 

TSL SCRIPTS FOR WEB TESTING

1. web_browser_invoke ( browser, site );

// invokes the browser and opens a specified site. browser The name of browser (IE or NETSCAPE). site The address of the site.

2. web_cursor_to_image ( image, x, y ); 

// moves the cursor to an image on a page. image The logical name of the image. x,y The x- and y-coordinates of the mouse pointer when moved to an image

3. web_cursor_to_label ( label, x, y ); 

// moves the cursor to a label on a page. label The name of the label. x,y The x- and y-coordinates of the mouse pointer when moved to a label.

4.web_cursor_to_link ( link, x, y ); 

// moves the cursor to a link on a page. link The name of the link. x,y The x- and y-coordinates of the mouse pointer when moved to a link.

5.web_cursor_to_obj ( object, x, y );

// moves the cursor to an object on a page. object The name of the object. x,y The x- and y-coordinates of the mouse pointer when moved to an object.

6.web_event ( object, event_name [, x , y ] );

// runs an event on a specified object. object The logical name of the recorded object. event_name The name of an event handler. x,y The x- and y-coordinates of the mouse pointer when moved to an object.

7.web_file_browse ( object ); 

// clicks a browse button. object A file-type object.

8.web_file_set ( object, value );

// sets the text value in a file-type object. object A file-type object. Value A text string.

9. web_find_text ( frame, text_to_find, result_array [, text_before, text_after, index, show ] ); 

// returns the location of text within a frame.

10. web_frame_get_text ( frame, out_text [, text_before, text_after, index ] );

// retrieves the text content of a frame.

11. web_frame_get_text_count ( frame, regex_text_to_find , count );

// returns the number of occurrences of a regular expression in a frame.

12. web_frame_text_exists ( frame, text_to_find [, text_before, text_after ] );

// returns a text value if it is found in a frame.

13. web_get_run_event_mode ( out_mode );

// returns the current run mode. out_mode The run mode in use. If the mode is FALSE (the default), the test runs by mouse operations. If TRUE, the test runs by events.

14. web_get_timeout ( out_timeout );

// returns the maximum time that WinRunner waits for response from the web. out_timeout The maximum interval in seconds 

15.web_image_click ( image, x, y ); 

// clicks a hypergraphic link or an image. image The logical name of the image. x,y The x- and y-coordinates of the mouse pointer when clicked on a hypergraphic link or an image. 

16. web_label_click ( label );

// clicks the specified label. label The name of the label.

17. web_link_click ( link ); 

// clicks a hypertext link. link The name of link.

18. web_link_valid ( name, valid );

// checks whether a URL name of a link is valid (not broken). name The logical name of a link. valid The status of the link may be valid (TRUE) or invalid (FALSE)

19. web_obj_click ( object, x, y );

// clicks an object. object The logical name of an object. x,y The x- and y-coordinates of the mouse pointer when clicked on an object.

20. web_obj_get_child_item ( object, table_row, table_column, object_type, index, out_object ); 

// returns the description of the children in an object.

21. web_obj_get_child_item_count ( object, table_row, table_column, object_type, object_count );

// returns the count of the children in an object.

22. web_obj_get_info ( object, property_name, property_value );

// returns the value of an object property.

23. web_obj_get_text ( object, table_row, table_column, out_text [, text_before, text_after, index] );

// returns a text string from an object.

24. web_obj_get_text_count ( object, table_row, table_column, regex_text_to_find, count );

// returns the number of occurrences of a regular expression in an object.

25. web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );

// returns a text value if it is found in an object.

26. web_restore_event_default ( );

// resets all events to their default settings.

27. web_set_event ( class, event_name, event_type, event_status );

// sets the event status.

28. web_set_run_event_mode ( mode );

// sets the event run mode.

29. web_set_timeout ( timeout );

// sets the maximum time WinRunner waits for a response from the web.

30. web_set_tooltip_color ( fg_color, bg_color );

// sets the colors of the WebTest ToolTip.

31. web_sync ( timeout );

// waits for the navigation of a frame to be completed.

32. web_url_valid ( URL, valid );

// checks whether a URL is valid.
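
A small sketch that combines a few of the functions above (the URL and link names are only examples):

web_browser_invoke (IE, "http://newtours.demoaut.com");   # open the site in Internet Explorer
web_sync (60);                                            # wait for the page to finish loading
web_link_click ("SIGN-ON");                               # follow a hypertext link
web_link_valid ("REGISTER", valid);                       # check whether the link is broken
if (valid)
    report_msg ("REGISTER link is valid");
else
    report_msg ("REGISTER link is broken");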

Tuesday 24 July 2012

Career path- QA/QC/Tester


There are some common myths regarding testing career. Some of these are:

  • Testing is simple and straightforward, just follow the best practices.

  • QA and its related activities are mere cost burden on the organization.

  • Testing is not creative.



The above myths confuse people when choosing testing as a profession, but the ground reality is quite the opposite. In fact, if we compare testing with development, testing is more challenging than most people think.

Testing is simple and straightforward, just follow the best practices
Testing is not just about verification and validation. Most testers have hacker minds: they intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should. You need a thorough understanding of each individual project you work on, and you work across different testing environments with multiple software and hardware combinations. You also need exploratory testing to understand the project completely before you write test plans, test scenarios, test cases, etc.
QA and its related activities are mere cost burden on the organization
Is it a big deal if you save a billion dollars by investing just a few hundred pennies? I don't think so! But if someone still has questions in their mind, or is confused about why it is so important to have test engineers within an organization or why software testing is so important, the following examples might help clarify the misconceptions:

  • A new U.S government-run credit card complaint handling system was not working correctly according to August 2011 news reports.

Banks were required to respond to complaints routed to them from the system, but due to system bugs the complaints were not consistently being routed to companies as expected. Reportedly the system had not been properly tested.

  • News reports in Asia in July of 2011 reported that software bugs in a national computerized testing and grading system resulted in incorrect test results for tens of thousands of high school students. The national education ministry had to reissue grade reports to nearly 2 million students nationwide.


  • A smartphone online banking application was reported in July 2010 to have a security bug affecting more than 100,000 customers


  • In August of 2008 it was reported that more than 600 U.S. airline flights were significantly delayed due to a software glitch in the U.S. FAA air traffic control system. The problem was claimed to be a ‘packet switch’ that ‘failed due to a database mismatch’, and occurred in the part of the system that handles required flight plans.


  • Software system problems at a large health insurance company in August 2008 were the cause of a privacy breach of personal health information for several hundred thousand customers, according to news reports. It was claimed that the problem was due to software that ‘was not comprehensively tested’.

Testing is not creative
Testers lead the development team to put together a meaningful plan, understand the business needs, and test the logical, optional and failure paths.

On the other hand, the word "creative" might have different meanings for different people. From my point of view, your job is only as creative as YOU make it. One taxi driver could say that his job is not creative because all he does is drive passengers around all day. Another taxi driver could say that it's creative because he tries to find the best routes. Another could say it's creative because he enjoys meeting the people that he drives. It's the same with testing. It is only as creative as you make it.


Every profession has its own value. It’s only you who can decide whether you want to do this or do that. People can just give you suggestions and can relate some examples or experiences that they have faced within that profession but it’s only YOU that can purely judge your abilities and can make a better decision.
“Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle. As with all matters of the heart, you’ll know when you find it. And, like any great relationship, it just gets better and better as the years roll on. So keep looking until you find it. Don’t settle.” (Steve Jobs)

This article was taken from TechGig, keeping in mind that it will be useful for the readers who visit this site.

Monday 11 June 2012

Session-based testing- Test Technique

Session-based testing is derived from exploratory testing, but it's not exploratory testing itself!

Jonathan and James Bach, pioneers in the field of testing, developed this way of testing. The very disadvantages of exploratory testing gave birth to session-based testing.

Session-based testing is a technique for managing and controlling unscripted tests. It is not a test generation strategy, and while it sets a framework around unscripted testing, it is not a systematic approach whose goal is precise control and scope. Rather, it is a technique that builds on the strengths of unscripted testing - speed, flexibility and range - and by allowing it to be controlled, enables it to become a powerful part of an overall test strategy.

At the heart of the technique is the idea of effective limits. A Test Session has a well-defined start and end time, limiting its duration. During a Test Session, a tester engages in a directed exploration of a limited part of the thing being tested - it should be obvious to the tester whether an action or test is inside or outside these limits. Within these limits, moment-to-moment activities are not controlled, but left to the tester's judgement. The tester records his or her activity and includes whatever other information is relevant.

Elements of session-based testing:

Charter: A charter is a goal or agenda for a test session. Charters are created by the test team prior to the start of testing, but may be added or changed at any time. Often charters are created from a specification, test plan, or by examining results from previous test sessions.

Session: An uninterrupted period of time spent testing, ideally lasting one to two hours. Each session is focused on a charter, but testers can also explore new opportunities or issues during this time. The tester creates and executes test cases based on ideas, heuristics or whatever frameworks to guide them and records their progress. This might be through the use of written notes, video capture tools or by whatever method as deemed appropriate by the tester.

Session report: The session report records the test session. Usually this includes:

Charter.
Area tested.
Detailed notes on how testing was conducted.
A list of any bugs found.
A list of issues (open questions, product or project concerns)
Any files the tester used or created to support their testing
Percentage of the session spent on the charter vs investigating new opportunities.
Percentage of the session spent on:
Testing - creating and executing tests.
Bug investigation / reporting.
Session setup or other non-testing activities.
Session Start time and duration.

Debrief: A debrief is a short discussion between the manager and the tester (or testers) about the session report. Jon Bach, one of the co-creators of session-based test management, uses the acronym PROOF to help structure his debriefing. PROOF stands for:

Past. What happened during the session?

Results. What was achieved during the session?

Obstacles. What got in the way of good testing?

Outlook. What still needs to be done?

Feelings. How does the tester feel about all this?

Parsing results: With a standardized session report, software tools can be used to parse and store the results as aggregate data for reporting and metrics. This allows reporting on the number of sessions per area, or a breakdown of time spent on testing, bug investigation, and setup / other activities.

Planning: Testers using session-based testing can adjust their testing daily to fit the needs of the project. Charters can be added or dropped over time as tests are executed and/or requirements change.



Note: Some of the Definitions have been taken from Wikipedia.




Saturday 26 May 2012

Workshop on Software Testing


LinkedIn


People Test wrote:
I found this and thought you might be interested.

People test (Found on LinkedIn Today)
July 22, 2012 - Bangalore, venue will be informed shorlty (See more articles »)
People who are wanting to make there career in Software Testing. Here is an Oppurtunity for you to know what is Testing, Oppurtunities and Career Growth in this Domain.

Hurry Limited seats available!!!!
Guest Speakers: From Reputed Companies with lot of Industry Experience.
Contact peopletestdpp@gmail.com for registrations.

Tuesday 24 April 2012

Basics of Software Testing


Software Testing:
1. The process of operating a product under certain conditions, observing and recording the results, and making an evaluation of some part of the product.
2. The process of executing a program or system with the intent of finding defects/bugs/problems.
3. The process of establishing confidence that a product/application does what it is supposed to.

Verification:
"The process of evaluating a product/application/component to evaluate whether the output of the development phase satisfies the conditions imposed at the start of that phase.
Verification is a Quality control process that is used to check whether or not a product complies with regulations, specifications, or conditions imposed at the start of a development phase. This is often an internal process."

Validation:
"The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified user requirements.
Validation is Quality assurance process of establishing evidence that provides a high degree of assurance that a product accomplishes its intended requirements."

Quality Assurance (QA):
"A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
A planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and products or services conform to specified requirements."

Quality Control (QC):
The process by which product quality is compared with applicable standards, and the action is taken when nonconformance is detected.

White Box:
In this type of testing we use an internal perspective of the system/product to design test cases based on the internal structure. It requires programming skills to identify all flows and paths through the software. The test designer uses test case inputs to exercise flows/paths through the code and decides the appropriate outputs.

Gray Box:
Gray box testing is a software testing method that uses a combination of black box testing and white box testing. Gray box testing is not purely black box testing, because the tester has knowledge of some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the gray box testing, a black box approach is taken in applying inputs to the software under test and observing the outputs.

Black Box:
Black box testing takes an external perspective of the test object to design and execute the test cases. In this testing technique one need not know the test object's internal structure. These tests can be functional or non-functional, however these are usually functional. The test designer selects valid and invalid inputs and determines the correct output.
Below are some of the important types of testing.

Functional Testing:

Functionality testing is performed to verify whether the product/application meets the intended specifications and the functional requirements mentioned in the documentation.
Functional tests are written from a user's perspective. These tests confirm that the system does what the users are expecting it to do.
Both positive and negative test cases are performed to verify that the product/application responds correctly. Functional testing is critically important for the product's success, since it is the customer's first opportunity to be disappointed.

Structural Testing:

In Structural testing, a white box testing approach is taken with focus on the internal mechanism of a system or component
 - Types of structural testing:
– Branch testing
– Path testing
– Statement testing

System Testing:

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
System testing is performed on the entire system with reference of a Functional Requirement Specification(s) (FRS) and/or a System Requirement Specification (SRS).

Integration Testing:

Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Testing in which software components or hardware components or both are combined and tested to evaluate the interaction between them.

Unit Testing:

Testing of individual software components or groups of related components
Testing conducted to evaluate whether systems or components pass data and control correctly to one another

Regression Testing:

When a defect is found in verification and it is fixed we need to verify that
1) the fix was done correctly
2) to verify that the fix doesn’t break anything else. This is called regression testing.
Regression testing needs to be performed to ensure that the reported errors are indeed fixed. Testing also needs to be performed to ensure that the fixes made to the application do not cause new errors to occur.
Selective testing of a system or component to verify that modifications have not caused unintended effects.

Retesting:

Retesting means executing the same test case after fixing the bug to ensure the bug fixing.

Negative Testing:

Negative testing is testing the application beyond and below its limits.
For ex: If the requirements is to check for a name (Characters),
1) We can try to check with numbers.
2) We can enter some ascii characters.
3) First we can enter some numbers and then some characters.
4) If the name should have some minimum length, we can check beyond that length.

Performance Testing:

Performance test is testing the product/application with respect to various time critical functionalities. It is related to benchmarking of these functionalities with respect to time. This is performed under a considerable production sized setup.
Performance Tests are tests that determine end to end timing (benchmarking) of various time critical business processes and transactions, while the system is under low load, but with a production sized database. This sets 'best possible' performance expectation under a given configuration of infrastructure.
It can also serve to validate and verify other quality attributes of the system, such as scalability(measurable or quantifiable), reliability and resource usage.
Under performance testing we define the essential system level performance requirements that will ensure the robustness of the system.
The essential system level performance requirements are defined in terms of key behaviors of the system and the stress conditions under which the system must continue to exhibit those key behaviors.
Some examples of the Performance parameters (in a Patient monitoring system - Healthcare product) are,
1. Real-time parameter numeric values match the physiological inputs
2. Physiological input changes cause parameter numeric and/or waveform modifications on the display within xx seconds.
3. The system shall transmit the numeric values frequently enough to attain an update rate of x seconds or shorter at a viewing device.




Stress Testing:

1. Stress Tests determine the load under which a system fails, and how it fails.
2. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how.
3. A graceful degradation under load leading to non-catastrophic failure is the desired result.
Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.
Some examples of the Stress parameters (in a Patient monitoring system - Healthcare product) are,
1. Patient admitted for 72 hours and all 72 hours of data available for all the parameters (Trends).
2. Repeated Admit / Discharge (Patient Connection and Disconnection)
3. Continuous printing
4. Continuous Alarming condition

Load Testing:

Load testing is the activity under which Anticipated Load is applied on the System, increasing the load slightly and checking when the performance starts to degrade.
Load Tests are end to end performance tests under anticipated production load. The primary objective of this test is to determine the response times for various time critical transactions and business processes.
Some of the key measurements provided for a web based application include:
1. How many simultaneous users, can the web site support?
2. How many simultaneous transactions, can the web site support?
3. Page load timing under various traffic conditions.
4. To find the bottlenecks.

Adhoc Testing:

Adhoc testing is a commonly used term for software testing performed without planning and documentation.
The tests are intended to be run only once, unless a defect is discovered.

Exploratory Testing:

Exploratory testing is a method of manual testing that is described as simultaneous learning, design and execution.

Exhaustive Testing:

Testing which covers all combinations of input values and preconditions for an element of the software under test.

Sanity Testing:

A sanity test is a narrow regression test that focuses on one or a few areas of functionality: it is narrow and deep, testing a few functions/parameters and checking all their main features.
It can also be performed on the overall application (all features) initially, to check whether the application is acceptable in terms of availability and usability.
Sanity testing is usually done by the test engineer.

Smoke Testing:

In the software industry, smoke testing is a wide and shallow approach whereby all areas of the application are tested without going too deep.
Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke.
When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing is usually done by developers or white-box engineers.

Soak Testing:

Soak testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use.
For example, in software testing, a system may behave exactly as expected when tested for 1 hour. However, when it is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.

Compatibility Testing:

Compatibility testing is done to check that the system/application is compatible with the working environment.
For example if it is a web based application then the browser compatibility is tested.
If it is an installable application/product then the operating system compatibility is tested.
Compatibility testing verifies that your product functions correctly on a wide variety of hardware, software, and network configurations. Tests are run on a matrix of platform hardware configurations including High End, Core Market, and Low End.

Alpha Testing:

Testing performed by actual customers at the developer’s site.

Beta Testing:

Testing performed by actual customers at their own site (the customer's site).




Acceptance Testing:

Formal testing conducted to enable a user, customer or other authorized entity to determine whether to accept a system or component.

Static Testing:

The intention to find defects/bugs without executing the software or the code is called static testing. Example: Review, Walkthrough, CIP(Code Inspection Procedure).
Static testing is a form of software testing where the software isn't actually used. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and/or manually reviewing the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.

Dynamic testing:

Dynamic testing is testing the software by executing it; functional testing is the most common example.


Bug Life Cycle

Some terminologies in the Defect / Bug life cycle.

Bug:

Product is still not yet released in the market. Found before production.

Defect:

Product is released in the market. Found after production.



Defect States:

States:
-Open
-Fixed
-Verified
-Closed
-Double/Duplicate
-Reject (works as per requirements, won't fix, works for me)


Severity:

Severity refers to the level of impact, the defect has created on the users. This is set by the tester. Below are some of the common types.
Unacceptable Risk/Blocker: A crash, or a risk to the patient/user, operator, service personnel, etc.; creates a safety risk
Design NC/Critical: Non-conformance to design input (product specification)
Improvement Opportunity: Observations that do not represent a Design Defect or belong to any of the previous classes
- Enhancements
- Productivity Opportunities
However this varies from company to company.

Priority:

Priority refers to the "Urgency" with which the incident has to be fixed. This is set by the lead.
In most of the cases a critical incident can be of high priority.

Defect example for each severity:

Unacceptable Risk - In a patient monitoring system, the product/application crashes, or an incorrect parameter value is displayed (e.g. a heart rate of 80 is shown instead of 60).
Design NC - Patient information is not displayed under User Information.
Minor Issue - The IP address textbox should be grayed out when DHCP is selected.
Improvement Opportunity - Some cosmetic defect which is negligible, e.g. when a sub-menu is opened, its borders are cut off; or a documentation issue.

What to do when a defect is not reproducible:

1. Test on another setup.
2. Take pictures and video the first time the defect occurs.
3. Save and Attach the Log file.



Test design Techniques

Some general software documentation terms.

USR:

User Requirements Specification - Contains User (Customer) requirements received from the user/client.

PS:

Product Specification. Derived from UR such that it can be implemented in the product. A high level product requirement.

DRS:

Design Requirement Specifications - Related to design. Design requirements. For hardware components.

SRS:

Software Requirement Specifications - Low level requirements derived from PS. For Software related components.

SVP:

Software Verification Procedure written from SRS. These are the actual test cases.

DVP:

Design Verification Procedure written from DRS. More design oriented test cases.

Some of the Test Design Techniques are as below,

Test Design Technique 1 - Fault Tree analysis

Fault tree analysis is useful both in designing new products/services (test cases for new components) and in dealing with identified problems in existing products/services. Fault tree analysis (FTA) is a failure analysis in which the system is analyzed using Boolean logic.

Test Design Technique 2 - Boundary value analysis

Boundary value analysis is a software test design technique in which tests are designed to include representatives of boundary values. The test cases are developed around the boundary conditions. One common example of this technique: if a text box (named username) supports 10 characters, we can write test cases whose inputs contain 0, 1, 5, 10 and more than 10 characters, as sketched below.
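A minimal TSL sketch of the idea, assuming a "Name:" edit field limited to 10 characters (the window and object names are illustrative):
set_window ("Flight Reservation", 10);
edit_set ("Name:", "");              # lower boundary: empty input
edit_set ("Name:", "A");             # just above the lower boundary
edit_set ("Name:", "ABCDEFGHIJ");    # exactly 10 characters: the upper boundary
edit_set ("Name:", "ABCDEFGHIJK");   # 11 characters: should be rejected or truncated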

Test Design Technique 3 - Equivalence partitioning

Equivalence partitioning is a software test design technique that divides the input data of a software unit into partitions of data from which test cases can be derived.

Test Design Steps - Test case writing steps

1. Read the requirement. Analyze the requirement.
2. Write the related prerequisites and information steps if required (ex. If some setting should have already been done, or Some browser should have been selected).
3. Write the procedure (steps to perform some connection, configuration). This will contain the majority of the steps to reproduce if the test case fails.
4. Write a step to capture the tester input/record. This is used for objective evidence.
5. Write the Verify step (Usually the expected Result).

What is a Test Case? – It is a document, which specifies the test inputs, events and expected results developed for a particular objective, so as to evaluate a particular program path or to verify the compliance with a specific requirement based on the test specification.
Areas for Test Design - Below are some of the areas in Test Design.
o Deriving Test Cases from Use Cases
o Deriving Test Cases from Supplementary Specifications
>>>>>> Deriving Test Cases for Performance Tests
>>>>>> Deriving Test Cases for Security / Access Tests
>>>>>> Deriving Test Cases for Configuration Tests
>>>>>> Deriving Test Cases for Installation Tests
>>>>>> Deriving Test Cases for other Non-Functional Tests
o Deriving test cases for Unit Tests
>>>>>> White-box tests
>>>>>> Black-box tests
o Deriving Test Cases for Product Acceptance Test
o Build Test Cases for Regression Test

Test case content:

The contents of a test case are,
* Prerequisites
* Procedures
* Information if required
* Tester input/record
* Verify step

Please refer the Software Test Templates area for a Test Case Template.
Types of Test Cases - They are often categorized or classified by the type of test / requirement for test they are associated with, and will vary accordingly. Best practice is to develop at least two test cases for each requirement for test:
1. A Test Case to demonstrate the requirement has been achieved. Often referred to as a Positive Test Case.
2. Another Test Case, reflecting an unacceptable, abnormal or unexpected condition or data, to demonstrate that the requirement is only achieved under the desired condition, referred to as a Negative Test Case.



Software Testing Process


One of the main processes involved in Software Testing is the preparation of the Test Plan.
A Test Plan would typically contain the following:
- Purpose.
- Scope.
- References.
- Resources Required.
o Roles and responsibilities
o Test Environment and Tools
- Test Schedule.
- Types of Testing involved.
- Entry/Exit criteria.
- Defect Tracking.
- Issues/Risks/Assumptions/Mitigations.
- Deviations.

Please refer the Software Test Templates area for a Test Plan Template.

Process involved in Test Case Design:

1. Review of requirements.
2. Provide review comments on the requirements.
3. Fix the review comments.
4. Baseline the requirements document.
5. Prepare test cases with respect to the baselined requirements documents.
6. Send the test cases for review.
7. Fix the comments for the test cases.
8. Baseline the test case document.
9. If there are any updates in the requirements, Update the requirements document.
10. Send the updated requirements document for review.
11. Fix any comments, if received.
12. Baseline the requirements document.
13. Update the test case document with respect to the latest baselined requirements document.

Please refer the Software Test Templates area for a Test Case Template.

Traceability matrix:

A traceability matrix is a matrix which associates the requirements with their work products, i.e. the test cases. It can also be used to associate use cases with requirements.
The advantage of traceability is to ensure the completeness of requirements.
Every test case should be associated with a requirement, and every requirement should have one or more associated test cases.




Testers Role

The various Roles and Responsibilities of a Tester or Senior Software Tester

Requirement Analysis - in case of doubt, check with global counterparts or a clinical specialist.
Test design - Always clarify, never assume and write.
Review of Other Test design documents.
Approving Test Design documents.
Test Environment setup.
Test results gathering.
Evaluation of any tools if required. - ex. QTP, Valgrind memory profiling tool.
Mentoring of new joinees. Helping them ramp up.

Finding Defects

Testers need to identify two types of defects:
Variance from Specifications – A defect from the perspective of the builder of the product.
Variance from what is Desired – A defect from a user (or customer) perspective.

Testing Constraints

Anything that inhibits the tester’s ability to fulfill their responsibilities is a constraint.
Constraints include:
Limited schedule and budget
Lacking or poorly written requirements
Changes in technology
Limited tester skills

The various Roles and Responsibilities of a Software Test Lead

The role of the Test Lead is to effectively lead the testing team. To fulfill this role, the Lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles. This means that the Lead must manage and implement or maintain an effective testing process, which involves creating a test infrastructure that supports robust communication and a cost-effective testing framework.
The Test Lead is responsible for:
* Defining and implementing the role testing plays within the organizational structure.
* Defining the scope of testing within the context of each release / delivery.
* Deploying and managing the appropriate testing framework to meet the testing mandate.
* Implementing and evolving appropriate measurements and metrics.
o To be applied against the Product under test.
o To be applied against the Testing Team.
* Planning, deploying, and managing the testing effort for any given engagement / release.
* Managing and growing Testing assets required for meeting the testing mandate:
o Team Members
o Testing Tools
o Testing Process
* Retaining skilled testing personnel.

The various Roles and Responsibilities of a Software Test Manager

* Manage and deliver testing projects with multi-disciplinary teams while respecting deadlines.
* Optimize and increase testing team productivity by devising innovation solutions or improving existing processes.
* Experience identifying, recruiting and retaining strong technical members.
* Bring a client-based approach to all aspects of software testing and client interactions when required.
* Develop and manage organizational structure to provide efficient and productive operations.
* Provide issue resolution to all direct reportees/subordinates, escalating issues when necessary with appropriate substantiation and suggestions for resolution.
* Provide required expertise to all prioritization, methodology, and development initiative.
* Assist direct reports in setting organization goals and objectives and provide annual reviews with reporting of results to HR and the executive team.
* Work with Product Management and other teams to meet organization initiatives.
* Promote customer orientation through organizing and motivating development personnel.
The Test Manager must understand how testing fits into the organizational structure, in other words, clearly define its role within the organization.

Challenges in People Relationships testing

The top ten people challenges have been identified as:
Training in testing
Relationship building with developers
Using tools
Getting managers to understand testing
Communicating with users about testing
Making the necessary time for testing
Testing “over the wall” software
Trying to hit a moving target
Fighting a lose-lose situation
Having to say “no”
*According to the book “Surviving the Top Ten Challenges of Software Testing, A People-Oriented Approach” by William Perry and Randall Rice




Software Test Management

Software Test Management involves a set of activities for managing a software testing cycle. It is the practice of organizing and controlling the process and activities required for the testing effort.
Some of the goals of Software Test Management are to plan, develop, execute, and assess all testing activities within the application/product development. This includes coordinating the efforts of all those involved in the testing cycle, tracking dependencies and relationships among test assets and, most importantly, defining, measuring, and tracking quality goals.
Software Test Management can be broken into different phases: organization, planning, authoring, execution, and reporting.
Test artifact and resource organization is a clearly necessary part of test management. This requires organizing and maintaining an inventory of items to test, along with the various things used to perform the testing. This addresses how teams track dependencies and relationships among test assets. The most common types of test assets that need to be managed are:
* Test scripts
* Test data
* Test software
* Test hardware
Test planning is the overall set of tasks that address the questions of why, what, where, and when to test. The reason why a given test is created is called a test motivator (for example, a specific requirement must be validated). What should be tested is broken down into many test cases for a project. Where to test is answered by determining and documenting the needed software and hardware configurations. When to test is resolved by tracking iterations (or cycles, or time period) to the testing.
Test authoring is a process of capturing the specific steps required to complete a given test. This addresses the question of how something will be tested. This is where somewhat abstract test cases are developed into more detailed test steps, which in turn will become test scripts (either manual or automated).
Test execution entails running the tests by assembling sequences of test scripts into a suite of tests. This is a continuation of answering the question of how something will be tested (more specifically, how the testing will be conducted).
Test reporting is how the various results of the testing effort are analyzed and communicated. This is used to determine the current status of project testing, as well as the overall level of quality of the application or system.
The testing effort will produce a great deal of information. From this information, metrics can be extracted that define, measure, and track quality goals for the project. These quality metrics then need to be passed to whatever communication mechanism is used for the rest of the project metrics.
A very common type of data produced by testing, one which is often a source for quality metrics, is defects. Defects are not static, but change over time. In addition, multiple defects are often related to one another. Effective defect tracking is crucial to both testing and development teams.



Software Test Estimation



Estimation must be based on previous projects: All estimation should draw on data and experience from previous, similar projects.
Estimation must be recorded: All decisions and assumptions should be recorded. This is important because, if requirements change for any reason, the records help the testing team re-estimate.
Software test estimation shall always be based on the software requirements: All estimation should be based on what will be tested. The software requirements shall be read and understood by the testing team as well as the development team; without the testing team's participation, no serious estimation can be produced.
Estimation must be verified: All estimates should be verified, for example by recording two independent estimates in separate spreadsheets and comparing them at the end. If there is a significant deviation between them, re-estimate.
Software test estimation must be supported by tools: Tools such as a spreadsheet pre-loaded with metrics can automatically calculate the cost and duration of each testing phase.
Software test estimation shall be based on expert judgment: Experienced team members can estimate far more reliably how long the testing will take.

Estimation for Test execution

It can be based on the following points (a simple estimation sketch in Python follows the list):
* Number of Man Days available for testing
* Number of test cases to be executed
* Complexity of test cases
* Data from previous test cycles.
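A minimal sketch of such an estimate in Python, assuming illustrative complexity weights (hours per test case) and team size; all numbers below are assumptions, not standards.

# Hypothetical average execution hours per test case, by complexity.
HOURS_PER_CASE = {"simple": 0.25, "medium": 0.5, "complex": 1.0}

def estimate_execution_days(case_counts, testers, hours_per_day=8.0):
    # case_counts: dict mapping complexity -> number of test cases to execute.
    # testers: number of people available for execution.
    total_hours = sum(HOURS_PER_CASE[c] * n for c, n in case_counts.items())
    return total_hours / (testers * hours_per_day)

# Example: 200 simple, 150 medium and 50 complex cases, 4 testers.
days = estimate_execution_days({"simple": 200, "medium": 150, "complex": 50}, testers=4)
print(round(days, 1), "calendar days")   # ~5.5 days; compare against previous cycles as a sanity check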

Functional Point Analysis

A method for Estimation
One of the initial design criteria for function points was to provide a mechanism that both software developers and users could utilize to define functional requirements. It was determined that the best way to gain an understanding of the users' needs was to approach their problem from the perspective of how they view the results an automated system produces. Therefore, one of the primary goals of Function Point Analysis is to evaluate a system's capabilities from a user's point of view. To achieve this goal, the analysis is based upon the various ways users interact with computerized systems. From a user's perspective a system assists them in doing their job by providing five (5) basic functions. Two of these address the data requirements of an end user and are referred to as Data Functions. The remaining three address the user's need to access data and are referred to as Transactional Functions.
The Five Components of Function Points
Data Functions
* Internal Logical Files
* External Interface Files
Transactional Functions
* External Inputs
* External Outputs
* External Inquiries
In addition to the five functional components described above, there are two adjustment factors that need to be considered in Function Point Analysis.
Functional Complexity - The first adjustment factor considers the functional complexity of each unique function. Functional complexity is determined by the combination of data groupings and data elements of a particular function. The number of data elements and unique groupings are counted and compared against a complexity matrix that rates the function as low, average or high complexity. Each of the five functional components (ILF, EIF, EI, EO and EQ) has its own complexity matrix.
Value Adjustment Factor - The Unadjusted Function Point count is multiplied by the second adjustment factor, the Value Adjustment Factor. This factor is derived from fourteen general system characteristics that describe the system's technical and operational environment.
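A minimal sketch of the arithmetic in Python, using the commonly published IFPUG complexity weights; the weights are reference values and the sample counts are purely illustrative.

# Commonly published IFPUG weights per function type and complexity rating.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},
    "EO":  {"low": 4, "average": 5,  "high": 7},
    "EQ":  {"low": 3, "average": 4,  "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_fp(counts):
    # counts: e.g. {"EI": {"low": 6, "average": 2}, "ILF": {"average": 2}, ...}
    return sum(WEIGHTS[ftype][cx] * n
               for ftype, by_cx in counts.items()
               for cx, n in by_cx.items())

def adjusted_fp(ufp, gsc_ratings):
    # gsc_ratings: the 14 general system characteristic ratings, each scored 0..5.
    vaf = 0.65 + 0.01 * sum(gsc_ratings)   # Value Adjustment Factor
    return ufp * vaf

counts = {"EI": {"low": 6, "average": 2}, "EO": {"average": 4},
          "EQ": {"low": 3}, "ILF": {"average": 2}, "EIF": {"low": 1}}
ufp = unadjusted_fp(counts)                  # 80 unadjusted function points
print(ufp, adjusted_fp(ufp, [3] * 14))       # 80 and 85.6 adjusted function points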


 Software Test Metrics

Software test metrics are used in decision making; they are derived from raw test data. Because what cannot be measured cannot be managed, test metrics are central to test management and help showcase the progress of testing.
Some of the common software test metrics are listed below.

What is Test Summary

It is a document summarizing testing activities and results, and it contains an evaluation of the test items.

Requirements Volatility

Formula = {(No. of requirements added + No. of requirements deleted + No. of requirements modified) / No. of initial approved requirements} * 100
Unit Of measure = Percentage
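For example (illustrative numbers): if 5 requirements were added, 2 deleted and 3 modified against 50 initially approved requirements, volatility = ((5 + 2 + 3) / 50) * 100 = 20%.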

Review Efficiency

Components - No. of Critical, Major & Minor review defects
- Effort spent on review in hours
- Weightage Factors for defects:
- Critical = 1; Major = 0.4; Minor = 0.1
Formula = (No. of weighted review defects/ Effort spent on reviews)
Unit Of measure = Defects per person hour
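For example (illustrative numbers): if a review taking 10 person hours finds 2 critical, 5 major and 10 minor defects, the weighted defect count is (2 * 1) + (5 * 0.4) + (10 * 0.1) = 5, so review efficiency = 5 / 10 = 0.5 weighted defects per person hour.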

Productivity in Test Execution

Formula = (No. of test cases executed / Time spent in test execution)
Unit Of measure = Test Cases per person per day
Here the time is the cumulative time of all the resources. Example: if 1000 test cases were executed in a cycle by 4 resources, where
resource 1 executed 300 test cases in 2 days,
resource 2 executed 400 test cases in 3 days,
resource 3 executed 75 test cases in 1 day, and
resource 4 executed 225 test cases in 4 days,
then the cumulative time spent executing the 1000 test cases is 10 man days,
and the Productivity in Test Execution = 1000 / 10 = 100.
So the productivity in test execution is 100 test cases per person per day.

Defect Rejection Ratio

Formula = (No. of defects rejected / Total no. of defects raised) * 100
Unit of Measure = Percentage

Defect Fix Rejection Ratio

Formula = (No. of defect fixes rejected / No. of defects fixed) * 100
Unit of Measure = Percentage

Delivered Defect Density

Components - No. of Critical, Major & Minor defects found during validation/customer review and acceptance testing
- Weightage Factors for defects:
- Critical = 1; Major = 0.4; Minor = 0.1
Formula = [(No of weighted defects found during Validation/customer review + acceptance testing)/ (Size of the work product)]
Unit Of measure = Defects for the work product / Cycle.

Outstanding defect ratio

Formula = (Total number of open defects/Total number of defects found) * 100
Unit Of measure = Percentage

COQ (Cost of Quality)

Formula = [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project effort] * 100
Unit Of measure = Percentage
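A minimal Python sketch that computes a few of the percentage metrics above from raw counts; the counts below are illustrative, not a standard data set.

def percentage(part, whole):
    # Generic helper: part expressed as a percentage of whole.
    return (part / whole) * 100 if whole else 0.0

# Illustrative raw data from one test cycle.
defects_raised, defects_rejected = 120, 6
defects_fixed, fixes_rejected = 100, 4
defects_open = 15
appraisal, prevention, failure, total_effort = 300, 80, 120, 2000   # person hours

print("Defect Rejection Ratio     :", percentage(defects_rejected, defects_raised), "%")   # 5.0 %
print("Defect Fix Rejection Ratio :", percentage(fixes_rejected, defects_fixed), "%")      # 4.0 %
print("Outstanding Defect Ratio   :", percentage(defects_open, defects_raised), "%")       # 12.5 %
print("Cost of Quality (COQ)      :", percentage(appraisal + prevention + failure, total_effort), "%")  # 25.0 %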


Some metrics relate to the entire test cycle.

Schedule Variance

Schedule variance with respect to the latest baseline
Formula = {(Actual End Date - Latest Baselined End Date)/(Latest Baselined End Date - Latest Baselined Start Date)} *100
Unit Of measure = Percentage
Schedule variance with respect to the original baseline
Formula = {(Actual End Date - Original Baselined End Date)/(Original Baselined End Date - Original Baselined Start Date)} *100
Unit Of measure = Percentage

Effort variance

Effort variance with respect to the original baseline
Formula = {(Actual Effort - Estimated Effort) / (Estimated Effort)}* 100
Unit Of measure = Percentage
Effort variance with respect to the last revised baseline
Formula = {(Actual Effort - Revised Effort) / (Revised Effort)} * 100
Unit Of measure = Percentage
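A minimal Python sketch of the variance formulas above, using illustrative dates and effort figures.

from datetime import date

def schedule_variance(actual_end, baseline_end, baseline_start):
    # Schedule variance (%) of the actual end date against a baseline, per the formula above.
    slip = (actual_end - baseline_end).days
    planned = (baseline_end - baseline_start).days
    return slip / planned * 100

def effort_variance(actual_effort, baseline_effort):
    # Effort variance (%) against a baseline effort (e.g. in person days).
    return (actual_effort - baseline_effort) / baseline_effort * 100

# Illustrative values: testing was baselined for 1 Aug - 31 Aug but actually ended 6 Sep.
print(schedule_variance(date(2012, 9, 6), date(2012, 8, 31), date(2012, 8, 1)))   # 20.0 %
print(effort_variance(actual_effort=55, baseline_effort=50))                      # 10.0 %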



Software Test Release Metrics

Some of the software test release related metrics are as below. However, they vary from company to company and project to project.

Test Cases executed

General Guidelines:
1. All the test cases should be executed at least once, i.e. 100% test case execution.
2. Passed test cases >= 98% (this threshold can vary).

Effort Distribution

General Guidelines:
1. Sufficient effort has been spent on all the phases, components/modules of the software/product under test.
2. This needs to be quantified as (Effort spent per module / Effort spent on all modules)*100
Example: effort should be quantified for each phase, such as requirements analysis, design (test case design), execution, etc.

Open Defects with priority

General Guidelines:
1. If we plot a graph with number of open defects against time, it should show a downward trend.
2. There should not be any open show stoppers/blockers before release. So 0 blockers in open state.
3. Close to 0 critical/major defects before release. In practice this is rarely exactly 0; the remaining fixes are postponed to the next release only if they are acceptable to ship with.






Software Testing Tools

The software testing tools can be classified into the following broad categories.
Test Management Tools
White Box Testing Tools
Performance Testing Tools
Automation Testing Tools

Test Management Tools:

Some of the objectives of a Test Management Tool are as below. However, all of these characteristics may not be available in a single tool, so the team may end up using multiple tools, each focused on a set of key areas.
* To manage Requirements.
* To Manage Manual Test Cases, Suites and Scripts.
* To Manage Automated Test Scripts.
* To manage Test Execution and the various execution activities (recording results, etc.).
* To be able to generate various reports with regard to status, execution, etc.
* To Manage defects. In other words a defect tracking tool.
* To provide configuration management and version control (for example, for controlling and sharing the Test Plan, Test Reports, Test Status, etc.).
Some of the tools that can be used, along with their key areas of strength, are as below:
Uses of Telelogic/IBM DOORS
1. Used for writing requirements/test cases.
2. Baselining functionality is available.
3. Documents can be exported to Microsoft Excel/Word.
4. A traceability matrix is implemented in DOORS, so requirements can be mapped to test cases and vice versa.
Uses of HP Quality Center
1. Used for writing requirements/test cases.
2. Used for baselining of documents.
3. Used for exporting documents.
4. Provides a traceability matrix, so requirements can be mapped to test cases and vice versa.
Some of the defect tracking tools are as below,
* IBM Lotus Notes
* Bugzilla (Open Source/Free)
A comparison of different issue tracking systems is available online.
Some of the other Test management tools are as below,
* Bugzilla Testopia
* qaManager
* TestLink
Configuration Management Tools
Roger Pressman, in his book Software Engineering: A Practitioner's Approach, states that CM "is a set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made."
In Software Testing, configuration management plays the role of tracking and controlling changes in the various test components (example - for Controlling and Sharing the Test Plan, Test Reports, Test Status, etc.). Configuration management practices include revision control and the establishment of baselines.
Some of the configuration management tools are,
* IBM ClearCase
* CVS
* Microsoft VSS


White Box Testing Tools

White Box Testing Tools:

Some of the aspects and characteristics required in a white box testing tool are:
To check the Code Coverage
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing. (A small usage sketch of one such tool follows the tool lists below.)
Some of the tools available in this space are,
Tools for C / C++
* IBM Rational Pure Coverage
* Cantata++
* Insure++
* BullseyeCoverage
Tools for C# .NET
* NCover
* Testwell CTC++
* Semantic Designs Test Coverage
* TestDriven.NET
* Visual Studio 2010
* PartCover (Open Source)
* OpenCover (Open Source)
Tools for COBOL
* Semantic Designs Test Coverage
Tools for Java
* Clover
* Cobertura
* Jtest
* Serenity
* Testwell CTC++
* Semantic Designs Test Coverage
Tools for Perl
* Devel::Cover
Tools for PHP
* PHPUnit with Xdebug
* Semantic Designs Test Coverage
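As an illustration of how a coverage tool is typically driven, here is a minimal sketch using coverage.py (a Python coverage tool, assumed to be installed; it is not one of the tools listed above). The toy function and the single "test" call are placeholders.

import coverage

def classify(n):
    # Toy code under test.
    if n % 2 == 0:
        return "even"
    return "odd"

cov = coverage.Coverage()
cov.start()
classify(4)          # the "test" exercises only the even branch
cov.stop()
cov.save()
cov.report(show_missing=True)   # prints statement coverage and lists the lines never executed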
To check Coding Standards
A comprehensive list of tools for checking coding standards can be found online.
To check the Code Complexity
Code complexity (cyclomatic complexity) is a measure of the number of linearly independent paths through a program module. It is calculated by counting the decision points found in the code (if, else, do, while, throw, catch, return, break, etc.) and adding one.
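As a rough illustration of the counting idea, the Python sketch below approximates cyclomatic complexity by walking a function's syntax tree and counting branch points; real complexity tools are considerably more thorough.

import ast

def approx_cyclomatic_complexity(source: str) -> int:
    # Rough approximation: 1 + the number of branch points found in the code.
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for i in range(x):
        if i % 2 and i % 3:
            pass
    return "positive"
'''
print(approx_cyclomatic_complexity(sample))   # 6: one per if/elif/for/inner if/boolean "and", plus one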
Some of the free tools available for checking the code complexity are as below,
* devMetrics by Anticipating Minds.
* Reflector Add-In.
To check Memory Leaks
A memory leak happens when an application/program has acquired memory but is unable to release it back to the operating system. A memory leak can reduce the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become used up and all or part of the system or device stops working correctly, the application fails, or the system crashes. (A small illustrative sketch follows the tool list below.)
Some of the free tools available for checking the memory leak issues are as below,
* Valgrind
* Mpatrol
* Memwatch
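As a simple illustration of the pattern in a garbage-collected language, the Python sketch below shows an unbounded module-level cache that grows forever because entries are never evicted, followed by a bounded alternative; the names are illustrative only.

from collections import OrderedDict

# Leaky pattern: a module-level cache that only ever grows.
_result_cache = {}

def process_request(request_id, payload):
    # Caches every result forever; in a long-running process this steadily consumes memory.
    result = payload.upper()               # stand-in for real work
    _result_cache[request_id] = result     # never evicted -> memory grows without bound
    return result

# A bounded alternative: keep only the most recent N entries.
class BoundedCache(OrderedDict):
    def __init__(self, max_entries=1000):
        super().__init__()
        self.max_entries = max_entries

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.max_entries:
            self.popitem(last=False)       # evict the oldest entry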

CAPA in Software Testing

CAPA stands for Corrective Action and Preventive Action. An active CAPA program is a regulatory requirement of both the FDA and ISO, and is an essential element of a quality system.
CAPA also supports customer satisfaction: the ability to correct existing problems, or to implement controls that prevent potential problems, is equally important for continued customer satisfaction.
Quality issues that are not caught or fixed soon enough carry their own financial impact.
Corrective Action is the process of reacting to an existing product problem, customer issue or other nonconformity and fixing it.
Preventive Action is the process of detecting potential problems or nonconformances and resolving them.


Action plan for Defect Slippage:
1. Increase regression testing across all functions.
2. Adopt some validation test cases during regression, i.e. testing against real-world scenarios.
3. Ensure that different kinds of devices are used during testing. Usually, across the different rounds of testing, the same device types are used, because the choice of device type is left to the tester's discretion and the tester is more comfortable using the same type of device; the result is that not all device types get tested. Here "device" relates to cross-compatibility.
4. Perform RCA (Root Cause Analysis) on the defects found after testing and identify the areas where most defects arise. Concentrate on these areas.
5. Check domain understanding among the team from time to time. Regular domain understanding sessions help increase awareness.
6. Set up a reliability system (a system that can be used to check performance and reliability) and ensure that the test environment is connected to it. Regularly send the crash logs to the development team to analyze whether any new defects were responsible for the crashes.


Action plan for Productivity Increase:
1. Hold regular domain knowledge training and brown bag sessions on testing, to increase knowledge and make testing easier.
2. Assign each tester a good mix of work items, for example complex test case execution along with simple test case execution. This averages out test productivity across easy and hard work. Ensure that this kind of mix is applied consistently across all members of the team.
3. Constant motivation and feedback.



Testing Infrastructure


The testing infrastructure consists of the testing activities, events, tasks and processes that immediately support automated, as well as manual, software testing. The stronger the infrastructure, the more stability, continuity and reliability it provides for the automated testing process.
The testing infrastructure includes:
• Test Plan
• Test cases
• Baseline test data
• A process to refresh or roll back baseline data
• Dedicated test environment, i.e. stable back end and front end
• Dedicated test lab
• Integration group and process
• Test case database, to track and update both automated and manual tests
• A way to prioritize, or rank, test cases per test cycle
• Coverage analysis metrics and process
• Defect tracking database
• Risk management metrics/process (McCabe tools if possible)
• Version control system
• Configuration management process
• A method/process/tool for tracing requirement to test cases
• Metrics to measure improvement
The testing infrastructure serves many purposes, such as:
• Having a place to run automated tests, in unattended mode, on a regular basis
• A dedicated test environment to prevent conflicts between on-going manual and automated testing
• A process for tracking results of test cases, both those that pass or fail
• A way of reporting test coverage levels
• Ensuring that expected results remain consistent across test runs
• A test lab with dedicated machines for automation enables a single automation test suite to conduct multi-user and stress testing
It is important to remember that it is not necessary to have all the infrastructure components in place at the beginning to be successful. Prioritize the list and add components gradually, over time, so the current culture has time to adapt to and integrate the changes. Experience has shown it can take an entire year to integrate one major process, plus one or two minor components, into the culture.
Again, based on experience, start by creating a dedicated test environment and standardizing test plans and test cases. This, along with a well-structured automated testing system, will go a long way toward succeeding in automation.