Below are the meanings/definitions of terms frequently used in Software Testing.
Note that I am defining these terms in my own words to explain them easily, so they may not be the exact textbook definitions.
Software Development Life Cycle (SDLC)
SDLC involves different phases such as the Initial/Planning phase, Requirement analysis phase, Design phase, Coding phase, Testing phase, and Delivery and Maintenance phase.
These phases can be followed one after another linearly, as in the Waterfall model, or as in the V-model, which expects testing activities to be done in parallel with development activities.
The Initial phase involves gathering requirements by interacting with the Customer. Normally a Business Analyst will do this and prepare a requirement document.
Here, in the case of product development, the Customer is the internal marketing team. Otherwise, the Customer is the person who is paying for the project.
The Requirement analysis phase involves doing a detailed study of the customer requirements and judging the feasibility and scope of those requirements. It also involves tentative planning and technology and resource selection.
SRS (System Requirement Specification) will be created in this phase.
The Design phase involves dividing the whole project into modules and sub-modules through High Level Design (HLD) and Low Level Design (LLD).
The Coding phase involves the programmers creating the source code by referring to the design document. Coding standards, with proper comments, should be followed.
The Testing phase involves getting clarification for any unclear requirements, and then the Testing Team writing test cases based on the requirements. Once the build is released, the testing team will execute the test cases and report the bugs found during test case execution.
The Delivery & Maintenance phase involves installing the application at the customer's place and providing details such as release notes to the customer.
The Maintenance or Support Team will help the customers if they face any issues when using the application.
Software Testing - It is the process of verifying whether a software application or product meets the business and technical requirements, i.e. verifying whether the developed software application works as expected.
It is done by comparing the Actual Result against the Expected Result.
For example, if the requirement says "After entering a valid username and password the user should be logged in to the website", software testing is nothing but entering a valid username and password to verify whether the user gets logged in to the website.
But doing software testing is not so simple; a lot of complex factors are involved. It needs a lot of thinking and brainstorming.
Even in this simple example, we should know the things below.
- In which environment do we need to do the login testing?
- What is a valid username and password?
- How do we verify whether the user really logged in to the website?
- How long should it take to complete the login?
- How should the system behave if the user enters an invalid username or invalid password?
- What will happen if the user is already logged in?
- How should the system behave if someone continuously tries to log in with an invalid username and password?
- Is it possible to reach the target page even without logging in?
- How will the system behave if there is a network issue right after entering the username and password?
- What will happen if two people try to log in with the same username and password simultaneously from two different machines?
- How will the system behave if the user doesn't use the website for a long time after logging in?
- Will the login process take the same time even after millions of users are registered with the system?
- What will happen if thousands of people try to log in at the same time?
- What message will be shown to the user if the database which stores the user details goes down?
- Will the system provide any other interface (e.g. API, web service) apart from the standard UI for doing login?
- How do we help the user if he forgot his password?
- Will the system allow non-English usernames also?
- Most importantly, when do we need to do this login testing?
- What are the pre-requisites? (e.g. the user should be registered before testing login.)
- Is it necessary to do the testing in the production environment also?
- To whom should we report if the testing fails? What details do we need to provide? How do we come to know once it got fixed? What do we need to do after it is fixed?
- Is there any option (e.g. a "Remember Me" cookie, auto-complete) for making the login easy? Will these options break the login system in any way?
- Can anyone (e.g. a hacker) access the password during login?
- Is the password unreadable to the developers of the system, so as to avoid any misuse? (e.g. is the password encrypted before storing it in the database?)
- If we need to do this testing multiple times in many environments, is there any easy way to do it? (e.g. using scripts and automation tools such as QTP)
- Is the code written well enough so that future enhancements can be made easily?
- Who has to do this testing? Can the developer of the system also do the testing?
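A few of the questions above can be captured as automated checks. Here is a minimal sketch in Python; the login() function and all credentials are made-up stand-ins for the real system under test, not part of any real application.

```python
# A minimal sketch of a few login checks. login() is a hypothetical
# stand-in for the real system under test; all names are illustrative.

VALID_USER = "alice"
VALID_PASS = "s3cret"

def login(username, password):
    # Stand-in for the real login logic of the application.
    if username == VALID_USER and password == VALID_PASS:
        return "logged_in"
    return "invalid login details"

def test_valid_login():
    assert login(VALID_USER, VALID_PASS) == "logged_in"

def test_invalid_password():
    assert login(VALID_USER, "wrong") == "invalid login details"

def test_unknown_user():
    assert login("mallory", VALID_PASS) == "invalid login details"

for test in (test_valid_login, test_invalid_password, test_unknown_user):
    test()
print("all login checks passed")
```

In a real project, the body of login() would be replaced by calls that drive the actual website (for example through a browser automation tool), while the checks themselves stay the same.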
So, it is important to have a clear understanding of testing approaches before starting any testing.
Functional & Non-functional Testing
Functional testing mainly focuses on verifying whether the features requested in the requirement document are working correctly.
Non-functional testing checks the performance, stability, scalability, usability, internationalization and security of the software application.
Whitebox, blackbox and greybox are three testing methods.
White box testing is done by going through the code and understanding the algorithms used in it. It includes API testing and code coverage.
Black box testing is done without knowing the internal structure or code of the application. It helps to find more bugs effectively, but the tester may spend more time writing many test cases to check something which could have been tested easily with a single test case, had the internals been known.
Grey box testing involves having knowledge of the internal data structures and algorithms for writing the test cases, but testing at the user, or black-box, level.
Unit testing or component testing is done by the developers to make sure that each small piece of code works correctly.
Each and every unit of the program is tested to confirm whether the conditions, functions and loops work fine.
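As a sketch of what such a unit test looks like, here is an example using Python's built-in unittest module. The apply_discount function is invented purely for illustration.

```python
# A minimal unit-test sketch with Python's built-in unittest module.
# apply_discount is a made-up example function, not from the article.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; invalid inputs are rejected."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid input")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent(self):
        # The condition branch must also be exercised.
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

unittest.main(argv=["ignored"], exit=False)
```

Each test method exercises one condition of the unit, which is exactly the "conditions, functions and loops" level of checking described above.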
Integration Testing helps to expose defects in the interfaces and interactions between the different modules of the program.
System testing is done by the Testing Team to make sure that the program or application meets the requirements.
It includes GUI testing, usability testing, performance testing, stress testing, security testing, scalability testing, sanity testing, smoke testing, ad hoc testing, etc.
Regression testing is done to make sure that the application is not affected by any code change made to it, i.e. already-working functions in other modules of the program should continue to work after changing any module.
We need to test every part of the application even when the code change is done in one specific part or module of the program. Automation tools are useful for doing regression testing.
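The idea can be sketched as a tiny regression suite: every existing check is re-run after any change, not only the checks for the changed module. The functions below are illustrative stand-ins for real modules.

```python
# A minimal regression-suite sketch. add() and greet() stand in for
# two unrelated modules of a real application.

def add(a, b):
    return a + b

def greet(name):
    return "Hello, " + name

# Suppose only add() was just modified; the regression run still
# covers greet(), to catch any unintended side effects.
regression_suite = [
    ("add handles positives", lambda: add(2, 3) == 5),
    ("add handles negatives", lambda: add(-1, 1) == 0),
    ("greet is unaffected",   lambda: greet("QA") == "Hello, QA"),
]

failures = [name for name, check in regression_suite if not check()]
print("failed:", failures)  # an empty list means no regressions
```

Automation tools essentially do the same thing at a much larger scale, which is why they pay off for regression testing.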
Alpha Testing is a part of User Acceptance Testing. It is done at the developer's premises, by the customers or by an independent test team.
Beta testing comes after alpha testing. Beta versions of the software are released to a limited number of people outside of the programming team.
The actual release is done if no major issues are found in beta testing.
A Test plan is a document which describes the objectives, scope, approach and focus of a software testing effort. It is made available to the development team and business people also, so that they can understand the testing activities done by the Testing Team.
It covers the features to be tested and the features not to be tested.
Testing environment details, risks, responsibilities, testing schedule, test deliverables and resource allocation details are included in the Test plan.
So, it is useful for getting an overall view of the testing activities to be done in a particular release of a software application.
A Traceability matrix is simply a mapping between the requirements and the test cases. It is prepared in tabular form, e.g. in an Excel spreadsheet.
One column will have the list of Requirement IDs, and the next column will have the Test case IDs which test that requirement.
It helps to make sure that the test cases are written well enough to cover all the requirements.
Similarly, we can have a reverse traceability matrix also, i.e. a mapping from test cases back to requirements.
It helps to make sure that we do not have any test cases for requirements which were not asked for by the customer.
A Test suite is a collection of test cases. Mostly, all related test cases are grouped into one test case document. For example, the test cases which test the login module may be stored in a particular spreadsheet file named "login_testcases.xls". It can contain information such as the name of the module, a description, the total number of test cases and details of the reference documents (i.e. requirement document, use cases, etc.).
A Test case will have the below things.
Test case ID for uniquely identifying the test case. For example, the Test case ID can be TC001, TC002, ...
Test case description will have the condition which we are going to test.
e.g. To verify that the user sees the message "invalid login details" when they enter a valid username and an invalid password.
Test steps will give the steps required for executing this test case.
e.g 1. Go to the login page
2. Enter valid username.
3. Enter invalid password.
4. Click “login” button.
Expected Result will give details about the behavior or result we should see after executing the test steps.
e.g. The user should see the "invalid login details" message in red at the top of the page.
Author - the person who wrote this test case.
Automatable- To mark whether this test case can be automated using automation tools such as QTP.
Apart from the above things, we can add a pass/fail status and remarks while executing the test cases.
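The fields above can be sketched as a structured record, roughly what one row of a "login_testcases.xls" sheet would hold; the field names and values here are illustrative.

```python
# A sketch of a test case as a structured record. Field names and
# values mirror the fields described above; all content is illustrative.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    description: str
    steps: list
    expected_result: str
    author: str
    automatable: bool
    status: str = "Not Run"  # filled in during execution (pass/fail)
    remarks: str = ""

tc = TestCase(
    case_id="TC001",
    description='Verify "invalid login details" is shown for a bad password',
    steps=[
        "Go to the login page",
        "Enter valid username",
        "Enter invalid password",
        'Click "login" button',
    ],
    expected_result='"invalid login details" message in red at top of the page',
    author="QA Team",
    automatable=True,
)
print(tc.case_id, tc.status)  # TC001 Not Run
```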
Test cases can be written by referring to the use case document and the requirement document. We may also need to refer to the application itself while writing test cases.
We can use some techniques such as Equivalence partitioning and boundary value analysis for writing test cases.
According to equivalence partitioning, writing one test case for each partition of the input data is enough. For example, if a password field accepts a minimum of 4 characters and a maximum of 10 characters, then there are three partitions. The first is the valid partition, 4 to 10 characters. The second is the invalid partition of values less than 4. The third is another invalid partition of values more than 10. We can take one value from each partition to do the testing.
In this example, the boundary values based on boundary value analysis are 3, 4, 5, 9, 10 and 11.
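Both techniques can be sketched for this password-length rule; is_valid_length is a made-up stand-in for the real validation.

```python
# Equivalence partitioning and boundary value analysis for the
# password-length rule above (minimum 4, maximum 10 characters).
# is_valid_length stands in for the real validation logic.

MIN_LEN, MAX_LEN = 4, 10

def is_valid_length(password):
    return MIN_LEN <= len(password) <= MAX_LEN

# One representative value per equivalence partition:
partitions = {"too short": 2, "valid": 7, "too long": 12}
for name, length in partitions.items():
    print(name, "->", is_valid_length("x" * length))

# Boundary values from the text: 3, 4, 5, 9, 10 and 11.
boundaries = [3, 4, 5, 9, 10, 11]
results = [is_valid_length("x" * n) for n in boundaries]
print(results)  # [False, True, True, True, True, False]
```

Note how the boundary values sit just below, on, and just above each edge of the valid partition, which is where off-by-one bugs usually hide.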
Software Test Life Cycle
Test planning - The scope of the testing is defined according to the budget allocated for the testing. And a Test plan document is prepared by the Test Manager.
Test development - Test cases are written by the Testing Team (QA Team) during this phase. Test data files are also created.
Test execution - Testers execute the test cases and report the issues found to the development team for fixing.
Performance tests should be executed only after the functional and regression testing is completed.
Bug tracking is the methodology used to follow up on the bugs/defects/issues found during test execution. There are many free tools (e.g. Bugzilla) available for doing bug tracking effectively.
Normally the bug is tracked through a standard bug life cycle.
It will have the below states.
New: When a tester finds a bug for the first time, the state will be "NEW". This means that the bug is not yet approved.
Open: After a tester has posted a bug, the lead of the testing team will check whether the reported bug is genuine, and then he will change the state to "OPEN".
Assign: The development team lead will assign the bug to a particular developer for fixing. Now the state will be changed to "ASSIGN".
Ready-to-Test: Once the developer fixes the bug, he will assign the bug back to the testing team for the next round of testing with the status "READY-TO-TEST".
Deferred: The status will be changed to "DEFERRED" if the team decides to fix it in the next release or if the priority is very low.
Rejected: If the developer decides that the bug is not genuine, he can reject the bug. Then the state of the bug is changed to "REJECTED".
Duplicate: If the bug is reported twice, or the root cause of two bugs is the same, then one bug's status will be changed to "DUPLICATE".
Verified: Once the bug is fixed and the status is changed to "READY-TO-TEST", the tester tests the bug again. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to "VERIFIED".
Reopened: If the bug still exists even after the developer has fixed it, the tester will change the status to "REOPENED". The bug then goes through the life cycle once again.
Closed: Once the bug is fixed, it is tested by the tester again. If the tester verifies that the bug no longer exists in the software, he changes the status of the bug to "CLOSED". This state means that the bug is fixed and has been tested again.
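The life cycle above can be sketched as a small state machine; the transition table is my reading of the states described here, and real bug trackers may allow different transitions.

```python
# A sketch of the bug life cycle as a table of allowed state
# transitions. The table is one interpretation of the states above;
# real bug trackers may differ.

TRANSITIONS = {
    "NEW": {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN": {"ASSIGN"},
    "ASSIGN": {"READY-TO-TEST", "DEFERRED"},
    "READY-TO-TEST": {"VERIFIED", "REOPENED"},
    "REOPENED": {"ASSIGN"},        # goes through the cycle again
    "VERIFIED": {"CLOSED"},
    "DEFERRED": {"ASSIGN"},        # picked up again in a later release
    "REJECTED": set(),             # terminal states
    "DUPLICATE": set(),
    "CLOSED": set(),
}

def move(state, new_state):
    """Advance the bug, rejecting transitions the life cycle forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {new_state}")
    return new_state

# Happy path of a bug that is fixed on the first attempt:
state = "NEW"
for nxt in ["OPEN", "ASSIGN", "READY-TO-TEST", "VERIFIED", "CLOSED"]:
    state = move(state, nxt)
print(state)  # CLOSED
```

Encoding the transitions as data like this makes it easy to spot illegal jumps, such as closing a bug that was never verified.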
Reporting - A test summary report should be created to explain the steps taken for delivering a quality product. This summary report should show the number of test cases executed, how many passed and how many failed, the test coverage, defect density and other test metrics. And it should show the performance test results also.
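Two of the metrics mentioned can be computed as shown below; the counts and code size are invented numbers purely for illustration.

```python
# A sketch of two common test metrics from the summary report:
# pass rate, and defect density (defects per thousand lines of code).
# All numbers here are invented for illustration.

executed, passed, failed = 120, 110, 10
defects_found = 18
size_kloc = 12.5  # size of the tested code in KLOC

pass_rate = passed / executed * 100
defect_density = defects_found / size_kloc

print(f"pass rate: {pass_rate:.1f}%")
print(f"defect density: {defect_density:.2f} defects/KLOC")
```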
You can buy my Book about Software Testing from Amazon.