1. Can you explain the PDCA cycle and where testing fits in?
Software testing is an important part of the software development process. Normal software development follows four important steps, referred to collectively as the PDCA (Plan, Do, Check, Act) cycle.
Let’s review the four steps in detail.
- Plan: Define the goal and the plan for achieving that goal.
- Do/Execute: Execute the work according to the strategy decided during the plan stage.
- Check: Check/Test to ensure that we are moving according to plan and are getting the desired results.
- Act: If the check stage reveals any issues, take appropriate corrective action and revise the plan.
So developers and other stakeholders of the project do the “planning and building,” while testers do the check part of the cycle. Therefore, software testing is done in the check part of the PDCA cycle.
2. What is the difference between white box, black box, and gray box testing?
Black box testing is a testing strategy based solely on requirements and specifications. Black box testing requires no knowledge of internal paths, structures, or implementation of the software being tested.
White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.
There is one more type of testing called gray box testing. In this we look into the “box” being tested just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.
The above figure shows how both types of testers view an accounting application during testing. The black box tester views the basic accounting application from the outside, while the white box tester knows the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, review the architecture, remove bad code practices, and do component-level testing.
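To make the distinction concrete, here is a minimal sketch in Python. The calculate_tax function and its tax bands are hypothetical and exist only for illustration: the black box test is derived purely from the stated requirement, while the white box test is derived from reading the code and targeting the boundary of the elif branch.

```python
# A minimal sketch contrasting black box and white box test cases.
# calculate_tax is a hypothetical function used purely for illustration.

def calculate_tax(amount):
    """Tax in whole currency units: 0% below 10,000; 10% up to 50,000; 20% above."""
    if amount < 10_000:
        return 0
    elif amount <= 50_000:
        return amount * 10 // 100
    else:
        return amount * 20 // 100

# Black box: derived purely from the written requirement "10% tax between 10,000 and 50,000".
def test_black_box_mid_band():
    assert calculate_tax(20_000) == 2_000

# White box: derived from reading the code; deliberately targets the `elif` boundary.
def test_white_box_branch_boundary():
    assert calculate_tax(50_000) == 5_000    # last value handled by the 10% branch
    assert calculate_tax(50_001) == 10_000   # first value handled by the 20% branch

if __name__ == "__main__":
    test_black_box_mid_band()
    test_white_box_branch_boundary()
    print("all tests passed")
```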
3. Can you explain usability testing?
Usability testing is a testing methodology in which the end customer is asked to use the software to see whether the product is easy to use and to gauge the customer's perception and task time. The best way to finalize the customer's point of view on usability is to use prototype or mock-up software during the initial stages. By giving the customer the prototype before development starts, we confirm that we are not missing anything from the user's point of view.
4. What are the categories of defects?
There are three main categories of defects:
- Wrong: The requirements have been implemented incorrectly. This defect is a variance from the given specification.
- Missing: There was a requirement given by the customer and it was not done. This is a variance from the specifications, an indication that a specification was not implemented, or a requirement of the customer was not noted properly.
- Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect because it’s a variance from the existing requirements.
5. How do you define a testing policy?
The following are the important steps generally used to define a testing policy, though they can change according to your organization.
- Definition: The first step any organization needs to do is define one unique definition for testing within the organization so that everyone is of the same mindset.
- How to achieve: How are we going to achieve our objective? Will there be a testing committee? Will there be compulsory test plans which need to be executed, etc.?
- Evaluate: After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics of defects per phase, per programmer, etc.? Finally, it's important to let everyone know how testing has added value to the project.
- Standards: Finally, what are the standards we want to achieve by testing? For instance, we can say that more than 20 defects per KLOC will be considered below standard and code review should be done for it.
6. On what basis is the acceptance plan prepared?
In any project the acceptance document is normally prepared using the following inputs. This can vary from company to company and from project to project.
- Requirement document: This document specifies what exactly is needed in the project from the customer's perspective.
- Input from customer: This can be discussions, informal talks, emails, etc.
- Project plan: The project plan prepared by the project manager also serves as good input to finalize your acceptance test.
The following diagram shows the most common inputs used to prepare acceptance test plans.
7. What is configuration management?
Configuration management is the detailed recording and updating of information for hardware and software components. When we say components we do not mean only source code; it can also be tracking of changes for software documents such as requirements, design, test cases, etc.
When changes are made in an ad hoc and uncontrolled manner, chaotic situations can arise and more defects are injected. So whenever changes are made they should be done in a controlled fashion and with proper versioning. At any moment we should be able to revert to the old version. The main intention of configuration management is to be able to track our changes if we have issues with the current system. Configuration management is done using baselines.
8. How does a coverage tool work?
While testing the actual product, the code coverage tool is run simultaneously. While the testing is going on, the code coverage tool monitors the executed statements of the source code. When the final testing is completed, we get a complete report of the statements that were never executed, as well as the coverage percentage.
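The sketch below, assuming a toy grade() function, mimics what a coverage tool does internally: it watches which lines execute while the tests run and then reports the unexecuted lines and a coverage percentage. Real tools such as coverage.py are far more sophisticated; this uses Python's sys.settrace hook only to illustrate the idea.

```python
import inspect
import sys

def grade(score):
    if score >= 60:
        return "pass"
    return "fail"              # this branch is never exercised by the test below

executed_lines = set()

def tracer(frame, event, arg):
    # Record every line executed inside grade() while the tests run.
    if event == "line" and frame.f_code is grade.__code__:
        executed_lines.add(frame.f_lineno)
    return tracer

def run_tests():
    assert grade(75) == "pass"     # only exercises the first branch

sys.settrace(tracer)
run_tests()
sys.settrace(None)

# Report which lines of grade() never ran, plus a coverage percentage
# (crude: the `def` line itself is counted as missed; real tools are smarter).
start = grade.__code__.co_firstlineno
all_lines = range(start, start + len(inspect.getsource(grade).splitlines()))
missed = [line for line in all_lines if line not in executed_lines]
covered = len(all_lines) - len(missed)
print(f"covered {covered}/{len(all_lines)} lines "
      f"({100 * covered // len(all_lines)}%), missed lines: {missed}")
```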
9. Which is the best testing model?
In real projects, tailored models have proven to be the best, because they share features from the Waterfall, Iterative, Evolutionary, and other models, and can fit real-life projects. Tailored models are the most productive and beneficial for many organizations. If it's a pure testing project, then the V model is the best.
10. What is the difference between a defect and a failure?
When a defect reaches the end customer it is called a failure; if the defect is detected internally and resolved, it is called a defect.
11. Should testing be done only after the build and execution phases are complete?
In traditional testing methodology testing is always done after the build and execution phases.
But that’s a wrong way of thinking because the earlier we catch a defect, the more cost effective it is. For instance, fixing a defect in maintenance is ten times more costly than fixing it during execution.
In the requirement phase we can verify whether the requirements meet the customer's needs. During design we can check whether the design document covers all the requirements. In this stage we can also generate rough functional data. We can also review the design document from the architecture and correctness perspectives. In the build and execution phase we can execute unit test cases and generate structural and functional data. Then comes the testing phase, done in the traditional way, i.e., running the system test cases and seeing whether the system works according to the requirements. During installation we need to see whether the system is compatible with the software environment. Finally, during the maintenance phase, when any fixes are made we can retest the fixes and perform regression testing.
Therefore, testing should occur in conjunction with each phase of the software development lifecycle.
12. Are there more defects in the design phase or in the coding phase?
The design phase is more error prone than the execution phase. One of the most frequent defects that occurs during design is that the product does not cover the complete requirements of the customer. Second, wrong or bad architectural and technical decisions make the next phase, execution, more prone to defects. Because the design phase drives the execution phase, it's the most critical phase to test. The testing of the design phase can be done through good reviews. On average, 60% of defects occur during design and 40% during the execution phase.
13. What group of teams can do software testing?
When it comes to testing, everyone can be involved, right from the developer to the project manager to the customer. But the following are the different types of test teams that can be present in a project.
- Isolated test team
- Outsource – we can hire external testing resources and do testing for our project.
- Inside test team
- Developers as testers
- QA/QC team.
14. What impact ratings have you used in your projects?
Normally, the impact ratings for defects are classified into three types:
- Minor: Very low impact; does not affect operations on a large scale.
- Major: Affects operations on a very large scale.
- Critical: Brings the system to a halt and stops the show.
15. Does an increase in testing always improve the project?
No, an increase in testing does not always mean improvement of the product, company, or project. In real test scenarios only 20% of test plans are critical from a business angle. Running those critical test plans will assure that the testing is properly done. The following graph explains the impact of under testing and over testing. If you under test a system the number of defects will increase, but if you over test a system your cost of testing will increase. Even if your defects come down, your cost of testing has gone up.
16. What’s the relationship between environment reality and test phases?
Environment reality becomes more important as test phases start moving ahead. For instance, during unit testing you need the environment to be partly real, but at the acceptance phase you should have a 100% real environment, or we can say it should be the actual real environment. The following graph shows how with every phase the environment reality should also increase and finally during acceptance it should be 100% real.
17. What are different types of verifications?
Verification is a static type of software testing. It means the code is not executed; the product is evaluated by going through the code. Types of verification are:
- Walkthrough: Walkthroughs are informal, initiated by the author of the software product, who asks a colleague for assistance in locating defects or for suggestions for improvement. They are usually unplanned. The author explains the product; the colleague offers observations; and the author notes down the relevant points and takes corrective action.
- Inspection: Inspection is a thorough word-by-word checking of a software product with the intention of locating defects, confirming traceability to the relevant requirements, etc.
18. How do test documents in a project span across the software development lifecycle?
The following figure shows pictorially how test documents span across the software development lifecycle. The following discusses the specific testing documents in the lifecycle:
- Central/Project test plan: This is the main test plan which outlines the complete test strategy of the software project. This document should be prepared before the start of the project and is used until the end of the software development lifecycle.
- Acceptance test plan: This test plan is normally prepared with the end customer. This document commences during the requirement phase and is completed at final delivery.
- System test plan: This test plan starts during the design phase and proceeds until the end of the project.
- Integration and unit test plan: Both of these test plans start during the execution phase and continue until the final delivery.
19. Which test cases are written first: white boxes or black boxes?
Normally black box test cases are written first and white box test cases later. In order to write black box test cases we need the requirement document and the design or project plan. All these documents are easily available at the start of the project. White box test cases cannot be started in the initial phase of the project because they need more architectural clarity, which is not available at the start of the project. So normally white box test cases are written after black box test cases.
Black box test cases do not require system understanding, but white box testing needs more structural understanding. And structural understanding is clearer in the later part of the project, i.e., while executing or designing. For black box testing you only need to analyze from the functional perspective, which is easily available from a simple requirement document.
20. Explain Unit Testing, Integration Tests, System Testing and Acceptance Testing?
Unit testing – Testing performed on a single, stand-alone module or unit of code.
Integration Tests – Testing performed on groups of modules to ensure that data and control are passed properly between modules.
System testing – Testing a predetermined combination of tests that, when executed successfully meets requirements.
Acceptance testing – Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).
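A minimal sketch of the first two levels, using Python's unittest and two hypothetical modules (discount and checkout): the unit test exercises one module in isolation, while the integration test checks that the modules pass data between them correctly.

```python
# A minimal sketch, using Python's unittest, of the difference between a unit
# test (one module in isolation) and an integration test (modules working
# together). discount() and checkout() are hypothetical examples.
import unittest

def discount(total):
    """Pricing module: 10% off orders of 100 or more."""
    return total * 90 // 100 if total >= 100 else total

def checkout(prices):
    """Order module: sums item prices, then applies the pricing module."""
    return discount(sum(prices))

class UnitTests(unittest.TestCase):
    # Unit test: exercises the discount module on its own.
    def test_discount_applies_at_threshold(self):
        self.assertEqual(discount(100), 90)

class IntegrationTests(unittest.TestCase):
    # Integration test: checks that checkout passes data correctly to discount.
    def test_checkout_uses_discount(self):
        self.assertEqual(checkout([60, 40]), 90)

if __name__ == "__main__":
    unittest.main()
```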
21. What is a test log?
The IEEE Std. 829-1998 defines a test log as a chronological record of relevant details about the execution of test cases. It's a detailed view of activity and events given in a chronological manner.
The following figure shows a test log and is followed by a sample test log.
22. Can you explain requirement traceability and its importance?
In most organizations testing only starts after the execution/coding phase of the project. But if the organization wants to really benefit from testing, then testers should get involved right from the requirement phase.
If the tester gets involved right from the requirement phase then requirement traceability is one of the important reports that can detail what kind of test coverage the test cases have.
23. What does entry and exit criteria mean in a project?
Entry and exit criteria are a must for the success of any project. If you do not know where to start and where to finish then your goals are not clear. By defining exit and entry criteria you define your boundaries.
For instance, you can define entry criteria that the customer should provide the requirement document or acceptance plan. If this entry criteria is not met then you will not start the project. On the other end, you can also define exit criteria for your project. For instance, one of the common exit criteria in projects is that the customer has successfully executed the acceptance test plan.
24. What is the difference between verification and validation?
Verification is a review without actually executing the process while validation is checking the product with actual execution. For instance, code review and syntax check is verification while actually running the product and checking the results is validation.
25. What is the difference between latent and masked defects?
A latent defect is an existing defect that has not yet caused a failure because the sets of conditions were never met.
A masked defect is an existing defect that hasn’t yet caused a failure just because another defect has prevented that part of the code from being executed.
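A minimal sketch of the idea, using hypothetical code: defect 2 cannot cause a failure while defect 1 is present, because defect 1 prevents that part of the code from being reached.

```python
# A minimal sketch (hypothetical code) of a masked defect: defect 2 cannot cause
# a failure while defect 1 is present, because defect 1 stops that code from running.

def apply_coupon(total, code):
    if code != "":                        # defect 1: inverted check -- every real coupon
        return total                      # code is ignored, so the lookup below never runs
    rates = {"SAVE10": 0.10}
    return total - total * rates[code]    # defect 2 (masked): KeyError for unknown codes

print(apply_coupon(100, "SAVE10"))   # 100 -- defect 1 is visible, defect 2 stays hidden
# Once defect 1 is fixed (the check becomes `if code == "":`), defect 2 is unmasked:
# apply_coupon(100, "BOGUS") would then raise KeyError. Until someone actually passes
# an unknown code, defect 2 is also a latent defect.
```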
26. Can you explain calibration?
Calibration includes tracing the accuracy of the devices used in production, development, and testing. The devices used must be maintained and calibrated to ensure that they are working in good order.
27. What’s the difference between alpha and beta testing?
Alpha and beta testing have different meanings to different people. Alpha testing is the acceptance testing done at the development site. Some organizations have a different view of alpha testing: they consider alpha testing to be testing conducted on early, unstable versions of the software. In contrast, beta testing is acceptance testing conducted at the customer's end.
In short, the difference between beta testing and alpha testing is the location where the tests are done.
28. How does testing affect risk?
A risk is a condition that can result in a loss. Risk can be controlled in different scenarios but not eliminated completely. A defect normally converts to a risk.
29. What is coverage and what are the different types of coverage techniques?
Coverage is a measurement used in software testing to describe the degree to which the source code is tested. There are three basic types of coverage techniques, as shown in the following figure and illustrated in the sketch after this list:
- Statement coverage: This coverage ensures that each line of source code has been executed and tested.
- Decision coverage: This coverage ensures that every decision (true/false) in the source code has been executed and tested.
- Path coverage: In this coverage we ensure that every possible route through a given part of code is executed and tested.
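A short worked sketch of the three levels, using a hypothetical fee() function with two decisions: one test is enough for statement coverage, two tests give decision coverage, and all four decision combinations are needed for path coverage.

```python
# A worked sketch (hypothetical function) of the three coverage levels.
# fee() has 2 decisions, so 4 possible paths through it.

def fee(age, member):
    price = 100
    if age < 18:
        price -= 20          # discount for minors
    if member:
        price -= 10          # discount for members
    return price

# Statement coverage: one test that executes every line.
assert fee(10, True) == 70

# Decision coverage: each decision must evaluate to both True and False.
assert fee(10, True) == 70     # age<18 True,  member True
assert fee(30, False) == 100   # age<18 False, member False

# Path coverage: all 4 combinations of the two decisions.
assert fee(10, True) == 70
assert fee(10, False) == 80
assert fee(30, True) == 90
assert fee(30, False) == 100
print("coverage demonstration passed")
```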
30. A defect which could have been removed during the initial stage is removed in a later stage. How does this affect cost?
If a defect is known at the initial stage then it should be removed during that stage/phase itself rather than at some later stage. It’s a recorded fact that if a defect is delayed for later phases it proves more costly. The following figure shows how a defect is costly as the phases move forward. A defect if identified and removed during the requirement and design phase is the most cost effective, while a defect removed during maintenance is 20 times costlier than during the requirement and design phases.
For instance, if a defect is identified during requirement and design we only need to change the documentation, but if identified during the maintenance phase we not only need to fix the defect, but also change our test plans, do regression testing, and change all documentation. This is why a defect should be identified/removed in earlier phases and the testing department should be involved right from the requirement phase and not after the execution phase.
31. What kind of input do we need from the end user to begin proper testing?
The product has to be used by the user, who is the most important person, as he has more interest in the project than anyone else.
From the user we need the following data:
- The first thing we need is the acceptance test plan from the end user. The acceptance test defines the entire test which the product has to pass so that it can go into production.
- We also need the requirement document from the customer. In normal scenarios the customer never writes a formal document until he is really sure of his requirements. But at some point the customer should sign off, confirming that this is what he wants.
- The customer should also define the risky sections of the project. For instance, in a normal accounting project if a voucher entry screen does not work that will stop the accounting functionality completely. But if reports are not derived the accounting department can use it for some time. The customer is the right person to say which section will affect him the most. With this feedback the testers can prepare a proper test plan for those areas and test it thoroughly.
- The customer should also provide proper data for testing. Feeding proper data during testing is very important. In many scenarios testers key in wrong data and expect results which are of no interest to the customer.
32. Can you explain the workbench concept?
In order to understand testing methodology we need to understand the workbench concept. A workbench is a way of documenting how a specific activity has to be performed. A workbench is broken down into phases, steps, and tasks, as shown in the following figure.
There are five tasks for every workbench:
- Input: Every task needs some defined input and entrance criteria. So for every workbench we need defined inputs. Input forms the first step of the workbench.
- Execute: This is the main task of the workbench which will transform the input into the expected output.
- Check: Check steps assure that the output after execution meets the desired result.
- Production output: If the check is right the production output forms the exit criteria of the workbench.
- Rework: During the check step if the output is not as desired then we need to again start from the execute step.
33. Can you explain the concept of defect cascading?
Defect cascading is a defect which is caused by another defect. One defect triggers the other defect. For instance, in the accounting application shown here there is a defect which leads to negative taxation. So the negative taxation defect affects the ledger which in turn affects four other modules.
34. Can you explain cohabiting software?
When we install the application at the end client's site, it is very possible that other applications also exist on the same PC. It is also very possible that those applications share common DLLs, resources, etc., with your application. There is a large chance in such situations that your changes can affect the cohabiting software. So the best practice is, after you install your application or make any changes, to ask the other application owners to run a test cycle on their applications.
35. What is the difference between pilot and beta testing?
The difference is that pilot testing is actually using the product (limited to some users), while in beta testing we do not input real data; the product is installed at the end customer's site to validate whether it can be used in production.
36. What are the different strategies for rollout to end users?
There are four major ways of rolling out any project:
- Pilot: The actual production system is installed at a single or limited number of users. Pilot basically means that the product is actually rolled out to limited users for real work.
- Gradual Implementation: In this implementation we ship the entire product to a limited set of users or all users at the customer end. Here, the developers get instant feedback from the recipients, which allows them to make changes before the product is generally available. The downside is that developers and testers have to maintain more than one version at a time.
- Phased Implementation: In this implementation the product is rolled out to all users incrementally, meaning each successive rollout has some added functionality. So as new functionality comes in, new installations occur and the customer tests them progressively. The benefit of this kind of rollout is that customers can start using the functionality and provide valuable feedback progressively. The only issue is that with each rollout and added functionality the integration becomes more complicated.
- Parallel Implementation: In these types of rollouts the existing application is run side by side with the new application. If there are any issues with the new application we again move back to the old application. One of the biggest problems with parallel implementation is we need extra hardware, software, and resources.
37. What’s the difference between System testing and Acceptance testing?
Acceptance testing checks the system against the “Requirements.” It is similar to System testing in that the whole system is checked but the important difference is the change in focus:
System testing checks that the system that was specified has been delivered. Acceptance testing checks that the system will deliver what was requested. The customer should always do Acceptance testing and not the developer.
The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgement. This testing is more about ensuring that the software is delivered as defined by the customer. It’s like getting a green light from the customer that the software meets expectations and is ready to be used.
38. Can you explain regression testing and confirmation testing?
Regression testing is used for regression defects. Regression defects are defects that occur when functionality which was once working normally has stopped working, probably because of changes made in the program or the environment. Regression testing is conducted to uncover such defects.
The following figure shows the difference between regression and confirmation testing.
If we fix a defect in an existing application we use confirmation testing to test if the defect is removed. It’s very possible because of this defect or changes to the application that other sections of the application are affected. So to ensure that no other section is affected we can use regression testing to confirm this.
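A minimal sketch of the two kinds of tests, using Python's unittest and a hypothetical round_price() function: the confirmation test targets the specific defect that was fixed, while the regression tests re-run previously passing cases to make sure the fix broke nothing else.

```python
import unittest

def round_price(value):
    # Fixed defect: prices used to be truncated instead of rounded.
    return int(value + 0.5)

class ConfirmationTest(unittest.TestCase):
    def test_reported_rounding_defect_is_gone(self):
        # Confirms the reported defect ("9.99 became 9") is really fixed.
        self.assertEqual(round_price(9.99), 10)

class RegressionTests(unittest.TestCase):
    # Existing tests that passed before the fix; re-run to catch side effects.
    def test_whole_numbers_unchanged(self):
        self.assertEqual(round_price(7.0), 7)

    def test_rounding_down(self):
        self.assertEqual(round_price(3.2), 3)

if __name__ == "__main__":
    unittest.main()
```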
What is Endurance Testing?
Endurance testing: in this testing we test the application's behavior under load and stress applied over a long duration of time. The goals of this testing are (a minimal sketch follows the list):
– To determine how the application responds to high load and stress conditions in a real scenario.
– To ensure that response times under high load and stress conditions are within the user's response-time requirements.
– To check for memory leaks or other problems that may occur with prolonged execution.
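Below is the minimal sketch referred to above, assuming a hypothetical handle_request() function with a deliberate leak. It exercises the operation repeatedly for a fixed (here, very short) duration while watching memory growth with Python's tracemalloc; a real endurance run would last hours or days.

```python
import time
import tracemalloc

_cache = []

def handle_request(payload):
    _cache.append(payload * 100)   # deliberate leak: nothing ever evicts entries
    return len(payload)

def endurance_run(duration_seconds=2, check_every=10_000):
    tracemalloc.start()
    start = time.time()
    calls = 0
    while time.time() - start < duration_seconds:
        handle_request("x")
        calls += 1
        if calls % check_every == 0:
            current, peak = tracemalloc.get_traced_memory()
            # Steadily growing "current" memory over a long run suggests a leak.
            print(f"{calls} calls, current memory {current / 1e6:.1f} MB")
    tracemalloc.stop()

if __name__ == "__main__":
    endurance_run()
```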
What is Gorilla Testing?
A test technique that involves testing a particular module or component's functionality extensively with various ranges of valid and invalid inputs. In gorilla testing, formal test cases and test data are not required; random data and ad hoc tests are used to test the application. The purpose of gorilla testing is to examine the capability of a single module's functionality by applying heavy load and stress to it, and to determine how much load and stress it can tolerate without crashing.
Why do we need Localization Testing?
Localization testing mainly deals with the functionality and GUI of the application. The purposes of localization testing are the following:
– Mainly to deal with the internationalization and localization aspects of the software.
– To evaluate how successfully the content is translated into a specific language.
– To verify that the GUI of the application adapts to a particular region's language and interface conventions.
What is Metric?
A metric is a standard of measurement. Software metrics use statistical methods to describe the structure of the application. Software metrics tell us measurable things such as the number of bugs per line of code. We can use software metrics to make decisions regarding application development. Test metrics are derived from raw test data, because what cannot be measured cannot be managed. Software metrics also help the project management team manage the project, for example the development schedule for each phase.
Explain Monkey testing.
Monkey testing is a type of black box testing used mostly at the unit level. In this, the tester enters data in any format and checks that the software does not crash. In this testing we use smart monkeys and dumb monkeys.
Smart monkeys are used for load and stress testing; they help in finding bugs but are very expensive to develop.
Dumb monkeys are important for basic testing. They help in finding bugs with high severity and are less expensive compared to smart monkeys.
Example: symbols are entered in a phone number field.
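A minimal "dumb monkey" sketch, assuming a hypothetical parse_phone_number() validator: random, format-free input is thrown at it, and the only check is that nothing other than the expected rejection is raised.

```python
import random
import string

def parse_phone_number(text):
    digits = "".join(ch for ch in text if ch.isdigit())
    if len(digits) < 7:
        raise ValueError("too short")
    return digits

def monkey_test(runs=1_000):
    alphabet = string.printable          # letters, digits, symbols, whitespace
    for _ in range(runs):
        junk = "".join(random.choice(alphabet) for _ in range(random.randint(0, 30)))
        try:
            parse_phone_number(junk)
        except ValueError:
            pass                          # an expected, handled rejection
        # Any other exception means the software "crashed" on random input.

if __name__ == "__main__":
    monkey_test()
    print("no unexpected crashes")
```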
What is Negative Testing?
Negative testing is performed to find the situations in which the software crashes. It is a negative approach, in which the tester tries to find the negative aspects of the application. Negative testing ensures that the application can handle invalid input, incorrect data, and incorrect user responses. For example, when a user enters alphabetical data in a numeric field, an error message should be displayed saying "Incorrect data type, please enter a number".
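A minimal negative-test sketch for exactly that example, assuming a hypothetical validate_quantity() function: alphabetical data in a numeric field must be rejected with the expected error message.

```python
import unittest

def validate_quantity(raw):
    if not raw.isdigit():
        raise ValueError("Incorrect data type, please enter a number")
    return int(raw)

class NegativeTests(unittest.TestCase):
    def test_alphabetic_input_rejected(self):
        with self.assertRaises(ValueError) as ctx:
            validate_quantity("abc")
        self.assertIn("please enter a number", str(ctx.exception))

    def test_empty_input_rejected(self):
        with self.assertRaises(ValueError):
            validate_quantity("")

if __name__ == "__main__":
    unittest.main()
```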
What is Path Testing?
Path testing is testing in which the tester ensures that every path of the application is executed at least once. In this testing, all paths in the program source code are tested at least once. The tester can use a control flow graph to perform this type of testing.
What is Performance Testing?
Performance testing is focused on verifying the system's performance requirements, such as response time, transactional throughput, and number of concurrent users. It is used to accurately measure the end-to-end performance of a system. It identifies loopholes in the architectural design, which helps to tune the application.
It includes the following:
– Emulating ‘n’ number of users interacting with the system using minimal hardware.
– Measuring End-User’s Response time.
– Repeating the load consistently.
– Monitoring the system components under controlled load.
– Providing robust analysis and reporting engines.
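A minimal sketch of the response-time measurement idea from the list above, using Python threads; the simulated_request() function is a hypothetical stand-in for calls against a deployed system, which real load tests would drive with dedicated tools.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def simulated_request(_):
    start = time.perf_counter()
    time.sleep(0.05)                      # stand-in for server work / network time
    return time.perf_counter() - start    # response time for this virtual user

def load_test(concurrent_users=20, requests_per_user=5):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(simulated_request,
                                range(concurrent_users * requests_per_user)))
    print(f"{len(timings)} requests, "
          f"avg {mean(timings)*1000:.1f} ms, max {max(timings)*1000:.1f} ms")

if __name__ == "__main__":
    load_test()
```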
What is the difference between baseline and benchmark testing?
The differences between baseline and benchmark testing are:
– Baseline testing is the process of running a set of tests to capture performance information, whereas benchmarking is the process of comparing application performance against an industry standard set by some other organization.
– Baseline testing uses the information collected to make changes in the application to improve its performance and capabilities, whereas benchmark information tells us where our application stands with respect to others.
– A baseline compares the present performance of the application with its own previous performance, whereas a benchmark compares our application's performance with other companies' applications.
What is test driver and test stub?
– The stub is called from the software component to be tested. It is used in the top-down approach.
– The driver calls the component to be tested. It is used in the bottom-up approach.
– Both test stubs and test drivers are dummy software components.
We need test stubs and test drivers for the following reasons:
– Suppose we want to test the interface between modules A and B and we have developed only module A. We cannot test module A on its own, but if a dummy module B (a stub) is prepared, we can use it to test module A.
– Similarly, if only module B exists, it cannot send or receive data from module A directly, so we have to transfer data to and from the module through some external component. This external component is called a driver.
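A minimal sketch of both ideas, with hypothetical module names: a stub stands in for the missing database module so that the report module (module A) can be tested top-down, and a driver is the dummy caller that would exercise module B bottom-up.

```python
# --- Stub: dummy replacement for the missing module B, used in top-down testing.
def fetch_sales_stub(region):
    return [100, 200, 300]          # canned data instead of a real database call

# --- Module A under test: would normally call the real module B.
def sales_report(region, fetch_sales=fetch_sales_stub):
    rows = fetch_sales(region)
    return f"{region}: total={sum(rows)}"

assert sales_report("east") == "east: total=600"

# --- Driver: dummy caller for module B, used in bottom-up testing.
def module_b_driver(fetch_sales_real):
    # Feeds test inputs into module B and checks its raw output directly.
    rows = fetch_sales_real("east")
    assert isinstance(rows, list)

# module_b_driver(real_fetch_sales) would be invoked once module B exists.
print("stub check passed")
```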
What is Agile Testing?
Agile testing means quickly validating the client's requirements and making the application high quality with a good user interface. When the build is released to the testing team, testing of the application is started to find bugs. As testers, we need to focus on the customer's or end user's requirements. We put in the effort to deliver a quality product in spite of a short time frame, which further helps in reducing the cost of development, and test feedback is implemented in the code, which avoids defects coming from the end user.
Explain bug life cycle.
Bug Life Cycle:
– When a tester finds a bug, the bug is assigned the status NEW or OPEN.
– The bug is assigned to the development project manager, who will analyze it and check whether it is a valid defect. If it is not valid, the bug is rejected and its status becomes REJECTED.
– If it is valid, the defect is next checked to see whether it is in scope. If the bug is not part of the current release, such defects are POSTPONED.
– The tester then checks whether a similar defect was raised earlier. If yes, the defect is assigned the status DUPLICATE.
– When the bug is assigned to a developer, it is given the status IN-PROGRESS.
– Once the bug is fixed, the defect is assigned the status FIXED.
– Next the tester will retest the code. If the test case passes, the defect is CLOSED.
– If the test case fails again, the bug is RE-OPENED and assigned back to the developer. That's the bug life cycle.
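A minimal sketch of the life cycle above expressed as a table of allowed status transitions; the workflow is simplified, and real trackers add more states and rules.

```python
ALLOWED_TRANSITIONS = {
    "NEW":         {"REJECTED", "POSTPONED", "DUPLICATE", "IN-PROGRESS"},
    "IN-PROGRESS": {"FIXED"},
    "FIXED":       {"CLOSED", "RE-OPENED"},
    "RE-OPENED":   {"IN-PROGRESS"},
}

def move(bug, new_status):
    # Reject any transition that the simplified workflow above does not allow.
    if new_status not in ALLOWED_TRANSITIONS.get(bug["status"], set()):
        raise ValueError(f"cannot go from {bug['status']} to {new_status}")
    bug["status"] = new_status
    return bug

bug = {"id": 42, "status": "NEW"}
move(bug, "IN-PROGRESS")
move(bug, "FIXED")
move(bug, "CLOSED")
print(bug)   # {'id': 42, 'status': 'CLOSED'}
```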
What is Matching Defects?
Matching defects helps us avoid logging the same defect more than once in the application. While using QC, every time we log a bug, QC saves a list of keywords from the Summary and Description fields of the bug. When we search for similar defects in QC, the keywords in these fields are matched against defects that were logged previously. Keywords must be more than two characters and are not case sensitive. We have two methods to search for similar defects:
– Finding similar defects: compare a selected defect with all other existing defects in the project.
– Finding similar text: compare a specific text string against all other existing defects in the project.
What is Recovery Testing?
Recovery testing is done to check how fast and how well the application can recover from any type of crash or hardware failure. The type or extent of recovery is specified in the requirement specifications. Recovery testing enables the customer to avoid the inconvenience generally associated with loss of data and degraded application performance. We can perform regular recovery testing to ensure that backups of all necessary and important data are taken.
What is Test Case?
A test case is a set of conditions used by the tester to perform testing of the application and to make sure that the application is working as per the user's requirements.
– A test case contains information like test steps, verification steps, prerequisites, outputs, test environment, etc.
– The process of developing test cases can also help us determine issues related to the requirements and the design of the application.
In Test First Design, what steps will you follow to add new functionality to the project?
When we have to add new functionality to our project, we perform the following steps (see the sketch after this list):
– Quickly add a developer test: create a test that ensures the newly added functionality will not break the project.
– Run your tests: execute the test to ensure that the newly added functionality does not break the application.
– Update your production code: update the code with the additional functionality so that it passes the new test, for example adding an error message to a field that can only take numeric data.
– Run your test suite again: if a test fails, change the code and retest the application.
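Here is the sketch referred to above, using Python's unittest and a hypothetical set_quantity() field validator: the test for the new numeric-only rule is written first, then just enough production code is added to make it and the existing tests pass.

```python
import unittest

# Step 3 ("update your production code"): written only after the test below existed.
def set_quantity(raw):
    if not raw.isdigit():
        raise ValueError("please enter a number")   # new error-message behaviour
    return int(raw)

class QuantityTests(unittest.TestCase):
    # Step 1: the new developer test, added before the functionality existed.
    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            set_quantity("ten")

    # Existing behaviour, re-run in step 4 to be sure nothing broke.
    def test_accepts_numeric_input(self):
        self.assertEqual(set_quantity("10"), 10)

if __name__ == "__main__":
    unittest.main()   # steps 2 and 4: run the tests before and after the change
```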
What is Validation and Verification?
Verification: the process of evaluating the work products of a development phase to determine whether they fulfill the specified requirements for that phase.
Validation: the process of evaluating software during or at the end of the development process to determine whether it satisfies the specified requirements.
Difference between verification and validation:
– Verification is static testing, whereas validation is dynamic testing.
– Verification takes place before validation.
– Verification evaluates plans, documents, requirements, and specifications, whereas validation evaluates the product itself.
– Verification inputs are checklists, issue lists, walkthroughs, and inspections, whereas validation involves testing of the actual product.
– Verification output is a set of documents, plans, specifications, and requirement documents, whereas in validation the actual product is the output.
What are different approaches to do Integration Testing?
Integration testing is black box testing. Integration testing focuses on the interfaces between units, to ensure that units work together to complete a specific task. The purpose of integration testing is to confirm that the different components of the application interact with each other correctly. Integration testing is considered complete when actual results and expected results are the same. There are mainly three approaches to integration testing.
– Top-down Approach: Tests the components by integrating from top to bottom.
– Bottom-up approach: It takes place from the bottom of the control flow to the higher level components
– Big bang approach: different modules are joined together to form a complete system, and then testing is performed on it.
Can you explain the elementary process?
Software applications are made up of several elementary processes. There are two types of elementary processes:
– Dynamic elementary process: involves moving data from one location to another. The location can be within the application or outside the application.
– Static elementary process: involves maintaining the data of the application.
Explain the PDCA cycle.
Software testing is an important part of the software development process. Normal software development follows four important steps, the PDCA (Plan, Do, Check, Act) cycle. The four steps are discussed below:
– Plan: Define the goal and the plan for achieving that goal.
– Do: Execute the strategy that was planned in the plan phase.
– Check: Check to make sure that everything is going according to the plan and gives the expected results.
– Act: Take action on any issues found during the check phase.
What are the categories of defects?
There are three main categories of defects:
– Wrong: The requirements are implemented incorrectly in the application.
– Missing: A requirement given by the customer is not implemented in the application.
– Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the user of the product.
What are different types of verifications?
Verification is a static type of software testing that starts in the earlier phases of software development. In this approach we do not execute the software, which is why it is considered static testing. The product is evaluated by going through the code. Types of verification are:
– Walkthrough: Walkthroughs are an informal technique in which the development lead organizes a meeting with team members to take feedback regarding the software. This can be used to improve the software quality. Walkthroughs are unplanned in the SDLC cycle.
– Inspection: Inspection is done by thoroughly checking a software product with the intention of finding defects and ensuring that the software meets the user requirements.
Which test cases are written first: white boxes or black boxes?
Generally, black box test cases are written first and white box test cases later. To write black box test cases we need the requirement documents and the design or project plan. All these documents are easily available in the earlier phases of development. Black box test cases do not need the structural design of the application, but white box test cases do. The structural design of the application is clearer in the later part of the project, mostly while executing or designing. For black box testing you only need to analyze from the functional perspective, which is easily available from a simple requirement document.
What is the difference between latent and masked defects?
The differences between latent and masked defects are:
– A latent defect is an existing defect that has not yet caused a failure because the conditions required to invoke the defect have never been met.
– A masked defect is an existing defect that has not yet caused a failure because another defect has prevented the part of the code where it is present from being executed.
What is coverage and what are the different types of coverage techniques?
Coverage is a measurement used in software testing to describe the degree to which the source code is tested. There are three basic types of coverage techniques as shown in the following figure:
– Statement coverage: This coverage ensures that each line of source code has been executed and tested.
– Decision coverage: This coverage ensures that every decision (true/false) in the source code has been executed and tested.
– Path coverage: In this coverage we ensure that every possible route through a given part of code is executed and tested.
Explain the concept of defect cascading?
Defect cascading is a defect which is caused by another defect; one defect invokes another defect in the application. When a defect is introduced in one stage but is not identified, it is carried forward to later phases without being noticed, which results in an increase in the number of defects.
What are the basic elements of defect report format?
The basic elements of Defect Report Format are:
1. Project name
2. Module name
3. Defect detected on
4. Defect detected by
5. Defect id
6. Defect name
7. Snapshot of the defect (if the defect is in a non-reproducible environment)
8. Priority, severity, status
9. Defect resolved by
10. Defect resolved on
What is destructive testing, and what are its benefits?
Destructive testing includes methods in which material is broken down to evaluate its mechanical properties, such as strength, toughness, and hardness. For example, checking whether the quality of a weld is good enough to withstand extreme pressure, or verifying the properties of a material.
Benefits of Destructive Testing (DT)
– Verifies properties of a material
– Determines quality of welds
– Helps you to reduce failures, accidents and costs
– Ensures compliance with regulations
What is Use Case Testing?
Use case: A use case is a description of the process performed by the end user for a particular task. A use case contains a sequence of steps performed by the end user to complete a specific task, or a step-by-step process that describes how the application and the end user interact with each other. A use case is written from the user's point of view.
Use case testing: Use case testing uses these use cases to evaluate the application, so that the tester can examine all the functionalities of the application. Use case testing covers the whole application.
What is Requirement Traceability Matrix?
The Requirements Traceability Matrix (RTM) is a tool to make sure that the project requirements remain the same throughout the whole development process. The RTM is used in the development process for the following reasons:
– To determine whether the developed project meets the requirements of the user.
– To track all the requirements given by the user.
– To make sure the application requirements can be fulfilled in the verification process.
What is difference between Pilot and Beta testing?
The differences between these two are listed below:
– A beta test takes place when the product is about to be released to the end user, whereas pilot testing takes place in an earlier phase of the development cycle.
– In beta testing the application is given to a few users to make sure that it meets the user requirements and does not contain any showstoppers, whereas in pilot testing team members give their feedback to improve the quality of the application.
Describe how to perform Risk analysis during software testing?
Risk analysis is the process of identifying risks in the application and prioritizing them for testing. Following are some of the risks:
1. New Hardware.
2. New Technology.
3. New Automation Tool.
4. Sequence of code delivery.
5. Availability of application test resources.
We prioritize them into three categories:
– High magnitude: impact of the bug on the other functionality of the application.
– Medium: tolerable in the application but not desirable.
– Low: tolerable. This type of risk has no impact on the company's business.
What is Silk Test?
Silk Test is a tool developed for performing regression and functionality testing of an application. Silk Test is used when we are testing applications based on Windows, Java, the web, or a traditional client/server architecture. Silk Test helps in preparing test plans and managing those test plans, provides direct access to the database, and validates fields.
What is the difference between a Master Test Plan and a Test Plan?
The differences between Master Plan and Test Plan are given below:
– The Master Test Plan contains all the testing and risk-related areas of the application, whereas the test plan document contains test cases.
– The Master Test Plan contains the details of each and every individual test to be run during the overall development of the application, whereas a test plan describes the scope, approach, resources, and schedule for performing tests.
– The Master Test Plan contains a description of every test that is going to be performed on the application, whereas a test plan contains the description of only a few test cases run during the testing cycle, such as unit tests, system tests, beta tests, etc.
– A Master Test Plan is created for all large projects; when it is created for a small project, we simply call it a test plan.
How to deal with not reproducible bug?
A bug cannot be reproduced for the following reasons:
1. Low memory.
2. Addressing a non-available memory location.
3. Things happening in a particular sequence.
The tester can do the following things to deal with a non-reproducible bug:
– Include steps that are close to the error statement.
– Evaluate the test environment.
– Examine and evaluate test execution results.
– Resources & Time Constraints must be kept in point.
What is the difference between coupling and cohesion?
The differences between coupling and cohesion are discussed below:
– Cohesion is the degree to which related functionality is combined into a single unit (module), whereas coupling is the degree to which one unit is bound to, or depends on, other units.
– Cohesion deals with how closely the functionality of different processes within a single module is related, whereas coupling deals with how much one module is dependent on the other modules within the application.
– It is good to increase the cohesion of the software, whereas increased coupling should be avoided.
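A minimal sketch of the idea, with hypothetical classes: ReportFormatter keeps only related formatting behaviour together (high cohesion) and receives its data source from outside instead of creating a concrete one itself (low coupling).

```python
class CsvSalesSource:
    def rows(self):
        return [("east", 600), ("west", 400)]

class ReportFormatter:
    """High cohesion: every method is about formatting a report, nothing else."""

    def __init__(self, source):
        # Low coupling: depends only on an object with a rows() method,
        # not on any concrete data-source class.
        self.source = source

    def as_text(self):
        return "\n".join(f"{region}: {total}" for region, total in self.source.rows())

print(ReportFormatter(CsvSalesSource()).as_text())
```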
What is the role of QA in a project development?
The role of Quality Assurance is discussed below:
– The QA team is responsible for monitoring the process being carried out for development.
– The QA team is responsible for planning the test execution process.
– QA Lead creates the time tables and agrees on a Quality Assurance plan for the product.
– The QA team communicates the QA process to the team members.
– The QA team ensures traceability of test cases to requirements.
When do you choose automated testing over manual testing?
The choice between automated testing and manual testing can be based upon the following factors:
1. Frequency of use of test case
2. Time comparison (automated scripts run much faster than manual execution).
3. Reusability of Automation Script
4. Adaptability of test case for automation.
5. Exploitation of automation tool
What are the key challenges of software testing?
Following are some challenges of software testing:
1. Application should be stable enough to be tested.
2. Testing is always under time constraints.
3. Understanding the requirements.
4. Domain knowledge and business user perspective understanding.
5. Which tests to execute first?
6. Testing the Complete Application.
7. Regression testing.
8. Lack of skilled testers.
9. Changing requirements.
10. Lack of resources, tools and training.
What is baseline testing?
Baseline testing is the process of running a set of tests to capture performance information. Baseline testing uses the information collected to make changes in the application to improve its performance and capabilities. A baseline compares the present performance of the application with its own previous performance.
What is benchmark testing?
Benchmark testing is the process of comparing application performance against an industry standard set by some other organization. A benchmark tells us where our application stands with respect to others; it compares our application's performance with other companies' applications.
What is verification and validation?
Verification: the process of evaluating the work products of a development phase to determine whether they meet the specified requirements for that phase.
Validation: the process of evaluating software during or at the end of the development process to determine whether it satisfies the specified requirements.
Difference between verification and validation: verification is static testing, whereas validation is dynamic testing.
Explain Branch Coverage and Decision Coverage.
– Branch coverage is testing performed to ensure that every branch of the software is executed at least once. To perform branch coverage testing we take the help of the control flow graph.
– Decision coverage testing ensures that every decision-making statement is executed at least once.
– Both decision and branch coverage testing are done to assure the tester that no branch or decision-making statement will lead to a failure of the software.
– To calculate branch coverage: (number of branches executed / total number of branches) × 100.
What is the difference between Retesting and Regression testing?
The differences between retesting and regression testing are:
– Retesting is done to verify that a previously reported defect has been fixed and is now working correctly, whereas regression testing is performed to check that the defect fix has not impacted other functionality that was working fine before the changes were made to the code.
– Retesting is specific and is performed on the bug which was fixed, whereas regression testing is not always specific to a defect fix; it is performed whenever any bug is fixed.
– Retesting is concerned with executing those test cases that failed earlier, whereas regression testing is concerned with executing test cases that passed in earlier builds.
– Retesting has higher priority than regression testing.
What is Mutation testing &amp; when can it be done?
Mutation testing is performed to find defects in the program. It is performed to find bugs in a specific module or component of the application. Mutation testing is based on two assumptions:
– Competent programmer hypothesis: according to this hypothesis we assume that programmers write code that is close to correct.
– Coupling effect: according to this assumption, test data that detects simple faults will also detect more complex faults.
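A minimal sketch of the idea, with hypothetical functions: a mutant version of the code is generated (here the relational operator is changed) and the test suite is run against it; a good suite "kills" the mutant by failing.

```python
def can_vote(age):            # original program
    return age >= 18

def can_vote_mutant(age):     # mutant: relational operator changed by the tool
    return age > 18

def test_suite(func):
    assert func(20) is True
    assert func(17) is False
    assert func(18) is True   # boundary test -- this is what kills the mutant

test_suite(can_vote)          # passes: original behaviour is correct
try:
    test_suite(can_vote_mutant)
except AssertionError:
    print("mutant killed: the test suite detected the injected fault")
```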
What is severity and priority of a bug? Give some examples.
Priority is concerned with the application from the business point of view. It answers: how quickly do we need to fix the bug, or how soon should the bug be fixed?
Severity is concerned with how much the bug affects the functionality of the application.
Examples of the combinations:
1. High priority and low severity
2. High priority and high severity
3. Low priority and high severity
4. Low priority and low severity
Explain bug leakage and bug release.
Bug leakage: when the customer or end user discovers a bug that could have been detected by the testing team, or when a bug is detected that could have been detected in a previous build, it is called bug leakage.
Bug release: a bug release is when a build is handed to the testing team with the knowledge that a defect is present in the release. The priority and severity of such a bug is low. It is done when the customer wants the application on time and can tolerate the bug in the release rather than the delay in getting the application and the cost involved in removing the bug. These bugs are mentioned in the release notes handed to the client, for future improvement.
What is alpha and beta testing?
Alpha testing is performed by the in-house developers and the software QA team. After alpha testing the software is handed over to the software QA team for additional testing in an environment that is similar to the client environment.
Beta testing becomes active after alpha testing. It is performed by end users: the public, a few select prospective customers, or the general public, so that they can make sure that the product is bug free and works as per the requirements.
What is Monkey testing?
Monkey testing is a type of black box testing used mostly at the unit level. In this, the tester enters data in any format and checks that the software does not crash. In this testing we use smart monkeys and dumb monkeys.
Smart monkeys are used for load and stress testing; they help in finding bugs but are very expensive to develop.
Dumb monkeys are important for basic testing. They help in finding bugs with high severity and are less expensive compared to smart monkeys.
Example: symbols are entered in a phone number field.
Why is Performance Testing performed?
Performance testing is performed to evaluate application performance under load and stress conditions. It is generally measured in terms of response time for user activity. It is designed to test the whole performance of the system under high load and stress conditions.
Example: a customer wants to withdraw money from an ATM. The customer inserts a debit or credit card and waits for the response. If the system takes more than 5 minutes, then according to the requirements the system has failed.
Types of performance testing:
– Load: analogous to volume testing; determines how the application deals with large amounts of data.
What are tools of performance testing?
Following are some popular commercial performance testing tools:
– LoadRunner (HP): for web and other applications. It supports a variety of application environments, platforms, and databases, and provides a number of server monitors to evaluate the performance of each component and track bottlenecks.
– QAload (Compuware): used for load testing of web, database, and character-based systems.
– WebLoad (RadView): allows comparing running tests against test metrics.
– Rational Performance Tester (IBM): used to identify the presence and cause of system performance bottlenecks.
– Silk Performer (Borland): allows prediction of the behavior of an e-business environment before it is deployed, regardless of size and complexity.
Explain the sub-genres of Performance testing.
Following are the sub-genres of performance testing:
– Load testing: conducted to examine the performance of the application under a specific expected load. The load can be increased by increasing the number of users performing a specific task on the application in a specific time period.
– Stress testing: conducted to evaluate system performance by increasing the number of users beyond the limits of its specified requirements. It is performed to understand at which level the application crashes.
– Volume testing: tests an application to determine how much data it can handle efficiently and effectively.
– Spike testing: examines what happens to the application when the number of users suddenly increases or decreases by a large amount.
– Soak testing: performed to understand the application's behavior when load is applied for a long period of time and to see what happens to the stability and response time of the application.
What is performance tuning?
To improve system performance we follow a mechanism known as performance tuning. There are two types of tuning:
– Hardware tuning: optimizing, adding, or replacing the hardware components of the system and making changes at the infrastructure level to improve the system's performance.
– Software tuning: identifying the software-level bottlenecks by profiling the code, database, etc., and fine-tuning or modifying the software to fix the bottlenecks.
What is concurrent user hits in load testing?
When multiple users hit the same event of the application under load test without any time difference, it is called a concurrent user hit. A concurrency point is added so that multiple virtual users can work on a single event of the application. With a concurrency point, virtual users that reach it early wait for the other virtual users running the scripts; only when all the users have reached the concurrency point do they start hitting the requests.
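A minimal sketch of a concurrency (rendezvous) point using Python's threading.Barrier; the virtual users and the simulated request are hypothetical stand-ins for what a load tool would do.

```python
import threading
import time

CONCURRENT_USERS = 5
concurrency_point = threading.Barrier(CONCURRENT_USERS)

def virtual_user(user_id):
    time.sleep(user_id * 0.1)        # users arrive at different times
    concurrency_point.wait()         # early arrivals wait for the others here
    print(f"user {user_id} fires request at {time.strftime('%X')}")

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(CONCURRENT_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```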
What is the need for Performance testing?
Performance testing is needed to verify the below:
– Response time of the application for the intended number of users
What is the reason behind performing automated load testing?
The following drawbacks of manual load testing lead to automated load testing:
– It is difficult to measure the performance of the application accurately.
What are the exiting and entering criteria in the performance testing?
We can start performance testing of the application during the design phase. After the execution of the performance tests, we collect the results and analyze them to improve performance. Performance tuning is performed throughout the application development life cycle, based on factors like the release time of the application and the user's requirements for application stability, reliability, and scalability under load, stress, and performance tolerance criteria. In some projects the end criteria are defined based on the client's performance requirements for each section of the application. When the product reaches the expected level, that can be considered the exit criteria for performance testing.
How do you identify performance bottleneck situations?
Performance bottlenecks can be identified by monitoring the application under load and stress conditions. To find bottleneck situations in performance testing we can use LoadRunner, because it provides different types of monitors such as the run-time monitor, web resource monitor, network delay monitor, firewall monitor, database server monitor, ERP server resources monitor, and Java performance monitor. These monitors help us determine the conditions that cause the application's response time to increase. The performance measurements of the application are based on response time, throughput, hits per second, network delay graphs, etc.
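To make those measurements concrete, below is a small, tool-agnostic Python sketch that derives common indicators (average and 90th-percentile response time, hits per second) from a list of sampled request durations; the sample values and the test window length are invented for illustration.

```python
import statistics

# Hypothetical sample: response times (in seconds) collected over a 10-second test window.
response_times = [0.21, 0.35, 0.19, 0.42, 1.10, 0.27, 0.33, 0.95, 0.24, 0.31]
test_window_seconds = 10.0

avg_response = statistics.mean(response_times)
p90_response = statistics.quantiles(response_times, n=10)[8]  # 90th percentile cut point
hits_per_second = len(response_times) / test_window_seconds   # completed requests per second

print(f"average response time : {avg_response:.2f}s")
print(f"90th percentile       : {p90_response:.2f}s")
print(f"hits per second       : {hits_per_second:.2f}")
```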
What activities are performed during performance testing of any application?
The following activities are performed during performance testing of an application:
1. Create user scenarios
Define performance and stress testing.
Performance testing: performed to evaluate application performance under certain load and stress conditions. It is generally measured in terms of response time for user activity, and it is designed to test the overall performance of the system under high load and stress conditions.
Stress testing: involves imposing heavy loads on the database, such as a large number of users accessing data from the same table where that table contains a large number of records.
What are the typical problems in web testing?
The following problems may arise in web testing:
– Functionality problems
Write the test scenarios for testing a web site.
First, assume that the Graphical User Interface (GUI) objects and elements of a website together form one test scenario. Then check all the links and buttons, and check whether all the forms are working properly. Prepare test scenarios for the forms of a webpage. We can identify four different types of test scenarios for a form:
– Check the form with valid data in all the fields.
While testing a website, which different configurations have to be considered?
These configurations may demand a change in the strategy for testing the webpage. The most important factors that need consideration are:
– Hardware platform: some users may use a Mac, some may use Linux, while others may use a Microsoft platform.
– Browsers: browsers and their versions also change the layout of the web page. Along with browser versions, the different plug-ins also have to be taken into consideration. The resolution of the monitor, along with color depth and text size, are some of the other configurations.
What is the difference between authentication and authorization in web testing?
The differences between authentication and authorization are:
– Authentication is the process by which the system identifies the user, whereas authorization is the process that comes after authentication.
– Authentication is used to ensure that the user is indeed who he claims to be, whereas in authorization the system decides whether a particular task can be performed by that user.
– There are different types of authentication that can be used, such as password-based authentication and device-based authentication, whereas authorization typically grants levels of access such as read-only or read-write.
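A minimal Python sketch of the distinction, using a hypothetical in-memory user store and permission table (the names `USERS`, `PERMISSIONS`, `authenticate`, and `authorize` are illustrative, not from any real framework):

```python
# Hypothetical in-memory stores for illustration only.
USERS = {"alice": "s3cret", "bob": "hunter2"}                 # username -> password
PERMISSIONS = {"alice": {"read", "write"}, "bob": {"read"}}   # username -> allowed actions

def authenticate(username: str, password: str) -> bool:
    """Authentication: is this user who they claim to be?"""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Authorization: is this (already authenticated) user allowed to do this?"""
    return action in PERMISSIONS.get(username, set())

# bob can log in (authentication passes) but cannot write (authorization fails).
assert authenticate("bob", "hunter2") is True
assert authorize("bob", "write") is False
```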
Explain the difference between HTTP and HTTPS.
The differences between HTTP and HTTPS are:
– Hypertext Transfer Protocol (HTTP) is a protocol for passing information back and forth between web servers and clients. HTTPS refers to a normal HTTP interaction carried over an encrypted Secure Sockets Layer (SSL) or Transport Layer Security (TLS) transport mechanism.
– HTTP uses port number 80, whereas HTTPS uses port number 443.
– HTTP supports the client asking for a particular file to be sent only if it has been updated after a certain date and time, whereas with HTTPS the browser encrypts user page requests and decrypts the pages that are returned by the web server.
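A small Python sketch of the port difference using only the standard library (the host name is a placeholder): a plain HTTP connection defaults to port 80 and can carry a conditional request such as If-Modified-Since, while an HTTPS connection defaults to port 443 and wraps the same exchange in SSL/TLS.

```python
import http.client

# Plain HTTP: defaults to port 80, traffic is unencrypted.
plain = http.client.HTTPConnection("example.com")     # placeholder host
plain.request("GET", "/", headers={"If-Modified-Since": "Sat, 01 Jan 2022 00:00:00 GMT"})
print("HTTP status :", plain.getresponse().status)

# HTTPS: defaults to port 443, the same request travels over TLS.
secure = http.client.HTTPSConnection("example.com")
secure.request("GET", "/")
print("HTTPS status:", secure.getresponse().status)
```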
What is the difference between a static and a dynamic website?
The differences between static and dynamic websites are:
– A static website contains web pages with fixed content, whereas on a dynamic website the content of the page changes over time.
– Static websites are easy to create and don't require any database design, whereas developing a dynamic website requires good programming and database knowledge.
– On a static website users cannot interact with one another and the same information is displayed to every user, whereas on a dynamic website users may communicate with each other.
How do you perform testing on a web-based application using QTP?
We can test a web application with QTP by loading the Web add-in when QTP starts up. To make the website available to QTP, we then provide the URL of the site, so that while running, QTP will open the application and perform the testing.
What is Cross Site Scripting?
Cross Site Scripting, also known as XSS, is a threat to dynamic websites. Cross site scripting occurs when a web application gathers malicious data from a user, typically collected in the form of a hyperlink that contains malicious content. It allows malicious code to be inserted into the web page, which can be simple HTML or a client-side script. When the malicious code has been inserted into a page and is clicked by some user, it becomes part of that user's web request; it can also execute on the user's computer and steal information.
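As a hedged illustration of the underlying issue, the Python sketch below shows why untrusted input must be escaped before being placed in a page: the raw value carries a script tag, while `html.escape` neutralizes it. The `render_greeting` function is a made-up stand-in for a real template engine.

```python
import html

def render_greeting(user_input: str, escape: bool = True) -> str:
    """Hypothetical page fragment: embeds user-supplied text into HTML."""
    value = html.escape(user_input) if escape else user_input
    return f"<p>Hello, {value}!</p>"

malicious = '<script>stealCookies()</script>'

print(render_greeting(malicious, escape=False))  # vulnerable: script tag survives intact
print(render_greeting(malicious, escape=True))   # safe: rendered as &lt;script&gt;...
```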
What type of security testing have you performed?
To perform security testing, the tester tries to attack the system; this is the best way to determine loopholes in the security of the application. Most systems use encryption techniques to store passwords, so we try to get access to the system using different combinations of passwords. Another common example of security testing is to find out whether the system is vulnerable to SQL injection attacks. While performing security testing, the tester must not change any of the following:
– configuration of the application or the server
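A minimal Python sketch (using an in-memory SQLite database invented for illustration) of the kind of check a SQL injection test exercises: the string-concatenated query lets a crafted input bypass the password check, while the parameterized query treats the same input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

payload_user, payload_pass = "admin", "' OR '1'='1"   # classic injection payload

# Vulnerable: user input concatenated straight into the SQL text.
vulnerable_sql = (
    f"SELECT * FROM users WHERE name = '{payload_user}' AND password = '{payload_pass}'"
)
print("vulnerable query rows :", conn.execute(vulnerable_sql).fetchall())   # login bypassed

# Safer: parameterized query, input is treated as data, not SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?", (payload_user, payload_pass)
).fetchall()
print("parameterized rows    :", safe_rows)                                 # empty, no bypass
```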
Que 1 – What is Component Testing?
Ans –Testing of individual software components (Unit Testing).
Que 2- What’s Compatibility Testing meaning by?
Ans – In Compatibility testing we can test that software is compatible with other elements of system.
Que 3 – What is Data Driven Testing?
Ans –Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
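A small data-driven sketch in Python: the test inputs and expected results live in externally maintained data and the same test logic runs once per row. The data values and the `add` function under test are hypothetical.

```python
import csv
import io

def add(a: int, b: int) -> int:
    """Hypothetical unit under test."""
    return a + b

# Externally maintained data (shown inline here; in practice this would be a .csv file).
test_data = io.StringIO("a,b,expected\n1,2,3\n-1,1,0\n10,20,30\n")

for row in csv.DictReader(test_data):
    a, b, expected = int(row["a"]), int(row["b"]), int(row["expected"])
    actual = add(a, b)
    status = "PASS" if actual == expected else "FAIL"
    print(f"add({a}, {b}) = {actual}, expected {expected} -> {status}")
```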
Que 4 – What is the difference between Positive and Negative Testing?
Ans – Testing aimed at showing the software works, also known as "test to pass," is positive testing. Testing aimed at showing the software does not work, also known as "test to fail," is negative testing.
Que 5 – What is fault?
Ans – A fault is a condition that causes the system to fail to perform the required function.
Que 6 – What is Performance Testing and Load Testing?
Ans –Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate large number of users. Also known as “Load Testing”.
Que 7 – What is Re-testing?
Ans – Re-testing means testing the functionality of the application again.
Que 8 – What is the importance of Regression testing?
Ans – Regression testing checks that changes in the code have not affected existing working functionality.
Que 9 – Why is Agile Testing important?
Ans – Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Que 10 – What is Basis Path Testing?
Ans- A white box test case design technique that uses the algorithmic flow of the program to design tests.
Que 11 – What skills are needed to be a good test automation engineer?
Ans – Good programming logic, analytical skills, and a pessimistic (questioning) nature.
Que 12 – Why does software have bugs?
Ans – 1. Miscommunication
2. Programming errors
3. Time pressures
4. Changing requirements
5. Software complexity
Que 13 – How does a bug affect a computer program?
Ans – A bug is a fault in a program which causes it to perform in an unintended or unanticipated manner.
Que 14 – What is Defect?
Ans – If the software misses some feature or function that is present in the requirements, it is called a defect.
Que 15 – What is the full name of CAST?
Ans – Computer Aided Software Testing.
Que 16 – What is Capture/Replay Tool?
Ans – A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
Que 17- What is the importance of CMM?
Ans –The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Que 18 – What is Code Inspection?
Ans – A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Que 19 – What is Code Walkthrough?
Ans- A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer’s logic and assumptions.
Que 20 – What is Coding?
Ans- It is the generation of source code.
Que 21 – What is Data Dictionary?
Ans – A database that contains definitions of all data items defined during analysis.
Que 22 – What is Data Flow Diagram?
Ans – It is modeling notation that represents a functional decomposition of a system.
Que 23 – What are main benefits of test automation?
Ans – It’s fast, reliable, comprehensive and reusable.
Que 24 – What is Gorilla Testing?
Ans – Testing one particular module or piece of functionality heavily.
Que 25 – What is Gray Box Testing?
Ans – A combination of Black Box and White Box testing methodologies testing a piece of software against its specification but using some knowledge of its internal workings.
Que 26 – What is High Order Tests?
Ans – Black-box tests conducted once the software has been integrated.
Que 27 – What is Independent Test Group (ITG)?
Ans – A group of people whose primary responsibility is software testing.
Que 28 – What is Boundary Value Analysis?
Ans – BVA is similar to Equivalence Partitioning but focuses on “corner cases”: values at and just outside the limits defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001 (as well as the boundary values -100 and +1000 themselves).
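A minimal Python sketch of generating boundary-value inputs for the -100 to +1000 example above (the helper name is made up):

```python
def boundary_values(low: int, high: int) -> list[int]:
    """Return the classic boundary-value test inputs for an inclusive integer range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# For a function that accepts values from -100 to +1000 inclusive:
print(boundary_values(-100, 1000))   # [-101, -100, -99, 999, 1000, 1001]
```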
Que 29 – What is Code Complete?
Ans – The phase of development where functionality is implemented in its entirety; bug fixes are all that is left. All functions found in the Functional Specifications have been implemented.
Que 30 – What is Code Coverage?
Ans – An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Que 31 – What is Cyclomatic Complexity?
Ans – It is a measure of the logical complexity of an algorithm, used in white-box testing.
Que 32 – What is Quality Assurance?
Ans – All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Que 33-What is Quality Audit?
Ans – A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Que 34 – What is Ramp Testing?
Ans – Continuously raising an input signal until the system breaks down.
Que 35 – What does Recovery Testing mean?
Ans – It confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-outage conditions.
Que 36 – What is Regression Testing?
Ans – Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Que 37 – What is Security Testing?
Ans – Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Que 38 – What is Smoke Testing?
Ans – Smoke Testing is a quick-and-dirty test that the major functions of a piece of software work. It originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire.
Que 39 – What is Static Analysis?
Ans- Analysis of a program carried out without executing the program.
Que 40 – What is the main work of Static Analysis?
Ans – Static analysis is carried out by a tool (a static analyzer) that examines the program without executing it.
Que 41 – What is Static Testing?
Ans – Analysis of a program carried out without executing the program
Que 42 – What is Test Bed?
Ans – An execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Que 43 – What is Test Case?
Ans – Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A test case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.: a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
What is Test Driven Development?
Ans – A testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. an amount of test code roughly equal in size to the production code.
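A tiny, hypothetical TDD-style example using Python's built-in unittest: the test is written against the desired behavior of a made-up `slugify` helper, and the implementation exists only to make that test pass.

```python
import unittest

def slugify(title: str) -> str:
    """Made-up production code, written just far enough to satisfy the test below."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # In TDD this test is written first and fails until slugify() is implemented.
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify("Software Testing FAQ"), "software-testing-faq")

if __name__ == "__main__":
    unittest.main()
```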
Que 44 – What is Load Testing?
Ans – See Performance Testing.
Que 45 – What is Localization Testing?
Ans – This term refers to adapting software specifically for a particular locality.
Que 46 – What is the meaning of Glass Box Testing?
Ans – It is a synonym for White Box Testing.
Que 47 – What is Functional Testing?
Ans – Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focused solely on the outputs generated in response to selected inputs and execution conditions or Black Box Testing.
Que 48 – What is Inspection?
Ans- A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
Que 49 – What is Quality Circle?
Ans – A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Que 50 – What is Quality Control?
Ans – The operational techniques and the activities used to fulfill and verify requirements of quality.
Que 51 – What is Release Candidate?
Ans – A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
Que 52 – What is Scalability Testing?
Ans – Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Que 53 – What is Software Requirements Specification?
Ans – A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Que 54 – What is Storage Testing?
Ans – Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Que 55 – What is Equivalence Partitioning?
Ans- A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Que 56 – What is Exhaustive Testing?
Ans- Testing which covers all combinations of input values and preconditions for an element of the software under test.
Que 57 – What is Equivalence Class?
Ans- A portion of a component’s input or output domains for which the component’s behavior is assumed to be the same from the component’s specification.
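As a hypothetical illustration of Que 55 and Que 57, the Python sketch below partitions the input domain of an invented age field into equivalence classes and tests one representative value from each class:

```python
# Hypothetical specification: an "age" field accepts integers 18..65 inclusive.
equivalence_classes = {
    "below minimum (invalid)": range(-1000, 18),
    "valid range":             range(18, 66),
    "above maximum (invalid)": range(66, 1000),
}

def accepts_age(age: int) -> bool:
    """Invented component under test, implementing the 18..65 rule."""
    return 18 <= age <= 65

# One representative per class is assumed to behave like every other member of that class.
for name, cls in equivalence_classes.items():
    representative = cls[len(cls) // 2]   # pick a value from the middle of the class
    print(f"{name:26} representative={representative:5} accepted={accepts_age(representative)}")
```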
Que 58 – What is Functional Decomposition?
Ans – A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Que 59 – What is Functional Specification?
Ans – It is a document that describes in detail the characteristics of the product with regard to its intended features.
Que 60 – What is Conversion Testing?
Ans –Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Que 61 – What could go wrong with test automation?
Ans – A poor choice of automation tool, or a poor choice of the set of test cases to automate.
Que 62 – What is Debugging?
Ans – It is the process of finding and removing the causes of software failures.
Que 63 – What is Component?
Ans- A minimal software item for which a separate specification is available.
Que 64 – What are different names for Unit Testing?
Ans – Component testing, program testing, Module testing
Que 65 – What is CMM?
Ans – See Que 17 above: the Capability Maturity Model for Software (CMM or SW-CMM).
Que 66 – What is Cause Effect Graph?
Ans – A graphical representation of inputs and the associated output effects which can be used to design test cases.
Que 67 – What is Boundary Testing?
Ans – Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Que 68 – What is Bug?
Ans – A fault in a program which causes it to perform in an unintended or unanticipated manner.
Que 69 -What is Beta Testing?
Ans –Testing of a release of a software product conducted by customers.
Que 70 – What is Binary Portability Testing?
Ans –Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Que 71 -What is Black Box Testing?
Ans – Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component
Que 72- What is Basis Set?
Ans –The set of tests derived using basis path testing.
Que 73 – What is Baseline?
Ans – The point at which some deliverable produced during the software engineering process is put under formal change control.
Que 74 – What is Endurance Testing?
Ans – Checks for memory leaks or other problems that may occur with prolonged execution.
Que 75 – What is Metric?
Ans – A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Que 76 – What is Quality Policy?
Ans – The overall intentions and directions of an organization as regards quality as formally expressed by top management.
Que 78 – What is Stress Testing?
Ans –Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Que 79 – What is Monkey Testing?
Ans –Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
Que 80 – What is Concurrency Testing?
Ans – Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores
Que 81 – What is Backus-Naur Form?
Ans – A meta language used to formally describe the syntax of a language.
Que 82 – What is Basic Block?
Ans – A sequence of one or more consecutive, executable statements containing no branches.
Que 83 – What is Sanity Testing?
Ans – Brief test of major functional elements of a piece of software to determine if it’s basically operational.
What is Quality?
- Customer satisfaction: a subjective term. It will depend on who the ‘customer’ is; each type of customer will have their own view of ‘quality’.
What is Software Quality?
- Measurement of how close is actual software product to the expected (intended) product
- Customer satisfaction (for whom?)
- Quality Software: reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable
What is Software Quality Assurance?
- Software QA is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation.
What is the difference between Software Testing and Software QA?
- Testing is mainly an ‘error detection’ process
- Software QA is ‘preventative’. It aims to ensure quality in the methods & processes. (“Quality Assurance” measures the quality of processes used to create a quality product)
What is Software Testing?
- Software Testing is the process of analyzing the software in order to detect the differences between existing and required conditions and to evaluate the features of the software. It involves the entire software development process:
– monitoring and improving the process
– making sure that any agreed-upon standards and procedures are followed
– ensuring that problems are found and dealt with, at the earliest possible stage
- The purpose of testing is verification, validation and error detection (in order to find and fix the problems)
– Verification is checking for conformance and consistency by evaluating the results against pre-specified requirements. (Verification: Are we building the system right?)
– Validation is the process of checking that what has been specified is what the user actually wanted. (Validation: Are we building the right system?)
– Error Detection: finding if things happen when they shouldn’t or things don’t happen when they should.
Is it possible to find/fix all the bugs in a software product before it goes to the customers? Why test?
- No, it is not possible to find and fix every bug before release. We still test to establish and enforce the business systems of the QA organization (test planning, bug tracking, bug reporting, test automation, release certification, and others)
What is black/white box testing?
- Black box software testing is done without access to the source code.
- White box testing is done with access to the code. Bugs are reported at the source code level, not behavioral.
Describe a bug?
- Mismatch between actual behavior of a software application and its intended (expected) behavior. We learn about expected behavior from requirements, specifications, other technical documentation.
What is use case?
- Use cases are used by Business Analysts as a format for specifying system requirements. Each use case represents a complete business operation performed by the user. From the QA perspective we would need to execute an end-to-end test to make sure the requirement is implemented.
- Find more here: http://searchsoftwarequality.techtarget.com/sDefinition/0,,sid92_gci334062,00.html
What is the most important impact QA can have on a product development process?
- Clarifying requirements
- Bringing down percentage of code re-written due to the change in requirements
What is Negative testing? Positive?
- Positive testing is aimed at showing that the software works as intended when the user performs correct actions.
- Negative testing is aimed at showing that the software properly handles situations in which the user does not act as expected (invalid inputs, unreasonable selections of settings, etc.)
Which type of testing results in highest number of bugs found?
- Negative testing (versus Positive testing of same type)
What is the software development life cycle?
- The software development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application.
What is a Test Case?
- Set of conditions and/or variables under which a tester will determine if a requirement upon an application is satisfied
What does Test Case include?
When planning the test case, it includes:
- Test case ID
- The purpose (Title, Description) of the test case
- An instruction on how to get from the application base state to a verifiable application output or expected result
- Expected result
When executing test cases we need two more columns:
- Actual result
- PASS/FAIL indication
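Purely as an illustration, a test case with those columns could be represented as a small record; the field names below mirror the list above rather than any specific test-management tool.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    title: str
    steps: list[str]            # how to get from the base state to a verifiable output
    expected_result: str
    actual_result: str = ""     # filled in during execution
    status: str = "NOT RUN"     # PASS / FAIL after execution

tc = TestCase(
    case_id="TC-001",
    title="Login with valid credentials",
    steps=["Open login page", "Enter valid username and password", "Click Login"],
    expected_result="User lands on the home page",
)
tc.actual_result = "User lands on the home page"
tc.status = "PASS"
print(tc)
```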
What is a test plan?
- Document that describes the objectives, scope, approach, and focus of a software testing effort.
- The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the ‘why’ and ‘how’ of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it.
What does Test Plan include?
The following are some of the items that might be included in a test plan, depending on the particular project:
* Title
* Identification of software including version/release numbers
* Revision history of document including authors, dates, approvals
* Table of Contents
* Purpose of document, intended audience
* Objective of testing effort
* Software product overview
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Relevant standards or legal requirements
* Traceability requirements
* Relevant naming conventions and identifier conventions
* Overall software project organization and personnel/contact-info/responsibilities
* Test organization and personnel/contact-info/responsibilities
* Assumptions and dependencies
* Project risk analysis
* Testing priorities and focus
* Scope and limitations of testing
* Test outline – a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment – hardware, operating systems, other required software, data configurations, interfaces to other systems
* Test environment validity analysis – differences between the test and production systems and their impact on test validity.
* Test environment setup and configuration issues
* Software migration processes
* Software CM processes
* Test data setup requirements
* Database setup requirements
* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
* Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
* Test automation – justification and overview
* Test tools to be used, including versions, patches, etc.
* Test script/test code maintenance processes and version control
* Problem tracking and resolution – tools and processes
* Project test metrics to be used
* Reporting requirements and testing deliverables
* Software entrance and exit criteria
* Initial sanity testing period and criteria
* Test suspension and restart criteria
* Personnel allocation
* Personnel pre-training needs
* Test site/location
* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
* Relevant proprietary, classified, security, and licensing issues
* Open issues
* Appendix – glossary, acronyms, etc.
Write test cases for a text field?
- 5 test cases for capacity including 2 for each boundary and one for the class between boundaries
- 3 test cases for valid/invalid input of letters, digits, special characters
- One test case for each allowed special character (email field as an example)
- Functionality testing if there is any functionality (validation of input as an example, case sensitivity, required field, etc.)
What is a Test matrix?
Data collection mechanism. It provides a structure for testing the effect of combining two or more variables, circumstances, types of hardware, or events. Row and column headings identify the test conditions. Cells keep the results of test execution.
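A hypothetical sketch of such a matrix in Python: the row and column headings are test conditions (platform and browser here, chosen arbitrarily), and each cell holds the result of executing that combination.

```python
import itertools

browsers = ["Chrome", "Firefox", "Safari"]          # column headings (hypothetical)
platforms = ["Windows", "macOS", "Linux"]           # row headings (hypothetical)

# Cells start empty and are filled in as the combinations are executed.
matrix = {(p, b): "NOT RUN" for p, b in itertools.product(platforms, browsers)}
matrix[("Windows", "Chrome")] = "PASS"              # example result of one executed cell
matrix[("macOS", "Safari")] = "FAIL"

header = "Platform".ljust(10) + "".join(b.ljust(10) for b in browsers)
print(header)
for p in platforms:
    print(p.ljust(10) + "".join(matrix[(p, b)].ljust(10) for b in browsers))
```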
If there are so many settings/options to choose, how to write test cases?
- Test cases should be developed for all most common potential scenarios
- They should cover most of the positive input
Besides the test case and test plan, what other documents are required?
- Check Lists
- Test matrices
- Test design specs
- End-to-end tests
- Test summary reports
- Bug reports
Describe risk analysis
Risk analysis means the actions taken to avoid things going wrong on a software development project, things that might negatively impact the scope, quality, timeliness, or cost of a project. This is, of course, a shared responsibility among everyone involved in a project. However, there needs to be a ‘buck stops here’ person who can consider the relevant tradeoffs when decisions are required, and who can ensure that everyone is handling their risk management responsibilities.
How will you write test cases for testing the fields LOGIN & PASSWORD, with positive and negative testing?
Testing boundary conditions? Why? How?
- Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges.
- Boundary value analysis is a method which refines equivalence partitioning. It generates test cases that highlight errors better than equivalence partitioning. The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes. At those points when input values change from valid to invalid errors are most likely to occur. As well, boundary value analysis broadens the portions of the business requirement document used to generate tests.
For example, if the valid range of quantity on hand is -9,999 through 9,999, write test cases that include: 1. the valid test case where quantity on hand is -9,999; 2. the valid test case where quantity on hand is 9,999; 3. the invalid test case where quantity on hand is -10,000; and 4. the invalid test case where quantity on hand is 10,000.
What is the difference between a test case and a test plan?
- Test plan is the most comprehensive Software Testing document that describes the objectives, scope, approach, and focus of a software testing effort
- Test case is the smallest Software Testing document that describes both typical and atypical situation (set of conditions and/or variables) that may occur in the use of an application (under which a tester will determine if a requirement upon an application is satisfied).
Which documents would you refer to when creating Test Cases?
All business and technical documentation available:
– PRD – Product Requirements Document
– BRD – Business Requirements Document
– Functional Specifications
– Manuals and Help
– Use Cases
– Test Design
– Third party publications (books, published by independent authors)
What is Business Requirements Document (BRD)?
BRD is written by the Business Analysts. It details the business solution for a project including the documentation of customer needs and expectations.
The most common objectives of the BRD are:
– To gain agreement with stakeholders
– To provide a foundation to communicate to a technology service provider what the solution needs to do to satisfy the customer’s and business’ needs
– To provide input into the next phase for this project
– To describe what not how the customer/business needs will be met by the solution
What are Bug Report components?
What fields do you fill out in a Bug Report?
Describe to me the basic elements you put in a defect/bug report?
- Report number: Unique number given to the report
- Application / Module being tested
- Version & release number
- Problem Summary / Short Description / Synopsis
- Steps to reproduce (Detailed Description)
- Severity (Critical, Serious, Minor, Suggestion)
- Priority (High, Medium, Low)
- Environment (Software and/or hardware configuration)
- Reported by
- Assigned to
- Status (Open, Pending, Fixed, Closed, cannot reproduce, etc.)
- Resolution / Notes
- Keywords
If you find a bug and the developer says it is as-designed, what can you do?
– find an exact requirement, which defines the way it should be designed
– if there is no specific requirement compare to same feature implemented in quality applications (ask your manager which applications to compare to)
How do you write a bug report?
- Rule of WWW- What happened, Where it happened, under Which circumstances
- Write one bug report for each fix to be verified
- Bug report should be as complete as possible
- Bug reports are as concise as possible
- Report a bug immediately, do not postpone
- Use technical terms, not “people off the street” language
What is the most important part of bug report?
- Steps to reproduce
- Short Description
- Severity
- Priority
- Status
What is the bug life cycle?
The bug should go through the life cycle to be closed. Here are the stages:
– bug found
– bug reported
– bug assigned to developer
– bug fixed by developer
– fix verified by tester
– bug closed
How can a tester be sure that bug was fixed?
– execute the steps in the bug report
– make sure the fixed bug does not result in new bugs in same area.
Describe the QA Process
QA processes include:
1) Test Planning Process
2) Test Development Process
3) Test Execution Process
4) Defect Management Process
5) Test Reporting Process
What is Unit Testing?
- The goal of unit testing is to isolate each part of the program and show that the individual parts (units) are correct.
- A unit is the smallest testable part of an application. It may be an individual function or procedure.
- Unit testing is performed by developers, not testers.
What is API Testing?
- Testing of an API (Application Programming Interface), which is a collection of software functions and procedures.
- API testing is mostly used for testing system software, application software, or libraries.
- It is a white box testing method.
- API testing (done by the QA team) is different from unit testing (done by developers).
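A minimal, made-up example of an API-level test in Python: the test calls a public function of a hypothetical library directly and asserts on its contract, without going through any user interface.

```python
# Hypothetical library function whose public contract we are testing.
def parse_version(text: str) -> tuple[int, int, int]:
    """Parse a 'major.minor.patch' string into a tuple of three integers."""
    major, minor, patch = (int(part) for part in text.split("."))
    return (major, minor, patch)

def test_parse_version_happy_path() -> None:
    assert parse_version("2.10.3") == (2, 10, 3)

def test_parse_version_rejects_garbage() -> None:
    try:
        parse_version("not-a-version")
    except ValueError:
        pass                                  # expected: the API signals bad input
    else:
        raise AssertionError("expected ValueError for malformed input")

test_parse_version_happy_path()
test_parse_version_rejects_garbage()
print("API tests passed")
```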
What is Performance Testing?
Performance testing is to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage.
What is Stress Testing?
Stress testing puts an emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. The goal may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial of service attacks.
What is Regression Testing?
Partial retesting of a modified program to make sure that no errors were introduced while making changes to the code (developing new code or fixing existing code)
What is Acceptance Testing?
Acceptance testing is black-box testing performed on a software prior to its delivery. Acceptance testing by the system provider is distinguished from acceptance testing by the customer (user acceptance testing – UAT).
What do you prefer: white or black box testing?
– Stick to the objective stated in your resume (Portnov School graduates normally apply for black box testing positions)
How do you determine when you have done enough testing?
The testing process comes to the point at which additional tests will not significantly change the quality of the software.
Which tools are used to write Test Cases?
– Test Management Tools such as HP Quality Center, Zephyr, Rational TestManager
– Many companies use spreadsheets (Excel) or word processors (Word)
What is walk-through meeting?
Walk-through meeting is a form of software peer review in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems.
What is Build?
In a programming context, a build is a version of a program. As a rule, a build is a pre-release version and as such is identified by a build number, rather than by a release number. Reiterative (repeated) builds are an important part of the development process. Throughout development, application components are collected and repeatedly compiled for testing purposes.
What is Test Strategy?
A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers, and developers about some key issues of the testing process.
Test cases for a mobile phone?
Test Cases for Mobile Phone:
1) Check whether the battery is inserted into the mobile properly
2) Check switching the mobile on and off
3) Insert the SIM into the phone and check
4) Add one user with a name and phone number in the address book
5) Check an incoming call
6) Check an outgoing call
7) Send/receive messages on the mobile
8) Check that all the numbers/characters on the phone work fine when clicked
9) Remove a user from the phone book and check that the name and phone number are removed properly
10) Check whether the network is working fine
11) If it is GPRS enabled, check the connectivity
The test cases for mobile phone can be divided into
1) Functionality
2) Usability
Functionality: The mobile phone should work as per the specification document.
Ex: To verify whether the contact details are added when the user saves the details.
Usability: How easy is it to operate?
Ex: To verify whether the buttons of the handset are easy to operate.
Similarly, one can design test cases for a pen, tape recorder, TV, etc.
Additional checks for a mobile phone:
12) If it has Bluetooth, check it
13) Check the mobile's performance
14) Check the talk time
15) Check the mobile panel and keys
SOME MOBILE PHONE TEST CASES
This is not an exhaustive list. As you see, mobile phone testing is a rigorous process that can easily require numerous test cases.
START UP
1) Can fully charge battery
2) Inserting SIM is intuitive and fool-proof
3) Inserting battery is intuitive and fool-proof
4) Can power on the mobile phone
5) Power-on splash screen meets requirements (ex: format, branding)
6) Set-up page asks if you want to set time and date
a) returning ‘yes’ allows you to complete set-up
b) returning ‘no’ skips set-up
7) Completing or skipping set-up goes to main page
MAIN PAGE
1) Text, graphics, format, color scheme, and branding meet requirements
2) Time and date are properly displayed on screen
3) Signal strength is properly displayed on screen
4) Battery level is properly displayed on screen
5) Left soft button text reads ‘Call’
6) Right soft button text reads ‘Menu’
7) Pressing left soft button displays Call Log
8) Pressing right soft button displays Applications Menus
9) Pressing left return button does nothing
10) Pressing right clear button does nothing
PLACING A CALL
1) Pressing a number key opens foreground window
2) Foreground window has proper dimensions and graphics
3) Selected number is displayed along bottom left of window
4) The text related to soft key changes from ‘Calls’ to ‘Call’
5) Pressing additional numbers adds corresponding numbers from left to right
6) Pressing clear button deletes right-most number
7) Can clear entire entry
8) With numbers populated in the foreground window, pressing the back button returns to the main page
9) Enter a valid local phone number and place call
10) Enter a valid long distance number and place call
11) Place call when phone has weak signal (1 bar)
12) Attempt to place a call with no signal
13) Place call with low battery
14) Place call when battery runs out
OTHER TEST CASES TO CONSIDER
1) Receiving calls
a) during another call
b) during an IM session
c) while authoring an SMS
d) while reading an e-mail
e) while playing a game
f) while listening to a song
g) while taking a picture
2) Sending and receiving short messages (SMS) with and without attachments
3) Mobile e-mail
4) Mobile Instant Messaging
5) Mobile internet
6) Bluetooth interface
7) Camera
8) Themes
9) Ringtones
10) Battery life
11) Signal strength
12) No-signal conditions
Test cases can get quite complex. For example, consider the steps required to verify that a Mobile IM session re-establishes itself after receiving no signal and going back to a good signal area.
And we’ve just scratched the surface! It would likely take an entire day to produce all of the test cases needed to fully test a cell phone!
1. Check whether the dimensions of mobile are according to the requirements.
2. Check whether the functionalities are with respect to the requirements.
3. Check whether the colour is according to the requirements.
4. Check whether the battery backup is as given in the manual.
5. Check whether the functionalities are as per the manual (extras, games, email services, Bluetooth, camera, etc.).
6. Check the various accessories such as charger, headphones, cable, etc.
7. Check whether the keys function properly.
8. Check whether the model is according to the requirements.
9. Check the display colour (B/W or colour) of the mobile as mentioned in the requirements.
10. Check for proper signal and network coverage in most areas.
11. Check the memory available for storage.
12. Check the behaviour when the stored data exceeds the memory's capacity.
13. Checking for performance of the mobile.
14. Checking for the usability.
Test Cases for a Bisleri bottle
Verify the design and graphics of the bottle.
Verify the bottle when you open it.
Verify the bottle when you fill it with water.
Verify whether water spills outside the bottle while filling it.
Verify the bottle when it is completely filled with water.
Verify the bottle when you close it.
Verify whether water flows out of the bottle after it has been closed.
Test Cases for Mineral Water bottle:
1. To check whether the water is pure
2. To check whether the water contains some minerals
3. To check whether the bottle is compact enough to drink from
4. To check that the bottle is strong enough to be safe if it slips from the hand
5. To check whether the bottle can be recycled
6. To check whether the bottle comes in different capacities
1. Check the physical appearance of the bottle
2. Check that it can hold water
3. Leakage test
4. Check the temperatures the bottle can withstand, both cold and hot
5. Check the material of the bottle
6. Check that the cap fits the bottle properly
7. Check the engraving of the company logo
8. Check how many litres it can hold
9. Check the colour of the bottle
10. Check that the quality of the product is mentioned (1 to 5 rating)
11. Check whether it is recyclable and reusable
12. Breakability/durability test
13. Check that the price of the bottle is mentioned
Test Case for a Traffic Signal
1) Check the presence of three lights like Green, Yellow & Red on the traffic light post.
2) Check the switching sequence of the lights.
3) Check the defined time delay between the switching of lights of defined colors.
4) Check the possibility and accuracy of adjustment in defining the time delay between the switching of various lights depending upon the traffic density.
5) Check the switching ON of light of one color at one particular time.
6) Check the switching of lights from some type of sensor.
Write test cases for a fan.
There are numerous test cases for a fan; some are given below:
Test case 1: Check whether it moves or not.
Description: Ensure that the fan moves properly.
Expected result: The fan should be moving.
Test case 2: Check that it has a minimum of 3 blades.
Description: Ensure that the fan has at least 3 blades.
Expected result: The fan should have no fewer than 3 blades.
Test case 3: Check it should be on when electric button (switch) is on.
Description: Ensure that fan should start working when electric switch is on.
Expected result: Fan should be on when electric button (switch) is on.
Test case 4: Check whether the speed of the fan can be controlled by the regulator.
Description: Ensure that speed of fan should be controlled.
Expected result: Fan speed should be controlled by the regulator.
Test case 5: Check that it stops working once the electric switch is off.
Description: Ensure that fan should stop working once the electric switch is off.
Expected result: Fan should be off once electric switch is off.
Test case 6: Check whether the proper company name is displayed on the fan or not.
Description: Ensure that the name of the company is properly displayed on the fan.
Expected result: The proper name of the company should be displayed on the fan.
Test case 7: Check Fan should always work in clock-wise direction.
Description: Ensure that direction of fan should be in clock-wise.
Expected result: Fan should work in clock-wise direction.
Test case 8: Check the color of the fan blades.
Description: Always ensure that all the blades of fan have same color.
Expected result: Color of all the blades of fan should be of same color.
Test case 9: Check that the fan does not vibrate while in motion.
Description: Ensure that the fan does not vibrate while in motion.
Expected result: The fan should not vibrate.
Test case 10: Check whether the blades have a reasonable distance from the ceiling.
Description: Ensure that the fan blades have a reasonable distance from the ceiling.
Expected result: The fan blades should have a reasonable distance from the ceiling.
Test case 11: Check the size of the fan blades.
Description: Always ensure that all the blades of fan have same size.
Expected result: Size of all the blades of fan should be of same size.
Test case 12: Check whether it operates in low voltage.
Description: Ensure that fan should properly operate in low voltage.
Expected result: Fan should be properly operated on low voltage.
Test case 13: Check whether speed varies when regulator adjusted.
Description: Ensure that speed of fan varies when we adjust the regulator.
Expected result: Speed of fan varies while adjusting the regulator.
Write test cases for a pen.
Test cases for a pen are given below. Keep in mind that the test cases may vary if you have a different set of requirements; the cases below do not assume any particular requirements or specifications.
Test cases for a pen:
1. Verify the color of the pen.
2. GUI testing: check the logo of the pen maker.
3. Usability testing: check the grip of the pen.
4. Verify whether the pen is a ballpoint pen or an ink pen.
5. Integration testing: check that the cap of the pen fits easily onto the body of the pen.
6. Check that the pen writes continuously without breaks.
Some Functional test cases for pen:
- Check whether it writes on paper or not.
- Verify whether the ink on the paper is the same color as what we see in the refill.
Performance and load test cases for pen:
- Verify how it performs when writing on wet paper.
- Verify how it performs when writing on rough paper.
- Verify how it performs when writing on hand because we occasionally do that
- Load test: when the pen is pressed very hard against a tough surface, the refill should not come out of the pen.
Negative test cases about pen:
- Verify whether ink is available or not.
- Check the case where ink is available but the pen still does not write on the paper.
- Verify by bending the refill at multiple points and then trying to write with it.
- Verify by dipping the pen in water and then trying to write with it again.
- Check whether it writes on leaves or not.
Additional test cases for pen:
- Usability testing: write on a section of paper and examine whether you can write smoothly; the pen should not write and then stop with breaks.
- Capability/reliability testing: test the writing capacity (the amount of writing possible from a single refill) of the pen.
- Robustness testing: test whether you can carry the pen in your shirt and pant pocket using its cap; the cap clip should be solid enough to grip your pocket.
- Compatibility testing: test by writing on different types of surfaces such as rough paper, packing material, glass, leather, cotton, wood, plastic, metals like aluminium or iron, polythene sheet, etc.
Write test cases for a car which has been recently launched in the market, to check its durability, fuel efficiency, and optimum speed.
1) Check whether the car can run at high speed while driving on a plain (flat) road.
2) Check the features provided in the car compared to other cars of the same price range.
3) Check whether the features provided inside are working fine:
a) power windows
b) airbags
c) AC
d) music system
e) central locking
f) lights, horn, wipers, gear lever, brakes, accelerator, clutch
g) check the odometer and tachometer needle and verify against a gauge
4) Drive it in conditions like:
a) highway
b) city
to check mileage, clutch, gear, and acceleration
5) Drive at a fairly high speed and slow down to check brakes, steering, and control
Positive test case:
Open the door -> Get inside (check that the roof does not hit your head; it should not be too low) -> Close the door (no screeching sound should be present) -> Check that all buttons on the dash work fine (also check the power windows) -> Check the music player and its controls -> Start the car and drive normally, listening for any defect -> Stop -> Open the door -> Exit -> Close the door
Negative test cases:
Start the car -> Test the AC for cool air, dust, water -> Start driving -> Check the shock absorbers on rough patches -> Check control at high speeds -> Mileage should be as prescribed by the company -> Stop very quickly to check the brakes -> Open the door -> Exit -> Close the door
Test Cases for Pen:
1. Check that all parts of the pen fit properly with no loose fitting. [Installation Testing]
2. Check that the pen ball is fitted properly and the ball moves with ease. [Installation Testing]
3. Check that the dimensions of the pen are as mentioned in the requirements. [UI Testing]
4. The size and shape should be comfortable for writing. [UI Testing]
5. The logo on the pen should be properly printed. [UI Testing]
6. Check that the grip on the pen is good. [Usability Testing]
7. Check that the pen writes smoothly and continuously, without breaks while writing. [Usability Testing]
8. Check that the pen's ink does not smudge while writing on the paper. [Usability Testing]
9. Check how much pressure needs to be applied to write on the page; it should write with the least effort. [Usability Testing]
10. Check whether the pen can be used with similar refills of different brands. [Usability Testing]
11. Check that the pen writes on the page properly. [Functional Testing]
12. Check that the ink on the paper is the same color as what we see in the refill. [Functional Testing]
13. Check that writing on paper does not fade after some time. If the requirements say the ink is waterproof, apply water to the written text and check the behaviour of the ink. [Functional Testing]
14. Check that the pen produces a line width within the specified millimetre range. If the pen comes in two different millimetre ranges, i.e. 0.5 mm and 0.7 mm, make sure the line width produced by the pen is as per design. [Validation Testing]
15. Check that the pen works properly in flight, where pressure conditions are different. [Performance Testing]
16. Check that if the pen is not used for a substantial period of time, the ink does not clot inside the pen and the user is still able to use the pen with ease. [Recovery Testing]
17. Check that the material used to manufacture the pen is safe if chewed or kept in the mouth. Users often put the pen in their mouth while writing, so make sure the pen is safe to put in the mouth or chew.
18. Check that ink does not leak from the refill under normal conditions.
19. Check that the pen works at different writing angles, for example when a notice is displayed on a notice board and the user wants to write on it.
20. Check that the flow of ink is consistent while writing; badly designed pens have problems with uneven ink flow.
21. Check that the pen's ink does not dry very quickly or very slowly. While writing on the page ink comes out of the pen point, so it should neither dry too quickly nor too late.
22. Check that the pen works properly in a space environment, if that is mentioned in the requirement specifications. [Capability Testing]
23. Check how long you are able to write with a single refill of the pen. [Capability Testing]
24. Check that the pen writes properly on different types of surfaces like smooth paper, rough paper, wooden material, plastic, leather, steel, glass, etc. [Compatibility Testing]
25. Check that the pen grips properly on a shirt pocket and the user is able to carry it in a pocket with ease. [Robustness Testing]
26. Check that the pen's writing point is strong enough to bear the load of different users; some users write with extra pressure on the pen tip. [Robustness Testing]
Negative Test Cases for Pen:
27. Stress-test the pen by dropping it from a practical height and check that nothing breaks, there is no damage to the pen, and the pen keeps working without any issues.
28. Hold the pen pointing upwards for some time and then try to write on paper.
29. Keep the pen in water and then try to write on paper.
30. Check how the pen works under different environmental conditions, such as at room temperature and in different climates.