Testing Interview Questions and Answers
1.
Can you explain the PDCA cycle and where testing fits in?
Software
testing is an important part of the software development process. In normal
software development there are four important steps, also referred to, in
short, as the PDCA (Plan, Do, Check, Act) cycle.
Let's review the four steps in detail.
- Plan: Define the goal and the plan
for achieving that goal.
- Do/Execute: Depending on the strategy decided during the plan stage, we carry out execution accordingly in this phase.
- Check: Check/Test to ensure that we
are moving according to plan and are getting the desired results.
- Act: If any issues are found during the check cycle, we take appropriate action and revise the plan accordingly.
So developers and other stakeholders of the project do the "planning and
building," while testers do the check part of the cycle. Therefore,
software testing is done in the check part of the PDCA cycle.
2.
What is the difference between white box, black box, and gray box testing?
Black box testing is a testing strategy based solely on
requirements and specifications. Black box testing requires no knowledge of
internal paths, structures, or implementation of the software being tested.
White box testing is a testing strategy based on internal
paths, code structures, and implementation of the software being tested. White
box testing generally requires detailed programming skills.
There is one more type of testing called gray box testing. In this we look into the "box" being tested just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.
The above figure shows how both types of testers view an accounting application
during testing. Black box testers see only the basic accounting application,
while white box testers know the internal structure of the
application. In most scenarios white box testing is done by developers, as they
know the internals of the application. In black box testing we check the
overall functionality of the application, while in white box testing we do code
reviews, review the architecture, remove bad code practices, and do component-level
testing.
3.
Can you explain usability testing?
Usability testing is a testing methodology in which the
end customer is asked to use the software to see if the product is easy to use,
and to gauge the customer's perception and the time taken to complete tasks. The
best way to capture the customer's point of view on usability is to use a
prototype or mock-up of the software during the initial stages. By giving the
customer the prototype before development starts we confirm that we are not
missing anything from the user's point of view.
4.
What are the categories of defects?
There are three main categories of defects:
- Wrong: The requirements have been
implemented incorrectly. This defect is a variance from the given
specification.
- Missing: There was a requirement
given by the customer and it was not done. This is a variance from the
specifications, an indication that a specification was not implemented, or
a requirement of the customer was not noted properly.
- Extra: A requirement incorporated
into the product that was not given by the end customer. This is always a
variance from the specification, but may be an attribute desired by the
user of the product. However, it is considered a defect because it's a
variance from the existing requirements.
5.
How do you define a testing policy?
The following are the important steps used to define a
testing policy in general. But it can change according to your organization.
Let's discuss in detail the steps of implementing a testing policy in an
organization.
- Definition: The first step any
organization needs to do is define one unique definition for testing
within the organization so that everyone is of the same mindset.
- How to achieve: How are we going to achieve our objective? Is there going
  to be a testing committee, will there be compulsory test plans which need
  to be executed, etc.?
- Evaluate: After testing is implemented in a project how do we evaluate it?
  Are we going to derive metrics of defects per phase, per programmer, etc.?
  Finally, it's important to let everyone know how testing has added value
  to the project.
- Standards: Finally, what are the standards
we want to achieve by testing? For instance, we can say that more than 20
defects per KLOC will be considered below standard and code review should
be done for it.
6.
On what basis is the acceptance plan prepared?
In
any project the acceptance document is normally prepared using the following
inputs. This can vary from company to company and from project to project.
- Requirement
document: This
document specifies what exactly is needed in the project from the
customer's perspective.
- Input
from customer: This
can be discussions, informal talks, emails, etc.
- Project plan: The project plan prepared by the project manager also serves as good input to finalize your acceptance test.
The following diagram shows the most common inputs used to prepare acceptance test
plans.
7. What is
configuration management?
Configuration
management is the detailed recording and updating of information for hardware
and software components. When we say components we not only mean source code.
It can be tracking of changes for software documents such as requirement,
design, test cases, etc.
When changes are made in an ad hoc and uncontrolled manner, chaotic situations can arise and more defects can be injected. So whenever changes are made they should be done in a controlled fashion and with proper versioning. At any moment in time we should be able to revert to an old version. The main intention of configuration management is to track our changes in case we have issues with the current system. Configuration management is done using baselines.
8. How does a
coverage tool work?
While
doing testing on the actual product, the code coverage testing tool is run
simultaneously. While the testing is going on, the code coverage tool monitors
the executed statements of the source code. When the final testing is completed
we get a complete report of the pending statements and also get the coverage
percentage.
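The mechanism can be sketched in a few lines of Python. This is only a simplified illustration, not a real coverage tool: `trace_coverage` and `classify` are hypothetical names, and the tracer merely records line numbers for one function, whereas real tools map hits back to the full source and compute a coverage percentage.

```python
import sys

def trace_coverage(func, *args):
    """Run func under a line tracer, recording which source lines execute."""
    executed = set()

    def tracer(frame, event, arg):
        # Record 'line' events, but only for the function under test.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

def classify(n):           # hypothetical function under test
    if n < 0:
        return "negative"  # pending unless some test passes n < 0
    return "non-negative"

result, lines = trace_coverage(classify, 5)
# With only this input, the "negative" branch never executes, so a
# coverage report would flag that statement as pending (uncovered).
```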
9. Which is the
best testing model?
In real projects,
tailored models are proven to be the best, because they share features from The
Waterfall, Iterative, Evolutionary models, etc., and can fit into real life
time projects. Tailored models are most productive and beneficial for many
organizations. If it's a pure testing project, then the V model is the best.
10. What is the
difference between a defect and a failure?
When a defect reaches the end customer it is called a failure; if the defect is
detected internally and resolved, it is called a defect.
11. Should testing
be done only after the build and execution phases are complete?
In traditional
testing methodology testing is always done after the build and execution
phases.
But that's a wrong way of thinking because the earlier we catch a defect, the more cost effective it is. For instance, fixing a defect in maintenance is ten times more costly than fixing it during execution.
In the requirement phase we can verify if the requirements are met according to the customer needs. During design we can check whether the design document covers all the requirements. In this stage we can also generate rough functional data. We can also review the design document from the architecture and correctness perspectives. In the build and execution phase we can execute unit test cases and generate structural and functional data. And finally comes the testing phase done in the traditional way, i.e., we run the system test cases and see if the system works according to the requirements. During installation we need to see whether the system is compatible with the software environment. Finally, during the maintenance phase when any fixes are made we can retest the fixes and perform regression testing.
Therefore, testing should occur in conjunction with each phase of software development.
12. Are there more
defects in the design phase or in the coding phase?
The
design phase is more error prone than the execution phase. One of the most
frequent defects which occur during design is that the product does not cover
the complete requirements of the customer. Second is wrong or bad architecture
and technical decisions make the next phase, execution, more prone to defects.
Because the design phase drives the execution phase it's the most critical
phase to test. The testing of the design phase can be done by good review. On
average, 60% of defects occur during design and 40% during the execution phase.
13.
What group of teams can do software testing?
When
it comes to testing everyone in the world can be involved right from the
developer to the project manager to the customer. But below are different types
of team groups which can be present in a project.
- Isolated test team
- Outsourced test team: we can hire external testing resources to do testing for our project.
- Inside test team
- Developers as testers
- QA/QC team
14.
What impact ratings have you used in your projects?
Normally, the impact ratings for defects are
classified into three types:
- Minor: Very low impact; does not affect operations on a large scale.
- Major: Affects operations on a very
large scale.
- Critical: Brings the system to a halt
and stops the show.
15.
Does an increase in testing always improve the project?
No, an increase in testing does not always mean improvement of the product,
company, or project. In real test scenarios only 20% of test plans are critical
from a business angle. Running those critical test plans will assure that the
testing is properly done. The following graph explains the impact of under
testing and over testing. If you under test a system the number of defects will
increase, but if you over test a system your cost of testing will increase. Even
if your defects come down your cost of testing has gone up.
16.
What's the relationship between environment reality and test phases?
Environment reality becomes more important as test
phases start moving ahead. For instance, during unit testing you need the
environment to be partly real, but at the acceptance phase you should have a
100% real environment, or we can say it should be the actual real environment.
The following graph shows how with every phase the environment reality should
also increase and finally during acceptance it should be 100% real.
17.
What are different types of verifications?
Verification is a static type of software testing, meaning the code is not
executed. The product is evaluated by going through the code. Types of
verification are:
- Walkthrough: Walkthroughs are informal, initiated by the author of the
  software product, who asks a colleague for assistance in locating defects
  or for suggestions for improvements. They are usually unplanned. The author
  explains the product; the colleague comes out with observations; and the
  author notes down relevant points and takes corrective actions.
- Inspection: An inspection is a thorough word-by-word checking of a software
  product with the intention of locating defects, confirming traceability to
  relevant requirements, etc.
18.
How do test documents in a project span across the software development
lifecycle?
The following figure shows pictorially how test
documents span across the software development lifecycle. The following
discusses the specific testing documents in the lifecycle:
- Central/Project
test plan: This
is the main test plan which outlines the complete test strategy of the
software project. This document should be prepared before the start of the
project and is used until the end of the software development lifecycle.
- Acceptance
test plan: This
test plan is normally prepared with the end customer. This document
commences during the requirement phase and is completed at final delivery.
- System
test plan: This
test plan starts during the design phase and proceeds until the end of the
project.
- Integration and unit test plan: Both of these test plans start during the execution phase and continue until the final delivery.
19. Which test cases are written first: white box or black box?
Normally black box test cases are written first and
white box test cases later. In order to write black box test cases we need the
requirement document and the design or project plan. All these documents are
easily available at the start of the project. White box test cases
cannot be started in the initial phase of the project because they need more
architectural clarity, which is not available at the start of the project. So
normally white box test cases are written after black box test cases are
written.
Black box test cases do not require system understanding, but white box testing needs more structural understanding. And structural understanding is clearer in the later part of the project, i.e., while executing or designing. For black box testing you need only analyze from the functional perspective, which is easily available from a simple requirement document.
20. Explain Unit Testing, Integration Tests, System
Testing and Acceptance Testing?
Unit testing - Testing performed on a
single, stand-alone module or unit of code.
Integration Tests - Testing performed on groups of modules to ensure that data and control are passed properly between modules.
System testing - Testing a predetermined combination of tests that, when executed successfully, meets requirements.
Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).
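The first two levels can be illustrated with a small sketch. The `tax` and `invoice_total` functions are hypothetical examples invented for illustration; the point is only the difference in scope between a unit test and an integration test.

```python
# Hypothetical modules: a tax calculator, and an invoice module that uses it.
def tax(amount, rate=0.2):
    return round(amount * rate, 2)

def invoice_total(amount):
    return amount + tax(amount)

# Unit test: the tax module alone, in isolation.
assert tax(100) == 20.0

# Integration test: invoice and tax working together -- the data
# passed between the two "modules" must be handled correctly.
assert invoice_total(100) == 120.0
```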
21. What is a test log?
The
IEEE Std. 829-1998 defines a test log as a chronological record of relevant details
about the execution of test cases. It's a detailed view of activity and events
given in chronological manner.
The following figure shows a test log and is followed by a sample test log.
22.
Can you explain requirement traceability and its importance?
In most
organizations testing only starts after the execution/coding phase of the
project. But if the organization wants to really benefit from testing, then
testers should get involved right from the requirement phase.
If the tester gets involved right from the requirement phase then requirement traceability is one of the important reports that can detail what kind of test coverage the test cases have.
23.
What does entry and exit criteria mean in a project?
Entry
and exit criteria are a must for the success of any project. If you do not know
where to start and where to finish then your goals are not clear. By defining
exit and entry criteria you define your boundaries.
For instance, you can define entry criteria that the customer should provide the requirement document or acceptance plan. If this entry criteria is not met then you will not start the project. On the other end, you can also define exit criteria for your project. For instance, one of the common exit criteria in projects is that the customer has successfully executed the acceptance test plan.
24.
What is the difference between verification and validation?
Verification is a
review without actually executing the process while validation is checking the
product with actual execution. For instance, code review and syntax check is
verification while actually running the product and checking the results is
validation.
25.
What is the difference between latent and masked defects?
A
latent defect is an existing defect that has not yet caused a failure because
the sets of conditions were never met.
A masked defect is an existing defect that hasn't yet caused a failure just because another defect has prevented that part of the code from being executed.
26.
Can you explain calibration?
Calibration includes tracing the accuracy of the devices used in production,
development, and testing. Devices used must be maintained and calibrated to
ensure that they are working in good order.
27.
What's the difference between alpha and beta testing?
Alpha and beta testing have different meanings to different people. Alpha testing is
the acceptance testing done at the development site. Some organizations have a
different visualization of alpha testing. They consider alpha testing as
testing which is conducted on early, unstable versions of software. On the
contrary beta testing is acceptance testing conducted at the customer end.
In short, the difference between beta testing and alpha testing is the location where the tests are done.
28.
How does testing affect risk?
A risk is a condition that can result in a loss. Risk
can only be controlled in different scenarios but not eliminated completely. A
defect normally converts to a risk.
29.
What is coverage and what are the different types of coverage techniques?
Coverage is a measurement used in software testing to
describe the degree to which the source code is tested. There are three basic
types of coverage techniques as shown in the following figure:
- Statement
coverage: This
coverage ensures that each line of source code has been executed and
tested.
- Decision
coverage: This
coverage ensures that every decision (true/false) in the source code has
been executed and tested.
- Path
coverage: In
this coverage we ensure that every possible route through a given part of
code is executed and tested.
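The three techniques can be compared on a small example. The `grade` function below is hypothetical; note how statement coverage needs only one test, decision coverage needs two, and path coverage needs all four combinations.

```python
def grade(score, extra_credit):
    result = "fail"
    if score >= 50:      # decision 1
        result = "pass"
    if extra_credit:     # decision 2
        result += "+"
    return result

# Statement coverage: one call that executes every line.
assert grade(60, True) == "pass+"

# Decision coverage: each decision must evaluate both True and False;
# two calls suffice: (True, True) above and (False, False) here.
assert grade(40, False) == "fail"

# Path coverage: every route through the code -- here 2 x 2 = 4 paths,
# so the two remaining combinations are also needed.
assert grade(60, False) == "pass"
assert grade(40, True) == "fail+"
```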
30.
A defect which could have been removed during the initial stage is removed in a
later stage. How does this affect cost?
If a defect is known at the initial stage then it
should be removed during that stage/phase itself rather than at some later
stage. It's a recorded fact that if a defect is delayed for later phases it
proves more costly. The following figure shows how a defect is costly as the
phases move forward. A defect if identified and removed during the requirement
and design phase is the most cost effective, while a defect removed during
maintenance is 20 times costlier than during the requirement and design
phases.
For
instance, if a defect is identified during requirement and design we only need
to change the documentation, but if identified during the maintenance phase we
not only need to fix the defect, but also change our test plans, do regression
testing, and change all documentation. This is why a defect should be
identified/removed in earlier phases and the testing department should be
involved right from the requirement phase and not after the execution phase.
31.
What kind of input do we need from the end user to begin proper testing?
The product has to be used by the user. He is the most
important person as he has more interest than anyone else in the project.
From
the user we need the following data:
- The
first thing we need is the acceptance test plan from the end user. The
acceptance test defines the entire test which the product has to pass so
that it can go into production.
- We
also need the requirement document from the customer. In normal scenarios
the customer never writes a formal document until he is really sure of his
requirements. But at some point the customer should sign saying yes this
is what he wants.
- The
customer should also define the risky sections of the project. For
instance, in a normal accounting project if a voucher entry screen does
not work that will stop the accounting functionality completely. But if
reports are not derived the accounting department can use it for some
time. The customer is the right person to say which section will affect
him the most. With this feedback the testers can prepare a proper test
plan for those areas and test it thoroughly.
- The
customer should also provide proper data for testing. Feeding proper data
during testing is very important. In many scenarios testers key in wrong
data and expect results which are of no interest to the customer.
32.
Can you explain the workbench concept?
In order to understand testing methodology we need to
understand the workbench concept. A Workbench is a way of documenting how a
specific activity has to be performed. A workbench is referred to as phases,
steps, and tasks as shown in the following figure.
There
are five tasks for every workbench:
- Input: Every task needs some
defined input and entrance criteria. So for every workbench we need
defined inputs. Input forms the first steps of the workbench.
- Execute: This is the main task of the
workbench which will transform the input into the expected output.
- Check: Check steps assure that the
output after execution meets the desired result.
- Production
output: If
the check is right the production output forms the exit criteria of the
workbench.
- Rework: During the check step if the
output is not as desired then we need to again start from the execute
step.
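The five tasks can be sketched as a simple loop, assuming a hypothetical `run_workbench` helper: input enters, the execute step transforms it, the check step validates the output, rework re-executes on failure, and a passing check yields the production output.

```python
def run_workbench(task_input, execute, check, max_rework=3):
    """Input -> Execute -> Check; on failure, Rework (re-execute);
    on success the result becomes the production output."""
    for attempt in range(max_rework):
        output = execute(task_input)   # Execute step
        if check(output):              # Check step
            return output              # Production output (exit criteria)
    raise RuntimeError("rework limit reached; output never met the check")

# Toy example: an execute step that normalizes text, checked against a rule.
result = run_workbench("  Hello ", execute=str.strip,
                       check=lambda s: s == "Hello")
assert result == "Hello"
```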
33.
Can you explain the concept of defect cascading?
Defect cascading occurs when one defect causes another defect: one defect
triggers the other. For instance, in the accounting application shown here
there is a defect which leads to negative taxation. So the negative taxation
defect affects the ledger, which in turn affects four other modules.
37.
What's the difference between System testing and Acceptance testing?
Acceptance
testing checks the system against the "Requirements." It is
similar to System testing in that the whole system is checked but the important
difference is the change in focus:
System testing checks that the system that was specified has been delivered. Acceptance testing checks that the system will deliver what was requested. The customer should always do Acceptance testing and not the developer.
The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgement. This testing is more about ensuring that the software is delivered as defined by the customer. It's like getting a green light from the customer that the software meets expectations and is ready to be used.
38.
Can you explain regression testing and confirmation testing?
Regression testing is used to find regression defects.
Regression defects are defects that occur when functionality which was once
working normally has stopped working. This is probably because of changes made
in the program or the environment. To uncover such defects, regression
testing is conducted.
The following figure shows the difference between regression and confirmation testing.
If we fix a defect in an existing application we use confirmation testing to
test whether the defect is removed. It is very possible that, because of this
defect or the changes made to the application, other sections of the
application are affected. So to ensure that no other section is affected we
can use regression testing to confirm this.
34.
Can you explain cohabiting software?
When we install the application at the end client it
is very possible that on the same PC other applications also exist. It is also
very possible that those applications share common DLLs, resources etc., with
your application. There is a huge chance in such situations that your changes
can affect the cohabiting software. So the best practice is after you install
your application or after any changes, tell other application owners to run a
test cycle on their application.
35.
What is the difference between pilot and beta testing?
The difference between pilot and beta testing is that
pilot testing means actually using the product with real data (limited to some
users), while in beta testing we do not input real data; the product is
installed at the end customer's site to validate whether it can be used in
production.
36.
What are the different strategies for rollout to end users?
There are four major ways of rolling out any project:
- Pilot: The actual production system is installed for a single or limited
  number of users. Pilot basically means that the product is actually rolled
  out to limited users for real work.
- Gradual
Implementation: In
this implementation we ship the entire product to the limited users or all
users at the customer end. Here, the developers get instant feedback from
the recipients which allow them to make changes before the product is
available. But the downside is that developers and testers maintain more
than one version at one time.
- Phased
Implementation: In
this implementation the product is rolled out to all users
incrementally. That means each successive rollout has some added
functionality. So as new functionality comes in, new installations occur
and the customer tests them progressively. The benefit of this kind of
rollout is that customers can start using the functionality and provide
valuable feedback progressively. The only issue here is that with each
rollout and added functionality the integration becomes more complicated.
- Parallel
Implementation: In
these types of rollouts the existing application is run side by side with
the new application. If there are any issues with the new application we
again move back to the old application. One of the biggest problems with
parallel implementation is we need extra hardware, software, and
resources.
37.
Can you explain boundary value analysis?
In some projects there are scenarios where we need to
do boundary value testing. For instance, let's say for a bank application you
can withdraw a maximum of 25000 and a minimum of 100. So in boundary value
testing we only test the exact boundaries rather than values in the middle.
That means we test at and just beyond the maximum and minimum limits. This
covers all scenarios. The following figure shows the boundary value testing for the bank
application which we just described. TC1 and TC2 are sufficient to test all
conditions for the bank. TC3 and TC4 are just duplicate/redundant test cases
which really do not add any value to the testing. So by applying proper boundary
value fundamentals we can avoid duplicate test cases, which do not add value to
the testing.
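A minimal sketch of the bank example, assuming a hypothetical `can_withdraw` function with the 100/25000 limits from the text: the test cases sit at and just beyond each boundary rather than at arbitrary values in the middle.

```python
MIN_WITHDRAWAL, MAX_WITHDRAWAL = 100, 25000

def can_withdraw(amount):
    """Hypothetical rule from the bank example in the text."""
    return MIN_WITHDRAWAL <= amount <= MAX_WITHDRAWAL

# Boundary value test cases: probe each boundary exactly and just beyond it.
assert can_withdraw(100) is True      # exact lower boundary
assert can_withdraw(99) is False      # just below the minimum
assert can_withdraw(25000) is True    # exact upper boundary
assert can_withdraw(25001) is False   # just above the maximum
```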
38.
Can you explain equivalence partitioning?
In equivalence partitioning we identify inputs which
are treated by the system in the same way and produce the same results. You can
see from the following figure that TC1 and TC2 give the same result, and TC3
and TC4 likewise both give the same result (Result2). In short, we have two
redundant test cases. By applying equivalence partitioning we minimize the
redundant test cases.
So
apply the test below to see if it forms an equivalence class or not:
- All
the test cases should test the same thing.
- They
should produce the same results.
- If
one test case catches a bug, then the other should also catch it.
- If
one of them does not catch the defect, then the other should not catch it.
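The idea can be sketched with the same hypothetical bank limits used earlier; `classify_withdrawal` is an invented function whose three outcomes define three equivalence classes, so one representative per class suffices.

```python
def classify_withdrawal(amount):
    """Hypothetical rule: the input space splits into three partitions."""
    if amount < 100:
        return "rejected"   # partition 1: below the minimum
    if amount <= 25000:
        return "accepted"   # partition 2: valid range
    return "rejected"       # partition 3: above the maximum

# One representative per partition is enough; 500 and 17000 fall in the
# same class, so testing both would be redundant.
assert classify_withdrawal(500) == classify_withdrawal(17000) == "accepted"
assert classify_withdrawal(50) == "rejected"     # partition 1
assert classify_withdrawal(30000) == "rejected"  # partition 3
```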
39.
Can you explain random/monkey testing?
Random testing is sometimes called monkey testing. In
Random testing, data is generated randomly often using a tool. For instance,
the following figure shows how randomly-generated data is sent to the system.
This data is generated either using a tool or some automated mechanism. With
this randomly generated input the system is then tested and results are
observed accordingly.
Random testing has the following weaknesses:
- The tests are often not realistic.
- Many of the tests are redundant.
- You will spend more time analyzing results.
- You cannot recreate the test if you do not record what data was used for
  testing.
This kind of testing is really of no use and is normally performed by newcomers. Its best use is to see if the system will hold up under adverse effects.
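A short sketch of seeded random testing in Python, reusing the bank example; `monkey_test` and `can_withdraw` are hypothetical names. Recording the seed addresses the last weakness above: any failing run can be recreated exactly.

```python
import random

def monkey_test(func, runs=100, seed=42):
    """Feed random inputs to func. The seed is fixed and recorded, so a
    failing run can be replayed -- unseeded random testing cannot be."""
    rng = random.Random(seed)
    for _ in range(runs):
        amount = rng.randint(-1000, 50000)
        result = func(amount)           # check it never crashes and
        assert result in (True, False)  # always returns a boolean

def can_withdraw(amount):               # toy system under test
    return 100 <= amount <= 25000

monkey_test(can_withdraw)               # rerun with the same seed to recreate
```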
40.
What are semi-random test cases?
As the name suggests, semi-random testing is nothing but controlled random
testing with the redundant test cases removed. We generate random test cases
and then apply equivalence partitioning to them, which removes the redundant
test cases, giving us semi-random test cases.
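A rough sketch of the idea (the integer partitions below are assumed for illustration): generate random cases, then keep only one representative per equivalence class:

```python
import random

# Semi-random testing sketch: random generation followed by
# equivalence partitioning to drop redundant cases.

def partition_of(n):
    """Assign an input to an (assumed) equivalence class."""
    if n < 0:
        return "negative"
    return "small" if n < 100 else "large"

rng = random.Random(7)                      # fixed seed for repeatability
raw_cases = [rng.randint(-1000, 1000) for _ in range(50)]

semi_random = {}                            # one representative per partition
for case in raw_cases:
    semi_random.setdefault(partition_of(case), case)

print(semi_random)   # at most three test cases survive out of fifty
```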
41.
Can you explain pair-wise testing using orthogonal arrays?
An orthogonal array is a two-dimensional array in which, if we choose any two
columns, all combinations of values appear in those columns. The following
figure shows a simple L9(3^4) orthogonal array. Here, 9 indicates that it has 9
rows, 4 indicates that it has 4 columns, and 3 indicates that each cell
contains a 1, 2, or 3. Choose any two columns, say columns 1 and 2. They
contain the combinations (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1),
(3,2), and (3,3), i.e., every possible pair of values. Compare this with
columns 3 and 4 and every pair of values will likewise appear. Applied to
software testing, this helps us eliminate duplicate test cases while still
covering every pair of input values.
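The pair-wise property can be checked mechanically. Below is a sketch using the standard L9(3^4) array; `covers_all_pairs` is a hypothetical helper written for this example, not part of any testing library:

```python
from itertools import combinations

# The standard L9(3^4) orthogonal array: 9 rows, 4 columns, 3 levels.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def covers_all_pairs(array, levels=3):
    """Check that every pair of columns contains all level combinations."""
    columns = range(len(array[0]))
    for c1, c2 in combinations(columns, 2):
        pairs = {(row[c1], row[c2]) for row in array}
        if len(pairs) != levels * levels:
            return False
    return True

print(covers_all_pairs(L9))   # 9 rows cover all pairs of the 81 exhaustive cases
```

Nine rows achieve pair-wise coverage of a parameter space that would take 3^4 = 81 test cases to cover exhaustively.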

42.
What is negative and positive testing?
A negative test is when you put in an invalid input
and receive errors.
A positive test is when you put in a valid input and expect some action to be completed in accordance with the specification.
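A minimal sketch of both kinds of test (the `divide` function and its error behavior are assumed examples):

```python
# Positive and negative test sketch for a hypothetical divide() function.

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b

# Positive test: valid input, expect the behavior the specification promises.
assert divide(10, 2) == 5.0

# Negative test: invalid input, expect a clear error rather than a crash.
try:
    divide(10, 0)
except ZeroDivisionError as error:
    print("negative test passed:", error)
```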
43. How did
you define severity ratings in your project?
There are four types of severity ratings as shown in
the table:
- Severity
1 (showstoppers): These kinds of defects do not allow the application to
move ahead. So they are also called showstopper defects.
- Severity
2 (application continues with severe defects): Application continues
working with these types of defects, but they can have high implications,
later, which can be more difficult to remove.
- Severity
3 (application continues with unexpected results): In this scenario the
application continues but with unexpected results.
- Severity
4 (suggestions): Defects with these severities are suggestions given by
the customer to make the application better. These kinds of defects have
the least priority and are considered at the end of the project or during
the maintenance stage of the project.
44.
Can you explain exploratory testing?
Exploratory testing is also called ad hoc testing, but
in reality it's not completely ad hoc. Ad hoc testing is an unplanned,
unstructured, maybe even impulsive journey through the system with the
intent of finding bugs. Exploratory testing is simultaneous learning, test
design, and test execution. In other words, exploratory testing is any testing
done to the extent that the tester proactively controls the design of the tests
as those tests are performed and uses information gained while testing to
design better tests. Exploratory testers are not merely keying in random data,
but rather testing areas that their experience (or imagination) tells them are
important and then going where those tests take them.
45.
Can you explain decision tables?
As the name suggests they are tables that list all
possible inputs and all possible outputs. A general form of decision table is
shown in the following figure.
Condition 1 through Condition N indicate various input conditions. Action 1
through Action N are the actions that should be taken depending on the various
input combinations. Each rule defines a unique combination of conditions that
results in the actions associated with that rule.
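A decision table can be sketched as a mapping from condition combinations to actions (the conditions and actions below are assumed examples):

```python
# Decision-table sketch: each rule maps a unique combination of input
# conditions to an action.

# Rule: (is_registered, has_balance) -> action
decision_table = {
    (True,  True):  "approve_purchase",
    (True,  False): "ask_for_topup",
    (False, True):  "ask_to_register",
    (False, False): "reject",
}

def decide(is_registered, has_balance):
    """Look up the action for a given combination of conditions."""
    return decision_table[(is_registered, has_balance)]

print(decide(True, False))   # ask_for_topup
```

Because the table enumerates every combination, it doubles as a checklist: each key is one test case.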
46.
Does automation replace manual testing?
Automation
is the integration of testing tools into the test environment in such a manner
that the test execution, logging, and comparison of results are done with
little human intervention. A testing tool is a software application which helps
automate the testing process. But a testing tool alone is not the complete
answer for automation. One of the biggest mistakes made in test automation is
automating the wrong things during development. Many testers learn the hard way
that not everything can be automated. The best candidates for automation are
repetitive tasks. So some companies first start with manual testing, then see
which tests are the most repetitive, and automate only those.
As a rule of thumb do not try to automate:
- Unstable
software: If the software is still under development and undergoing many
changes automation testing will not be that effective.
- Once
in a blue moon test scripts: Do not automate test scripts which will be
run once in a while.
- Code
and document review: Do not try to automate code and document reviews;
they will just cause trouble.
The following figure shows what should not be automated.
All
repetitive tasks which are frequently used should be automated. For instance,
regression tests are prime candidates for automation because they're typically
executed many times. Smoke, load, and performance tests are other examples of
repetitive tasks that are suitable for automation. White box testing can also
be automated using various unit testing tools. Code coverage can also be a good
candidate for automation.
47.
How does load testing work for websites?
Websites have software called a web server installed
on the server. The user sends a request to the web server and receives a
response. So, for instance, when you type www.google.com the web server
receives the request and sends you the home page as a response. This happens
each time you click on a link, submit a form, etc. So if we want to do load
testing we just need to multiply these requests and responses "N"
times. This is what an automation tool does: it first captures the request and
response and then repeats them "N" times against the web server, which
results in load simulation.
So once the tool
captures the request and response, we just need to replay them through virtual
users. Virtual users are logical users which simulate actual physical users by
sending the same requests and responses. Doing load testing with 10,000
physical users on an application is practically impossible, but by using a load
testing tool you only need to create 10,000 virtual users.
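A toy sketch of the virtual-user idea (`send_request` is a stand-in for a real captured HTTP round trip, not an actual network call):

```python
from concurrent.futures import ThreadPoolExecutor

# Virtual-user sketch: each "virtual user" replays the same captured
# request concurrently.

def send_request(user_id):
    # In a real load tool this would be an HTTP request/response round trip.
    return f"user-{user_id}: 200 OK"

def run_load_test(virtual_users):
    """Replay the captured request once per virtual user, concurrently."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        return list(pool.map(send_request, range(virtual_users)))

responses = run_load_test(1000)
print(len(responses), "responses collected")
```

Real tools additionally measure response times and error rates per virtual user; this sketch only shows the replay-in-parallel structure.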
48.
Can you explain data-driven testing?
Normally an application has to be tested with multiple
sets of data. For instance, a simple login screen, depending on the user type,
will grant different rights: an admin user has full rights, a normal user has
limited rights, and a support user has read-only rights. In this scenario the
testing steps are the same but are run with different user ids and passwords.
In data-driven testing, inputs to the system are read from data sources such as
Excel, CSV (comma separated values) files, ODBC, etc. The values are read from
these sources and then the test steps are executed by the automated test.
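A minimal data-driven sketch (the CSV rows and the `login` stub are assumed examples): the same test steps run once per data row.

```python
import csv
import io

# Data-driven sketch: identical test steps, multiple data sets.

test_data = io.StringIO(
    "user_id,password,expected_rights\n"
    "admin,admin123,full\n"
    "john,john123,limited\n"
    "support,sup123,read-only\n"
)

def login(user_id, password):
    # Stand-in for the application under test (assumed behavior).
    rights = {"admin": "full", "john": "limited", "support": "read-only"}
    return rights.get(user_id)

results = []
for row in csv.DictReader(test_data):
    actual = login(row["user_id"], row["password"])
    results.append(actual == row["expected_rights"])

print("all rows passed:", all(results))
```

Swapping `io.StringIO` for a real file or an ODBC query changes the data source without touching the test steps, which is the point of the technique.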
49.
What are the different ways of estimating a testing project?
There
are five methodologies most frequently used:
- Top
down according to budget
- WBS
(Work Breakdown Structure)
- Guess
and gut feeling
- Early
project data
- TPA
(Test Point Analysis)
50.
Can you explain TPA analysis?
TPA
is a technique used to estimate test efforts for black box testing. Inputs for
TPA are the counts derived from function points.
Below are the features of TPA:
- Used to estimate only black box testing.
- Requires function points as input.
51.
Can you explain the elementary process?
Software
applications are a combination of elementary processes. When elementary
processes come together they form a software application.
There are two types of elementary processes:
- Dynamic elementary process: A dynamic elementary process moves data from the
internal application boundary to the external application boundary, or vice
versa. Example: an input data screen where a user enters data into the
application; data moves from the input screen into the application.
- Static elementary process: A static elementary process maintains the data of
the application, either inside the application boundary or in the external
application boundary. For example, in a customer maintenance screen,
maintaining customer data is a static elementary process.
52.
How do you estimate white box testing?
The
testing estimates derived from function points are actually the estimates for
white box testing. So in the following figure the man days are actually the
estimates for white box testing of the project. It does not take into account
black box testing estimation.
5.
Can you explain the various elements of function points FTR, ILF, EIF, EI, EO,
EQ, and GSC?
- File
Type References (FTRs): An
FTR is a file or data referenced by a transaction. An FTR should be an ILF
or EIF. So count each ILF or EIF read during the process. If the EP is
maintained as an ILF then count that as an FTR. So by default you will
always have one FTR in any EP.
- Internal
Logical Files (ILFs): ILFs
are logically related data from a user's point of view. They reside in the
internal application boundary and are maintained through the elementary
processes of the application. ILFs can have a maintenance screen, but not
always.
- External
Interface Files (EIFs): EIFs
reside in the external application boundary. EIFs are used only for
reference purposes and are not maintained by internal applications. EIFs
are maintained by external applications.
- External
Input (EI): EIs
are dynamic elementary processes in which data is received from the
external application boundary. Example: User interaction screens, when
data comes from the User Interface to the Internal Application.
- External
Output (EO): EOs
are dynamic elementary processes in which derived data crosses from the
internal application boundary to the external application boundary.
- External
Inquiry (EQ): An
EQ is a dynamic elementary process in which result data is retrieved from
one or more ILFs or EIFs. In this EP some input request has to enter the
application boundary, and the output results exit the application boundary.
- General
System Characteristics (GSCs): This
is the most important section. All the previously discussed sections relate
only to the application itself. But there are other things to consider while
building software, such as whether it will be an N-tier application, what
performance level the user is expecting, and so on. These other factors are
called GSCs.
53.
Can you explain an Application boundary?
The
first step in FPA is to define the boundary. There are two types of major
boundaries:
- Internal
Application Boundary
- External
Application Boundary
The external application boundary can be identified using the following litmus test:
- Does it have, or will it have, any other interface to maintain its data that
was not developed by you?
- Does your program have to go through a third-party API or layer? For example,
for your application to interact with the tax department application, your
code has to go through the tax department API.
- The
best litmus test is to ask yourself if you have full access to the system.
If you have full rights to make changes then it is an internal application
boundary, otherwise it is an external application boundary.
54.
Can you explain how TPA works?
There
are three main elements which determine estimates for black box testing: size,
test strategy, and productivity. Using all three elements we can determine the
estimate for black box testing for a given project. Let's take a look at these
elements.
- Size: The most important aspect of
estimating is definitely the size of the project. The size of a project is
mainly defined by the number of function points. But a function point
count fails to account for, or pays the least attention to, the following factors:
- Complexity: Complexity
defines how many conditions exist in the function points identified during a
project. More conditions mean more test cases, which means higher testing estimates.
- Interfacing: How much does one function
affect the other part of the system? If a function is modified then
accordingly the other systems have to be tested as one function always
impacts another.
- Uniformity: How reusable is the
application? It is important to consider how many similar structured
functions exist in the system. It is important to consider the extent to
which the system allows testing with slight modifications.
- Test
strategy: Every
project has certain requirements. The importance of all these requirements
also affects testing estimates. Any requirement importance is from two
perspectives: one is the user importance and the other is the user usage.
Depending on these two characteristics a requirement rating can be
generated and a strategy can be chalked out accordingly, which also means
that estimates vary accordingly.
- Productivity: This
is one more important aspect to be considered while estimating black box
testing. Productivity depends on many aspects.
55.
Can you explain steps in function points?
Below
are the steps in function points:
- First count the ILFs, EIFs, EIs, EOs, EQs, RETs, DETs, and FTRs and use the
rating tables. After you have counted all the elements you will get the
unadjusted function points.
- Assign a rating value of 0 to 5 to each of the 14 GSCs. The total of all 14
GSCs is used to compute the VAF. Formula: VAF = 0.65 + (sum of all GSC
factors / 100).
- Finally, calculate the adjusted function points. Formula: Total function
points = VAF * unadjusted function points.
- Estimate how many function points you will deliver per day. This is also
called the "performance factor". On the basis of the performance
factor, you can calculate man-days.
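The steps above can be sketched numerically (the GSC ratings, unadjusted count, and performance factor below are made-up sample values):

```python
# Function point adjustment sketch following the steps above.

gsc_ratings = [3, 2, 4, 1, 0, 5, 3, 2, 1, 4, 2, 3, 1, 2]   # 14 GSCs, rated 0-5
assert len(gsc_ratings) == 14

vaf = 0.65 + sum(gsc_ratings) / 100          # VAF = 0.65 + (sum of GSCs / 100)
unadjusted_fp = 120                          # from counting ILF, EIF, EI, EO, EQ
adjusted_fp = vaf * unadjusted_fp            # total function points

performance_factor = 10                      # assumed FP delivered per day
man_days = adjusted_fp / performance_factor

print(round(vaf, 2), round(adjusted_fp, 1), round(man_days, 1))
```

With these sample ratings the GSC sum is 33, so VAF = 0.65 + 0.33 = 0.98 and the 120 unadjusted points become 117.6 adjusted points, or about 11.8 man-days.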
56.
Can you explain function points?
Function
points are a unit measure for software much like an hour is to measuring time,
miles are to measuring distance or Celsius is to measuring temperature.
Function Points are an ordinal measure much like other measures such as
kilometers, Fahrenheit, hours, so on and so forth.
This approach computes the total function points (FP) value for the project, by totaling the number of external user inputs, inquiries, outputs, and master files, and then applying the following weights: inputs (4), outputs (5), inquiries (4), and master files (10). Each FP contributor can be adjusted within a range of +/-35% for a specific project complexity.
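A sketch of this weighting scheme (the counts below are assumed sample values; the weights and the +/-35% adjustment range are the ones quoted above):

```python
# Simplified FP sketch using the quoted weights:
# inputs (4), outputs (5), inquiries (4), master files (10).

WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10}

def total_function_points(counts, complexity_adjustment=0.0):
    """Weighted FP total; complexity_adjustment may vary within +/-0.35."""
    if not -0.35 <= complexity_adjustment <= 0.35:
        raise ValueError("adjustment must be within +/-35%")
    base = sum(WEIGHTS[kind] * n for kind, n in counts.items())
    return base * (1 + complexity_adjustment)

# Sample (assumed) counts: 10 inputs, 6 outputs, 4 inquiries, 2 master files.
counts = {"inputs": 10, "outputs": 6, "inquiries": 4, "master_files": 2}
print(total_function_points(counts))          # 40 + 30 + 16 + 20 = 106
```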