Saturday, May 30, 2009

How do you insert a checkpoint for an image to check the enabled property in QTP? System Testing and User Acceptance Testing; test plans and test scenarios

How do you insert a checkpoint for an image to check its enabled property in QTP?

Answer1:
Since you say that all the images act as push buttons, you can check their enabled/disabled property. If you are not able to find that property, go to the Object Repository for that object and click Add/Remove to add the available properties to it. Let me know if that works. If QTP recognizes it as an image, check the visible/invisible property instead; that might help, as there is no enabled/disabled property for the image object.

Answer2:
The image checkpoint does not have any property for verifying enabled/disabled.
One thing you need to check:
* Find out from the developer whether he shows different images for the activated/deactivated states, i.e. a greyed-out image. That is the only way a developer can show activation/deactivation if he is using an "image". Otherwise he might be using a button displayed with an image.
* If it is a button displayed with an image, you would need to use an object properties checkpoint.

How do you write test cases?

When I write test cases, I concentrate on one requirement at a time. Then, based on that one requirement, I come up with several real life scenarios that are likely to occur in the use of the application by an end user.
When I write test cases, I describe the inputs, actions, or events and their expected results, in order to determine whether a feature of an application is working correctly. To make the test case complete, I also add particulars, e.g. test case identifiers, test case names, objectives, test conditions (or setups), input data requirements (or steps), and expected results.
Additionally, if I have a choice, I like writing test cases as early as possible in the development life cycle. Why? Because, as a side benefit of writing test cases, I am often able to find problems in the requirements or design of an application, and because the process of developing test cases forces me to think completely through the operation of the application.
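
To make this concrete, here is a minimal sketch of how such a test case might be captured as data (Python; the field names and values are illustrative assumptions, not a standard):

    # A minimal, illustrative test case record; field names mirror the
    # particulars listed above and are assumptions, not a standard.
    test_case = {
        "id": "TC-042",
        "name": "Valid login",
        "objective": "Verify a registered user can sign in",
        "setup": "User 'jdoe' exists with password 'secret'",
        "steps": [
            "Open the login page",
            "Enter username 'jdoe' and password 'secret'",
            "Click Sign In",
        ],
        "expected": "The user lands on the home page, signed in as 'jdoe'",
    }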

Differences between System Testing and User Acceptance Testing?

Answer1:
System testing: the process of testing an integrated system to verify that it meets specified requirements. Acceptance testing: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria, and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.
First, I don't classify incidents or defects by the phase of the software development or testing process; I prefer to classify them by type, e.g. requirements, features and functionality, structural bugs, data, integration, etc. The value of categorising faults is that it helps us focus our testing effort where it is most important, and we should have distinct test activities that address the problems of poor requirements, poor structure, and so on.
You don't do user acceptance testing only because the software is delivered! Take care with the concepts of testing!

Answer2:
In my company we do not perform user acceptance testing, our clients do. Once our system testing is done (and other validation activities are finished) the software is ready to ship. Therefore any bug found in user acceptance testing would be issued a tracking number and taken care of in the next release. It would not be counted as a part of the system test.

Answer3:
This is what I feel user acceptance testing is; I hope you find it useful. Definition:
User acceptance testing is formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system.
Objective:
User acceptance testing is designed to determine whether the software is fit for the user to use, and also whether the software fits into the user's business processes and meets his or her needs.
Entry Criteria:
The end of the development process, after the software has passed all the tests that determine whether it meets the predetermined functionality, performance, and other quality criteria.
Exit Criteria:
Verification that the documents delivered are adequate and consistent with the executable system, and that the software system meets all the requirements of the customer.
Deliverables:
User Acceptance Test Plan
User Acceptance Test Cases
User guides/docs
User Acceptance Test Reports

Answer4:
System testing: done by QA at the development end. It is done after integration is complete and all integration P1/P2/P3 bugs are fixed; the code is frozen and no more code changes are taken. Then all the requirements are tested and all the integration bug fixes are verified.
UAT: done by QA (trained to act like end users). All the requirements are tested, and the whole system is verified and validated.

What is the difference between a test plan and a test scenario?

Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a document that describes both typical and atypical situations that may occur in the use of an application.
Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.
Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a description of test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.

Can you give me an example of reliability testing?

For example, our products are defibrillators. From direct contact with customers during the requirements gathering phase, our sales team learns that a large hospital wants to purchase defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly.
In this example, the fact that our defibrillator can run for 250 hours without any failure to demonstrate reliability is irrelevant to these customers. In order to test for reliability, we need to translate terminology that is meaningful to the customers into equivalent delivery units, such as the number of shocks. We therefore describe the customer needs in a quantifiable manner, using the customer's terminology. For example, our quantified reliability testing goal becomes the following: our defibrillator will be considered sufficiently reliable if 10 (or fewer) failures occur in 1,000 shocks.
Then, for example, we use a test/analyze/fix technique, and couple reliability testing with the removal of errors. When we identify a failed delivery of a shock, we send the software back to the developers for repair. The developers build a new version of the software, and then we deliver another 1,000 shocks (into dummy resistor loads). We track failure intensity (i.e. failures per 1,000 shocks) to guide our reliability testing, to determine the feasibility of the software release, and to determine whether the software meets our customers' reliability requirements.
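
As a minimal sketch of the bookkeeping involved (Python; the threshold of 10 failures per 1,000 shocks comes from the example above, everything else is an assumption):

    # Track failure intensity (failures per 1,000 shocks) against the
    # quantified reliability goal from the example above.
    def is_release_reliable(failures, shocks, max_failures_per_1000=10.0):
        failure_intensity = failures / shocks * 1000  # failures per 1,000 shocks
        return failure_intensity <= max_failures_per_1000

    print(is_release_reliable(failures=7, shocks=1000))   # True: goal met
    print(is_release_reliable(failures=12, shocks=1000))  # False: back to development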

Need a function to find all the positions?
Example: the string "abcd, efgh,ight".
I want to break this string wherever the delimiter is found.

Answer1:
And return the delimited fields as a list of strings? That sounds like Perl's split function. You could build one of your own like this:
[ ] // Knocked this together in a few minutes. I am sure there is a more efficient way,
[ ] // but this is a cobbling together of several built-in functions.
[-] LIST OF STRING Split(STRING sDelim, STRING sData)
[ ] LIST OF STRING lsReturn
[ ] STRING sSegment
[-] while MatchStr("*{sDelim}*", sData)
[ ] sSegment = GetField(sData, sDelim, 1)
[ ] ListAppend(lsReturn, Trim(sSegment))
[ ] // Crude chunking: strip "segment + delimiter" off the front of sData.
[ ] sSegment += sDelim
[ ] sData = GetField(sData, sSegment, 2)
[-] if Len(sData) > 0
[ ] ListAppend(lsReturn, Trim(sData))
[ ] return lsReturn

Answer2:
You could use something like this.... hope I am understanding the problem
[+] testcase T1()
[ ] string sTest = "hello, there I am happy"
[ ] string sTest1 = (GetField (sTest, ",", 2))
[ ] Print(sTest1)
[ ]
[ ] This prints "there I am happy"
[ ] GetField(sTest, ",", 1) would print "hello", etc.

Answer3:
Below is a function which returns all fields (a list of strings).
[+] LIST OF STRING ConvertToList (STRING sStr, STRING sDelim)
[ ] INTEGER iIndex= 1
[ ] LIST OF STRING lsStr
[ ] STRING sToken = GetField (sStr, sDelim, iIndex)
[ ]
[+] if (iIndex == 1 && sToken == "") // skip an empty first field when sStr starts with the delimiter
[ ] iIndex = iIndex + 1
[ ] sToken = GetField (sStr, sDelim, iIndex)
[ ]
[+] while (sToken != "") // note: this stops at the first empty field
[ ] ListAppend (lsStr, sToken)
[ ] iIndex = iIndex+1
[ ] sToken = GetField (sStr, sDelim, iIndex)
[ ] return lsStr
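
The answers above appear to be SilkTest 4Test code. For comparison, a short Python equivalent of the same idea, trimming the whitespace around each field (note that, unlike the 4Test versions above, this keeps empty fields):

    # Split on the delimiter and trim whitespace around each field.
    def convert_to_list(s, delim):
        return [field.strip() for field in s.split(delim)]

    print(convert_to_list("abcd, efgh,ight", ","))  # ['abcd', 'efgh', 'ight']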

What is the difference between monkey testing and smoke testing?

Difference number 1: Monkey testing is random testing, while smoke testing is nonrandom testing that deliberately exercises the entire system from end to end, with the goal of exposing any major problems.
Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually.
Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers.
Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very valuable for smoke testing, because they are too expensive for smoke testing.
Difference number 5: "Dumb monkeys" are inexpensive to develop, are able to do some basic testing, but, if we used them for smoke testing, they would find few bugs during smoke testing.
Difference number 6: Monkey testing is not thorough testing, but smoke testing is thorough enough that, if the build passes, one can assume that the program is stable enough to be tested more thoroughly.
Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from something simple to something more thorough.
Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of hours.
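
To illustrate the "dumb monkey" idea from differences 4 and 5, here is a minimal sketch (Python; the app object and its send_event method are hypothetical stand-ins for whatever drives your application):

    import random

    # A minimal "dumb monkey": fire random events at the application and
    # report any crash. 'app' and 'send_event' are hypothetical placeholders.
    def dumb_monkey(app, iterations=10000, seed=42):
        rng = random.Random(seed)  # seeded so failures are reproducible
        events = ["click", "type", "scroll", "back"]
        for i in range(iterations):
            event = rng.choice(events)
            try:
                app.send_event(event, x=rng.randint(0, 800), y=rng.randint(0, 600))
            except Exception as exc:
                print(f"Crash after {i} events, on {event!r}: {exc}")
                raise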

It's a good thing to share test cases with customers

That's generally a good thing, but the question is why do they want to see them?
Potential problems are that they may be considering changing outsourcing firms and want to use the test cases elsewhere. If that can be prevented, please do so.
Another problem is that they want to micromanage your testing efforts. It's one thing to audit your work to prove to themselves that you're doing a good job; it's an entirely different matter if they intend to tell you that you don't have enough test coverage on module foo and far too much coverage on module bar, and ask you to correct it.
Another issue may be that they are seeking litigation and they need proof that you were negligent in some area of testing.
It's never a bad thing to have your customer wanting to be involved, unless you're a large company and this is a small (in terms of sales) customer.
What are your concerns about this? Can you give more information on your situation and the customer's?

Bug tracking. What metrics are used for test report generation? What is a quality plan?

What metrics are used for bug tracking?

Metrics that can be used for bug tracking include the following: the total number of bugs, the total number of bugs that have been fixed, the number of new bugs per week, and the number of fixes per week. Bug tracking metrics can be used to determine when to stop testing, for example, when the bug rate falls below a certain level.
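
A minimal sketch of such a stop-testing check (Python; the weekly counts and the threshold are made-up illustrations):

    # Weekly counts of newly reported bugs (illustrative data).
    new_bugs_per_week = [42, 31, 18, 9, 4, 2]

    def ready_to_stop(weekly_counts, threshold=5, weeks=2):
        # Stop testing once the bug rate stays below 'threshold' for 'weeks' weeks.
        recent = weekly_counts[-weeks:]
        return len(recent) == weeks and all(count < threshold for count in recent)

    print(ready_to_stop(new_bugs_per_week))  # True: the last two weeks are below 5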


1. In a QA team, everyone talks about process. What exactly are they talking about?
2. Are there different types of processes?

Answer1:
When you talk about "process" you are generally talking about the actions used to accomplish a task.
Here's an example: How do you solve a jigsaw puzzle?
You start with a box full of oddly shaped pieces. In your mind you come up with a strategy for matching two pieces together (or no strategy at all and simply grab random pieces until you find a match), and continue on until the puzzle is completed.
If you were to describe the *way* that you go about solving the puzzle you would be describing the process.
Some follow-up questions you might think about include things like:
- How much time did it take you to solve the puzzle?
- Do you know of any skills, tricks or practices that might help you solve the puzzle quicker?
- What if you try to solve the puzzle with someone else? Does that help you go faster, or slower? (why or why not?) Can you have *too* many people on this one task?
- To answer your second question, I'll ask *you* the question: Are there different ways that people can solve a jigsaw puzzle?
There are many interesting process-related questions, ideas, and theories in Quality Assurance. Generally, the identification of workplace processes leads to questions of improving efficiency and productivity. The motivation behind that is to make the processes as efficient as possible, so as to incur the least time and expense, while providing a general sense of repeatability, visibility, and predictability in the way tasks are performed and completed.
The idea behind this is generally good, but the execution is often flawed. That is what makes QA so interesting. You see, when you work with people and processes, it is very different than working with the processes performed by machines. Some people in QA forget that distinction and often become disillusioned with the whole thing.
If you always remember to approach processes in the workplace with a people-centric view, you should do fine.

Answer2:
There is:
* Waterfall
* Spiral
* Rapid prototype
* Clean room
* Agile (XP, Scrum, ...)

What metrics are used for test report generation?

Metrics that can be used for test report generation include...
McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module design complexity metric (iv(G)), essential complexity metric (ev(G)), pathological complexity metric (pv(G)), design complexity metric (S0), integration complexity metric (S1), object integration complexity metric (OS1), global data complexity metric (gdv(G)), data complexity metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data reference metric (TDR), maintenance severity metric (maint_severity), data reference severity metric (DR_severity), data complexity severity metric (DV_severity), global data severity metric (gdv_severity).
McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB), access to public data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL), number of roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G) (MAXEV), and hierarchy quality (QUAL).
Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LOCM), number of children (NOC), response for a class (RFC), and weighted methods per class (WMC). Halstead software metrics: program length, program volume, program level and program difficulty, intelligent content, programming effort, error estimate, and programming time.
Line count software metrics: lines of code, lines of comment, lines of mixed code and comments, and lines left blank.
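
For reference, the cyclomatic complexity metric v(G) mentioned above is computed from a routine's control-flow graph as v(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. For example, a single routine (P = 1) whose flow graph has 9 edges and 8 nodes has v(G) = 9 - 8 + 2 = 3, i.e. three linearly independent paths that a thorough test suite should cover.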

What is a quality plan?

Answer1:
The test plan is the document created before starting the testing process. It includes the types of testing that will be performed, the high-level scope of the project, the environmental requirements of the testing process, the automated testing tools to be used (if available), and the schedule of each test: when it will start and end.

Answer2:
You should not only understand what a quality plan is, but also why you're making it. I don't believe that "because I was told to do so" is a good enough reason. If the person who told you to create it can't tell you 1) what it is, and 2) how to create it, I don't think they actually know why it's needed. That breaks the primary rule of all plans used in testing:
We write quality plans for two very different purposes. Sometimes the quality plan is a product; sometimes it's a tool. It's too easy, but also too expensive, to confuse these goals.
If it's not being used as a tool, don't waste your time (and your company's money) doing this.

What is the difference between efficient and effective?

"Efficient" means having a high ratio of output to input; which means working or producing with a minimum of waste. For example, "An efficient engine saves gas." Or, "An efficient test engineer saves time".
"Effective", on the other hand, means producing or capable of producing an intended result, or having a striking effect. For example, "For rapid long-distance transportation, the jet engine is more effective than a witch's broomstick". Or, "For developing software test procedures, engineers specializing in software testing are more effective than engineers who are generalists".

How effectively can we implement six sigma principles in a very large software services organization?

Answer1:
For an effective implementation of Six Sigma, there are quite a few things one needs:
1. Management buy-in
2. A dedicated team, both drivers and adopters
3. Training
4. Culture building - if you have a pro-process culture, life is easy
5. Sustained effort over a period toward transforming people, thoughts, and actions
Personally, the technical content is never a challenge, but adoption is a challenge.

Answer2:
"Six sigma" is a combination of process recommendations and mathematical model. The name "six sigma" reflects the notion of reducing variation so much that errors -- events out of tolerance -- are six standard deviations from a desired mean. The mathematics are at the core of the process implementation.
The problem is that software is not hardware. Software defects are designed in, not the result of manufacturing variation.
The other side of six sigma is the drive for continuous improvement. You don't need the six sigma math for this, and the concept had been around long before the six sigma movement.
To improve anything, you need some type of indicator of its current state and a way to tell that it is improved. Plus determination to improve it. Management support helps.
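
To put numbers on the six sigma notion: for a process whose nearest tolerance limit sits six standard deviations from the mean, roughly one result in a billion falls out of tolerance; the figure usually quoted in six sigma literature, 3.4 defects per million opportunities, additionally assumes the conventional 1.5-sigma long-term drift of the process mean.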

Answer3:
There are different methodologies adopted in Six Sigma; however, it is most commonly approached through reducing variance. If you try to look at Six Sigma that way, then for software services the measurement system must fundamentally be reliable, and our industry has not reached the maturity level of manufacturing, where the approach fits to a T. The differences between the software and hardware/manufacturing industries are somewhat difficult to address.
There are some areas where you can adopt Six Sigma in its full statistical form (e.g. in-process error rate, productivity improvements); some areas are difficult.
The narrower the problem area, the easier it becomes, even in software services, to adopt the statistical method.
There are methodologies with a bundle of tools, along with statistical techniques, that are used across the full SDLC.
A generic observation: Six Sigma helps if we look for a proper fit of the methodology to the purpose. Otherwise doubts creep in.

What stage of bug fixing is the most cost effective?
Bug prevention techniques (e.g. inspections, peer design reviews, and walkthroughs) are more cost effective than bug detection. Industry studies are often cited as showing that a defect found in production can cost ten to a hundred times more to fix than the same defect caught during requirements or design.


General testing process? What is a benchmark? What is “Quality Assurance”?

What is a benchmark?

A benchmark is a standard to measure against. If you benchmark an application, all future application changes will be tested and compared against the benchmarked application.

Which of the following statements about generating test cases is false?
1. Test cases may contain multiple valid conditions
2. Test cases may contain multiple invalid conditions
3. Test cases may contain both valid and invalid conditions
4. Test cases may contain more than one step.
5. Test cases should contain expected results.

Answer1:
All the conditions mentioned are valid, and not a single statement can be called false.
Here, I think "condition" means the input type or situation (some may call it valid or invalid, positive or negative).
A single test case can also contain both input types, and the final result can then be verified (when the test case is executed, it obviously should not produce the required result, since one of the input conditions is invalid); this usually happens when writing scenario-based test cases.
For example, consider a web-based registration form in which the input data for some fields is positive and for some fields negative (in a scenario-based test case).
Such a screen can be tested by generating various scenarios and combinations. The final result can be verified against the actual result, and the registration should not complete successfully (as one or more input types are invalid) when this test case is executed.
Writing test cases also depends on the number of descriptive fields the tester has in the test case template: the more elaborate the template, the easier it is to write test cases and generate scenarios. So writing test cases depends entirely on the in-depth thinking of the tester, and there are no predefined or hard-coded norms for it.
This is according to my understanding of testing and test case writing (for many applications, I have written many positive and negative conditions in a single test case and verified different scenarios by generating such test cases).

Answer2:
The answer to this question will be 3: "Test cases may contain both valid and invalid conditions."
There is no restriction against a test case having multiple steps or more than one valid or invalid condition. But a test case, whether it is a feature, unit-level, or end-to-end test case, cannot contain both a valid and an invalid condition in a single unit test case.
If it did, the concept of one test case verifying one result would be diluted, and the test case would have no meaning.

What is “Quality Assurance”?

“Quality Assurance” measures the quality of processes used to create a quality product.
Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation.
It involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with, at the earliest possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is ‘preventative’ in that it aims to ensure quality in the methods & processes – and therefore reduce the prevalence of errors in the software.
Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers or quality managers.

Quality Assurance and Software Development

Quality Assurance and development of a product are parallel activities. Complete QA includes reviews of the development methods and standards, and reviews of all the documentation (not just for standardisation, but also for verification and clarity of content). Overall Quality Assurance processes also include code validation.
A note about quality assurance: The role of quality assurance is a superset of testing. Its mission is to help minimise the risk of project failure. QA people aim to understand the causes of project failure (which includes software errors as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are referred to as QA Teams, perhaps acknowledging that testers should consider broader QA issues as well as testing.

What should you consider when testing a mobile application using the black box technique?

Answer1:
Not sure how your device/server is to operate, so mold these ideas to fit your app. Some highlights are:
Range testing: ensure that you can reconnect when leaving range and returning to it.
Port/IP/firewall testing: change ports and IPs to ensure that you can connect and disconnect; modify the firewall to shut off the connection.
Multiple devices: make sure that a user receives his messages when other devices are connected to the same IP/port. Your app should have a way to determine which device/user sent a message and reply only to it (this should be in the message string sent and received), unless you have conferencing capabilities within the application.
Cycle the power of the server and watch the mobile unit reconnect automatically.
Have the mobile unit send a message and then power off the unit; when powering back on and reconnecting, ensure that the message is delivered to the mobile unit.

Answer2:
You haven't clearly mentioned which area of the mobile application you are testing. Whether it is a simple SMS application or a WAP application, you need to specify more details. If you are working with WAP, you can download simulators from the net and test against them.

What is the general testing process?

The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), the creation of a test plan/design (which usually includes test cases and test procedures), and the execution of tests. Test data are inputs that have been devised to test the system.
Test cases are a specification of the inputs and outputs, plus a statement of the function under test.
Test data can be generated automatically (simulated) or can be real (live).

The stages in the testing process are as follows:

1. Unit testing: (Code Oriented)
Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.
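
As a minimal illustration (Python's built-in unittest module; the add function is a stand-in for whichever component is under test):

    import unittest

    def add(a, b):
        # The component under test; a stand-in for real code.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_handles_negatives(self):
            self.assertEqual(add(-2, 2), 0)

    if __name__ == "__main__":
        unittest.main()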

2. Module testing:
A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures and functions. A module encapsulates related components so it can be tested without other system modules.

3. Sub-system testing: (Integration Testing) (Design Oriented)
This phase involves testing collections of modules that have been integrated into sub-systems. Sub-systems may be independently designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on detecting interface errors by rigorously exercising these interfaces.

4. System testing:
The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.

5. Acceptance testing:
This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system requirements definition (user-oriented), because real data exercises the system in different ways from the test data. Acceptance testing may also reveal requirements problems where the system facilities do not really meet the user's needs (functional) or the system performance (non-functional) is unacceptable.

Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client. The alpha testing process continues until the system developer and the client agree that the delivered system is an acceptable implementation of the system requirements.
When a system is to be marketed as a software product, a testing process called beta testing is often used.

Beta testing involves delivering a system to a number of potential customers who agree to use that system. They report problems to the system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and either released for further beta testing or for general sale.

What are the normal practices of QA specialists with respect to software?

These are the normal practices of QA specialists with respect to software
[note: these are all QC activities, not QA activities.]
1- Design review meetings with the system analyst and, if possible, taking part in requirements gathering
2- Analysing the requirements and the design, and tracing the design back to the requirements
3- Test planning
4- Test case identification using different techniques (for web-based applications and desktop applications)
5- Test case writing (this part is to be assigned to the test engineers)
6- Test case execution (this part is to be assigned to the test engineers)
7- Bug reporting (this part is to be assigned to the test engineers)
8- Bug review and analysis, so that future bugs can be prevented by designing some standards

How to Write Test Cases for an ATM, a Phone, Yahoo Mail...

1. Test cases for an ATM

TC 1: successful card insertion.
TC 2: unsuccessful operation due to the card being inserted at the wrong angle.
TC 3: unsuccessful operation due to an invalid account card.
TC 4: successful entry of the PIN number.
TC 5: unsuccessful operation due to the wrong PIN number being entered 3 times.
TC 6: successful selection of language.
TC 7: successful selection of account type.
TC 8: unsuccessful operation due to a wrong account type selected for the inserted card.
TC 9: successful selection of the withdrawal option.
TC 10: successful selection of amount.
TC 11: unsuccessful operation due to wrong denominations.
TC 12: successful withdrawal operation.
TC 13: unsuccessful withdrawal operation due to an amount greater than the available balance.
TC 14: unsuccessful due to lack of cash in the ATM.
TC 15: unsuccessful due to an amount greater than the daily limit.
TC 16: unsuccessful due to the server being down.
TC 17: unsuccessful due to clicking cancel after inserting the card.
TC 18: unsuccessful due to clicking cancel after inserting the card and entering the PIN.
TC 19: unsuccessful due to clicking cancel after language selection, account type selection, withdrawal selection, and amount entry.

2. Test cases for Mobile Phone

1) Check whether the battery is inserted into the mobile properly
2) Check switching the mobile on and off
3) Insert the SIM into the phone and check
4) Add one user with a name and phone number in the address book
5) Check an incoming call
6) Check an outgoing call
7) Send/receive messages on that mobile
8) Check that all the number/character keys on the phone work by clicking on them
9) Remove the user from the phone book and check that the name and phone number are removed properly
10) Check whether the network is working fine
11) If it is GPRS-enabled, check the connectivity

3. Test cases for sending a message through a mobile phone (assuming all the scenarios)

1. Check for the availability of the mobile
2. Check the buttons on the mobile
3. Check whether the mobile is locked or unlocked
4. Check unlocking of the mobile
5. Select the menu
6. Check for Messages in the menu
7. Select Messages
8. Check for Write Message in the Messages menu
9. Select Write Message
10. Check the buttons for writing alphabets
11. Check how many characters you can send
12. Write the message in the Write Message screen
13. Select Options
14. Select Send
15. Check whether it asks for the phone number of the receiver
16. Select search for the receiver's phone number, if it exists
17. Enter the phone number of the receiver
18. Select OK
19. Check that the request is sent

4. Test cases for Traffic Signal

1. Verify that the traffic light has three lights (green, yellow, red)
2. Verify that the lights turn on in a sequence
3. Verify that the lights turn on in a sequence based on the time specified (green light 4.1 min, yellow light 10 sec, red light 1 min)
4. Verify that only one light glows at a time
5. Verify whether the traffic light's timing can be adjusted, as specified, based on the traffic
6. Verify whether the traffic lights in some spots are sensor-activated

5. Test cases for a 2-way switch

1. Check whether two switches are present.
2. Check whether both switches are connected properly.
3. Check power supplies for both switches.
4. Check on/off conditions are working properly.
5. Check that any electronic appliance connected to the 2-way switches does not get power when both switches are either in the on state or in the off state.
6. Check that any electronic appliance connected to the 2-way switches gets power when one switch is in the on state and the other is in the off state, or vice versa.

6. Test cases for an elevator

Some of the use cases would be:
1) The elevator is capable of moving up and down.
2) It stops at each floor.
3) It moves to exactly the floor whose number is pressed.
4) It moves up when called from above and down when called from below.
5) It waits until the 'close' button is pressed.
6) If anyone steps between the doors while they are closing, the doors should open.
7) No break points exist.
8) More use cases for the load that the elevator can carry (if required).

7. Test cases for a calculator

1. It should have keys for the digits 0-9.
2. It should give the proper output based on the operation.
3. It should not allow alphabetic characters.
4. It should run from a cell or battery, not from a power supply.
5. It should be small in size.
6. At a minimum it should perform the 4 basic operations: add, subtract, divide, multiply.

8. Test cases for a bulb

 
1. Check that the bulb is the required shape and size.
2. Check that the bulb can be fitted to and removed from the holder.
3. Check whether the bulb glows with the required illumination or not.
4. Check that the bulb glows when we switch it on.
5. Check that the bulb goes off when we switch it off.
6. Check the bulb material.
7. The life of the bulb should meet the requirement.

9. Test cases for the Yahoo login page

1. Test without entering any username or password.
2. Test with only a username.
3. Test with only a password.
4. Username with a wrong password.
5. Password with a wrong username.
6. Right username and right password.
7. Cancel after entering username and password.
8. Enter a long username and password that exceed the set character limits.
9. Try copy/paste in the password text box.
10. After a successful sign-out, try the "Back" option in your browser. Check whether it takes you to the "signed-in" page.
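
These cases map naturally onto a data-driven test. A minimal sketch (Python; check_login is a hypothetical driver standing in for whatever performs the actual sign-in):

    # Each tuple: (username, password, expected outcome). Values are illustrative.
    LOGIN_CASES = [
        ("", "", "error: credentials required"),
        ("jdoe", "", "error: password required"),
        ("", "secret", "error: username required"),
        ("jdoe", "wrong", "error: invalid credentials"),
        ("wrong", "secret", "error: invalid credentials"),
        ("jdoe", "secret", "signed in"),
    ]

    def run_login_suite(check_login):
        # check_login(username, password) -> outcome string (hypothetical driver).
        for username, password, expected in LOGIN_CASES:
            actual = check_login(username, password)
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: login({username!r}, {password!r}) -> {actual!r}")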

Different Types of Testing in Software Testing

Types and Levels of Testing in Programming

Testing is an important step in the software development life cycle. The process of testing takes place at various stages of development. It is a vital step in the development life cycle, because testing helps to identify mistakes and sends the program back for correction.

This process is repeated at various stages until the final unit or program is found to be complete, thus giving total quality to the development process. The various levels and types of testing found in a software development life cycle are:

White Box Testing
To do this kind of testing, the person has to have access to the source code of the product being tested, so it is essential that whoever does white box testing has some knowledge of the program under test. Though not strictly necessary, it is often best if the programmer does the white box testing, since this process requires handling the source code itself.

Black Box Testing
This is otherwise called functional testing. In contrast to white box testing, the person doing black box testing need not have programming knowledge. This is because the black box tester accesses the outputs or outcomes just as an end user would, and performs thorough functional testing to check whether the developed module or product behaves the way it should.

Unit Testing
This testing is done for each module of the program to ensure the validity of each module. It is usually done by developers, who write test cases for each scenario of the module and record the results of each step.

Regression testing
We all know that the development life cycle is subject to continuous change as user requirements evolve. When a change is made to an existing system that has already been tested, it is essential to make sure that the new changes do not affect the existing functionality. Regression testing is done to ensure this.

Integration testing
Unit testing each module, as explained above, makes the process of integration testing as a whole simpler: correcting mistakes or bugs in each module makes it easier to integrate all the units into a system and test them. So one might ask why integration testing is needed at all. The answer is simple: unit testing, as explained, only tests and assures the correctness of each module in isolation. It does not cover how the system behaves, or what errors are reported, when the modules are integrated. That is covered at the integration testing level.

Smoke Test
This is also called sanity testing. It is mainly used to identify environment-related problems and is performed mostly by the test manager. For any application it is always necessary to have the environment checked first so that the application runs smoothly. So in this testing process the application is run in the environment (technically called a dry run) and checked to confirm that it runs without any problem or abend in between.

Alpha Testing
The different testing processes described above take place at different stages of development, as requirements and needs dictate. But a final round of testing is always done on the fully finished product, before it is released to end users, and this is called alpha testing. Alpha testing involves both white box testing and black box testing, so it is carried out in two phases.

Beta Testing
This testing is carried out to further validate the software developed, and it takes place after alpha testing. Even after the alpha phase, the release is generally not made to all end users. The product is released to a set of people, and feedback is gathered from them to ensure the validity of the product. So here the testing is done by a group of end users, and therefore the beta testing phase covers black box, or functionality, testing only.

Having seen the testing levels and types, let us now see how the testing process takes place in general. After getting an idea of what is to be tested, by communicating with developers and others during the design phase of the software development life cycle, the testing stage proceeds in parallel. The test plan is made ready during the planning stage of testing. This test plan has details such as the environment of the setup (software, hardware, operating system used), the scope and limitations of testing, the test types, and so on. In the next phase the test cases are prepared; these have details of each step to be checked for each module, the inputs to be used for each action, and the expected outcome or result of each action, all described and recorded for testing.
The next phase is the actual testing phase. In this phase the testers test based on the test plan and test cases prepared, and record the output or result of each module; thus the actual output is recorded. Then a report is made of the errors or defects found between the expected outcome and the actual output in each module at each step. This is sent to the developers for rework, and the testing cycle continues as above.

This does not mean that the released system is one hundred percent bug free or error free, because no real system has zero errors. But an important point to bear in mind is that a developed system is a quality system only if it can run for a period of time after its release without error, and if after this period only minimal errors are reported. In achieving this, the testing phase plays an essential role in the software development life cycle.

Concepts & Flow of Verification and Validation in Software Testing

VERIFICATION AND VALIDATION

A. Concepts and Definitions

Software Verification and Validation (V&V) is the process of ensuring that software being developed or changed will satisfy functional and other requirements (validation) and each step in the process of building the software yields the right products (verification). The differences between verification and validation are unimportant except to the theorist; practitioners use the term V&V to refer to all of the activities that are aimed at making sure the software will function as required.

V&V is intended to be a systematic and technical evaluation of software and associated products of the development and maintenance processes. Reviews and tests are done at the end of each phase of the development process to ensure software requirements are complete and testable and that design, code, documentation, and data satisfy those requirements.

B. Activities

The two major V&V activities are reviews, including inspections and walkthroughs, and testing.

1. Reviews, Inspections, and Walkthroughs

Reviews are conducted during and at the end of each phase of the life cycle to determine whether established requirements, design concepts, and specifications have been met. Reviews consist of the presentation of material to a review board or panel. Reviews are most effective when conducted by personnel who have not been directly involved in the development of the software being reviewed.

Informal reviews are conducted on an as-needed basis. The developer chooses a review panel and provides and/or presents the material to be reviewed. The material may be as informal as a computer listing or hand-written documentation.

Formal reviews are conducted at the end of each life cycle phase. The acquirer of the software appoints the formal review panel or board, who may make or affect a go/no-go decision to proceed to the next step of the life cycle. Formal reviews include the Software Requirements Review, the Software Preliminary Design Review, the Software Critical Design Review, and the Software Test Readiness Review.

An inspection or walkthrough is a detailed examination of a product on a step-by-step or line-of-code by line-of-code basis. The purpose of conducting inspections and walkthroughs is to find errors. The group that does an inspection or walkthrough is composed of peers from development, test, and quality assurance.

2. Testing

Testing is the operation of the software with real or simulated inputs to demonstrate that a product satisfies its requirements and, if it does not, to identify the specific differences between expected and actual results. There are varied levels of software tests, ranging from unit or element testing through integration testing and performance testing, up to software system and acceptance tests.

a. Informal Testing

Informal tests are done by the developer to measure the development progress. “Informal” in this case does not mean that the tests are done in a casual manner, just that the acquirer of the software is not formally involved, that witnessing of the testing is not required, and that the prime purpose of the tests is to find errors. Unit, component, and subsystem integration tests are usually informal tests.

Informal testing may be requirements-driven or design-driven. Requirements-driven or black box testing is done by selecting the input data and other parameters based on the software requirements and observing the outputs and reactions of the software. Black box testing can be done at any level of integration. In addition to testing for satisfaction of requirements, some of the objectives of requirements-driven testing are to ascertain:

Computational correctness.

Proper handling of boundary conditions, including extreme inputs and conditions that cause extreme outputs.

State transitioning as expected.

Proper behavior under stress or high load.

Adequate error detection, handling, and recovery.
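
As a small illustration of the boundary-condition objective above, a sketch in Python (the accepts_age validator and its 0-120 valid range are made-up examples):

    # Requirements-driven boundary testing around an assumed valid range [0, 120].
    def accepts_age(age):
        # Hypothetical validator under test: ages 0 through 120 are valid.
        return 0 <= age <= 120

    # Probe each boundary and one step beyond it on each side.
    cases = [(-1, False), (0, True), (1, True), (119, True), (120, True), (121, False)]
    for value, expected in cases:
        assert accepts_age(value) == expected, f"boundary failure at {value}"
    print("All boundary checks passed")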

Design-driven or white box testing is the process where the tester examines the internal workings of code. Design-driven testing is done by selecting the input data and other parameters based on the internal logic paths that are to be checked. The goals of design-driven testing include ascertaining correctness of:

All paths through the code. For most software products, this can be feasibly done only at the unit test level.

Bit-by-bit functioning of interfaces.

Size and timing of critical elements of code.

b. Formal Tests

Formal testing demonstrates that the software is ready for its intended use. A formal test should include an acquirer-approved test plan and procedures, quality assurance witnesses, a record of all discrepancies, and a test report. Formal testing is always requirements-driven, and its purpose is to demonstrate that the software meets its requirements.

Each software development project should have at least one formal test, the acceptance test that concludes the development activities and demonstrates that the software is ready for operations.

In addition to the final acceptance test, other formal testing may be done on a project. For example, if the software is to be developed and delivered in increments or builds, there may be incremental acceptance tests. As a practical matter, any contractually required test is usually considered a formal test; others are “informal.”

After acceptance of a software product, all changes to the product should be accepted as a result of a formal test. Post acceptance testing should include regression testing. Regression testing involves rerunning previously used acceptance tests to ensure that the change did not disturb functions that have previously been accepted.

C. Verification and Validation During the Software

Acquisition Life Cycle

The V&V Plan should cover all V&V activities to be performed during all phases of the life cycle. The V&V Plan Data Item Description (DID) may be rolled out of the Product Assurance Plan DID contained in the SMAP Management Plan Documentation Standard and DID.

1. Software Concept and Initiation Phase

The major V&V activity during this phase is to develop a concept of how the system is to be reviewed and tested. Simple projects may compress the life cycle steps; if so, the reviews may have to be compressed. Test concepts may involve simple generation of test cases by a user representative or may require the development of elaborate simulators and test data generators. Without an adequate V&V concept and plan, the cost, schedule, and complexity of the project may be poorly estimated due to the lack of adequate test capabilities and data.

2. Software Requirements Phase

V&V activities during this phase should include:

Analyzing software requirements to determine if they are consistent with, and within the scope of, system requirements.

Assuring that the requirements are testable and capable of being satisfied.

Creating a preliminary version of the Acceptance Test Plan, including a verification matrix, which relates requirements to the tests used to demonstrate that requirements are satisfied.

Beginning development, if needed, of test beds and test data generators.

The phase-ending Software Requirements Review (SRR).
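
To illustrate the verification matrix mentioned above, a minimal sketch (Python; all requirement and test identifiers are invented):

    # A tiny requirements-to-tests verification (traceability) matrix.
    verification_matrix = {
        "REQ-001 (user login)": ["AT-01", "AT-02"],
        "REQ-002 (password recovery)": ["AT-03"],
        "REQ-003 (session timeout)": [],  # not yet covered by any test
    }

    # Flag requirements that no acceptance test demonstrates.
    for requirement, tests in verification_matrix.items():
        if not tests:
            print(f"UNCOVERED: {requirement}")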

3. Software Architectural (Preliminary) Design Phase

V&V activities during this phase should include:

Updating the preliminary version of the Acceptance Test Plan and the verification matrix.

Conducting informal reviews and walkthroughs or inspections of the preliminary software and data base designs.

The phase-ending Preliminary Design Review (PDR) at which the allocation of requirements to the software architecture is reviewed and approved.

4. Software Detailed Design Phase

V&V activities during this phase should include:

Completing the Acceptance Test Plan and the verification matrix, including test specifications and unit test plans.

Conducting informal reviews and walkthroughs or inspections of the detailed software and data base designs.

The Critical Design Review (CDR) which completes the software detailed design phase.

5. Software Implementation Phase

V&V activities during this phase should include:

Code inspections and/or walkthroughs.

Unit testing software and data structures.

Locating, correcting, and retesting errors.

Development of detailed test procedures for the next two phases.

6. Software Integration and Test Phase

This phase is a major V&V effort, where the tested units from the previous phase are integrated into subsystems and then the final system. Activities during this phase should include:

Conducting tests per test procedures.

Documenting test performance, test completion, and conformance of test results versus expected results.

Providing a test report that includes a summary of nonconformances found during testing.

Locating, recording, correcting, and retesting nonconformances.

The Test Readiness Review (TRR), confirming the product’s readiness for acceptance testing.

7. Software Acceptance and Delivery Phase

V&V activities during this phase should include:

By test, analysis, and inspection, demonstrating that the developed system meets its functional, performance, and interface requirements.

Locating, correcting, and retesting nonconformances.

The phase-ending Acceptance Review (AR).

8. Software Sustaining Engineering and Operations Phase

Any V&V activities conducted during the prior seven phases are conducted during this phase as they pertain to the revision or update of the software.

D. Independent Verification and Validation

Independent Verification and Validation (IV&V) is a process whereby the products of the software development life cycle phases are independently reviewed, verified, and validated by an organization that is neither the developer nor the acquirer of the software. The IV&V agent should have no stake in the success or failure of the software. The IV&V agent’s only interest should be to make sure that the software is thoroughly tested against its complete set of requirements.

The IV&V activities duplicate the V&V activities step-by-step during the life cycle, with the exception that the IV&V agent does no informal testing. If there is an IV&V agent, the formal acceptance testing may be done only once, by the IV&V agent. In this case, the developer will do a formal demonstration that the software is ready for formal acceptance.

E. Techniques and Tools

Perhaps more tools have been developed to aid the V&V of software (especially testing) than any other software activity. The tools available include code tracers, special purpose memory dumpers and formatters, data generators, simulations, and emulations. Some tools are essential for testing any significant set of software, and, if they have to be developed, may turn out to be a significant cost and schedule driver.

An especially useful technique for finding errors is the formal inspection. Formal inspections were developed by Michael Fagan of IBM. Like walkthroughs, inspections involve the line-by-line evaluation of the product being reviewed. Inspections, however, are significantly different from walkthroughs and are significantly more effective. Inspections are done by a team, each member of which has a specific role. The team is led by a moderator, who is formally trained in the inspection process. The team includes a reader, who leads the team through the item; one or more reviewers, who look for faults in the item; a recorder, who notes the faults; and the author, who helps explain the item being inspected.

This formal, highly structured inspection process has been extremely effective in finding and eliminating errors. It can be applied to any product of the software development process, including documents, design, and code. One of its important side benefits has been the direct feedback to the developer/author, and the significant improvement in quality that results.