Thursday, June 25, 2009

Why Is Manual Software Testing Crucial?

In today’s world, and especially in the IT sector, the law of the jungle prevails: survival of the fittest. There is absolutely no room for the meek or the slow. This leads to longer hours, associates stretching beyond normal limits, and very often burning the midnight oil. However, all these practices come at a price. The human body can only bear so much, and after a certain point mistakes start creeping in. This is where manual software testing comes into play, carefully picking up the pieces lost during creation and exposing the flaws.

It has been noticed time and again that manual software testing is a crucial and indispensable part of any project. Even before the project starts, testing comes into play in the feasibility phase. With a software test manager on the team, one can get a clear estimate of the required time, materials and resources. This leads to better planning and substantial overall savings in both time and money. Early test estimates are a crucial factor in the decision to proceed with the product.

Moreover, when software testers are involved in the product planning stage, deep and insightful questions get asked. True to form, testers scrutinize the requirements with a skeptical eye and rapidly flag the ones that might not be testable. According to the Baziuk study (1995), the cost of repairing a fault in the operations stage is 470 to 880 times the cost of repairing the same fault in the requirements analysis phase of the project.

Testing also plays a crucial role in identifying areas of improvement for any software. With practices like root cause analysis, the overall efficiency of the product may increase up to ten-fold. Various testing artifacts, such as defect reports, metrics and results, help project managers gauge the progress of the project.

From the business point of view, testing is the only way of knowing whether the newly created product delivers the intended functionality. To promote repeat sales, every client needs assurance that the new product will not adversely affect the system already in place. Manual software testing is used exhaustively to provide exactly that assurance, enabling testers to verify how the new product behaves under the existing conditions.

Ignoring the testing process, however, has proved highly detrimental in many instances. One recent incident involved a sum of $172 million, borne by the Japanese electronics maker Matsushita to replace faulty Nokia-branded BL-5C batteries.

In conclusion, it can safely be inferred that testing is an irreplaceable process in the software development cycle. Apart from saving a lot of time and money, it is also the buoy that keeps software performance afloat.

Tuesday, June 23, 2009

How Much Domain Knowledge Is Important for Testers?

I would like to introduce the three-dimensional testing career described by Danny R. Faught. There are three categories of skill that need to be judged before hiring any software tester.

What are those three skill categories?


1) Testing skill
2) Domain knowledge
3) Technical expertise.

No doubt any tester should have the basic testing skills: manual testing and automation testing. A tester with common sense can find most of the obvious bugs in the software. Would you then say that this much testing is sufficient? Would you release the product on the basis of this much testing alone? Certainly not.

You will certainly have the product reviewed by a domain expert before it goes to market.

While testing any application you should think like an end user. But every human being has limitations, and no one can be an expert in all three of the dimensions mentioned above. (If you are an expert in all of these skills, please let me know ;-)) So you cannot guarantee that you will think 100% like the end user who is going to use your application. That user may have a much deeper understanding of the domain they work in. You need to balance all of these skill activities so that every aspect of the product gets addressed.

Nowadays you can see that the professionals being hired by many companies are domain experts more than technical experts. The software industry is also seeing a healthy trend of professional developers and domain experts moving into software testing.

There is one more reason why domain experts are in such demand. When you hire fresh engineers straight out of college, you cannot expect them to compete with experienced professionals. Why? Because experienced professionals have the advantage of domain and testing experience: they understand the different issues better and can deliver the application better and faster.

Here are some of the examples where you can see the distinct edge of domain knowledge:
1) Mobile application testing.
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing

How will you test such applications without knowledge of the specific domain? Are you going to test BFSI applications (Banking, Financial Services and Insurance) only for UI, functionality, security, load or stress? You should know the user requirements in banking, the working procedures, the commerce background, exposure to brokerage and so on, and test the application accordingly; only then can you say that your testing is adequate. This is where subject-matter experts come in.

Let’s take the example of my current project: I am working on a search engine application, where I need to know the basics of search engine terminology and concepts. Many times testers from other teams ask me questions such as what ‘publishers’ and ‘advertisers’ are, what the difference is and what they do. Do you think they can test the application against current online advertising and SEO practice? Certainly not, unless and until they become familiar with these terms and the underlying functionality.

When I know the functional domain better, I can write and execute more test cases and effectively simulate end-user actions, which is a distinct advantage.

Here is the big list of the required testing knowledge:

  • Testing skill
  • Bug hunting skill
  • Technical skill
  • Domain knowledge
  • Communication skill
  • Automation skill
  • Some programming skill
  • Quick grasping
  • Ability to Work under pressure …

That is a huge list. So you will certainly ask: do I need to have all of these skills? It depends on you. You can stick to one skill, or be an expert in one skill with a good understanding of the others, or take a balanced approach to all of them. This is a competitive market and you should definitely take advantage of it. Make sure you are an expert in at least one domain before making any move.

What if you don’t have enough domain knowledge?
You may be put on any project, and the company can assign any work to you. So what if you don’t have enough domain knowledge for that project? You need to grasp as many concepts as you can, as quickly as you can. Try to understand the product as if you were the customer, and think about what the customer will do with the application. Visit the customer site if possible to see how they work with the product, read online resources about the domain whose application you want to test, participate in events addressing that domain, and meet the domain experts. Alternatively, the company may provide all of this as in-house training before assigning any domain-specific task to testers.

There is no specific stage where you need this domain knowledge; you need to apply it in each and every phase of the software testing life cycle.

Objectives of Function Point Analysis

Software systems, unless they are thoroughly understood, can be like an iceberg: they are becoming more and more difficult to understand. Improvements in coding tools allow software developers to produce large amounts of software to meet an ever-expanding user need. As systems grow, a method to understand and communicate size needs to be used. Function Point Analysis is a structured technique of problem solving. It is a method to break systems into smaller components so they can be better understood and analyzed.

Function points are a unit of measure for software, much as an hour is to measuring time, miles are to measuring distance or Celsius is to measuring temperature. Function points are an ordinal measure much like other measures such as kilometers, Fahrenheit or hours.


Human beings solve problems by breaking them into smaller understandable pieces. Problems that may appear to be difficult are simple once they are broken into smaller parts -- dissected into classes. Classifying things, placing them in this or that category, is a familiar process. Everyone does it at one time or another -- shopkeepers when they take stock of what is on their shelves, librarians when they catalog books, secretaries when they file letters or documents. When objects to be classified are the contents of systems, a set of definitions and rules must be used to place these objects into the appropriate category, a scheme of classification. Function Point Analysis is a structured technique of classifying components of a system. It is a method to break systems into smaller components, so they can be better understood and analyzed. It provides a structured technique for problem solving.

In the world of Function Point Analysis, systems are divided into five large classes plus general system characteristics. The first three classes or components are External Inputs, External Outputs and External Inquiries; each of these components transacts against files, and therefore they are called transactions. The next two, Internal Logical Files and External Interface Files, are where data is stored and combined to form logical information. The general system characteristics assess the general functionality of the system.

Brief History
Function Point Analysis was first developed by Allan J. Albrecht in the mid-1970s. It was an attempt to overcome the difficulties associated with lines of code as a measure of software size, and to help develop a mechanism for predicting the effort associated with software development. The method was first published in 1979, then again in 1983. In 1984 Albrecht refined the method, and since 1986, when the International Function Point Users Group (IFPUG) was set up, several versions of the Function Point Counting Practices Manual have been published by IFPUG. The current version of the IFPUG manual is 4.1.


Objectives of Function Point Analysis
Frequently the term end user or user is used without specifying what is meant. In this case, the user is a sophisticated user: someone who understands the system from a functional perspective, more than likely someone who provides requirements or performs acceptance testing.

Since function points measure systems from a functional perspective, they are independent of technology. Regardless of language, development method or hardware platform used, the number of function points for a system will remain constant. The only variable is the amount of effort needed to deliver a given set of function points; therefore, Function Point Analysis can be used to determine whether a tool, an environment or a language is more productive compared with others, within an organization or across organizations. This is a critical point and one of the greatest values of Function Point Analysis.

Function Point Analysis can provide a mechanism to track and monitor scope creep. Function point counts at the end of requirements, analysis, design, code, testing and implementation can be compared. The function point count at the end of requirements and/or design can be compared to the function points actually delivered. If the project has grown, there has been scope creep. The amount of growth is an indication of how well requirements were gathered by, and/or communicated to, the project team. If the amount of growth of projects declines over time, it is a natural assumption that communication with the user has improved.

Characteristic of Quality Function Point Analysis
Function Point Analysis should be performed by trained and experienced personnel. If it is conducted by untrained personnel, it is reasonable to assume the analysis will be done incorrectly. The personnel counting function points should use the most current version of the Function Point Counting Practices Manual.

Current application documentation should be used to complete a function point count. For example, screen formats, report layouts, listings of interfaces with other systems and between systems, and logical and/or preliminary physical data models will all assist in Function Point Analysis.

The task of counting function points should be included as part of the overall project plan. That is, counting function points should be scheduled and planned. The first function point count should be developed to provide sizing used for estimating.


The Five Major Components
Since it is common for computer systems to interact with other computer systems, a boundary must be drawn around each system to be measured prior to classifying components. This boundary must be drawn according to the user’s point of view. In short, the boundary indicates the border between the project or application being measured and the external applications or user domain. Once the border has been established, components can be classified, ranked and tallied.

External Inputs (EI) - an elementary process in which data crosses the boundary from outside to inside. This data may come from a data input screen or from another application. The data may be used to maintain one or more internal logical files. The data can be either control information or business information; if it is control information, it does not have to update an internal logical file. A simple EI might, for example, update two ILFs (two FTRs).

External Outputs (EO) - an elementary process in which derived data passes across the boundary from inside to outside. Additionally, an EO may update an ILF. The data creates reports or output files sent to other applications. These reports and files are created from one or more internal logical files and external interface files. An EO with two FTRs might, for example, present derived information calculated from those ILFs.

External Inquiry (EQ) - an elementary process with both input and output components that results in data retrieval from one or more internal logical files and external interface files. The input process does not update any internal logical files, and the output side does not contain derived data. An EQ might, for example, read two ILFs and return the stored data without any derivation.

Internal Logical Files (ILFs) - a user-identifiable group of logically related data that resides entirely within the application’s boundary and is maintained through external inputs.

External Interface Files (EIFs) - a user-identifiable group of logically related data that is used for reference purposes only. The data resides entirely outside the application and is maintained by another application. An external interface file is an internal logical file for another application.
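
To make the arithmetic concrete, here is a minimal Python sketch of an unadjusted function point count. It uses the commonly published IFPUG complexity weights and entirely made-up component counts; it is an illustration of the idea, not part of the counting practices themselves.

    # Unadjusted function point (UFP) sketch using standard IFPUG weights.
    # The example component counts below are invented for illustration.
    WEIGHTS = {
        "EI":  {"low": 3, "average": 4, "high": 6},
        "EO":  {"low": 4, "average": 5, "high": 7},
        "EQ":  {"low": 3, "average": 4, "high": 6},
        "ILF": {"low": 7, "average": 10, "high": 15},
        "EIF": {"low": 5, "average": 7, "high": 10},
    }

    def unadjusted_fp(counts):
        """counts maps (component, complexity) -> number of components."""
        return sum(WEIGHTS[comp][cplx] * n for (comp, cplx), n in counts.items())

    # Hypothetical system: 10 average EIs, 6 low EOs, 4 average EQs,
    # 5 average ILFs and 2 low EIFs.
    example = {
        ("EI", "average"): 10,
        ("EO", "low"): 6,
        ("EQ", "average"): 4,
        ("ILF", "average"): 5,
        ("EIF", "low"): 2,
    }
    print(unadjusted_fp(example))  # 10*4 + 6*4 + 4*4 + 5*10 + 2*5 = 140

In a full IFPUG count this unadjusted total would then be adjusted using the general system characteristics mentioned above.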

Summary of benefits of Function Point Analysis

• Function Points can be used to size software applications accurately. Sizing is an important component in determining productivity (outputs/inputs).
• They can be counted by different people, at different times, to obtain the same measure within a reasonable margin of error.
• Function Points are easily understood by the non-technical user, which helps communicate sizing information to a user or customer.
• Function Points can be used to determine whether a tool, a language or an environment is more productive when compared with others.

Monday, June 8, 2009

How to write a good resume?

Tips One

1. Open your resume in Word and search (Ctrl+F) for words like "Involved", "Participated" etc. and delete the sentences containing them. The recruiter is not interested in what you were involved in or participated in; he or she would like to see what you achieved by doing it. Here is a way to rate your resume: give it one negative mark every time you encounter such a word. How much did your resume score? Now do you understand why you are not getting enough interview calls?

Tips Two

2. Get rid of words like "was responsible for" or any variant of "responsibility". What attracts a recruiter is an action word: "Achieved zero downtime for the systems I owned" versus "I was responsible for maintaining systems and ensuring that downtime was low". Notice the power of the action. You deliver the same message, but in a power-packed way that catches the eye of the people who matter in getting you a new dream job. It is very important to load your resume with these power-packed action words, lots of them, especially in the first one or two pages.

Tips Three

3. This is the most useless part of a resume, if it is present: writing paragraphs about the application you tested, with names, versions, modules and detailed functionality. It reads like a copy-paste from the functional specification or SRS (System Requirements Specification) of the software product you tested. Watch out: sometimes this can even land you in legal trouble, with your employer dragging you to court for leaking strategic product information to the public via your resume. It is also a big turn-off for the reader, especially a recruiter who processes and sees thousands of resumes in a day.

Tips Four

Don’t forget the rule of thumb: one page of resume for every two years of experience. So a person with 8-10 years of experience should not have a resume that exceeds five pages. Less and crisp is better and easier to read.

Spiral Model

History :
The spiral model was defined by Barry Boehm in his 1988 article "A Spiral Model of Software Development and Enhancement". It was not the first model to discuss iterative development, but it was the first to explain why the iteration matters. As originally envisioned, the iterations were typically six months to two years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the project's end goal.
The Spiral Model :
The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in information technology (IT). It combines features of the prototyping model and the waterfall model, and is intended for large, expensive and complicated projects.
The steps in the spiral model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
Applications :
For a typical shrink-wrap application, the spiral model might mean that you have a rough cut of the user elements (without the polished, pretty graphics) as an operable application, add features in phases and, at some point, add the final graphics. The spiral model is used most often in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military has adopted the spiral model for its Future Combat Systems program.
Advantages :
1. Estimates (i.e. budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.
2. It is more able to cope with the (nearly inevitable) changes that software development generally entails.
3. Software engineers (who can get restless with protracted design processes) can get their hands in and start working on a project earlier.
Disadvantages :
1. Highly customized, limiting re-usability
2. Applied differently for each application
3. Risk of not meeting budget or schedule

What Security Testing Has to Be Done for Web Applications?

Introduction

As more and more vital data is stored in web applications and the number of transactions on the web increases, proper security testing of web applications is becoming very important. Security testing is the process that determines that confidential data stays confidential (i.e. it is not exposed to individuals/ entities for which it is not meant) and users can perform only those tasks that they are authorized to perform (e.g. a user should not be able to deny the functionality of the web site to other users, a user should not be able to change the functionality of the web application in an unintended way etc.).

Some key terms used in security testing

Before we go further, it will be useful to be aware of a few terms that are frequently used in web application security testing:

What is “Vulnerability”?
This is a weakness in the web application. The cause of such a “weakness” can be bugs in the application, an injection (SQL/ script code) or the presence of viruses.

What is “URL manipulation”?
Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.

What is “SQL injection”?
This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.

What is “XSS (Cross Site Scripting)”?
When a user inserts HTML/ client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.

What is “Spoofing”?
The creation of hoax look-alike websites or emails is called spoofing.
Security testing approach:

In order to perform a useful security test of a web application, the security tester should have good knowledge of the HTTP protocol. It is important to understand how the client (browser) and the server communicate using HTTP. Additionally, the tester should know at least the basics of SQL injection and XSS. Hopefully, the number of security defects present in the web application will not be high; however, being able to accurately describe each security defect, with all the required details, to everyone concerned will definitely help.

1. Password cracking:

Security testing of a web application can be kicked off with "password cracking". In order to log in to the private areas of the application, one can either guess a username/password or use a password-cracker tool. Lists of common usernames and passwords are available along with open-source password crackers. If the web application does not enforce a complex password (e.g. with letters, numbers and special characters, and a minimum length), it may not take very long to crack the username and password.

If the username or password is stored in cookies without encryption, an attacker can use various methods to steal the cookies and then the information stored in them, such as the username and password.
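
As a rough illustration of the idea (to be run only against an application you are authorized to test), a dictionary check against a login form might look like the sketch below. The URL, field names and the error-message marker are hypothetical and would have to match the application under test.

    import requests

    LOGIN_URL = "https://example.com/login"   # hypothetical endpoint

    # A short list of commonly tried credentials; real word lists are much longer.
    common_credentials = [("admin", "admin"), ("admin", "password"), ("test", "test123")]

    for username, password in common_credentials:
        resp = requests.post(LOGIN_URL, data={"username": username, "password": password})
        # Assumes a failed login echoes this error text back in the page.
        if "Invalid username or password" not in resp.text:
            print("Weak credentials accepted:", username, password)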

2. URL manipulation through HTTP GET methods:

The tester should check whether the application passes important information in the query string. This happens when the application uses the HTTP GET method to pass information between the client and the server. The information is passed in query-string parameters. The tester can modify a parameter value in the query string to check whether the server accepts it.

In an HTTP GET request, user information is passed to the server for authentication or for fetching data. An attacker can manipulate every input variable passed in this GET request in order to get the required information or to corrupt the data. In such conditions, any unusual behavior by the application or web server is a doorway for the attacker to get into the application.
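
A minimal sketch of this kind of probing, assuming a hypothetical /account page that takes a user_id query-string parameter, could look like this; the tester then watches whether odd values produce another user's data, a stack trace or a server error.

    import requests

    BASE_URL = "https://example.com/account"   # hypothetical endpoint

    # The legitimate request might be /account?user_id=1001; try variations.
    for probe in ["1001", "1002", "0", "-1", "abc", "1001'"]:
        resp = requests.get(BASE_URL, params={"user_id": probe})
        print(probe, resp.status_code, len(resp.content))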

3. SQL Injection:

The next thing to check is SQL injection. Entering a single quote (‘) in any textbox should be rejected by the application. If, instead, the tester encounters a database error, it means that the user input has been inserted into some query which is then executed by the application. In such a case, the application is vulnerable to SQL injection.

SQL injection attacks are very critical because an attacker can obtain vital information from the server database. To check the SQL injection entry points into your web application, find the places in your code base where direct MySQL queries are executed against the database using user input.

If user input data is embedded in the SQL queries used to query the database, an attacker can inject SQL statements, or parts of SQL statements, as user input to extract vital information from the database. Even if the attacker only succeeds in crashing the application, the SQL error shown in the browser may give the attacker the information they are looking for. Special characters in user input should be handled or escaped properly in such cases.
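
The difference between a query built by string concatenation and a properly parameterized one can be shown with a small, self-contained Python/sqlite3 sketch; the table and data here are invented purely for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    user_input = "' OR '1'='1"   # a classic injection payload

    # Vulnerable: the payload becomes part of the SQL text, the WHERE clause
    # is always true, and every row is returned.
    vulnerable = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print("concatenated query returned:", vulnerable)

    # Safe: the driver passes the value as data, not as SQL.
    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized query returned:", safe)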

4. Cross Site Scripting (XSS):

The tester should additionally check the web application for XSS (cross-site scripting). Any HTML (for example, a simple bold tag) or any script (for example, a script block that pops up an alert) entered into an input field should be rejected or escaped by the application; otherwise, the application is vulnerable to an XSS attack.
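
A very small reflected-XSS probe, assuming a hypothetical search page with a q parameter, simply checks whether the submitted marker comes back unescaped in the response.

    import requests

    payload = "<script>alert('xss-probe')</script>"
    resp = requests.get("https://example.com/search",      # hypothetical page
                        params={"q": payload})

    if payload in resp.text:
        print("Payload reflected unescaped - page is likely vulnerable to XSS")
    else:
        print("Payload not reflected verbatim (escaped, stripped or rejected)")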

Seven Basic Tips for Testing Multi-lingual Web Sites

These days a number of web sites are deployed in multiple languages. As companies perform more and more business in other countries, the number of such global multi-lingual web applications will continue to increase.

Tip # 1 – Prepare and use the required test environment
If a web site is hosted in English and Japanese languages, it is not enough to simply change the default browser language and perform identical tests in both the languages. Depending on its implementation, a web site may figure out the correct language for its interface from the browser language setting, the regional and language settings of the machine, a configuration in the web application or other factors. Therefore, in order to perform a realistic test, it is imperative that the web site be tested from two machines – one with the English operating system and one with the Japanese operating system. You might want to keep the default settings on each machine since many users do not change the default settings on their machines.


Tip # 2 – Acquire correct translations
A native speaker of the language, belonging to the same region as the users, is usually the best resource to provide translations that are accurate in both meaning as well as context. If such a person is not available to provide you the translations of the text, you might have to depend on automated web translations available on web sites like wordreference.com and dictionary.com. It is a good idea to compare automated translations from multiple sources before using them in the test.

Tip # 3 – Get really comfortable with the application
Since you might not know the languages supported by the web site, it is always a good idea for you to be very conversant with the functionality of the web site. Execute the test cases in the English version of the site a number of times. This will help you find your way easily within the other language version. Otherwise, you might have to keep the English version of the site open in another browser in order to figure out how to proceed in the other language version (and this could slow you down).

Tip # 4 – Start with testing the labels
You could start testing the other language version of the web site by first looking at all the labels. Labels are the more static items in the web site. English labels are usually short and translated labels tend to expand. It is important to spot any issues related to label truncation, overlay on/ under other controls, incorrect word wrapping etc. It is even more important to compare the labels with their translations in the other language.

Tip # 5 – Move on to the other controls
Next, you could move on to checking the other controls for correct translations and any user interface issues. It is important that the web site provides correct error messages in the other language, so the test should include generating all the error messages. For any text that is not translated, three possibilities usually exist: the text will be missing, its English equivalent will be present, or you will see junk characters in its place.

Tip # 6 – Do test the data
Usually, multi-lingual web sites store data in the UTF-8 Unicode encoding. To check the character encoding for your website in Mozilla Firefox, go to View -> Character Encoding; in IE, go to View -> Encoding. Data in different languages can be easily represented in this format. Make sure to check the input data: it should be possible to enter data in the other language into the web site, and the data displayed by the web site should be correct. The output data should be compared with its translation.
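
As a sketch of how such data checks could be automated, assuming a hypothetical profile page and form field, one might verify the declared charset and a round trip of non-ASCII input.

    import requests

    resp = requests.get("https://example.com/ja/profile")   # hypothetical page
    print("Declared charset:", resp.encoding)               # expect 'utf-8'

    japanese_name = "山田太郎"
    resp = requests.post("https://example.com/ja/profile",
                         data={"display_name": japanese_name})
    # If the stored value is echoed back on the page, it should match exactly.
    print("Round-trip OK:", japanese_name in resp.text)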

Tip # 7 – Be aware of cultural issues
A challenge in testing multi-lingual web sites is that each language might be meant for users from a particular culture. Many things such as preferred (and not preferred) colors, text direction (this can be left to right, right to left or top to bottom), format of salutations and addresses, measures, currency etc. are different in different cultures. Not only should the other language version of the web site provide correct translations, other elements of the user interface e.g. text direction, currency symbol, date format etc. should also be correct.
As you might have gathered from the tips above, using the correct test environment and acquiring correct translations are critical to a successful test of the other language versions of a web site.

Friday, June 5, 2009

How to add objects to the object repository (QTP 9.0)?

Adding objects to a local object repository:

An object can be added to the local object repository in one of the following ways:

1. Record some actions on the object; this will automatically add this object to the object repository. If you do not need the recorded statements in your script, you can delete them and it will not remove the added object from the object repository.

2. Add objects manually.

1. Go to Resources -> Object Repository.

2. In the Filter combobox, select “Local Objects.”

3. Go to Object -> Add Objects to Local.

4. Click on the object to be added to the repository.

5. If the Object Selection window appears, select the desired object, and click OK.

6. If the Add Object to Object Repository window appears, select the appropriate option:

To add only the selected object, select the “Only the selected object” radio button. To add the selected object and its child objects of a specified type, select the “Selected object and its descendants of type” radio button. Then, select the checkbox next to the child object types that should be added.

7. Click OK.

3. Manually define a new test object.

1. Go to Resources -> Object Repository.

2. In the Filter combobox, select “Local Objects.”

3. Go to Object -> Define New Test Object.

4. Select the appropriate Environment for your test object. For example, to add a Link object, select “Web.”

5. Select the desired test object class for the new object.

6. Enter a name for your test object.

7. Fill in the Test object details as needed. The Test object details area automatically contains the mandatory properties defined for the object class in the Object Identification dialog box. You can add or remove properties as required, and define values for the properties.

To add properties, click the “Add description properties” button (with the + icon), select the properties to be added and click OK.
To remove properties, select the properties to be removed and click the “Remove selected description properties” button (with the X icon).

8. Click Add to add the new object.

9. Repeat steps d through h as needed.

10. When done, click Close.

Note:
When you manually define an object, QuickTest Professional will not automatically add the object’s parent. If the parent objects are not present, you will need to define them as well. Once the objects are added to the repository, you can drag-and-drop them to their appropriate positions.

4. Add the object using the Active Screen.

1. In the Active Screen, right-click on the object to be added.

2. Select “View/Add Object.”

3. If the Object Selection window appears, verify the desired object is selected, and click OK.

4. In the Object Properties window, click OK.

Adding objects to a shared object repository:
An object can be added to a shared object repository in one of the following ways:

1. From a local repository.
2. Add objects manually.

1. Go to Resources -> Object Repository Manager.

2. Go to File -> Open and select the shared object repository file.

3. By default the repository will open in read-only mode. Go to File -> Enable Editing.

4. Go to Object -> Add Objects.

5. Click on the object to be added to the repository.

6. If the Object Selection window appears, select the desired object, and click OK.

7. If the Add Object to Object Repository window appears, select the appropriate option:

To add only the selected object, select the “Only the selected object” radio button. To add the selected object and its child objects of a specified type, select the “Selected object and its descendants of type” radio button. Then, select the checkbox next to the child object types that should be added.

8. Click OK.

3. Manually define a new test object.

1. Go to Resources -> Object Repository Manager.

2. Go to File -> Open and select the shared object repository file.

3. By default the repository will open in read-only mode. Go to File -> Enable Editing.

4. Go to Object -> Define New Test Object.

5. Select the appropriate Environment for your test object. For example, to add a Link object, select “Web.”

6. Select the desired test object class for the new object.

7. Enter a name for your test object.

8. Fill in the Test object details as needed. The Test object details area automatically contains the mandatory properties defined for the object class in the Object Identification dialog box. You can add or remove properties as required, and define values for the properties.

To add properties, click the “Add description properties” button (with the + icon), select the properties to be added and click OK.
To remove properties, select the properties to be removed and click the “Remove selected description properties” button (with the X icon).

9. Click Add to add the new object.

10. Repeat steps d through h as needed.

11. When done, click Close.

Note:
When you manually define an object, QuickTest Professional will not automatically add the object’s parent. If the parent objects are not present, you will need to define them as well. Once the objects are added to the repository, you can drag and drop them to their appropriate positions.

4. Merge with another shared object repository.

Parameterization In QTP

Parameterization is the process of substituting values for dynamic parameters from a CSV (comma-separated values) file or from a database.

For example, when testing a web application that contains a login page, parameterization lets you use a different login name and password for each virtual user (dynamic substitution of values). There is no need for each user scenario to contain a separate script that performs the login. Similarly, you can parameterize cookies and other headers passed in the request, such as Scheme, Proxy-Connection, etc. You can also parameterize the parameters passed in the URL string.

In this example suite, the username and password are dynamically substituted from a CSV file. For this purpose, a data.csv file is created and placed under the /projects/WebPerformanceDemo/usersrc folder. The CSV file contains 100 usernames and passwords, from emp1 to emp100. A hundred corresponding logins should be populated in the application's database. This is done using the following steps:

Click the 'Edit' link under HTTP Parameters adjacent to a recorded transaction in the load test screen. This will invoke the Parameterization screen. The recorded URLs will be loaded in the left-side panel.

Select the URL used for logging into the application.

Choose the Parameters tab. The parameters for the URL will be shown in a table.

In the table, look for the parameter named 'userpass'. In the Fetch Data From column for that row, click the arrow button and select 'Dataset' as the parameterization type. Configure the dataset to fetch its values from the CSV file and click the Apply button.

The Value column will display the configured dataset value.

Similarly, select the parameter 'username' and configure the data source for the same.

The above steps reuse the same script with 100 different usernames and passwords to log in, simulating the load of 100 virtual users.
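
The same data-driven idea can be sketched outside any particular tool. Assuming a data.csv with username and userpass columns as in the example above, and a hypothetical login URL and field names, a plain Python version would look like this.

    import csv
    import requests

    LOGIN_URL = "https://example.com/login"        # hypothetical endpoint

    with open("data.csv", newline="") as f:
        for row in csv.DictReader(f):              # columns: username, userpass
            resp = requests.post(LOGIN_URL,
                                 data={"username": row["username"],
                                       "userpass": row["userpass"]})
            print(row["username"], resp.status_code)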

In how many ways can we parameterize data in QTP?

There are four types of parameters:
Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test.

Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table.

Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, or values that QuickTest generates for you based on conditions and options you choose.

There are three types of environment variables:
• User-defined internal. Variables that you define within the test. They are saved with the test and accessible only within the test in which they were defined.
• User-defined external. Variables that you predefine in the active external environment variables file. These variables are read-only in this context.
• Built-in. Variables that represent information about the test and the computer on which the test is run, such as Test path and Operating system. These variables are accessible from all tests and are designated as read-only.

Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have QuickTest generate a random number and insert it in a "number of tickets" edit field.

Developing a Test Strategy


Overview

Testing Steps
Looking at UAT from a high level, there are a few basic steps that need to be undertaken:

Test Strategy - Decide how we are going to approach the testing in terms of people, tools, procedures and support.

Test Scenarios - What are the situations we want to test?

Test Scripts - What are the actual inputs we will use? What are the expected results?

Test Strategy
Why do a Test Strategy? The Test Strategy is the plan for how you are going to approach testing. It is like a project charter that tells the world how you are going to approach the project. You may have it all in your head, and if you are the only person doing the work it might be OK. If however you do not have it all in your head, or if others will be involved, you need to map out the ground rules. Here are some of the things that need to be covered in a Test Strategy. You could use this as a template for your own strategy.

Project Name

Overview

Testing stage
Instructions: Identify the type of testing to be undertaken.
Example: User Acceptance Testing

Scheduled for
Example: 01.04.06 to 15.04.06

Location
Example: Testing will be carried out in the Test Center on Level X

Participants
Instructions: Identify who will be involved in the testing. If resources have not been nominated, outline the skills required.
Example:
Testing Manager - J. Smith
2 Testers - To be nominated. The skills required are:
• Broad understanding of all the processes carried out by the accounts receivable area.
• Should be familiar with the manual processes currently undertaken for reversing payments.
• Preferably has spent time dealing with inquiries from customers over the phone.
• Etc.

A test strategy describes how we plan to cover the product so as to develop an adequate assessment of quality.

A good test strategy is:

Specific
Practical
Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

Test Approach and Test Architecture are other terms commonly used to describe what I’m calling test strategy.

Example of a poorly stated (and probably poorly conceived) test strategy:

“We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification.”

A test strategy covers: type of project, type of software, when testing will occur, critical success factors, and trade-offs.

Test Plan - Why

· Identify Risks and Assumptions up front to reduce surprises later.

· Communicate objectives to all team members.

· Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.

Failing to plan = planning to fail.

Test Plan - What

· Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.

· Details the project-specific Test Approach.

· Lists general (high-level) Test Case areas.

· Includes a testing Risk Assessment.

· Includes a preliminary Test Schedule.

· Lists Resource requirements.