Thursday, July 16, 2009

LoadRunner 9.5 Features

It’s time again for another version update of your favorite load testing product, LoadRunner. The latest release as of this writing is 9.5. Now that .NET 3.5 SP1 is out, I am hoping it will be supported in the next service pack for LoadRunner. In light of all this progress, you should also note that, as of this version, LoadRunner no longer supports Windows 2000.

VUGEN

One notable change with this version is the addition of an Agent for the RDP (Remote Desktop) protocol. The RDP protocol is beginning to evolve much like the Citrix protocol did in the early days of its use. The RDP Agent should allow recognition of objects instead of just x/y coordinates and offer better synchronization. This is important, because we’re already hearing of people using RDP instead of other protocols like RMI, DCOM, and Winsock Vusers, simply because it’s easier to script against these types of applications and finish testing projects in a shorter amount of time. Terminal Server licensing is a small price to pay when it could take three months to develop a decent RMI script. You can expect a separate article/review of this feature, because I think this is going to be the new “catch-all” protocol when nothing else will work, should this agent provide the capabilities it promises. Other new areas of interest include support for Citrix Presentation Server 4.5, Oracle E-Business Suite R12, and RTMP (Real Time Messaging Protocol) support for the Flex protocol.

The new Protocol Advisor should be helpful for those who do not know how to use a network sniffer to figure out the transport protocol used by an application. You record your application, and it suggests protocols based on the information it gathered. It may be a good starting point, but any engineer using LoadRunner should have a good understanding of the underlying protocols an application uses, because there will be times when you need to dig deeper to find the right one, or the right combination. You can now export the test results from a script run in VuGen to HTML, which allows you to use the report in Quality Center to open defects. It appears that HP has begun to integrate the Service Test product directly into LoadRunner. By adding the right license you can get to all the Service Test functionality, which allows you to do verification testing on headless (GUI-less) web services. For those of you wondering, Service Test fills the blind spot where QuickTest Pro leaves off in testing web services, especially when there is no GUI. As an added benefit of using VuGen as the interface to test headless web services, you can easily run load tests against them as well. (Several versions ago, you could not even have LoadRunner and Service Test on the same machine.)
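To make the headless idea concrete, here is a minimal sketch of what such a verification test boils down to. This is plain Python rather than VuGen/Service Test syntax, purely to illustrate the concept; the endpoint URL, SOAPAction, and payload are all made up:

# Minimal illustration of a "headless" web service check: no GUI involved,
# just a request to the service endpoint and assertions on the response.
# The endpoint, SOAPAction, and body below are hypothetical.
import urllib.request

ENDPOINT = "http://example.com/services/OrderService"
SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetOrderStatus><OrderId>1001</OrderId></GetOrderStatus></soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "GetOrderStatus"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    body = resp.read().decode("utf-8")

# The same kind of checks a VuGen script would register with web_reg_find
# before you scale the script up into a load test.
assert resp.status == 200, "unexpected HTTP status"
assert "<OrderStatus>" in body, "expected element missing from response"
print("headless web service check passed")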

CONTROLLER

Historically, a limited version of Shunra’s WAN emulation software could be used on the Generators. It required an additional license purchase, but the cost was minimal compared to the overall price of LoadRunner. When the license model for a Controller changed in version 8.1, WAN emulation was thrown in as part of the entire Controller package. Unfortunately, after HP acquired Mercury, the Shunra software got lost in the shuffle somewhere and this functionality disappeared. Because Shunra is a third-party software company in their own right, they have continued to sell their VE Desktop software and their VE appliance (hardware) as stand-alone solutions or as an add-on to LoadRunner. However, the integration was limited.

With LoadRunner 9.5, WAN emulation options with even more control are available in LoadRunner, but you will need to contact Shunra to get a license from them to use it. This also relieves the problem of having to remotely install load generators on your production network. You install the VE Desktop for HP Software (for LoadRunner or Performance Center, if that is how you roll) on the Controller. You then install the VE Desktop client on the Generators. With Performance Center, or optionally with LoadRunner, you’ll install a VE Desktop server to store advanced network configurations in. This is very cool because it means users with other Shunra products (say, your developers using VE Desktop Professional, hint, hint) could share the same network settings as you. Once you have VE Desktop for HP Software on the Controller, when you need to turn WAN emulation on, you do it through the options within LoadRunner, and it is seamlessly integrated. If you set up different WAN emulation “profiles” on different Generators, you will be able to filter information in the Analysis module to show the impact each one had, meaning you can filter by WAN Emulation (Emulated Location) profile. This is pretty cool if you think about it: it will tell you immediately what role your network plays in application performance. A couple of things to note: first, this is a whole new WAN emulation, so don’t assume it has anything to do with the WAN emulation in those older versions. Forget it and move on. Secondly, you cannot set up WAN emulation on the Controller (as in, if you install a Generator on a Controller). But if you are trying to do that anyway, we would all make fun of you, as everyone except complete noobs knows you NEVER PUT A GENERATOR AND CONTROLLER ON THE SAME MACHINE. Sheesh… :)

Some people have been asking for a more secure way for the Controller and Generators to communicate with each other. Nothing like passing a whole lot of user names and passwords to Generators in text files (your parameter files), right? There are a couple of new items in the list of tools called “Host Security Setup” and “Host Security Manager”. It’s fairly simple: you create a security key and make sure all the Generators are synced up with it. You have to turn this feature on and choose to enforce channel communications; it is off by default. This creates a secure channel between your Controller and Generators. It should help ease the minds of those with Generators sitting outside their firewalls, or in other highly secure environments. I am curious to see, long term, how much this level of security affects the performance of the LoadRunner components themselves and any test results, if at all.

There is a new option in the Controller’s general options (under the Execution tab) called Post Collate Command, which allows you to run an executable command or batch file after results are collated.
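As a trivial example of what you might hang off that hook, here is a sketch of a script you could point the Post Collate Command at (e.g. "python C:\scripts\post_collate.py"). The paths are hypothetical; the idea is just to timestamp and archive the freshly collated results:

# post_collate.py -- sketch for the Post Collate Command hook.
# Paths below are hypothetical; adjust to your results layout.
import shutil
import time
from pathlib import Path

RESULTS_DIR = Path(r"C:\LoadRunner\Results\LastRun")  # hypothetical
ARCHIVE_DIR = Path(r"\\fileserver\perf\archive")      # hypothetical

stamp = time.strftime("%Y%m%d_%H%M%S")
# Zip the collated results and note what was done.
archive = shutil.make_archive(str(ARCHIVE_DIR / f"results_{stamp}"), "zip", RESULTS_DIR)
with open(ARCHIVE_DIR / "postcollate.log", "a") as log:
    log.write(f"{stamp}: archived {RESULTS_DIR} -> {archive}\n")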

ANALYSIS

HP continues to open up the components to an API so that there can be more programmatic control of the LoadRunner components, for those who need it. The new Analysis API will let you launch and process an Analysis session, and even more importantly, extract that information into a third-party tool to report test results any way you desire. I have always felt the Analysis engine was a powerful component of LoadRunner that helped prove out its value, and this extends it even more if you are willing to put in the time to code some stuff up to take advantage of it. There are some additional reports and exporting features in this version. Another enhancement is support for SQL Server 2005. What year is it anyway? :) Hopefully SQL Server 2008 won’t be far behind (perhaps another thing to put into SP1 for 9.5). More work has been done to improve the processing time of test results and importing from external sources. I have not tested that out yet, but it is one of the first things on my list to do.
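I won’t try to reproduce the API calls here, but to give a flavor of the “report it any way you desire” part, here is a sketch of the kind of post-processing you can do once the data is out of Analysis. It assumes you have exported transaction data to a CSV file; the file and column names are invented:

# Sketch: post-process exported transaction data outside of Analysis.
# Assumes a CSV with hypothetical columns "Transaction" and "ResponseTime".
import csv
from collections import defaultdict

times = defaultdict(list)
with open("transactions.csv", newline="") as f:
    for row in csv.DictReader(f):
        times[row["Transaction"]].append(float(row["ResponseTime"]))

for name, values in sorted(times.items()):
    values.sort()
    p90 = values[int(0.9 * (len(values) - 1))]  # simple 90th percentile
    avg = sum(values) / len(values)
    print(f"{name}: count={len(values)} avg={avg:.2f}s p90={p90:.2f}s")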

IN SUMMARY

LoadRunner 9.5 represents a major update to the application and moves it closer to where we need it to be today. However, it still lacks features we would like to see, such as better hooking for the .NET record/replay protocol and better support for Microsoft WPF and WCF applications in general. The "click-n-script" concept has good intentions, but still needs to mature. We find we still have to compensate for the hooking engine not always capturing what it should. Specifically, AJAX C&S has issues with redirects and still requires some manual function creation to force the execution of some JavaScript. This makes the use of C&S rather pointless. However, it is nice to see progress being made with the RDP protocol, and Vista support for those who have been forced to migrate to it within their companies. This version appears to load a little faster, and seems a bit more stable.

Monday, July 13, 2009

How to Implement a QA Process?

How do you establish a QA process in an organization?

1. CURRENT SITUATION

The first thing you should do is put what you currently do on paper in some sort of flowchart diagram. This will allow you to analyze what is currently being done.


2. DEVELOPMENT PROCESS STAGE
Once you have the "big picture", you have to be aware of the current status of your development project or projects. The processes you select will vary depending on whether you are in the early stages of developing a new application (e.g., developing version 1.0) or maintaining an existing application (e.g., working on release 6.7.1).


3. PRIORITIES
The next thing you need to do is identify the priorities of your project, for example:
- Compliance with industry standards
- Validation of new functionality (new GUIs, etc.)
- Security
- Capacity planning (see "Effective Methods for Software Testing" for more info)
Make a list of the priorities, and then assign each a value of (H)igh, (M)edium, or (L)ow.


4. TESTING TYPES
Once you are aware of the priorities, focus on the High ones first, then the Medium, and finally evaluate whether the Low ones need immediate attention.
Based on this, select the testing types that will provide coverage for your priorities. Examples of testing types:
- Functional Testing
- Integration Testing
- System Testing
- System-to-System Testing (for testing interfaces)
- Regression Testing
- Load Testing
- Performance Testing
- Stress Testing
Etc.


5. WRITE A TEST PLAN
Once you have determined your needs, the simplest way to document and implement your process is to write a "Test Plan" for every effort you are engaged in (i.e., for every release).
For this you can use generic Test Plan templates available on the web that will help you brainstorm and define the scope of your testing:
- Scope of Testing (defects, functionality, and what will be and will not be tested).
- Testing Types (Functional, Regression, etc).
- Responsible people
- Requirements traceability matrix (match test cases with requirements to ensure coverage; see the sketch after this list)
- Defect tracking
- Test Cases
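On the traceability matrix item above: at its simplest, it is just a mapping you can check mechanically. A toy sketch, with invented requirement and test case IDs, of flagging uncovered requirements:

# Toy requirements traceability check; all IDs are invented.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
test_cases = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-3"},
    "TC-03": {"REQ-2"},
}

# Any requirement not referenced by a test case is a coverage gap.
covered = set().union(*test_cases.values())
for req in sorted(requirements - covered):
    print(f"{req} has no test case -- coverage gap")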


6. DURING AND POST-TESTING ACTIVITIES
Make sure you keep track of the completion of your testing activities and the defects found, and that you meet your exit criteria prior to moving to the next stage in testing (i.e., User Acceptance Testing, then Production Release).
Make sure you have a mechanism for:
- Reporting
- Test tracking


What is software testing?

1) Software testing is a process that identifies the correctness, completeness, and quality of software. Actually, testing cannot establish the correctness of software. It can find defects, but cannot prove there are no defects.
2) It is a systematic analysis of the software to see whether it has performed to specified requirements. What software testing does is uncover errors; however, it cannot tell us that no errors remain.

Any recommendation for estimating how many bugs the customer will find before gold release?

Answer1:
If you take the total number of bugs in the application and subtract the number of bugs you found, the difference will be the maximum number of bugs the customer can find.
Seriously, I doubt you will find any sort of calculation or formula that can answer your question with much accuracy. If you can reference a previous application release, it might give you a rough idea. The best thing to do is ensure your test coverage is as good as you can make it, then hope you've found the ones the customer might find.
Remember: software testing is risk management!

Answer2:
To do the estimation:
1.) Find out your test coverage during testing of your software, and then estimate keeping the 80-20 principle in mind.
2.) You can also look at the depth of your test cases, e.g., how much unit-level testing and how much lifecycle testing you have performed (most of the bugs customers find come from real lifecycle use of the software).
3.) You can also refer to the defect density from earlier releases of the same product line (see the sketch below).
By doing these evaluations you can arrive at a reasonably good estimate of the bugs remaining.
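To make point 3 concrete, here is a back-of-the-envelope sketch of projecting customer-found bugs from a previous release's escape rate. All the numbers are invented:

# Projection from a previous release's defect data; numbers invented.
prev_internal = 300  # bugs QA found before the previous gold release
prev_customer = 60   # bugs the customer found after that release
escape_rate = prev_customer / (prev_internal + prev_customer)  # ~16.7%

curr_internal = 250  # bugs QA has found in the current release
# If coverage and depth are comparable, project customer-found bugs:
projected = escape_rate * curr_internal / (1 - escape_rate)
print(f"escape rate ~{escape_rate:.1%}, projected customer bugs ~{projected:.0f}")  # ~50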

Answer3:
You can map customer issues from the previous release (if you have the same product line) to the current release. This is the best way to estimate for the gold release of a migration of any product. Secondly, up until gold release most issues come from various combinations of installation testing: cross-platform, i18n issues, customization, upgrades, and migration.
So these can be taken as parameters, and the estimation can then be completed.

When the build comes to the QA team, what are the parameters to consider for rejecting the build upfront, without committing to testing?

Answer1:
Agree with R&D on a set of tests such that if one fails, you can reject the build. I usually have some build verification tests that just make sure the build is stable and the major functionality is working.
Then if one test fails, you can reject the build.
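For what it's worth, a build verification suite can be as small as a handful of checks like the following sketch. The URLs and checks are hypothetical stand-ins for "is the build alive at all"; a nonzero exit code signals that the build should be rejected:

# Sketch of a tiny build verification test (BVT); URLs are hypothetical.
import sys
import urllib.request

CHECKS = [
    ("login page loads", "http://qa-server/app/login"),
    ("search page loads", "http://qa-server/app/search"),
]

failed = False
for name, url in CHECKS:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
    failed = failed or not ok

sys.exit(1 if failed else 0)  # nonzero exit -> reject the build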

Answer2:
The only way to legitimately reject a build is if the entrance criteria have not been met. That means that the entrance criteria to the test phase have been defined and agreed upon up front. This should be standard for all builds for all products. Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in turn-over
The only way we could really reject a build without any testing would be a failure of the turn-over procedure. There may be politics involved, but there shouldn't be. The only way the test phase can proceed is for the test team to have all components required to perform successful testing. You will have to define entrance (and exit) criteria for each phase of the SDLC. This is an effort to be undertaken together by the whole development team. Development's entrance criteria would include signed requirements, the HLD doc, etc. Having these criteria pre-established sets everyone up for success.

Answer3:
The primary reason to reject a build is that it is untestable, or if the testing would be considered invalid.
For example, suppose someone gave you a "bad build" in which several of the wrong files had been loaded. Once you know it contains the wrong versions, most groups think there is no point continuing testing of that build.
Every reason for rejecting a build beyond this is reached by agreement. For example, if you set a build verification test and the program fails it, the agreement in your company might be to reject the program from testing. Some BVTs are designed to include relatively few tests, covering only core functionality. Failure of any of these tests might reflect fundamental instability. However, several test groups include a lot of additional tests, and failure of those might not be grounds for rejecting a build.
In some companies, there are firm entry criteria to testing. Many companies pay lip service to entry criteria but start testing the code whether the entry criteria are met or not. Neither of these is right or wrong--it's the culture of the company. Be sure of your corporate culture before rejecting a build.

Answer4:
Generally a company will have set some sort of minimum goals/criteria that a build needs to satisfy; if it satisfies them, it can be accepted, otherwise it has to be rejected.
For example:
- Zero high-priority bugs
- No more than 2 medium-priority bugs
- Sanity testing (minimum acceptance and basic acceptance) should pass
- The reason for the new build (say, a change for a specific case) should verify as fixed
- No blockers that make the build untestable, or anything similar relating to the new build or the product
If the above criteria aren't met, the build can be rejected.

What is software testing?

Software testing is more than just error detection;
Testing software is operating the software under controlled conditions to (1) verify that it behaves “as specified”, (2) detect errors, and (3) validate that what has been specified is what the user actually wanted.
Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [Verification: Are we building the system right?]
Error Detection: Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn’t or things don’t happen when they should.
Validation looks at the system correctness – i.e. is the process of checking that what has been specified is what the user actually wanted. [Validation: Are we building the right system?]
In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly. Both verification and validation are necessary, but different components of any testing activity.

According to the ANSI/IEEE 1059 standard, testing is the process of analysing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.

Remember: The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed.

What is the testing lifecycle?

There is no standard, but it consists of:
Test Planning (Test Strategy, Test Plan(s), Test Bed Creation)
Test Development (Test Procedures, Test Scenarios, Test Cases)
Test Execution
Result Analysis (compare Expected to Actual results)
Defect Tracking
Reporting

How to validate data?

I assume that you are doing ETL (extract, transform, load) and cleansing. If my assumption is correct, then:
1. You are building a data warehouse / doing data mining.
2. You are asking the right question in the wrong place.
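That said, if the question really is about validating ETL'd data, the usual starting point is reconciling the target against the source. A minimal sketch of one common check, with invented table and column names and sqlite3 standing in for whatever databases you actually use:

# Minimal ETL reconciliation sketch: compare row counts and an aggregate
# between source and target. Table/column names are invented; sqlite3
# stands in for your actual source and warehouse databases.
import sqlite3

src = sqlite3.connect("source.db")
tgt = sqlite3.connect("warehouse.db")

for label, conn in (("source", src), ("target", tgt)):
    count, total = conn.execute(
        "SELECT COUNT(*), SUM(amount) FROM orders"
    ).fetchone()
    print(f"{label}: rows={count} sum(amount)={total}")
# If the counts or sums differ, the load dropped or mangled rows.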


What is quality?

Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization's management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.

Difference between the V-model and waterfall model, ISO vs. CMM? What's the role of CMM Level in Testing?

What are the responsibilities of a QA engineer?

Let's say an engineer is hired for a small software company's QA role, and there is no QA team. Should he take responsibility for setting up a QA infrastructure/process and for the testing and quality of the entire product? No, because taking this responsibility is a classic trap that QA people get caught in. Why? Because we QA engineers cannot assure quality, and because QA departments cannot create quality.
What we CAN do is detect lack of quality and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label and tell the developers that they are responsible for the quality of their own work. The problem is that sometimes, as soon as developers learn there is a test department, they slack off on their testing. We need to offer to help with quality assessment only.

The system runs in an intranet environment and it has a security system. The security system's architecture is designed around users and roles. There is only one system role, the System Admin role, and a user can also create as many roles as he needs. A role is attached to a user, and a user can log in to the system only if he has a role. A role mainly instantiates the permissions on resources for that role. The system has about 100 system-defined resources, and there may be some user-defined resources as well.
So, in this environment, how do you develop a test plan and test script for testing the security system?

Assume that roles are generated by combining logical options (can edit this section, can only generate reports here, cannot access this).
Start by writing down the different activities that each role can access. Then write down the different levels for each activity.
Now create a pair-wise combination of them. I won't explain pair-wise testing, as you can Google it and get better answers there.
Use pair-wise testing to create special roles that are used in testing. If you know that there are certain default roles, make sure to use them.
Then generate a list of tasks that can be performed on the system (don't concern yourself with roles at this point).
Write each of these tasks down and put them into a database (if you have no other option, use MySQL and OpenOffice to create your shared database).
Then create another table that contains your roles. Create a third table that takes the index values of the first table and the index values of the second table (the intersections); there you can determine whether the scenario can be tested with that role. (This can also be done in a spreadsheet with scenarios down the left side and roles across the top.)
Then run the tests that can be run.
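If you want a feel for the mechanics anyway, here is a rough greedy sketch of pair-wise selection, with invented role options; real pair-wise tools do this better:

# Rough greedy pair-wise sketch; the parameters and values are invented.
from itertools import combinations, product

params = {
    "edit_section": ["yes", "no"],
    "generate_reports": ["yes", "no"],
    "admin_access": ["yes", "no", "read-only"],
}
names = list(params)

def pairs_of(combo):
    # All (parameter, value) pairs this combination covers.
    return {frozenset([(a, combo[a]), (b, combo[b])])
            for a, b in combinations(names, 2)}

# Collect every value pair that must appear in at least one test role.
uncovered = set()
for values in product(*params.values()):
    uncovered |= pairs_of(dict(zip(names, values)))

tests = []
while uncovered:
    # Greedily pick the combination covering the most uncovered pairs.
    best = max((dict(zip(names, v)) for v in product(*params.values())),
               key=lambda combo: len(pairs_of(combo) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

for i, role in enumerate(tests, 1):
    print(f"test role {i}: {role}")

For the three parameters above, this cuts the 12 exhaustive combinations down to roughly half while still exercising every pair of option values.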

What is the ratio of developers and testers?

The ratio of developers and testers is not a fixed one, but depends on what phase of the software development life cycle the project is in. When a product is first conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the product is near the end of the software development life cycle, just before alpha testing begins, this ratio tends to be 1:1, or even 1:2, in favor of testers.

What is the difference between V-model and water fall model?

The V-model is used for project-based work; the spec is not frozen, and the development and QA processes run in parallel.
The waterfall model is used for product-based projects; the spec is defined and frozen at the start. Once development completes the coding, testers start testing the product.

Which of these roles are the best and most popular?

In testing, Tester roles tend to be the most popular. The less popular roles include the roles of System Administrator, Test/QA Team Lead, and Test/QA Managers.

For Reliability, Usability and Testability: explain why you would test for these factors.

Reliability:
- Extent to which a program can be expected to perform its intended function with required precision.
- This testing would be performed if the application has a characteristic that affects human lives or if it is a Real time application.
Usability:
- Effort required in learning, operating, preparing input & interpreting output of a program.
- This testing would be performed if the application has a characteristic that involves a lot of human interaction with the application.
Testability:
- Effort required in testing a program to ensure it performs its intended function.
- This testing would be performed if the application has a characteristic that affects human lives.

What other roles are in testing?

Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test Configuration Managers.
Depending on the project, one person can, and often does, wear more than one hat. For instance, we Test Engineers often wear the hats of Technical Analyst, Test Build Manager, and Test Configuration Manager as well.

What's the difference between ISO and CMM?

Answer1:
CMM is oriented towards software engineering process improvements and never speaks of customer satisfaction, whereas ISO 9001:2000 speaks of process improvements generic to all organisations and also speaks of customer satisfaction.

Answer2:
FYI, there are 3 popular ISO standards commonly used for software projects: 12207, 15504, and 9001 (a subset of the 9000 family). For CMM, the latest version is 1.1; however, it is already considered a legacy standard, to be replaced by CMMI, whose latest version is also 1.1. For further information on CMM/CMMI, visit the following:
http://www.sei.cmu.edu/cmm/
http://www.sei.cmu.edu/cmmi/

Building and releasing the build to QA. Does anybody know this profile in detail?

Build/Release engineer: the nature of the job is to retrieve the source from the configuration management system, create a build on the build machine, take a copy of the files moved to the build machine, and install them on the QA servers.
The main things to watch when you install on the QA servers: be careful about connection properties, check whether all applications were extracted properly, and make sure the QA server has all supporting software.

What makes a good test engineer?

Good test engineers have a "test to break" attitude. We, good test engineers, take the point of view of the customer, have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers and an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view and reduces the learning curve in automated test tool programming.

What's the role of CMM Level in Testing?
What's the difference between the 5 levels?
Which level is most commonly used in testing?

Answer1:
SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required. Perspective on CMM ratings: during 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematic key process area was Software Quality Assurance.

Answer2:
The whole essence of CMM or CMMI is to produce quality software. It targets the whole organizational practices (or processes), which are believed to be the best across industries. For further understanding of SEI CMMI visit http://www.sei.cmu.edu/cmmi.
What is the role of CMMI Level in Testing?
Please understand that testing is just a part, or subset, of CMMI. Testing is addressed in a particular Process Area; if my memory serves me correctly, it is the VER, or Verification, process area, and it is sometimes also addressed in VAL, the Validation process area. It could also be the other way around.
Each Process Area has its own level to be driven to level 5. This is true for the Continuous Representation of CMMI version 1.1. I am not sure about the Staged Representation of the same version. Please refer to the website above for more details.
What is the difference between the levels of CMMI?
This was already answered in the same thread by Priya. I would like to add that there is an additional level for the Continuous Representation which is called Level 0 (zero) --> Incomplete.
Which level is most commonly used in Testing?
I would say all levels would deal with testing. But again this is true for VAL and VER Process Areas.
For further readings, try searching google using CMMI+tutorials or Testing+CMMI. Most of the documents about CMMI are free and available on the Web.

Answer3:
Level 1. Initial The organization is characterized by an ad hoc set of activities. The processes aren't defined and success depends on individual effort and heroics.
Level 2. Repeatable At this level, basic project management processes are established to track cost, schedule, and functionality. The discipline is available to repeat earlier successes on similar projects.
Level 3. Defined All processes are documented for both management and engineering activities, and standards are defined.
Level 4. Managed Detailed measures of each process are defined and product quality data is routinely collected. Both process and products are quantitatively understood and controlled.
Level 5. Optimizing Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.