Testing

What is Static Analysis?

Static analysis is the analysis of a program carried out without executing it. A tool that performs such analysis is called a static analyzer.

What's Ad Hoc Testing?

Ad hoc testing is a type of testing in which the tester tries to break the software by randomly trying out its functionality.

What's Accessibility Testing?

Accessibility testing determines whether software will be usable by people with disabilities.

What's Alpha Testing?

Alpha testing is testing of an application by end users in a controlled environment, conducted at the developer's site.

What's Beta Testing?

Beta testing is testing from the client side: testing of the application after installation at the client site, outside the developer's control.

What is Component Testing ?

Component testing is the testing of individual software components; it is also known as unit testing.

What's Compatibility Testing?

Compatibility testing checks whether the software is compatible with the other elements of the system in which it operates, such as browsers, operating systems, or hardware.

What is Concurrency Testing ?

Concurrency testing is multi-user testing. It determines the effects of multiple users accessing the same application code, module, or database records, and it identifies and measures the level of locking, deadlocking, and use of single-threaded code and locking semaphores.

What is Conformance Testing ?

Conformance testing is the process of testing that an implementation conforms to the specification on which it is based. It is mainly applied to testing conformance to a formal standard.

What is Context Driven Testing ?

Context-driven testing is a school of software testing which uses continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now. It is similar to agile testing.

What is Data Driven Testing ?

Data-driven testing is testing in which the actions of a test case are parameterized by externally defined data values, maintained in a file or spreadsheet. It is a common technique in automated testing.
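
As a rough illustration, here is a minimal data-driven test sketch in Python using pytest's parametrize feature; the add() function and the inline data table are hypothetical stand-ins for externally maintained data rows:

    import pytest

    def add(a, b):
        return a + b

    # Each tuple is one externally defined data row: inputs plus the
    # expected output. In practice the rows would be loaded from a CSV
    # file or spreadsheet rather than written inline.
    @pytest.mark.parametrize("a, b, expected", [
        (1, 2, 3),
        (0, 0, 0),
        (-5, 5, 0),
    ])
    def test_add(a, b, expected):
        assert add(a, b) == expected

The same test logic runs once per data row, so adding coverage means adding data, not code.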

What is Conversion Testing ?

Conversion testing is the testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is Dependency Testing ?

Dependency testing examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.

What is Depth Testing?

Depth testing is testing that exercises a feature of a product in full detail, finding out all the information about that feature as it is tested.

What is Dynamic Testing ?

Dynamic testing is very important in the testing world. It is testing software through executing it; contrast with static testing.

What is Endurance Testing ?

Endurance testing is used to check for memory leaks or other problems related to prolonged execution.

What is End-to-End testing ?

End-to-end testing is used to mimic the real world: it tests a complete application environment in a realistic situation, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Exhaustive Testing ?

Exhaustive testing is basically a type of testing which covers all combinations of input values and preconditions for an element of the software under test.
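
A minimal sketch of what exhaustive testing looks like when the input domain is deliberately tiny; vote_eligible() and its domains are illustrative assumptions, and the inline oracle simply restates the specification:

    import itertools

    def vote_eligible(age, citizen):
        # Hypothetical function under test.
        return age >= 18 and citizen

    # Every combination of the two (small) input domains is exercised.
    # Real input spaces usually make this infeasible, which is why
    # techniques like equivalence partitioning exist.
    for age, citizen in itertools.product(range(0, 130), [True, False]):
        assert vote_eligible(age, citizen) == (age >= 18 and citizen)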

What is Gorilla Testing ?

Gorilla testing is a type of testing in which one particular module or piece of functionality is tested heavily.

What is Installation Testing ?

Installation testing confirms that the application under test installs correctly and operates as expected on the target environment, including complete, partial, and upgrade installations.

What is Localization Testing ?

Localization testing refers to testing software that has been made specifically for a specific locality.

What is Loop Testing ?

Loop testing is a white box testing technique that exercises program loops. Loop testing can also mean the testing of a resource or resources multiple times under program control.

In the latter sense, the looping is controlled by the Diagnostic Controller. Loop testing is only supported when running in maintenance mode or service mode, and when Advanced Diagnostic Routines have been chosen.

The user indicates that loop testing is desired at the Test Method menu. The rule associated with loop testing is that user interaction is only allowed on the first and last pass. The diagnostic applications are notified that loop mode has been invoked by obtaining the value of loop-mode in the Tm-input object class. The Diagnostic Application (DA) should take the following actions when loop-mode has the following values:

LOOPMODE_ENTERLM : The Diagnostic Application should perform any tests as usual, plus perform Error Log Analysis if running in Problem Determination mode.

LOOPMODE_INLM : The Diagnostic Application should perform any tests as usual, and not Error Log Analysis.

LOOPMODE_EXITLM : The Diagnostic Application should not perform any tests, nor perform Error Log Analysis. Instead, cleanup procedures should be invoked to remove wrap plugs, etc., before exiting.
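
In the white-box sense, loop testing exercises a loop at and around its iteration limits. A minimal sketch, with sum_first() as a hypothetical function under test:

    def sum_first(values, n):
        # Hypothetical function under test; its loop is the target.
        total = 0
        for i in range(min(n, len(values))):
            total += values[i]
        return total

    data = [1, 2, 3, 4, 5]
    assert sum_first(data, 0) == 0    # zero iterations
    assert sum_first(data, 1) == 1    # exactly one iteration
    assert sum_first(data, 2) == 3    # two iterations
    assert sum_first(data, 3) == 6    # a typical iteration count
    assert sum_first(data, 5) == 15   # the maximum number of iterations
    assert sum_first(data, 6) == 15   # one past the maximum (clamped)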

What is Mutation Testing ?

Mutation testing is very expensive to run, especially on very large applications. It is used to test the quality of a test suite: certain statements in the source code are mutated, and we check whether the test code is able to find the errors.

There is a mutation testing tool, Jester, which can be used to run mutation tests on Java code. Jester mutates specific areas of the source code, for example forcing a path through an if statement, changing constant values, and changing Boolean values.

Mutation testing is a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ("bugs") and retesting with the original test data/cases to determine whether the "bugs" are detected. Proper implementation requires large computational resources.
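
A hand-made mutant illustrates the idea; tools such as Jester generate and evaluate these changes automatically. The is_adult() function is a hypothetical example:

    def is_adult(age):
        return age >= 18       # original statement

    def is_adult_mutant(age):
        return age > 18        # mutant: >= deliberately changed to >

    # A test suite without a boundary case fails to kill the mutant:
    assert is_adult(30) == is_adult_mutant(30) == True
    assert is_adult(10) == is_adult_mutant(10) == False

    # Only a test at the boundary detects ("kills") the mutant,
    # revealing that the suite above was too weak:
    assert is_adult(18) is True
    assert is_adult_mutant(18) is False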

What is Monkey Testing ?

Monkey testing is a type of testing that exercises a system or application on the fly with random inputs, ensuring that the system or application does not crash, without checking the whole system or application in detail.

What is Positive Testing ?

Positive testing is testing that checks that the software application works as intended in its normal running mode. It is also known as "test to pass".


What is Negative Testing ?

Negative testing is just the opposite of positive testing: it is used to show where the system or application does not work well. It is also known as "test to fail".

What is Path Testing ?

Path testing is, as the name suggests, testing in which all paths through the program source code are checked at least once.

What is Performance Testing ?

Performance testing checks the compliance of the system: it evaluates a system or component against specified performance requirements. An automated test tool is often used to simulate a large number of users. It is also known as "load testing".

What is Ramp Testing ?

Ramp testing is very important for any system, because it continuously raises an input signal until the system breaks down.

What is Recovery Testing ?

Recovery testing confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

What is Retesting?

Retesting is, as the name implies, testing the functionality of the application again: executing the test scripts that were raised as defects in earlier builds. It is a type of testing performed to check the functionality of an application, using different inputs, after the fix is done for the bugs that were recorded during earlier testing.

What is Regression Testing?

Regression testing is used to check that changes in code have not affected the working functionality. It is the re-execution of selected test cases on a modified build to ensure that bug fixes are complete and correct.

It is testing done after a defect has been fixed: we test whether the fix affects anything else, running the build once again with the same inputs, to make sure that no new bugs have been introduced by the fix. Regression testing follows retesting.

Regression testing is of three types:

> Unit regression testing
> Partial regression testing
> Complete regression testing

What is Sanity Testing ?

Sanity testing is a brief test of the major functional elements of a piece of software to determine whether it is basically operational. It can be described as follows:

> The test engineer covers the basic functionality of the build to validate whether the build is stable enough for complete testing.
> Sanity testing is an initial effort to check whether the application can be tested further without any interruption. Basic GUI functionality and connectivity to the database are concentrated on here.
> The tester conducts the sanity test to ensure the stability of the application build, finding out whether the build is stable enough for complete testing of the application.

A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.

A sanity test is usually unscripted.

A sanity test is used to determine whether a small section of the application is still working after a minor change.

Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.

Sanity testing verifies whether requirements are met, checking all features breadth-first.

What is Scalability Testing ?

Scalability testing is a test designed to prove that both the functionality and the performance of a system will scale up to meet specified requirements.

It is performance testing focused on ensuring that the application under test gracefully handles increases in workload.

What is Security Testing ?

Security testing is the process of determining that an information system (IS) protects data and maintains functionality as intended.

It is testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

The six basic security concepts that need to be covered by security testing are:
> Confidentiality,
> Integrity,
> Authentication,
> Authorization,
> Availability, and
> Non-repudiation.

What is Stress Testing ?

Stress testing is a form of testing that is mainly used to determine the stability of a given system or entity.

It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

Stress testing may have a more specific meaning in certain industries.

What is Smoke Testing ?

Smoke testing is a term used in plumbing, woodwind repair, electronics, computer software development, and the entertainment industry.

In software development, it mainly refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail.

After a smoke test proves that the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright, the assembly is ready for more stressful testing. A smoke test is thus a quick-and-dirty test that the major functions of a piece of software work; the term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is Soak Testing ?

Soak testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. 

Soak testing is also called endurance testing: running a system at high levels of load for prolonged periods of time.

A soak test would normally execute several times more transactions in an entire day or night than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What's Usability Testing?

Usability testing mainly concerns user-friendliness. It is a technique used to evaluate a product by testing it on users.

This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.

This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.

What's User Acceptance Testing?

User acceptance testing (UAT) is a people-focused activity which helps confirm that your software application performs the required functionality while also meeting user requirements. This is especially critical for customer-facing software systems and products. 

User acceptance testing is often the final step before rolling out the application. It determines whether the software is satisfactory to an end-user or customer. It is usually a black box type of testing; in other words, the focus is on the functionality and the usability of the application rather than the technical aspects.

The steps taken for User Acceptance Testing typically involve one or more of the following :
> User Acceptance Test (UAT) Planning
> Designing UA Test Cases
> Selecting a Team that would execute the (UAT) Test Cases
> Executing Test Cases
> Documenting the Defects found during UAT
> Resolving the issues/Bug Fixing
> Sign Off.

What's Volume Testing?

Volume testing means testing the software with a large volume of data in the database. It belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably.

Volume testing refers to testing a software application with a certain amount of data.

We can perform volume testing by subjecting the system to a large volume of data, testing the software with heavy volumes of data. It is done to find memory leaks and buffer overflows.

It is a subset of stress testing. In volume testing we increase the size of the database to check the performance of the software; the application is subjected to more than its peak load, and the behavior is identified from a graph.

What is Acceptance Testing?

Acceptance Testing is the final stage of testing before product release or implementation. 

Acceptance testing is conducted to enable a user/customer to determine whether to accept a software product. It is normally performed to validate that the software meets a set of agreed acceptance criteria.

The acceptance criteria are the expected results or performance characteristics that define whether each test case passed or failed.

What is Agile Testing?

Agile testing is a software testing practice that follows the principles of the agile manifesto, treating software development as the customer of testing. Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing.

It is a testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.

What is Application Binary Interface (ABI)?

An ABI is a specification for a specific hardware platform combined with the operating system.

It is one step beyond the application program interface (API), which defines the calls from the application to the operating system.

An application binary interface (ABI) describes the low-level interface between an application (or any type of program) and the operating system, defining requirements for the portability of applications in binary form across different system platforms and environments.

What is Application Programming Interface (API)?

An application programming interface (API) is a set of routines, data structures, object classes and/or protocols provided by libraries and/or operating system services in order to support the building of applications. It is a formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services. An API may be:

Language-dependent :
It is available only in a particular programming language, utilizing the particular syntax and elements of that language to make the API convenient to use in this particular context.

Language-independent : It is written in a way that means it can be called from several programming languages, typically as an assembly/C-level interface. This is a desired feature for a service-style API which is not bound to a particular process or system and is available as a remote procedure call.

What is Automated Software Quality (ASQ)?

Automated Software Quality (ASQ) is the use of software tools, such as automated testing tools, to improve software quality.

What is Automated Testing?

Automated testing plays a major role in many software development projects. It uses software tools that provide the ability to run tests without manual intervention.

Automated software testing is an effective and necessary part of the software development cycle. It can be applied in GUI, performance, API, and other testing. It is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Automated software testing has long been thought critical for large development organizations, but is often considered to be too expensive and difficult to implement for smaller companies.

What is Backus-Naur Form?

Backus-Naur Form (BNF) is a metalanguage used to formally describe the syntax of a language. It is a metasyntax used to express context-free grammars: a formal way to describe formal languages.

John Backus and Peter Naur developed a context-free grammar to define the syntax of a programming language by using two sets of rules, which are:

> Lexical rules, and
> Syntactic rules.
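
A small illustrative grammar in BNF notation (an assumed example, not taken from any particular standard), defining signed integers:

    <signed-integer> ::= <integer> | "-" <integer>
    <integer>        ::= <digit> | <digit> <integer>
    <digit>          ::= "0" | "1" | "2" | "3" | "4"
                       | "5" | "6" | "7" | "8" | "9"

Here the quoted digit characters play the role of lexical rules, while the <integer> and <signed-integer> productions are syntactic rules.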

What is Basic Block?

A basic block is a sequence of one or more consecutive, executable statements containing no branches. One profiling approach is based upon using profiles of a program's code structure (basic blocks) to uniquely identify different phases of execution in the program.
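
A short sketch showing where the basic-block boundaries fall in a small, hypothetical Python function; each block is straight-line code entered at the top and left at the bottom:

    def classify(n):
        result = []                 # block 1: straight-line entry code,
        total = n * 2               # ending at the branch below
        if total > 10:
            result.append("big")    # block 2: the taken branch
        else:
            result.append("small")  # block 3: the not-taken branch
        return result               # block 4: the join point after the if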

What is Basis Path Testing?

Basis path testing is a white box test case design technique that uses the algorithmic flow of the program to design tests.

A new coverage measure is proposed for efficient and effective software testing. The conventional coverage measure for branch testing has such defects as overestimation of software quality and redundant test data selection because all branches are treated equally.

These problems can be avoided by paying attention to only those branches essential for path testing. That is, if one branch is executed whenever another particular branch is executed, the former branch is nonessential for path testing. This is because a path covering the latter branch also covers the former branch. Branches other than such nonessential branches will be referred to as essential branches.

What is Basis Set?

A basis set is the set of tests derived using basis path testing.

What is Baseline?

A baseline is the point at which some deliverable produced during the software engineering process is put under formal change control.

What is Binary Portability Testing?

Binary portability testing is testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

What is Black Box Testing?

Black box testing is testing based on an analysis of the specification of a piece of software without reference to its internal workings.

The goal is to test how well the component conforms to the published requirements for the component. Specification-based test data adequacy criteria have been explored for this purpose.

One approach focuses on generating a flow graph from a component's specification and applying analogues of white-box strategies to it.

What is Bottom Up Testing?

Bottom Up testing is basically an approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

What is Boundary Testing?

Boundary testing consists of tests that focus on the boundary or limit conditions of the software being tested.

What is Bug?

A bug is a fault in a program which causes the program to perform in an unintended or unanticipated manner.

What is Defect?

A defect occurs when the software misses some feature or function that is present in the requirements.

What is Boundary Value Analysis?

BVA stands for Boundary Value Analysis. BVA is similar to equivalence partitioning but focuses on "corner cases": values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
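
A minimal sketch of boundary-value test inputs for the -100 to +1000 range used above; accept_value() is a hypothetical stand-in for the specified range check, and pytest is assumed:

    import pytest

    def accept_value(value):
        # Stand-in implementation of the specified range check.
        return -100 <= value <= 1000

    @pytest.mark.parametrize("value, expected", [
        (-101, False),   # just below the lower boundary
        (-100, True),    # on the lower boundary
        (-99,  True),    # just inside the lower boundary
        (999,  True),    # just inside the upper boundary
        (1000, True),    # on the upper boundary
        (1001, False),   # just above the upper boundary
    ])
    def test_boundaries(value, expected):
        assert accept_value(value) == expected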

What is Branch Testing?

Branch testing is testing in which all branches of the program are checked at least once.

What is Breadth Testing?

Breadth testing is a test suite that exercises the full functionality of a product but does not test features in detail.

What is CAST?

CAST stands for Computer Aided Software Testing.

What is Capture/Replay Tool?

A capture/replay tool is a test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Such tools are most commonly applied to GUI testing. For example, jRapture is a tool for capturing and replaying Java program executions in the field; it works with Java binaries (byte code) and any compliant implementation of the Java virtual machine, and it employs a lightweight, transparent capture process that permits unobtrusive capture of a Java program's executions.

What is CMM?

CMM stands for the Capability Maturity Model (for software), a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes. It was developed to present sets of recommended practices in a number of key process areas that have been shown to enhance software development and maintenance capability.

The CMM was designed to help developers select process-improvement strategies by determining their current process maturity and identifying the issues most critical to improving their software quality and process.

What is Cause Effect Graph?

A cause-effect graph is a graphical representation of inputs and the associated output effects which can be used to design test cases. (Relatedly, applying Delta Debugging to multiple states of a program automatically reveals the cause-effect chain of a failure: the variables and values that caused it. In one case study, a prototype implementation successfully isolated the cause-effect chain for a failure of the GNU C compiler.)

What is Code Complete?

Code complete is the phase of development where functionality is implemented in its entirety and bug fixes are all that are left: all functions found in the functional specifications have been implemented.

What is Code Coverage?

Code coverage is an analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

What is Code Inspection?

Code inspection is a formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

What is Code Walkthrough?

Code walkthrough is a formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

What is Coding?

Coding is the generation of source code.

What is Compatibility Testing?

Compatibility testing is testing whether the software is compatible with other elements of a system with which it should operate, e.g. browsers, operating systems, or hardware.

What is Component?

A component is an identifiable part of a larger program or construction.

Usually, a component provides a particular function or group of related functions. In programming design, a system is divided into components that in turn are made up of modules .


Component test means testing all related modules that form a component as a group to make sure they work together.

What is Conformance Testing?

Conformance testing, or type testing, is testing to determine whether a system meets some specified standard. Also known as compliance testing, conformance tests capture the technical description of a specification and measure whether a product faithfully implements that specification.

The testing provides developers, users, and purchasers, with increased levels of confidence in product quality and increases the probability of successful interoperability.

It is a methodology used in engineering to ensure that a product, process, computer program, or system meets a defined set of standards. These standards are commonly defined by large, independent entities such as the Institute of Electrical and Electronics Engineers (IEEE). Conformance testing is often performed by external organizations, sometimes the standards body itself, to give greater guarantees of compliance.

Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard.

What is Conversion Testing?

Conversion testing is used to ensure that the existing application's data is not changed while converting it to the new updated version or enhancement of the application. Data may be converted into an invalid format that cannot be processed by the new system, in which case the data will have no value.

In addition, data may be omitted from the conversion process resulting in gaps or system errors in the new system. An inability to process backup or archive files will result in the inability to restore or interrogate old data.

Conversion testing is thus the testing of the programs or procedures used to convert data from an existing system for use in a replacement system, whether for an update or a move to a new system.

What is Cyclomatic Complexity?

Cyclomatic complexity, or conditional complexity, is a software metric or measurement. It was developed by Thomas J. McCabe in 1976 and is used to measure the complexity of a program: it directly measures the number of linearly independent paths through a program's source code, providing a quantitative measure of the logical complexity of the program.

Cyclomatic complexity measures the amount of decision logic in a single software module. It is used for two related purposes in the structured testing methodology.
> It gives the number of recommended tests for software.
> It is used during all phases of the software lifecycle, beginning with design, to keep software reliable, testable, and manageable.       

Cyclomatic complexity is based entirely on the structure of the software's control flow graph. It is computed using the control flow graph of the program: the nodes of the graph correspond to the commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first. Cyclomatic complexity is the most widely used member of a class of static software metrics.

Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Its mainly a measurement of the intricacy of a program module based on the number of repetitive cycles or loops that are made in the program logic. It is used as a general measure of complexity for software quality control as well as to determine the number of testing procedures.
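
A worked example on a small hypothetical function: counting decision points gives the same answer as E - N + 2 on the control flow graph.

    def grade(score):
        if score >= 90:       # decision 1
            return "A"
        elif score >= 75:     # decision 2
            return "B"
        elif score >= 60:     # decision 3
            return "C"
        return "F"

    # For a single function, V(G) = number of decision points + 1,
    # so here V(G) = 3 + 1 = 4. Structured testing therefore recommends
    # at least four tests, one per linearly independent path:
    for score, expected in [(95, "A"), (80, "B"), (65, "C"), (10, "F")]:
        assert grade(score) == expected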

What is Data Dictionary?

A data dictionary is a repository of information about data, such as its meaning, relationships to other data, origin, usage, and format.

What is Data Flow Diagram?

A data flow diagram is a modeling notation that represents a functional decomposition of a system.

Data flow diagrams are used in Structured Analysis and are based on an abstract model for data flow transformations. One formal semantics consists of a collection of VDM functions transforming an abstract syntax representation of a data flow diagram into an abstract syntax representation of a VDM specification.

Since this transformation is executable, it becomes possible to provide a software analyst/designer with two 'views' of the system being modeled: a graphical view in terms of a data flow diagram, and a textual view in terms of a VDM specification.

What is Debugging?

Debugging is the process of finding and removing the causes of software failures. One approach uses Event Based Behavioral Abstraction (EBBA), in which debugging is treated as a process of creating models of expected program behaviors and comparing these to the actual behaviors exhibited by the program.

The use of EBBA techniques can enhance debugging tool transparency, reduce latency and uncertainty for fundamental debugging activities, and accommodate diverse, heterogeneous architectures. Using events and behavior models as a basic mechanism provides a uniform view of heterogeneous systems and enables analysis to be performed in well defined ways.

Their use also enables EBBA users to extend and reuse knowledge gained in solving previous problems in new situations. A behavior modeling algorithm matches actual behavior to models and automates many behavior analysis steps.

The algorithm matches behavior in as many ways as possible and resolves these to return the best match to the user. It deals readily with partial behavior matches and incomplete information. A tool set built on these ideas has been used to investigate the behavior of a wide range of programs; the tools are modular and can be distributed readily throughout a system.

What is Dependency Testing?

Dependency testing examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.

A computation method called the chase has been presented for testing implication of data dependencies by a set of data dependencies. The chase operates on tableaux similar to those of Aho, Sagiv, and Ullman, and it includes previous tableau computation methods as special cases.

By interpreting tableaux alternately as mappings or as templates for relations, it is possible to test implication of join dependencies including multivalued dependencies and functional dependencies by a set of dependencies.

What is Depth Testing?

Depth testing is basically a test that exercises a feature of a product in full detail. 

What is Dynamic Testing?

Dynamic testing is testing software through executing it. Contrast with static testing, which analyzes the software without running it.

What is Emulator?

An emulator is a device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

What is Endurance Testing?

Endurance testing checks for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End testing?

End-to-end testing tests a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is mainly used for network-related problems: network failures affect the availability of service delivery across wide-area networks (WANs), and end-to-end testing can evaluate classes of techniques for improving end-to-end service availability.

Using several large-scale connectivity traces, we develop a model of network unavailability that includes key parameters such as failure location and failure duration. We then use trace-based simulation to evaluate several classes of techniques for coping with network unavailability. We find that caching alone is seldom effective at insulating services from failures but that the combination of mobile extension code and prefetching can improve average unavailability by as much as an order of magnitude for classes of service whose semantics support disconnected operation.

We find that routing-based techniques may provide significant improvements but that the improvements of many individual techniques are limited because they do not address all significant categories of network failures. By combining the techniques we examine, some systems may be able to reduce average unavailability by as much as one or two orders of magnitude.

What is Equivalence Class?

An equivalence class is a portion of a component's input or output domains for which the component's behavior is assumed to be the same, based on the component's specification.

What is Equivalence Partitioning?

Equivalence partitioning is a test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
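
A minimal sketch: one representative value is drawn from each equivalence class of a hypothetical ticket-pricing rule, instead of testing every possible age:

    def ticket_price(age):
        # Hypothetical component whose specification defines three
        # equivalence classes: under 18, 18-64, and 65 and over.
        if age < 18:
            return 5
        if age < 65:
            return 10
        return 7

    # One test case per equivalence class:
    assert ticket_price(12) == 5    # representative of the minors class
    assert ticket_price(30) == 10   # representative of the adults class
    assert ticket_price(70) == 7    # representative of the seniors class

Boundary value analysis would then add tests at ages such as 17, 18, 64, and 65.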

What is Exhaustive Testing?

Exhaustive testing is testing which covers all combinations of input values and preconditions for an element of the software under test.

A parity bit signature particularly well suited for exhaustive testing techniques is defined and discussed.

The discussion is concerned not only with the proposed parity bit signature itself, but also with the general problem of evaluating its effectiveness relative to a given implementation. In addition to such desirable properties as uniformity and ease of implementation, it is shown to be especially amenable to efficient fault coverage calculations .

What is Functional Decomposition?

Functional decomposition is a technique used during planning, analysis, and design; it creates a functional hierarchy for the software.

What is Functional Specification?

A functional specification is a document that describes in detail the characteristics of the product with regard to its intended features.

What is Functional Testing?

Functional testing is testing of the features and operational behavior of a product to ensure they correspond to its specifications.

It is testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. See also black box testing.

What is Glass Box Testing?

Glass box testing is the same as white box testing. Testing is a critical activity for database application programs, as faults, if undetected, could lead to unrecoverable data loss. Database application programs typically contain statements written in an imperative programming language with embedded data manipulation commands, such as SQL.

However, relatively little study has been made of the testing of database application programs. In particular, few testing techniques explicitly consider the inclusion of database instances in the selection of test cases and the generation of test data input. One line of research studies the generation of database instances that respect the semantics of the SQL statements embedded in a database application program.


A supporting tool generates a set of constraints which collectively represent a property against which the program is tested. Database instances for program testing can then be derived by solving the set of constraints using existing constraint solvers.

What is Gorilla Testing?

Gorilla testing is a type of testing in which one particular module or piece of functionality is tested heavily.

What is Gray Box Testing?

Gray box testing is a combination of black box and white box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

Test case generation is the most important part of the testing effort; the automation of specification-based test case generation needs formal or semi-formal specifications. As a semi-formal modelling language, UML is widely used to describe analysis and design specifications by both academia and industry, and thus UML models naturally become sources of test generation.

Test cases are usually generated from the requirements or the code, while the design is seldom considered. One approach generates test cases directly from a UML activity diagram using a gray-box method, where the design is reused to avoid the cost of test model creation. In this approach, test scenarios are directly derived from the activity diagram modelling an operation.

Then all the information for test case generation, i.e. input/output sequence and parameters, the constraint conditions and expected object method sequence, is extracted from each test scenario.

Finally, the possible values of all the input/output parameters can be generated by applying the category-partition method, and a test suite can be systematically generated to find inconsistencies between the implementation and the design. A prototype tool named UMLTGF has been developed to support this process.

What is High Order Tests?

High order tests are black-box tests conducted once the software has been integrated.

What is Independent Test Group (ITG)?

An Independent Test Group (ITG) is a group of people whose primary responsibility is software testing.

What is Inspection?

An inspection is a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.

What is Integration Testing?

Integration Testing is basically a Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. 

This type of testing is especially relevant to client/server and distributed systems. Increasing numbers of software developers are using the Unified Modeling Language (UML) and associated visual modeling tools as a basis for the design and implementation of their distributed, component-based applications.

At the same time, it is necessary to test these components, especially during unit and integration testing. At Siemens Corporate Research, the issue of testing components has been addressed by integrating test generation and test execution technology with commercial UML modeling tools such as Rational Rose, the goal being a design-based testing environment.

In order to generate test cases automatically, developers first define the dynamic behavior of their components via UML Statecharts, specify the interactions amongst them and finally annotate them with test requirements. Test cases are then derived from these annotated Statecharts using our test generation engine and executed with the help of our test execution tool.

The latter tool was developed specifically for interfacing to components based on COM/DCOM and CORBA middleware. This approach models components and their interactions, derives test cases from the component models, and then executes the test cases to verify the components' conformant behavior.

What is Installation Testing?

Installation testing confirms that the application installs correctly and operates as expected on the target environment, including complete, partial, and upgrade installations.

A method for installing and/or testing software for a build to order computer system includes reading a plurality of component descriptors from a computer readable file. At least one component descriptor describes a respective component of the computer system. A plurality of steps are retrieved from a database, at least one step being associated with a respective component descriptor. A step also includes a respective sequence number. The plurality of steps are sequenced in a predetermined order according to the sequence numbers to provide a step sequence. The step sequence includes commands for installing and/or testing software upon the computer system.

What is Localization Testing?

Localization testing refers to testing software that has been made specifically for a specific locality.

What is Loop Testing?

Loop testing is basically a white box testing technique that exercises program loops. 

What is Metric?

A metric is a standard of measurement. Software metrics are statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

What is Monkey Testing?

Monkey testing is testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or the application does not crash.

What is Negative Testing?

Negative testing is testing aimed at showing that the software does not work. Also known as "test to fail".

What is Path Testing?

Path testing is testing in which all paths in the program source code are tested at least once.

What is Performance Testing?

Performance testing is testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users.

In performance testing, some points are very important for testers:
> Response Time
> Band Width
> Throughput
> Scalability
> Stability.

Performance testing has three types:
> Load Testing
> Stress Testing
> Volume Testing
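
A minimal sketch of the idea behind response time and throughput measurement; real performance testing uses dedicated load tools and simulated users, and operation_under_test() is an illustrative stand-in for a transaction:

    import time

    def operation_under_test():
        sum(range(10_000))   # stand-in for a real request or transaction

    N = 1_000
    start = time.perf_counter()
    for _ in range(N):
        operation_under_test()
    elapsed = time.perf_counter() - start

    print(f"throughput: {N / elapsed:.0f} ops/s")
    print(f"mean response time: {elapsed / N * 1000:.3f} ms")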

What is Positive Testing?

Positive testing is testing aimed at showing that the software works. Also known as "test to pass".

What is Quality Assurance?

Quality assurance comprises all those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

What is Quality Audit?

Quality audit is basically a systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

What is Quality Circle?

Quality Circle is basically a group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality. 

What is Quality Control?

Quality control is the set of operational techniques and activities used to fulfill and verify requirements of quality.

What is Quality Management?

Quality management is that aspect of the overall management function that determines and implements the quality policy.

What is Quality Policy?

The quality policy comprises the overall intentions and direction of an organization as regards quality, as formally expressed by top management.

What is Quality System?

A quality system is the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

What is Race Condition?

A race condition is a cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
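
A minimal sketch of a race condition in Python: two threads perform an unsynchronized read-modify-write on a shared counter, so updates can be lost.

    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1   # read, add, write: not atomic and unprotected

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Depending on the interpreter and timing, this can print less than
    # 200000; guarding the increment with a threading.Lock removes the race.
    print(counter)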

What is Ramp Testing?

Ramp testing is continuously raising an input signal until the system breaks down.

What is Recovery Testing?

Recovery testing confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

What is Regression Testing?

Regression testing is retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

What is Release Candidate?

A release candidate is a pre-release version which contains the desired functionality of the final version but which still needs to be tested for bugs, which ideally should be removed before the final version is released.

What is Sanity Testing?

Sanity testing is a brief test of major functional elements of a piece of software to determine whether it is basically operational. See also smoke testing.

What is Scalability Testing?

Scalability testing is performance testing focused on ensuring that the application under test gracefully handles increases in workload.

What is Security Testing?

Security Testing is that testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is Software Requirements Specification?

A software requirements specification is a deliverable that describes all data, functional, and behavioral requirements, all constraints, and all validation requirements for software.

What is Static Analysis?

Static analysis is the analysis of a program carried out without executing it. A tool that performs such analysis is called a static analyzer.

What is Software Testing?

Software testing is a set of activities conducted with the intent of finding errors in software. It is a negative approach with a positive intent.

What is Static Analyzer?

A static analyzer is a tool that carries out static analysis.

What is Static Testing?

Static testing is the analysis or review of software artifacts carried out without executing the program, for example through reviews, walkthroughs, and inspections. Contrast with dynamic testing.

What is Storage Testing?

Storage testing verifies that the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This concerns external storage as opposed to internal storage.

What is Stress Testing?

Stress testing is testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

What is Structural Testing?

Structural testing is testing based on an analysis of the internal workings and structure of a piece of software.

What is System Testing?

System Testing is basically a testing that attempts to discover defects that are properties of the entire system rather than of its individual components. 

What is Testability?

Testability is the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

What is Testing?

Testing is the process of exercising software to verify that it satisfies specified requirements and to detect errors.

The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

What is Test Automation?

Test automation is the same as automated testing.

What is Test Bed?

A test bed is an execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The test plan for a project should enumerate the test bed(s) to be used.

What is Test Case?

A test case is the commonly used term for a specific test. This is usually the smallest unit of testing.

A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc.

A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
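
A minimal test case sketch in Python's unittest, showing inputs, an implicit precondition, and expected outcomes; withdraw() is a hypothetical function under test:

    import unittest

    def withdraw(balance, amount):
        # Hypothetical function under test.
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class WithdrawTestCase(unittest.TestCase):
        def test_withdraw_reduces_balance(self):
            # Inputs: balance 100, amount 40; expected outcome: 60.
            self.assertEqual(withdraw(100, 40), 60)

        def test_overdraw_is_rejected(self):
            # Invalid input must raise rather than return a negative balance.
            with self.assertRaises(ValueError):
                withdraw(100, 200)

    if __name__ == "__main__":
        unittest.main()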

What is Test Driven Development?

Test driven development (TDD) is a testing methodology associated with agile programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development.

Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.

What is Test Driver?

A test driver is a tool used to execute a test. Also known as a test harness.

What is Test Environment?

The test environment is the hardware and software environment in which tests will be run, together with any other software with which the software under test interacts when under test, including stubs and test drivers.

What is Test First Design?

Test first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
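
A minimal test-first sketch: the unit test below is written before any production code exists, fails first, and only then is the simplest slugify() implementation written to make it pass (both names are hypothetical):

    # Step 1: write the test first; running it fails because slugify()
    # does not exist yet.
    def test_slugify_lowercases_and_joins_words():
        assert slugify("Hello World") == "hello-world"

    # Step 2: write just enough production code to make the test pass.
    # (The name is resolved at call time, so defining it after the test
    # is fine when a test runner collects and runs the module.)
    def slugify(text):
        return "-".join(text.lower().split())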

What is Test Harness?

A test harness is a program or test tool used to execute tests. Also known as a test driver.
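
A minimal sketch of a hand-rolled test harness: it discovers functions named test_* in the current module, runs each one, and reports pass/fail results.

    import sys
    import traceback

    def test_upper():
        assert "abc".upper() == "ABC"

    def test_strip():
        assert "  hi  ".strip() == "hi"

    def run_all():
        module = sys.modules[__name__]
        failures = 0
        for name in sorted(dir(module)):
            if name.startswith("test_"):
                try:
                    getattr(module, name)()   # execute one test
                    print("PASS", name)
                except Exception:
                    failures += 1
                    print("FAIL", name)
                    traceback.print_exc()
        return failures

    if __name__ == "__main__":
        sys.exit(run_all())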

What is Test Plan?

A test plan is a document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

What is Test Procedure?

A test procedure is a document providing detailed instructions for the execution of one or more test cases.

What is Test Script?

Test script is a term mainly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

What is Test Specification?

A test specification is a document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests.

What is Test Suite?

A test suite is a collection of tests used to validate the behavior of a product. The scope of a test suite varies from organization to organization; there may be several test suites for a particular product, for example. In most cases, however, a test suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

What is Test Tools?

Test tools are computer programs used in the testing of a system, a component of the system, or its documentation; in automated testing, they are the programs that drive the tests.

What is Thread Testing?

Thread Testing is basically a variation of top down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

What is Top Down Testing?

Top Down testing is basically an approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
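
A minimal sketch of the stub idea in top-down integration: the top-level convert() component is tested first, with its lower-level rate lookup replaced by a stub (all names are hypothetical):

    def fetch_rate_stub(currency):
        # Stub standing in for the real, not-yet-integrated component.
        return {"EUR": 0.9, "GBP": 0.8}[currency]

    def convert(amount, currency, rate_source):
        # Top-of-hierarchy component under test.
        return round(amount * rate_source(currency), 2)

    assert convert(100, "EUR", fetch_rate_stub) == 90.0
    assert convert(50, "GBP", fetch_rate_stub) == 40.0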

What is Total Quality Management?

Total Quality Management is basically a company commitment to develop a process that achieves high quality product and customer satisfaction. 

What is Traceability Matrix?

A traceability matrix is a document showing the relationship between test requirements and test cases.

What is Usability Testing?

Usability testing is testing the ease with which users can learn and use a product.

What is Use Case?

A use case is the specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

What is Unit Testing?

Unit Testing is basically a Testing of individual software components. 

What is Validation?

Validation is the process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing.

What is Verification?

Verification is the process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing.

What is White Box Testing?

Testing based on an analysis of the internal workings and structure of a piece of software. It includes techniques such as branch testing and path testing, and is also known as structural testing and glass box testing. Contrast with black box testing. White box testing is used to test the internal logic of the code, for example checking whether each path has been executed once, or whether each branch has been executed at least once; it is used to check the structure of the code.

What is Workflow Testing?

Workflow testing is scripted end-to-end testing that duplicates specific workflows expected to be used by the end user.

What's the difference between load and stress testing?

There are several differences:

One of the most common, but unfortunate misuse of terminology is treating "load testing" and "stress testing" as synonymous.

The consequence of this ignorant semantic abuse is usually that the system is neither properly "load tested" nor subjected to a meaningful stress test. Stress testing is subjecting a system to an unreasonable load while denying it the resources needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave in a decent manner not corrupting or losing data. Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc.

The load or incoming transaction stream in stress testing is often deliberately distorted so as to force the system into resource depletion.
Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and in performance testing.

The term 'load testing' by itself is too vague and imprecise to warrant use. For example, do we mean 'representative load', 'overload', 'high load', etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, 'load testing' is merely testing at the highest transaction arrival rate in performance testing.

What's the difference between QA and testing?

The differences between QA and testing are:

> QA is more a preventive thing, ensuring quality in the company and therefore the product, rather than just testing the product for software bugs.
> TESTING means 'quality control'.
> QUALITY CONTROL measures the quality of a product.
> QUALITY ASSURANCE measures the quality of the processes used to create a quality product.

What is the best tester to developer ratio?

Reported tester-to-developer ratios range from 10:1 to 1:10. There's no simple answer.

It depends on many things: amount of reused code, number and type of interfaces, platform, quality goals, etc. It can also depend on the development model; the more specs, the fewer testers.

The roles can play a big part also.
Does QA own beta?
Do you include process auditors or planning activities?

These figures can all vary very widely depending on how you define 'tester' and 'developer'.

In some organizations, a 'tester' is anyone who happens to be testing software at the time, such as their own. In other organizations, a 'tester' is only a member of an independent test group.
It is better to ask about the test labor content than about the tester/developer ratio. The test labor content, across most applications, is generally accepted as 50% when people do honest accounting. For life-critical software, this can go up to 80%.

How can new Software QA processes be introduced in an existing organization?

There are many ways to introduce new software QA processes in an existing organization.

These include the following:
> A lot depends on the size of the organization and the risks involved. For large organizations with high risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
> Where the risk is lower, management and organizational buy in and QA implementation may be a slower, step at a time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
> For small groups or projects, a more ad hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
> In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications or expectations.

What are 5 common problems in the software development process?

Common problems in the software development process are:

Poor requirements : if requirements are unclear, incomplete, too general, or not testable, there will be problems.

Unrealistic schedule : if too much work is crammed in too little time, problems are inevitable.

Inadequate testing : no one will know whether or not the program is any good until the customer complains or systems crash.

Featuritis : requests to pile on new features after development is underway are extremely common.

Miscommunication : if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

What are 5 common solutions to software development problems?

The main solutions to software development problems are:

Solid requirements: clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.

Realistic schedules : allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.

Adequate testing : start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing.
Stick to initial requirements as much as possible : be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes later on.

Communication : require walkthroughs and inspections when appropriate; make extensive use of group communication tools :
     > E-mail,
     > Groupware,
     > Networked bug-tracking tools,
     > Change management tools,
     > Intranet capabilities.
Ensure that documentation is available and up-to-date : preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.

What is 'good code'?

'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.

> For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation :
         * Minimize or eliminate use of global variables.
         * Use descriptive function and method names : use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
         * Use descriptive variable names : use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line), be consistent in naming conventions.
         * Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
         * Function descriptions should be clearly spelled out in comments preceding a function's code.
         * Organize code for readability; use whitespace generously, both vertically and horizontally.
         * Each line of code should contain 70 characters max.
         * One code statement per line.
         * Coding style should be consistent throughout a program (e.g., use of brackets, indentation, naming conventions, etc.).
         * In adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
         * No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing), or if possible a separate flow chart and detailed program documentation.
          * Make extensive use of error handling procedures and status and error logging.
          * For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).
        * For C++, keep class methods small, less than 50 lines of code per method is preferable.
        * For C++, make liberal use of exception handlers.

What is 'good design'?

'Good design' could refer to many things, but often refers to 'functional design' or 'internal design'.

Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.

Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; it is also robust, with sufficient error handling and status logging capability, and works correctly when implemented.

For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the online help; some common rules of thumb include :

The program should act in a way that least surprises the user.
It should always be evident to the user what can be done next and how to exit.
The program shouldn't let the users do something stupid without warning them.

What makes a good test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail.

Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.

Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.

Judgment skills are needed to assess high risk areas of an application on which to focus testing efforts when time is limited.

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, a QA engineer must be able to understand the entire software development process and how it fits into the business approach and goals of the organization.

Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed.

An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

What makes a good QA or Test manager?

A good QA, test, or QA/test (combined) manager should :

Be familiar with the software development process

Be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat \'negative\' process (e.g., looking for or preventing problems).

Be able to promote teamwork to increase productivity.

Be able to promote cooperation between software, test, and QA engineers.

Have the diplomatic skills needed to promote improvements in QA processes.

Have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to.

Have people judgment skills for hiring and keeping skilled personnel.

Be able to communicate with technical and non-technical people, engineers, managers, and customers.

Be able to run meetings and keep them focused.

What's the role of documentation in QA?

The role of documentation in QA is critical. QA practices should be documented so that they are repeatable. The following should all be documented :
       * Specifications,
       * Designs,
       * Business rules,
       * Inspection reports,
       * Configurations,
       * Code changes,
       * Test plans,
       * Test cases,
       * Bug reports,
       * User manuals, etc.

There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information.

Change management for documentation should be used if possible.

What's the big deal about 'requirements'?

One of the most reliable ways to ensure problems, or failure, in a complex software project is to have poorly documented requirements specifications.

Some key points :
> Requirements are the details describing an application\'s externally perceived functionality and properties.
> Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable.
> A non-testable requirement would be, for example, 'user-friendly' (too subjective).
> A testable requirement would be something like 'the user must enter their previously assigned password to access the application' (see the sketch after this answer).
> Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project.
> Many books are available that describe various approaches to this task.
> Care should be taken to involve ALL of a project's significant 'customers' in the requirements process.
> 'Customers' could be in-house personnel or outside, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc.
> Anyone who could later derail the project if their expectations aren't met should be included if possible.
> Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall...'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements.

In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear cut way to determine if a software application is performing correctly.
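
As a hedged illustration of why testable requirements matter, here is how the password requirement mentioned above could be turned into an automated check. The `authenticate` function and the stored passwords are invented stand-ins for the real application entry point, not anything from the original text.

```python
import unittest

ASSIGNED_PASSWORDS = {"alice": "s3cret"}   # invented test fixture

def authenticate(user, password):
    """Hypothetical stand-in for the application's access check."""
    return ASSIGNED_PASSWORDS.get(user) == password

class TestPasswordRequirement(unittest.TestCase):
    def test_assigned_password_grants_access(self):
        self.assertTrue(authenticate("alice", "s3cret"))

    def test_wrong_password_denies_access(self):
        self.assertFalse(authenticate("alice", "guess"))

if __name__ == "__main__":
    unittest.main()
```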

What steps are needed to develop and run software tests?

The following are some of the steps needed to develop and run software tests :

> Obtain requirements, functional design, and internal design specifications and other necessary documents.
> Obtain budget and schedule requirements.
> Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes, such as release processes, change processes, etc.
> Identify the application's higher-risk aspects, set priorities, and determine the scope and limitations of tests.
> Determine test approaches and methods : unit, integration, functional, system, load, usability tests, etc.
> Determine test environment requirements (hardware, software, communications, etc.).
> Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.).
> Determine test input data requirements.
> Identify tasks, those responsible for tasks, and labor requirements.
> Set schedule estimates, timelines, milestones.
> Determine input equivalence classes, boundary value analyses, error classes (see the sketch after this list).
> Prepare test plan document and have needed reviews/approvals.
> Write test cases.
> Have needed reviews/inspections/approvals of test cases.
> Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data.
> Obtain and install software releases.
> Perform tests.
> Evaluate and report results.
> Track problems/bugs and fixes.
> Retest as needed.
> Maintain and update test plans, test cases, test environment, and testware through the life cycle.
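
As a small illustration of the equivalence-class and boundary-value step above, here is a sketch in Python. The `is_valid_quantity` function and the 1..100 range are hypothetical examples, not from the original text.

```python
def is_valid_quantity(n):
    """Hypothetical rule: the field accepts integers from 1 to 100."""
    return 1 <= n <= 100

# Equivalence classes: below range, in range, above range.
# Boundary values: just outside, on, and just inside each edge.
BOUNDARY_CASES = {
    0: False,    # just below the lower bound
    1: True,     # lower bound
    2: True,     # just inside the lower bound
    99: True,    # just inside the upper bound
    100: True,   # upper bound
    101: False,  # just above the upper bound
}

for value, expected in BOUNDARY_CASES.items():
    assert is_valid_quantity(value) == expected, f"failed at {value}"
print("all boundary cases passed")
```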

What is 'configuration management'?

Configuration management covers the processes used to control, coordinate, and track :

> Code,
> Requirements,
> Documentation,
> Problems,
> Change requests,
> Designs,
> Tools/compilers/libraries/patches,
> Changes made to them, and who makes the changes.

What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs.

Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How can it be known when to stop testing?

When to stop testing can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are :
> Deadlines (release deadlines, testing deadlines, etc.)
> Test cases completed with certain percentage passed.
> Test budget depleted.
> Coverage of code/functionality/requirements reaches a specified point.
> Bug rate falls below a certain level.
> Beta or alpha testing period ends.

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. If warranted, formal methods are also available.

Considerations can include :
> Which functionality is most important to the project's intended purpose?
> Which functionality is most visible to the user?
> Which functionality has the largest safety impact?
> Which functionality has the largest financial impact on users?
> Which aspects of the application are most important to the customer?
> Which aspects of the application can be tested early in the development cycle?
> Which parts of the code are most complex, and thus most subject to errors?
> Which parts of the application were developed in rush or panic mode?
> Which aspects of similar/related previous projects caused problems?
> Which aspects of similar/related previous projects had large maintenance expenses?
> Which parts of the requirements and design are unclear or poorly thought out?
> What do the developers think are the highest risk aspects of the application?
> What kinds of problems would cause the worst publicity?
> What kinds of problems would cause the most customer service complaints?
> What kinds of tests could easily cover multiple functionalities?
> Which tests will have the best high risk coverage to time required ratio?

What can be done if requirements are changing continuously?

This is a common problem and a major headache. Possible approaches include :

> Work with the project's stakeholders early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance, if possible.
> It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch.
> If the code is well-commented and well-documented, this makes changes easier for the developers.
> Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
> The project\'s initial schedule should allow for some extra time commensurate with the possibility of changes.
> Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
> Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
> Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted; after all, that's their job.
> Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes.
> Try to design some flexibility into automated test scripts.
> Focus initial automated testing on application aspects that are most likely to remain unchanged.
> Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
> Design some flexibility into test cases. This is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans.
> Focus less on detailed test plans and test cases and more on ad hoc testing with an understanding of the added risk that this entails.

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project.

However, if extensive testing is still not justified, risk analysis is again needed, and the same considerations described previously apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process.

If the functionality isn\'t necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.

If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality.

If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

How can Software QA processes be implemented without stifling productivity?

By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled.

Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort.

At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process.

However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers.

What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than :

> Hire good people.
> Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer.
> Everyone in the organization should be clear on what 'quality' means to the customer.

How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing.

Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.

How can World Wide Web sites be tested?

Web sites are essentially client/server applications with web servers and 'browser' clients. Consideration should be given to the interactions between :

> HTML pages,
> TCP/IP communications,
> Internet connections,
> Firewalls,
> Applications that run in web pages (such as applets, JavaScript, and plug-in applications), and
> Applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.).

Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include :
> What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time, database query response times)?
> What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
> Who is the target audience?
> What kind of browsers will they be using?
> What kind of connection speeds will they be using?
> Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
> What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
> Will down time for server and content maintenance/upgrades be allowed? How much?
> What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
> How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
> What processes will be required to manage updates to the web site\'s content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
> Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
> Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
> How will internal and external links be validated and updated? How often?
> Can testing be done on the production system, or will a separate test system be required?
> How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
> How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
> How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
> Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
> The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
> Pages should be as browser independent as possible, or pages should be provided or generated based on the browser type.
> All pages should have links external to the page; there should be no dead end pages.
> The page owner, revision date, and a link to a contact person or organization should be included on each page.

How is testing affected by object-oriented designs?

A well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements.

While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements.

Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first, before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance (black box) testing.

Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected.

Will automated testing tools make testing easier?

Whether automated testing tools make testing easier depends on the situation :

> Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.
> A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded', with the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application can then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
> Other automated tools can include :
    * Code analyzers : monitor code complexity, adherence to standards, etc.
    * Coverage analyzers : these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
    * Memory analyzers : such as bounds-checkers and leak detectors.
    * Load/performance test tools : for testing client/server and web applications under various load levels.
    * Web test tools : to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
    * Other tools : for test case management, documentation management, bug reporting, and configuration management.

What's the difference between black box and white box testing?

Black-box and white-box are test design methods.

There are several differences :
> Black-box test design treats the system as a black box, so it doesn't explicitly use knowledge of the internal structure.

> Black-box test design is usually described as focusing on testing functional requirements.
 
> Synonyms for black-box include : behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the box, and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include : structural, glass-box and clear-box.

> While black-box and white-box are terms that are still in popular use, many people prefer the terms 'behavioral' and 'structural'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this 'gray-box' or 'translucent-box' test design, but others wish we'd stop talking about boxes altogether.

> It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods.

> Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.

What kinds of testing should be considered?

Many kinds of testing should be considered :

 Black box testing : not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White box testing : based on knowledge of the internal logic of an application\'s code. Tests are based on coverage of code statements, branches, paths, conditions.

Unit testing : the most 'micro' scale of testing, to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

Incremental Integration Testing : continuous testing of an application as new functionality is added; requires that various aspects of an application\'s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Integration Testing : Testing of combined parts of an application to determine if they function together correctly. The \'parts\' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional Testing : Black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

System Testing : Black-Box type testing that is based on overall requirements specifications; covers all combined parts of a system .

End-to-End Testing : Similar to system testing, the 'macro' end of the test scale, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity Testing or Smoke Testing : Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Regression Testing : Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Acceptance Testing : final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load Testing : testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system\'s response time degrades or fails.

Stress Testing : term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance Testing : Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or test plans.

Usability Testing : Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/Uninstall Testing : Testing of full, partial, or upgrade install/uninstall processes.

Recovery Testing : Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Failover Testing : typically used interchangeably with 'recovery testing'.

Security Testing : Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

Compatibility Testing : Testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory Testing : often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc Testing : similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Context-driven Testing : Testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for Life-Critical medical equipment software would be completely different than that for a low-cost computer game.

User Acceptance Testing : Determining if software is satisfactory to an end-user or customer.

Comparison Testing : Comparing software weaknesses and strengths to competing products.

Alpha Testing : testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta Testing : testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Mutation Testing : A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
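
A toy illustration of that idea in Python; the function and test data are invented for the example, and real mutation tools generate mutants automatically.

```python
def max_of(a, b):
    """Original implementation."""
    return a if a > b else b

def max_of_mutant(a, b):
    """Mutant: the comparison operator has been deliberately flipped."""
    return a if a < b else b

TEST_DATA = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def kills_mutant(fn):
    """True if at least one test case fails against `fn` (mutant detected)."""
    return any(fn(*args) != expected for args, expected in TEST_DATA)

print("original passes:", not kills_mutant(max_of))      # expect True
print("mutant detected:", kills_mutant(max_of_mutant))   # expect True
```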

Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility.

This is illustrated by an old parable:

In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."

"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."

"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."

Why does software have bugs?

There are many reasons :

Miscommunication or no communication : as to specifics of what an application should or shouldn't do (the application's requirements).

Software complexity : The complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client/server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.

Programming errors : programmers, like anyone else, can make mistakes.

Changing requirements (whether documented or undocumented) : The end user may not understand the effects of changes, or may understand and request them anyway (redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc.). If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

Poorly documented code : it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

Software development tools : visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

What testing activities may you want to automate in a project?

Testing tools can be used for :

Sanity tests : repeated on every build,
Stress/load tests : simulate a large number of users, which is impossible manually, and
Regression tests : done after every code change.

How will you test the field that generates auto numbers of the AUT when we click the button 'NEW' in the application?

One solution is to create a text file in a certain location, update the auto-generated value in it each time we run the test, and compare the currently generated value with the previous one.
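
A minimal sketch of that approach in Python; `read_auto_number_from_app` is a hypothetical stand-in for however the automation tool actually reads the generated value, and the file name is an assumption.

```python
import os
import random

STATE_FILE = "last_auto_number.txt"   # assumed location for the stored value

def read_auto_number_from_app():
    """Stand-in for reading the value generated after clicking 'NEW'."""
    return random.randint(1, 10**6)

current = read_auto_number_from_app()

# Compare against the value saved by the previous run, if any.
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        previous = int(f.read().strip())
    print("new value differs from previous:", current != previous)

# Persist the current value for the next run to compare against.
with open(STATE_FILE, "w") as f:
    f.write(str(current))
```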

How will you evaluate the fields in the application under test using an automation tool?

We can use verification points in Rational Robot to validate the fields.

For example :

> Using the Object Data and
> Object Properties verification points, we can validate fields.

What are the tables in test plans and test cases?

A test plan is a document that contains the scope, approach, test design, and test strategies. It includes the following :

Test case identifier
Scope
Features to be tested
Features not to be tested.
Test strategy.
Test Approach
Test Deliverables
Responsibilities.
Staffing and Training
Risk and Contingencies
Approval

A test case, in contrast, is a documented set of steps or activities that are carried out or executed on the software in order to confirm its functionality/behavior for a certain set of inputs.

What are the table contents in test plans and test cases?

A test plan is a document prepared with the details of the testing priorities. A test plan generally includes :

Objective of Testing
Scope of Testing
Reason for testing
Time-frame
Environment
Entrance and exit criteria
Risk factors involved
Deliverables

What automated testing tools are you familiar with?

There are many testing tools, for example :

WinRunner,
LoadRunner,
QTP,
Silk Performer,
TestDirector,
Rational Robot,
QA Run.

How did you use automated testing tools in your job?

In many ways :

For regression testing,
As criteria to decide the condition of a particular build.
Describe some problem that you had with an automated testing tool.

WinRunner had problems identifying third-party controls like Infragistics controls.

How do you plan test automation?

The main steps are :

Prepare the automation Test plan
Identify the scenario
Record the scenario
Enhance the scripts by inserting checkpoints and conditional loops
Incorporate error handlers
Debug the script
Fix the issue
Rerun the script and report the result.

Can test automation improve test effectiveness?

Yes. Automating a test makes the test process :

Fast
Reliable
Repeatable
Programmable
Reusable
Comprehensive

What is data driven automation?

Testing the functionality with more test cases becomes laborious as the functionality grows.

For multiple sets of data or test cases, we can execute the test once and figure out for which data the test has failed and for which data it has passed.

This feature is available in WinRunner as data-driven testing, where the data can be taken from an Excel sheet or Notepad.
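
The same data-driven idea can be sketched outside WinRunner; here is a minimal Python version in which the test logic runs once per row of an external CSV file. The file name, its columns, and the `login` function are assumptions for illustration.

```python
import csv

def login(user, password):
    """Stand-in for the functionality under test."""
    return user == "admin" and password == "secret"

# Assumed contents of login_cases.csv:
#   user,password,expected
#   admin,secret,pass
#   admin,wrong,fail
with open("login_cases.csv", newline="") as f:
    for row in csv.DictReader(f):
        expected = row["expected"] == "pass"
        actual = login(row["user"], row["password"])
        result = "PASS" if actual == expected else "FAIL"
        print(f"{row['user']}/{row['password']}: {result}")
```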

What are the main attributes of test automation?

There are many software test automation attributes :

Maintainability : the effort needed to update the test automation suites for each new release.
Reliability : the accuracy and repeatability of the test automation.
Flexibility : the ease of working with all the different kinds of automation test ware.
Efficiency : the total cost related to the effort needed for the automation.
Portability : the ability of the automated test to run on different environments.
Robustness : the effectiveness of automation on an unstable or rapidly changing system.
Usability : the extent to which automation can be used by different types of users.

Does automation replace manual testing?

There is some functionality that cannot be tested with an automated tool, so we may have to do it manually; therefore manual testing can never be replaced. When we talk about a real environment, we do negative testing manually.

How will you choose a tool for test automation?

Choosing a tool depends on many things :

Application to be tested.
Test environment.
Scope and limitation of the tool.
Feature of the tool.
Cost of the tool.
Whether the tool is compatible with your application, which means the tool should be able to interact with your application.
Ease of use.

How will you evaluate the tool for test automation?

We need to concentrate on the features of the tools and how this could be beneficial for our project. The additional new features and the enhancements of the features will also help. 

What are main benefits of test automation?

The main benefits are :

FAST
RELIABLE
COMPREHENSIVE
REUSABLE

What could go wrong with test automation?

Test automation can go wrong in the following ways :

> A poor choice of automation tool for certain technologies.
> The wrong set of tests automated.

How will you describe testing activities?

Testing activities start from the elaboration phase. The various testing activities are :

Preparing the test plan,
Preparing test cases,
Execute the test case,
Log the bug,
Validate the bug and take appropriate action for the bug,
Automate the test cases.

What testing activities may you want to automate?

Automate all the high-priority test cases that need to be executed as part of regression testing for each build cycle.

Describe common problems of test automation.

The common problems are :

Maintenance of old scripts when there is a feature change or enhancement.

A change in the technology of the application will affect the old scripts.

What types of scripting techniques for test automation do you know?

Five types of scripting techniques for test automation are :

Linear.
Structured.
Shared.
Data Driven.
Keyword-driven.

What are principles of good testing scripts for automation?

Principles of good testing scripts for automation include :

Proper code guiding standards
Standard format for defining functions, exception handler etc
Comments for functions
Proper error-handling mechanisms
The appropriate synchronization techniques
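
To make two of these principles concrete (error handling and synchronization), here is a small hedged sketch in Python; the helper names are invented, and real automation tools provide their own equivalents of both mechanisms.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Synchronization: poll `condition` until it returns True or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

def run_step(step, name):
    """Error handling: run one test step, log the outcome, never crash the suite."""
    try:
        step()
        print(f"{name}: PASS")
    except Exception as exc:
        print(f"{name}: FAIL ({exc})")

run_step(lambda: None, "step that succeeds")
run_step(lambda: 1 / 0, "step that raises")
print("synchronized:", wait_until(lambda: True, timeout=1.0))
```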

What tools are available for support of testing during software development life cycle?

Tools of the kinds listed earlier can support testing throughout the development life cycle : code analyzers, coverage analyzers, memory analyzers, load/performance test tools, web test tools, and tools for test case management, bug reporting, and configuration management.

Can the activities of test case design be automated?

Test case design is about formulating the steps to be carried out to verify something about the application under test, and this cannot be automated. However, the process of putting the test results into an Excel sheet can be automated.

What are the limitations of automating software testing?

There are many limitations to automating software testing.

It is hard to create environments like :

out of memory,
invalid input/reply,
corrupt registry entries.

These make applications behave poorly, and existing automated tools can't force these conditions : they simply test your application in a 'normal' environment.

What skills are needed to be a good test automator?

Several skills are needed to be a good test automator :

Good Logic for programming.
Analytical skills.
Pessimistic in nature.

How do you find out whether tools work well with your existing system?

There are several ways to find out whether a tool works well with your existing system :

Discuss with the support officials
Download the trial version of the tool and evaluate
Get suggestions from people who are working on the tool .

Describe some problems that you had with automated testing tools.

The main problems we had with automated testing tools were :

The inability of WinRunner to identify third-party controls like Infragistics controls.
A change in the location of a table object causing an 'object not found' error.
The inability of WinRunner to execute the same script against multiple languages.

Can we test a single application at the same time using different tools on the same machine?

No. If we test a single application at the same time using different tools on the same machine, the testing tools will be unable to determine which browser was opened by which tool.

What is bidirectional traceability?

Bidirectional traceability needs to be implemented both forward and backward. When the requirements are managed well, traceability can be established from the source requirement to its lower-level requirements, and from the lower-level requirements back to their source.

Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower level requirements can be traced to a valid source.
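
A small sketch of what checking bidirectional traceability can look like in Python; the matrix contents here are invented for illustration.

```python
# Forward direction: source requirement -> derived lower-level items.
forward = {
    "REQ-1": ["REQ-1.1", "REQ-1.2"],
    "REQ-2": ["REQ-2.1"],
    "REQ-3": [],                # not addressed by anything yet
}

# Backward direction: lower-level item -> claimed source requirement.
backward = {
    "REQ-1.1": "REQ-1",
    "REQ-1.2": "REQ-1",
    "REQ-2.1": "REQ-2",
    "REQ-9.9": "REQ-9",         # traces to a source that doesn't exist
}

unaddressed = [src for src, items in forward.items() if not items]
orphans = [item for item, src in backward.items() if src not in forward]

print("source requirements not addressed:", unaddressed)  # ['REQ-3']
print("items without a valid source:", orphans)           # ['REQ-9.9']
```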

What is a stub? Explain from a testing point of view.

A stub is a dummy program or component used in testing when the real code is not ready. For example, if a project has 4 modules and the last one is not ready and there is no time, we use a dummy program in place of that fourth module so that all 4 modules can still be run together. This dummy program is known as a stub.
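
A minimal sketch of the stub idea in Python, with invented module names; the stub returns a fixed, known value so the other modules can still be exercised.

```python
def module_one():
    return "data from module 1"

def module_two(x):
    return x + " -> module 2"

def module_three(x):
    return x + " -> module 3"

def module_four_stub(x):
    # Dummy standing in for the unfinished module 4: returns a fixed,
    # predictable value instead of real behavior.
    return x + " -> module 4 (stubbed)"

result = module_four_stub(module_three(module_two(module_one())))
print(result)
```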

What is 'configuration management'?

Configuration management is a process to control and document any changes made during the life of a project.

Revision control,
Change Control, and
Release Control are important aspects of Configuration Management.

How do you test web applications?

The basic difference in web testing is that here we have to test for URL coverage and link coverage. Using WinRunner, we can conduct web testing.

But we have to make sure that the WebTest option is selected in the "Add-in Manager". Using WinRunner, we cannot test XML objects.

What are the problems encountered during testing the application compatibility on different browsers and on different operating systems?

Problems encountered while testing application compatibility on different browsers and different operating systems include :

> Font issues,
> Alignment issues

For web applications, what types of tests are you going to do?

Web based applications present new challenges, these challenges include : 

Short release cycles;
Constantly Changing Technology;
Possible huge number of users during initial website launch;
Inability to control the user's running environment;
24 hour availability of the web site.

The quality of a website must be evident from the onset. Any difficulty, whether in response time, accuracy of information, or ease of use, will compel the user to click over to a competitor's site. Such problems translate into loss of users, lost sales, and a poor company image.

To overcome these types of problems, use the following techniques :

Functionality Testing : Functionality testing involves making sure the features that most affect user interactions work properly. These include :
     * Forms
     * Searches
     * Pop-up windows
     * Shopping carts
     * Online payments

Usability Testing : Many users have low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, and many websites have become cluttered with an increasing number of features. For general-use websites, frustrated users can easily click over to a competitor's site. Usability testing involves the following main steps :
       * Identify the website's purpose;
       * Identify the intended users;
       * Define tests and conduct the usability testing.
       * Analyze the acquired information

Navigation Testing : Good Navigation is an essential part of a website, especially those that are complex and provide a lot of information. Assessing navigation is a major part of usability Testing.
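
As one hedged example of tool support for navigation testing, the sketch below fetches a single page and reports the status of each link, using only the Python standard library; the base URL is a placeholder, and a real site plus network access are assumed.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    """Collect href targets from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

base = "https://example.com/"          # placeholder site
page = urlopen(Request(base)).read().decode("utf-8", "replace")
collector = LinkCollector()
collector.feed(page)

# Report the HTTP status (or error) for each link found on the page.
for link in collector.links:
    target = urljoin(base, link)
    try:
        status = urlopen(Request(target)).status
    except Exception as exc:
        status = exc
    print(target, "->", status)
```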

Forms Testing : Websites that use forms need tests to ensure that each field works properly and that the forms post all data as intended by the designer.

Page Content Testing : Each web page must be tested for correct content from the user perspective. These tests fall into two categories : ensuring that each component functions correctly and ensuring that the content of each is correct.

Configuration and Compatibility Testing : A key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and on-line services, and run other concurrent applications. We execute the application under every browser/platform combination to ensure the web site works properly under various environments.

Reliability and Availability Testing : A key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the web site simultaneously may also affect the site's availability.

Performance Testing : Performance testing, which evaluates system performance under normal and heavy usage, is crucial to the success of any web application. A system that takes too long to respond may frustrate the user, who can then quickly move to a competitor's site. Given enough time, every page request will eventually be delivered. Performance testing seeks to ensure that the website server responds to browser requests within defined parameters.

Load Testing : The purpose of load testing is to model real-world experience, typically by generating many simultaneous users accessing the website. We use automation tools to increase the ability to conduct a valid load test, because they emulate thousands of users by sending simultaneous requests to the application or the server.

Stress Testing : Stress testing consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on the website and execute the tests continuously for several hours or days.

Security Testing : Security is a primary concern when communicating and conducting business, especially sensitive and business-critical transactions, over the internet. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.

Define Brainstorming and Cause-Effect Graphing?

Brainstorming and Cause-Effect Graphing are defined as :

BS : A learning technique involving open group discussion intended to expand the range of available ideas.
OR

A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR

Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval.

CEG : A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

How does testing proceed when an SRS or any other document is not given?

If the SRS is not available we can perform Exploratory Testing. In exploratory testing the basic module is executed and, depending on its results, the next plan is executed.

How do we test for severe memory leakages?

By using Endurance Testing we test for severe memory leaks :

Endurance Testing means checking for memory leaks or other problems that may occur with prolonged execution.

What is the difference between quality assurance and testing?

The differences between quality assurance and testing are :

Quality assurance involves the entire software development process and testing involves operation of a system or application to evaluate the results under certain conditions.

QA is oriented to prevention and Testing is oriented to detection.

What are memory leaks and buffer overflows?

A memory leak is an incomplete deallocation of memory; leaks are bugs that happen very often. A buffer overflow occurs when data sent as input to the server overflows the boundaries of the input area, causing the server to misbehave. Buffer overflows can be exploited to crash the server or execute arbitrary code.
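
As a minimal C sketch of both defects (the function names are illustrative, not from any real codebase):

    #include <stdlib.h>
    #include <string.h>

    /* Memory leak: the allocated block is never freed. */
    void leak(void)
    {
        char *p = malloc(1024);
        (void)p;               /* p goes out of scope with no free(p) */
    }

    /* Buffer overflow: no bounds check before copying. */
    void overflow(const char *input)
    {
        char buf[8];
        strcpy(buf, input);    /* writes past buf if input has 8+ chars */
    }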

What are the major differences between stress testing, load testing and volume testing?

The major differences between stress testing, load testing and volume testing are :

Stress testing : means increasing the load and checking the performance at each level.
Load testing : means applying the expected load at one time and checking the performance at that level.
Volume testing : means testing the system against large volumes of data, for example by steadily increasing the size of the database or input files.

What is the maximum length of the test case we can write?

It depends only on the functionality; we cannot say exactly how long a test case should be.

Password is a 6-digit alphanumeric field; what are the possible input conditions?

Including special characters, the possible input conditions are :

Input password as = 6abcde (i.e. number first)
Input password as = abcde8 (i.e. character first)
Input password as = 123456 (all numbers)
Input password as = abcdef (all characters)
Input password less than 6 digits
Input password greater than 6 digits
Input password as special characters
Input password in CAPITALS, i.e. uppercase
Input password including a space
(SPACE) followed by alphabets/numerals/alphanumerics
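
A minimal C sketch of the rule being exercised, assuming "6-digit alphanumeric" means exactly 6 alphanumeric characters (the function name is hypothetical):

    #include <ctype.h>
    #include <string.h>

    /* Returns 1 if p is exactly 6 alphanumeric characters, else 0. */
    int is_valid_password(const char *p)
    {
        size_t i, len = strlen(p);
        if (len != 6)
            return 0;
        for (i = 0; i < len; i++)
            if (!isalnum((unsigned char)p[i]))
                return 0;
        return 1;
    }

Under this rule, "6abcde", "abcde8", "123456" and "abcdef" should pass, while inputs containing spaces or special characters, or of the wrong length, should fail.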

What is internationalization testing?

Software internationalization is the process of developing software products independent of the cultural norms, language or other specific attributes of a market.

If I give you some thousands of tests to execute in 2 days, what do you do?

If possible, we will automate; otherwise, we execute only the test cases which are mandatory.

What does black-box testing mean at the unit, integration, and system levels?

Unit level : tests for each software requirement using Equivalence Class Partitioning, Boundary Value Analysis, and more.
System level : test cases for system software requirements using the Trace Matrix, cross-functional testing, decision tables, and more.
Integration level : test cases for system integration covering configurations, manual operations, etc.

What is agile testing?

Agile testing is used whenever customer requirements are changing dynamically.

If we have no SRS or BRS but we have test cases, do we execute the test cases blindly or do we follow any other process?

The test cases should have detailed steps of what the application is supposed to do, along with the functionality of the application.
In addition we can refer to the back end, that is, look into the database, to gain more knowledge of the application.

What is Bug life cycle?

The Bug Life Cycle is defined as :

New : when the tester reports a defect.
Open : when the developer accepts that it is a bug; if the developer rejects the defect, the status is changed to "Rejected".
Fixed : when the developer makes changes to the code to rectify the bug.
Closed/Reopen : when the tester tests it again. If the expected result shows up, the status is changed to "Closed"; if the problem persists, it is "Reopened".

What is deferred status in defect life cycle?

Deferred status in the defect life cycle means the developer accepted the bug, but it is scheduled to be rectified in the next build.

Smoke test? Do you use any automation tool for smoke testing?

Smoke testing checks whether the application performs its basic functionality properly, so that the test team can go ahead with the application. Automation tools can definitely be used for smoke testing.

Verification and validation?

The definitions are :

Verification : It is static. No code is executed. Say, analysis of requirements etc.
Validation : It is dynamic. Code is executed with scenarios present in test cases.

When a bug is found, what is the first action?

Report it in bug tracking tool. 

What is test plan and explain its contents?

A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested and who is to test it.

Advantages of automation over manual testing?

Automation has many advantages over manual testing; it saves :

Time,
Resources and
Money.

What is meant by release notes?

It is a document released along with the product which explains the product. It also contains information about the bugs that are in deferred status.

What is the testing environment in your company; that is, how does the testing process start?

The testing environment in our company means the testing process flows as follows :

Quality assurance unit.
Quality assurance manager.
Test lead.
Test engineer.

Give an example of high priority and low severity, low priority and high severity?

Severity level : The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level, requiring immediate attention. Severity 5 usually represents a documentation defect of minimal impact. For example, a spelling mistake in the company name on the home page is low severity but high priority, while a crash in a rarely used feature is high severity but low priority.

Severity levels :
Critical : the software will not run
High : unexpected fatal errors (includes crashes and data corruption)
Medium : a feature is malfunctioning
Low : a cosmetic issue

Severity levels :
   * Bug causes system crash or data loss.
   * Bug causes major functionality or other severe problems; product crashes in obscure cases.
   * Bug causes minor functionality problems; may affect "fit and finish".
   * Bug contains typos, unclear wording or error messages in low visibility fields.

Severity levels :
    * High : A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot continue.
    * Medium : A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and testing can continue.
    * Low : A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.

Severity and Priority :
    * Priority is Relative : the priority might change over time. Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the schedule draws closer to the release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the current schedule. It\'s relative. It shifts over time. And it\'s a business decision.

Severity is absolute : it's an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high severity issue when I entered it, it's still a high severity issue when it's deferred to the next release. The severity hasn't changed just because we've run out of time. The priority changed.

Severity levels can be defined as follows :


S1 : Urgent/Showstopper. For example, a system crash or an error message forcing the window to close.
The tester's ability to operate the system is either totally (System Down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.

S2 : Medium/Workaround. For example, something required by the specs is not working, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process. This is a problem that :
        a) Affects a more isolated piece of functionality.
        b) Occurs only at certain boundary conditions.
        c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
        d) Occurs only at one or two customers, or is intermittent.

S3 : Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way. They are incidents that are cosmetic in nature and of no or very low impact to business processes.

What is Use case?

A use case is a simple flow between the end user and the system. It contains preconditions, postconditions, normal flows and exceptions. It is prepared by the Team Lead/Test Lead/Tester.

Difference between STLC and SDLC?

The main differences are :

STLC stands for Software Test Life Cycle. It starts with :
    * Preparing the test strategy.
    * Preparing the test plan.
    * Creating the test environment.
    * Writing the test cases.
    * Creating test scripts.
    * Executing the test scripts.
    * Analyzing the results and reporting the bugs.
    * Doing regression testing.
    * Test exiting.

SDLC stands for Software (or System) Development Life Cycle. Its phases are :
    * Project initiation.
    * Requirement gathering and documenting.
    * Designing.
    * Coding and unit testing.
    * Integration testing.
    * System testing.
    * Installation and acceptance testing.
    * Support or maintenance.

How you are breaking down the project among team members?

It depends on the following factors :

Number of modules.
Number of team members.
Complexity of the Project.
Time Duration of the project.
Team member\'s experience etc.

What is Test Data Collection?

Test data is a collection of input data taken for testing the application. Input data of various types and sizes will be taken for testing the application. Sometimes, for critical applications, the test data collection will also be given by the client.

What is Test Server?

A test server is the place where developers put their development modules, which are accessed by the testers to test the functionality.

What are non functional requirements?

The non-functional requirements of a software product are :

Reliability,
Usability,
Efficiency,
Delivery Time,
Software development environment,
Security requirements,
Standards to be followed  etc.

What are the differences between these three words Error, Defect and Bug?

The differences between these three words Error, Defect and Bug are :

Error : A deviation from the required logic, syntax or standards/ethics is called an error.
        There are three types of error :
Syntax error : caused by deviation from the syntax of the language that is supposed to be followed.
Logical error : caused by deviation from the logic the program is supposed to follow.
Execution error : generally happens while you are executing the program; that is when you get it.

Defect : When an error is found by the test engineer or testing department, it is called a defect.

Bug : If the defect is accepted by the developer, it becomes a bug, which has to be fixed by the developer or postponed to a later version.

Why do we perform stress testing, resolution testing and cross-browser testing?

They are defined as :

Stress Testing : We need to check the performance of the application.

   * Def : Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Resolution Testing : Sometimes a developer creates a page only for 1024 resolution, and the same page displays a horizontal scroll bar in 800 x 600 resolution.

Nobody likes a horizontal scroll bar appearing on the screen. That is the reason to do resolution testing.

Cross-browser Testing : This testing is sometimes called compatibility testing. When we develop pages as IE-compatible, the same page may not work properly in Firefox or Netscape, because most of the scripts are not supported by browsers other than IE. That is why we need to do cross-browser testing.

There are two sand clocks (timers): one completes in 7 minutes and the other in 9 minutes. Using these timers, how do we ring the bell after exactly 11 minutes?

Start both clocks.
When the 7-minute clock completes (at 7 minutes), turn it over so that it restarts.
When the 9-minute clock finishes (at 9 minutes), turn the 7-minute clock over again; only 2 minutes of its sand have fallen, so it will now run for exactly 2 minutes.
When the 7-minute clock finishes, 11 minutes are complete.

What is the minimum criteria for white box?

We should know the logic, code and structure of the program or function: internal knowledge of how the application works, the logic behind it, and how the structure should react to a particular action.

What are the technical reviews?

Each document should be reviewed. Technical review in this sense means that, for each screen, the developer writes a technical specification, which is then reviewed by a developer and a tester. There are functional specification reviews, unit test case reviews, code reviews, etc.

On what basis do we write test cases?

We write the test cases based on the Functional Specifications and BRDs, and some more test cases using domain knowledge.

Explain ETVX concept?

E- Entry Criteria
T- Task
V- Validation
X- Exit Criteria

ENTRY CRITERIA : Input with \'condition\' attached.
e.g. Approved SRS document is the entry criteria for the design phase.

TASK : Procedures.
e.g. Preparation of HLD, LLD etc.

VALIDATION : Building quality & Verification activities
e.g. Technical reviews

EXIT CRITERIA : Output with 'condition' attached.
e.g. Approved design document.
It is important to follow the ETVX concept for all phases of the SDLC.

What are the main key components in web applications vs. client-server applications? (Differences)

Web Applications : A web application can be implemented using any kind of technology, such as Java, .NET, VB, ASP, CGI & PERL. The components are derived from the technology used. Take a Java web application: it can be implemented in a 3-tier architecture. Presentation tier : JSP, HTML, DHTML, servlets, Struts.

Business tier : Java Beans, EJB, JMS. Data tier : databases like Oracle, SQL Server etc. If we take a .NET application : Presentation tier : ASP, HTML, DHTML; Business tier : DLLs; Data tier : a database like Oracle, SQL Server etc.

Client Server Applications : These have only 2 tiers. One is the presentation tier (Java, Swing) and the other the data tier (Oracle, SQL Server). In a client-server architecture, the entire application has to be installed on the client machine; whenever we make any changes in the code, it has to be installed again on all the client machines. In web applications, the core application resides on the server and the client can be a thin client (browser). Whatever changes we make, we install the application only on the server; there is no need to worry about the clients, because nothing is installed on the client machine.

If the client identifies some bugs, to whom does he report them?

He will report it to the Project Manager. The Project Manager will arrange a meeting with all the leads (Development Manager, Test Lead and Requirements Manager), raise a Change Request, and identify which screens are going to be impacted by the bug. They will take the code, correct it and send it to the testing team.

What is the formal technical review?

A technical review should be done by a team of members. The author of the document under review and the reviewers should sit together and review the document.

This is called a peer review. If it is a technical document, it can be called a formal technical review, I guess. It varies depending on company policy.

At what phase does the tester's role start?

In the SDLC, after completion of the FRS document, the test lead prepares the use case document and the test plan document; that is when the tester's role starts.

Explain 'software metrics'?

Measurement is fundamental to any engineering discipline, and software metrics are what we use to measure, because :

We cannot control what we cannot measure!
Metrics help to measure quality.
They serve as a dashboard.

The main metrics are :
Size,
Schedule,
Defects.

Within these there are the main sub-metrics :
Test Coverage = Number of units (KLOC/FP) tested / total size of the system
Test Cost (%) = Cost of testing / total cost x 100
Cost to locate a defect = Cost of testing / number of defects located
Defects detected in testing (%) = Defects detected in testing / total system defects x 100
Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
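
As a rough illustration, these sub-metrics can be computed directly; the figures below are hypothetical, purely to show the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical project figures, for illustration only */
        double units_tested = 80.0, total_units = 100.0;        /* KLOC */
        double cost_of_testing = 20000.0, total_cost = 100000.0;
        double defects_in_testing = 45.0, total_defects = 50.0;

        printf("Test Coverage            = %.0f%%\n", units_tested / total_units * 100.0);
        printf("Test Cost                = %.0f%%\n", cost_of_testing / total_cost * 100.0);
        printf("Cost to locate a defect  = %.2f\n", cost_of_testing / defects_in_testing);
        printf("Defects found in testing = %.0f%%\n", defects_in_testing / total_defects * 100.0);
        return 0;
    }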

Actually, how many positive and negative test cases would you write for a module?

That depends on the module and the complexity of its logic. For every test case we can identify positive and negative points, and based on those criteria we write the test cases. If it is a crucial process or screen, we should check the screen under all the boundary conditions.

What is Software reliability?

It is the probability that software will work without failure for a specified period of time in a specified environment. Reliability of software is measured in terms of Mean Time Between Failures (MTBF). For example, if MTBF = 10000 hours for an average piece of software, then it should not fail for 10000 hours of continuous operation.

What are the main bugs you identified, and of those, how many were considered real bugs?

If we take one screen which has, say, 50 test conditions, out of which I have identified 5 defects that failed, we should give the defect description, severity and defect classification. All the defects will be considered.

Defect classifications are defined as :

GRP : Graphical Representation
LOG : Logical Error
DSN : Design Error
STD : Standard Error
TST : Wrong Test case
TYP : Typographical Error (Cosmetic Error)

What is the main use of preparing a traceability matrix?

A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, giving an opportunity to verify that all the requirements are covered in testing the application.

It is used to cross-verify the prepared test cases and test scripts against the user requirements, and to monitor the changes and enhancements that occur during the development of the project.

What is Six sigma? Explain.

Six Sigma : A quality discipline that focuses on product and service excellence to create a culture that demands perfection on target, every time.

Six Sigma quality produces 99.9997% accuracy, with only 3.4 defects per million opportunities.
Six Sigma is designed to dramatically upgrade a company's performance, improving quality and productivity using existing products, processes, and service standards. The Six Sigma MAIC methodology is used to upgrade performance.

MAIC is defined as follows:
Measure : Gather the right data to accurately assess a problem.
Analyze : Use statistical tools to correctly identify the root causes of a problem
Improve : Correct the problem (not the symptom).
Control : Put a plan in place to make sure problems stay fixed and sustain the gains.
 
Key Roles and Responsibilities : The key roles in all Six Sigma efforts are as follows :
Sponsor : Business executive leading the organization.
Champion : Responsible for Six Sigma strategy, deployment, and vision.
Process Owner : Owner of the process, product, or service being improved; responsible for long-term sustainable gains.
Master Black Belts : Coach black belts; expert in all statistical tools.
Black Belts : Work on 3 to 5 $250,000-per-year projects; create $1 million per year in value.
Green Belts : Work with black belts on projects.

What is TRM?

TRM means Test Responsibility Matrix.

TRM : It indicates mapping between test factors and development stages.

Test factors like :

Ease of use,
Reliability,
Portability,
Authorization,
Access control,
Audit trail,
Ease of operation,
Maintainability.
 
Development stages :

Requirement Gathering
Analysis
Design
Coding
Testing
Maintenance

What are cookies? Tell me the advantage and disadvantage of cookies?

Cookies are messages that web servers pass to our web browser when we visit Internet sites. Our browser stores each message in a small file.

When we request another page from the server, our browser sends the cookie back to the server. These files typically contain information about our visit to the web page, as well as any information we have volunteered, such as our name and interests. Cookies are most commonly used to track web site activity. When we visit some sites, the server gives us a cookie that acts as our identification card.

Upon each return visit to that site, Our browser passes that cookie back to the server. In this way, a web server can gather information about which web pages are used the most, and which pages are gathering the most repeat hits. Only the web site that creates the cookie can read it.

Additionally, web servers can only use information that we provide, or choices that we make while visiting the web site, as content in cookies.

Accepting a cookie does not give a server access to our computer or any of our personal information. Servers can only read cookies that they have set, so other servers do not have access to our information.

Also, it is not possible to execute code from a cookie, and not possible to use a cookie to deliver a virus.

What is the difference between Product-based Company and Projects-based Company?

The difference between Product-based Company and Projects-based Company : 

Product-based company : It develops applications for global clients, i.e. there is no specific client. Requirements are gathered from the market and analyzed with experts.

Project-based company : It develops applications for a specific client. The requirements are gathered from the client and analyzed with the client.

Why Scalability and Load Testing is Important?

Scalability and load testing are important because :

Some very high profile websites have suffered from serious outages and/or performance issues due to the number of people hitting their website. E-commerce sites that spent heavily on advertising but not nearly enough on ensuring the quality or reliability of their service have ended up with poor web-site performance, system downtime and/or serious errors, with the predictable result that customers are being lost.

In the case of toysrus.com, the web site couldn't handle the approximately 1000 percent increase in traffic that its advertising campaign generated. Similarly, Encyclopaedia Britannica was unable to keep up with the number of users during the weeks immediately following its promotion of free access to its online database. The truth is, these problems could probably have been prevented had adequate load testing taken place.

When creating an eCommerce portal, companies will want to know whether their infrastructure can handle the predicted levels of traffic, to measure performance and verify stability.

These types of services include :
* Scalability testing
* Load testing
* Stress testing, as well as Live Performance Monitoring.

Load testing tools can be used to test the system behavior and performance under stressful conditions by emulating thousands of virtual users. These virtual users stress the application even harder than real users would, while monitoring the behavior and response times of the different components. This enables companies to minimize test cycles and optimize performance, hence accelerating deployment, while providing a level of confidence in the system.

Once launched, the site can be regularly checked using Live Performance Monitoring tools to monitor site performance in real time, in order to detect and report any performance problems before users can experience them.

Define Preparing for a Load Test.

The first step in designing a Web site load test is to measure as accurately as possible the current load levels.

Measuring Current Load Levels : The best way to capture the nature of Web site load is to identify and track (e.g. using a log analyzer) a set of key user session variables that are applicable and relevant to our Web site traffic.

Some of the variables that could be tracked include :
The length of the session (measured in pages)
The duration of the session (measured in minutes and seconds)
The type of pages that were visited during the session (e.g., home page, product information page, credit card information page etc.)
The typical/most popular flow or path through the website.
The % of browse vs. purchase sessions
The % of user types (new user vs. returning registered user)

Measure how many people visit the site per week, month or day. Then break down these current traffic patterns into one-hour time slices, and identify the peak hours (e.g. if we get lots of traffic during lunch time) and the number of users during those peak hours. This information can then be used to estimate the number of concurrent users on our site.

What Are Concurrent Users?

Although our site may be handling x number of users per day, only a small percentage of these users would be hitting the site at the same time. For example, if we have 3000 unique users hitting our site in one day, all 3000 are not going to be using the site between 11.01 and 11.05 am. So, once we have identified the peak hour, we divide this hour into 5 or 10 minute slices (using our own judgement here, based on the length of the average user session) to get the number of concurrent users for that time slice.
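
A rough way to turn these figures into a concurrency estimate (essentially Little's law) is sketched below; the numbers are hypothetical:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical peak-hour figures */
        double sessions_per_hour   = 3000.0;
        double avg_session_minutes = 10.0;

        /* Little's law: concurrency = arrival rate x average session length */
        double concurrent_users = sessions_per_hour * (avg_session_minutes / 60.0);

        printf("Estimated concurrent users: %.0f\n", concurrent_users);  /* 500 */
        return 0;
    }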

Define Estimating Target Load Levels.

Once we have identified the current load levels, the next step is to understand as accurately and as objectively as possible the nature of the load that must be generated during the testing.
Using the current usage figures, estimate how many people will visit the site per week/month or day. Then divide that number to attain realistic peak hour scenarios.
It is important to understand the volume patterns, and to determine what load levels our web site might be subjected to (and must therefore be tested for).
There are four key variables that must be understood in order to estimate target load levels :
> how the overall amount of traffic to our Web site is expected to grow;
> the peak load level which might occur within that overall traffic;
> how quickly the number of users might ramp up to that peak load level;
> how long that peak load level is expected to last.

Once we have an estimate of overall traffic growth, we will need to estimate the peak level we might expect within that overall volume.

Define Estimating Test Duration.

The duration of the peak is also very important: a Web site that deals very well with a peak level for five or ten minutes may crumble if that same load level is sustained for longer. We should use the length of the average user session as a base for determining the load test duration.

What is Ramp-up Rate?

Our site may be handling x number of users per day, but only a small percentage of these users would be hitting the site at the same time. Therefore, when preparing our load test scenario, we should take into account the fact that users will hit the website at different times, and that during the peak hour the number of concurrent users will likely build up gradually to reach the peak number of users, before tailing off as the peak hour comes to a close. The rate at which the number of users builds up, the "Ramp-up Rate", should be factored into the load test scenarios.

How to create the scenarios that are to be used to load test the web site?

The information gathered during the analysis of the current traffic is used to create the scenarios that are to be used to load test the web site.

The identified scenarios aim to accurately emulate the behavior of real users navigating through the Web site.

For example : a seven page session that results in a purchase is going to create more load on the Web site than a seven page session that involves only browsing.

A browsing session might only involve the serving of static pages, while a purchase session will involve a number of elements, including the inventory database, the customer database, a credit card transaction with verification going through a third party system, and a notification email.

A single purchase session might put as much load on some of the system's resources as twenty browsing sessions. Similar reasoning may apply to purchases by new vs. returning users.

A new-user purchase might involve a significant amount of account setup and verification, something existing users may not require.

The database load created by a single new user purchase may equal that of five purchases by existing users, so we should differentiate the two types of purchases.

How do you prepare a script to run each scenario with the required number of each type of user playing back concurrently to produce the load scenario?

Using the load test tool, we write the scripts to run each scenario with the required number of each type of user playing back concurrently to produce the load scenario.

The key elements of a load test design are :
> Test objective
> Pass/fail criteria
> Script description
> Scenario description

Load Test Objective : The objective of this load test is to determine whether the Web site, as currently configured, will be able to handle the anticipated peak load level of X sessions/hr. If the system fails to scale as anticipated, the results will be analyzed to identify the bottlenecks.

Pass/Fail Criteria : The load test will be considered a success if the Web site handles the target load of X sessions/hr while maintaining the pre-defined average page response times.

The page response time will be measured and will represent the elapsed time between a page request and the time the last byte is received.

Since in most cases the user sessions follow just a few navigation patterns, you will not need hundreds of individual scripts to achieve realism; if we choose carefully, a dozen scripts will take care of most Web sites.

How To Create a Load Testing Scenario?

Scripts should be combined to describe a load testing scenario. A basic scenario includes the scripts that will be executed, the percentages in which those scripts will be executed, and a description of how the load will be ramped up.

By emulating multiple business processes, the load testing can generate a load equivalent to X numbers of virtual users on a Web application.

During these load tests, real time performance monitors are used to measure the response times for each transaction and check that the correct content is being delivered to users. In this way, they can determine how well the site is handling the load and identify any bottlenecks.

The execution of the scripts opens X number of HTTP sessions with the target Web site and replays the scripts over and over again.

Every few minutes it adds X more simulated users and continues to do so until the web site fails to meet a specific performance goal.

Why Is System Performance Monitoring Important?

It is very important, indeed vital, during the execution phase to monitor all aspects of the website.

This includes measuring and monitoring the CPU usage and performance aspects of the various components of the website, i.e. not just the web server, but the database and other parts as well, such as firewalls, load balancing tools etc. For example, one e-tailer whose site fell over (apparently due to high load) discovered, when analyzing the performance bottlenecks on the site, that the web server had in fact only been operating at 50% of capacity.

Further investigation revealed that the credit card authorization engine was the cause of the failure: it was not responding quickly enough for the website, which then fell over while waiting for too many responses from the authorization engine.

They resolved this issue by changing the authorization engine and amending the website code so that, if there were any issues with authorization responses in future, the site would not crash. Similarly, another e-commerce site found that the performance issues it was experiencing were due to database performance problems: while the web-server CPU usage was only at 25%, the backend database server CPU usage was 86%. The solution was to upgrade the database server. It is therefore necessary to use performance monitoring tools to check each aspect of the website architecture during the execution phase.

Could You Suggest an Execution Strategy for a Load Scenario?

Start with a test at 50% of the expected virtual user capacity for 15 minutes and a medium ramp rate. The team members monitoring CPU usage during the test should be able to check whether the website is handling the load efficiently or whether some resources are already showing high utilization. After making any system adjustments, run the test again or proceed to 75% of the expected load. Continue with the testing and proceed to 100%, then up to 150% of the expected load, while monitoring and making the necessary adjustments to the system as you go along.

How To Report Load Testing Results?

Often the first indication that something is wrong is that end-user response times start to climb. Knowing which pages are failing will help us narrow down where the problem is.

Whichever load test tool we use, it will need to produce reports that will highlight the following :

Page response time by load level
Completed and abandoned session by load level
Page views and page hits by load level
HTTP and network errors by load level
Concurrent user by minute
Missing links report, if applicable;
A full detailed report which includes response time by page and by transaction, lost sales opportunities, analysis and recommendations.

What Are the Important Aspects of Website Load Testing?

When testing websites, it is critically important to test from outside the firewall.

In addition, web based load testing services, based outside the firewall, can identify bottlenecks that are only found by testing in this manner.

Web-based stress testing of web sites is therefore more accurate when it comes to measuring a site's capacity constraints.

Web traffic is rarely uniformly distributed, and most Web sites exhibit very noticeable peaks in their volume patterns.

Typically, there are a few points in time, one or two days out of the week or a couple of hours each day, when the traffic to the Web site is highest.

What is load testing?

Load testing is used to check that the application works correctly under the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.

What is Performance testing?

Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable time-frame. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction. 

What is LoadRunner?

LoadRunner works by creating virtual users who take the place of real users operating client software, for example sending requests using the HTTP protocol to IIS or Apache web servers. Requests from many virtual user clients are generated by Load Generators in order to create a load on the various servers under test. These load generator agents are started and stopped by Mercury's Controller program. The Controller controls load test runs based on Scenarios invoking compiled Scripts and associated Run-time Settings.

Scripts are crafted using Mercury's "Virtual User script Generator", named "VuGen". It generates C-language script code to be executed by virtual users, by capturing network traffic between Internet application clients and servers.

With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller.

At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the \"Analysis\" program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML webpage browser.

Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.

Errors during each run are stored in a database file which can be read by Microsoft Access.

What are Virtual Users?

Unlike a WinRunner workstation, which emulates a single user's use of a client, LoadRunner can emulate thousands of Virtual Users.

Load generators are controlled by VuGen scripts which issue non GUI API calls using the same protocols as the client under test. But WinRunner GUI Vusers emulate keystrokes, mouse clicks, and other User Interface actions on the client being tested.

Only one GUI user can run from a machine unless LoadRunner Terminal Services Manager manages remote machines with Terminal Server Agent enabled and logged into a Terminal Services Client session.
During run time, threaded Vusers share a common memory pool.

So threading supports more Vusers per load generator.

The status of Vusers on all load generators starts at "Running", then goes to "Ready" after the init section of the script has run. Vusers end as "Finished" with a passed or failed status. Vusers are automatically "Stopped" when the load generator is overloaded.

To use Web Services Monitors for SOAP and XML, a separate license is needed, and vUsers require the Web Services add-in installed with Feature Pack (FP1).

No additional license is needed for standard web (HTTP) server monitors Apache, IIS, and Netscape.

How do we use Windows Remote Desktop Connection?

To keep Windows Remote Desktop Connection sessions from timing out during a test, the Terminal Services on each machine should be configured as follows :

Click Start, point to Programs (or Control Panel), Administrative Tools, and choose Terminal Services Configuration.
Open the Connections folder in the tree by clicking it once.
Right-click RDP-Tcp and select Properties.
Click the Sessions tab.
Make sure \"Override user settings\" is checked.
Set Idle session limit to the maximum of 2 days instead of the default 2 hours.
Click Apply.
Click OK to confirm the message "Configuration changes have been made to the system registry; however, the user session now active on the RDP-Tcp connection will not be changed."

Explain the load testing process (LoadRunner version 7.2)?

The following steps are involved :
Step 1 : Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.

Step 2 : Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.

Step 3 : Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.

Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.

Step 5 : Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.

Step 6 : Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner\'s graphs and reports to analyze the application\'s performance.

When do you do load and performance Testing?

We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.

What are the components of LoadRunner?

The components of LoadRunner are :

The Virtual User Generator,
The Controller and the Agent process,
LoadRunner Analysis and Monitoring,
LoadRunner Books Online.

What Component of LoadRunner would you use to record a Script?

The Virtual User Generator (VuGen) component is used to record a script. It enables us to develop Vuser scripts for a variety of application types and communication protocols.

What Component of LoadRunner would you use to play Back the script in multi user mode?

The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.

What is a rendezvous point?

We insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
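
A minimal VuGen sketch of the bank example (the transaction name and URL are hypothetical):

    Action()
    {
        /* All Vusers block here until the rendezvous policy releases them,
           so the deposit hits the server simultaneously. */
        lr_rendezvous("deposit_cash");

        lr_start_transaction("deposit");
        web_url("deposit",
                "URL=http://bank.example.com/deposit",
                LAST);
        lr_end_transaction("deposit", LR_AUTO);

        return 0;
    }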

Explain the recording mode for web Vuser script?

We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.

Why do you create parameters?

Parameters are like script variables, and creating them is very important. They are used to vary the input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. This better simulates the usage model, giving more accurate testing from the Controller; one script can emulate many different users on the system.
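
A minimal sketch of a parameterized step (the form fields and the UserName/Password parameters are hypothetical):

    /* A recorded literal replaced by parameters defined in VuGen's
       parameter list; each iteration pulls a fresh row of data. */
    web_submit_form("login",
                    ITEMDATA,
                    "Name=username", "Value={UserName}", ENDITEM,
                    "Name=password", "Value={Password}", ENDITEM,
                    LAST);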

What is correlation? Explain the difference between automatic correlation and manual correlation?

Correlation is used to obtain data which is unique for each run of the script and which is generated by nested queries. Correlation provides the value, avoiding errors arising out of duplicate values and also optimizing the code to avoid nested queries. Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data created by these rules. In manual correlation, the value we want to correlate is scanned, and 'create correlation' is used to correlate it.

How do you find out where correlation is required?

We have two ways :
First we can scan for correlations, and see the list of values which can be correlated. From this we can pick a value to be correlated.
Secondly, we can record two scripts and compare them. We can look up the difference file to see for the values which needed to be correlated.

Where do you set automatic correlation options?

Automatic correlation, from the web point of view, can be set in the Recording Options under the Correlation tab.

Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. 

Automatic correlation for databases can be done by using the Show Output window, scanning for correlations, picking the correlated query tab and choosing which query value we want to correlate.

If we know the specific value to be correlated, we just use 'create correlation' for the value and specify how the value is to be created.

What is a function to capture dynamic values in the web Vuser script?

The web_reg_save_param function saves dynamic data to a parameter.
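
A minimal usage sketch (the boundaries and parameter name are hypothetical):

    /* Must be placed BEFORE the step whose server response
       contains the dynamic value. */
    web_reg_save_param("SessionId",
                       "LB=sessionid=",   /* left boundary  */
                       "RB=\"",           /* right boundary */
                       LAST);

    web_url("login",
            "URL=http://www.example.com/login",
            LAST);

    /* Later steps can then reference the captured value as {SessionId}. */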

VuGen Recording and Scripting?

LoadRunner script code is obtained from recording in ANSI C language syntax, represented by icons in icon view until you click Script View.

What is Scenarios ?

Scenarios encapsulate the Vuser groups and scripts to be executed on load generators at run time.

Manual scenarios can distribute the total number of Vusers among scripts based on analyst-specified percentages, evenly among load generators. Goal-oriented scenarios are automatically created based on a specified transaction response time or number of hits/transactions per second (TPS). Test analysts specify the % of target among scripts.

When do you disable log in Virtual User Generator, When do you choose standard and extended logs?

Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled.

Standard Log Option : When you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled.

Extended Log Option : Select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended log options.

How do you debug a LoadRunner script?

VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. 

The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. 

The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

How do you write user defined functions in LR?

Before we create a user-defined function we need to create the external library (DLL) containing the function.

We add this library to the VuGen bin directory. Once the library is added, we can assign the user-defined function as a parameter. The function should have the following format :

__declspec(dllexport) char *function_name(char *parameter1, char *parameter2)
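
A minimal sketch of such a function, following that signature (the function name and behaviour are hypothetical):

    #include <string.h>

    /* Compiled into a DLL that is placed in the VuGen bin directory.
       No bounds checking - sketch only. */
    __declspec(dllexport) char *Concat(char *first, char *second)
    {
        static char result[512];    /* static so it outlives the call */
        strcpy(result, first);
        strcat(result, second);
        return result;
    }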

What are the changes you can make in run-time settings?

The run-time settings that we can change are :

   > Pacing : includes the iteration count.

   > Log : under this we have Disable Logging, Standard Log and Extended Log.

   > Think Time : here we have two options, Ignore think time and Replay think time.

   > General : under the General tab we can set the Vusers to run as a process or as multithreaded, and whether each step is a transaction.

How do we set the number of iterations?

We set iterations in the Run-Time Settings of VuGen.

The navigation for this is Run-Time Settings > Pacing tab > set number of iterations.

How do you perform functional testing under load?

Functionality under load can be tested by running several Vusers concurrently.

By increasing the number of Vusers, we can determine how much load the server can sustain.

How do we use network drive mappings?

If several load generators need to access the same physical files, rather than having to remember to copy the files each time they change, each load generator can reference a common folder using a mapped drive. But since drive mappings are associated with a specific user :

   > Log on to the load generator as the user the load generator will use.

   > Open Windows Explorer and under Tools select Map a Network Drive and create a drive. It saves time and hassle to have consistent drive letters across load generators, so some organizations reserve certain drive letters for specific locations.

   > Open the LoadRunner service within Services (accessed from Control Panel, Administrative Tools).

   > Click the \"Login\" tab.

   > Specify the username and password the load generator service will use. (A dot appears in front of the username if the userid is for the local domain).

   > Stop and start the service again. 

What is Ramp up? How do you set this?

This option is used to gradually increase the amount of Vusers/load on the server.

An initial value is set, and a value to wait between intervals can be specified. To set Ramp Up, go to "Scenario Scheduling Options".

What is the advantage of running the Vuser as thread?

VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator.

If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory.

This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100).

Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

If you want to stop the execution of your script on error, how do you do that?

The lr_abort function aborts the execution of a Vuser script.

It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when we need to manually abort a script execution as a result of a specific error condition. When we end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the "Continue on error" option in Run-Time Settings.
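
A minimal sketch of aborting on an error condition (the LoginStatus parameter is hypothetical):

    /* Stop this Vuser if a correlated status value signals failure. */
    if (strcmp(lr_eval_string("{LoginStatus}"), "OK") != 0) {
        lr_error_message("Login failed - aborting Vuser");
        lr_abort();
    }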

What is the relation between Response Time and Throughput?

The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.

Explain the Configuration of your systems?

The Configuration of our systems refers to that of the client machines on which we run the Vusers. 

The Configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc.

This System component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.

How do you identify the performance bottlenecks?

Performance Bottlenecks can be detected by using monitors.

These monitors might be application server monitors, web server monitors, database server monitors and network monitors.

They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually of performance response time, throughput, hits/sec, network delay graphs, etc.

If web server, database and Network are all fine where could be the problem?

The Problem could be in the system itself or in the application server or in the code written for the application. 

How did you find web server related issues?

Using Web resource monitors we can find the performance of web servers.

Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

How did you find database related issues?

By running the Database monitor, with the help of the Data Resource Graph, we can find database-related issues.

E.g. we can specify the resource we want to measure before running the Controller, and then we can see the database-related issues.

What is the difference between Overlay graph and Correlate graph?

Overlay Graph : It overlays the contents of two graphs that share a common x-axis. The left Y-axis of the merged graph shows the current graph's values, and the right Y-axis shows the values of the graph that was merged.

Correlate Graph : It plots the Y-axes of two graphs against each other. The active graph's Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph's Y-axis.

How did you plan the Load? What are the Criteria?

A load test is planned to decide the number of users, the kind of machines we are going to use, and where they will be run. It is based on two important documents, the Task Distribution Diagram and the Transaction Profile.

The Task Distribution Diagram gives us information on the number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this diagram.

The Transaction Profile gives us information about the transaction names and their priority levels with regard to the scenario we are deciding.

What does vuser_init action contain?

The vuser_init action contains the procedures to log in to a server.

What does vuser_end action contain?

The vuser_end section contains the log-off procedures.
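A minimal sketch of the two sections in a LoadRunner Vuser script; the URLs and form fields are hypothetical placeholders:

vuser_init()
{
    /* log in once per Vuser, before the Actions iterations begin */
    web_url("login_page", "URL=http://example.com/login", LAST);
    web_submit_form("login",
        ITEMDATA,
        "Name=username", "Value=tester", ENDITEM,
        "Name=password", "Value=secret", ENDITEM,
        LAST);
    return 0;
}

vuser_end()
{
    /* log off once, after the final iteration or an aborted run */
    web_url("logout", "URL=http://example.com/logout", LAST);
    return 0;
}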

What is think time? How do you change the threshold?

Think time is the time that a real user waits between actions.

Example : When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as think time. Changing the threshold : The threshold level is the level below which recorded think time will be ignored.

The default value is five (5) seconds. We can change the think-time threshold in the Recording options of VuGen.
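During replay, recorded think time appears as lr_think_time calls, which the Think Time run-time settings can replay as recorded, scale, or ignore; a minimal sketch with a hypothetical step:

Action()
{
    web_url("search_results", "URL=http://example.com/search", LAST);

    /* simulate a user pausing 10 seconds to review the page */
    lr_think_time(10);

    return 0;
}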

What is the difference between standard log and extended log?

The Standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type.

The Extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about: parameter substitution, data returned by the server, and advanced trace.

What is lr_debug_message ?

The lr_debug_message function sends a debug message to the output log when the specified message class is set. 

What is lr_output_message ?

The lr_output_message function sends notifications to the Controller Output window and the Vuser log file.

What is lr_error_message ?

The lr_error_message function sends an error message to the LoadRunner Output window. 
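A minimal sketch contrasting the three message functions; the message texts are hypothetical, and the message-class constant passed to lr_debug_message is assumed to be the standard LR_MSG_CLASS_FULL_TRACE:

Action()
{
    /* notification: sent to the Controller Output window and the Vuser log */
    lr_output_message("Starting the order-entry step");

    /* debug message: emitted only when the Full Trace message class is enabled */
    lr_debug_message(LR_MSG_CLASS_FULL_TRACE, "About to submit the order form");

    /* error message: flagged as an error in the Output window */
    lr_error_message("Order submission failed");

    return 0;
}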

What is lrd_stmt?

The lrd_stmt function associates a character string, usually a SQL statement, with a cursor. This function sets a SQL statement to be processed.

What is lrd_fetch?

The lrd_fetch function fetches the next row from the result set.

What is Throughput?

Throughput is the amount of data, in bytes, that the Vusers receive from the server per second. If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient.

If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered. 

Types of Goals in Goal Oriented Scenario?

LoadRunner provides five types of goals in a goal-oriented scenario :

> The number of concurrent Vusers

> The number of hits per second

> The number of transactions per second

> The number of pages per minute

> The transaction response time that we want our scenario to reach

Analysis Scenario (Bottlenecks) : In the Running Vusers graph correlated with the response time graph, we can see that as the number of Vusers increases, the average response time of the check-itinerary transaction gradually increases. In other words, the average response time steadily increases as the load increases.

At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server. That is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.

For new users, how to use WinRunner to test software applications automatically ?

The following steps may be of help to you when automating tests : 

>  MOST IMPORTANT : Write a set of manual tests to test your application. We cannot just jump in with WinRunner and expect to produce a set of meaningful tests. As you will see from the steps below, this set of manual tests forms the plan for automating the testing of the application.

> Once we have a set of manual tests, look at them and decide which ones can be automated with the current level of expertise. NOTE that there will be tests that are not suitable for automation, either because they cannot be automated or because they are not worth the effort.

> Automate the tests selected in step 2. Initially this will be capture/replay following the steps in the manual test, but you will soon see that producing meaningful and informative tests requires adding code, e.g. tl_step() to report test results. As this process continues, you will notice operations that are repeated across tests; these are candidates for user-defined functions and compiled modules.

> Once step 3 is complete, go back to step 2; the knowledge gained in step 3 will now let you select more tests that can be automated.

              Continuing through this loop, you will gradually become more familiar with WinRunner and TSL; in fact, you will probably find that eventually you do very little capture/replay and more straight TSL coding.

How to use WinRunner to check whether a record was updated, deleted, or inserted?

Using WinRunner's checkpoint features :

> Create > Database Checkpoint > Runtime Record Check


* How to use WinRunner to test the login screen?

> When we enter a wrong id or password, we get a dialog box.

   * Record this dialog box.

   * Use win_exists to check whether the dialog box exists or not.

   * Playback : Enter a wrong id or password; if win_exists is true, the application is working correctly. Enter a good id or password; if win_exists is false, the application is working correctly.
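A minimal TSL sketch of that playback check; the window and field names are hypothetical placeholders for whatever the GUI map recorded:

# attempt a login with bad credentials (all names are placeholders)
set_window("Login", 5);
edit_set("Username", "bad_user");
edit_set("Password", "bad_password");
button_press("OK");

# the error dialog should appear for an invalid login
if (win_exists("Invalid Login", 5) == E_OK)
    tl_step("Invalid login check", PASS, "Error dialog displayed as expected.");
else
    tl_step("Invalid login check", FAIL, "Error dialog did not appear.");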


After clicking on \"login\" button, they opens other windows of the web application, how to check that page is opened or not?

When we expect "Window1" to come up after clicking on Login :

> Capture the window in the GUI Map. No two windows in a web-based application can have the same html_name property, so this is the property to check.

> Try a simple win_exists("window1", <time>) in an IF condition.

> If that doesn't work, try the function :

win_exists("{class: window, MSW_class: html_frame, html_name: \"window1\"}", <time>);


How can we write a WinRunner test script that checks all the links at a time?

location = 0;
set_window("ourWindow", 8);
while (obj_exists((link = "{class: object, MSW_class: html_text_link, location: " & location & "}")) == E_OK)
{
    obj_highlight(link);
    web_obj_get_info(link, "name", name);
    web_link_valid(link, valid);
    if (valid)
        tl_step("Check web link", PASS, "Web link \"" & name & "\" is valid.");
    else
        tl_step("Check web link", FAIL, "Web link \"" & name & "\" is not valid.");
    location++;
}


How to get the resolution settings?

Use get_screen_res(x, y) to get the screen resolution in WR 7.5, or use get_resolution(Vert_Pix_int, Horz_Pix_int, Frequency_int) in WR 7.01.

WITHOUT the GUI map, how do we use the physical description directly?

It's easy: just take the description straight out of the GUI map, curly braces and all, put it into a variable or pass it as a string, and use that in place of the object name.

button_press("btn_OK");

becomes

button_press("{class: push_button, label: OK}");


What are the three modes of running the scripts?

WinRunner provides three modes in which to run tests: Verify, Debug, and Update. We use each mode during a different phase of the testing process.

> Verify : Use the Verify mode to check the application.

> Debug : Use the Debug mode to help identify bugs in a test script.

> Update : Use the Update mode to update the expected results of a test or to create a new expected results folder.

How do you handle unexpected events and errors?

WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.

WinRunner enables us to handle the following types of exceptions :

> Pop-up exceptions : Instruct WinRunner to detect and handle the appearance of a specific window.

> TSL exceptions : Instruct WinRunner to detect and handle TSL functions that return a specific error code.

> Object exceptions : Instruct WinRunner to detect and handle a change in a property for a specific GUI object.

> Web exceptions : When the WebTest add-in is loaded, we can instruct WinRunner to handle unexpected events and errors that occur in our Web site during a test run.

How do you handle pop-up exceptions?

A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception, we make WinRunner learn the window and also specify a handler for the exception. The handler can be :

> Default actions : WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.

> User-defined handler : If we prefer, specify the name of our own handler. Click User Defined Function Name and type a name in the User Defined Function Name box.

How do you handle TSL exceptions?

Suppose we are running a batch test on an unstable version of our application. If our application crashes, we want WinRunner to recover test execution.

A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch. The handler function is responsible for recovering test execution.

When WinRunner detects a specific error code, it calls the handler function. We implement this function to respond to the unexpected error in the way that meets our specific testing needs. Once we have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.

How to write an email address validation script in TSL?

public function IsValidEMAIL(in strText)
{
    auto aryEmail[], aryEmail2[], n;

    n = split(strText, aryEmail, "@");
    if (n != 2)
        return FALSE;

    # Ensure the string "@MyISP.Com" does not pass.
    if (!length(aryEmail[1]))
        return FALSE;

    n = split(aryEmail[2], aryEmail2, ".");
    if (n < 2)
        return FALSE;

    # Ensure the string "Recipient@." does not pass.
    if (!(length(aryEmail2[1]) * length(aryEmail2[n])))
        return FALSE;

    return TRUE;
}


How to have WinRunner insert yesterday's date into a field in the application?

> Use get_time to get the PC system time in seconds since 01/01/1970.
> Subtract 86400 (the number of seconds in a day) from it.
> Use time_str to convert the result into a date format.
> If the format of the returned date is not correct, use string manipulations to get the format you require.
> Insert the date into your application.

Alternatively we could try the following :

> In an Excel datasheet create a column with an appropriate name, and in the first cell of the column use an Excel date formula.
> Format the cell to give us the required date format.
> Use the ddt_ functions to read the date from the Excel datasheet.
> Insert the retrieved date into our application.
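A minimal TSL sketch of the first approach; get_time and time_str are standard functions, and the window and field names are hypothetical placeholders:

# seconds since 01/01/1970, minus one day
yesterday = get_time() - 86400;

# time_str converts seconds into a date/time string,
# e.g. "Fri Aug 13 10:20:00 2004"
date_text = time_str(yesterday);

# reformat date_text with string manipulations if required,
# then type it into the application (names are placeholders)
set_window("Order Form", 5);
edit_set("Date Field", date_text);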

How can we make a single WinRunner script that supports multiple languages?

Actually, we can have scripts that run for different locales. I have a set of scripts that run for Japanese as well as English locales. The idea is to have objects recorded in the GUI Map with a locale-independent physical description. This can be achieved in two ways :

> After recording the object in the GUI Map, inspect the description and ensure that no language-specific properties are used. For example, the html_name property for an object of class html_text_link could be based on the text. We can remove these language-dependent properties if it doesn't affect object recognition. If it does, we need to find another property for the object that is locale independent. This new property may be something that is already there, or we need to create it. This leads to the next option.

> Have developers assign a locale-independent property like 'objname' or something similar to all objects used in the automated scripts. Then modify the GUI Map description for the particular object to look for this property instead of the standard locale-dependent properties recorded by WinRunner (these default properties are in GUI Map Configuration). We could also use a GUI map for each locale: prefix the GUI map name with the locale, e.g. jpn_UserWindow.gui and enu_UserWindow.gui, and load the correct map based on the current machine locale. Specifically, we can use the get_lang() function to obtain the current language setting, then load the appropriate GUI map in our init script. Take a look at the sample scripts supplied with WinRunner for the flight application; those scripts are created for both English and Japanese locales.

          After taking care of different GUIs for different locales, the script also needs some modification. If we script in English and then move to any other language, all the user inputs will still be in English, and the script will fail because it expects, say, Japanese input for a Japanese locale. Instead, assign all the user inputs to variables and use those wherever the script needs them. These variables have to be assigned (perhaps by the driver script) before we call the script we want to run. We should have a different variable script for each language; depending on the language we want to run, call the appropriate variable script file. This lets the same script run with different locales.

How to use a regular expression in the physical description of a window in the GUI map?

Several web page windows have similar html names; they all end in or contain "MyCompany". The GUI Map has saved the following physical description for one of these windows :

{
class: window,
html_name: "Dynamic Name | MyCompany",
MSW_class: html_frame
}

The "Dynamic Name" part of the html name changes with the different pages.

Replace it with :

{
class: window,
html_name: "!.*| MyCompany",
MSW_class: html_frame
}

Regular expressions in GUI maps always begin with "!".


How to force WR to learn the sub-items on a menu...?

If WinRunner is not learning sub-items, then the easy way is to add those sub-items manually into the GUI map. Of course, you need to study the menu description and always add the PARENT menu name for that particular sub-menu.

How to check whether a specific icon is highlighted or not?

set_window("Name of the window", 5);
obj_get_info("Name of the object", "focused", out_value);

# check out_value and proceed further

(obj_get_info retrieves the property value into out_value; obj_check_info, by contrast, verifies it against an expected value.)

What is the BitMap or GUI Checkpoints?

DO NOT use Bitmap or GUI checkpoints for dynamic verification. These checkpoints are purely for static verification. There are, of course, work-arounds, but they are mostly not worth the effort.

How to get the information from the status bar without doing any activity/click on the hyperlink?

We can use the statusbar_get_text("Status Bar", 0, text); function; the text variable then contains the status bar statement.

or

web_cursor_to_link ( link, x, y );

link : The name of the link.
x, y : The x- and y-coordinates of the mouse pointer when moved to a link, relative to the upper left corner of the link.

Object name changing dynamically?

> logical name "chkESActivity"

{
  class: check_button,
  MSW_class: html_check_button,
  html_name: chkESActivity,
  part_value: 90
}

> logical name "chkESActivity_1"

{
  class: check_button,
  MSW_class: html_check_button,
  html_name: chkESActivity,
  part_value: 91
}

Replace these with a single entry whose part_value is a regular expression :

# we can give any name as the logical name
Logical: "CheckBox"

{
  class: check_button,
  MSW_class: html_check_button,
  html_name: chkESActivity,
  part_value: "![0-9][0-9]"
}

We can then use any of the checkbox commands, for example :

button_set("CheckBox", ON);

# the above statement will check any check box with part_value ranging from 00 to 99


Define Text Field Validations?

Need to validate text fields against :

> Null

> Not Null

> Whether it allows any special characters

> Whether it allows numeric contents

> Maximum length of the field, etc.

* From the requirements, find out what the behaviour of the text field in question should be. Things you need to know are :

> What should happen if the field is left blank

> What special characters are allowed; whether it is an alpha, numeric, or alphanumeric field, etc.

* Write manual tests for what you want to do. This creates a structure to form the basis of our WinRunner tests.

* Now create the WinRunner scripts. I suggest we use data-driven tests with Excel spreadsheets for the inputs instead of user input. For example, the following structure will test whether the text field accepts special characters :

open the data table
for each value in the data table
    get value
    insert value into text field
    attempt to use the value inserted
    if result is as expected
        report pass
    else
        report fail
next value in data table

In this case the data table will contain all the special characters.
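A minimal TSL sketch of that loop using WinRunner's ddt_ functions; the table path, column name, and window/field names are hypothetical:

table = "C:\\qa\\data\\special_chars.xls";    # hypothetical data table
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");

ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);
    value = ddt_val(table, "special_char");    # hypothetical column name

    set_window("Order Form", 5);
    edit_set("Customer Name", value);
    # ... attempt to use the value, then report PASS/FAIL with tl_step ...
}
ddt_close(table);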


Loads multiple GUI maps into an array?

# GUI maps
static guiname1 = "MMAQ_guimap.gui";
static guiname2 = "SSPicker_guimap.gui";
static guiname3 = "TradeEntry.gui";
static guiLoad[] = {guiname1, guiname2, guiname3};

Then I just call the function :

# load the GUI map files via the loadGui function
# (this closes ALL open GUI maps first)
rc = loadGui(guiLoad);
if (rc != "Pass")    # check the success of the GUI_load
{
    # getvar("testname") is assumed here to return the current test name
    tl_step("Guiload", FAIL, "Failed to load Guimap(s) for " & getvar("testname"));
    texit("Failed to load Guimap(s) for " & getvar("testname"));
}

public function loadGui(inout guiLoad[])
{
    static i;
    static rc;

    # close any temp GUI map files
    GUI_close("");
    GUI_close_all();

    for (i in guiLoad)
    {
        rc = GUI_load(GUIPATH & guiLoad[i]);    # GUIPATH must be defined elsewhere
        if ((rc != 0) && (rc != E_OK))    # check the GUI_load
            return ("Failed to load " & guiLoad[i]);
    }
    return ("Pass");
}


Read and write to the registry using the Windows API functions?

function space(isize)
{
    auto s;
    auto i;
    for (i = 1; i <= isize; i++)
    {
        s = s & " ";
    }
    return (s);
}


load_dll("c:\\windows\\system32\\ADVAPI32.DLL");

extern long RegDeleteKey( long, string<1024> );
extern long RegCloseKey( long );
extern long RegQueryValueExA( long, string, long, long, inout string<1024>, inout long );
extern long RegOpenKeyExA( long, string, long, long, inout long );
extern long RegSetValueExA( long, string, long, long, string, long );

MainKey = 2147483649;    # HKEY_CURRENT_USER
SubKey = "Software\\TestConverter\\TCEditor\\Settings";    # this is where you set your subkey path

const ERROR_SUCCESS = 0;
const KEY_ALL_ACCESS = 983103;

# open the key
ret = RegOpenKeyExA(MainKey, SubKey, 0, KEY_ALL_ACCESS, hKey);
if (ret == ERROR_SUCCESS)
{
    cbData = 256;
    tmp = space(256);
    KeyType = 0;
    # replace "Last language" with the key you want to read
    ret = RegQueryValueExA(hKey, "Last language", 0, KeyType, tmp, cbData);
}
pause (tmp);

NewSetting = "SQABASIC";
cbData = length(NewSetting) + 1;
# replace "Last language" with the key you want to write
ret = RegSetValueExA(hKey, "Last language", 0, KeyType, NewSetting, cbData);

cbData = 256;
tmp = space(256);
KeyType = 0;
# verify that you changed the key
ret = RegQueryValueExA(hKey, "Last language", 0, KeyType, tmp, cbData);
pause (tmp);

# close the key
RegCloseKey(hKey);


How to break an infinite loop?

set_window("Browser Main Window", 1);
text = "";
start = get_time();
while (text != "Done")
{
    statusbar_get_text("Status Bar", 0, text);
    now = get_time();
    # specify the number of seconds after which you want to break
    if ((now - start) >= 60)
    {
        break;
    }
}


User-defined function that writes to the print log as well as to a file?

Code as follows :

function writeLog(in strMessage)
{
    auto logFile;
    logFile = "C:FilePath...";    # path elided in the original
    file_open(logFile, FO_MODE_APPEND);
    file_printf(logFile, "%s", strMessage);
    printf(strMessage);
    file_close(logFile);
}


How to do text matching?

We could try embedding it in an if statement. If/when it fails, use a tl_step statement to record the result and then do a texit to leave the test. Another idea is to use win_get_text or web_frame_get_text to capture the text of the object and then do a comparison (using the match function) to determine its existence.
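A minimal TSL sketch of the second idea; the window name and the expected text are hypothetical:

set_window("Order Confirmation", 5);
win_get_text("Order Confirmation", text);

# match returns the position of the regular expression in the string, 0 if absent
if (match(text, "Order No. [0-9]+"))
    tl_step("Check text", PASS, "Confirmation text found.");
else
{
    tl_step("Check text", FAIL, "Confirmation text not found.");
    texit("Confirmation text missing");
}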

The MSW_id value sometimes changes, rendering the GUI map useless

MSW_ids will continue to change as long as the developers are modifying the application. Having dealt with this, I determined that each MSW_id shifted by the same amount, so I was able to modify the entries in the GUI map rather easily and continue testing.

Instead of using the MSW_id, use the "location" property. If we use the GUI Spy it will give us every detail it can; then add or remove what we don't want.

Having a DB checkpoint, it is able to show the current values in the form but not the values saved in the table

This looks like it is happening because the data is written to the database after our checkpoint, so we have to do a runtime record check : Create --> Database Checkpoint --> Runtime Record Check. We may also have to perform some customization in TSL if the data displayed in the application is in a different format than the data in the database.

For example, converting radio buttons to a database-readable form involves the following :

# Flight Reservation
set_window ("Flight Reservation", 3);
# edit_set ("Date of Flight:", "16/06/09");

# retrieve the three button states
button_get_state ("First", first);
button_get_state ("Business", bus);
button_get_state ("Economy", econ);

# establish a variable with the correct numeric
# value based on which radio button is set
if (first)
    service = "1";
if (bus)
    service = "2";
if (econ)
    service = "3";

set_window("Untitled - Notepad", 3);
edit_set("Report Area", service);
db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);


Define Increase Capacity Testing?

When you begin stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP pages and graphics. When testing the ASP pages, we may want to create a page similar to the original that simulates the same items and sends the information to a test bed with a process that completes just a small data output. By doing this, the processor still stresses the system but does not take up bandwidth by sending the HTML code along the full path. This will not stress the entire code but gives us a basis from which to work. Dividing the requests per second by the total number of users or threads determines the number of transactions per second, and tells us at what point the server starts becoming less efficient at handling the load. Let's look at an example.

Let\'s say our test with 50 users shows 

our server can handle 5 requests per second,    

with 100 users it is 10 requests per second, 

 with 200 users it is 15 requests per second,

 and eventually with 300 users it is 20 requests per second. 

Our requests per second are continually climbing, so it seems that we are obtaining steadily improving performance.

Let\'s look at the ratios:

05/50 = 0.1

10/100 = 0.1

15/200 = 0.075

20/300 = 0.073

From this example we can see that the performance of the server becomes less and less efficient as the load grows. This in itself is not necessarily bad (as long as the pages still return within the target time frame). However, it can be a useful indicator during the optimization process, and it gives us some indication of how much leeway we have to handle expected peaks.

Define Stateful testing?

When we use a Web-enabled application to set a value, does the server respond correctly later on?

Define Boundary Test?

Boundary tests are designed to check a program's response to extreme input values. Extreme output values are generated by the input values. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that we can create extreme boundary results from non-extreme input values, so it is essential to analyze how to generate extremes of both types. In addition, sometimes we know that there is an intermediate variable involved in the processing; if so, it is useful to determine how to drive it through the extremes and special conditions such as zero or an overflow condition.

Define the conditions during which regression tests may be run?

Issue-fixing cycle : Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported :

    > If an issue is confirmed as fixed, then the issue report status should be changed to Closed.

    > If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.

    > If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.

> Open-status regression cycle : Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, issue status is confirmed: either the report is reproducible as is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer reproducible.

> Closed-fixed regression cycle : In the final phase of testing, a full-regression test cycle should be run to confirm the status of all fixed-closed issues.

> Feature regression cycle : Each time a new build is cut or is in the final phase of testing depending on the organizational procedure, a full-regression test cycle should be run to confirm that the proven correctly functional features are still working as expected. 

Define Database Testing?

Items to check when testing a database (what to test : environment : tool/technique) :

> Search results : System test environment : Black Box and White Box techniques

> Response time : System test environment : Syntax Testing / Functional Testing

> Data integrity : Development environment : White Box testing

> Data validity : Development environment : White Box testing

How do you find an object in a GUI map?

The GUI Map Editor is provided with Find and Show buttons. To find a particular object of the GUI Map file in the application, select the object and click the Show button; this blinks the selected object in the application.

To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned into the GUI Map file, it will be focused in the GUI Map file.

What different actions are performed by the Find and Show buttons?

To find a particular object of the GUI Map file in the application, select the object and click the Show button; this blinks the selected object in the application.

To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned into the GUI Map file, it will be focused in the GUI Map file.

How do you identify which files are loaded in the GUI map?

The GUI Map Editor has a drop down GUI File displaying all the GUI Map files loaded into the memory. 

How do you modify the logical name or the physical description of the objects in GUI map?

We can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor. 

When do you feel you need to modify the logical name?

Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long. 

When is it appropriate to change the physical description?

Changing the physical description is necessary when the property value of an object changes. 

How does WinRunner handle varying window labels?

We can handle varying window labels using regular expressions. WinRunner uses two hidden properties in order to use a regular expression in an object's physical description :

> regexp_label : This property is used for windows only. It operates behind the scenes to insert a regular expression into a window's label description.

> regexp_MSW_class : This property inserts a regular expression into an object's MSW_class. It is obligatory for all types of windows and for the object class object.

What is the purpose of regexp_label property and regexp_MSW_class property?

> The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window's label description.

> The regexp_MSW_class property inserts a regular expression into an object's MSW_class. It is obligatory for all types of windows and for the object class object.

How do you suppress a regular expression?

We can suppress the regular expression of a window by replacing the regexp_label property with the label property.

How do you copy and move objects between different GUI map files?

We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are :

> Choose Tools > GUI Map Editor to open the GUI Map Editor.

> Choose View > GUI Files.

> Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.

> View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.

> In one file, select the objects you want to copy or move. Use the Shift and/or Control keys to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

> Click Copy or Move.

> To restore the GUI Map Editor to its original size, click Collapse.

How do we select multiple objects during merging the files?

Use the Shift and/or Control keys to select multiple objects.

To select all objects in a GUI map file, choose Edit>Select All.

How do we clear a GUI Map file?

We can clear a GUI Map file using the Clear All option in the GUI Map Editor. 

How do you filter the objects in the GUI map?

The GUI Map Editor has a Filter option, which provides three types of filters :

    > Logical name displays only objects with the specified logical name.

    > Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.

    > Class displays only objects of the specified class, such as all the push buttons.

How do you configure GUI map?

GUI map configuration works as follows :

>  When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.

   > Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic object class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.

   > If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script. 

What is the purpose of GUI map configuration?

GUI Map configuration is used to map a custom object to a standard object. 

How do you make the configuration and mappings permanent?

The Mapping and the configuration you set are valid only for the current WinRunner session.

To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
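A minimal sketch of the kind of statement placed in a startup test script; set_class_map is the standard TSL function for mapping a custom class to a standard one, and the custom class name here is hypothetical:

# map the custom class "BorBtn" to the standard push_button class,
# so operations on it are recorded as button_* statements
set_class_map("BorBtn", "push_button");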

What is the purpose of GUI spy?

Using the GUI Spy, we can view the properties of any GUI object on the desktop. We use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box.

We can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.

Have we used WinRunner in our project?

Yes, we have been using WinRunner to create automated scripts for GUI, functional, and regression testing of the AUT.

Explain WinRunner testing process?

The WinRunner testing process involves six main stages :

> Create the GUI map file so that WinRunner can recognize the GUI objects in the application being tested.

> Create test scripts by recording, programming, or a combination of both.

> Debug the tests by running them in Debug mode.

> Run the tests in Verify mode to test the application.

> View the results to determine the success or failure of the tests.

> Report defects detected during the test run.

What is contained in the GUI map?

WinRunner stores information it learns about a window or object in a GUI Map. 

When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties.

How does WinRunner recognize objects on the application?

WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. 

It reads an object's description in the GUI map and then looks for an object with the same properties.

Have we created test scripts and what is contained in the test scripts?

Yes, we have created test scripts. They contain statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. We can then enhance the recorded test script, either by typing in additional TSL functions or by using the Function Generator.

How does WinRunner evaluate test results?

Following each test run, WinRunner displays the results in a report.

The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints, they are reported as failed checks in the Test Results window.

Have we performed debugging of the scripts?

Yes, We have performed debugging of scripts. 

We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionalities provided by WinRunner.

How do you run your test scripts?

We run tests in Verify mode to test our application.

Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, they are reported in the test results.

How do you analyze results and report the defects?

Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints, they can be analyzed and logged as defects.

What is the purpose of the different record methods: Record, Pass Up, As Object, and Ignore?

The purposes are :

> Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)

> Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.

> As Object instructs WinRunner to record all operations performed on a GUI object as though its class were object class.

> Ignore instructs WinRunner to disregard all operations performed on the class. 

What is the startup file in WinRunner?

The test script named in the Startup Test box in the Environment tab of the General Options dialog box is the startup file in WinRunner.

What are the virtual objects and how do you learn them?

> Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, We can instruct WinRunner to treat it like a GUI object such as a push button, when we record and run tests.

> Using the Virtual Object wizard, We can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

      To define a virtual object using the Virtual Object wizard :

        * Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.

        * In the Class list, select a class for the new virtual object. For a list class, specify the number of rows that are displayed in the window; for a table class, select the number of visible rows and columns. Click Next.

        * Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. We can use the arrow keys to make precise adjustments to the area we define with the crosshairs. Press Enter or click the right mouse button to display the virtual object's coordinates in the wizard. If the object marked is visible on the screen, we can click the Highlight button to view it. Click Next.

       * Assign a logical name to the virtual object. This is the name that appears in the test script when we record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.

        * We can accept the wizard's suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming our choice. Click Next.


What are the two modes of recording?

There are 2 modes of recording in WinRunner : 

> Context Sensitive recording records the operations we perform on your application by identifying Graphical User Interface (GUI) objects.

> Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen. 

What is a checkpoint and what are different types of checkpoints?

Checkpoints allow us to compare the current behavior of the application being tested to its behavior in an earlier version. We can add four types of checkpoints to our test scripts :

> GUI checkpoints verify information about GUI objects. For example : We can check that a button is enabled or see which item is selected in a list.

> Bitmap checkpoints take a snapshot of a window or area of our application and compare this to an image captured in an earlier version.

> Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.

> Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on our database. 

What are data driven tests?

When we test our application, we may want to check how it performs the same operations with multiple sets of data. We can create a data-driven test with a loop that runs ten times :

each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, we must link the data to the test script it drives. This is called parameterizing the test.

The data is stored in a data table. We can perform these operations manually, or we can use the DataDriver Wizard to parameterize the test and store the data in a data table.

What are the synchronization points?

Synchronization points enable us to solve anticipated timing problems between the test and the application. For example, if we create a test that opens a database application, we can add a synchronization point that causes the test to wait until the database records are loaded on the screen.

For Analog testing, we can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When we run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.
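A minimal TSL sketch of a Context Sensitive synchronization point; the window, object, and expected value are hypothetical, while obj_wait_info is the standard wait function:

set_window("Customer Database", 10);

# suspend the run until the status field shows the loaded state,
# waiting at most 30 seconds before continuing anyway
obj_wait_info("Status", "value", "Records loaded", 30);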

What is parameterizing?

In order for WinRunner to use data to drive the test, we must link the data to the test script it drives. This is called parameterizing the test. The data is stored in a data table.

How do you maintain the document information of the test scripts?

Before creating a test, We can document information about the test in the General and Description tabs of the Test Properties dialog box. We can enter the name of the test author, the type of functionality tested,  a detailed description of the test, and a reference to the relevant functional specifications document. 

What do we verify with the GUI checkpoint for single property and what command it generates, explain syntax?

We can check a single property of a GUI object. For example, we can check whether a button is enabled or disabled, or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script : obj_check_info ( object, property, property_value ); or win_check_info ( window, property, property_value );
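A minimal sketch of such a check against the Flight Reservation sample application shipped with WinRunner; the check verifies that the Insert Order button is enabled:

set_window("Flight Reservation", 5);

# passes if the button's "enabled" property currently equals 1
button_check_info("Insert Order", "enabled", 1);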

What do we verify with the GUI checkpoint for object/window and what command it generates, explain syntax?

> We can create a GUI checkpoint to check a single object in the application being tested. We can either check the object with its default properties or we can specify which properties to check.

    > Creating a GUI Checkpoint using the Default Checks : 

          * We can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example : if we create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.

          * To create a GUI checkpoint using default checks :

              A) Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If we are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that we can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

               B) Click an object.

               C) WinRunner captures the current value of the property of the GUI object being checked and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or win_check_gui statement. Syntax : win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time );

    > Creating a GUI Checkpoint by Specifying which Properties to Check.

    > We can specify which properties to check for an object. For example : if we create a checkpoint that checks a push button, we can choose to verify that it is in focus, instead of enabled.

    > To create a GUI checkpoint by specifying which properties to check :

          A) Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If we are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that we can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

          B) Double-click the object or window. The Check GUI dialog box opens.

          C) Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.

          D) Select the properties we want to check.

               1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

               2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

               3. To change the viewing options for the properties of an object, use the Show Properties buttons.

               4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement. Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time );


What is the use of Test Director software?

TestDirector is Mercury Interactive\'s software test management tool. It helps quality assurance personnel plan and organize the testing process. 

With TestDirector we can create a database of manual and automated tests, build test cycles.

How do we integrate automated scripts with TestDirector?

When we work with WinRunner, we can choose to save our tests directly to the TestDirector database, or, while creating a test case in TestDirector, we can specify whether the script is automated or manual.

If it is an automated script, TestDirector can then invoke WinRunner to run it as part of a test set.

What are the different modes of recording?

There are two types of recording in WinRunner.

> Context Sensitive recording records the operations we perform on our application by identifying Graphical User Interface (GUI) objects. 

> Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

What is the purpose of loading WinRunner Add-Ins?

Add-ins are used in WinRunner to load the functions specific to a particular add-in into memory. While creating a script, only the functions of the selected add-ins are listed in the Function Generator, and while executing the script only the functions of the loaded add-ins will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

What are the reasons that WinRunner fails to identify an object on the GUI?

WinRunner fails to identify an object in a GUI due to various reasons. 

> The object is not a standard windows object. 

> If the browser used is not compatible with the WinRunner version, GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

What do you mean by the logical name of the object?

An object's logical name is determined by its class.

In most cases, the logical name is the label that appears on an object.

If the object does not have a name then what will be the logical name?

If the object does not have a name then the logical name could be the attached text.

What is the different between GUI map and GUI map files?

The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files : 

> Global GUI Map file: a single GUI Map file for the entire application

> GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

GUI Map file is a file which contains the windows and the objects learned by the WinRunner with its logical name and their physical description. 

How do we view the contents of the GUI map?

GUI Map editor displays the content of a GUI Map. 

We can invoke GUI Map Editor from the Tools Menu in WinRunner. 

The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.

What do we verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?

To create a GUI checkpoint for two or more objects :

    > Choose Create > GUI Checkpoint > For Multiple Objects, or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If we are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.

    > Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.

    > To add an object, click it once. If we click a window title bar or menu bar, a help window prompts us to check all the objects in the window.

    > The pointing hand remains active. We can continue to choose objects by repeating the previous step for each object we want to check.

    > Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.

    > The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.

         * To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

         * To add a check in which we specify arguments, first select the property for which we want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then we must specify arguments for a check on this property. (We do not need to specify arguments if a default argument is specified.) When checking standard objects, We only specify arguments for certain properties of edit and static text objects. We also specify arguments for checks on certain properties of nonstandard objects.

        * To change the viewing options for the properties of an object, use the Show Properties buttons. 

   > To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script. 


Syntax : win_check_gui ( window, checklist, expected_results_file, time );

obj_check_gui ( object, checklist, expected_results_file, time );

What information is contained in the checklist file and in which file expected results are stored?

The checklist file contains information about the objects and the properties of the object we are verifying.

The gui*.chk file contains the expected results which is stored in the exp folder.

What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?

    >  You can check an object, a window, or an area of a screen in our application as a bitmap. While creating a test, we indicate what we want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When we run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), we can identify the nature of the discrepancy.

    * When working in Context Sensitive mode, we can capture a bitmap of a window, an object, or a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.

    > Note that when you record a test in Analog mode, We should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If we are programming a test, you can also use the Analog function check_window to check a bitmap.

    > To capture a window or object as a bitmap:

         A. Choose Create - Bitmap Checkpoint - For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.

         B. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( object, bitmap, time );

         C. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time );

         D. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be: win_check_bitmap (\"Flight Reservation\", \"Img2\", 1);

         E. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap (\"Date of Flight:\", \"Img1\", 1); 


Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] ); 

What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?

    >  We can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).

    > To capture an area of the screen as a bitmap:

         A. Choose Create - Bitmap Checkpoint - For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.

         B. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.

         C. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.

         D. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width, height ); 


What do we verify with the database checkpoint default and what command it generates, explain syntax?

    >  By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.

    > When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is set of values retrieved from the results of the query.

    > You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.

    > You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.

      Syntax: db_check(checklist_file, expected_result);

    > You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.

      Syntax: db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );

      ChecklistFileName ---- A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.

      SuccessConditions ----- Contains one of the following values:

      A. DVR_ONE_OR_MORE_MATCH : The checkpoint passes if one or more matching database records are found.

      B. DVR_ONE_MATCH : The checkpoint passes if exactly one matching database record is found.

      C. DVR_NO_MATCH : The checkpoint passes if no matching database records are found.

      RecordNumber : An out parameter returning the number of records in the database. 
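As a hedged illustration (the checklist file name is a placeholder):

# pass only if exactly one matching record is found
db_record_check ("list1.cvr", DVR_ONE_MATCH, record_num);
report_msg ("records found: " & record_num);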

How do we handle dynamically changing area of the window in the bitmap checkpoints?

The "Difference between bitmaps" option in the Run tab of the General Options dialog defines the minimum number of pixels that constitutes a bitmap mismatch.

What do we verify with the database check point custom and what command it generates, explain syntax?

    >  When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.

    > You can create a custom check on a database in order to :

          * check the contents of part or the entire result set

          * edit the expected results of the contents of the result set

          * count the rows in the result set

          * count the columns in the result set 

    > You can create a custom check on a database using ODBC, Microsoft Query or Data Junction. 

What do we verify with the sync point for object/window property and what command it generates, explain syntax?

    >  Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.

    > You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.

    > You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:

      obj_exists ( object [, time ] );

      win_exists ( window [, time ] );
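For example, a short sketch (the window name, button, and timeout are illustrative):

# wait up to 10 seconds for the window, then work in it
if (win_exists ("Flight Reservation", 10) == E_OK)
{
    set_window ("Flight Reservation", 1);
    button_press ("Insert Order");
}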

What do we verify with the sync point for object/window bitmap and what command it generates, explain syntax?

You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.

During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.

Syntax:

obj_wait_bitmap ( object, image, time );

win_wait_bitmap ( window, image, time ); 

What is the extension of gui map file?

The extension for a GUI Map file is ".gui".

How do we find an object in a GUI map?

The GUI Map Editor provides Find and Show buttons.

To find a particular object of the application in the GUI Map file, select the object in the editor and click the Show button. This flashes the selected object in the application.

To find a particular object in a GUI Map file, click the Find button, which gives the option to select the object in the application. When the object is selected, if the object has been learned into the GUI Map file, it will be highlighted in the GUI Map file.

How do you identify which files are loaded in the GUI map?

The GUI Map Editor has a "GUI File" drop-down list displaying all the GUI Map files loaded into memory.

When do we feel we need to modify the logical name?

Changing the logical name of an object is useful when the assigned logical  name is not sufficiently descriptive or is too long.

What is the purpose of obligatory and optional properties of the objects?

For each class, WinRunner learns a set of default properties. Each default property is classified as obligatory or optional. An obligatory property is always learned (if it exists).

An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. 

WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

When the optional properties are learned?

An optional property is used only if the obligatory properties do  not provide unique identification of an object. 

What is the purpose of location indicator and index indicator in GUI map configuration?

In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available :

> A location selector uses the spatial position of objects.

> The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.

> An index selector uses a unique number to identify the object in a window.

> The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window. 
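As an illustration, a GUI map entry that relies on a location selector might look like this (all values below are hypothetical):

OK_button:
{
    class: push_button,
    label: "OK",
    location: 1
}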

How do we handle custom objects?

A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic object class. WinRunner records operations on custom objects using obj_mouse_ statements.

If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. 

What is the name of custom class in WinRunner and what methods it applies on the custom objects?

WinRunner learns custom class objects under the generic object class. 

WinRunner records operations on custom objects using obj_ statements.

In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?

In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available : 

> A location selector uses the spatial position of objects.

> An index selector uses a unique number to identify the object in a window.

What do we verify with the sync point for screen area and what command it generates, explain syntax?

For screen-area verification we capture the screen area into a bitmap and verify the application's screen area against that bitmap file during execution.

  Syntax:

obj_wait_bitmap(object, image, time, x, y, width, height); 

How do we edit checklist file and when do you need to edit the checklist file?

WinRunner has an edit checklist file option under the create menu. Select the Edit GUI Checklist to modify GUI checklist file and Edit Database Checklist to edit database checklist file. This brings up a dialog box that gives you option to select the checklist file to modify.

There is also an option to select the scope of the checklist file, whether it is Test specific or a shared one. Select the checklist file, click OK which opens up the window to edit the properties of the objects.

When we create GUI map do you record all the objects of specific objects?

If we are learning a window, then WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in a window that need to be learned, since we will work with only those objects while creating scripts.

What is the purpose of set_window command?

Set_Window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on a particular window.

 Syntax: set_window(<logical name>, time); The logical name is the logical name of the window, and time is how long execution waits for the given window to come into focus.
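For example (the window name and timeout are illustrative):

# wait up to 5 seconds for the window to get focus, then type into it
set_window ("Flight Reservation", 5);
edit_set ("Name:", "John");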

How do we load GUI map?

We can load a GUI Map by using the GUI_load command.  Syntax :  GUI_load(<file_name>);

What is the disadvantage of loading the GUI maps through start up scripts?

If we are using a single GUI Map file for the entire AUT, then the memory used by the GUI Map may be quite high. Also, if there is any change in an object already learned, WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in memory.

So we will have to learn the object again and update the GUI File and reload it.

How do you unload the GUI map?

We can use GUI_close to unload a specific GUI Map file, or we can use GUI_close_all to unload all the GUI Map files loaded in memory.   Syntax : GUI_close(<file_name>); or GUI_close_all;
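A minimal sketch of loading and unloading (the file path is a placeholder):

GUI_load ("C:\\MyTests\\flights.gui");    # load the map before the test runs
GUI_close ("C:\\MyTests\\flights.gui");   # unload it when the test is done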

What actually happens when you load GUI map?

When we load a GUI Map file, the information about the windows and the objects with their logical names and physical description are loaded into memory.

So when the WinRunner executes a script on a particular window, it can identify the objects using this information loaded in the memory.

What is the purpose of the temp GUI map file?

While recording a script, WinRunner learns objects and windows by itself. This is actually stored into the temporary GUI Map file. 

We can specify in the General Options whether this temporary GUI Map file should be loaded each time.

How do you select multiple objects during merging the files?

Use the Shift key and/or Control key to select multiple objects. 

To select all objects in a GUI map file, choose Edit > Select All.

How do you edit the expected value of an object?

We can modify the expected value of the object by executing the script in the Update mode.

We can also manually edit the gui*.chk file, which contains the expected values and is located under the exp folder, to change the values.

How do you modify the expected results of a GUI checkpoint?

We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in the update mode.

How do you handle ActiveX and Visual basic objects?

WinRunner provides add-ins for ActiveX and Visual Basic objects.

When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.

How do you create ODBC query?

We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database.

The SQL File will contain the connection string and the SQL statement.

How do you record a data driven test?

We can create data-driven testing using data from a flat file, a data table, or a database.

> Using a flat file : we store the data to be used in a required format in the file. We access the file using the file manipulation commands, read data from the file, and assign the variables with data. (A minimal sketch follows below.)

> Data table : it is an Excel file. We can store test data in these files and manipulate them. We use the "ddt_*" functions to manipulate data in the data table.

> Database : we store test data in the database and access it using "db_*" functions.
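The flat-file sketch mentioned above (the file name, format, and object names are assumptions):

# read one value per line and type it into the application
data_file = "C:\\TestData\\names.txt";
file_open (data_file, FO_MODE_READ);
while (file_getline (data_file, line) == E_OK)
{
    set_window ("Flight Reservation", 5);
    edit_set ("Name:", line);
}
file_close (data_file);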

How do you convert a database file to a text file?

 We can use Data Junction to create a conversion file which converts a database to a target  text file.

How do you parameterize database check points?

When we create a standard database checkpoint using ODBC (Microsoft Query), we can add parameters to an SQL statement to parameterize the checkpoint. This is useful if we want to create a database checkpoint with a query in which the SQL statement defining the query changes.

How do you create parameterize SQL commands?

A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized. The value of the field is specified by a question mark symbol (?). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application :

SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query.

FROM specifies the path of the database.

WHERE (optional) specifies the conditions, or filters to use in the query. Departure is the parameter that represents the departure point of a flight.

Day_Of_Week is the parameter that represents the day of the week of a flight.

When creating a database checkpoint, you insert a db_check statement into your test script. When we parameterize the SQL statement in the checkpoint, the db_check function has a fourth, optional argument: the parameter_array argument. A statement similar to the following is inserted into the test script :

db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);

The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.
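For illustration, the parameter array might be filled in before the checkpoint runs like this (the values are hypothetical):

# supply values for the two '?' parameters in the SQL statement
dbvf1_params[1] = "Denver";
dbvf1_params[2] = "Monday";
db_check ("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);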

What check points you will use to read and check text on the GUI and explain its syntax?

> We can use text checkpoints in our test scripts to read and check text in GUI objects and in areas of the screen. While creating a test, we point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. We may then add simple programming elements to our test scripts to verify the contents of the text.

    > We can use a text checkpoint to : 

          * Read text from a GUI object or window in our application, using obj_get_text and win_get_text.

          * Search for text in an object or window, using win_find_text and obj_find_text.

          * Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text.

          * Click on text in an object or window, using obj_click_on_text and win_click_on_text. 


How to get Text from object/window ?

We use obj_get_text (logical_name, out_text) function to get the text from an object

We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window. 
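For example, a small hedged sketch (the object name and expected value are placeholders):

# read the text of a field and verify it
obj_get_text ("Order No:", order_text);
if (order_text == "4")
    report_msg ("order number is correct");
else
    report_msg ("unexpected order number: " & order_text);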

How to get Text from screen area ?

We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a  window.

Which TSL functions will you use for searching for text on the window?

find_text ( string, out_coord_array, search_area [, string_def ] );

win_find_text ( window, string, result_array [, search_area [, string_def ] ] ); 

What are the steps of creating a data driven test?

The Steps involved in data driven testing are :

 > Creating a test.

 > Converting to a data-driven test and preparing a database.

 > Running the test.

 > Analyzing the test results.

How to use data driver wizard?

We can use the DataDriver Wizard to convert our entire script or a part of our script into a data-driven test. 

For example : Our test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. We need to parameterize only the portion of our test script that we want to run in a loop with multiple sets of data.


>>>>> To create a data-driven test :


    > If we want to turn only part of our test script into a data-driven test, first select those lines in the test script.

    > Choose Tools - DataDriver Wizard.

    > If we want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If we want to turn the entire test into a data-driven test, click Next.

    > The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.

    > In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, table.

    > At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.

    > Choose from among the following options:

       A. Add statements to create a data-driven test : Automatically adds statements to run our test in a loop : sets a variable name by which to refer to the data table ; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to our test script selection to run it in a loop while it reads from the data table ; and adds ddt_open and ddt_close statements to our test script to open and close the data table, which are necessary in order to iterate over rows in the table. Note that we can also add these statements to our test script manually.

         B. If we do not choose this option, we will receive a warning that our data-driven test must contain a loop and statements to open and close our data table.

         C. Import data from a database : Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to our test script after the ddt_open statement.

         D. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on our machine. We can install Microsoft Query from the custom installation of Microsoft Office. Note that Data Junction is not automatically included in the WinRunner package; to purchase Data Junction, contact our Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.

         E. Parameterize the test : Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table.

         F. Line by line : Opens a wizard screen for each line of the selected test script, which enables us to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.

         G. Automatically : Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.

    > The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that we can replace with a parameter. We can use the arrows to select a different argument to replace.

           Choose whether and how to replace the selected data :

         * Do not replace this data : Does not parameterize this data.

         * An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.

         * A new column : Creates a new column for this parameter in the data table for this test, and adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.

    > The final screen of the wizard opens.

         A. If we want the data table to open after we close the wizard, select Show data table now.

         B. To perform the tasks specified in previous screens and close the wizard, click Finish.

         C. To close the wizard without making any changes to the test script, click Cancel. 
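The statements the wizard adds typically form a loop like the following sketch (the table name, window, and column are placeholders):

table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, table_RowCount);
for (table_Row = 1; table_Row <= table_RowCount; table_Row++)
{
    ddt_set_row (table, table_Row);
    set_window ("Flight Reservation", 5);
    edit_set ("Name:", ddt_val (table, "Name"));
}
ddt_close (table);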

How do you clear a GUI Map file?

We can clear a GUI Map file using the "Clear All" option in the GUI Map Editor.


How do you handle object exceptions?

During testing, unexpected changes can occur to GUI objects in the application we are testing. These changes are often subtle, but they can disrupt the test run and distort results.

We can use exception handling to detect a change in a property of a GUI object during the test run, and to recover test execution by calling a handler function and continuing with the test run.

What is a compile module?

A compiled module is a script containing a library of user-defined functions that we want to call frequently from other tests. When we load a compiled module, its functions are automatically compiled and remain in memory. We can call them directly from within any test.

Compiled modules can improve the organization and performance of our tests. Since we debug compiled modules before using them, our tests will require less error checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.

What is the difference between script and compile module?

There are many differences :

> A test script contains executable statements in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable.

> WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of Compiled Module.

> Modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script :

call cso_init();
call ("C:\\MyAppFolder\\" & "app_init");

Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement :

reload ("C:\\MyAppFolder\\" & "flt_lib");

or load ("C:\\MyAppFolder\\" & "flt_lib");

How do you write messages to the report?

To write a message to a report we use the report_msg statement.  Syntax : report_msg (message);

What is a command to invoke application?

invoke_application is the function used to invoke an application.    Syntax : invoke_application(file, command_option, working_dir, SHOW);

What is the purpose of tl_step command?

tl_step is used to determine whether sections of a test pass or fail.  Syntax : tl_step(step_name, status, description);

Which TSL function you will use to compare two files?

We can compare two files in WinRunner using the file_compare function.  Syntax : file_compare (file1, file2 [, save_file]);
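Combining the two, a hedged sketch (file paths are placeholders; this assumes file_compare returns E_OK when the files are identical):

# report a pass/fail step depending on whether the two files match
if (file_compare ("C:\\out\\expected.txt", "C:\\out\\actual.txt") == E_OK)
    tl_step ("compare_output", 0, "Output files match.");    # status 0 = pass
else
    tl_step ("compare_output", 1, "Output files differ.");   # non-zero = fail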

What is the use of function generator?

The Function Generator provides a quick, error-free way to program scripts. We can :

Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.

Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.

Add Customization functions that enable us to modify WinRunner to suit our testing environment.

What is the use of putting call and call_close statements in the test script?

We can use two types of call statements to invoke one test from another :
> A call statement invokes a test from within another test.
> A call_close statement invokes a test from within a script and closes the test when the test is completed.
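For example (the test paths and parameter are placeholders):

# run a login test from within the current test, passing a parameter
call "C:\\Tests\\login_test" ("admin");

# run a cleanup test and close it when it completes
call_close "C:\\Tests\\cleanup_test" ();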

What is the use of treturn and texit statements in the test script?

The treturn and texit statements are used to stop execution of called tests.

> The treturn statement stops the current test and returns control to the calling test.

> The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test. Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.

The syntax is :

treturn [( expression )];

texit [( expression )]; 

What does auto, static, public and extern variables means?

> Auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.

> Static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.

> Public : A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.

> Extern : An extern declaration indicates a reference to a public variable declared outside of the current test or module. 

How do you declare constants?

The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until we exit WinRunner.

The syntax of this declaration is :

[class] const name [= expression]; 

How do you declare arrays?

The following syntax is used to define the class and the initial expression of an array; the array size need not be defined in TSL :

class array_name [ ] [= init_expression]

The array class may be any of the classes used for variable declarations (auto, static, public, extern).
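Putting the declaration forms together in a short sketch (all names are illustrative; an auto variable would be declared the same way inside a function body):

static run_count = 0;          # keeps its value until the run is aborted
public const TIMEOUT = 10;     # constant, visible to all tests and modules
public user_names[];           # array; the size need not be declared
user_names[1] = "alice";
user_names[2] = "bob";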

How do you load and unload a compile module?

In order to access the functions in a compiled module, we load the module. We can load it from within any test script using the load command; all tests will then be able to access the functions until we quit WinRunner or unload the compiled module.

We can load a module either as a system module or as a user module. A system module is generally a closed module that is invisible to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when we execute an unload statement with no parameters (a global unload).

load (module_name [,1|0] [,1|0] );

The module_name is the name of an existing compiled module.

> Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module : 1 indicates a system module; 0 indicates a user module.

(Default = 0)

> The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded : 1 indicates that the module will close automatically; 0 indicates that the module will remain open.

(Default = 0)

> The unload function removes a loaded module or selected functions from memory.

> It has the following syntax :

unload ( [ module_name | test_name [ , "function_name" ] ] );
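A minimal sketch (the module path is a placeholder):

load ("C:\\Modules\\flt_lib", 0, 0);   # user module (0), stays open (0)
unload ("C:\\Modules\\flt_lib");       # later, remove just that module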

Why you use reload function?

If we make changes in a module, we should reload it. The reload function removes a loaded module from memory and reloads it, combining the functions of unload and load.

> The syntax of the reload function is :

reload ( module_name [ ,1|0 ] [ ,1|0 ] );

> The module_name is the name of an existing compiled module.

> Two additional optional parameters indicate the type of module. 

   * The first parameter indicates whether the module is a system module or a user module : 

       1 indicates a system module ; 

       0 indicates a user module.

(Default = 0)

    * The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded. 

* 1 indicates that the module will close automatically.

* 0 indicates that the module will remain open.

(Default = 0) 


Write and explain compile module?

Write TSL functions for the following interactive modes : 

> Creating a dialog box with any message we specify, and an edit field.

> Create dialog box with list of items and message.

> Create dialog box with edit field, check box, and execute button, and a cancel button.

> Creating a browse dialog box from which user selects a file.

> Create a dialog box with two edit fields, one for login and another for password input. 
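Hedged sketches for some of these, using TSL's built-in dialog functions (the exact signatures are assumed from standard TSL):

# dialog box with a message and an edit field
name = create_input_dialog ("Enter the passenger name:");

# dialog box with a message and a list of items
dest = create_list_dialog ("Destination", "Choose a city:", "Denver,Paris,London");

# browse dialog box from which the user selects a file
data_file = create_browse_file_dialog ("*.txt");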

How have you used WinRunner in your project?

Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT .

Have you created test scripts and what is contained in the test scripts?

Yes, I have created test scripts. They contain statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window.

We can then enhance our recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.

How does WinRunner evaluate test results?

Following each test run, WinRunner displays the results in a report. 

The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages.

If mismatches are detected at checkpoints during the test run, We can view the expected results and the actual results from the Test Results window.

Have you performed debugging of the scripts?

Yes, we have performed debugging of scripts.

We can debug the script by executing the script in the debug mode. We can also debug script using the Step, Step Into, Step out functionalities provided by the WinRunner.

Have you integrated your automated scripts from TestDirector?

When we work with WinRunner, we can choose to save our tests directly to our TestDirector database; or, while creating a test case in TestDirector, we can specify whether the script is automated or manual. If it is an automated script, then TestDirector will build a skeleton for the script that can later be modified into one which can be used to test the AUT.

    What are the different modes of recording?

There are two types of recording in WinRunner :

> Context Sensitive recording records the operations we perform on our application by identifying Graphical User Interface (GUI) objects.

> Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

What is meant by the logical name of the object?

An object's logical name is determined by its class.

In most cases, the logical name is the label that appears on an object.

How do you view the contents of the GUI map?

GUI Map editor displays the content of a GUI Map. 

We can invoke GUI Map Editor from the Tools Menu in WinRunner.

The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.

How to compare value of textbox in WinRunner?

The problem: there is a textbox on page 1; after clicking the 'Submit' button, the value of the textbox is displayed on page 2 as static text. How do we verify that the value on page 2 equals the value of the textbox on page 1?

Capture the value from the textbox on page 1 and store it in a variable (say, a). After clicking the Submit button, when the value is displayed on page 2 as static text, capture it using a screen-area get-text and store it in a second variable (say, b). Then compare the two variables.
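A hedged sketch of that flow (window names, object names, and coordinates are placeholders):

set_window ("Page1", 5);
edit_get_text ("Textbox1", a);                 # capture the value on page 1
button_press ("Submit");
set_window ("Page2", 5);
win_get_text ("Page2", b, 10, 10, 200, 30);    # read the static text by screen area
if (a == b)
    report_msg ("values match");
else
    report_msg ("values differ: " & a & " vs " & b);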

How to work with a combo box in WinRunner?

WinRunner combo box problem :

The application has combo boxes; we need to select item 4 in the first combo box to run the test scenario. How do we get the value of the selected combo box?

Answer1:

Use the GUI Spy and compare the values in the Spy with the values in the GUI map for the physical attributes of the TComboBox_* objects. It appears that WinRunner is recording an attribute to differentiate combobox_1 from _0 that is *dynamic* rather than static. We need to find a physical property of all the comboboxes that is constant and unique for each combobox between refreshes of the app (handle is an example of a BAD one). That's the property we need to have recorded in our GUI map (in addition to the physical properties that were recorded for the first combobox).

Answer2:

Go through the following script, it will help ...

function app_data(dof)
{
    report_msg ("application data entry");
    set_window ("Flight Reservation", 6);
    list_get_items_count ("Fly From:", flyfromc);
    list_get_items_count ("Fly To:", flytoc);
    report_msg (flyfromc);
    report_msg (flytoc);
    for (i = 0; i < flyfromc; i++)
    {
        for (j = 0; j < flytoc - 1; j++)
        {
            m = 0;
            do
            {
                menu_select_item ("File;New Order");
                edit_set ("Date of Flight:", dof);
                obj_type ("Date of Flight:", "<kTab>");
                list_select_item ("Fly From:", "#" & i);   # item number i
                obj_type ("Fly From:", "<kTab>");
                list_select_item ("Fly To:", "#" & j);     # item number j
                obj_mouse_click ("FLIGHT", 42, 20, LEFT);
                set_window ("Flights Table", 1);
                list_get_items_count ("Flight", flightc);
                list_activate_item ("Flight", "#" & m);    # item number m
                set_window ("Flight Reservation", 5);
                edit_set ("Name:", "ajay");
                button_press ("Insert Order");
                m++;
            } while (m < flightc);
            report_msg (j);
        }
        report_msg (i);
    }
}


WinRunner: How to set the GUI file's search path?

[GUI file at d:\1234\windows.gui]

How do we use the script below to load the GUI file successfully?

#load gui file
GUI_unload_all;
if (GUI_load("windows.gui") != 0)
{
    pause("cannot open the windows.gui file");
    texit;
}
#end loading gui


We put all the scripts on the local machine but the GUI files on the TD server, and used the command line to run the scripts (successfully).

After updating WinRunner 7.6 to 8.2 and connecting to the TD server, it now seems that wrun.exe cannot find the local machine's scripts. Does anyone know wrun.exe's new command-line parameters (in WinRunner 8.2)?


Answer1:

GUI_load("C:\\Program Files\\Mercury Interactive\\WinRunner\\EMR\\EMR.gui");

Put this in your WinRunner startup file; you can set the path for the startup file in General Options -> Startup.

#load gui file
GUI_unload_all;
if (GUI_load("C:\\Program Files\\Mercury Interactive\\WinRunner\\EMR\\EMR.gui") != 0)
{
    pause("unable to open C:\\Program Files\\Mercury Interactive\\WinRunner\\EMR\\EMR.gui");
    texit;
}
#end loading gui

You can't set a path for a GUI map file in WinRunner other than the Temporary GUI Map File.


Answer2:

We might suggest to your boss that the GUI map is universal to all machines, even though in his view all machines must have their own local script. Even if we are testing different versions of the same software, we can have the local machine "aware" of which software version it is running and know which GUI map to load from the server. I run a lab with 30 test machines, each with its own copy of the script(s) from the server, but using one master GUI map per software roll.

As far as how to set the search path for the local machine, we can force that in the setup of each machine. Go to Tools => Options => General Options => Folders. Once there, we can add, delete or move folders around at will. WinRunner will search in the order in which they are listed, from the top down. "Dot" means search in the current directory, whatever that may be at the time.

WinRunner: How to check the tab order?

winrunner sample application :

set_window ("Flight Reservation", 7);
if (E_OK == obj_type ("Date of Flight:", "")) {
    if (E_OK == obj_type ("Fly From:", "")) {
        if (E_OK == obj_type ("Fly To:", "")) {
            if (E_OK == obj_type ("Name:", "")) {
                if (E_OK == obj_type ("Date of Flight:", "")) {
                    report_msg ("Ok");
                }
            }
        }
    }
}

WinRunner: Why is the "Bitmap Checkpoint" not working with the framework?

A bitmap checkpoint is dependent on the monitor resolution: it depends on the machine on which it was recorded. Unless we are using a machine with a screen of the same resolution and settings, it will fail. Run it once in update mode on our machine; it will get updated to our system, and from then on it will pass.

How to plan automation testing to implement a keyword-driven methodology using WinRunner 8.2?

Keyword driven testing refers to an application independent automation framework. This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them and the test script code that \"drives\" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table as well as in step-by-step instructions for each test.

Suppose we want to test a simple application like Calculator and perform 1 + 3 = 4; then we design the framework as follows :

Window->Calculator ; Control->Pushbutton ; Action->Push ; Argument->1

Window->Calculator ; Control->Pushbutton ; Action->Push ; Argument->+

Window->Calculator ; Control->Pushbutton ; Action->Push ; Argument->3

Window->Calculator ; Control->Pushbutton ; Action->Push ; Argument->=

Window->Calculator ; Action->Verify ; Argument->4

These steps correspond to manual test case execution. Now write functions for all the common actions this framework requires; a driver sketch follows below. The representation may differ according to our requirements and the tool used.
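A minimal TSL sketch of such a driver, assuming the keywords live in a WinRunner data table with the column names used above (the table name, result object, and obj_check_info usage are assumptions):

table = "keywords.xls";
ddt_open (table, DDT_MODE_READ);
ddt_get_row_count (table, rows);
for (r = 1; r <= rows; r++)
{
    ddt_set_row (table, r);
    set_window (ddt_val (table, "Window"), 5);
    action = ddt_val (table, "Action");
    arg = ddt_val (table, "Argument");
    if (action == "Push")
        button_press (arg);                        # e.g. press the "1" button
    else if (action == "Verify")
        obj_check_info ("Result", "value", arg);   # hypothetical result field
}
ddt_close (table);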

How do we invoke WinRunner on a remote machine?

Steps to call WinRunner on a remote machine :

> Send a file to a particular folder on the remote machine (this may contain your test parameters).

> Write a shell-script listener and keep it always running on the remote host (this script watches the folder mentioned in step 1).

> Write a batch file that invokes WinRunner with the test name, and keep it on the remote machine.

> Call the batch file through the shell script whenever the file from step 1 exists.

WinRunner: How to connect to ORACLE Database without TNS?

The following code would help with the above problem:

tblName = getvar("curr_dir") & table;
ddt_close_all_tables();
resConnection = "";
db_disconnect("session");
rc = ddt_open(tblName, DDT_MODE_READ);
if (rc != E_OK)
    pause("Unable to open file");
else
{
    dvr = ddt_val(tblName, "DRIVERNAME");
    tnsName = ddt_val(tblName, "SERVER");
    user = tolower(ddt_val(tblName, "UID"));
    pass = tolower(ddt_val(tblName, "PWD"));
    host = ddt_val(tblName, "HOSTNAME");
    port = ddt_val(tblName, "PORT");
    pro = toupper(ddt_val(tblName, "PROTOCOL"));
    resConnection = db_connect("session",

How to verify against an Excel spreadsheet where report descriptions are stored?

A list box displays report names, and below it a multi-line text box displays the description of the report corresponding to each name. We can get all the descriptions by using the for loop below, but we then have to verify them against an Excel spreadsheet where the report descriptions are stored. Please guide how to proceed.


list_get_info("Listbox1", "count", count);
for (num = 1; num < count; num++)
{
    row = num + 1;
    list_select_item("Listbox1", "#" & num);
    list_get_info("Listbox1", "value", val);
    report_msg(val);
    edit_get_text("Textarea1", s);
    report_msg(s);
}


# Open the Excel spreadsheet
# suppose the spreadsheet has 2 fields, report_name and report_des
table = "E:\\test\\Datadriven\\default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc == E_OK || rc == E_FILE_OPEN)
{
    # loop over the list box items
    for (num = 0; num < count; num++)
    {
        list_select_item("Listbox1", "#" & num);
        list_get_info("Listbox1", "value", val);
        ddt_get_row_count(table, table_RowCount);
        for (table_Row = 1; table_Row <= table_RowCount; table_Row++)
        {
            ddt_set_row(table, table_Row);
            report_name = ddt_val(table, "report_name");
            if (val == report_name)
            {
                report_des = ddt_val(table, "report_des");
                # Compare the report description here
            }
        }
    }
}

WinRunner: While invoking WinRunner, the following error message displays :

"The testing tool will be terminated because an error has occurred. Please look at the file C:\DOCUME~1\ADMI~1\LOCALS~1\Temp\wrstderr for more details."

Go to the Processes tab in the Task Manager and kill the following processes :

wrun.exe

crvw.exe

six_agnt.exe and wowexec.exe

Kill the NTVDM.EXE and CRVW.EXE processes from the Task Manager.

Once we have killed the WinRunner process, we need to remove the icon from the Windows task bar which causes this error. Once done, we can safely restart WinRunner.

WinRunner: How to use data-driven technology in GUI checkpoints for objects?

Here is a sample of code written for a web environment:

web_obj_get_text("Client Name", "#1", "#3", text, " ", " ", 1);
if (text == ddt_val(table, "WaveDesc"))
{
    report_msg("done");
}
else
{
    report_msg("Not Done");
}

So on each iteration it checks the text against the data stored in the Excel sheet, and the report's results show the progress.

How to handle a 'Timeout expired' problem in WinRunner when dealing with complex SQL queries?

It depends on which authentication option we selected while creating the DSN. If we selected Windows NT authentication, there is no need to enter a user ID and password; if we selected SQL Server authentication, we need to enter the user ID and password during DSN creation itself.

Enter the database user name and password while creating the DSN. The script is as follows :

dbstatus = db_connect("GetRecs", "DSN=dsn name", 30);
if (dbstatus != 0)
{
    report_msg ("GetRecs-FAILED-Could not open Database");
    treturn("Stop");
}

WinRunner: How to generate user name uniquely?

There are a couple of ways of dealing with this problem.

One way is to maintain a file with the 'last value used' in it and just keep it up to date. If it's a data-driven test, this value could even be in the data table.

Alternately, we can always use the 'time' element as the value appended to our string; that way we're always assured of a new number.

Unable to print a newline character [\n] to a file, any solution?

file_printf (file, "%s\r\n", text);

How to define a variable in a script whose value is stored in an Excel sheet, using WinRunner?

[Field A1 contains {class: push_button, label: OK, ...}

Field B1 contains OK = button_press(OK);

where OK contains the value of field A1.

OK should act as a variable which contains the value of field A1.]

Answer1:

There is no need to define any variable that is going to be used in the test script; we can just start using it directly.

So, if we want to assign a value to a dynamic variable whose name is taken from the data table, we can use the "eval" function for this.

Example :

eval( ddt_val(Table, "Column1") & " = \"water\";" );

# The above statement takes the variable name from the data table and assigns "water" as its value.

Answer2:

Write a function that looks down a column in a table, grabs the value in the next cell, and returns it. However, we would then need to call

button_press(tbl_convert("OK"));

rather than

button_press("OK");

where tbl_convert takes the value from A1 (in our example) and returns the value in B1.

One other difficulty would arise if we wanted to have the same name for objects from different windows (e.g., an "OK" button in multiple windows). We could expand the function to handle this by having a separate column that carries the window name.
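A hedged sketch of such a tbl_convert helper, assuming the lookup pairs are kept in a data table with hypothetical "key" and "value" columns:

function tbl_convert (in key)
{
    auto table, rows, r, result;
    table = "lookup.xls";   # hypothetical lookup table
    result = key;           # fall back to the key itself if no match
    ddt_open (table, DDT_MODE_READ);
    ddt_get_row_count (table, rows);
    for (r = 1; r <= rows; r++)
    {
        ddt_set_row (table, r);
        if (ddt_val (table, "key") == key)
            result = ddt_val (table, "value");
    }
    ddt_close (table);
    return result;
}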

WinRunner: How to Change physical description?

[Problem: the application contains different objects, but the location property is different/changing.

Suppose, for example, there is one HTML table and it contains objects with these physical properties,

for one object :

{

class : object,

MSW_class: html_text_link,

html_name: \"View/Edit\"

location:0

}

and for other objects.

{

class: object,

MSW_class: html_text_link,

html_name: \"View/Edit\"

location : 1

}

When we record the script, it gives view/edit as the logical name :

Code : web_image_click("view/edit", 11, 7);

When we run the script, WinRunner cannot identify which object to click and gives an error message.

P.S. WinRunner 7.5 with Java and web add-ins on the Windows XP operating system and IE 6.0 browser (SP2).

Answer1 :

When the name of the html_table changes dynamically, we have to interchange the physical description. While recording, the name clicked inside the table becomes the name of the html_table in the GUI map. Change only the logical name in the GUI map. Then, in the code, use the gui_* methods to get the logical name of this html_table and its physical description, delete the object from the GUI map through code, and then add the logical name and physical description we saved earlier using the GUI_add method.

Answer2 :

Just change the logical names to unique names.

WinRunner will recognize each object separately using the physical name and the location property.

Answer3 :

i = 0;
web_link_click("{ class: object, MSW_class: html_text_link, html_name: \"View/Edit\", location: " & i & "}");
i = 1;
web_link_click("{ class: object, MSW_class: html_text_link, html_name: \"View/Edit\", location: " & i & "}");


Is there any function in winrunner which will clear the history of the browser?

Actually the script works fine when we execute it the first time. But when we execute it a second time, it goes directly into the application without asking for login credentials, taking the path from the browser history, so the script fails. It works fine if we clear the browser history before each run.

This is not a matter of clearing the history. In any case, the application should not allow us to log in without entering login credentials; I think this is an application bug.

To clear history, use dos_system with :

del "C:\Documents and Settings\%USERNAME%\Cookies"*our_cookiesite_name*

WinRunner: How to read dynamic names of html_link


Answer1:

Use the following steps :

> Using the web_tbl_get_cell_data function, read the link.

> Use the GUI_add function to add it to the map editor.

> Use the GUI_save function to save the same link.

> Now call web_link_click() and pass the variable that we got in the first step.

Answer2 :

We can try this method; it reduces the complexity and there is no need to update the GUI map file. Use the web_tbl_get_cell_data() function to get the description of the link and use the variable name in the web_link_click() function.

web_tbl_get_cell_data("Tablename", "#Rowno", "#columnnumber", 0, cell_value, cell_value_len);

web_link_click(cell_value);

Answer3 :

> Get the number of rows in your table: tbl_get_rows_count ("tableName", rows);

> Write a for loop: for (i = 0; i <= rows; i++)

> Get the text of the specified cell by column and row: tbl_get_cell_data ("Name", "#" & i, column, var1);

> Compare with an if condition.

> If true: set a flag and store the row number in variable m.

> Now end the loop and write:

tbl_set_selected_cell ("tableName", "#" & m, column);

type ("<kTab><t2><kReturn>");

Example :

tbl_get_cols_count("Name", cols);
tbl_get_rows_count("Name", rows);
for (i = 2; i <= rows; i++)
{
    for (j = 1; j <= cols; j++)
    {
        tbl_get_cell_data("Name", "#" & i, "#" & j, var1);
        if (var1 == Supplier)
        {
            m = i;
        }
    }
}
tbl_set_selected_cell ("Name", "#" & m, "#" & j);
type ("<kTab><t2><kReturn>");

Is it possible to use WinRunner for testing .aspx or .NET forms?

We can't test a .NET application using WinRunner 7.6 or prior versions, because WinRunner does not have an add-in for .NET.

ASP.NET forms are code for the server-side part of an application; if the front end generated is normal HTML/JavaScript/Java/ActiveX, it shouldn't be a problem to test the application using WinRunner.

Can WinRunner put the test results in a file?

Yes, we can put the results into a file (the file extension is .txt). In the Test Results window, we can select Tools > Text Report to get a text file. Another option is to write the results out into an HTML file.

WinRunner: What is the difference between virtual object and custom object?

Answer1 :

A virtual object is an object which is not recognized by WinRunner; it is recorded with statements like obj_mouse_click, which work for that instance only. To make it work at any time, we must instruct WinRunner to recognize the virtual object with the help of the Virtual Object Wizard.

Note : the virtual object must be mapped to a relevant standard class available in WinRunner. For example, a button on a toolbar in an application window can be mapped to the standard class called push_button; once this is done, the generated TSL statement will be button_press("logicalName"), which is a permanent one in WinRunner.

GUI map configuration :

It helps when WinRunner is not able to locate an object, for example when two or more objects have the same logical name and physical properties; in that case, how does WinRunner locate the specific object? We instruct WinRunner to uniquely identify the specific object by setting obligatory, optional and MSW_id properties with the help of GUI Map Configuration.

Answer2:

We use the Virtual Object Wizard in WinRunner to map a bitmap object; while recording, WinRunner generates obj_mouse_click statements.

A custom object is an object which does not belong to one of the standard classes of WinRunner. We use the GUI map configuration to map a custom object to a standard WinRunner class.

Answer3:

Virtual object : an image or a portion of the window is made a virtual object so that the functions available for objects can be used, just for convenience in scripting. A virtual object captures the coordinates of the object.

Custom object : a general object which does not belong to a WinRunner class; we map this general object to a standard WinRunner class.

How to create an Object of an Excel File in WinRunner?

The object part, or actual Excel table, is created via the WinRunner Data Table and is stored inside the same directory that the WinRunner script is stored in. Of course, we may create the Excel spreadsheet ourselves and reference it from our script manually. This is also mentioned in the User Guide.

The Data Table Wizard mentioned earlier will link this object to the script and assist in parameterizing the data from the Excel table object. 

How to use values returned by VB script in winrunner?

From our VB script, create a FileSystemObject to write output to a text file :

Dim fso, MyFile
Set fso = CreateObject("Scripting.FileSystemObject")
Set MyFile = fso.CreateTextFile("c:\testfile.txt", True)
MyFile.WriteLine("This is a test.")
MyFile.Close

Then use the file_open and file_getline functions in WinRunner to read the file.
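On the WinRunner side, a minimal sketch (using the same placeholder path as the VB script above):

file_open ("c:\\testfile.txt", FO_MODE_READ);
while (file_getline ("c:\\testfile.txt", line) == E_OK)
    report_msg (line);
file_close ("c:\\testfile.txt");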

WinRunner: What tag is required to allow me to identify an HTML table?

Indeed, it is better to ask the developer to put an ID every place where it is possible.

It will avoid lots of trouble and improve the reusability of our script (consider localization).

WinRunner: How to work with file type using WinRunner functions?

When recording, WinRunner does not record file-type objects. 

However, We can manually insert file-type statements into our test script using the web_file_browse and web_file_set functions.

WinRunner: Do Java Add-Ins required for Web based Application?

We do not need any Java add-in to test simple JSP pages.

If we are using Java applets with Swing or AWT components drawn on the applet, then we need the Java add-in; otherwise the simple web add-in will serve the purpose.

How to generate unique name?

function unique_str()
{
    auto t, tt, leng, i;
    t = get_time();
    leng = length(t);
    tt = "";
    for (i = 1; i <= leng; i++)
    {
        tt = tt & (sprintf("%c", 97 + i + substr(t, i, 1)));
    }
    return tt;
}

WinRunner: How to access the last window brought up?

Rc = win_get_info(\"{class: window, active: 1}\", property, result);Is there something or some script that can determine the LAST WINDOW DISPLAYED or OPENED on the desktop and in order to use that information to gather the label.there are a couple of solutions, depending on what we know about the window.

 If we know distinguishing characteristics of the window, use them and just directly describe the gui attributes. I assume that we do not have these, or we would likely have already done so.If not, there is a brute force method. Iterate over all of the open windows prior to the new window opening and grab their handles. 

After our new window opens, iterate again. The 'extra' handle points to our new window. We can use it in the GUI description directly to manipulate the new window. As I said, a bit brutish, but it works. We can use the same technique when we have multiple windows with essentially the same descriptors and need to iterate over them in the order in which they appeared. Any object (or window) can be described by its class and its iterator. If I wanted to address each of the individuals in a room and had no idea what their names were, but would like to do so in a consistent way, would it not be sufficient to say 'person who came into the room first', 'person who came into the room second', or alternately 'person who is nearest the front on the left', 'person who is second nearest the front on the left'?

These are perfectly good ways of describing the individuals because we do two things: limit the elements we want to describe (people) and then give an unambiguous way of enumerating them. So, to apply this to our issue: we want to do an 'exist' on a dynamically described element (a window, in this case). We make a loop and ask 'window #0, do you exist'; if the answer is yes, we ask for the handle, store it, and repeat the loop. Eventually we get to window n, we ask if it exists, the answer is no, and we now have a list of the handles of all of the existing windows. Note that there will be n windows (0 to n-1 makes a count of n). We may need to brush up on programmatically describing an object (or window); the syntax is a little lengthy but extremely useful once we get the feel for it. It really frees us from only accessing objects that are already described in the GUI map.

Try this as a starting point; we'll need to add storing and sorting of the handles ourselves:

i = 0;
finished = FALSE;
while (finished == FALSE)
{
    if (win_exists("{class: window, location: " & i & "}") == E_OK)
    {
        win_get_info("{class: window, location: " & i & "}", "handle", handle);
        printf("handle was " & handle);
        i++;
    }
    else
    {
        finished = TRUE;
    }
}

WinRunner: How to identify dynamic objects in web applications?

Check whether the object is present inside a table.

If yes, then get the table name and the location of that object within it.

Then, by using the web_obj_get_child_item function, we can get the description of the object. Once we have the description, we can perform any operation on that object.

WinRunner: How to delete files from drive?

Here is a simple method using DOS, where speech_path_file is a variable.

Example:

# initialize vars
speech_path_file = "C:\\speech_path_verified.txt";
.
.
dos_system("del " & speech_path_file);

WinRunner: Could we start automation before getting the build?

Manual test cases should be written BEFORE the application is available, and the same goes for the automation process. Automation is itself a development process, and we can start that development BEFORE everything is ready.

We can start to draw up the structure and maybe some basic code.

And there are benefits to starting automation early. For example, if two windows have the same name and structure and we think that will cause trouble, we may ask the developer to put in some unique identifiers (for example, a static control with a different MSW_id). If we (and our boss) really treat automation as part of development, we should start it as early as possible; in this phase it is like the analysis and design phase of the product.

How to create a GUI map dynamically?

gmf = "c:\\new_file_name.gui";
GUI_save_as("", gmf);
rc = GUI_add(gmf, "First_Window", "", "");
rc = GUI_add(gmf, "First_Window", "new_obj", "");
rc = GUI_add(gmf, "First_Window", "new_obj", "{label: Push_Me}");

WinRunner script for Waitbusy?

# only need to load once; best in a startup script or wherever
load(getenv("M_ROOT") & "\\lib\\win32api", 1, 1);

# returns 1 if app has busy cursor, 0 otherwise
public function IsBusy(hwnd)
{
    const HTCODE = 33554433;     # 0x2000001
    const WM_SETCURSOR = 32;
    return SendMessageLong(hwnd, WM_SETCURSOR, hwnd, HTCODE);
}

# wait for app to not be busy, optional timeout
public function WaitBusy(hwnd, timeout)
{
    const HTCODE = 33554433;     # 0x2000001
    const WM_SETCURSOR = 32;
    if (timeout) timeout *= 4;
    while (--timeout)
    {
        if (SendMessageLong(hwnd, WM_SETCURSOR, hwnd, HTCODE) == 0)
            return E_OK;
        wait(0, 250);            # 1/4 second
    }
    return -1;                   # timeout error code
}

# wait busy, provide window instead of hwnd
public function WinWaitBusy(win, timeout)
{
    auto hwnd;
    win_get_info(win, "handle", hwnd);
    return WaitBusy(hwnd, timeout);
}

# example of how to use it...
set_window(win);
WinWaitBusy(win);

Define : WinRunner script to minimize and maximize a window?

public function fnMinMaxWinrunner(in action)
{
    auto handle;
    const SW_MAXIMIZE = 3;
    const SW_MINIMIZE = 6;

    load_dll("user32.dll");
    extern int ShowWindow(long, int);

    win_get_info("{class: window, label: \"!WinRunner.*\"}", "handle", handle);

    switch (action)
    {
        case "SW_MINIMIZE":
        {
            # Minimizing WinRunner
            ShowWindow(handle, SW_MINIMIZE);
            wait(2);
            break;
        }
        case "SW_MAXIMIZE":
        {
            # Maximizing WinRunner
            ShowWindow(handle, SW_MAXIMIZE);
            wait(2);
            break;
        }
    }
    unload_dll("user32.dll");
}
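A usage sketch:

fnMinMaxWinrunner("SW_MINIMIZE");   # minimize the WinRunner window
fnMinMaxWinrunner("SW_MAXIMIZE");   # maximize it again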

Define : Type special chars in WinRunner?

Type special chars as they are, instead of interpreting them:

> Data can be read from a data file and then typed into an app.

> Escape the following chars: < > - +.

> In a string, the quote (") and backslash (\) will already be escaped.

> Generally there won't be a lot of special chars, so:

> Use index instead of looping through each character.

function no_special(data)
{
    auto esc_data, i, p;
    esc_data = "";
    while (1)
    {
        p = 32000;
        i = index(data, "-");
        p = i ? (i < p ? i : p) : p;
        i = index(data, "+");
        p = i ? (i < p ? i : p) : p;
        i = index(data, "<");
        p = i ? (i < p ? i : p) : p;
        i = index(data, ">");
        p = i ? (i < p ? i : p) : p;
        if (p < 32000)
        {
            # escape the special char at position p with a backslash
            esc_data = esc_data substr(data, 1, p-1) "\\" substr(data, p, 1);
            data = substr(data, p+1);
        }
        else break;
    }
    esc_data = esc_data data;
    return esc_data;
}

# trial run here
data = "This -- is +the+ <yobbo> new test sample";
win_activate("Untitled - Notepad");
win_type("Untitled - Notepad", no_special(data));

How to convert a variable from ASCII to string?

If we want to generate characters from their ASCII codes, we can use the sprintf() function. Example:

sprintf("%c", 65) will generate "A"

If we want to add a number onto the end of a string, we can simply stick it next to the string. Example:

ball = 5;
print "and the winning number is : " ball;

Putting them together can get some interesting effects. Example:

public arr[] = {72,101,108,108,111,32,102,114,111,109,32,77,105,115,104,97};
msg = "";
for (i in arr) msg = msg sprintf("%c", arr[i]);
print msg;

Hmmm, interesting effect from the elements not being in order (for...in does not guarantee order). I'll try it again:

msg = "";
for (i = 0; i < 16; i++) msg = msg sprintf("%c", arr[i]);
print msg;


Define : The script for WinRunner Database Functions?

The script for WinRunner Database Functions:

Pre-requisites:
---------------
Requires a variable/constant "gstrConnString" defined in our calling script/startup which holds the ODBC connection string.

How To Use:
-----------
Save this file as a compiled module in your search path.
Use load() or reload() to use this script in a test or another function.
Define a variable/constant gstrConnString in our calling script/startup and put the ODBC connection string in this variable, e.g.:

gstrConnString = "DRIVER={Oracle in OraHome92};SERVER=MANOJ; UID=BASECOLL;PWD=BASECOLL;DBA=W;APA=T;EXC=F; XSM=Default;FEN=T;QTO=T;FRC=10;FDL=10;LOB=T;RST=T;GDE=F;FRL=Lo; BAM=IfAllSuccessful;MTS=F;MDI=Me;CSR=F;FWC=F;PFC=10;TLO=O;";

This is the string that I use; the declaration is in the Startup Script and gets its value from configuration.xls:

public const gstrConnString = ddt_val(gstrConfigFilePath, "gstrConnString");

Description:
-----------
Contains the following functions:

A) GetDBColumnValue(in strSql, in strColumn, out strVal)
Use this function when we want only the first/single value of strColumn.
Usage:
strSQL = "Select PRODUCT_CODE from PRODUCT_MASTER where PRODUCT_NAME = 'WINE'";
strColumn = "PRODUCT_CODE";
rc = GetDBColumnValue(strSql, strColumn, strVal);
pause (strVal);

B) GetDBRow(in strSql, out strHeader, out nHeaderCount, out strRow)
Use this function when you require the entire first row of the result set.
strSql = Query to execute
strHeader = Header names separated by tabs (it will hold only 1024 chars, hence not so reliable)
nHeaderCount = Count of columns in the result set
strRow = Row as a tab-separated string

C) GetDBColumnAllValues(in strSql, in strColumn, out strVal[], out nRecord)
Use this function when you require the entire content of a column.
strSQL = Query to execute
strColumn = Column from the query for which values are required
strVal[] = Array that holds the values
nRecord = Gives the number of values retrieved

D) GetDBAllRows(in strSql, out strHeader, out nHeaderCount, out strRow[], out nRecord)
Use this function when you require the entire result set. This returns the whole result set in an array; each row in the array is a tab-separated string.
strSql = Query to execute
strHeader = Header names separated by tabs (it will hold only 1024 chars, hence not so reliable)
nHeaderCount = Count of columns in the result set
strRow[] = Array of rows as tab-separated strings
nRecord = Count of values in strRow

Notes:
------
I have observed that I get correct results only when I use UPPER case while building the query.
In case you find any defects in the script, please communicate them so that I am aware of them and can enhance the script further.
# -----------------------------------------------------------
public function GetDBColumnValue(in strSql, in strColumn, out strVal)
# -----------------------------------------------------------
{
	# Reference to the Connection String Constant
	extern gstrConnString;
	# Holds the result; 0 is success, anything other than 0 is failure
	auto rc;
	# Holds the error that is returned by the ODBC...
	auto strLastError;
	# Holds the number of rows that is returned by strSql
	auto nRecord;

	# Setting strLastError to null
	strLastError = "";

	# Setting rc to unsuccessful
	rc = -9999;

	# Attempt to connect...
	rc = db_connect("obDatabase", gstrConnString);
	if (rc != 0)
	{
		# If failed then return the error code
		report_msg("Could not Connect To database.");
		return rc;
	}

	# Attempt the query execution...
	rc = db_execute_query("obDatabase", strSql, nRecord);
	if (rc != 0)
	{
		# If failed then return code
		db_disconnect("obDatabase");
		report_msg("db_execute_query returned error.");
		return rc;
	}

	if (nRecord == 0)
	{
		# If the number of records returned is 0 then...
		rc = 1;
		db_disconnect("obDatabase");
		report_msg("SQL: " & strSql & ". Returned Zero Rows !!!");
		return rc;
	}

	# Attempt to get the field value...
	strVal = db_get_field_value("obDatabase", "#0", strColumn);
	if (strVal == "")
	{
		# Case strVal is null...
		# Check whether any error has occurred
		db_get_last_error("obDatabase", strLastError);
		if (strLastError != "")
		{
			# If an error has occurred then... return
			db_disconnect("obDatabase");
			rc = 2;
			report_msg("Last DB Error: " & strLastError);
			return rc;
		}
		# If there is no error then the field has null as its value
	}

	# Attempt to disconnect
	rc = db_disconnect("obDatabase");
	if (rc != 0)
	{
		# If error then return...
		report_msg("Could not disconnect.");
		return rc;
	}

	# Empty everything and quit...
	strSql = "";
	strColumn = "";
	strLastError = "";
	return rc;
}
# --------------------------------------------------
public function GetDBRow(in strSql, out strHeader, out nHeaderCount, out strRow)
# --------------------------------------------------
{
	# Reference to the Connection String Constant
	extern gstrConnString;

	# Holds the result
	auto rc;

	# Holds the error that is returned by the ODBC...
	auto strLastError;

	# Holds the number of records returned by the query
	auto nRecord;

	# Set strLastError to null
	strLastError = "";

	# Set rc as unsuccessful
	rc = -9999;

	# Attempt to establish a connection
	rc = db_connect("obDatabase", gstrConnString);
	if (rc != 0)
	{
		# On error return the error code
		report_msg("Could not Connect To database.");
		return rc;
	}

	# Attempt to execute the query...
	rc = db_execute_query("obDatabase", strSql, nRecord);
	if (rc != 0)
	{
		# On error return the error code
		db_disconnect("obDatabase");
		report_msg("db_execute_query returned error.");
		return rc;
	}

	# Case the number of records returned is zero
	if (nRecord == 0)
	{
		rc = 1;
		db_disconnect("obDatabase");
		report_msg("SQL: " & strSql & ". Returned Zero Rows !!!");
		return rc;
	}

	# Attempt to get the row
	rc = db_get_row("obDatabase", "#0", strRow);
	if (rc != 0)
	{
		# Case error
		db_disconnect("obDatabase");
		report_msg("db_get_row returned error.");
		return rc;
	}

	# Attempt to get the headers
	rc = db_get_headers("obDatabase", nHeaderCount, strHeader);
	if (rc != 0)
	{
		# Case error
		db_disconnect("obDatabase");
		report_msg("db_get_headers returned error.");
		return rc;
	}

	# If strRow is null then check if any error has occurred
	if (strRow == "")
	{
		db_get_last_error("obDatabase", strLastError);
		if (strLastError != "")
		{
			# If strLastError is not null then return the error
			rc = 2;
			db_disconnect("obDatabase");
			report_msg("Last DB Error: " & strLastError);
			return rc;
		}
	}

	# Disconnect the db
	rc = db_disconnect("obDatabase");
	if (rc != 0)
	{
		report_msg("Could not disconnect.");
		return rc;
	}
	strSql = "";
	strLastError = "";
	return rc;
}

# -------------------------------------------------------
public function GetDBColumnAllValues(in strSql, in strColumn, out strVal[], out nRecord)
# -------------------------------------------------------
{
	# Reference to the Connection String Constant
	extern gstrConnString;

	# Holds the result; 0 is success, anything other than 0 is failure
	auto rc;

	# Holds the error that is returned by the ODBC...
	auto strLastError;

	# Holds the index of the strVal array
	auto i;

	# Setting strLastError to null
	strLastError = "";

	# Setting rc to unsuccessful
	rc = -9999;

	# Attempt to connect...
	rc = db_connect("obDatabase", gstrConnString);
	if (rc != 0)
	{
		# If failed then return the error code
		report_msg("Could not Connect To database.");
		return rc;
	}

	# Attempt the query execution...
	rc = db_execute_query("obDatabase", strSql, nRecord);
	if (rc != 0)
	{
		# If failed then return code
		db_disconnect("obDatabase");
		report_msg("db_execute_query returned error.");
		return rc;
	}

	if (nRecord == 0)
	{
		# If the number of records returned is 0 then...
		rc = 1;
		db_disconnect("obDatabase");
		report_msg("SQL: " & strSql & ". Returned Zero Rows !!!");
		return rc;
	}

	i = 1;
	do
	{
		# Attempt to get the field value...
		strVal[i] = db_get_field_value("obDatabase", "#" & (i-1), strColumn);
		if (strVal[i] == "")
		{
			# Case strVal is null... check whether any error has occurred
			db_get_last_error("obDatabase", strLastError);
			if (strLastError != "")
			{
				# If an error has occurred then... return
				db_disconnect("obDatabase");
				rc = 2;
				report_msg("Last DB Error: " & strLastError);
				return rc;
			}
			# If there is no error then the field has null as its value
		}
		i++;
	}
	while (i <= nRecord);

	# Attempt to disconnect
	rc = db_disconnect("obDatabase");
	if (rc != 0)
	{
		# If error then return...
		report_msg("Could not disconnect.");
		return rc;
	}

	# Empty everything and quit...
	strSql = "";
	strColumn = "";
	strLastError = "";
	return rc;
}


# -----------------------------------------------------
public function GetDBAllRows(in strSql, out strHeader, out nHeaderCount, out strRow[], out nRecord)
# -----------------------------------------------------
{
	# Reference to the Connection String Constant
	extern gstrConnString;

	# Holds the result
	auto rc;

	# Holds the error that is returned by the ODBC...
	auto strLastError;

	# Holds the index of the array strRow[]
	auto i;

	# Temporary string
	auto strTmp;

	# Set strLastError to null
	strLastError = "";
	strTmp = "";

	# Set rc as unsuccessful
	rc = -9999;

	# Attempt to establish a connection
	rc = db_connect("obDatabase", gstrConnString);
	if (rc != 0)
	{
		# On error return the error code
		report_msg("Could not Connect To database.");
		return rc;
	}

	# Attempt to execute the query...
	rc = db_execute_query("obDatabase", strSql, nRecord);
	if (rc != 0)
	{
		# On error return the error code
		db_disconnect("obDatabase");
		report_msg("db_execute_query returned error.");
		return rc;
	}

	# Case the number of records returned is zero
	if (nRecord == 0)
	{
		rc = 1;
		db_disconnect("obDatabase");
		report_msg("SQL: " & strSql & ". Returned Zero Rows !!!");
		return rc;
	}

	i = 1;
	do
	{
		strTmp = "";
		# Attempt to get the row
		rc = db_get_row("obDatabase", (i-1), strTmp);
		if (rc != 0)
		{
			# Case error
			db_disconnect("obDatabase");
			report_msg("db_get_row returned error.");
			return rc;
		}
		# Push strTmp into the array
		strRow[i] = strTmp;
		# Increment i
		i++;
	} while (i <= nRecord);

	# Attempt to get the headers
	rc = db_get_headers("obDatabase", nHeaderCount, strHeader);
	if (rc != 0)
	{
		# Case error
		db_disconnect("obDatabase");
		report_msg("db_get_headers returned error.");
		return rc;
	}

	# Disconnect the db
	rc = db_disconnect("obDatabase");
	if (rc != 0)
	{
		report_msg("Could not disconnect.");
		return rc;
	}
	strSql = "";
	strLastError = "";
	strTmp = "";
	return rc;
}

public function getConnection(inout strConn)
{
	# Reference to the Connection String Constant
	extern gstrConnString;

	# Holds the result
	auto rc;

	# Set rc as unsuccessful
	rc = -9999;

	# Attempt to establish a connection
	rc = db_connect(strConn, gstrConnString);
	if (rc != 0)
	{
		# On error return the error code
		report_msg("Could not Connect To database.");
		return rc;
	}
	return rc;
}
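A hedged usage sketch: fetch one row with GetDBRow and break the tab-separated string into fields with TSL's split function (the table name is an assumption, and gstrConnString must already be defined as described above):

rc = GetDBRow("Select * from PRODUCT_MASTER", strHeader, nHeaderCount, strRow);
if (rc == 0)
{
    nFields = split(strRow, fields, "\t");   # fields[1..nFields]
    for (j = 1; j <= nFields; j++)
        report_msg(fields[j]);
}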

How to get time duration in millisecond in WinRunner?


All we have to do is save the time (get_time()) at the start and at the end of the test, and then format the difference in terms of the seconds that passed in between.

The code below will do what you want and more:

const SECOND = 1;
const MINUTE = 60 * SECOND;
const HOUR = MINUTE * 60;
const DAY = HOUR * 24;
const YEAR = DAY * 365;

# Note: no MONTH constant is defined, so the months out-parameter is never set.
public function dddt_CalculateDuration(in oldTime, in newTime, out strDuration, out years, out months, out days, out hours, out minutes, out seconds)
{
    auto timeDiff, plural = "s, ", singular = ", ", remainder;

    timeDiff = oldTime - newTime;

    if (timeDiff >= YEAR)
    {
        remainder = timeDiff % YEAR;
        years = (timeDiff - remainder) / YEAR;
        timeDiff = remainder;
    }

    if (timeDiff >= DAY)
    {
        remainder = timeDiff % DAY;
        days = (timeDiff - remainder) / DAY;
        timeDiff = remainder;
    }

    if (timeDiff >= HOUR)
    {
        remainder = timeDiff % HOUR;
        hours = (timeDiff - remainder) / HOUR;
        timeDiff = remainder;
    }

    if (timeDiff >= MINUTE)
    {
        remainder = timeDiff % MINUTE;
        minutes = (timeDiff - remainder) / MINUTE;
        timeDiff = remainder;
    }

    seconds = timeDiff;

    strDuration = "";

    if (years)
    {
        strDuration = years & " Year";
        if (years > 1)
            strDuration = strDuration & plural;
        else
            strDuration = strDuration & singular;
    }

    if (days)
    {
        strDuration = strDuration & days & " Day";
        if (days > 1)
            strDuration = strDuration & plural;
        else
            strDuration = strDuration & singular;
    }

    if (hours)
    {
        strDuration = strDuration & hours & " Hour";
        if (hours > 1)
            strDuration = strDuration & plural;
        else
            strDuration = strDuration & singular;
    }

    if (minutes)
    {
        strDuration = strDuration & minutes & " Minute";
        if (minutes > 1)
            strDuration = strDuration & plural;
        else
            strDuration = strDuration & singular;
    }

    if (seconds)
    {
        strDuration = strDuration & seconds & " Second";
        if (seconds > 1)
            strDuration = strDuration & "s.";
        else
            strDuration = strDuration & ".";
    }

    return E_OK;
}
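A hedged usage sketch (note that get_time returns whole seconds, so the resolution is one second rather than milliseconds):

t1 = get_time();
# ... run the test steps here ...
t2 = get_time();
dddt_CalculateDuration(t2, t1, strDuration, years, months, days, hours, minutes, seconds);
report_msg("Elapsed: " & strDuration);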


Working with QTP on a web application developed in .Net

Working with QTP on a web application developed in .Net: when recording and running the script, most of the links are not recognized by QTP; those links are generated dynamically and appear in different places. Try changing the Web Event Recording Configuration.

Go to Tools

> Web Event Recording Configuration, and change the setting to High.

If the links are dynamically generated, try changing the recorded object properties. After recording, right-click on the recorded object and select Object Properties. From this screen you can add/remove attributes for playback that were previously recorded. Focus on attributes of the object that are not specific to location and do not change (HTML ID, maybe).

How to verify the animations(gif files) present in the applications using WinRunner?

WinRunner doesn't support testing that technology.

You will need to find another tool to do that. QuickTest may be a possible choice for you. Go to the Mercury site and look at the list of supported technologies for QuickTest Pro 6.5 and above (not Astra).

WinRunner: Should I sign up for a course at a nearby educational institution?

When we're employed, the cheapest or free education is sometimes provided on the job, by our employer, while we are getting paid to do a job that requires the use of WinRunner and many other software testing tools.

If we're employed but have little or no time, we could still attend classes at nearby educational institutions.

If we're not employed at the moment, then we've got more time than everyone else, so that's when we definitely want to sign up for courses at nearby educational institutions. Classroom education, especially non-degree courses in local community colleges, tends to be cheap.

How important is QTP in automated testing? Is manual testing (with TestDirector) alone enough, or do we require automated tools in every project? What are the advantages of QTP?

Most projects that are being automated should not be, because they're not ready to be. Most managers assume that automated functional GUI testing will replace testers.

It won't. It just runs the same tests over, and over, and over. When changes are made to the system under test, those changes either break the existing automated tests or are not covered by them. Automated functional GUI testing is usually a waste of time. TestDirector is not used for executing any actual test activity; it is a test management tool used for Requirements Management, Test Plan, Test Lab, and Defects Management. Even if the individual test cases are not automated, TestDirector can make life much easier during the test cycles.

Tell me about the TestDirector?

The TestDirector is a software tool that helps software QA professionals to gather requirements; to plan, schedule, and run tests; and to manage and track defects/issues/bugs. It is a single browser-based application that streamlines the software QA process.

The TestDirector's "Requirements Manager" links test cases to requirements, ensures traceability, and calculates what percentage of the requirements are covered by tests, how many of these tests have been run, and how many have passed or failed. As to planning, test plans can be created or imported, for both manual and automated tests. The test plans can then be reused, shared, and preserved.

The TestDirector's "Test Lab Manager" allows us to schedule tests to run unattended, or even overnight. The TestDirector's "Defect Manager" supports the entire bug life cycle, from initial problem detection through fixing the defect and verifying the fix. Additionally, the TestDirector can create customizable graphs and reports, including test execution reports and release status assessments.

What is a backward compatible design?

A design is backward compatible if it continues to work with earlier versions of a language, program, code, or software. When the design is backward compatible, the signals or data that have to be changed do not break the existing code.

For instance, a (mythical) web designer decides he should make some changes, because the fun of using Javascript and Flash is more important (to his customers) than his backward compatible design. Or, alternatively, he decides he has to make some changes because he doesn't have the resources to maintain multiple styles of backward compatible web design.

Therefore, our mythical web designer's decision will inconvenience some users, because some of the earlier versions of Internet Explorer and Netscape will not display his web pages properly (as there are some serious improvements in the newer versions of Internet Explorer and Netscape that make the older versions of these browsers incompatible with, for example, DHTML).

This is when we say, "Our (mythical) web designer's code fails to work with earlier versions of browser software; therefore his design is not backward compatible." On the other hand, if the same mythical web designer decides that backward compatibility is more important than fun, or if he decides that he does have the resources to maintain multiple styles of backward compatible code, then obviously no user will be inconvenienced when Microsoft or Netscape make some serious improvements in their web browsers. This is when we can say, "Our mythical web designer's design is backward compatible."

How to get the compiler to create a DLL ?

In the Borland compiler, create a "console DLL."

A console application is one that does not have a GUI windows queue component. This seems to work well and has a very small footprint.

How to export DLL functions so that WinRunner could recognise them?

Create the following definition in the standard header file:

#define WR_EXPORTED extern "C" __stdcall __declspec(dllexport)

and write a function; it looks something like this:

WR_EXPORTED UINT WrGetComputerName( )
{
. . .
}

How to pass parameters between WinRunner and the DLL function?

Passing Strings (a DLL function):

In WinRunner,

extern int WrTestFunction1( in string );

In the DLL,

WR_EXPORTED int WrTestFunction1( char *lcStringArg1 )

{

. . .

return( {some int value} );

}

And then to use it in WinRunner,

WrTestFunction1( \"Fred\" );

Receiving Strings:

In WinRunner,

extern int WrTestFunction1( out string <10>); #The <10> tells WinRunner how much space to use for a buffer for the returned string.

In the DLL,

WR_EXPORTED int WrTestFunction1( char *lcStringArg1 )

{

. . .

{some code that populates lcStringArg1};

. . .

return( {some int value} );

}

And then to use it in WinRunner,

WrTestFunction1( lcString1 );

# lcString1 now contains a value passed back from the DLL function


Passing Numbers (a DLL function)

In WinRunner,

extern int WrTestFunction1( in int );

In the DLL,

WR_EXPORTED int WrTestFunction1( int lnIntegerArg1 )

{

. . .

return( {some int value} );

}

And then to use it in WinRunner,

WrTestFunction1( 2 );

Receiving Numbers

In WinRunner,

extern int WrTestFunction1( out int );

In the DLL,

WR_EXPORTED int WrTestFunction1( int *lnIntegerArg1 )

{

. . .

*lnIntegerArg1 = {some number};

return( {some int value} );

}

And then to use it in WinRunner,

WrTestFunction1( lnNum );

# lnNum now contains a value passed back from the DLL function


Here are some example functions.

#define WR_EXPORTED extern "C" __stdcall __declspec(dllexport)

#define WR_SUCCESS 0

#define WR_FAILURE 100000

#define FAILURE 0

#define WR_STAGE_1 10000

#define WR_STAGE_2 20000

#define WR_STAGE_3 30000

#define WR_STAGE_4 40000

#define WR_STAGE_5 50000

#define WR_STAGE_6 60000

#define WR_STAGE_7 70000

#define WR_STAGE_8 80000

#define WR_STAGE_9 90000

#define MAX_USERNAME_LENGTH 256

#define HOST_NAME_SIZE 64

WR_EXPORTED UINT WrGetComputerName( LPTSTR lcComputerName )

{

BOOL lbResult;

DWORD lnNameSize = MAX_COMPUTERNAME_LENGTH + 1;

// Stage 1

lbResult = GetComputerName( lcComputerName, &lnNameSize );

if( lbResult == FAILURE )

return( WR_FAILURE + WR_STAGE_1 + GetLastError() );

return( WR_SUCCESS );

}

WR_EXPORTED UINT WrCopyFile( LPCTSTR lcSourceFile, LPCTSTR lcDestFile, BOOL lnFailIfExistsFlag )

{

BOOL lbResult;

// Stage 1

lbResult = CopyFile( lcSourceFile, lcDestFile, lnFailIfExistsFlag );

if( lbResult == FAILURE )

return( WR_FAILURE + WR_STAGE_1 + GetLastError() );

return( WR_SUCCESS );

}

WR_EXPORTED UINT WrGetDiskFreeSpace( LPCTSTR lcDirectoryName,

LPDWORD lnUserFreeBytesLo,

LPDWORD lnUserFreeBytesHi,

LPDWORD lnTotalBytesLo,

LPDWORD lnTotalBytesHi,

LPDWORD lnTotalFreeBytesLo,

LPDWORD lnTotalFreeBytesHi )

{

BOOL lbResult;

ULARGE_INTEGER lsUserFreeBytes,

lsTotalBytes,

lsTotalFreeBytes;

// Stage 1

lbResult = GetDiskFreeSpaceEx( lcDirectoryName,

&lsUserFreeBytes,

&lsTotalBytes,

&lsTotalFreeBytes );

if( lbResult == FAILURE )

return( WR_FAILURE + WR_STAGE_1 + GetLastError() );

*lnUserFreeBytesLo = lsUserFreeBytes.LowPart;

*lnUserFreeBytesHi = lsUserFreeBytes.HighPart;

*lnTotalBytesLo = lsTotalBytes.LowPart;

*lnTotalBytesHi = lsTotalBytes.HighPart;

*lnTotalFreeBytesLo = lsTotalFreeBytes.LowPart;

*lnTotalFreeBytesHi = lsTotalFreeBytes.HighPart;

return( WR_SUCCESS );

}
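A hedged sketch of calling the first of these functions from WinRunner (the DLL path and name are assumptions; per the buffer convention shown earlier, a generous out-string size is declared):

load_dll("c:\\mydlls\\wrutils.dll");            # hypothetical path/name
extern int WrGetComputerName(out string <256>); # buffer for the returned name
rc = WrGetComputerName(compName);
if (rc == 0)
    report_msg("Computer name: " & compName);
unload_dll("c:\\mydlls\\wrutils.dll");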

Why Have TSL Test Code Conventions ?

TSL Test Code conventions are important to TSL programmers for a number of reasons:

> 80% of the lifetime cost of a piece of software goes to maintenance.

> Hardly any software is maintained for its whole life by the original author.

> TSL Code conventions improve the readability of the software, allowing engineers to understand new code more quickly and thoroughly.

> If we ship our source code as a product, we need to make sure it is as well packaged and clean as any other product we create.

Define : Test Script Naming ?

Test Type + Project Name + Version Number + Module Name + Test Script Function.

For example:

Test type = UAT

Project Name = MHE

Version of the Project = 3.2

Module Name = Upload

Function Name = Excel_File

So the entire file name would be UAT_MHE_3.2_Upload_Excel_File

Note & Caution :

> Make sure the entire file name saved is below 255 characters.

> Use the underscore "_" character instead of the hyphen "-" or space character for separation.

> It is highly recommended to store the test scripts remotely, in a common folder or in the TestDirector repository, where they can be accessed by the test team at any time.

> Do not use any special characters in the test script name, like "*&^#@!" etc.

> In this document, "script" and "test script" (TSL) mean the same thing; please don't get confused.

Define : Test script Directory structure: ?

WinRunner recognizes a test script as a file that is stored as a directory in the operating system. The script's TSL code, header information, checklist files, results, expected results, etc. are stored in these directories for each and every script.

> Do not modify or delete anything inside these directories manually without consulting an expert.

> Try to have scripts with 500 lines or fewer.

> While creating multiple scripts, make sure they follow the directory and subdirectory structure, i.e. every script is stored under a folder for its respective module, and a main script which calls all these scripts sits in a parent folder above them. In a nutshell, "all the scripts must be organized and should follow a hierarchy".

> If a module contains more than 2 scripts, an Excel file is kept in the respective folder which gives details of the test scripts and their functionality in a short description. E.g. the Excel sheet can contain fields like Test Plan No, Test Script No, Description of the Test Script, Status of Last Run, Negative or Non-negative Test.

> Also make sure that every script has a text file which contains the test results of the last run.

> Script folders that contain unwanted files, and results folders, must be cleaned periodically.

> A backup of all the scripts (zipped) should be taken, on a hard drive, CD-ROM, zip drive, etc., and kept safely.

All the TSL script files should begin with a comment that lists the script name, description of the script, version information, date, and copyright notice:


Script Name:

Script Description:

Version information:

Date created and modified:

Copyright notice:

Author:

Comments generated by WinRunner:

WinRunner automatically generates some comments during recording. If they make sense, leave them; otherwise modify them accordingly.

Single-line comment at the end of a line:

Accessfile = create_browse_file_dialog("*.mdb"); # Opens an Open dialog for an Access table.

It is mandatory to add a comment for a test call:

call crea_org0001(); # Call test to create organization

It is mandatory to add comments when you are using a public variable that is not defined in the present script:

web_browser_invoke(NETSCAPE, strUrl); # strUrl is a variable defined in the init script

Note: The frequency of comments sometimes reflects poor quality of code. When you feel compelled to add a comment, consider rewriting the code to make it clearer. Comments should never include special characters such as form-feed.

Define : Creating C DLLs for use with WinRunner?

These are the steps to create a DLL that can be loaded and called from WinRunner.

> Create a new Win32 Dynamic Link Library project, name it, and click <Next>.

> On Step 1 of 1, select "An empty DLL project," and click <Finish>.

> Click <OK> in the New Project Information dialog.

> Select File > New from the VC++ IDE.

> Select \"C++ Source File,\" name it, and click <OK>.

> Close the newly created C++ source file window.

> In Windows Explorer, navigate to the project directory and locate the .cpp file you created.

> Rename the .cpp file to a .c file

> Back in the VC++ IDE, select the FileView tab and expand the tree under the Projects Files node.

> Select the Source Files folder in the tree and select the .cpp file you created.

> Press the Delete key; this will remove that file from the project.

> Select Project > Add To Project > Files from the VC++ IDE menu.

> Navigate to the project directory if you are not already there, and select the .c file that we renamed above.

> Select the .c file and click <OK>. The file will now appear under the Source Files folder.

> Double-click on the .c file to open it.

> Create our functions in the following format:


#include \"include1.h\"

#include \"include2.h\"

.

.

.

#include \"includen.h\"

#define EXPORTED __declspec(dllexport)

<return type> EXPORTED <function1 name>(<type arg1> <arg1>,

<type arg2> <arg2>,

�,

<type argn> <argn>)

{

<function body>

return <some value>;

}

.

.

.

<return type> EXPORTED <functionN name>(<type arg1> <arg1>,

<type arg2> <arg2>,

�,

<type argn> <argn>)

{

<function body>

return <some value>;

}

> Choose Build <Project name>.DLL from the VC++ IDE menu.

> Fix any errors and repeat step 17.

> Once the DLL has compiled successfully, the DLL will be built in either a Debug directory or a Release directory under your project folder depending on your settings when you built the DLL.

> To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click <OK>, then rebuild the project (step 17).

> All the DLLs types that you are going to create are loaded and called in the same way in WinRunner. This process will be covered once in a later section. 

How i Creating C++ DLLs for use with WinRunner?

Here are the steps for creating a C++ DLL:

> Create a new Win32 Dynamic Link Library project, name it, and click <Next>.

> On Step 1 of 1, select \"An Empty DLL Project,\" and click <Finish>.

> Click <OK> in the New Project Information dialog.

> Select File > New from the VC++ IDE.

> Select "C++ Source File," name it, and click <OK>.

> Double-click on the .cpp file to open it.

> Create our functions in the following format:


#include \"include1.h\"

#include \"include2.h\"

.

.

.

#include \"includen.h\"


#define EXPORTED extern \"C\" __declspec(dllexport)


EXPORTED <return type> <function1 name>(<type arg1> <arg1>,

<type arg2> <arg2>,

�,

<type argn> <argn>)

{

<function body>

return <some value>;

}

.

.

.

EXPORTED <return type> <functionN name>(<type arg1> <arg1>,

<type arg2> <arg2>,

�,

<type argn> <argn>)

{

<function body>

return <some value>;

}


> Choose Build <Project name>.DLL from the VC++ IDE menu.

> Fix any errors and repeat step 8.

> Once the DLL has compiled successfully, the DLL will be built in either a Debug directory or a Release directory under your project folder depending on your settings when you built the DLL.

> To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click <OK>, then rebuild the project.

> All the DLLs types that you are going to create are loaded and called in the same way in WinRunner. This process will be covered once in a later section. 

Define : Creating MFC DLLs for use with WinRunner?

> Create a new MFC AppWizard(DLL) project, name it, and click <Next>.

> In the MFC AppWizard Step 1 of 1, accept the default settings and click <Finish>.

> Click <OK> in the New Project Information dialog.

> Select the ClassView tab in the ProjectView and expand the classes tree. You will see a class that has the following name C<project name>App; expand this branch.

> You should see the constructor function C<project name>App(); double-click on it.

> This should open the .cpp file for the project. At the very end of this file add the following definition :

#define EXPORTED extern "C" __declspec( dllexport )

> Below you will add your functions in the following format:

#define EXPORTED extern "C" __declspec(dllexport)

EXPORTED <return type> <function1 name>(<type arg1> <arg1>,
<type arg2> <arg2>,
...,
<type argn> <argn>)
{
<function body>
return <some value>;
}
.
.
.
EXPORTED <return type> <functionN name>(<type arg1> <arg1>,
<type arg2> <arg2>,
...,
<type argn> <argn>)
{
<function body>
return <some value>;
}


> We will see the functions appear under the Globals folder in the ClassView tab in the ProjectView.

9. Choose Build <Project name>.DLL from the VC++ IDE menu.

10. Fix any errors and repeat step 9.

11. Once the DLL has compiled successfully, the DLL will be built in either a Debug directory or a Release directory under your project folder depending on your settings when you built the DLL.

12. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click <OK>, then rebuild the project (step 9).

13. All the DLLs types that you are going to create are loaded and called in the same way in WinRunner. This process will be covered once in a later section. 

Define: Creating MFC Dialog DLLs for use with WinRunner

> Create a new MFC AppWizard(DLL) project, name it, and click <Next>.

> In the MFC AppWizard Step 1 of 1, accept the default settings and click <Finish>.

> Click <OK> in the New Project Information dialog.

> Select the ClassView tab in the ProjectView and expand the classes tree. You will see a class that has the following name C<project name>App; expand this branch also.

> You should see the constructor function C<project name>App(); double-click on it.

> This should open the .cpp file for the project. At the very end of this file add the following definition :

#define EXPORTED extern "C" __declspec( dllexport )


> Switch to the ResourceView tab in the ProjectView.

> Select Insert > Resource from the VC++ IDE menu.

> Select Dialog from the Insert Resource dialog and click <New>.

> The Resource Editor will open, showing you the new dialog. Add the controls you want to the dialog, and set the properties of the controls you added.

> Switch to the ClassView tab in the ProjectView and select View > ClassWizard from the VC++ IDE menu, or double-click on the dialog you are creating.

> The Class Wizard should appear with an "Adding a Class" dialog in front of it. Select "Create a New Class" and click <OK>.

> In the New Class dialog that comes up, give our new class a name and click <OK>.

> In the Class Wizard, change to the Member Variables tab and create new variables for the controls you want to pass information to and from. Do this by selecting the control, clicking <Add Variable>, typing in the variable name, selecting the variable type, and clicking <OK>. Do this for each variable you want to create.

> Switch to the Message Maps tab in the Class Wizard. Select the dialog class from the Object IDs list, then select the WM_PAINT message from the Messages List. Click <Add Function>, then <Edit Code>. This should bring up the function body for the OnPaint function.

> Add the following lines to the OnPaint function so it looks like the following:

void <the dialog class>::OnPaint()
{
CPaintDC dc(this); // device context for painting
this->BringWindowToTop();
UpdateData(FALSE);
// Do not call CDialog::OnPaint() for painting messages
}


> Select IDOK from the Object IDs list, then select the BN_CLICKED message from the Messages list. Click <Add Function>, accept the default name, and click <Edit Code>.

> Add the line UpdateData(TRUE); to the function, so it looks like this:

void <the dialog class>::OnOK()

{

UpdateData(TRUE);

CDialog::OnOK();

}

19. When you are done with this, click <OK> to close the Class Wizard dialog and apply your changes. Your new class should appear in the ProjectView in the ClassView tab.

> In the tree on the ClassView tab, double-click on the constructor function for the C<project name>App (see step 5).

> At the top of the file, along with the other includes, add an include statement to include the header file for your dialog class. It should be the same name as the name you gave the class in step 13 with a .h appended to it. If you are unsure of the name, you can look it up on the FileView tab under the Header Files folder.

22. At the very end of the file, after the #define you created in step 6, create a function that looks something like this:

EXPORTED int create_dialog(char* thestring)
{
AFX_MANAGE_STATE(AfxGetStaticModuleState());
<dialog class> theDlg;
theDlg.<var1> = <initial value>;
theDlg.DoModal();
<do whatever conversion is necessary to convert the value to a string>
strcpy(thestring, strVar1); // this will pass the value back to WinRunner
return 0;
}

> Choose Build <Project name>.DLL from the VC++ IDE menu.

> Fix any errors and repeat step 23.

> Once the DLL has compiled successfully, the DLL will be built in either a Debug directory or a Release directory under your project folder depending on your settings when you built the DLL.

> To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, then select the configuration you want from the dialog. Click <OK>, then rebuild the project (step 23).

> All the DLLs types that you are going to create are loaded and called in the same way in WinRunner. This process will be covered once in a later section. 

Loading and Calling the Above DLLs from WinRunner

Loading and calling DLLs from WinRunner is really very simple. There are only 3 steps (a short sketch follows the list).

> Load the DLL using the command load_dll.

> Declare the function in the DLL as an external function using the extern function.

> Call the function as you would any other TSL function.
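A hedged sketch of the three steps, using the WrCopyFile function from the examples above (the DLL path and name are assumptions; BOOL is declared as long, per the note on types below):

load_dll("c:\\mydlls\\wrutils.dll");                  # step 1: load (hypothetical path)
extern int WrCopyFile(in string, in string, in long); # step 2: declare
rc = WrCopyFile("c:\\src.txt", "c:\\dest.txt", 0);    # step 3: call like any TSL function
report_msg("WrCopyFile returned " & rc);
unload_dll("c:\\mydlls\\wrutils.dll");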

As simple as this is, there are some things you need to be aware of.

* WinRunner has a limited number of variable types: basically there is string, int, and long. Windows has many different types. Two common types that may confuse you are HWND and DWORD. Which WinRunner type do you choose for these? You should declare them as long.

* If we are building a function in a DLL and testing it in WinRunner, make sure we unload the DLL in WinRunner using the unload_dll function before trying to recompile the DLL. If we leave the DLL loaded in WinRunner and try to recompile it, we will receive an error message in VC++ that looks like this:

> LINK : fatal error LNK1104: cannot open file "Debug/<project name>.DLL". Error executing link.exe.

To resolve this error, step through the unload_dll line in WinRunner, then compile the DLL.

* Before shipping a DLL, make sure you compile it in Release mode. This will make the DLL much smaller and optimized.

Definition of Tests ?

As a prime entry point, defining the tests requires an idea of how to classify the scripts into finer functional elements, each contributing to a different aspect of the automation technique.

From this perspective, the elements of the automation script require record/playback techniques, details of the application (better understood as objects in the tools), execution of the business logic using loop constructs, and test data accessibility for either batch processing or any back-end operations.

Ultimately, we need all these salient features to function at the right point in time, getting the right inputs. To satisfy these criteria, we require a lot of planning before we start automating the test scripts.

Test Recorder about Object Vs Actions ?

In automation tools the test recorder has two modes: Object mode and Action mode. Choosing which mode to use requires a meticulous yet simple approach. Though Action mode cannot always be avoided, it is still used mostly for TE-based (terminal emulator) applications.

As a best practice, Object mode is the widely accepted and mandatory mode of operation in test automation. To the extent possible, we avoid Action-based functions and stick to Object mode.

Test Recorder about Generic Test Environment Options ?

Some generic settings we need to set in General Options:

> Default Recording Mode is Object mode

> Synch Point time is 10 seconds as default

> When Test Execution is in Batch Mode ensure all the options are set off so that the Batch test runs uninterrupted

> For Text Recognition, if the application's text is not recognizable, set a default font group: the font group is identified with a user-defined name and then included in the General Options.

Test Recorder about Test Properties ?

> Before recording, ensure that every script's Test Properties are set to Main Test with the defaults.

> Do not entertain any parameters for the Main Test.

> It is not a good practice to load the object library from the Test Options (if any). Rather, load the object library from the script using the suitable tool commands. This avoids hidden settings in the script, and loading and unloading the object library can be done dynamically in the test script rather than manually every time the test suite is run.

> Ensure the add-ins are correct on the Add-ins tab.

Test Recorder about Script Environment

The basic idea of setting up the test bed is that the test suite must be portable and can readily be run in any environment, given the initial conditions.

For this to happen, the automation tool supports a lot of functions, which lets us evolve a generic methodology where the entire set of built-ins is wrapped up to run before the test suite starts executing the script. In other words, the way the test scripts are organized should reflect the automation developer's foresight, anticipating issues and hurdles that can be avoided with little or no extra programming.

Test Recorder about Script Environment: Automation Inits ()

Common functions that go into the initialization script are listed below (a sketch follows the list):

> Use built-in commands to keep the test path dynamically loaded, ruling out hard-coded test path definitions.

> Close all the object files and data files in the initialization script.

> Establish the database connections in the Inits script.

> Always unload and load the object library, and do it only in the Inits script.

> Define all the "public" variables in the Inits script.
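A hedged sketch of such an initialization script (the paths, map name, environment variable, and session name are assumptions; gstrConnString is the connection string discussed earlier):

# Inits: run once before the suite starts
public gsTestPath;
gsTestPath = getenv("TEST_HOME");             # keep the test path dynamic (hypothetical env var)
ddt_close_all_tables();                        # close any open data files
GUI_unload_all();                              # unload any loaded object libraries (GUI maps)
GUI_load(gsTestPath & "\\gui\\app.gui");       # load the object library for this suite
rc = db_connect("dbSession", gstrConnString);  # establish the db connection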

Test Recorder about Control Points

In any given automation tool, the overall control of the AUT is by the object identification technique. Through this unique feature, the tool treats the application as a medium it can interrogate with tester-supplied inputs, exercising the flow of the business logic.

Using this object identification technique, the test tool has certain control features that check the application at various points in time. Innumerable criteria, myriads of object handlers, and plenty of predefined conditions are the features that make up the so-called object-based functional checkpoints. Each tester has a different perspective on defining the control points.

Test Recorder about Control Points - If. � Else:

> Before we start the "if else" construct, the nature of the control point is commented alongside. For e.g. (rc, the return code being checked, is assumed here; the original omitted the variable name):

# Home Page Validation
if (rc == "0")
    print ("Successfully Launched");
else
    print ("Operation Unsuccessful");

> For all data table operations, the return code of the open function should be handled in the "if else" construct.

Test Recorder about Data Access

In automation, test data becomes very critical to control, supplement, and transfer into the application. In automation tools, test data is handled in data sheets of Excel format or in a .csv file, which is basically a character-separated file, using the data-driven technology.

In most regression batch testing, the test data is handled in data tables with proper allocation of test data in the sheets.

Test Recorder about Control Points - Check Points

> No checkpoint should depend on X and Y co-ordinates. In practical terms, if a checkpoint is defined on X,Y parameters, the control point is of no real use for testing the application. The following criteria denote the dos and don'ts of checkpoints:

S.No  Check Point    Include                  Exclude
a     Text Check     Capture Text             Position of the text, font & font size, text area
b     Bitmap Check   Only the picture         Window or screen that holds the picture, x-y co-ordinates
c     Web Check      URL check, orphan page   Any text validation


> As a case study, the WinRunner automation tool is used here as an example for creating checkpoints. Avoid obj_check_info or win_check_info; instead, always create GUI checkpoints with multiple properties, as sketched below. The advantage is that every small object is identified with its class, its properties, and its relation to previous versions. This not only enables regression comparisons but also gives the flexibility of defining GUI checks for every physical state of the object.
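A minimal hedged example of a multiple-property GUI checkpoint (the window/object names and the checklist/expected-results names, normally generated by the Create > GUI Checkpoint wizard, are assumptions):

set_window("Login", 5);
obj_check_gui("OK", "list1.ckl", "gui1", 1);  # verifies the properties captured in list1.ckl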

Test Recorder about Data Handlers

Test data can be accessed by built-in data functions. Here are some common practices that will help an automation tester use the data tables properly.

> SINGLE DATA TABLE: By default, every automation tool provides a data table as an input file, which can be created using a tool wizard or by creating a character-separated file. The wizard helps in creating a data sheet whose column names come from the objects used in the test. With this concept, we can evolve a technique to load any file or drive the AUT through a predefined set of cases.

> MULTIPLE DATA TABLE: It's a common practice to use the single default data file for many test scripts, and the usage of data tables is often restricted to one file at a time. Handling multiple data tables within one script is not advisable and incurs a lot of redundant code for the table manipulations. As a general practice, a data file is mapped to every script; this means every test script has a unique data table, giving easier data access and data operations that are easy to maintain.

In Compuware\'s QARun following is the code used.


// Run a test script

TestData (\"CreditLogon.csv\")

Call TestFunc1


For e.g. in Mercury Interactive's WinRunner:

call_close "Test_Script1" ("dTable1.xls");
#
call_close "Test_Script2" ("dTable2.xls");


> Data files should be initialized before starting, by using simple tool commands to copy a standard template data table over the actual table. With this practice, the need to delete data from the data table after every run can be avoided.

In Mercury Interactive\'s WinRunner the piece of code below explains the data table Initialization.

#/***************Data Table Initialization*****************

ddt_open(Template, DDT_MODE_READ);

ddt_open(dTable, DDT_MODE_READWRITE);

ddt_export(Template,dTable);

ddt_save(dTable);

ddt_close(dTable);

ddt_close(Template);

ddt_close_all_tables();

#/***************Data Table Initialization*****************


> Dynamically loading data from the database is the most advisable practice to follow; handling the db operations with some meticulous programming will always benefit the tester, avoiding a variety of operational hazards and reducing the data access time from a remote server database to the local data table.

Some tips to follow in WinRunner TSL when using the db commands:

Set the row before writing the data values into the data table, i.e. use the following TSL commands:

public count;

count = 1;

ddt_set_row (dTable, count);

Now we use the set-value-by-row command to write the values:

ddt_set_val_by_row (dTable, count, "CTS_EMP_NAME", value);

Needless to mention, but to avoid confusion: it is better to use the same column names as found in the database table, and never insert any columns before, after, or in between the column names in the WinRunner data table. It is better practice to load the data table with the data as found in the back-end database.

The figure (an Automation Test Plan diagram showing its pre-requisites, initial conditions, and the test repository) gives the idea behind building any automation test plan.

Online Vs Batch Execution - User Input: Where should the input_dialog_box function exist - in the driver file or in the individual script?

Using dialog functions, we can accomplish interactive testing.

In Mercury Interactive's WinRunner the code is:

SSN = create_input_dialog ("Please Enter the SSN Number");

In Compuware's QARun the code is:

Dialog "Array_A" Array_A[]
USER = Array_A["Userid"]
Pass = Array_A["Password"]

Online Vs Batch Execution - Online Test Scripts: How do we use online scripts?

The same dialog-function approach shown in the previous answer applies: prompt for the values interactively (create_input_dialog in WinRunner, Dialog in QARun) and pass them into the script.

Online Vs Batch Execution - Re-Runnable Tests

Should setup scripts be made re-runnable? If yes, then why?

Also, what is the best way to make them re-runnable: should it be attaching a random-number string, or should it be 'if' statements to check whether the data already exists?

It is best to create scripts that are re-runnable, but we understand that it may not be possible in all cases for set-up type scripts.

Online Vs Batch Execution - Re-Runnable Tests: Calling a driver file from within a driver file - is this advisable?

No. 

Online Vs Batch Execution - Functions & Compiled Modules-Load Library

Loading libraries raises memory issues: if a library contains 100 functions and only one is used, we unnecessarily load all the functions into memory.

Should we make multiple smaller libraries and load and unload them frequently, or just have one big library and keep it loaded throughout the execution of the master driver? Known issue: we will run into memory problems when loading 100 functions into memory.

Online Vs Batch Execution - Functions & Compiled Modules - Data Fetch: Should we open and read from the data table in driver scripts? Why or why not?

The purpose of the driver script is to set up the application and then call each individual script.

Opening, reading, and closing the data file should happen at the individual test script level.

Online Vs Batch Execution - Functions & Compiled Modules - User Defined Functions: Creating user-defined libraries and functions. How to assess whether a script should be made a function - what are the pros and cons of making a script a function versus just using it as a script and calling it from the driver file?

We have to load the function library before we can call any of the functions defined in it.

Using user-defined functions is more efficient in the sense that they are compiled and loaded into memory before being called, and a function can be used over and over again without recompiling the function library.
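A hedged sketch of the idea (the names and paths are assumptions): a function defined in a compiled module is loaded once and then called repeatedly.

# --- in a compiled module saved as, e.g., lib_login ---
public function fnLogin(in user, in pass)
{
    # application-specific login steps would go here
    return E_OK;
}

# --- in the calling test / driver ---
load("c:\\libs\\lib_login", 1, 1);   # load once, as a closed system module
rc = fnLogin("guest", "guest");      # call as often as needed, no recompilation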

WinRunner: Test Director

WinRunner: Test Director is basically defined as:

  *  Test Director is a one-stop solution for organizing the entire test cycle.

    * It has four main tabs(categories) corresponding to the various phases in the testing cycle namely Requirements, Test Plan, Test Lab and Defects.

    * Requirements can be entered and organized into various categories like login operations, database operations and so on.

    * After setting up requirements, test cases corresponding(covering) these requirements can be defined and associated with the requirements. A requirement can be covered by multiple test cases and a test case can cover multiple requirements.

    * The test plan can be defined with test cases each with test steps and can be manual or automated.

    * Test sets are created to group similar test cases and then the test sets can be run.

    * If a particular test set fails the run, after examination, the tester/QA can enter a defect in the associated defect tracking system. Attributes such as severity can be assigned.

    * Test Director allows two modes of operation - user and administrator. The administrator can create and update user and group accounts, configure mail, customize project lists, customize project entities and set up workflow, whereas the user doesn't have these privileges.

    * The six project entities are Requirement, Test, Test Step, Test Set(Execution), Run and Defect.

    * Test Director allows attachments(file, URL, snapshot) with requirements, test step, test case, test run or defect.

    * Test Director is flexible and can be customized within certain limits. Additional fields can be added in requirements, test case, test step, test plan and defects.

    * Test Director has what is known as favorite views wherein any view or report or graph can be made to look as the user wants it to. The user can make only certain columns viewable and make it a favorite view.

    * Test Director also has filters to filter test cases, requirements, defects by any of the various attributes like severity, assigned to etc.

    * Test Director also has an Execution Flow option which is used to schedule automated test cases.

    * Work flow is setup by the administrator which includes creating Visual Basic modules to save data before entering a bug in a defect tracking system, to perform operations before opening a bug form, after a bug field is changed and so on.

    * Test Director also has a comprehensive document generator utility to develop professional reports for the testing process.

    * Also reports and graphs corresponding to requirements, test plan, test lab and defects can be created.

    * The host machine can also be configured while running the test sets. 

WinRunner: Test Director - Test Repositories?

A separate test repository will be created for each group's project. The test repositories will be created as common directories and will be located on a network server (this should be a shared area where the group stores the rest of their files and documents).

> Initially all test repositories will be created using a Microsoft Access Database. In the future we may change this to SQL Server.

> The path to the network area cannot be more than 47 characters (a Test Director restriction).

> The path cannot contain special characters such as $, &, - or %.

> All folders that contain the test repositories should start with TD_ .

> All test repositories should start with TD_ .


TD_NameofProject : 

* Reports : Created automatically by Test Director, this is where it stores results of tests etc.

* Tests : This is where the test scripts will reside if WinRunner is also used.

* GUImap : This is where the GUI map files will reside

* Datafile : This is where all data flat files and excel spreadsheets will reside

* Docs : This is where copies of documents that pertain to this project will reside

* Fonts : This is where a copy of the font groups will reside

* Functions : This is where the function library for the project will reside.

> Within Test Director various folders can be created as a way to organize your project and tests. These folders are stored in the database and may or may not be apparent on the file system.

TD_NameofProject

* FolderName : Folder for functional regression tests

* SubFolder : Sub folder for Specific Window

* FolderName : Folder for SC functional regression tests

> It is not recommended to nest the folders more than 3 levels deep.

WinRunner: Test Director - Steps to take before creating Test Projects:

Before starting Test Director, you should close all applications that are not required for testing. (Mail, Explorer, Screen Savers, CD Player etc).

> After installing a new version of Test Director and WinRunner, it is a good idea to make a backup copy of the following ini files to another location (tester's choice of location). This is recommended to allow the tester to easily reset their WinRunner/TestDirector environment in the event of system corruption.

c:\windows\wrun.ini

c:\windows\mercury.ini

c:\...\TestDirector\bin\td.ini

c:\...\TestDirector\bin\filters.ini

c:\...\TestDirector\bin\forms.ini

c:\...\TestDirector\bin\grids.ini


WinRunner: Test Director - Set Up Recommendations

Before a tester starts creating folders and test scripts, they should configure their Test project using the Administration menu.

> Create various users and user groups for your project through the Administration -> Setup Users menu item. Test Director comes with the following pre-defined users and groups. We recommend that you create a user id that is similar to your network login. You also have the option to create a password or leave it blank (the default is blank). You can also create your own groups or use the default groups provided by Test Director.

Default Users and Groups :

> Users :

> Admin

> Guest

Groups :

> TDAdmin Has full privileges in a TestDirector project. This is the only type of user which can make changes to the information in the Setup Users dialog box. It is recommended to assign this user type to one person per group who will serve as the TestDirector Administrator.

> QATester Can create and modify tests in Plan Tests mode, and create test sets, run tests, delete test runs, and report defects in Run Tests mode. This user type is recommended for a quality assurance tester.

> Project Manager Can report new defects, delete defects, and modify a defect's status. This user type is recommended for a project manager or quality assurance manager.

> Developer Can report new defects, and change a defect's status to Fixed. This user type is recommended for a software developer.

> Viewer Has read-only privileges in a project.

> Test Director also gives you the option to customize your projects by creating user-defined fields for the dialog boxes and grids, creating categories and modifying drop-down lists. These options enable you to add information that is relevant to your project. Modifications to your projects are done using the Administration -> Customize Project menu item. For more details on how to customize your project, please see the Test Director Administrator's Guide.

> Decide on Script Naming Convention and consistently use the naming convention for all tests created within the project. Please reference the Naming Conventions section for more information.

> Create test folders to organize your tests into various sections. Examples of possible folder names could be the types of testing you are doing (functional, negative, integration), or you could base your folder names on the specific modules or windows you are testing.


> Create test scripts on the Plan tests folder using the New button in the test frame or menu item Plan -> New Test.

The Test window has four tabs: Details, Design Steps, Test Script and Attach.

> The Details tab should be used to list all the information regarding the test. Test Director defaults to displaying Status: Design; Created: date and time; Designer: your id.

> The Design Steps tab should be used to list detailed instructions on how to execute your test.

> The Test Script tab is used for tests that are turned into automated tests; on this page the automated WinRunner code will appear.

> The Attach tab can be used to attach bitmaps or other files required for testing to the script.

> When creating folders, tests and test sets in Test Director make sure every item has a description.

> Create a "Documentation" test to document how to set up the testing environment and run your tests.

> It is recommended that test scripts be written in as much detail as possible, without assuming that the executor of the test "knows how to use our module". Making your scripts as detailed as possible allows people from outside the project to understand how to execute your tests, in the event that they have to run them.

> Create test sets to group like tests together, or to specify the order in which your tests should run. Test Director has a limit of 99 test scripts per test set.

> Export the test scripts into Word via the Document Generator (menu item Report -> Document Generator).

WinRunner: Test Director - Documentation Standards:

Use a consistent naming convention for all test scripts and tests.

> Put Detailed descriptions on all test folders and test scripts to explain the purpose of the tests.

> Each test script should have a detailed test step associated with it.

> Before any automation project is started, the tester should write an automation standards document. The automation standards document will describe the following :

> Automation Environment

> Installation requirements

> Client Machines Configurations

> Project Information

> WinRunner and Test Director Option Settings

> Identify the servers, databases and subsystems the tests will run against

> Naming Convention for project

> Specific Recording Standards that apply to individual project. 

WinRunner: Test Director - Naming Conventions:

> Never call a test "tests"; WinRunner/TestDirector has problems distinguishing the test name from the directory "tests".

> The project automation document will specify any naming conventions used for the individual projects. 


WinRunner: Test Director - Importing WinRunner tests into Test Director

> Bring up Test Director

> Select Plan -> Import Automated Tests

> Select your tests and import them

> Select the test grid button

> Change each test's subject to point to the folder you want it in

> Now copy all the tests from Unattached to the folder

> Close Test Director

> Bring up Test Director again

> If after all this they are not there, create a dummy test to refresh the tree view; the tree window does not seem to refresh very well.

WinRunner: Test Director - How to delete Mercury Toolbar out of MS Word

If you ever play with the Test Director import-into-Word feature, you will automatically create a Test Director toolbar in your MS Word application. The best way to get rid of this toolbar is to:

> Shut down Word if open

> Bring up Windows Explorer

> Navigate to C:\Program Files\Microsoft Office\Office\STARTUP

> Delete all instances of Tdword.*

> Restart Word and verify the toolbar is now gone.

WinRunner: Other Test Director Features

The Test Director application has a number of other features : 

> Running manual tests via the Test Director application. Test Director has a feature that allows you to run your manual tests through the Mini Step Utility. This feature allows you to compare the actual outcome to the expected results and record the results in the Test Director database at run time.

> Test Director also has the capability of converting your manual tests into Automated Tests in the WinRunner application.

> Test Director also provides reporting and graphing capabilities that will assist you in reviewing the test planning and test execution process. Test Director provides a number of standard report and graph formats, and also allows the user to create customized reports and graphs.

> Defect tracking. Test Director also provides a defect tracking tool within the Test Director product.

WinRunner: How to see the internal version of WebTest in your machine?

To see the internal version in your machine, right-click the ns_ext.dll file, select Properties, and click the Version tab. The ns_ext.dll file is located in the arch subdirectory of your WinRunner directory.

WinRunner: Web sites contain ActiveX controls

If your web site contains ActiveX controls, you must install ActiveX add-in support when you install the WebTest add-in.

WinRunner: Web sites contain Java applets

If your web site contains Java applets, you need to install Java add-in support for WinRunner.

WinRunner: Steps to take before recording:

> Before starting to record, you should close all applications that are not required for testing. (Mail, Explorer, Screen Savers, CD Player etc).

> After installing a new version of Test Director and WinRunner, it is a good idea to make a backup copy of the following ini files to another location (tester's choice of location). This is recommended to allow the tester to easily reset their WinRunner/TestDirector environment in the event of system corruption.

c:\windows\wrun.ini

c:\windows\mercury.ini

c:\...\TestDirector\bin\td.ini

c:\...\TestDirector\bin\filters.ini

c:\...\TestDirector\bin\forms.ini

c:\...\TestDirector\bin\grids.ini

c:\...\WinRunner\dat\ddt_func.ini

> Make sure your system is set up with all the necessary library functions that have been created.

> Make sure you create a GUI map and font group for each project.

> In the tsl_init file add the command GUI_close_all();. This command will make sure that no GUI maps are loaded when you bring up the WinRunner application. The benefit of this approach is that it will force the tester to load the correct GUI map for their testing, thus preventing scripting errors and other complications.


WinRunner: Libraries Needed for automation:

A number of libraries have been created to aid in the automation projects. Below is a list of the libraries that should be installed on each individual machine.

> csolib32 : This is a library full of many useful functions. It was created by Mercury Customer Support and can be found in the zip file csolib.zip. In order to access the library functions, the tsl_init file needs to be modified to run the cso_init file (which loads the libraries when the WinRunner application boots up).

> WebFunctions : This library contains functions designed to run on the YOUR-COMPANY web systems.

WinRunner: Commands and Checkpoint Verification information for Web:

> Do a set_window(); command for each action on a new window; this will assist the script in recognizing/resetting the window state and help prevent scripts failing due to slow network/system performance.

> Add a report_msg or tl_step command after each test to record what happens in the test.

> An obj_check_gui statement checks only one object in the window; a win_check_gui statement checks multiple objects in the window.

> The single property check allows you to check a single property of an object. The single property check dialog will add one of the following functions to your script.

button_check_info

edit_check_info

list_check_info

obj_check_info

scroll_check_info

static_check_info

The Object/Window check allows you to check every default object in the window. After it has completed, it inserts an obj_check_gui statement in your script.

The Multiple Objects check allows you to check two or more objects in the window. This selection works best because it first brings up the checkpoint window; then, after the user selects Add, you can navigate to the AUT. Also, for some reason, the data in the object is retrieved with this feature but not with the Object/Window check. After it has completed, it inserts a win_check_gui statement in your script.

> There are three main types of GUI checks you can do with WinRunner - Single Property, Object/Window, and Multiple Objects - as the sketch below illustrates.
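For example, the single-object and whole-window checks named above look like this in a script (the checklist and expected-results names are the kind the checkpoint dialogs generate, shown here as placeholders):

set_window ("Login", 5);
obj_check_gui ("OK", "list1.ckl", "gui1", 1);     # checks one object
win_check_gui ("Login", "list2.ckl", "gui2", 1);  # checks several objects in the window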

> There are 35 web functions that come with the WebTest add-in. For the full list, please see the TSL reference guide. The table below lists the most commonly used functions.


Function              Description

web_browser_invoke    Invokes the browser and opens a specified site.

web_image_click       Clicks a hypergraphic link or an image.

web_label_click       Clicks the specified label.

web_link_click        Clicks a hypertext link.

web_link_valid        Checks whether the URL name of a link is valid (not broken).

web_obj_get_info      Returns the value of an object property. Needs a set_window command to be run before use.

web_obj_get_text      Returns a text string from an object.

web_obj_text_exists   Returns a text value if it is found in an object.

web_sync              Waits for the navigation of a frame to be completed.

web_url_valid         Checks whether a URL is valid.

web_find_text         Returns the location of text within a page.


> Most of the WebTest functions do not return a value to the test log. In order to record a pass or fail, conditional logic has to be added to your code below the web function to send a tl_step or report_msg to the log.
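A sketch of that pattern, using web_url_valid with a placeholder URL:

# record pass/fail explicitly, since the web function alone will not
rc = web_url_valid ("http://www.example.com/", valid);
if (rc == E_OK && valid)
    tl_step ("url_check", 0, "URL is valid");    # 0 = pass
else
    tl_step ("url_check", 1, "URL is broken");   # non-zero = fail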

WinRunner: How to Structure tests for Web:

Create Begin and End Scripts. This will ensure that WinRunner starts and stops from the same place.

> Mercury recommends that it is better to use smaller specific GUI maps for your testing than have one large GUI map that encompasses your whole application.

> Comment all major selections or events in the script. This will make debugging easier

> You need to create an init script to load the correct GUI map and font group and to set option variables.

A few of the options you should set are:

# Turns off real time error message reporting if the test case
# fails. The error is still logged in the test results window.
setvar ("mismatch_break", "off");

# Turn off beeping
setvar ("beep", "off");
setvar ("sync_fail_beep", "off");

# Make sure context sensitive errors don't trigger real time
# failures that stop the script. The error is still logged in the
# test results window.
setvar ("cs_fail", "off");

# Sets the time WinRunner waits between executing statements
# (Mercury default is 0)
setvar ("cs_run_delay", "500");

# Sets the time WinRunner waits to make sure a window is stable
# (Mercury default is 1000)
setvar ("delay_msec", "500");

# Sets "fail test when single property check fails" to unchecked
# (bug - recommend setting it to unchecked)
setvar ("single_prop_check_fail", "0");

> Determine all paths to start up directory and then set them in the options window.

> In your closing/ending scripts use the GUI_unload_all command to unload all GUI maps in memory.


WinRunner: Recording tips:

> Always record in Context Sensitive mode.

> WinRunner is case sensitive, so be careful in your scripting regarding what is put in upper/lower case.

> If using a full text check in a test case, make sure you add a filter to block items such as a date or user ID (which might vary depending on when the script is run and who is running it).

When recording in Analog mode, avoid holding down the mouse button if this results in a repeated action. For example, do not hold down the mouse button to scroll a window. Instead, scroll by clicking the scrollbar arrow repeatedly. This enables WinRunner to accurately execute the test.

Before switching from Context Sensitive mode to Analog mode during a recording session, always move the current window to a new position on the desktop. This ensures that when you run the test, the mouse pointer will reach the correct areas of the window during the Analog portion of the test.

When recording, if you click a non-standard GUI object, WinRunner generates a generic obj_mouse_click statement in the test script. For example, if you click a graph object, it records: obj_mouse_click (GS_Drawing, 8, 53, LEFT); If your application contains a non-standard GUI object which behaves like a standard GUI object, you can map this object to a standard object class so that WinRunner will record more intuitive statements in the test script.

Do not save in the test procedure unless it is absolutely necessary; this will prevent the need to write numerous clean-up scripts.

Do not use the mouse for drop-down selections; whenever possible use hotkeys and the arrow keys. When navigating through a window, use the Tab and arrow keys instead of the mouse; this will make maintenance of scripts due to UI changes easier in the future.

> If recording on a PC, make sure the environment settings are set up correctly. Use the Control Panel -> Regional Settings window to make sure that the date format, number formatting, currency and time are set the same for all PCs that will be using the scripts. This should be done to ensure that playback of test cases does not fail due to date, currency or time differences. The best way to handle this is to record a small script to set the correct settings in the Control Panel Regional Settings window.

> If recording on a PC, make sure that all workstations running the scripts have the same Windows display settings. Setting the PCs' window appearance and display settings the same helps ensure that bitmap comparisons and other graphic tests do not fail due to color and size differences. The best way to handle this is to record a small script to set the correct settings in the Control Panel Display Settings window.

When recording, if you click on an object whose description was not learned by the RapidTest Script wizard, WinRunner learns a description of the object and adds it to a temporary GUI map file.

> WinRunner does not compile the scripts until run time, so be careful to check your code before running it. Another option is to put your script in debug mode and step through the code to make sure it will compile correctly.

> Please indent "if" statements and loops, to help make the code more understandable.

> To add a new object(s) to a GUI map that already exists; perform the following steps:

A. Ensure that no GUI maps are loaded in the GUI Map Editor.

B. Do a simple recording that will include the object you need added to the GUI Map. This will put the object into the TEMP GUI Map.

C. Go into the Temp GUI Map and delete objects that are already contained in the existing GUI Map.

D. Go into the GUI Map Editor and load the existing GUI Map.

E. Use the Expand button to display two panels on the window.


F. Using the Copy button, copy the values in TEMP into the existing GUI map.

G. Save the GUI map file on the network in J:\CorpQATD\TD_Daybr\GUImap (or substitute the project folder for whatever project you are currently working on).

> While scripting and debugging your script it is a good idea to put a load command for the Web Function script at the top of your script and an unload at the bottom of your script. This code will automatically load the function library when you run your scripts, thus saving you the extra step when you try to debug your scripts. It is very important to remember to comment out the lines when you are done debugging/developing.

Below is a sample of code you can use.

#this is here for debugging only, when run in shell script will comment out.

#reload ("J:\\CorpQATD\\TD_web\\functions\\Webfunctions");

#this is here for debugging only, when run in shell script will comment out.

#unload ("J:\\CorpQATD\\TD_web\\functions\\Webfunctions");

> If you create a script by copying an existing one using Save As, make sure to go into Windows Explorer and delete the exp and res folders; otherwise you could carry along extra files you don't need.

WinRunner: Documentation:

> Each test procedure should have a manual test plan associated with it.

> Use Word to write the detailed test plan information.

> When test planning is completed, cut and paste (or translate) the test plans into Test Director.

> When creating folders, tests and test sets in Test Director, make sure every item has a description.

> When creating test scripts, cut and paste the default header script:


###########################################################

# Script Name:

# Description:

#

# Project:

# #########################################################

# Revision History:

# Date: Initials: Description of change:

#

###########################################################

# Explanation of what script does:

#

#

###########################################################

#this is here for debugging only, when run in shell script

#will comment out.

#reload ("J:\\CorpQATD\\TD_Daybr\\functions\\functions");

{put code here}


#this is here for debugging only, when run in shell script
#will comment out.

#unload ("J:\\CorpQATD\\TD_Daybr\\functions\\functions");

> Before any automation project is started, the tester will write an automation standards document. The automation standards document will describe the following:

> Automation Environment

> Installation requirements

> Client Machines Configurations

> Project Information

> WinRunner Option Settings

> Identify the servers, databases and subsystems the tests will run against

> Naming Convention for project

> Specific recording standards that apply to the individual project.

> While scripting, please comment major portions of the script using WinRunner's comment character "#". (Example: # This is a comment.)

WinRunner: Naming Conventions:

> Never call a test "tests"; WinRunner/TestDirector has problems distinguishing the test name from the directory "tests".

> The project automation document will specify any naming conventions used for the individual projects.

> Test Script names should be in UPPER CASE. 

WinRunner: When Running Scripts:

When you make your shell script, remember to run it the first time in update mode to create all the expected results, and then run it in verify mode.

This needs to be done because the expected results reside under each specific test script, and for shell scripts WinRunner creates subfolders for each test it runs.

The expected results are not pulled from the individual test area to the shell script area, so the shell script needs to be run in update mode to re-create them. Another option is to use Windows Explorer and copy all your expected results folders to the directory containing the shell script.

Explain Get Text checkpoint from object/window with syntax?

You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.

You can use a text checkpoint to:

i. Read text from a GUI object or window in your application, using obj_get_text and win_get_text

ii. Search for text in an object or window, using win_find_text and obj_find_text

iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text

iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text
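A small sketch of the first use - reading text and then verifying it in the script; the window name, region coordinates and expected text are placeholders:

set_window ("Browser Main Window", 5);
win_get_text ("Browser Main Window", out_text, 10, 10, 300, 60);  # read a screen region
if (index (out_text, "Welcome") > 0)
    report_msg ("banner found: " & out_text);
else
    report_msg ("banner text missing");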

Explain Get Text checkpoint from screen area with syntax?

a. We use the obj_get_text (object, out_text [, x1, y1, x2, y2]) function to get the text from an object.

b. We use the win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

Explain Get Text checkpoint from selection (web only) with syntax?

We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window. 

Explain Get Text checkpoint web text checkpoint with syntax?

Returns a text string from an object. web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]);

i. object The logical name of the object.

ii. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the # character.

iii. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the # character.

iv. out_text The output variable that stores the text string.

v. text_before Defines the start of the search area for a particular text string.

vi. text_after Defines the end of the search area for a particular text string.

vii. index The occurrence number to locate. (The default is 1.)

How to manage text using WinRunner

a. Searching for text on the window

i. find_text ( string, out_coord_array, search_area [, string_def ] );

string The string that is searched for. The string must be complete, contain no spaces, and it must be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. In this case, the string variable can include a regular expression.

out_coord_array The name of the array that stores the screen coordinates of the text (see explanation below).

search_area The area to search, specified as coordinates x1,y1,x2,y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.

string_def Defines the type of search to perform. If no value is specified (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.


b. getting the location of the text string

i. win_find_text ( window, string, result_array [, search_area [, string_def ] ] );

window The logical name of the window to search.

string The text to locate. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding regular expressions, refer to the "Using Regular Expressions" chapter in your User's Guide.

result_array The name of the output variable that stores the location of the string as a four-element array.

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified (0 or FALSE, the default), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.


c. Moving the pointer to that text string

i. win_move_locator_text ( window, string [, search_area [, string_def ] ] );

window The logical name of the window.

string The text to locate. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window specified is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified (0 or FALSE, the default), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.


d. Comparing the text

i. compare_text (str1, str2 [, chars1, chars2]);

str1, str2 The two strings to be compared.

chars1 One or more characters in the first string.

chars2 One or more characters in the second string. These characters are substituted for those in chars1. 
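For example, compare_text returns 1 when the strings match after chars1 is substituted with chars2, so the capital letter O and the digit 0 can be treated as equivalent:

if (compare_text ("S0S", "SOS", "0", "O"))
    report_msg ("strings match when 0 is read as O");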

WinRunner: How to test to see if the window is maximized

If we want to test whether the window is maximized, here is a sample of how to code it. This code is best used in the start-up script of any automation project.

#first grab the window's handle for the netsoft elite window
win_get_info("Browser Main Window","handle",value);

#now test to see if the window can be maximized
if (win_check_info("Browser Main Window","maximizable",FALSE) != E_OK)
{
#now run the maximize function and pass in the handle's value
if (is_maximized(value) == E_OK)
{
report_msg("Ran max window test and maxed the window");
win_max("Browser Main Window");
}
else
{
report_msg("Ran max window test and did not have to do anything");
}
}
# end of script

WinRunner: How to determine which window you are on:


Each time a new browser window appears, you need to test to make sure the correct window is activated. To do this, use the following code:

#test to make sure we are on the browser
win_check_info("Browser Main Window_1","enabled",1);

# check to make sure the menu says Menu Selection
menu = obj_check_gui("title_shad", "list5.ckl", "gui5", 5);

if (menu == 0)
report_msg("On Menu Window");
else
{
report_msg("not on right window");
texit;
}

WinRunner: How to test if a link exists and is valid

Use the web_link_valid command, then add some conditional logic to say whether or not the test passed.

# verify the link is valid
set_window("Default Menu", 1);
yes = web_link_valid("YOUR PRODUCT APPLICATION", valid);

if (yes == 0)
report_msg("link exists on page");
else
report_msg("no link");

WinRunner: How to select a link on a web page

In order to select a link, you need to use the web_link_click command.

win_activate ("Browser Main Window");
set_window ("Default Menu", 0);
web_link_click("YOUR PRODUCT APPLICATION");
web_sync(5);

WinRunner: How to check a property of an object on the web page

The most flexible and dynamic GUI checkpoint is the Multiple Objects checkpoint. This feature allows you to view the objects before selecting them, and then gives you the opportunity to select which properties of the objects you want to test.

Steps to Verify contents of a list box:

1. Turn on recording in Context Sensitive mode (this will create GUI objects for you; if you use Insert Function, only the code will be created, and you will then have to run in update mode to generate the GUI checks).

2. Select Create -> GUI Check -> Multiple Objects

3. Next the Create GUI check point window will come up

4. Press the Add button

5. Now move the cursor around the screen and select the object(s) you want to test.

6. When done selecting the objects press right mouse

7. Now you will be brought back to the Create GUI Checkpoint window. Listed in the window will be the object(s) you selected, each with a list of properties. Using the check boxes on the left, select which values you want to check. To view the content of the values, click the <...> button in the Expected Results column.

8. Clicking the <...> button brings up the edit check window, which allows you to edit the values.

9. When done press OK on all windows. Then the following code will be added to your script.

win_activate ("Browser Main Window_0");
win_check_gui("State Selection", "list1.ckl", "gui1", 1);

10. To modify or edit your GUI checkpoint, select Create -> Edit GUI Checklist, and the Create GUI Checkpoint window will come back up.

WinRunner: Parameterization rules:

> Do not call the Excel sheet default.xls; rename it to the same name as your script (or calling script).

> If you want to change the start row of the table, change table_Row = 1 on the line:

for(table_Row = 1; table_Row <= table_RowCount; table_Row ++)

> The c:\...\WinRunner\dat\ddt_func.ini file lists which functions work with data-driven testing. No web functions are listed in this file; if you want to data-drive a web function, you will have to add it to the file.

> Any Excel file used for data-driven testing must be saved in Excel format.

> The Excel files can only have one worksheet and no formatting.

> The maximum length for a number in a cell is 10 characters; anything over becomes scientific notation and does not work. There are two workarounds: option one is to use concatenation, and option two is to use a ' in the field to make the value a string.

Workaround 1:

Use the & (concatenation) operator to make your values larger. Here is a code sample: edit_set("1" & ddt_val(table,"SalesNumMax"));

Workaround 2:

In the data table, instead of typing the number as 12345678901, type it as '12345678901. The ' in front of the number makes it a string (and the string character limit is 255).

> Also, a field value cannot start with leading 0's. To work around this, use either of the methods shown above.

> When defining a new table in the DataDriver Wizard, the table will not be saved if an .xls extension is not specified.

Workaround: when typing the name of a new table, give the name an .xls extension. If you use the Parameterize Data function, DO NOT highlight the row; just put your cursor on the space you want to overlay and it will work. If you highlight the whole row, it comes back garbled.

Here are some steps that explain how to use the DataDriven functionality in WinRunner.

1. Record your script save it

2. Select wizard Tools -> Data Driven Wizard

3. Press Next button

4. At Use a new or existing Excel file box: Navigate to data file area and select data file or enter name for new file.

5. On Assign table name variable: Enter the name to call table in script.

6. Check off Add statements and create a data-driven test

7. Check Parameterize test by line

8. Press Next button

9. On "Test script line to parameterize": either do not replace the line (if you don't want to) or select a new column (you can change the column name if you want).

10. Repeat for all lines that appear (this depends upon how many lines you have in the script).

11. When done press Finish

12. Your script will come back and it is all parameterized for you.


Here is the code:

1 table = "path to excel file";
2 rc = ddt_open(table, DDT_MODE_READ);
3 if (rc != E_OK && rc != E_FILE_OPEN)
4 pause("Cannot open table.");
5 ddt_get_row_count(table,table_RowCount);
6 for(table_Row = 1; table_Row <= table_RowCount; table_Row ++)
7 {
8 ddt_set_row(table,table_Row);
9 edit_set("Log",ddt_val(table,"Log"));
10 obj_type("Log","");
11 edit_set("password",ddt_val(table, "password"));
12 button_press("Login");
13 }
14 ddt_close(table);

Manual

1. Create an xls file (using WinRunner's Tools -> Data Table)

2. Make the column names your variable names; make the rows your data.

3. Save the xls file

4. At the top of your script, type line 1 (taken from the example above) - this sets the table name for you.

5. Type lines 2 - 5 exactly - this tells the script to open the table, does error handling in case it can't open the table, and gets the row count of the table.

6. Now move the cursor to the area you want to parameterize.

7. Type lines 6 - 8. If you do not want the script to start on row 1, change table_Row = (row to start on).

If you want to run numerous times, then create a loop here.

8. Now move the cursor to the line you want to parameterize. You parameterize by replacing the value in the edit_set statement with ddt_val(table, "variable").

Before:

edit_set("Log", "STORE");

After parameterization it will look like:

edit_set("Log", ddt_val(table,"Log"));

9. Repeat for all lines you want to parameterize.

10. Then add the closing }.

11. Add the last line (14) to close the table.

12. Repeat steps 7 - 11 for all areas you need to parameterize in code.


ddt_func.ini file:

The DataDriver wizard uses the ddt_func.ini file to determine which functions you can parameterize. If you run the wizard and find that a certain function does not parameterize, the workaround is to add the function to the ddt_func.ini file. Here are the steps:

1. Shut down the WinRunner application

2. Open the ddt_func.ini file located in your \WinRunner\dat directory

3. Add the function and the parameter of the function you want the DataDriver wizard to change

4. Save the file

5. Bring up WinRunner again

6. Your function should now work with the DataDriver wizard

WinRunner: Use the following templates to assist in your scripting

As a default header for all scripts.

###########################################################

# Script Name:

# Description:

#

# Project:

# #########################################################

# Revision History:

# Date: Initials: Description of change:

#

###########################################################

# Explanation of what script does:

#

#

###########################################################

#this is here for debugging only, when run in shell script

#will comment out.

#reload ("J:\\CorpQATD\\TD_Daybr\\functions\\functions");

{put code here}


As a default script that will reset WinRunner's environment to have the correct option settings and GUI maps loaded.

###########################################################

# Script Name: Setenv

# Description: This script sets up the environment for the

# the automated testing suite ( ).

# Project:

# #########################################################

# Revision History:

#

# Date: Initials: Description of change:

#

#

###########################################################

# Load the Gui map

#GUI_unload ("c:\\ \\ ");

# remember to use double slashes

# Load Functions

#font group

# Load any dll\'s

# set any option parameters for this particular script.

# Turns off error message if the test case fails.

setvar ("mismatch_break", "off");

# Turn off beeping
setvar ("beep", "off");

setvar ("sync_fail_beep", "off");

# Make sure context sensitive errors don't trigger failure
setvar ("cs_fail", "off");

# Sets the time WinRunner waits between executing statements
setvar ("cs_run_delay", "2000");

# Sets the time WinRunner waits to make sure a window is stable
setvar ("delay_msec", "2000");

# Declare any Constant Declarations

# Declare any Variable Declarations



As a default script to use for calling all the scripts in your project.

###########################################################

# Script Name: OpenClose

# Description: This is the calling(main script that runs ....

#

# Project:

# #########################################################

# Revision History:

#

# Date: Initials: Description of change:

#

###########################################################

status=0;

passed=0;

failed=1;

#Run the set up environment script
call "c:\\ "();

#Run the begin script
call "c:\\ "();

# Run
call "c:\\ "();

# Run end script
call "c:\\ "();

# Run the closeenv script
call "c:\\ "();


As a default script to reset your WinRunner environment to the generic default settings.

###########################################################

# Script Name: closeenv

# Description: This script re-sets the environment to the

# default settings.

# Project:

# #########################################################

# Revision History:

#

# Date: Initials: Description of change:

# 1

#

###########################################################

# Load the Gui map

#GUI_unload ("c:\\ \\ "); # remember to use double slashes

# Load Functions

#font group

# Load any dll\'s

# set any option parameters for this particular script.

# Turns off error message if the test case fails.

setvar ("mismatch_break", "off");

# Turn off beeping
setvar ("beep", "off");

setvar ("sync_fail_beep", "off");

# Make sure context sensitive errors don't trigger failure
setvar ("cs_fail", "off");

# Sets the time WinRunner waits between executing statements
setvar ("cs_run_delay", "2000");

# Sets the time WinRunner waits to make sure a window is stable
setvar ("delay_msec", "2000");

# Declare any Constant Declarations

# Declare any Variable Declarations


WinRunner: The following code is written to replace WinDiff as used by WinRunner for showing differences in file comparison checks.


Written by Misha Verplak

INSTRUCTIONS

1. Place these files into the WinRunner arch directory:

wdiff_replace.exe

wdiff_replace.ini

2. Rename wdiff.exe to wdiff_orig.exe

3. Rename wdiff_replace.exe to wdiff.exe

4. Edit wdiff_replace.ini to specify the new difference program


FILES

wdiff_replace.exe compiled program

wdiff_replace.ini settings

wdiff_replace.c C source code

wdiff_readme.txt this file :)


/* reconstructed include list - the original listed only a bare #include */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <process.h>

#define TITLE_NAME TEXT("wdiff_replace")
#define CLASS_NAME TEXT("funny_class")
#define BC2_APP_PATH TEXT("C:\\Program Files\\Beyond Compare 2\\BC2.exe")
#define WDIFF_INI TEXT("wdiff.ini")
#define WDIFF_REPL_INI TEXT("wdiff_replace.ini")
#define EMPTY_TXT TEXT("[EMPTY]")

extern char** _argv;

int WINAPI
WinMain (HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    DWORD res;
    TCHAR sLeftFile[MAX_PATH], sRightFile[MAX_PATH];
    TCHAR sAllArgs[MAX_PATH*2];
    TCHAR sPathDrive[MAX_PATH], sPathDir[MAX_PATH], sPathFile[MAX_PATH], sPathExt[MAX_PATH];
    TCHAR sDiffReplIni[MAX_PATH];
    TCHAR sDiffApp[MAX_PATH];
    TCHAR sErrMsg[MAX_PATH*2];
    TCHAR sArgN[10], sArg[MAX_PATH];
    int n;

    /* using argv[0] (current fullpath exe), extract directory name, expect to find ini there */
    _splitpath (_argv[0], sPathDrive, sPathDir, sPathFile, sPathExt);
    sprintf(sDiffReplIni, "%s%s%s", sPathDrive, sPathDir, WDIFF_REPL_INI);

    /* read wdiff.ini for WinRunner's two files */
    res = GetPrivateProfileString(TEXT("WDIFF"), TEXT("LeftFile"), EMPTY_TXT, sLeftFile, MAX_PATH, WDIFF_INI);
    res = GetPrivateProfileString(TEXT("WDIFF"), TEXT("RightFile"), EMPTY_TXT, sRightFile, MAX_PATH, WDIFF_INI);

    /* check if we got the default string, which means we didn't get a value */
    if (!strcmp(sLeftFile, EMPTY_TXT) || !strcmp(sRightFile, EMPTY_TXT)) {
        MessageBox (NULL, TEXT("Problem reading LeftFile or RightFile from wdiff.ini"), TITLE_NAME, MB_ICONERROR | MB_OK);
        return(0);
    }

    /* read wdiff_replace.ini for file & path of the replacement to wdiff */
    res = GetPrivateProfileString(TEXT("Diff"), TEXT("diff_app"), EMPTY_TXT, sDiffApp, MAX_PATH, sDiffReplIni);
    if (!strcmp(sDiffApp, EMPTY_TXT)) {
        sprintf(sErrMsg, "Problem reading diff_app from:\n\n%s", sDiffReplIni);
        MessageBox (NULL, sErrMsg, TITLE_NAME, MB_ICONERROR | MB_OK);
        return(0);
    }

    /*
     * read wdiff_replace.ini for args
     * add the arguments together, with quotes, eg. "arg1" "arg2"
     * also substitute for LeftFile and RightFile
     */
    sAllArgs[0] = '\0';
    n = 1;
    while(1) {
        sprintf(sArgN, "arg%d", n);
        res = GetPrivateProfileString(TEXT("Diff"), sArgN, EMPTY_TXT, sArg, MAX_PATH, sDiffReplIni);
        if (!strcmp(sArg, EMPTY_TXT)) break;
        if (!strcmp(sArg, TEXT("LeftFile"))) strcpy(sArg, sLeftFile);
        if (!strcmp(sArg, TEXT("RightFile"))) strcpy(sArg, sRightFile);
        /* append the quoted argument, avoiding sprintf with overlapping buffers */
        if (n > 1) strcat(sAllArgs, " ");
        strcat(sAllArgs, "\"");
        strcat(sAllArgs, sArg);
        strcat(sAllArgs, "\"");
        n++;
    }

    /* Run the alternative diff application with its args (could use spawn here?);
       note the exec argument list must be NULL-terminated */
    res = execlp (sDiffApp, TEXT("dummy"), sAllArgs, NULL);

    /* exec replaces the current app in the same env, so we only get here on problems */
    sprintf(sErrMsg, "Problem running diff_app:\n\n%s", sDiffApp);
    MessageBox (NULL, sErrMsg, TITLE_NAME, MB_ICONERROR | MB_OK);
    return 0;
}

WinRunner: The following script and DLL provide WinRunner with Perl-like regular expression search and match functions, usable with any GUI property.

# regular expressions from DLL

extern int re_match(string str, string re, out int m_pos, out int m_len, inout string detail <252>);

extern int re_search(string str, string re, out int m_pos, out int m_len, inout string detail <252>);

public function re_func_init()

{ auto re_func_dll, html_name;

# location of dll

re_func_dll = getenv("M_ROOT") "\\arch\\rexp.dll";

# to access exported functions

load_dll(re_func_dll);

# function generator declarations

generator_add_function("re_search","Search a string for a regular expression.\n"
"Returns 0 no match, 1 found match, gets position and length.\n"
"Submatch results in 'detail', use re_get_details() or re_get_match().",5,
"search_string","type_edit","\"string to search\"",
"regular_expression","type_edit","\"regexp\"", "Out position","type_edit","position",
"Out length","type_edit","len", "Out detail","type_edit","detail");

generator_add_category("regex");

generator_add_function_to_category("regex","re_search");

generator_set_default_function("regex","re_search");


generator_add_function("re_match","Match a regular expression to a whole string.\n"
"Returns 0 no match, 1 found match, gets position and length.\n"
"Submatch results in 'detail', use re_get_details() or re_get_match().",5,
"match_string","type_edit","\"string to match\"",
"regular_expression","type_edit","\"regexp\"", "Out position","type_edit","position",
"Out length","type_edit","len", "Out detail","type_edit","detail");

generator_add_function_to_category("regex","re_match");


generator_add_function("re_get_detail","Get the (sub)match position and length from the detail.\n"
"Typically used after re_search() or re_match()\nsubmatch can be 0 for whole match",6,
"detail","type_edit","detail", "submatch","type_edit","0", "Out nsubs","type_edit","nsubs",
"Out line","type_edit","line", "Out position","type_edit","position", "Out length","type_edit","len");

generator_add_function_to_category("regex","re_get_detail");


generator_add_function("re_get_match","Get the (sub)matched string from the detail.\n"
"Typically used after re_search() or re_match()\nsubmatch can be 0 for whole match",4,
"original_string","type_edit","orig_str", "detail","type_edit","detail",
"submatch","type_edit","0", "Out match_str","type_edit","match_str");

generator_add_function_to_category("regex","re_get_match");


generator_add_function("re_print_detail","Print the re match details to the debug window.\n"
"Typically used after re_search() or re_match().",1, "detail","type_edit","detail");

generator_add_function_to_category("regex","re_print_detail");


generator_add_function("matche","Replacement for the builtin match() function.",2,
"match_string","type_edit","\"string to match\"", "regular_expression","type_edit","\"regexp\"");

generator_add_function_to_category("string","matche");

generator_add_function_to_category("regex","matche");

generator_add_function("match","Do not use this function. Use matche() instead.",0);

}


# replacement for the builtin match() function

public function matche(search_string, regexp)

{

extern RSTART, RLENGTH;

auto rc, m_pos, m_len, detail;

if(re_search(search_string, regexp, m_pos, m_len, detail))

{

rc = m_pos+1;

RSTART = m_pos+1;

RLENGTH = m_len;

}

else

{

rc = 0;

RSTART = 0;

RLENGTH = 0;

}

return rc;

}


# internal function to decode detail from DLL

function _detail_decode(detail, position, nbytes)

{

auto v, v_hi;

v = int(ascii(substr(detail, position, 1))/2);

if(nbytes == 2)

{

v_hi = int(ascii(substr(detail, position+1, 1))/2);

v += v_hi*256;

}

return v;

}


# dump the detail to WinRunner's debug window

#

# structure of the detail string:

# (1 byte ) size of this detail, ie. number of submatches + 1

# (2 bytes) line number where match occurred, counting from 1

# [(2 bytes) position of (sub)match, 0-th submatch is whole match

# [(2 bytes) length of (sub)match

# [--------- repeated to a maximum of 50 submatches ---]

#

public function re_print_detail(detail)

{

auto size, line, i, pos, len, s;


size = _detail_decode(detail, 1, 1);
print "size " size;
if (size == 0) return E_OK;
print "submatches " (size-1);

line = _detail_decode(detail, 2, 2);
print "line " line;

for (s=0; s<size; s++)
{
pos = _detail_decode(detail, s*4+4, 2);
len = _detail_decode(detail, s*4+6, 2);
print "sub(" s ") pos: " pos " len: " len;
}

return E_OK;

}


# get the (sub)match position and length from the detail

public function re_get_detail(in detail, in submatch, out nsubs, out line, out position, out len)

{

auto size;


nsubs = 0;

position = 0;

len = 0;

line = 0;


size = _detail_decode(detail, 1, 1);

if (size == 0) return E_NOT_FOUND;

nsubs = size-1;

if (submatch < 0) return E_OUT_OF_RANGE;

if (submatch+1 > size) return E_OUT_OF_RANGE;


line = _detail_decode(detail, 2, 2);

position = _detail_decode(detail, submatch*4+4, 2);

len = _detail_decode(detail, submatch*4+6, 2);

return E_OK;

}


# get the (sub)matched string from the detail

public function re_get_match(in orig_str, in detail, in submatch, out match_str)

{

auto rc, nsubs, position, len, line;


match_str = "";


rc = re_get_detail(detail, submatch, nsubs, line, position, len);

if (rc != E_OK) return rc;


match_str = substr(orig_str, position+1, len);

return E_OK;
}


Online Vs Batch Execution : Functions & Compiled Modules - Wild Card Characters

> Every time there is a change in an application object, I need to change the object name and re-run the test script with the new object name. Any suggestions?

If there is a minimal change in the application object, then it is better to wildcard the object properties.
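WinRunner's GUI map supports regular expressions in physical descriptions - a property value prefixed with "!" - so a slightly changing object need not break the script. A sketch with a hypothetical button label:

# physical description in the GUI map:
# {class: push_button, label: "!Sav.*"}
# matches "Save", "Save As..." and similar labels, so a minor
# label change does not require editing the script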

What's WinRunner?

WinRunner is Mercury Interactive's functional testing tool.

How many types of run modes are available in WinRunner?

WinRunner provides three types of run modes:
> Verify Mode
> Debug Mode
> Update Mode

What's Verify Mode?

In Verify Mode, WinRunner compares the current result of the application to its expected result.

What's Debug Mode?

In Debug Mode, WinRunner tracks down defects in a test script.

What's Update Mode?

In Update Mode, WinRunner updates the expected results of a test script.

How many types of recording modes are available in WinRunner?

WinRunner provides two types of recording mode:
> Context Sensitive
> Analog

What's Context Sensitive recording?

WinRunner captures and records GUI objects, windows, keyboard input and mouse-click activity through Context Sensitive recording.

What's Analog recording?

Analog recording captures keyboard input, mouse clicks and mouse movements. It does not capture GUI objects and windows.

Where are Debug Results stored?

Debug results are always saved in the debug folder.

What's the WinRunner testing process?

The WinRunner testing process involves six main steps:

> Create GUI map

> Create Test

> Debug Test

> Run Test

> View Results

> Report Defects 

What's the GUI Spy?

We can view the physical properties of objects and windows through the GUI Spy.

How many modes are there for organizing GUI map files?

WinRunner provides two modes:

> Global GUI map files

> Per-test GUI map files

What's contained in GUI map files?

GUI map files store the information WinRunner learns about GUI objects and windows.

What's the difference between the GUI map and GUI map files?

The GUI map is actually the sum of one or more GUI map files.

How do you view the GUI map content?

We can view the GUI map content through GUI map editor.

What's a checkpoint?

A checkpoint enables you to check your application by comparing its expected results to the actual results.

What's the Execution Arrow?

The Execution Arrow indicates the line of the script being executed.

What's the Insertion Point?

The insertion point indicates the line of the script where we can edit and insert text.

What's Synchronization?

Synchronization enables us to solve anticipated timing problems between the test and the application.
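For example, instead of a fixed wait(), a script can poll an object property until the application catches up; the window, object and property values below are placeholders:

set_window ("Main", 5);
obj_wait_info ("status_bar", "label", "Ready", 30);  # wait up to 30 seconds for "Ready"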

What's the Function Generator?

The Function Generator provides a quick and error-free way to add TSL functions to a test script.

How many types of checkpoints are available in WinRunner?

WinRunner provides four types of checkpoints-

> GUI Checkpoint

> Bitmap Checkpoint

> Database Checkpoint

> Text Checkpoint 

What's contained in the Test Script?

A test script contains statements written in TSL (Test Script Language).

What is a Data Driven Test?

When we test an application, we may want to check how it performs the same operation with multiple sets of data; a test parameterized in this way is a data-driven test.

How do you record a Data Driven Test?

We record the test once with a single set of data, then convert it into a data-driven test with the DataDriver Wizard, which replaces the recorded values with parameters drawn from a data table.

What are the steps of creating a Data Driven Test?

Data-driven testing has four steps (a sketch follows the list):

> Creating the test

> Converting it into a data-driven test

> Running the test

> Analyzing the test results
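A minimal TSL sketch of the converted test, assuming a data table flights.xls with a Flight_No column (the file name, column name, and recorded statements are illustrative):

table = "flights.xls";

# open the data table for reading
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("cannot open data table");

ddt_get_row_count (table, row_count);

for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);

    # the recorded steps, with the hard-coded value replaced by ddt_val
    set_window ("Flight Reservation", 10);
    edit_set ("Flight No:", ddt_val (table, "Flight_No"));
}

ddt_close (table);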

What's the extension of GUI map files?

The GUI map file extension is ".gui".

What statement is generated by WinRunner when you check any object?

An obj_check_gui statement.

What statement is generated by WinRunner when you check any window?

A win_check_gui statement.

What statement is generated by WinRunner when you check a bitmap image over an object?

An obj_check_bitmap statement.

What statement is generated by WinRunner when you check a bitmap image over a window?

A win_check_bitmap statement.
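The four statements as they typically appear in a recorded script; the object names, checklist files, and expected-results names below are illustrative, not recorded output:

# GUI checkpoint on a single object, then on a whole window
obj_check_gui ("Agent Name:", "list1.ckl", "gui1", 1);
win_check_gui ("Flight Reservation", "list2.ckl", "gui2", 1);

# bitmap checkpoint on an object, then on a window (5-second timeout)
obj_check_bitmap ("Logo", "Img1", 5);
win_check_bitmap ("Flight Reservation", "Img2", 5);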

What statement is used by WinRunner in Batch Testing?

The "call" statement, as sketched below.
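A hedged sketch of a batch (driver) test; the test paths are illustrative:

# driver test: run other tests in sequence
call "c:\\qa\\tests\\login_test" ();
call "c:\\qa\\tests\\order_test" ();
call "c:\\qa\\tests\\logout_test" ();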

Which shortcut key is used to freeze the GUI Spy?

"Ctrl+F3"

How many types of parameters are used by WinRunner?

WinRunner provides three types of parameters:

> Test

> Data Driven

> Dynamic 

How many types of merging are used by WinRunner?

WinRunner uses two types of merging:

> Auto

> Manual 

What's the Virtual Object Wizard?

Whenever WinRunner is not able to recognize an area of the screen as an object, the Virtual Object Wizard can be used to define it as one.

How do you handle unexpected events and errors?

WinRunner uses exception handling functions to handle unexpected events and errors.

How do you comment your script?

We comment a script or a line of the script by inserting "#" at the beginning of the line.

What's the purpose of the set_window command?

The set_window command sets the focus to the specified window.

How did you create your test script?

By programming.

What's the command to invoke an application?

invoke_application

What do you mean by the logical name of an object?

The logical name of an object is determined by its class, but in most cases the logical name is the label that appears on the object.

How many types of GUI checkpoints are there?

In WinRunner, there are three types of GUI checkpoints:

> For Single Properties

> For Objects/Windows

> For Multiple Objects 

How many types of Bitmap checkpoints are there?

In WinRunner, there are two types of Bitmap checkpoints:

> For Objects/Windows

> For Screen Area 

How many types of Database checkpoints are there?

In WinRunner, there are three types of Database checkpoints:

> Default Check

> Custom Check

> Runtime Record Check 

What is the GUI Map?

The GUI Map provides a layer of indirection between the objects described in the script and the widgets created by the application. The GUI Map is made up of all currently loaded GUI Map files. GUI Map files are viewed in the GUI Map Editor. The GUI map file contains the logical names and physical descriptions of GUI objects. WinRunner stores the information it learns about a window or object in a GUI Map. The GUI map provides a centralized object repository, allowing testers to verify and modify any tested object. These changes are then automatically propagated to all appropriate scripts, eliminating the need to build new scripts each time the application is modified. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each object in the GUI Map file has a logical name and a physical description. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties needed to provide a unique identification of the object. Each object is identified within the scope of its parent window, not the entire application. An example of how WinRunner uses a logical name and physical description to identify an object:

>>>>> Logical Name : This is the name that appears in the test script when the user records the application. Usually WinRunner uses attached text that it can read as the logical name.

> For example, "Print" for a Print dialog box, or "OK" for an OK button. This short name connects WinRunner to the object's longer physical description.

> WinRunner checks that there are no other objects in the GUI map with the same name.

set_window (\"Readme.doc - WordPad\", 10);

menu_select_item (\"File; Print... Ctrl+P\");

set_window (\"Print\", 12);

button_press (\"OK\");


>>>>> Physical Description : The physical description contains a list of the object's physical properties. The Print dialog box, for example, is identified as a window with the label "Print".

> Readme.doc window: {class: window, label: "Readme.doc - WordPad"}

File menu: {class: menu_item, label: File, parent: None}

Print command: {class: menu_item, label: \"Print... Ctrl+P\", parent: File}

Print window: {class: window, label: Print}

OK button: {class: push_button, label: OK}

For each class, WinRunner learns a set of default properties. Each default property is classified as "obligatory" or "optional". An obligatory property is always learned (if it exists).

An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object. In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

>>> Location selector : The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.

>>> Index selector : The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.

Consider an example where a form has two OK buttons:

> Script : set_window ("Form1", 2);

button_press ("OK");    { class: push_button, label: OK, MSW_id: 1}

button_press ("OK_1");    { class: push_button, label: OK, MSW_id: 2}

WinRunner recorded the object logical names as OK and OK_1. In the physical description, both buttons have the same class and label properties, so WinRunner assigned a third property to both buttons, "MSW_id" (Microsoft Windows id), which is assigned by the operating system. When we run the script, WinRunner will recognize the objects by MSW_id, since the id is different for the two OK buttons. The user can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor. Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long. Changing the physical description is necessary when the property value of an object changes.

Using the GUI Spy, the user can view the properties of any GUI object on the desktop. The user points at an object with the Spy pointer, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. The user can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.

There are two modes for organizing GUI map files:

> Global GUI Map File : a single GUI Map file for the entire application.

> GUI Map File per Test : WinRunner automatically creates a GUI Map file for each test created.

> Global GUI Map File is the default mode. As the name suggests, in Global mode a single GUI Map file is created for the entire application.

Using the RapidTest Script Wizard option we can learn the entire application, but this option is not available in the GUI Map File per Test mode.

In Global GUI Map File mode, WinRunner learns all the objects in the window into one temporary GUI Map file. We can see this temporary file via Tools -> GUI Map Editor -> L0.

In this file all objects and their property values are stored. We need to save this GUI Map file explicitly; if it is not saved, all the entries remain in the temporary file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.

In GUI Map File per Test mode, a GUI map file is generated for each test. If we save the test script, it implicitly saves all the GUI objects it learned into a separate GUI file.

> If an object isn't found in the GUI Map during recording, WinRunner reads its attributes and adds it to the temporary GUI Map file. During playback, it doesn't matter which GUI Map file defines an object; objects may be identified from any loaded GUI Map file, whether it is the temporary file or a GUI Map file. If the GUI Map already contains an object, another file with that object cannot be loaded into the GUI Map. GUI files with unsaved changes are preceded by an asterisk (*). The temporary GUI map always loads automatically.

WinRunner fails to identify an object in a GUI due to various reasons.

i. The object is not a standard Windows object.

ii. If the browser used is not compatible with the WinRunner version, GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

The GUI Map Editor displays the various GUI Map files created, and the windows and objects learned into them with their logical names and physical descriptions. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The different options available in the GUI Map Editor are:

> Learn : Enables users to learn an individual GUI object, a window, or the entire GUI objects within a window.

> Modify : Opens the Modify dialog box and allows user to edit the logical name and the physical description of the selected GUI object.

> Add : adds GUI objects to the open GUI Map files.

> Delete : deletes the selected GUI objects from the open GUI Map files.

> Copy (Expanded view only) : copies the selected GUI objects to the other GUI map file in the GUI Map Editor.

> Move (Expanded view only) : moves the selected GUI objects to the other GUI map file in the GUI Map Editor.

> Show : highlights the selected GUI object if the object is visible on the screen.

> Find : Helps users to easily locate a specific GUI object in the GUI map.

> Expand (GUI Files view only) : expands the GUI Map Editor Dialog box, enabling the user to copy or move GUI objects between open GUI Map files.

> Collapse (GUI Files view only) : collapses the GUI Map Editor Dialog box.

> Trace (GUI Files view only) : Enables user to trace a GUI object that appears in more than one GUI Map file.

The user can clear a GUI Map file using the "Clear All" option in the GUI Map Editor. When the user is working in GUI Map per Test mode and clears the temporary GUI map, the GUI map test information will not be saved with the test and the test may fail.

>>> Filter Options : The GUI Map Editor has a Filter option which enables the user to define which GUI objects to display in the GUI Map Editor. There are 3 options:

> Filter by Logical Name : If selected, displays only those GUI objects whose Logical Names contain the substring the user specified.

> Filter by Physical Description : If selected, displays only those GUI objects whose Physical Descriptions contain the substring the user specified.

> Filter by Class : If selected, displays only those GUI objects in the class the user specified.

Saving Changes to the GUI Map : When the user makes a modification to the physical description or logical name within a GUI map file, the user must save the changes before ending the testing session and exiting WinRunner. The user need not save the changes manually when working in the GUI Map File per Test mode; changes are saved automatically with the test. If the user adds new windows from a loaded GUI map file to the temporary GUI map file, then when the user saves the temporary GUI map file, the New Windows dialog box opens, prompting to add the new windows to the loaded GUI map file or save them in a new GUI map file.

>>> The user can load a GUI map in two ways:

a) Using a function : GUI_load (file_name);

file_name is the full path of the GUI map. If the user does not specify a full path, WinRunner searches for the GUI map relative to the current file system directory, so the user should always specify a full path to ensure that WinRunner finds the GUI map.

b) Using the Map Editor : from the GUI Files drop-down in the Map Editor the user can select the file name. When the user selects a file name, the GUI file is loaded.

> Unload GUI map files :

GUI_close (file_name); unloads a specific GUI Map file.

GUI_close_all; unloads all the GUI Map files loaded in memory.
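A small TSL sketch tying these together (the path is illustrative):

# load the map, make sure it worked, then unload it when done
rc = GUI_load ("c:\\qa\\gui\\flight.gui");

if (rc != E_OK)
    pause ("could not load GUI map: " & rc);

# ... test steps that rely on the loaded map ...

GUI_close ("c:\\qa\\gui\\flight.gui");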

While working in the GUI Map File per Test mode, WinRunner automatically creates, saves, and loads a GUI map file with each test user create. When user work in the Global GUI Map File mode it enables user to save information about the GUI of the application in a GUI map that can be referenced by several tests. When the application changes instead of updating each test individually, user can merely update the GUI map that is referenced by the entire group of tests. The GUI Map File Merge Tool enables user to merge multiple GUI map files into a single GUI map file. Before merging GUI map files, user must specify at least two source GUI map files to merge and at least one GUI map file as a target file. The target GUI map file can be an existing file or a new (empty) file.

> Auto merge : The merge tool merges all GUI map files, and prompts the user only if there are conflicts to resolve between the files.

> Manual merge : The user merges each GUI map file manually. The merge tool prevents the user from creating conflicts while merging the files.

Many applications contain custom GUI objects. A custom GUI object is any object not belonging to one of the standard classes used by WinRunner; these objects are therefore assigned to the generic "object" class. When WinRunner records an operation on a custom object, it generates obj_mouse_click statements in the test script. If a custom object is similar to a standard object, the user can map it to one of the standard classes. The user can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration are valid only for the current WinRunner session. To make them permanent, the user must add configuration statements to a startup test script. A startup test is a test script that is automatically run each time WinRunner starts. The user can create startup tests that load GUI map files and compiled modules, configure recording, and start the application under test. The user can designate a test as a startup test by entering its location in the Startup Test box in the Environment tab of the General Options dialog box. The user can use the RapidTest Script wizard to create a basic startup test called myinit that loads a GUI map file and the application being tested. When working in the GUI Map File per Test mode, the myinit test does not load GUI map files.

> Sample Startup Test :

# Start the Flight application if it is not already displayed on the screen.
# The invoke_application statement starts the application being tested.

if ((rc = win_exists ("Flight")) == E_NOT_FOUND)

invoke_application ("w:\\flight_app\\flight.exe", "", "w:\\flight_app", SW_SHOW);

# Load the compiled module "qa_funcs". load statements load compiled modules
# containing user-defined functions that users frequently call from their test scripts.

load ("qa_funcs", 1, 1);

# Load the GUI map file "flight.gui". GUI_load statements load one or more GUI
# map files. This ensures that WinRunner recognizes the GUI objects in the
# application when the user runs tests.

GUI_load ("w:\\qa\\gui\\flight.gui");

# Map the custom "borbtn" class to the standard "push_button" class. The
# set_class_map statement configures how WinRunner records GUI objects in the application.

set_class_map ("borbtn", "push_button");

> Deleting a Custom Class : User can delete only custom object classes. The standard classes used by WinRunner cannot be deleted.

>>> Virtual object : Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, the user can instruct WinRunner to treat it like a GUI object such as a push button, radio button, check button, list, or table when recording and running tests. If none of these is suitable, the user can map a virtual object to the general object class.

> WinRunner identifies a virtual object according to its size and its position within a window; the x, y, width, and height properties are always found in a virtual object's physical description. If these properties are changed or deleted, WinRunner cannot recognize the virtual object. If the user moves or resizes an object, the user must use the wizard to create a new virtual object. The virtual object should not overlap GUI objects in the application (except for those belonging to the generic "object" class, or to a class configured to be recorded as "object"). If a virtual object overlaps a GUI object, WinRunner may not record or execute tests properly on the GUI object.

>>>> Advantages of the GUI Map :

> Maintainability : If a button label changes in the application, update the button description once in the GUI map rather than in 500 tests.

> Readability :

button_press ("Insert");

instead of

button_press ("{class: ThunderSSCommand}");

> Portability : Use the same script for all platforms, with a different GUI map for each platform, as sketched below.
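A hedged sketch of the portability idea (the platform check, environment variable, and file names are illustrative):

# pick the GUI map that matches the platform under test
if (getenv ("OS") == "Windows_NT")
    GUI_load ("c:\\qa\\gui\\flight_win.gui");
else
    GUI_load ("c:\\qa\\gui\\flight_motif.gui");

# the test steps below stay identical on every platform
set_window ("Flight Reservation", 10);
button_press ("Insert");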


What is TestDirector?

TestDirector is a test management tool produced by Mercury Interactive. 

 Its four modules : 

> Requirements, 

> Test Plan, 

> Test Lab and 

> Defects Manager 

            are integrated to enable information to flow smoothly between different stages of the testing process. Completely Web-enabled, TestDirector supports communication and collaboration among distributed testing teams.

TestDirector has been classified in the following categories :

> Defect Tracking

> Testing and Analysis

> Debugging

> Automated Software Quality (ASQ) 

How do you integrate your automated scripts with TestDirector?

When we work with WinRunner, we can choose to save our tests directly to our TestDirector database, or while creating a test case in TestDirector we can specify whether the script is automated or manual.

And if it is an automated script, then...

Is there any possibility to restrict duplication of defects being created in TD?

No Way. The only thing we can do is to find the similar defects and delete or close them. 

What is Quality Center (TestDirector)?

We can use Quality Center to create a project (central repository) of manual and automated tests and components, build test cycles, run tests and components, and report and track defects.

We can also create reports and graphs to help us review the progress of test planning, runs, and defect tracking before a software release. When we work in QuickTest,

 We can create and save tests and components directly to our Quality Center project. We can run QuickTest tests or components from Quality Center and then use Quality Center to review and manage the results. We can also use Quality Center with Business Process Testing support to create business process tests, comprised of the components we create in either QuickTest or Quality Center with Business Process Testing support.

What do you call the window in TestDirector's Test Lab?

"Execution Grid". It is the place from where we run all manual/automated scripts.

After creating the test cases in Excel and exporting them to TD, how does TestDirector know the headings?

Exporting test cases from a spreadsheet to TD takes 8 steps.

In the 6th step we map the TD fields to the corresponding spreadsheet columns. Since we do the mapping ourselves, we can map according to our specifications.

How to use TestDirector like a Dashboard?

The new version of TD (TestDirector for Quality Center) should provide you with that.

If we do not want to upgrade, we have to design our own "start page", include the apps and bits we want to display, and use code to extract data from TD.

Can you retrieve a test case once you have deleted them in Test Director ?

How do we import test cases written in Excel to TestDirector?

> Use the Mercury Interactive Microsoft Excel Add-in for importing test cases written in an Excel sheet.

> It is available on the Add-ins page.

> Select the rows in Excel which you want to upload to TD.

> Then select the Export to TD option under the Tools menu.

Is it necessary for beginners to learn TestDirector?

TestDirector is a test management tool; it is used across all major organizations and is generally used for management of all test activities in an organization.

It is important to learn this tool, but for beginners it is enough to understand how to log defects into it and how to run tests using it.

Can you please explain the procedure for connecting to TestDirector from QTP?

To connect to TD from QTP follow these steps:

Open QTP ==> Tools ==> Select TestDirector Connection ==> In the Server Connection box enter the TD address (URL of TD) ==> Click Connect ==> In the Project Connection box enter the details:

Domain, Project, User name and Password ==> Click Connect. If we want to reconnect on startup, check "Reconnect on startup" and "Save password for reconnection on startup". Then close.

What are the various types of reports in TestDirector?

For each and every phase we can get reports: for requirements, test cases, and test runs.

Some types of reports are also available like the summary report, progress report and requirements coverage report. Every TestDirector client tool has an Analysis menu in its menu bar.

Using this menu we can create reports in table format. You can also generate graphs; various types of charts are supported as well.

In TD (Quality Center 9.0), how can you run automated test cases?

While designing our test steps in QC for automation tests in the Test Plan module, a Test Script tab is available. We can generate the script here or copy it from our automation tool. While running our tests, it asks on which host we want to run.

We need to select a system in our network and then run it. Before running our script on a system, the automation tool, like WinRunner, must be installed on that system. Otherwise you will get an error.

How do we attach an Excel sheet to TestDirector?

This function is for getting a datatable (Excel sheet) from TestDirector.

Save it as a .vbs file and call this function to get your datatable.

GetAttachment(FileName, OutPath)

FileName : The name of the attachment that needs to be copied

OutPath : The folder location where the file needs to be stored

Return value : The full path where the file has been copied on the local file system


Example:

FilePath = GetAttachment(\"test.pdf\", \"C:\")

MsgBox \"Your file is here:\" & FilePath


The GetAttachmentFromTest function finds the attachment associated with the given test name and stores it in a local folder.


GetAttachmentFromTest(TestName, FileName, OutPath)


TestName : The name of the test where the attachment is located

FileName : The name of the attachment that needs to be copied

OutPath : The folder location where the file needs to be stored

Return value : The full path where the file has been copied on the local file system


Example:

FilePath = GetAttachmentFromTest("Attachment", "hello.vbs", "C:\aa")

MsgBox "Your file is here: " & FilePath

What is the use of Test Lab in TestDirector?

Test Lab can be used to create a test set. You can add one or many test cases into a test set,

then run all test cases in a test set together and set the status as pass/fail.

Can we map the defects directly to the requirements (not through the test cases) in TestDirector?

Yes.

> Create our requirement structure.

> Create the test case structure and the test cases.

> Map the test cases to the appropriate requirements.

> Run and report bugs from your test cases in the Test Lab module.

> The database structure in TD maps a test case to defects only if you have created the bug from the appropriate test case. Maybe you can update the mapping by using some code in the bug script module (from the Customize Project function); as far as I know, it's not possible to map defects directly to a requirement.

How do I run reports from TestDirector?

This is how you do it:

> Open the TestDirector project.

> Display the Requirements module.

> Choose the report:

Analysis > Reports > Standard Requirements Report.

Can we export the files from TestDirector to an Excel sheet? If yes, then how?

Yes, we can export from TestDirector to an Excel sheet, as follows:

> Design tab : right click -> go to Save As -> select Excel and save it.

> Requirements tab : right click on the main requirement / click on Export / save as Word, Excel or another template. This saves all the child requirements.

> Test Plan tab : only individual tests can be exported.

> No parent-child export is possible. Select a test script, click on the Design Steps tab, right click anywhere on the open window, then click on Export and Save As.

> Test Lab tab : select a child group. Click on the Execution Grid if it is not selected. Right click anywhere; the default save option is Excel, but it can be saved in doc and other formats. Select the 'all' or 'selected' option.

> Defects tab : right click anywhere on the window, export 'all' or 'selected' defects and save as an Excel sheet or document.

Can we upload test cases from an Excel sheet into TestDirector?

Yes, we can do that. Go to the Add-ins menu in TestDirector, find the Excel add-in, and install it on your machine. Now open Excel;

we can find the new menu option Export to TestDirector. The rest of the procedure is self-explanatory.

How can we map a single defect to two test scripts? Is there a way in TestDirector so that we can state that defect X is the same for test script A and test script B?

No way. When we run a script, we find and generate a defect report.

In other words, every defect report is unique to a single test script.

How can we export multiple test cases from TD in a single go?

Open any test and click on the Design Steps tab.

Once it opens, we can right click on the first cell and export into any format.

How to customize the generated reports?

This depends a lot on what we are interested in reporting on.

We have to combine both SQL and VBScript to extract data from TD to Excel.

It's also possible to customize the standard reports given from the Analysis tab; these are written in XML if you are familiar with this language. If we log in to Mercury support we will be able to find a lot of code examples.

How many tabs are there in TestDirector, and what are they?

There are 4 tabs available in TestDirector:

A. Requirements : to track the customer requirements

B. Test Plan : to design the test cases & to store the test scripts

C. Test Lab : to execute the test sets & track the results

D. Defects : to log a defect & to track the logged defects

How to map requirements to test cases in TestDirector?

A. In the Requirements tab select Coverage View.

B. Select a requirement by clicking on the parent/child or grandchild.

C. On the right-hand side (in the Coverage View window) another window will appear. It has two tabs:

(a) Tests Coverage

(b) Details. The Tests Coverage tab will be selected by default, or you click it.

D. Click on the Select Tests button. A new window will appear on the right-hand side and we will see a list of all tests. We can select any test case we want to map to our requirement.

How to use TestDirector in real-time projects?

Once the preparation of the test cases is completed:

> Export the test cases into TestDirector (it takes 8 steps in total).

> The test cases will be loaded in the Test Plan module.

> Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module.

> In Test Lab, we execute the test cases and mark them as pass, fail or incomplete. We generate the graphs in the Test Lab for the daily report and send them to the onsite team (wherever we need to deliver them).

> If we find any defects, we raise them in the Defects module. When raising the defect, attach a screen shot of the defect.

How can we add requirements to test cases in TestDirector?

We can add requirements to test cases in two ways:


> Either from the Requirements tab or the Test Plan tab.


> Navigate to the appropriate requirement and right click; we can find the menu to map the test case, and the vice versa option is available in the Test Plan tab.

What does the Test Grid contain?

The Test Grid basically contains:

> The Test Grid displays all the tests in a TestDirector project.

> The Test Grid contains the following key elements:

> Test Grid toolbar, with buttons for commands commonly used when creating and modifying the Test Grid.

   > Grid filter, displaying the filter that is currently applied to a column.

   > Description tab, displaying a description of the selected test in the Test Grid.

   > History tab, displaying the changes made to a test. For each change, the grid displays the field name, date of the change, name of the person who made the change, and the new value. 

How to generate the graphs in TestDirector?

Open TestDirector and click on Analysis; we will find three types of graphs:

> Planning Progress Graphs.

> Planning Summary Graphs.

> Defect Age Graph.

Click any one of them and we can generate the graph. To generate graphs in the Test Lab module:

> Analysis.

> Graph.

> Graph Wizard.

> Select the graph type as Summary and click the Next button.

> Select Show Current Tests and click the Next button.

> Select Define a New Filter and click the Filter button.

> Select the test set and click the OK button.

> Select the Plan: Subject and click the OK button.

> Select the Plan: Status.

> Select the test set as x-Axis.

> Click the Finish button.

What is the difference between a Master test plan and a test plan?

There are many differences:

> The Master test plan is the document in which each and every functional point is validated.

> The test case document contains test cases; a test case is written with the perception that the probability of finding a defect is high.

How to add a Test ID to the Test Plan?

Create an object with type = Number.

Name it something like "Test_ID" in the Customize Entities area. Then go into the Workflow Script Editor to "TestPlan module script/TestPlan_Test_MoveTo" and insert the following:

If Test_Fields.Field("Test_ID").Value <> Test_Fields.Field("TS_TEST_ID").Value Then
    Test_Fields.Field("Test_ID").Value = Test_Fields.Field("TS_TEST_ID").Value
End If

This will put an object on each test that displays the Test ID number.

How will you generate the defect ID in test director? Is it generated automatically or not?

The Defect ID is generated automatically after the submission of the defect.

Difference between WinRunner and TestDirector?

There are many differences:

> WinRunner : It's an automated testing tool, used for automating manually written test cases into test scripts and for regression testing.


> TestDirector : It's a test management tool, used for creating the test plan, preparing test cases, executing test cases and generating defect reports. It is also used for maintaining test scripts.

How many types of reports can be generated using TestDirector?

Reports on TestDirector display information about test requirements, the test plan, test runs, and defect tracking. Reports can be generated from each TestDirector module using the default settings, or we can customize them. When customizing a report, 

We can apply filters and sort conditions, and determine the layout of the fields in the report. We can further customize the report by adding sub-reports. We can save the settings of our reports as favorite views and reload them as needed.

What are the 3 views and what is the purpose of each view?

The 3 views of requirements are:

1) Document View : a tabulated view.

2) Coverage View : establishes a relationship between requirements and the tests associated with them, along with their execution status. Mostly the requirements are written in this view only.

3) Coverage Analysis View : shows a chart with the requirements associated with the tests, and the execution status of the tests.

What is the main purpose of storing requirements in TestDirector?

In the TestDirector Requirements tab we store our project requirement documents according to the modules or functionality of the application.

This helps us to make sure that all requirements are covered when we trace developed test cases/test scripts to the requirements. This helps the QA Manager to review to what extent the requirements are covered.

How to connect to a database?

Code : 

Const adOpenStatic = 3

Const adLockOptimistic = 3

Const adUseClient = 3

Set objConnection = CreateObject("ADODB.Connection")

Set objRecordset = CreateObject("ADODB.Recordset")

' open the connection (UID/PWD left blank here, as in the original)
objConnection.Open "DRIVER={Microsoft ODBC for Oracle};UID=;PWD="

objRecordset.CursorLocation = adUseClient

objRecordset.CursorType = adOpenStatic

objRecordset.LockType = adLockOptimistic

objRecordset.Source = "select field1,field2 from testTable"

objRecordset.ActiveConnection = objConnection

objRecordset.Open 'This will execute your query

If objRecordset.RecordCount > 0 Then

Field1 = objRecordset("Field1").Value

Field2 = objRecordset("Field2").Value

End If

How can I check if an environment variable exists or not?

When we use Environment("Param1").Value, QTP expects the environment variable to already be defined.

But when we use Environment.Value("Param1"), QTP will create a new internal environment variable if it does not already exist. So, to be sure that the variable exists in the environment, use Environment("Param1").Value.

What are OBJECT ORIENTED TESTING METRICS?

Testing metrics can be grouped into two categories:

> Encapsulation and

> Inheritance.

>>> Encapsulation :

> Lack of cohesion in methods (LCOM) : The higher the value of LCOM, the more states have to be tested.

> Percent public and protected (PAP) : This number indicates the percentage of class attributes that are public and thus the likelihood of side effects among classes.

> Public access to data members (PAD) : This metric shows the number of classes that access other classes' attributes and thus violate encapsulation.

>>> Inheritance :

> Number of root classes (NOR) : A count of distinct class hierarchies.

> Fan in (FIN) : FIN > 1 is an indication of multiple inheritance and should be avoided.

> Number of children (NOC) and depth of the inheritance tree (DIT) : For each subclass, its superclass has to be re-tested.

The above metrics (and others) are different from those used in traditional software testing; however, metrics collected from testing should be the same (i.e. number and type of errors, performance metrics, etc.).

What do we normally check for in Database Testing?

In DB testing we need to check for:

A. Field size validation

B. Check constraints.

C. Whether indexes are created or not (for performance-related issues)

D. Stored procedures

E. Whether the field size defined in the application matches that in the DB.

Database testing involves some in-depth knowledge of the given application and requires a more defined plan of approach to test the data. Key issues include:

> data integrity

> data validity

> data manipulation and updates.

The tester must be aware of database design concepts and implementation rules.

Define : What are SOFTWARE TESTING METRICS?

In general, testers must rely on metrics collected in the analysis, design and coding stages of development in order to design, develop and conduct the necessary tests.

These generally serve as indicators of the overall testing effort needed. High-level design metrics can also help predict the complexities associated with integration testing and the need for specialized testing software.

Cyclomatic complexity may identify modules that will require extensive testing, as those with high cyclomatic complexity are more likely to be error-prone. Metrics collected from testing, on the other hand, usually comprise the number and type of errors, failures, bugs and defects found. These can then serve as measures used to calculate further testing effort required.

They can also be used as a management tool to determine the extent of the project's success or failure and the correctness of the design. In any case these should be collected, examined and stored for future needs.

How can I import environment from a file on disk?

Environment.LoadFromFile "C:\Env.xml"

What is the way of writing test cases for database testing?

There are many ways. We have to do the following for writing database test cases:

1. First of all we have to understand the functional requirement of the application thoroughly.

2. Then we have to find out the back-end tables used, the joins used between the tables, the cursors used, the triggers used, the stored procedures used, and the input and output parameters used for developing that requirement.

3. After knowing all these things, we have to write the test case with different input values for checking all the paths of the stored procedure. Note that writing test cases for back-end testing is not like functional testing; we have to use white box testing techniques. Otherwise, writing a test case for the database is just like functional testing:

1. Objective : Write the objective that you would like to test. E.g.: to check that the shipment I load through XML is inserted for a particular customer.

2. Write the method of input or the action that you perform. E.g.: load an XML with all the data which can be added for a customer.

3. Expected : The input should be visible in the database. E.g.: the shipment should be loaded successfully for that customer, and it should also be seen in the application.

4. We can write such test cases for any functionality, like update, delete, etc.

>>> At first we need to go through the documents provided, and know what tables and stored procedures are mentioned in the doc. Then we test the functionality of the application and simultaneously start writing the DB test cases: with the queries we used at the back end while testing, the tables and stored procedures we used in order to get the desired results, and the triggers that were fired. Based on the stored procedure we can know the functionality for a specific piece of the application, so we can write queries related to that; from those we make DB test cases as well.

What is meant by Constraints of Software Quality Assurance?

SQA is difficult to institute in small organizations where the resources needed to perform the necessary activities are not available.

A smaller organization tends not to have the required resources, like manpower, capital etc., to assist in the process of SQA. In addition, the cost is not budgeted: SQA requires the expenditure of dollars that are not otherwise explicitly budgeted to software engineering and software quality.

The implementation of SQA involves immediate upfront costs, and the benefits of SQA tend to be more long-term than short-term. Hence, some organizations may be less willing to include the cost of implementing SQA in their budget.

My test fails due to a checkpoint failing. Can I validate a checkpoint without my test failing due to the checkpoint failure?

Reporter.Filter = rfDisableAll 'Disable all the reporting stuff

chk_PassFail = Browser(...).Page(...).WebEdit(...).Check (Checkpoint("Check1"))

Reporter.Filter = rfEnableAll 'Enable all the reporting stuff

if chk_PassFail then

MsgBox "Check Point passed"

else

MsgBox "Check Point failed"

end if

How to test data loading in Database testing?

We have to do the following things while involved in data load testing:

> We have to know about the source data (table(s), columns, datatypes and constraints).

> We have to know about the target data (table(s), columns, datatypes and constraints).

> We have to check the compatibility of source and target.

> We have to open the corresponding DTS package in SQL Enterprise Manager and run the DTS package (if we are using SQL Server).

> Then we should compare the columns' data of source and target.

> We have to check the number of rows of source and target.

> Then we have to update the data in the source and see whether the change is reflected in the target or not.

> We have to check for junk characters and NULLs.

Define : Reluctance to implement Software Quality Assurance?

Managers are reluctant to incur the extra upfront cost. Such upfront costs are not budgeted in software development, therefore management may be unprepared to fork out the money.

> Avoid Red Tape (Bureaucracy)

> Red tape means extra administrative activities that need to be performed, as SQA involves a lot of paperwork. New procedures to determine that software quality is correctly implemented need to be developed, followed through and verified by external auditing bodies. These requirements involve a lot of administrative paperwork.

How can I check if a checkpoint passes or not?

chk_PassFail = Browser(...).Page(...).WebEdit(...).Check (Checkpoint("Check1"))

if chk_PassFail then

MsgBox "Check Point passed"

else

MsgBox "Check Point failed"

end if

Define : Formal Technical Review (reviews that include walkthroughs and inspections)?

Reviews include walkthroughs, inspections, round-robin reviews and other small-group technical assessments of software. A formal technical review is a planned and controlled meeting attended by the analysts, programmers and people involved in the software development. Its goals:

> Uncover errors in logic, function or implementation for any representation of the software.

> Verify that the software under review meets the requirements.

> Ensure that the software has been represented according to predefined standards.

> Achieve software that is developed in a uniform manner.

> Make the project more manageable.

> Early discovery of software defects, so that in the development and maintenance phases the errors are substantially reduced.

> Serves as a training ground, enabling junior members to observe the different approaches in the software development phases (gives them a helicopter view of what others are doing when developing the software).

> Allows for continuity and backup of the project, because a number of people become familiar with parts of the software that they might not have otherwise seen.

> Greater cohesion between different developers.

What SQL statements have you used in Database Testing?

The most important statement for database testing is the SELECT statement, which returns data rows from one or multiple tables that satisfy a given set of criteria. We may need to use other DML (Data Manipulation Language) statements like INSERT, UPDATE and DELETE to manage our test data.

We may also need to use DDL (Data Definition Language) statements like CREATE TABLE, ALTER TABLE, and DROP TABLE to manage our test tables.

We may also need some other commands to view table structures, column definitions, indexes, constraints and stored procedures.

Define Test Specifications?

The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan.

Test Specification Items : Each test specification should contain the following items:

> Case No : The test case number should be a three-digit identifier of the form c.s.t,

> where : c is the chapter number, s is the section number, and t is the test case number.

> Title : is the title of the test.

> ProgName : is the program name containing the test.

> Author : is the person who wrote the test specification.

> Date : is the date of the last revision to the test case.

> Background : (Objectives, Assumptions, References, Success Criteria) : Describes in words how to conduct the test.

> Expected Error(s) : Describes any errors expected.

> Reference(s) : Lists reference documentation used to design the specification.

> Data : (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation Under Test (IUT) and the test engine.

> Script : (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.

Example Test Specification

Test Specification

Case No.

> Title : Invalid Sequence Number (TC)

ProgName :

Background : (Objectives, Assumptions, References, Success Criteria)

Validate that the IUT will reject a normal-flow PIU with a transmission header that has an invalid sequence number.

Data: (Tx Data, Predicted Rx Data)

IUT

<-------- DATA FIS, OIC, DR1 SNF=20

<-------- DATA LIS, SNF=20

--------> -RSP $2001


> Script : (Pseudo Code for Coding Tests)

SEND_PIU FIS, OIC, DR1, DRI SNF=20

SEND_PIU LIS, SNF=20

R_RSP $2001

What's the difference between a checkpoint and an output value?

A checkpoint only checks a specific attribute of an object in the AUT, while an output value can output that attribute's value to a column in the data table.

What is database testing and what do we test in database testing?

Database testing is all about testing joins, views, imports and exports, testing the procedures, checking locks, indexing etc. It is not about testing the data in the database.

Usually database testing is performed by a DBA. Database testing involves some in-depth knowledge of the given application and requires a more defined plan of approach to test the data.

Key issues include:

A) Data integrity

B) Data validity

C) Data manipulation and updates

The tester must be aware of database design concepts and implementation rules.

Database testing basically includes the following:

A) Data validity testing.

B) Data integrity testing.

C) Performance related to the database.

D) Testing of procedures, triggers and functions.

For data validity testing we should be good at SQL queries. For data integrity testing we should know about referential integrity and the different constraints. For performance-related things we should have an idea about the table structure and design. For testing procedures, triggers and functions we should be able to understand them.

To learn to use WinRunner, should I sign up for a course at a nearby educational institution?

Free, or inexpensive, education is often provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many other software testing tools.

In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutes. Classes, especially non-degree courses at community colleges, tend to be inexpensive.

What is a checkpoint?

A checkpoint is basically a point in the test which validates the truthfulness of a specific thing in the AUT.

There are different types of checkpoints depending on the type of data that needs to be tested in the AUT: it can be text, image/bitmap, attributes, XML etc.

When to use a function or an action?

Well, the answer depends on the scenario. If we want to use the OR (Object Repository) feature then we have to go for an Action only.

If the functionality is not automation-specific, i.e. a function like getting the string between two specific characters, then it is not specific to QTP and can be done in pure VBScript, so it should be a function and not an action. Code specific to QTP can also be put into a function using DP (Descriptive Programming).

The decision of using a function or an action depends on what one is comfortable using in a given situation.

Should I take a course in manual testing?

Yes, you may want to consider taking a course in manual testing,

because learning how to perform manual testing is an important part of one's education.

Unless you have a significant personal reason for not taking a course, you do not want to skip an important part of an academic program.

How to use SQL queries in WinRunner/QTP?

In QTP, using the output database checkpoint and the database checkpoint,

select the SQL manual queries option and enter the "select" queries to retrieve data from the database, then compare the expected and actual values.

What is the difference between an Action and a function?

An action is a thing specific to QTP, while functions are a generic feature of VBScript. An action can have an object repository associated with it while a function can't.

A function is just lines of code with some or no parameters and a single return value, while an action can have more than one output parameter.

How can software QA processes be implemented without stifling productivity?

To implement software QA processes without stifling productivity, we want to implement them slowly over time. We want to use consensus to reach agreement on processes, and adjust and experiment as an organization grows and matures. Productivity will then be improved instead of stifled.

Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process.

However, no one, especially not the talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

What are the different stages involved in Database Testing?

> Verify field-level data in the database with respect to front-end transactions.

> Verify the constraints (primary key, foreign key, ...).

> Verify the performance of the procedures.

> Verify the triggers (execution of triggers).

> Verify the transactions (begin, commit, rollback).

What steps does a tester take in testing Stored Procedures?

First the tester should go through the requirements, to understand why the particular stored procedure was written. Then check whether all the required indexes, joins, updates, and deletions are correct, comparing with the tables mentioned in the stored procedure.

The tester also has to ensure that the stored procedure follows the standard format, like comments, updated by, etc. Then check the procedure's calling name, calling parameters, and expected responses for different sets of input parameters.

Then run the procedure with database client programs like TOAD, mysql, or Query Analyzer. Rerun the procedure with different parameters, and check the results against expected values. Finally, automate the tests with WinRunner, as sketched below.
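A hedged TSL sketch of that last automation step (the connection string, procedure name, parameter, column, and expected value are all illustrative):

# connect to the database (DSN and credentials are illustrative)
db_connect ("session1", "DSN=qa_db;UID=qa;PWD=qa");

# run the stored procedure and note how many records came back
db_execute_query ("session1", "EXEC check_order 1001", rec_count);

# read a field from the first row and compare it to the expected value
status = db_get_field_value ("session1", "#0", "status");
if (status != "SHIPPED")
    report_msg ("unexpected status: " & status);

db_disconnect ("session1");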

What if the application has functionality that wasn\'t in the requirements?

It can take a serious effort to determine if an application has significant unexpected or hidden functionality, which can indicate deeper problems in the software development process. If the functionality isn\'t necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. 

If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the unexpected functionality only affects minor areas, e.g. small improvements in the user interface, then it may not be a significant risk.

Can I change the properties of a run-time object?

No (but also yes). We can use GetROProperty("outerText") to get the outerText of an object, but there is no function like SetROProperty to change this property.

However, we can use WebElement().Object.outerText = "Something" to change the property.

How do you test whether the database is updated as and when information is added in the front end? Give me an example.

It depends on what level of testing we are doing. When we save something from the front end, obviously it has to be stored somewhere in the database.

We need to find out the relevant tables involved in saving the records, and the data mapping from the front end to the tables. Then enter the data from the front end and save. Go to the database and fire queries to get the same data from the back end.

Can you give me five common solutions?

Solid requirements, realistic schedules, adequate testing, firm requirements, and good communication. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to the requirements. Use prototypes to help nail down requirements.

Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and change management tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

Can I change the properties of a test object?

Yes. We can use SetTOProperty to change the test object properties.

It is recommended that we switch off Smart Identification for the object on which we use the SetTOProperty function.

How do you test whether a database is updated when information is entered in the front end?

It depends on our application's interface:

> If our application provides view functionality for the entered data, then we can verify it from the front end only. Most of the time black box test engineers verify the functionality this way.


> If our application has only data entry from the front end and there is no view from the front end, then we have to go to the database and run the relevant SQL query.


> We can also use the database checkpoint function in WinRunner.

Can you give me five common problems?

Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication.

Requirements are poorly written when they're unclear, incomplete, too general, or not testable; therefore there will be problems. The schedule is unrealistic if too much work is crammed into too little time.

Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes. It's extremely common that new features are added after development is underway. Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations, and therefore problems are guaranteed.

What is the difference between Test Objects and Run-Time Objects?

Test objects are basic and generic objects that QTP recognizes.

A run-time object is the actual object in the application to which a test object maps.

When to associate a library file with a test and when to use ExecuteFile?

When we associate a library file with a test, all the functions within that library are available to all the actions present in the test.

But when we use the ExecuteFile function to load a library file, the functions are available only in the action that called ExecuteFile.

By associating a library with a test we share variables across actions (global variables, basically); using association also makes it possible to execute code as soon as the script runs, because while loading the script on startup QTP executes all the code in the global scope. We can use ExecuteFile in a library file associated with the test to load dynamic files, and they will be available to all the actions in the test.

What testing approaches can you tell me about?

Each of the followings represents a different testing approach :

> Black box testing, 

> White box testing, 

> Unit testing, 

> Incremental testing, 

> Integration testing, 

> Functional testing, 

> System testing, 

> End-to-End testing, 

> Sanity testing, 

> Regression testing, 

> Acceptance testing, 

> Load testing, 

> Performance testing, 

> Usability testing, 

> Install/uninstall testing, 

> Recovery testing, 

> Security testing, 

> Compatibility testing, 

> Exploratory testing, 

> Ad-hoc testing, 

> User acceptance testing,

> Comparison testing, 

> Alpha testing, 

> Beta testing, and 

> Mutation testing. 

How to test a SQL Query in Winrunner? without using DataBase CheckPoints?

By writing a scripting procedure in TSL (WinRunner's Test Script Language) we can connect to the database and test the database and queries.

The exact process should be:

> Connect to the database

db_connect("query1", "DRIVER={drivername};SERVER=server_name;UID=uidname;PWD=password;DBQ=database_name");

> Execute the query

db_execute_query("query1", "<the query you want to execute>", rec_count);

(apply whatever condition needs to be checked to the returned records)

> Disconnect the connection

db_disconnect("query1");

Is a \"A fast database retrieval rate\" a testable requirement?

No, I do not think so, since the requirement is ambiguous.

The SRS should clearly state the performance or transaction requirements, i.e. it should say something like 'a DB retrieval rate of 5 microseconds'.

What\'s difference between client/server and Web Application ?

Client/server describes any application architecture where one server application and one or many client applications are involved, like a mail server and MS Outlook Express; it can be a web application as well. A web application is a kind of client/server application that is hosted on a web server and accessed over the internet or an intranet.

There are lots of things that differ between testing the two types above, and they can't all be covered in one post, but you can look into the data flow, communication, and server-side variables like session state, security, etc.

What's the basic concept of QuickTest Professional (QTP)?

QTP is based on two concepts-

> Recording

> Playback

How to Test Database Procedures and Triggers?

Before testing database procedures and triggers, the tester should know what the input and output of each procedure/trigger are. Then execute the procedure or trigger: if you get the expected answer, the test case passes; otherwise it fails.
These requirements should be obtained from the DEVELOPER.

Define : Software Quality Assurance Activities?

Software Quality Assurance activities are basically defined as : 

    >  Application of Technical Methods (Employing proper methods and tools for developing software)

    > Conduct of Formal Technical Review (FTR)

    > Testing of Software

    > Enforcement of Standards (Customer imposed standards or management imposed standards)

    > Control of Change (Assess the need for change, document the change)

    > Measurement (Software Metrics to measure the quality, quantifiable)

    > Records Keeping and Recording (documentation, reviews, change control, etc., i.e. the benefits of docs). 

Which scripting language used by QuickTest Professional (QTP)?

The scripting language used by QuickTest Professional is as follows : 

> QTP uses VBScript. 

What is data driven test?

Re-execution of our test with different input values is called re-testing. To validate our project calculations, the test engineer follows a re-testing manner through an automation tool. Re-testing is also called data-driven testing. There are 4 types of data-driven tests:

> Dynamic input submission (key-driven test) : Sometimes a test engineer conducts re-testing with different input values to validate the calculations through dynamic submission. For this input submission, the test engineer uses this function in a TSL script: create_input_dialog ("label");

> Data-driven files through FLAT FILES (.txt, .doc) : Sometimes the test engineer conducts re-testing depending on flat-file contents. He collects these files from old-version databases or from the customer side.

> Data-driven tests from FRONT-END OBJECTS : Sometimes a test engineer creates automation scripts depending on front-end object values, such as 

(a) list 

(b) menu 

(c) table 

(d) data window 

(e) ocx, etc.

> Data-driven tests from EXCEL SHEETS : Sometimes a test engineer follows this type of data-driven test to execute the script for multiple inputs. These multiple inputs reside in Excel-sheet columns. We have to collect this test data from back-end tables. 

What\'s the difference between STATIC TESTING and DYNAMIC TESTING?

> Dynamic testing : Requires the program to be executed. The program is run on some test cases and the results of the program's performance are examined to check whether the program operated as expected.

> Static testing : Does not involve program execution. E.g. compiler tasks such as syntax and type checking, symbolic execution, program proving, data-flow analysis, and control-flow analysis.

How many types of recording facility are available in QuickTest Professional (QTP)?

QTP provides three types of recording methods-

> Context Recording (Normal)

> Analog Recording

> Low Level Recording

How to Test database in Manually? Explain with an example

Observing whether operations performed on the front end are reflected on the back end or not.

        The approach is as follows : 

While adding a record through the front end, check on the back end whether the addition is reflected or not; the same applies for delete and update. Ex: Enter an employee record in the database through the front end and check (manually) whether the record was added to the back end. A back-end check can be done with a SQL query, as sketched below.
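
A minimal sketch of such a back-end check using ADO from VBScript (the DSN, credentials, table, and key value are hypothetical):

code :

Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=hr_db;UID=user;PWD=pass"   ' assumed DSN and credentials
' Look for the record that was just added through the front end (assumed table/key)
Set rs = conn.Execute("SELECT emp_name FROM employees WHERE emp_id = 101")
If Not rs.EOF Then
    MsgBox "Record found: the front-end add reached the back end"
Else
    MsgBox "Record missing: the add was not persisted"
End If
rs.Close
conn.Close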

How can I be effective and efficient, when I\'m testing e-commerce web sites?

When we're doing black box testing of an e-commerce web site, we're most efficient and effective when we're testing the site's visual appeal, content, and home page. When we want to be effective and efficient, we need to verify that the site is well planned : 

> verify that the site is customer-friendly; 

> verify that the choices of colors are attractive;

> verify that the choices of fonts are attractive; 

> verify that the site's audio is customer friendly; 

> verify that the site's video is attractive; 

> verify that the choice of graphics is attractive;

> verify that every page of the site is displayed properly on all the popular browsers; 

> verify the authenticity of facts; ensure the site provides reliable and consistent information; 

> test the site for appearance; 

> test the site for grammatical and spelling errors; 

> test the site for visual appeal, choice of browsers, consistency of font size, download time, broken links, missing links, incorrect links, and browser compatibility; 

> test each toolbar, each menu item, every window, every field prompt, every pop-up text, and every error message; 

> test every page of the site for left and right justifications, every shortcut key, each control, each push button, every radio button, and each item on every drop-down menu; 

> test each list box, and each help menu item. Also check if the command buttons are grayed out when they're not in use. 

When to use a Recovery Scenario and when to us on error resume next?

Recovery scenarios are used when we cannot predict at what step an error can occur, or when we know that the error won't occur in our QTP script but could occur in the world outside QTP; again, the example would be "out of paper", as this error is caused by the printer device driver. 

"On error resume next" should be used when we know an error is expected and don't want to raise it; we may want to take different actions depending upon the error that occurred. Use Err.Number and Err.Description to get more details about the error.
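
A minimal sketch of the "On error resume next" pattern with Err inspection (the division by zero is just a stand-in for an expected error):

code :

On Error Resume Next
x = 1 / 0                ' an operation we expect may fail
If Err.Number <> 0 Then
    MsgBox "Error " & Err.Number & ": " & Err.Description
    Err.Clear            ' reset so later checks see only new errors
End If
On Error GoTo 0          ' restore normal error handling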

What is Database testing?

Database testing basically includes the following.

> Data validity testing.

> Data integrity testing

> Performance related to data base.

> Testing of Procedure,triggers and functions.

For data validity testing we should be good at SQL queries. For data integrity testing we should know about referential integrity and the different constraints. For performance-related things we should have an idea about the table structure and design. For testing procedures, triggers, and functions we should be able to understand those as well.

What is the future of software QA/testing?

In software QA/testing, employers increasingly want us to have a combination of technical, business, and personal skills. By technical skills they mean skills in IT, quantitative analysis, data modeling, and technical writing. By business skills they mean skills in strategy and business writing. 

By personal skills they mean personal communication, leadership, teamwork, and problem-solving skills. We, employees, on the other hand, want increasingly more autonomy, better lifestyle, increasingly more employee oriented company culture, and better geographic location. We continue to enjoy relatively good job security and, depending on the business cycle, many job opportunities. We realize our skills are important, and have strong incentives to upgrade our skills, although sometimes lack the information on how to do so. 

Educational institutions increasingly ensure that we are exposed to real-life situations and problems, but high turnover rates and a rapid pace of change in the IT industry often act as strong disincentives for employers to invest in our skills, especially non-company specific skills. Employers continue to establish closer links with educational institutions, both through in-house education programs and human resources. 

The share of IT workers with IT degrees keeps increasing. Certification continues to keep helping employers to quickly identify us with the latest skills. During boom times, smaller and younger companies continue to be the most attractive to us, especially those that offer stock options and performance bonuses in order to retain and attract those of us who are the most skilled. High turnover rates continue to be the norm, especially during economic boom. Software QA/testing continues to be outsourced to offshore locations. Software QA/testing continues to be performed by mostly men, but the share of women keeps increasing.

What does a Recovery Scenario consists of?

> Trigger : The trigger is nothing but the cause for initiating the recovery scenario. It could be any popup window, any test error, a particular state of an object, or any application error. 

> Action : The action defines what needs to be done if the scenario has been triggered. It can consist of a mouse/keyboard event, closing the application, calling a recovery function defined in a library file, or restarting Windows. We can have a series of all the specified actions.

> Post-recovery operation : Basically defines what needs to be done after the recovery action has been taken. It could be to repeat the step, move to the next step, etc.

What is a Recovery Scenario?

A recovery scenario gives us an option to take some action to recover from a fatal error in the test. 

The error could range from occasional to typical. An occasional error would be an "Out of paper" popup while printing something; typical errors would be "object is disabled" or "object not found". 

A test case can have more than one scenario associated with it, along with the priority or order in which the scenarios should be checked.

What is the definiton of top down design?

Top down design progresses from simple design to detailed design. Top down design solves problems by breaking them down into smaller, easier to solve subproblems. 

Top down design creates solutions to these smaller problems, and then tests them using test drivers. In other words, top down design starts the design process with the main module or system, then progresses down to lower level modules and subsystems. To put it differently, top down design looks at the whole system, and then explodes it into subsystems, or smaller parts.

A systems engineer or systems analyst determines what the top level objectives are, and how they can be met. He then divides the system into subsystems, i.e. breaks the whole system into logical, manageable-size modules, and deals with them individually.
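
As a rough illustration of the test-driver idea mentioned above (all names and values are hypothetical):

code :

' Lower-level modules are stubs for now; the top module is designed and exercised first
Function GetHoursWorked()
    GetHoursWorked = 40              ' stub: canned value until the real module exists
End Function

Function CalculatePay(hours)
    CalculatePay = hours * 10        ' stub: placeholder logic
End Function

Sub ProcessPayroll()                 ' the top-level module under design
    MsgBox "Pay: " & CalculatePay(GetHoursWorked())
End Sub

ProcessPayroll                       ' test driver: run the top module against the stubs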

What we normally check for in the Database Testing?

In DB testing we need to check for,

>  The field size validation

>  Check constraints.

>  Indexes are done or not (for performance related issues)

>  Stored procedures

>  The field size defined in the application matches the one in the DB. 

Define : Descriptive Programming ?

Descriptive programming is nothing but a technique using which operations can be performed on AUT objects that are not present in the object repository.

Why should I use static testing techniques?

There are several reasons why one should use static testing techniques.

 1: One should use static testing techniques because static testing is a bargain, compared to dynamic testing.

 2: Static testing is up to 100 times more effective. Even in selective testing, static testing may be up to 10 times more effective. The most pessimistic estimates suggest a factor of 4.

 3: Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs by static testing is many times lower than detecting bugs by dynamic testing.

 4: About half of the bugs, detectable by dynamic testing, can be detected earlier by static testing.

 5: If one uses neither static nor dynamic test tools, the static tools offer greater marginal benefits.

 6: If an urgent deadline looms on the horizon, the use of dynamic testing tools can be omitted, but tool supported static testing should never be omitted. 

Define : Software Testing?

Software testing is a critical component of the software engineering process. It is an element of software quality assurance and can be described as a process of running a program in such a manner as to uncover any errors. This process, while seen by some as tedious, tiresome and unnecessary, plays a vital role in software development. 

 Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. 

Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

When should I use SMART Identification?

SMART Identification : Smart Identification is nothing but an algorithm used by QTP when it is not able to recognize one of the objects. A very generic example, as per the QTP manual, would be: 

a photograph of an 8-year-old girl and boy. QTP records the identification properties of the girl when she was 8; when both are 10 years old, QTP would no longer be able to recognize the girl. 

But something is still the same: there is only one girl in the photograph. So it is a kind of PI (programmed intelligence), not AI.

How can I make some rows colored in the data table?

Well, you can't do it normally, but you can use the Excel COM APIs to do the same. The code below illustrates some aspects of the Excel COM APIs.

code :

Set xlApp=CreateObject("Excel.Application")

Set xlWorkBook=xlApp.Workbooks.Add

Set xlWorkSheet=xlWorkBook.Worksheets.Add

xlWorkSheet.Range("A1:B10").Interior.ColorIndex = 34 'Change the color of the cells

xlWorkSheet.Range("A1:A10").Value="text" 'Will set values of all 10 rows to "text"

xlWorkSheet.Cells(1,1).Value="Text" 'Will set the value of the first row, first column

rowsCount=xlWorkSheet.Evaluate("COUNTA(A:A)") 'Will count the # of rows with a non-blank value in column A

colsCount=xlWorkSheet.Evaluate("COUNTA(1:1)") 'Will count the # of non-blank columns in the 1st row

xlWorkBook.SaveAs "C:\Test.xls"

xlWorkBook.Close

Set xlWorkSheet=Nothing

Set xlWorkBook=Nothing

Set xlApp=Nothing

What\'s difference between QA/testing ?

The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specified requirements and adhere to their established plans.
The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and into the products being built. Testing, by contrast, is the process of executing a system under controlled conditions to detect defects: QA is process-oriented and aims at prevention, while testing is product-oriented and aims at detection.

What testing tools should we use?

 We should use both static and dynamic testing tools. To maximize software reliability, we should use both static and dynamic techniques, supported by appropriate static and dynamic testing tools.

 1: Static and dynamic testing are complementary. Static and dynamic testing find different classes of bugs. Some bugs are detectable only by static testing, some only by dynamic.

 2: Dynamic testing does detect some errors that static testing misses. To eliminate as many errors as possible, both static and dynamic testing should be used.

 3: All this static testing (i.e. testing for syntax errors, testing for code that is hard to maintain, testing for code that is hard to test, testing for code that does not conform to coding standards, and testing for ANSI violations) takes place before compilation.

 4: Static testing takes roughly as long as compilation and checks every statement we have written. 

Give us a QuickTest Professional (QTP) 8.2 Tips and Tricks (1) ?

Data Table : 

    Two Types of data tables : 

> Global data sheet : Accessible to all the actions

> Local data sheet : Accessible to the associated action only.

Usage :

DataTable(\"Column Name\",dtGlobalSheet) for Global data sheet DataTable(\"Column Name\",dtLocalSheet) for Local data sheet If we change any thing in the Data Table at Run-Time the data is changed only in the run-time data table. The run-time data table is accessible only through then test result. The run-time data table can also be exported using DataTable.Export or DataTable.ExportSheet


How can I save the changes to my DataTable in the test itself?

Well, QTP does not provide anything for saving run-time changes to the actual data sheet. 

The only workaround is to share the spreadsheet and then access it using the Excel COM APIs.

What is the difference between static and dynamic testing?

There are many differences : 

 1: Static testing is about prevention, dynamic testing is about cure.

 2: The static tools offer greater marginal benefits.

 3: Static testing is many times more cost-effective than dynamic testing.

 4: Static testing beats dynamic testing by a wide margin.

 5: Static testing is more effective!

 6: Static testing gives you comprehensive diagnostics for your code.

 7: Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed.

 8: Dynamic testing usually takes longer than static testing. Dynamic testing may involve running several test cases, each of which may take longer than compilation.

 9: Dynamic testing finds fewer bugs than static testing.

 10: Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking.

 11: Static testing can find all of the followings that dynamic testing cannot find: syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform to coding standards, and ANSI violations. 

What is the difference between data validity and data integrity?

There are many differences : 

1 : Data validity is about the correctness and reasonableness of data, while data integrity is about the completeness, soundness, and wholeness of the data that also complies with the intention of the creators of the data.

2 : Data validity errors are more common, and data integrity errors are less common. 

3 : Errors in data validity are caused by human beings - usually data entry personnel - who enter, for example, 13/25/2010, by mistake, while errors in data integrity are caused by bugs in computer programs that, for example, cause the overwriting of some of the data in the database, when somebody attempts to retrieve a blank value from the database.
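
For the date example above, a minimal sketch of a data-validity check (IsDate interprets the string per the system locale; under a US locale "13/25/2010" is rejected because 13 is not a valid month):

code :

If Not IsDate("13/25/2010") Then
    MsgBox "Invalid date: reject at data entry"
End If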

Descriptive programming is done by giving the description in the form of string arguments.

We can describe an object directly in a statement by specifying property:=value pairs describing the object instead of specifying an object's name. The general syntax is :

> TestObject("PropertyName1:=PropertyValue1", "..." , "PropertyNameX:=PropertyValueX")

TestObject - the test object class; it could be WebEdit, WebRadioGroup, etc.

> PropertyName:=PropertyValue - the test object property and its value. Each property:=value pair should be separated by commas and quotation marks. Note that we can enter a variable name as the property value if we want to find an object based on property values we retrieve during a run session.


Consider the HTML code given below:


<INPUT type="text" name="txt_Name">

<INPUT type="radio" name="txt_Name">


Now, to refer to the textbox, the statement would be as given below:


Browser("Browser").Page("Page").WebEdit("Name:=txt_Name","html tag:=INPUT").Set "Test"


And to refer to the radio button, the statement would be as given below:


Browser("Browser").Page("Page").WebRadioGroup("Name:=txt_Name","html tag:=INPUT").Select "Test"


If we refer to them as WebElements, then we have to distinguish between the two using the Index property:


Browser("Browser").Page("Page").WebElement("Name:=txt_Name","html tag:=INPUT","Index:=0").Click ' Refers to the textbox

Browser("Browser").Page("Page").WebElement("Name:=txt_Name","html tag:=INPUT","Index:=1").Click ' Refers to the radio button

How do you test data integrity?

Data integrity is tested by the following tests :

> Verify that we can create, modify, and delete any data in tables.

> Verify that sets of radio buttons represent fixed sets of values.

> Verify that a blank value can be retrieved from the database.

> Verify that, when a particular set of data is saved to the database, each value gets saved fully, and the truncation of strings and rounding of numeric values do not occur.

> Verify that the default values are saved in the database, if the user input is not specified.

> Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.


Why do we perform data integrity testing?

Because we want to verify the completeness, soundness, and wholeness of the stored data.

Testing should be performed on a regular basis, because important data could, can, and will change over time.

What black box testing types can you tell me about?

Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality. 

> Functional testing is also a black-box type of testing geared to functional requirements of an application.

> System testing is also a black box type of testing. 

> Acceptance testing is also a black box type of testing. 

> Closed box testing is also a black box type of testing. 

> Integration testing is also a black box type of testing.

How many types of Parameters are available in QuickTest Professional (QTP)?

QTP provides three types of Parameter-

> Method Argument

> Data Driven

> Dynamic

What is software testing methodology?

One software testing methodology is the use of a three-step process of : 

> Creating a test strategy;

> Creating a test plan/design; and

> Executing tests. 

          This methodology can be used and molded to our organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

How to browse through all the properties of a properties collection?

Two ways are there : 

1st :

' obj_ChkDesc is assumed to be a Properties collection (e.g. created via Description.Create)
For Each desc In obj_ChkDesc

Name=desc.Name

Value=desc.Value

RE = desc.regularexpression

Next

2nd :

For i=0 to obj_ChkDesc.count - 1

Name= obj_ChkDesc(i).Name

Value= obj_ChkDesc(i).Value

RE = obj_ChkDesc(i).regularexpression

Next


What is the checklist for credit card testing?

In credit card testing the following validations are considered :

> Testing the 4-DBC (digit batch code) for its uniqueness (present on the right corner of the credit card)

> The message formats in which the data is sent

> LUHN testing (see the sketch after this list)

> Network response

> Terminal validations
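
LUHN testing validates the card number's check digit. A minimal sketch of the Luhn algorithm (the function name is ours):

code :

Function IsValidLuhn(cardNumber)
    Dim i, digit, total, doubleIt
    total = 0
    doubleIt = False
    For i = Len(cardNumber) To 1 Step -1      ' walk the digits right to left
        digit = CInt(Mid(cardNumber, i, 1))
        If doubleIt Then
            digit = digit * 2
            If digit > 9 Then digit = digit - 9
        End If
        total = total + digit
        doubleIt = Not doubleIt
    Next
    IsValidLuhn = (total Mod 10 = 0)          ' valid numbers sum to a multiple of 10
End Function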

How to write Test cases for Login screen?

The format for all test cases could be something like this :
> Test cases for the GUI
> +ve test cases for login
> -ve test cases for login
In the -ve scenario we should include boundary value analysis, equivalence classes, and positive and negative test cases, plus cross-site scripting and SQL injection. SQL injection is especially high-risk for login pages; a few sample inputs are sketched below.
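
A few illustrative -ve inputs for data-driven login tests (the values are just examples):

code :

' Empty, too short, too long, SQL injection, and script injection attempts
negInputs = Array("", "a", String(256, "x"), "' OR '1'='1", "<script>alert(1)</script>")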

Why Testing CANNOT Ensure Quality

Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed. 

What is the difference between version and release?

Both version and release indicate particular points in the software development life cycle, or in the life cycle of a document. Both terms, version and release, are similar, i.e. pretty much the same thing, but there are minor differences between them.
1: Version means a variation of an earlier or original type. For example, you might say, "I've downloaded the latest version of XYZ software from the Internet. The version number of this software is _____"
2: Release is the act or instance of issuing something for publication, use, or distribution. A release means something thus released. For example, "Microsoft has just released their brand new gaming software known as _______"

How do I check if property exists or not in the collection?

The answer is that it's not possible, because whenever we try to access a property which is not defined, it is automatically added to the collection. The only way to determine this is to check the value, i.e. use an If statement: If obj_ChkDesc("html tag").Value = Empty Then. 

How to remove a description from the collection ?

obj_ChkDesc.Remove "html tag" would delete the "html tag" property from the collection.

How to find all the Bugs during first round of Testing?

I understand the problems you are facing. I was involved with a web-based HR system that was encountering the same problems. What I ended up doing was going back over a few release cycles and analyzing the types of defects found and when they were found (in the release cycle, including the various testing cycles). I started to notice a distinct trend in certain areas.

For each defect type, I started looking into whether it could have been caught in the prior phase (lots of things were being found in the systems test phase that should have been caught earlier). If so, why wasn't it caught? Could it have been caught even earlier (say, via a peer review)? If so, why not? This led me to start examining the various processes, and I found a definite problem with peer reviews (not very thorough, IF they were even being done) and with the testing process (not rigorous enough). We worked with the customer and the folks doing the testing to start educating them and improving the processes. The result was that the number of defects found in the latter test stages (system test, for example) was cut by over half! It was getting harder to find problems with the product as they were being discovered earlier in the process -- saving time & money!

Is regression testing performed manually?

If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing. 

How to choose which defect to remove in 1000000 defects? (Because it will take too much resources to remove them all.)


Answer 1:
Are you the programmer who has to fix them, the project manager who has to supervise the programmers, the change control team that decides which areas are too high risk to impact, the stakeholder-user whose organization pays for the damage caused by the defects or the tester?
The tester does not choose which defects to fix.
The tester helps ensure that the people who do choose, make a well-informed choice.
Testers should provide data to indicate the *severity* of bugs, but the project manager or the development team do the prioritization.
When I say \"indicate the severity\", I don\'t just mean writing S3 on a piece of paper. Test groups often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions.
Priority depends on a wide range of factors, including code-change risk, difficulty/time to complete the change, which stakeholders are affected by the bug, the other commitments being handled by the person most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of most test groups.
Answer 2:
We surely can prioritize them once detected. In our organization we assign a severity level to defects depending upon their influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:
-critical
-High
-Medium
-Low
-Cosmetic

What\'s the QuickTest Professional (QTP) testing process?

The QTP testing process consists of seven steps-

> Preparing to record
> Recording
> Enhancing our script
> Debugging
> Running
> Analyzing results
> Reporting defects

How to test the memory leakage manually?

There are tools to check this. Compuware DevPartner can help us test our application for memory leaks if the application is complex. Also, depending upon the OS on which we need to check for memory leaks, we need to select the appropriate tool. 

How to Start recording using QuickTest Professional (QTP)?

> Click the Record button. The Record and Run Settings dialog box opens; to set it up :
> In the Web tab, select Open the following browser when a record or run session begins.
> In the Windows Applications tab, confirm that Record and run on these applications (opened on session start) is selected, and that there are no applications listed.

How to insert a check point to a image to check enable property in QTP?

If the images behave as push buttons, we can check the enabled/disabled property. If we are not able to find that property, go to the object repository for that object and click Add/Remove to add the available properties to it. If the object is treated as an image, then we need to check the visible/invisible property instead, which might also help, as there are no enable/disable properties for the image object. 

How to Save your test using QuickTest Professional (QTP)?

Select File > Save, or click the Save button.
> The Save dialog box opens to the Tests folder.
> Create a folder you want to save to, select it, and click Open.
> Type the test name in the File name field.
> Confirm that Save Active Screen files is selected, and click Save. 
> The test name is displayed in the title bar of the main QuickTest window.

How to Run a Test using QuickTest Professional (QTP)?

Start running the test.
Click Run or choose Test > Run. The Run dialog box opens.
Select New run results folder and accept the default results folder name.
Click OK to close the Run dialog box.

What is Static Analysis?