Friday, December 25, 2009

New Year New Articles

I couldn't publish any articles in the last two months because I had a busy calendar. So guys, there will be new articles lined up for 2010.....

Wishing you a Merry Christmas and a Happy New Year

Friday, October 2, 2009

Test Automation Benefits & Priorities

I have listed a few points about automation below. These points elaborate how important an automation tool is in testing.

Test Automation Benefits
  • Improves testing efficiency
  • Consistent and repeatable testing process
  • Supports quality metrics for test optimization
  • Improved regression tests - decrease cost of change
  • More tests can be run in less time - better coverage and shorter time
  • 24/7 operation - better use of resources
  • Human resources are free to perform advanced manual tests
  • Objective and measurable performance and stress tests
  • Execution of tests that can't be done manually
  • Tests can access system parameters that are not visible to tester
  • Simple reproduction of found defects
  • Testing costs decrease with reuse
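
To make the "consistent and repeatable" point concrete, here is a minimal sketch in Python of an automated regression check. The discount function and its checks are purely illustrative, not from any real product; the point is that the same checks run identically on every execution, which is what makes regression runs cheap to repeat.

```python
# Hypothetical function under test: a simple discount calculator.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def regression_suite():
    """Each entry is (description, callable returning True on pass)."""
    return [
        ("typical discount", lambda: apply_discount(200.0, 25) == 150.0),
        ("zero discount",    lambda: apply_discount(99.99, 0) == 99.99),
        ("full discount",    lambda: apply_discount(50.0, 100) == 0.0),
    ]

def run_suite(suite):
    # Run every check and collect pass/fail results; the suite behaves
    # the same way on every run, unlike a manual test session.
    return {name: check() for name, check in suite}

print(run_suite(regression_suite()))
```

In a real project the same idea is usually expressed with a framework such as unittest or pytest, which adds reporting and selective execution on top of this basic loop.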
Test Automation Priorities


  • Right selection of test procedures for automation is critical.
  • The test automation project should prove its usefulness to the organization by providing maximum impact for minimum effort from the start.
  • The following list represents recommended candidates for the first automation stage:
    - simplest to automate tests
    - frequently running tests
    - critical set tests
  • Good for system "smoke test" execution

Thursday, September 17, 2009

Test Automation Introduction

Test Automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.

Many test automation tools provide record-and-playback features that allow users to interactively record user actions and replay them back later.
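
The record-and-playback idea can be sketched in a few lines of plain Python. This is a toy illustration, not the API of any real tool: user actions are captured as simple (action, target, value) tuples and then re-applied to the application in order.

```python
# Toy sketch of record and playback (not any real tool's API).
class Recorder:
    def __init__(self):
        self.script = []

    def record(self, action, target, value=None):
        # Each captured user action becomes one step in the script.
        self.script.append((action, target, value))

class FakeApp:
    """Hypothetical stand-in for the application under test."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def apply(self, step):
        action, target, value = step
        if action == "type":
            self.fields[target] = value
        elif action == "click":
            self.clicked.append(target)

def replay(script, app):
    # Playback simply re-applies each recorded step in order.
    for step in script:
        app.apply(step)

rec = Recorder()
rec.record("type", "username", "alice")
rec.record("click", "login")

app = FakeApp()
replay(rec.script, app)
print(app.fields, app.clicked)
```

Real tools capture the same kind of step list from the UI itself and add object recognition and synchronization, but the record/replay structure is the same.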


Types of Test Automation

  • Functional Testing
  • Performance Testing
  • Stress Testing
  • Load Testing

Functional testing

Functional testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. It falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic.

Tools
  • WinRunner
  • IBM Rational Functional Tester
  • Silk Test

Performance Testing

Performance Testing is testing that is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability and reliability, and to demonstrate that the system meets performance criteria. It is, however, impossible to exactly replicate the workload variability of a real production system.

Tools
  • OpenSTA
  • LoadRunner
  • Silk Performer

Stress Testing/Load Testing

Stress Testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries; in software it often refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load than on what would be considered correct behavior under normal circumstances.

Load Testing is the process of creating demand on a system or device and measuring its response.
Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently.
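
The "multiple users accessing the program's services concurrently" idea above can be sketched with plain Python threads. This is a minimal illustration, not a real load tool: service() is a hypothetical stand-in for the system under test, and each simulated user records how long its calls take.

```python
import time
import threading
import statistics

def service():
    # Hypothetical system under test; pretend each request takes ~10 ms.
    time.sleep(0.01)
    return "ok"

def simulate_user(latencies, lock, requests_per_user=3):
    # One simulated user makes several requests and records each latency.
    for _ in range(requests_per_user):
        start = time.perf_counter()
        service()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

def run_load_test(num_users=5):
    latencies, lock = [], threading.Lock()
    threads = [threading.Thread(target=simulate_user, args=(latencies, lock))
               for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {
        "requests": len(latencies),
        "avg_latency": statistics.mean(latencies),
        "max_latency": max(latencies),
    }

print(run_load_test())
```

Tools like LoadRunner do essentially this at scale: generate concurrent virtual users, measure response times, and report the distribution.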

Tools
  • LoadRunner
  • Visual Studio Team Test
  • Application Center Test
  • Silk Performer

Wednesday, September 16, 2009

Risk Based Testing

What is RBT

Risk Based Testing (RBT) is a type of software testing that prioritizes the features and functions to be tested based on their priority/importance and the likelihood or impact of failure. In theory, since there is an infinite number of possible tests, any set of tests must be a subset of all possible tests.

Many people consider James Bach to be the 'father of Risk Based Testing'.

Activities involved in RBT

Risk Identification involves collecting information about the project and classifying it to determine the amount of potential risk in the test phase and in production.

Risk Strategy involves the identification and assessment of risks and the development of contingency plans, for possible alternative projects or for the mitigation of all risks.

Risk assessment means determining the effect (including cost) of potential risks.

Risk Mitigation is based on information gained from the previous activities of identifying, planning and assessing risks. Risk mitigation/avoidance activities aim to avoid risks or to minimize their impact.

Risk Reporting is based on information obtained from the previous activities and is very often presented in standard graphs.

Risk Prediction involves forecasting risks using the history and knowledge of previously identified risks.
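
As a concrete illustration of the assessment step above, a common approach is to score each feature by likelihood-of-failure times impact-of-failure and test the highest-scoring features first. The 1-5 scales and the feature names below are illustrative only, not from any standard.

```python
# Illustrative risk scoring: likelihood and impact on a 1-5 scale.
features = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "report export",      "likelihood": 2, "impact": 2},
    {"name": "user login",         "likelihood": 3, "impact": 5},
]

def risk_score(feature):
    # A simple risk exposure measure: likelihood x impact.
    return feature["likelihood"] * feature["impact"]

# Highest-risk features first: this becomes the test execution order.
prioritized = sorted(features, key=risk_score, reverse=True)
for f in prioritized:
    print(f["name"], risk_score(f))
```

The same score can also drive how much test effort each feature gets, not just the order of execution.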

Sunday, August 2, 2009

What is Usability Testing

What is Usability Testing - Part 2


Planning a Test

The first thing to know about planning a usability test is that every test is different in scope, and results will vary a lot depending on the purpose and context of the test. Testing a single new feature will look very different from testing several key scenarios in a new Application.

What Are You Going to Test?

Next, you need to decide what you’re going to test. The best way to do this is to meet with the design and development team and choose features that are new, frequently used, or considered troublesome or especially important. After choosing these features, prioritize them and write task scenarios based on them. A task scenario is a story that represents typical user activities and focuses on a single feature or group of related features. Scenarios should be:

- Short - Time is precious during usability testing, so you don’t want to spend too much time on reading or explaining scenarios.

- Specific - The wording of the scenario should be unambiguous and have a specific end goal.

- Realistic - The scenario should be typical of the activities that an average user will do on the Application.

- In the user’s language and related to the user’s context - The scenario should explain the task the same way that users would. This emphasizes the importance of the pre-session discussion, which gives you the opportunity to understand the participant’s relationship with the Application.

Here’s an example scenario for an Application that sells images:

Ex: You’re looking for an image that you can use on your company’s support Application. Find an appropriate image and add it to your basket. Be sure to let me know when you’re done.

Who is going to evaluate the Application?

Who you choose to evaluate the Application will have a massive effect on the outcome of the research. It’s very important to develop a thoughtful screener for recruiting your participants.

Imagine that you’re creating an Application that sells images. Your customers are people who want to buy images—a huge group of people. Narrow your focus to a short and concise user profile, a picture of your ideal test participants. This profile should be based on your primary user (customer) segment and contain characteristics that those users share.

In this scenario, our participants are graphic designers or other people who use graphic design software and purchase images online. Create and order a list of these users’ characteristics. While you’re creating the user profile, you may realize that you have two or more equally important subgroups—people who buy images for business use and people who buy images for home use. This is fine as long as you can justify the relevance of each subgroup to the features that you’ll be testing.

- Test with a reasonable number of participants - The best results come from testing no more than five users and running as many small tests as you can afford.



The most striking truth of this insights-versus-users curve is that zero users give zero insights.

As soon as you collect data from a single test user, your insights shoot up and you have already learned almost a third of all there is to know about the usability of the design. The difference between zero and even a little bit of data is astounding.

When you test the second user, you will discover that this person does some of the same things as the first user, so there is some overlap in what you learn. People are definitely different, so there will also be something new that the second user does that you did not observe with the first user. So the second user adds some amount of new insight, but not nearly as much as the first user did.

The third user will do many things that you already observed with the first user or with the second user and even some things that you have already seen twice. Plus, of course, the third user will generate a small amount of new data, even if not as much as the first and the second user did.

As you add more and more users, you learn less and less because you will keep seeing the same things again and again. There is no real need to keep observing the same thing multiple times, and you will be very motivated to go back to the drawing board and redesign the Application to eliminate the usability problems.

After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.
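
The diminishing-returns curve described above comes from Nielsen and Landauer's model, found(n) = N(1 - (1 - L)^n), where N is the total number of usability problems in the design and L is the proportion discovered while testing a single user (about 31% in their studies). A quick computation shows how fast the returns flatten:

```python
# Nielsen & Landauer's model of usability problem discovery.
# L is the share of problems a single test user reveals (~0.31 in
# their studies; treat the exact value as an empirical assumption).
def problems_found_fraction(n, L=0.31):
    return 1 - (1 - L) ** n

for users in (1, 2, 3, 5, 15):
    print(users, "users ->", round(problems_found_fraction(users), 2))
```

With L = 0.31, one user already reveals about a third of the problems, five users reveal roughly 85%, and by fifteen users the curve is essentially flat, which is exactly the "wasting your time after the fifth user" observation above.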

You need to test additional users only when the Application has several highly distinct groups of users.
Article is prepared by:
Anushka Wickramaratne
Senior QA Engineer

Monday, July 27, 2009

What is Usability Testing

What is Usability Testing - Part 1


Usability testing is a technique for ensuring that the intended users of a system can carry out the intended tasks efficiently, effectively and satisfactorily. Usability testing should be an iterative practice, completed several times during the design and development life-cycle. The end result is an improved product and a better understanding of the users that we're designing it for.


The main steps in the usability testing process are Planning, Gathering Data and Reporting Results.




There are two scenarios for usability testing:

1. If you are a software product vendor, testing real users of your product means you are evaluating for design. Based on how you have designed the application, can users complete the tasks they need to do? Testing real users doing real tasks can also point out if the UI guidelines you are following are working within the context of your product, and when consistency helps or hinders the users’ ability to do their work.


2. If you are a software product purchaser, you can do usability testing to evaluate a product for purchase. For example, your company might consider buying a product for their twenty thousand employees. Before the company spends its money, it wants to make sure that the product in question will really help employees do their jobs better. Usability testing can also be useful to see if the proposed application follows published UI style guidelines (internal or external). It’s best to use UI guidelines as an auxiliary, rather than primary, source of information for making purchase decisions.

This guide discusses the first scenario, 'evaluating for design'.


When is Usability Testing appropriate?


Usability testing is carried out pre-release so that any significant issues identified can be addressed. It can be carried out at various stages of the design process as well. In the early stages, however, techniques such as walkthroughs are often more appropriate.



Article is prepared by:

Anushka Wickramaratne

Senior QA Engineer



Saturday, July 25, 2009

What is a Test Case

In simple terms, test cases are how testers validate that software meets customer requirements. A test case is a tool which helps testers to test a product.


A test case is a set of conditions organized in a manner that lets testers determine whether a requirement is met in the product. Testers then decide whether the test has passed or failed based on the output of the test case. If the output is as expected, the conclusion is that the tested requirement is functioning correctly. If the test case fails, that requirement needs a code change to fix the issue. When a test case fails, testers call it a bug/issue* of the system.

The main inputs for test cases are the requirements of the system. These can be in the format of a requirements document, which is represented using different techniques such as use cases. It is then the testers' responsibility to design test cases to cover all requirements. At a minimum, there should be tests covering both the positive test and the negative test for a requirement.

Several pieces of information are required for a complete test case, and those are,

Test case ID – a unique number to identify the test case.

Test case description – the description of the feature or requirement that is going to be tested by this test case.

Steps – the steps that guide the tester in executing the desired test case.

Input data – if the test case requires specific data, it can be included here. When designing the test case, it is also acceptable to include the test data among the steps.

Reference – links the test case to the requirement documented in the requirements document.

Expected result – the expected output of the test case and how the feature/requirement should function in the product.

Testing status – whether the test case passed or failed after execution of the test.

Author – who designed the test case.

Date – when the test case was designed.

Change history – what has changed in the test case after its creation.

Author, Date and Change history can be a single entry for a set of test cases.
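
The fields above can be captured as a simple data structure. Here is one way to sketch it in Python; the field names follow the article, and the concrete values are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # The pieces of information a complete test case carries.
    case_id: str
    description: str
    steps: list
    expected_result: str
    input_data: dict = field(default_factory=dict)
    reference: str = ""        # link back to the requirements document
    status: str = "Not Run"    # becomes "Passed" or "Failed" after execution
    author: str = ""
    date: str = ""
    change_history: list = field(default_factory=list)

tc = TestCase(
    case_id="TC-001",
    description="Verify login with valid credentials",
    steps=["Open the login page", "Enter username and password", "Click Login"],
    expected_result="User is taken to the home page",
    input_data={"username": "alice", "password": "secret"},
    reference="REQ-12",
)
tc.status = "Passed"
print(tc.case_id, tc.status)
```

Test management tools store essentially this record per test case, plus the execution history across test runs.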


*Bug/Issue will be discussed in detail in another post.

Vocabulary...

The following are some words that will be useful for the next topic.


Risk is the potential loss to an organization, as for example, the risk resulting from the misuse of its computer. This may involve unauthorized disclosure, unauthorized modification, and/or loss of information resources, as well as the authorized but incorrect use of a computer. Risk can be measured by performing risk analysis.


Risk Analysis is an analysis of an organization’s information resources, its existing controls, and its remaining organization and computer system vulnerabilities. It combines the loss potential for each resource or combination of resources with an estimated rate of occurrence to establish a potential level of damage in dollars or other assets.


A Threat is something capable of exploiting vulnerability in the security of a computer system or application. Threats include both hazards and events that can trigger flaws.


Vulnerability is a design, implementation, or operations flaw that may be exploited by a threat; the flaw causes the computer system or application to operate in a fashion different from its published specifications and to result in destruction or misuse of equipment or data.


Control is anything that tends to cause the reduction of risk. Control can accomplish this by reducing harmful effects or by reducing the frequency of occurrence.



Reference: CSTE CBOk v 6.2

Friday, July 10, 2009

Clear doubts about QC vs QA

Quality Assurance Versus Quality Control

What's your guess? Do they mean the same thing?

The following statements help differentiate quality control from quality assurance:

• Quality control relates to a specific product or service.

• Quality control verifies whether specific attribute(s) are in, or are not in, a specific product or service.

• Quality control identifies defects for the primary purpose of correcting defects.

• Quality control is the responsibility of the team/worker.

• Quality control is concerned with a specific product.

• Quality assurance helps establish processes.

• Quality assurance sets up measurement programs to evaluate processes.

• Quality assurance identifies weaknesses in processes and improves them.

• Quality assurance is a management responsibility, frequently performed by a staff function.

• Quality assurance is concerned with all of the products that will ever be produced by a process.

• Quality assurance is sometimes called quality control over quality control because it evaluates whether quality control is working.

• Quality assurance personnel should never perform quality control unless it is to validate quality control.



Reference: CSTE CBOK v6.2

Monday, July 6, 2009

My idea of this.............

My idea of this blog is to publish the QA knowledge that I have gathered through my learning and experience, to anyone who is interested. Further, I'm planning to add articles on new concepts in Quality Assurance/Quality Control.

Vocabulary in Software Testing

What is Quality

A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means "meets requirements." From a customer's perspective, quality means "fit for use."


What is Quality Assurance (QA)

The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.


What is Quality Control (QC)

The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.


Reference: CSTE CBOK v6.2