Friday, 13 November 2015

Syllabus

1. Software Testing Fundamentals

2. What is Testing

3. Importance of Testing

4. Term and modification of testing

5. Fundamental Test process

6. Types of testing

7. Life cycle of testing

8. What is Tested application

9. Essentials of Testing

10. Economics of testing

11. Fundamentals of Quality

12. Test management

13. Testing concepts and fundamentals

14. Software development process (SDLC) and SDLC Models

15. Testing Principles and Fundamentals

16. Testing Approaches - Black Box Testing and White Box Testing

17. Testing Techniques - Static Testing, Dynamic Testing

18. Testing Process

19. Test Planning

20. Administrative Plan

21. Risk management

22. Test Focus

23. Test Objectives

24. Test Strategy and The Build Strategy

25. Problem management and Control

26. Test case Design

27. “V” Model and levels of testing

    - Unit Testing

    - Integration Testing (Bottom Up, Top Down, Big Bang, Sandwich)

    - System Testing (GUI, Usability, Configuration, Compatibility, Availability, Reliability, Installation, System Integration Testing)

    - User Acceptance Testing (Alpha Testing, Beta Testing)

28. Testing Types

- Functional Testing

- Structure Testing

- Specialized Testing

- Planning Your Test Efforts

Thursday, 12 November 2015

1. Software Testing Fundamentals

Let's start by considering why testing is needed. Testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but some of them are expensive or dangerous. We need to check everything and anything we produce because things can always go wrong - humans make mistakes all the time - it is what we do best! Because we should assume our work contains mistakes, we all need to check our own work.

However, some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it, and so we may not notice the flaws in what we have done. Ideally, we should get someone else to check our work - another person is more likely to spot the flaws.

Here, we'll explore the implications of these two simple paragraphs again and again. Does it matter if there are mistakes in what we do? Does it matter if we don't find some of those flaws? We know that in ordinary life some of our mistakes do not matter, and some are very important. It is the same with software systems. We need to know whether a particular error is likely to cause problems. To help us think about this, we need to consider the context within which we use different software systems.

 Software systems context

These days, almost everyone is aware of software systems. We encounter them in our homes, at work, while shopping, and through mass-communication systems. More and more, they are part of our lives. We use software in day-to-day business applications such as banking and in consumer products such as cars and washing machines. However, most people have had an experience with software that did not work as expected: an error on a bill, a delay when waiting for a credit card to process and a website that did not load correctly are common examples of problems that may happen because of software faults.

Not all software systems carry the same level of risk and not all problems have the same impact when they occur. A risk is something that has not happened yet and may never happen; it is a potential problem. We are concerned about these potential problems because, if one of them did happen, we'd feel a negative impact. When we discuss risks, we need to consider how likely it is that the problem would occur and what the impact would be if it did. For example, whenever we cross the road, there is some risk that we'll be injured by a car. The likelihood depends on factors such as how much traffic is on the road, whether there is a safe crossing place, how well we can see and how fast we can cross. The impact depends on how fast the car is going, whether we are wearing protective gear, our age and our health. The risk for a particular person can be worked out, and therefore the best road-crossing strategy chosen.
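This likelihood-and-impact view of risk is also how testers often decide where to spend effort first. Below is a minimal, hypothetical Python sketch of that idea; the feature names and the 1-5 scores are invented purely for illustration.

```python
# A minimal sketch of risk-based prioritization: risk combines likelihood
# and impact, and the riskiest areas get tested first.
features = [
    # (feature, likelihood of failure 1-5, impact if it fails 1-5)
    ("payment processing", 3, 5),
    ("report export",      4, 2),
    ("help page",          2, 1),
]

# Risk exposure = likelihood x impact.
prioritized = sorted(features, key=lambda f: f[1] * f[2], reverse=True)

for name, likelihood, impact in prioritized:
    print(f"{name}: risk score {likelihood * impact}")
```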

 Causes of software defects

Why is it that software systems sometimes don't work correctly? We know that people make mistakes - we are fallible. If someone makes an error or mistake in using the software, this may lead directly to a problem: the software is used incorrectly and so does not behave as we expected. However, people also design and build the software, and they can make mistakes during design and build. These mistakes mean that there are flaws in the software itself. These are called defects or sometimes bugs or faults. Remember, the software is not just the code; check the definition of software again to remind yourself. When the software code has been built, it is executed, and any defects may then cause the system to fail to do what it should do (or to do something it shouldn't), causing a failure. Not all defects result in failures; some stay dormant in the code and we may never notice them.
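Here is a tiny, invented Python illustration of that chain: the mistake below is a defect sitting in the code, and it only becomes a visible failure when the code is executed with an input that triggers it.

```python
# error -> defect -> failure: the comparison below uses > instead of >=,
# so the (invented) requirement "18 counts as an adult" is violated.
def is_adult(age):
    return age > 18        # defect: should be age >= 18

print(is_adult(30))        # True  - the defect stays dormant
print(is_adult(18))        # False - the defect now causes a failure
```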

Wednesday, 11 November 2015

2. What is Testing


People have different definitions of testing. Some say testing is just about UI verification, and some say testing is just about finding defects. I have tried to capture what testing is really about in the 10 points below.


1. Testing is about quality

Testing is about providing a quality product to the customer

Quality in terms of usage

Quality in terms of look and feel

Quality in terms of data integrity

Quality in terms of security


2. Testing is about ideas

Any given application can be tested in many ways. If you ask around, each individual will propose a different approach and idea. As testers, we have to analyze these and pick the most suitable approach.


3. Testing is about thinking like a customer

This is one of those common sayings that we get to hear almost every day :).

When we test an application, we should always think from the point of view of the customer who will use the application. Walk through the flows that the customer would ideally perform on the application. Check whether the labels and the text for messages and warnings are user-friendly, so that the customer understands the issue, if any.


4. Testing is about coverage

More coverage means a better-quality product.

List and execute all the test combinations. Try to uncover the odd combinations that the customer is likely to try. Prepare a requirements traceability matrix; I am sure not everyone does this, but prepare one formally or informally. List all the boundary conditions and negative test cases, and prioritize all the test cases. Do at least one cycle of regression of Priority-1/Priority-2 test cases before completing the QA cycle. (A small boundary-value sketch follows.)
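As a small illustration of boundary conditions, here is a hypothetical Python sketch; the age rule and its limits are invented purely for this example.

```python
# Boundary-value tests for an (invented) age field that must accept
# values from 18 to 60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Just below, on, and just above each boundary.
boundary_cases = [
    (17, False),  # just below lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above lower boundary
    (59, True),   # just below upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above upper boundary
]

for age, expected in boundary_cases:
    actual = is_valid_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"

print("All boundary cases passed")
```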


5. Testing is about finding defects

A defect is described as a deviation of the actual result from the result expected by the requirements. This holds very true when testing against requirements, but we still find defects when checking negative scenarios or doing ad-hoc testing.
Defects should be raised as soon as they are found, with all the relevant data. People tend to skip raising defects on the assumption that they are minor or "just UI". Every valid defect that gets fixed adds to the quality of the product. (A small sketch of a defect record follows.)
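As a rough sketch of the "relevant data" worth capturing when raising a defect, here is a hypothetical Python structure; the field names and values are illustrative, not a standard schema.

```python
# A minimal, invented defect record.
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    defect_id: str
    summary: str
    severity: str                 # e.g. "Critical", "Major", "Minor"
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    environment: str = ""         # build, browser, OS, test data used

bug = DefectReport(
    defect_id="DEF-101",
    summary="Warning message is truncated on the payment page",
    severity="Minor",
    steps_to_reproduce=["Open payment page", "Submit an empty card number"],
    expected_result="Full validation message is shown",
    actual_result="Message is cut off after 20 characters",
    environment="Build 1.4.2, Chrome on Windows",
)
print(bug.summary, "-", bug.severity)
```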

6. Testing is about simplicity

There is no point in building an application which is highly complex and of no use. Rather, we should suggest a simple design that even a layman can use.

Suggest enhancements to the system. You can suggest improvements to the layout, labelling, button names and messages. Users will always prefer a system which is less complex, easy to use and understandable. The simpler, the better.


7. Testing is about collaboration

Testing is an activity which cannot be performed all alone.
It always has to be done in collaboration with other teams such as the requirements, design, development and process teams.
Imagine we raised a defect but could not convince another team to get it fixed even though it is valid; in that case we did not do our job completely. We should not be easily swayed by a justification from another team unless it is documented and reviewed by the stakeholders.



8. Testing is about documentation

Documentation plays a major role in the testing phase.
Document the test scenarios and test cases. Prepare a traceability matrix. Prepare a checklist of the test activities done and a checklist of the UI testing done. Capture all the screenshots/evidence.
These documents will be very useful in the future for reference, in case someone has to do a round of testing again. Document all the defects by whatever means you have, whether Microsoft Excel or a defect management tool. Document the test data, environment details, etc. as well. (A small traceability-matrix sketch follows.)
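A requirements traceability matrix can be as simple as a mapping from requirement IDs to the test cases that cover them. The sketch below uses invented IDs purely for illustration.

```python
# Map each requirement to the test cases that cover it, then flag gaps.
traceability = {
    "REQ-001 Login":          ["TC-01", "TC-02"],
    "REQ-002 Password reset": ["TC-03"],
    "REQ-003 Audit logging":  [],          # not covered yet
}

for requirement, test_cases in traceability.items():
    if test_cases:
        print(f"{requirement}: covered by {', '.join(test_cases)}")
    else:
        print(f"{requirement}: NOT COVERED - add test cases")
```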


9. Testing is about time management

A defect found late in the test cycle has a bigger impact on cost and time. The more critical defects we can uncover early in the test cycle, the more time we have to get them fixed and retested.


10. Testing is about attitude

This may be the last point, but it is surely not the least: attitude is a must for a good tester. You should have the right attitude, applied in the right way, to do your job right.


Monday, 9 November 2015

3. Importance of Software Testing



Let's take a look at the advantages of software testing in the Software Development Life Cycle:


Testing should be introduced in the early stages of the SDLC; the cost of fixing a bug is far higher when testing is not done early and bugs are found in later stages.
In today's competitive market only a quality product survives for long, so testing the application is a key factor of the SDLC in making sure a good-quality product is produced.
It is not possible to make a software application completely defect-free, but testing is still necessary.
Most importantly, the development environment is different from the testing environment, and the testing is done in a testing environment that is similar to the production environment.


In other words, while developing the application the developer may be using the Internet Explorer browser, but it is quite possible that the actual user is using a different browser. So during testing of the application, browser compatibility testing (depending on the client's browser requirements) finds any such issues so that they get cleared before moving into production. In this case the tester is playing the role of the end user.


After all, for the growth of any business, user satisfaction is the most important thing, and testing plays a key role in making this happen.

Saturday, 7 November 2015

5. Fundamental Test process

Testing is a process rather than a single activity. This process starts with test planning, then moves through designing test cases, preparing for execution and evaluating status, and continues until test closure. So, we can divide the activities within the fundamental test process into the following basic steps:

1) Planning and Control
2) Analysis and Design
3) Implementation and Execution
4) Evaluating exit criteria and Reporting
5) Test Closure activities

1) Planning and Control:

Test planning has the following major tasks:
i. To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy. (Test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform the PM, testers and developers about some key issues of the testing process. This includes the testing objectives, method of testing, total time and resources required for the project, and the testing environments.)
iv. To determine the required test resources like people, test environments, PCs, etc.
v. To schedule test analysis and design tasks, test implementation, execution and evaluation.
vi. To determine the exit criteria, we need to set criteria such as coverage criteria. (Coverage criteria specify the percentage of statements in the software that must be executed during testing. This will help us track whether we are completing test activities correctly. They will show us which tasks and checks we must complete for a particular level of testing before we can say that testing is finished. A small sketch of a coverage-based exit check follows.)
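As a minimal sketch of a coverage-based exit criterion, the Python snippet below assumes we already know how many statements were executed; in practice a coverage tool would measure this for us. The counts and the 80% threshold are illustrative values only.

```python
# Statement coverage as an exit criterion.
def statement_coverage(executed_statements, total_statements):
    return 100.0 * executed_statements / total_statements

REQUIRED_COVERAGE = 80.0   # illustrative threshold from the test plan

achieved = statement_coverage(executed_statements=452, total_statements=520)
print(f"Statement coverage: {achieved:.1f}%")
print("Exit criterion met" if achieved >= REQUIRED_COVERAGE else "Keep testing")
```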

Test control has the following major tasks:
i. To measure and analyze the results of reviews and testing.
ii. To monitor and document progress, test coverage and exit criteria.
iii. To provide information on testing.
iv. To initiate corrective actions.
v. To make decisions.

2) Analysis and Design:

Test analysis and Test Design has the following major tasks:
i. To review the test basis. (The test basis is the information we need in order to start the test analysis and create our own test cases. Basically it is the documentation on which test cases are based, such as requirements, design specifications, product risk analysis, architecture and interfaces. We can use the test basis documents to understand what the system should do once built.)
ii. To identify test conditions.
iii. To design the tests.
iv. To evaluate testability of the requirements and system.
v. To design the test environment set-up and identify any required infrastructure and tools.

3) Implementation and Execution:
During test implementation and execution, we take the test conditions and turn them into test cases and procedures and other testware such as scripts for automation, the test environment and any other test infrastructure. (A test case is a set of conditions under which a tester determines whether an application is working correctly or not.)
(Testware is a term for all the artifacts that serve in combination for testing the software, such as scripts, the test environment and any other test infrastructure, kept for later reuse.)

Test implementation has the following major tasks:
i. To develop and prioritize our test cases by using techniques and create test data for those tests. (In order to test a software application you need to enter some data for testing most of the features. Any such specifically identified data which is used in tests is known as test data.)
We also write instructions for carrying out the tests, which are known as test procedures.
We may also need to automate some tests using test harness and automated tests scripts. (A test harness is a collection of software and test data for testing a program unit by running it under different conditions and monitoring its behavior and outputs.)
ii. To create test suites from the test cases for efficient test execution.
(A test suite is a collection of test cases that are used to test a software program to show that it has some specified set of behaviours. A test suite often contains detailed instructions and information on the system configuration to be used during testing. Test suites are used to group similar test cases together; a small sketch appears after this list.)
iii. To implement and verify the environment.
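To make point ii above concrete, here is a minimal sketch of grouping test cases into a suite using Python's built-in unittest module; the function under test and the test data are invented for illustration.

```python
# Grouping related test cases into a test suite with unittest.
import unittest

def add(a, b):            # stand-in for the application under test
    return a + b

class AdditionTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

def build_suite():
    # Group similar test cases together for efficient execution.
    suite = unittest.TestSuite()
    suite.addTest(AdditionTests("test_positive_numbers"))
    suite.addTest(AdditionTests("test_negative_numbers"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```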

Test execution has the following major tasks:
i. To execute test suites and individual test cases following the test procedures.
ii. To re-execute the tests that previously failed in order to confirm a fix. This is known as confirmation testing or re-testing.
iii. To log the outcome of the test execution and record the identities and versions of the software under test. The test log is used for the audit trail. (A test log records which test cases were executed, in what order, who executed them and the status of each test case (pass/fail). These details are documented and called the test log.)
iv. To compare actual results with expected results.
v. Where there are differences between actual and expected results, to report the discrepancies as incidents. (A small sketch of logging outcomes follows this list.)
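A minimal sketch of tasks iii-v above: execute some cases, compare actual with expected results, log the outcome, and flag discrepancies as incidents. The function under test and the test cases are invented for illustration.

```python
# Execute test cases, compare actual vs expected, and build a test log.
def apply_discount(price):         # stand-in for the software under test
    return round(price * 0.9, 2)

test_cases = [
    ("TC-01", 100.00, 90.00),
    ("TC-02", 0.00, 0.00),
    ("TC-03", 19.99, 17.99),
]

test_log = []
for case_id, test_input, expected in test_cases:
    actual = apply_discount(test_input)
    status = "PASS" if actual == expected else "FAIL"
    test_log.append({"id": case_id, "expected": expected,
                     "actual": actual, "status": status})
    if status == "FAIL":
        print(f"{case_id}: discrepancy reported as an incident "
              f"(expected {expected}, got {actual})")

for entry in test_log:
    print(entry)
```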

4) Evaluating Exit criteria and Reporting:
Based on the risk assessment of the project, we set the criteria for each test level against which we will measure whether we have done "enough testing". These criteria vary from project to project and are known as exit criteria.
Exit criteria come into the picture when:
— A maximum number of test cases has been executed with a certain pass percentage.
— The bug rate falls below a certain level.
— The deadlines have been reached.
(A small sketch of such an exit-criteria check follows.)
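As a small sketch of such an exit-criteria check, the Python snippet below combines an execution percentage, a pass percentage and the number of open critical bugs; the thresholds are illustrative values that would come from the test plan.

```python
# Check whether the exit criteria from the test plan are met.
def exit_criteria_met(executed, total_planned, passed, open_critical_bugs,
                      min_execution_pct=95.0, min_pass_pct=90.0):
    execution_pct = 100.0 * executed / total_planned
    pass_pct = 100.0 * passed / executed if executed else 0.0
    return (execution_pct >= min_execution_pct
            and pass_pct >= min_pass_pct
            and open_critical_bugs == 0)

print(exit_criteria_met(executed=190, total_planned=200,
                        passed=180, open_critical_bugs=0))   # True
```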

Evaluating exit criteria has the following major tasks:
i. To check the test logs against the exit criteria specified in test planning.
ii. To assess if more tests are needed or if the exit criteria specified should be changed.
iii. To write a test summary report for stakeholders.

5) Test Closure activities:
Test closure activities are done when the software is delivered. Testing can also be closed for other reasons, such as:
When all the information needed from the testing has been gathered.
When a project is cancelled.
When a particular target is achieved.
When a maintenance release or update is done.

Test closure activities have the following major tasks:
i. To check which planned deliverables are actually delivered and to ensure that all incident reports have been resolved.
ii. To finalize and archive testware such as scripts, test environments, etc. for later reuse.
iii. To hand over the testware to the maintenance organization, who will support the software.
iv. To evaluate how the testing went and learn lessons for future releases and projects.

Friday, 6 November 2015

6. Types of testing

Test types are introduced as a means of clearly defining the objective of a certain test level for a program or project. A test type is focused on a particular test objective, which could be the testing of a function to be performed by the component or system; a non-functional quality characteristic, such as reliability or usability; the structure or architecture of the component or system; or something related to changes, i.e. confirming that defects have been fixed (confirmation testing or re-testing) and looking for unintended changes (regression testing). Depending on its objectives, testing will be organized differently. Hence there are four software test types:
  1. Functional testing
  2. Non-functional testing
  3. Structural/white-box testing
  4. Change-related testing (confirmation testing and regression testing)

1. Functional testing

In functional testing, the testing of the functions of a component or system is done. It refers to activities that verify a specific action or function of the code. Functional tests tend to answer questions like "can the user do this?" or "does this particular feature work?". The expected behaviour is typically described in a requirements specification or in a functional specification.
The techniques used for functional testing are often specification-based. Testing functionality can be done from two perspectives:
  • Requirement-based testing: In this type of testing the requirements are prioritized depending on the risk criteria and accordingly the tests are prioritized. This will ensure that the most important and most critical tests are included in the testing effort.
  • Business-process-based testing: In this type of testing, the scenarios involved in the day-to-day business use of the system are described. It uses knowledge of the business processes. For example, a personnel and payroll system may have a business process along the lines of: someone joins the company, the employee is paid on a regular basis and the employee finally leaves the company. (A small sketch of a specification-based functional test follows this list.)
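To illustrate specification-based functional testing, here is a minimal Python sketch; the shipping rule and its 100.00 boundary are an invented requirement used purely for this example.

```python
# The (invented) requirement: an order over 100.00 gets free shipping,
# otherwise shipping costs 5.00. The tests are derived from that spec.
def shipping_cost(order_total):      # stand-in for the function under test
    return 0.00 if order_total > 100.00 else 5.00

# Requirement-based cases, prioritized around the 100.00 boundary.
cases = [(100.00, 5.00), (100.01, 0.00), (50.00, 5.00), (250.00, 0.00)]

for order_total, expected in cases:
    actual = shipping_cost(order_total)
    assert actual == expected, f"order {order_total}: expected {expected}, got {actual}"

print("Functional checks against the specification passed")
```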

2. Non Functional testing
In non-functional testing, the quality characteristics of the component or system are tested. Non-functional refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security, e.g. how many people can log in at once. Like functional testing, non-functional testing is performed at all test levels.
Non-functional testing includes:
  • Functionality testing
  • Reliability testing
  • Usability testing
  • Efficiency testing
  • Maintainability testing
  • Portability testing
  • Baseline testing
  • Compliance testing
  • Documentation testing
  • Endurance testing
  • Load testing
  • Performance testing
  • Compatibility testing
  • Security testing
  • Scalability testing
  • Volume testing
  • Stress testing
  • Recovery testing
  • Internationalization testing and Localization testing
      • Functionality testing: Functionality testing is performed to verify that a software application performs and functions correctly according to design specifications. During functionality testing we check the core application functions, text input, menu functions and installation and setup on localized machines, etc.
      • Reliability testing: Reliability Testing is about exercising an application so that failures are discovered and removed before the system is deployed. The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer’s reliability requirements.
      • Usability testing: In usability testing, the tester basically tests the ease with which the user interfaces can be used. It tests whether the application or the product built is user-friendly or not.
 Usability testing includes the following five components:
      1. Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
      2. Efficiency: How fast can experienced users accomplish tasks?
      3. Memorability: When users return to the design after a period of not using it, does the user remember enough to use it effectively the next time, or does the user have to start over again learning everything?
      4. Errors: How many errors do users make, how severe are these errors and how easily can they recover from the errors?
      5. Satisfaction: How much does the user like using the system?
    • Efficiency testing: Efficiency testing tests the amount of code and testing resources required by a program to perform a particular function. Software test efficiency is the number of test cases executed per unit of time (generally per hour).
    • Maintainability testing: It basically assesses how easy it is to maintain the system, that is, how easy it is to analyze, change and test the application or product.
    • Portability testing: It refers to the process of testing the ease with which a computer software component or application can be moved from one environment to another, e.g. moving an application from Windows 2000 to Windows XP. This is usually measured in terms of the maximum amount of effort permitted. Results are measured in terms of the time required to move the software and complete the documentation updates.
    • Baseline testing: It refers to the validation of documents and specifications on which test cases would be designed. The requirement specification validation is baseline testing. 
    • Compliance testing: It is related to the IT standards followed by the company, and it is the testing done to find deviations from the company's prescribed standards.
    • Documentation testing: As per the IEEE, this covers documentation describing plans for, or results of, the testing of a system or component. Types include the test case specification, test incident report, test log, test plan, test procedure and test report. Hence the testing of all the above-mentioned documents is known as documentation testing.
    • Endurance testing: Endurance testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. For example, in software testing, a system may behave exactly as expected when tested for 1 hour but when the same system is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.
    • Load testing: A load test is usually conducted to understand the behaviour of the application under a specific expected load. Load testing is performed to determine a system's behaviour under both normal and peak conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation. E.g., if the number of users is increased, how much CPU and memory will be consumed, and what are the network and bandwidth response times? (A small load-test sketch appears after this list.)
    • Performance testing: Performance testing is testing that is performed, to determine how fast some aspect of a system performs under a particular workload. It can serve different purposes like it can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what part of the system or workload causes the system to perform badly.
    • Compatibility testing: Compatibility testing is basically the testing of the application or the product built with the computing environment. It tests whether the application or the software product built is compatible with the hardware, operating system, database or other system software or not.
    • Security testing: Security testing basically checks whether the application or the product is secure or not. Can anyone come tomorrow and hack the system or log in to the application without any authorization? It is a process to determine that an information system protects data and maintains functionality as intended.
    • Scalability testing: It is the testing of a software application for measuring its capability to scale up in terms of any of its non-functional capability like load supported, the number of transactions, the data volume etc.
    • Volume testing: Volume testing refers to testing a software application or the product with a certain amount of data. E.g., if we want to volume test our application with a specific database size, we need to expand our database to that size and then test the application’s performance on it.
    • Stress testing: It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. It is a form of testing that is used to determine the stability of a given system. It puts greater emphasis on robustness, availability and error handling under a heavy load, rather than on what would be considered correct behaviour under normal circumstances. The goal of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
    • Recovery testing: Recovery testing is done in order to check how quickly and how well the application can recover after it has gone through any type of crash or hardware failure. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is performed properly. For example, when an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection was lost. Or restart the system while a browser has a definite number of sessions open and check whether the browser is able to recover all of them or not.
    • Internationalization testing and Localization testing: Internationalization is a process of designing a software application so that it can be adapted to various languages and regions without any changes. Whereas Localization is a process of adapting internationalized software for a specific region or language by adding local specific components and translating text.
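Returning to the load testing point above, here is a minimal Python sketch of the idea: a number of concurrent "virtual users" hit the system under test and response times are measured. The system under test is simulated by a local function here; a real load test would call the actual application.

```python
# A toy load test: fire concurrent requests and measure response times.
import time
from concurrent.futures import ThreadPoolExecutor

def system_under_test(user_id):
    time.sleep(0.05)           # simulated processing time
    return f"response for user {user_id}"

def timed_request(user_id):
    start = time.perf_counter()
    system_under_test(user_id)
    return time.perf_counter() - start

VIRTUAL_USERS = 50
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    durations = list(pool.map(timed_request, range(VIRTUAL_USERS)))

print(f"requests: {len(durations)}")
print(f"average response time: {sum(durations) / len(durations):.3f}s")
print(f"worst response time:   {max(durations):.3f}s")
```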
3. Structural Testing
  • The structural testing is the testing of the structure of the system or component.
  • Structural testing is often referred to as ‘white box’ or ‘glass box’ or ‘clear-box testing’ because in structural testing we are interested in what is happening ‘inside the system/application’.

  • In structural testing the testers are required to have the knowledge of the internal implementations of the code. Here the testers require knowledge of how the software is implemented, how it works.
  • During structural testing the tester concentrates on how the software does what it does. For example, a structural technique may be interested in how loops in the software are exercised: different test cases may be derived to execute a loop once, twice, and many times. This can be done regardless of the functionality of the software (see the sketch after this list).
  • Structural testing can be used at all levels of testing. Developers use structural testing in component testing and component integration testing, especially where there is good tool support for code coverage. Structural testing is also used in system and acceptance testing, but the structures are different. For example, the coverage of menu options or major business transactions could be the structural element in system or acceptance testing.
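As a small sketch of the loop-focused structural idea described above, the Python snippet below derives test cases from how many times they make the loop execute, regardless of what the function means to the business; the function itself is invented for illustration.

```python
# Structural (white-box) view: pick inputs by how they exercise the loop.
def total_price(item_prices):
    total = 0.0
    for price in item_prices:   # the loop we want to exercise
        total += price
    return total

structural_cases = [
    ([], 0.0),                  # loop body executes zero times
    ([9.99], 9.99),             # loop body executes once
    ([1.0, 2.0], 3.0),          # loop body executes twice
    ([1.0] * 100, 100.0),       # loop body executes many times
]

for prices, expected in structural_cases:
    actual = total_price(prices)
    assert abs(actual - expected) < 1e-9, f"{prices!r}: expected {expected}, got {actual}"

print("Loop exercised zero, one, two and many times")
```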



Thursday, 5 November 2015

7. Life cycle of testing


SOFTWARE TESTING LIFE CYCLE (STLC)

Software Testing Life Cycle (STLC) defines the steps/stages/phases in the testing of software. However, there is no single fixed STLC standard; it varies from organization to organization and from project to project.
Nevertheless, the Software Testing Life Cycle, in general, comprises the following phases:

Software Testing Life Cycle


Phase: Requirements/Design Review
Activity: You review the software requirements/design (well, if they exist).
Deliverables:
  • 'Review Defect' Reports
Necessity: Curiosity

Phase: Test Planning
Activity: Once you have gathered a general idea of what needs to be tested, you 'plan' for the tests.
Deliverables:
  • Test Plan
  • Test Estimation
  • Test Schedule
Necessity: Farsightedness

Phase: Test Designing
Activity: You design/detail your tests on the basis of the detailed requirements/design of the software (sometimes, on the basis of your imagination).
Deliverables:
  • Test Case / Test Data
  • Requirements Traceability Matrix
Necessity: Creativity

Phase: Test Environment Setup
Activity: You set up the test environment (server/client/network, etc.) with the goal of replicating the end-users' environment.
Deliverables:
  • Test Environment
Necessity: Rich company

Phase: Test Execution
Activity: You execute your Test Cases/Scripts in the Test Environment to see whether they pass.
Deliverables:
  • Test Results (Incremental)
  • Defect Report
Necessity: Patience

Phase: Test Reporting
Activity: You prepare various reports for various stakeholders.
Deliverables:
  • Test Results (Final)
  • Test/Defect Metrics
  • Test Closure Report
  • Who Worked Late & on Weekends (WWLW) Report [depending on how fussy your Management is]
Necessity: Diplomacy


Note that the STLC phases mentioned above do not necessarily have to be in the order listed; some phases can sometimes run in parallel (For instance, Test Designing and Test Execution). And, in extreme cases, the phases might also be reversed (For instance, when there is Cursing prior to Testing).