
Software Testing

Beginning Software Testing

Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements as described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behavior of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure. One definition of testing is “the process of questioning a product in order to evaluate it”, where the “questions” are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester [1]. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is also used to connote the dynamic analysis of the product—putting the product through its paces. One therefore sometimes refers to reviews, walkthroughs, or inspections as “static testing”, whereas actually running the program with a given set of test cases in a given development stage is referred to as “dynamic testing”, even though formal review processes still form part of the overall testing scope.

Introduction

In general, software engineers distinguish software faults from software failures. In the case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure; it can also be described as an error in the semantics of a computer program. A fault becomes a failure when the exact computation conditions are met, one of them being that the faulty portion of the software executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.
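As a minimal illustration (hypothetical Python code, not drawn from any particular system), the function below contains a fault that only manifests as a failure under one specific input condition:

    # Hypothetical example: a latent fault that only becomes a failure
    # when a specific input condition is met.
    def average(values):
        # Fault: dividing by len(values) raises ZeroDivisionError when the
        # list is empty; for every non-empty list the function behaves as
        # the user expects, so the fault stays hidden.
        return sum(values) / len(values)

    print(average([2, 4, 6]))  # 4.0 -- the fault is present, but no failure occurs
    # average([])              # ZeroDivisionError -- the fault becomes a failure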

Software testing may be viewed as a sub-field of Software Quality Assurance, but it typically exists independently (and there may be no SQA areas in some companies). In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the code, or to deliver software faster.

Regardless of the methods used or level of formality involved, the desired result of testing is a level of confidence in the software so that the organization is confident that the software has an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.

A problem with software testing is that the number of defects in a software product can be very large, and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a system that is expected to function without faults for a certain length of time must have already been tested for at least that length of time. This has severe consequences for projects that aim to write long-lived, reliable software, since it is not usually commercially viable to test over the proposed length of time unless this is a relatively short period. A few days or a week would normally be acceptable, but any longer period would usually have to be simulated according to carefully prescribed start and end conditions.

A common practice of software testing is that it is performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and continue it as a continuous process until the project finishes.

This is highly problematic in terms of controlling changes to software: if faults or failures are found part way into the project, the decision to correct the software needs to be taken on the basis of whether or not these defects will delay the remainder of the project. If the software does need correction, this needs to be rigorously controlled using a version numbering system, and software testers need to be accurate in knowing that they are testing the correct version, and will need to re-test the part of the software wherein the defects were found. The correct start point needs to be identified for retesting. There are added risks in that new defects may be introduced as part of the corrections, and the original requirement can also change part way through, in which instance previous successful tests may no longer meet the requirement and will need to be re-specified and redone (part of regression testing). Clearly the possibilities for projects being delayed and running over budget are significant.

Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in regression testing suites to ensure that future updates to the software don’t repeat any of the known mistakes.

It is commonly believed that the earlier a defect is found the cheaper it is to fix it. This is reasonable based on the risk of any given defect contributing to or being confused with further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable and therefore any testing after that point cannot be relied on even if there are no further actual software defects.

Relative cost to fix a defect, depending on when it was introduced and when it was detected [2]:

                      Time Detected
Time Introduced   Requirements  Architecture  Construction  System Test  Post-Release
Requirements      1             3             5-10          10           10-100
Architecture      -             1             10            15           25-100
Construction      -             -             1             10           10-25

In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a “test-driven software development” model. In this process unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then as code is written it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed.
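As a sketch of this test-first workflow (the leap_year function and its tests are illustrative, not from any real project), a unit test is written before the code it exercises, fails at first, and passes once the code is written:

    import unittest

    # Step 1: write the tests first. They fail until leap_year() exists
    # and is implemented correctly.
    class TestLeapYear(unittest.TestCase):
        def test_divisible_by_4_is_leap(self):
            self.assertTrue(leap_year(2024))

        def test_century_is_not_leap(self):
            self.assertFalse(leap_year(1900))

        def test_divisible_by_400_is_leap(self):
            self.assertTrue(leap_year(2000))

    # Step 2: write just enough code to make the tests pass.
    def leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        unittest.main()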

Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).

The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[3] Although his attention was on breakage testing, it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. In 1988, Drs. Dave Gelperin and William C. Hetzel classified the phases and goals in software testing as follows:[4]

  • Until 1956: the debugging-oriented period, when testing was often associated with debugging and there was no clear difference between the two.
  • 1957-1978: the demonstration-oriented period, when debugging and testing were distinguished, and the goal was to show that the software satisfies its requirements.
  • 1979-1982: the destruction-oriented period, when the goal was to find errors.
  • 1983-1987: the evaluation-oriented period, when the intention was to provide product evaluation and quality measurement throughout the software lifecycle.
  • From 1988 on: the prevention-oriented period, when tests were meant to demonstrate that software satisfies its specification, to detect faults, and to prevent faults.

Dr. Gelperin chaired the IEEE 829-1989 (Test Documentation Standard) effort, and Dr. Hetzel wrote the book The Complete Guide to Software Testing. Both works were pivotal to today’s testing culture and remain a consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to develop High Impact Inspection Technology, which builds upon traditional inspections but utilizes a test-driven additive.

White box, black box, and grey box testing

White box testing and black box testing are terms used to describe the point of view a test engineer takes when designing test cases. Black box testing assumes an external view of the test object; one inputs data and sees only outputs from the test object. White box testing provides an internal view of the test object and its processes.

In recent years the term grey box testing has come into common usage. The typical grey box tester is permitted to set up or manipulate the testing environment, such as by seeding a database, and can view the state of the product after those actions, such as by performing an SQL query on the database to be certain of the values of the columns.

Grey box testing is used almost exclusively by client-server testers or others who use a database as a repository of information, but can also apply to a tester who has to manipulate input or configuration files directly, or perform testing like SQL injection. It can also be used by testers who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results. For example, testing a data warehouse implementation involves loading the target database with information, and verifying the correctness of data population and loading of data into the correct tables.
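For instance (a hypothetical sketch using SQLite in place of a production database), a grey box tester might seed the database, exercise the behaviour under test, and then query the tables directly to verify the resulting state:

    import sqlite3

    # Seed the test database with a known starting state.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")

    # Exercise the behaviour under test (done inline here; in practice this
    # would be a call into the product's API or user interface).
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")

    # Grey box verification: query the database directly to be certain
    # of the values of the columns after the action.
    rows = dict(conn.execute("SELECT id, balance FROM accounts"))
    assert rows == {1: 70, 2: 80}, rows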

Verification and validation

Software testing is used in association with verification and validation (V&V). Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, inspections, and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.

  • Verification: Are we doing the job right?
  • Validation: Have we done the right job?

Levels of testing

  • Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented.
  • Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
  • System testing tests an integrated system to verify that it meets its requirements.
  • System integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.
  • Acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed after system testing and before the software is put into production.
    • Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
    • Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to increase the feedback field to a maximal number of future users.

Although both alpha and beta are referred to as testing, they are in fact forms of use immersion. The rigors that are applied are often unsystematic, and many of the basic tenets of the testing process are not used. The alpha and beta periods provide insight into environmental and utilization conditions that can impact the software.

After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven’t unintentionally caused a regression of previous functionality. Regression testing can be performed at any or all of the above test levels. These regression tests are often automated.
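A common pattern (the function and issue number below are purely illustrative) is to add an automated test that reproduces a fixed defect, so the regression suite fails loudly if the defect is ever reintroduced:

    import unittest

    def parse_port(text):
        # Fixed defect: earlier versions crashed on surrounding whitespace.
        return int(text.strip())

    class RegressionTests(unittest.TestCase):
        def test_issue_42_port_with_whitespace(self):
            # Reproduces a previously reported failure; kept in the suite
            # so future changes cannot silently reintroduce it.
            self.assertEqual(parse_port(" 8080\n"), 8080)

    if __name__ == "__main__":
        unittest.main()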

Test cases, suites, scripts, and scenarios

A test case is a software testing document, which consists of an event, action, input, output, expected result, and actual result. Clinically defined (IEEE 829-1998), a test case is an input and an expected result. This can be as pragmatic as ‘for condition x your derived result is y’, whereas other test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (though often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result.

These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table.
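The fields described above can be made concrete as a simple record (a hypothetical layout for illustration, not the IEEE 829 schema itself):

    from dataclasses import dataclass, field

    @dataclass
    class TestCaseRecord:
        # Core of a test case as described above: an input and an
        # expected result, plus common optional fields.
        case_id: str
        requirement: str
        steps: list = field(default_factory=list)  # order of execution
        input_data: str = ""
        expected_result: str = ""
        actual_result: str = ""                    # filled in at execution time
        automated: bool = False

    tc = TestCaseRecord(
        case_id="TC-001",
        requirement="REQ-LOGIN-3",
        steps=["open login page", "submit valid credentials"],
        input_data="user=alice, password=correct-password",
        expected_result="user is redirected to the dashboard",
    )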

A test script is the combination of a test case, a test procedure, and test data. Initially the term referred to the work products created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
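In unittest terms, for example, a suite is literally an object that collects test cases so they can be run together (a minimal sketch with placeholder checks):

    import unittest

    class LoginTests(unittest.TestCase):
        def test_valid_login(self):
            self.assertTrue(True)  # placeholder for a real login check

    class LogoutTests(unittest.TestCase):
        def test_logout_clears_session(self):
            self.assertTrue(True)  # placeholder for a real logout check

    # The suite groups related test cases so they can be run as one unit.
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(LogoutTests))

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(suite)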

Collections of test cases are sometimes incorrectly termed a test plan. They might correctly be called a test specification. If sequence is specified, it can be called a test script, scenario, or procedure.

A sample testing cycle

Although testing varies between organizations, there is a cycle to testing:

  1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and under what parameters those tests will work.
  2. Test Planning: Test Strategy, Test Plan(s), Test Bed creation. Since many activities will be carried out during testing, a plan is needed.
  3. Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.
  4. Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
  5. Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  6. Retesting the Defects: Once reported defects are fixed, the tests that exposed them are run again to confirm the fixes.

Not all errors or defects reported must be fixed by a software development team. Some may be caused by errors in configuring the test software to match the development or production environment. Some defects can be handled by a workaround in the production environment. Others might be deferred to future releases of the software, or the deficiency might be accepted by the business user. There are yet other defects that may be rejected by the development team (of course, with due reason) if they deem them invalid or not worth fixing.

Code coverage

Code coverage is inherently a white box testing activity. The target software is built with special options or libraries and/or run under a special environment such that every function that is exercised (executed) in the program is mapped back to the function points in the source code. This process allows developers and quality assurance personnel to look for parts of a system that are rarely or never accessed under normal conditions (error handling and the like) and helps reassure test engineers that the most important conditions (function points) have been tested.

Test engineers can look at code coverage test results to help them devise test cases and input or configuration sets that will increase the code coverage over vital functions. Two common forms of code coverage used by testers are statement (or line) coverage and path (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches, or code decision points, were executed to complete the test. Both report a coverage metric, measured as a percentage.
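The difference between the two metrics can be seen on a small, hypothetical function: a single test can execute every statement while still leaving one branch of a decision untested:

    def classify(n):
        label = "small"
        if n > 100:      # decision point: the branch may be taken or not
            label = "large"
        return label

    # This one test executes every statement above (100% statement/line
    # coverage), but the branch where n <= 100 is never taken, so branch
    # (edge) coverage is only 50%.
    assert classify(500) == "large"

    # A second test covers the remaining branch, bringing edge coverage to 100%.
    assert classify(5) == "small"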

Generally, code coverage tools and libraries exact a performance, memory, or other resource cost that is unacceptable for normal operation of the software; thus they are only used in the lab. As one might expect, there are classes of software that cannot feasibly be subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing.

There are also some sorts of defects which are affected by such tools. In particular some race conditions or similar real time sensitive operations can be masked when run under code coverage environments; and conversely some of these defects may become easier to find as a result of the additional overhead of the testing code.

Code coverage may be regarded as a more up-to-date incarnation of debugging, in that the automated tools used to achieve statement and path coverage are often referred to as “debugging utilities”. These tools allow the program code under test to be observed on screen whilst the program is executing, and commands and keyboard function keys are available to allow the code to be “stepped” through literally line by line. Alternatively, it is possible to define pinpointed lines of code as “breakpoints”, which allow a large section of the code to be executed, then stop execution at that point and display that part of the program on screen. Judging where to put breakpoints is based on a reasonable understanding of the program indicating that a particular defect is thought to exist around that point.

The data values held in program variables can also be examined and, in some instances (with care), altered to try out “what if” scenarios. Clearly the use of a debugging tool is more the domain of the software engineer at the unit test level, and it is more likely that the software tester will ask the software engineer to perform this. However, it is useful for the tester to understand the concept of a debugging tool.
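In Python, for instance, the built-in debugger supports exactly this workflow (a minimal sketch; the function is illustrative):

    def total_price(prices, tax_rate):
        subtotal = sum(prices)
        breakpoint()  # execution stops here and drops into the pdb debugger
        # At the pdb prompt: 'n' steps to the next line, 'p subtotal' prints
        # a variable, 'subtotal = 90' alters it to try a "what if" scenario,
        # and 'c' continues execution.
        return subtotal * (1 + tax_rate)

    print(total_price([10, 20, 30], 0.2))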

Roles in software testing

Software testing can be done by dedicated software testers. Until the 1980s the term “software tester” was used generally, but later testing was also seen as a separate profession. Regarding the periods and the different goals in software testing (see D. Gelperin and W.C. Hetzel), different roles have been established: test lead/manager, tester, test designer, test automator/automation developer, and test administrator.

Participants of a testing team:

  1. Tester
  2. Developer
  3. Business Analyst
  4. Customer
  5. Information Service Management
  6. Test Manager
  7. Senior Organization Management
  8. Quality team

Quotes

  • “An effective way to test code is to exercise it at its natural boundaries.” — Brian Kernighan
  • “Program testing can be used to show the presence of bugs, but never to show their absence!” — Edsger Dijkstra
  • “Beware of bugs in the above code; I have only proved it correct, not tried it.” — Donald Knuth
  • “A relatively small number of causes will typically produce a large majority of the problems or defects (80/20 Rule).” — Pareto principle

