Unit - 4
Software Testing
Testing is the process of running a program with the intention of discovering errors. A program needs to be error-free for our applications to work well; successful testing helps remove errors from the program.
Software testing is the process of determining the accuracy and consistency of the software product or service under test. Its purpose is to verify whether the product meets the customer's specific prerequisites, needs, and expectations. Ultimately, testing executes a system or program to point out bugs, errors, or faults for a particular end goal.
Software testing is a way of verifying whether the actual software product meets the expected specifications and ensuring that the software product is free of defects. It requires the execution of software/system components to test one or more properties of interest using manual or automated methods. In comparison to actual specifications, the aim of software testing is to find mistakes, gaps or missing requirements.
Some describe software testing simply as White Box and Black Box testing. Software testing, in simple words, means verification of the Application Under Test (AUT).
Benefits of software testing
There are some pros of using software testing:
● Cost-effective: This is one of the main benefits of software testing. Testing every IT project on time saves money in the long term; bugs caught in the earlier stages of software testing cost less to fix.
● Security: People look for trustworthy products, and testing helps eliminate risks and vulnerabilities sooner.
● Product quality: Quality is a necessary prerequisite for any software product. Testing ensures that consumers receive a reliable product.
● Customer satisfaction: The main goal of every product is to satisfy its consumers, and UI/UX testing helps assure the best user experience.
Key takeaway:
● Software testing is the process of determining the accuracy and consistency of the software product or service under test.
● Some describe software testing simply as White Box and Black Box testing.
● Software testing, in simple words, means verification of the Application Under Test (AUT).
Black-box Testing
Black-box testing is also known as "behavioral testing"; it focuses on the functional requirements of the software and is performed at later stages of the testing process, unlike white-box testing, which takes place at an early stage. Black-box testing uses the functional requirements of a program to derive sets of input conditions that should be tested. It is not an alternative to white-box testing; rather, it is a complementary approach that uncovers a different class of errors.
Black-box testing emphasizes errors that fall under the following categories:
- Incorrect or missing functions
- Interface errors
- Errors in data structures or external database access
- Behavior or performance errors
- Initialization and termination errors.
Common black-box test design techniques include:
- Boundary value analysis: The input is divided into higher and lower end values. If these values pass the test, it is assumed that all values in between may pass too.
- Equivalence class testing: The input is divided into classes of similar values. If one element of a class passes the test, it is assumed that the whole class passes.
- Decision table testing: The decision table technique is one of the widely used test case design techniques for black-box testing. It is a systematic approach in which various input combinations and their respective system behaviour are captured in tabular form, which is why it is also known as a cause-effect table. It is used to pick test cases in a systematic manner; it saves testing time and gives good coverage of the testing area of the software application. The decision table technique is appropriate for functions that have a logical relationship between two or more inputs.
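A minimal sketch of boundary value analysis combined with equivalence class testing, assuming a hypothetical `grade(score)` function that accepts scores from 0 to 100 (the function name, ranges, and pass mark are illustrative, not from the text):

```python
# Hypothetical function under test: pass/fail grading of a 0-100 score.
def grade(score: int) -> str:
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Equivalence classes: invalid-low, fail (0-39), pass (40-100), invalid-high.
# Boundary values sit just below, at, and just above each class edge.
boundary_cases = {
    -1: "error", 0: "fail", 39: "fail",    # around the lower boundaries
    40: "pass", 100: "pass", 101: "error"  # around the pass/upper boundaries
}

for score, expected in boundary_cases.items():
    try:
        result = grade(score)
    except ValueError:
        result = "error"
    assert result == expected, f"score {score}: got {result}, want {expected}"
print("all boundary cases passed")
```

Each equivalence class contributes one representative value, and each boundary contributes the values on either side of it; no knowledge of the function's internals is needed.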
Advantages:
● More effective on larger units of code than glass box testing.
● Testers need no knowledge of implementation, including specific programming languages.
● Testers and programmers are independent of each other.
● Tests are done from a user's point of view.
● Will help to expose any ambiguities or inconsistencies in the specifications.
● Test cases can be designed as soon as the specifications are complete.
Disadvantages:
● Only a small number of possible inputs can actually be tested; testing every possible input stream would take nearly forever.
● Without clear and concise specifications, test cases are hard to design.
● There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
● May leave many program paths untested.
● Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
● Most testing related research has been directed toward glass box testing.
Key takeaway:
- In Black box testing the main focus is on the information domain.
- This technique exercises the input and output domain of the program to uncover errors in program, function, behavior and performance.
White-box Testing
● In this testing technique the internal logic of software components is tested.
● It is a test case design method that uses the control structure of the procedural design test cases.
● It is done in the early stages of software development.
● Using this testing technique a software engineer can derive test cases that ensure:
● All independent paths within a module have been exercised at least once.
● Exercise true and false both the paths of logical checking.
● Execute all the loops within their boundaries.
● Exercise internal data structures to ensure their validity.
Advantages:
● As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
● The other advantage of white box testing is that it helps in optimizing the code.
● It helps in removing the extra lines of code, which can bring in hidden defects.
● We can test the structural logic of the software.
● Every statement is tested thoroughly.
● Forces test developers to reason carefully about implementation.
● Approximate the partitioning done by execution equivalence.
● Reveals errors in "hidden" code.
Disadvantages:
● It does not ensure that the user requirements are fulfilled.
● As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
● It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.
● The tests may not be applicable in a real world situation.
● Cases omitted in the code could be missed out.
Key takeaway:
- It is a test case design method that uses the control structure of the procedural design test cases.
- It does not ensure that the user requirements are fulfilled.
Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
Unit Test Considerations
The tests that occur as part of unit tests are illustrated schematically in Figure below. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error handling paths are tested.
Fig 1: Unit Test
Tests of data flow across a module interface are required before any other test is initiated. If data does not enter and exit properly, all other tests are moot. In addition, local data structures should be exercised and the local impact on global data should be ascertained (if possible) during unit testing.
Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors.
Among the more common errors in computation are
● Misunderstood or incorrect arithmetic precedence,
● Mixed mode operations,
● Incorrect initialization,
● Precision inaccuracy,
● Incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled to one another (i.e., change of flow frequently occurs after a comparison). Test cases should uncover errors such as
- Comparison of different data types,
- Incorrect logical operators or precedence,
- Expectation of equality when precision error makes equality unlikely,
- Incorrect comparison of variables,
- Improper or nonexistent loop termination,
- Failure to exit when divergent iteration is encountered, and
- Improperly modified loop variables.
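The "expectation of equality when precision error makes equality unlikely" item is easy to demonstrate in a few lines of Python:

```python
# Comparing floats for exact equality is a classic comparison error.
total = 0.1 + 0.2
print(total == 0.3)              # False: binary floating point is inexact
print(abs(total - 0.3) < 1e-9)   # True: compare against a tolerance instead
```

A test case that checks for exact equality here would encode the bug rather than catch it; test cases should anticipate precision error in any floating-point comparison.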
Among the potential errors that should be tested when error handling is evaluated are
● Error description is unintelligible.
● Error noted does not correspond to error encountered.
● Error condition causes system intervention prior to error handling.
● Exception-condition processing is incorrect.
● Error description does not provide enough information to assist in the location of the cause of the error.
Boundary testing is the last (and probably most important) task of the unit test step. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, when the maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.
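A minimal illustration of the point: the hypothetical `find_max` below contains the classic boundary bug, and only a test case that places the maximum at the last position uncovers it:

```python
def find_max(values):
    # Deliberately buggy: the loop never examines the last element,
    # the classic error at the "nth element of an n-element array".
    best = values[0]
    for i in range(len(values) - 1):   # bug: misses index len(values) - 1
        if values[i] > best:
            best = values[i]
    return best

# A test with the maximum in the interior passes and masks the bug:
assert find_max([5, 1, 3]) == 5
# A boundary test (maximum at the last position) exposes it:
assert find_max([1, 3, 5]) == 3   # wrong! a correct version returns 5
```

Interior test data can pass forever without revealing the defect; the value at the boundary is what uncovers it.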
Unit Test Procedures
Unit testing is normally considered as an adjunct to the coding step. After source level code has been developed, reviewed, and verified for correspondence to component level design, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed earlier. Each test case should be coupled with a set of expected results.
Fig 2: Unit Test Environment
Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in Figure above. In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results. Stubs serve to replace modules that are subordinate (called by) the component to be tested.
A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing. Drivers and stubs represent overhead. That is, both are software that must be written (formal design is not commonly applied) but that is not delivered with the final software product. If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many components cannot be adequately unit tested with "simple" overhead software. In such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also used).
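A sketch of this unit test environment, with hypothetical names: `compute_price` is the component under test, `stub_get_tier` replaces its subordinate module, and the driver is a small "main program" that feeds test data and prints results:

```python
# Component under test (hypothetical): computes a discounted price by
# calling a subordinate module that looks up the customer's loyalty tier.
def compute_price(base, customer_id, get_tier):
    tier = get_tier(customer_id)               # subordinate call, injected
    discount = {"gold": 0.2, "silver": 0.1}.get(tier, 0.0)
    return base * (1 - discount)

# Stub: replaces the real tier-lookup module, does minimal work,
# prints verification of entry, and returns control to the caller.
def stub_get_tier(customer_id):
    print(f"stub_get_tier called with {customer_id}")
    return "gold"

# Driver: feeds test-case data to the component and prints the result.
result = compute_price(100.0, "C42", stub_get_tier)
print(f"price: {result}")                      # 80.0 with the gold stub
assert result == 80.0
```

Neither the stub nor the driver ships with the product; both exist only so the component can be exercised before its real neighbours are available.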
Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
Advantages of Unit Testing
● Can be applied directly to object code and does not require processing source code.
● Performance profilers commonly implement this measure.
Disadvantages of Unit Testing
● Insensitive to some control structures (number of iterations)
● Does not report whether loops reach their termination condition
● Statement coverage is completely insensitive to the logical operators (|| and &&).
Key takeaway:
- Unit testing focuses verification effort on the smallest unit of software design—the software component or module.
- The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
- Unit testing is normally considered as an adjunct to the coding step.
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design.
There is often a tendency to attempt non incremental integration; that is, to construct the program using a "big bang" approach. All components are combined in advance. The entire program is tested as a whole. And chaos usually results! A set of errors is encountered.
Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied.
Top-down Integration
Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Fig 3: Top down integration
Referring to Figure above, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
Then, the central and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.
The integration process is performed in a series of five steps:
● The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
● Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
● Tests are conducted as each component is integrated.
● On completion of each set of tests, another stub is replaced with the real component.
● Regression testing may be conducted to ensure that new errors have not been introduced. The process continues from step 2 until the entire program structure is built.
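The five steps can be sketched in miniature. In this hypothetical example the main control module reaches its subordinate through a registry, so a stub can be replaced with the real component and the tests rerun:

```python
# Hypothetical main control module: reaches its subordinate component
# through a registry, so a stub can be swapped for the real module.
def stub_validate(data):
    return True                      # step 1: stub always accepts

def real_validate(data):
    return bool(data.strip())        # actual subordinate component

subordinates = {"validate": stub_validate}

def main_control(data):
    return "accepted" if subordinates["validate"](data) else "rejected"

assert main_control("") == "accepted"        # test against the stub

subordinates["validate"] = real_validate     # steps 2-4: replace the stub
assert main_control("") == "rejected"        # step 5: regression retest
assert main_control("order #7") == "accepted"
```

Note how the same test is repeated after the replacement; the changed outcome for the empty input is exactly the kind of behaviour difference regression testing is meant to surface.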
The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
For example, consider a classic transaction structure in which a complex series of interactive inputs is requested, acquired, and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.
Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:
● Delay many tests until stubs are replaced with actual modules,
● Develop stubs that perform limited functions that simulate the actual module, or
● Integrate the software from the bottom of the hierarchy upward.
The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex.
Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
- Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
- A driver (a control program for testing) is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined moving upward in the program structure.
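A miniature sketch of these steps, with hypothetical components: two low-level functions form a cluster, and a driver coordinates test-case input and output:

```python
# Hypothetical low-level components combined into a cluster (a "build").
def parse_amount(text):
    return float(text)

def apply_tax(amount, rate=0.1):
    return round(amount * (1 + rate), 2)

# Driver: a small control program coordinating test-case input and output.
def cluster_driver(cases):
    for text, expected in cases:
        got = apply_tax(parse_amount(text))
        assert got == expected, f"{text}: got {got}, want {expected}"
    return "cluster ok"

print(cluster_driver([("100", 110.0), ("19.99", 21.99)]))
```

Because the cluster sits at the bottom of the hierarchy, everything it needs already exists; no stubs are required, only the driver, which is discarded once the cluster is combined upward.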
Fig 4: Bottom up integration
Integration follows the pattern illustrated in Figure above. Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
Key takeaway:
- Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
- Top-down integration testing is an incremental approach to construction of program structure.
- The top-down integration strategy verifies major control or decision points early in the test process.
- Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules.
Validation testing, performed by QA practitioners, determines whether the software meets the specifications, performs the functions for which it is intended, and meets the objectives and user needs of the organization. This sort of testing, as well as verification testing, is quite important. Validation is performed at the end of the development process, after verification is finished.
Thus, developers apply validation testing to ensure customer satisfaction. The aim is to verify and be satisfied in the product or system and to satisfy the customer's requirements. It also requires the approval of the programme from the end user.
As the software is checked, the purpose is to confirm the defects and bugs that have been found. Developers patch the glitches and bugs that are detected, and the program is then reviewed again to ensure that no bugs remain. In that way the quality of the software product scales up.
The objective of software testing is to assess software quality in terms of the number of defects found in the software, the number of tests run, and the portion of the system covered by the tests. If bugs or defects are detected with the aid of testing, they are reported and fixed by the development team. Once the bugs are patched, testing is carried out again to ensure that they are truly fixed and that no new defects have been introduced. The quality of the software improves over the complete cycle.
Validation testing process:
● Validation planning – to plan all the tasks that need to be included during validation.
● Defining requirements – to set targets and identify the validation criteria.
● Selecting a team – to select a capable and experienced team (including any third parties).
● Developing documents – to create a user specification document detailing the operating conditions.
● Estimation/evaluation – to test the program and present a validation report as per the specifications.
● Fixing bugs or incorporating changes – to adjust the program so that any errors detected during evaluation are removed.
Validation-Test Criteria
Validation is achieved through a series of black-box tests. The test plan and test procedures are designed to check whether:
● All requirements are satisfied.
● All behavioural characteristics are accomplished.
● All performance criteria are met.
● The documentation is accurate.
Configuration Review
● Check whether or not all software configuration elements have been properly created.
● This process is often referred to as an "audit".
Key takeaway:
● Black-box testing aims at functional requirements for a program to derive sets of input conditions which should be tested
● The quality of the software improves over the complete testing cycle.
● Developers apply validation testing to ensure customer satisfaction
● At the end of the production process, validation is performed and takes place after verification is finished
Software is only one element of a larger computer-based system. Ultimately, software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted. These tests fall outside the scope of the software process and are not conducted solely by software engineers. However, steps taken during software design and testing can greatly improve the probability of successful software integration in the larger system.
A classic system testing problem is "finger-pointing." This occurs when an error is uncovered, and each system element developer blames the other for the problem. Rather than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and
● Design error-handling paths that test all information coming from other elements of the system,
● Conduct a series of tests that simulate bad data or other potential errors at the software interface,
● Record the results of tests to use as "evidence" if finger-pointing does occur, and
● Participate in planning and design of system tests to ensure that software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.
Key takeaway:
- Software is only one element of a larger computer-based system.
- Software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted.
- System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.
In software engineering, debugging is the process of locating and fixing bugs in a program. In other words, it involves error detection, analysis and removal. This activity starts after the program fails to function properly and ends when the issue is fixed and the software tests successfully. Because errors must be resolved at all stages of debugging, it is considered a complex and iterative process.
Debugging process
Steps that are involved in debugging include:
● Identifying the issue and preparing a report.
● Assigning the report to a software engineer, who verifies that the defect is genuine.
● Detecting the defect using modelling, documentation, candidate-defect finding and checking, etc.
● Resolving the defect by making the required changes to the system.
● Validating the corrections.
Debugging strategies -
- Study the system: understanding the system helps the debugger build various representations of the system being debugged. The system is also analysed to detect recent changes made to the program.
- Backward analysis: trace the program backwards from the point where the failure message appears to locate the region of defective code, then study that region thoroughly to determine the cause of the defect.
- Forward analysis: trace the program forwards using breakpoints or print statements and examine the results at various points; the region where the wrong outputs first appear is the region on which to focus in order to locate the flaw.
- Experience-based debugging: use previous debugging experience with problems similar in nature; the effectiveness of this approach depends on the debugger's expertise.
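Forward analysis can be sketched with simple print-statement tracing (a debugger breakpoint would serve the same purpose). In this hypothetical example, `average` prints its intermediate values so the point where a wrong value first appears can be located:

```python
# Forward analysis with print-statement tracing (hypothetical example).
# A debugger breakpoint (e.g. via pdb) would serve the same purpose.
def average(values):
    total = 0
    for v in values:
        total += v
        print(f"after adding {v}: total={total}")  # trace point
    print(f"dividing by {len(values)}")            # trace point
    return total / len(values)

result = average([2, 4, 6])
print(result)  # 4.0; a wrong result is traced to where the values first diverge
```

Reading the trace forwards, the first point at which an intermediate value differs from what was expected marks the region containing the defect.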
Key takeaway:
● In software engineering, debugging is the process of locating and fixing bugs in a program.
● It involves error detection, analysis and removal.
Unit - 4
Software Testing
Testing is the method of running a programme with the intention of discovering mistakes. It needs to be error-free to make our applications work well. It will delete all the errors from the programme if testing is performed successfully.
The method of identifying the accuracy and consistency of the software product and service under test is software testing. It was obviously born to verify whether the commodity meets the customer's specific prerequisites, needs, and expectations. At the end of the day, testing executes a system or programme to point out bugs, errors or faults for a particular end goal.
Software testing is a way of verifying whether the actual software product meets the expected specifications and ensuring that the software product is free of defects. It requires the execution of software/system components to test one or more properties of interest using manual or automated methods. In comparison to actual specifications, the aim of software testing is to find mistakes, gaps or missing requirements.
Some like to claim software testing as testing with a White Box and a Black Box. Software Testing, in simple words, means Program Verification Under Evaluation (AUT).
Benefits of software testing
There are some pros of using software testing:
● Cost - effective: It is one of the main benefits of software checking. Testing every IT project on time allows you to save the money for the long term. It costs less to correct if the bugs were caught in the earlier stage of software testing.
● Security: The most vulnerable and responsive advantage of software testing is that people search for trustworthy products. It helps to eliminate threats and concerns sooner.
● Product - quality: It is a necessary prerequisite for any software product. Testing ensures that consumers receive a reliable product.
● Customer - satisfaction: The main goal of every product is to provide its consumers with satisfaction. The best user experience is assured by UI/UX Checking.
Key takeaway:
● The method of identifying the accuracy and consistency of the software product and service under test is software testing.
● Some like to claim software testing as testing with a White Box and a Black Box.
● Software Testing, in simple words, means Program Verification Under Evaluation.
It is also known as “behavioral testing” which focuses on the functional requirements of the software, and is performed at later stages of the testing process unlike white box which takes place at an early stage. Black-box testing aims at functional requirements for a program to derive sets of input conditions which should be tested. Black box is not an alternative to white-box, rather, it is a complementary approach to find out a different class of errors other than white-box testing.
Black-box testing is emphasizing on different set of errors which falls under following categories:
- Incorrect or missing functions
- Interface errors
- Errors in data structures or external database access
- Behavior or performance errors
- Initialization and termination errors.
- Boundary value analysis: The input is divided into higher and lower end values. If these values pass the test, it is assumed that all values in between may pass too.
- Equivalence class testing: The input is divided into similar classes. If one element of a class passes the test, it is assumed that all the class is passed.
- Decision table testing: Decision table technique is one of the widely used case design techniques for black box testing. This is a systematic approach where various input combinations and their respective system behaviour are captured in a tabular form. That’s why it is also known as a cause-effect table. This technique is used to pick the test cases in a systematic manner; it saves the testing time and gives good coverage to the testing area of the software application. Decision table technique is appropriate for the functions that have a logical relationship between two and more than two inputs.
Advantages:
● More effective on larger units of code than glass box testing.
● Testers need no knowledge of implementation, including specific programming languages.
● Testers and programmers are independent of each other.
● Tests are done from a user's point of view.
● Will help to expose any ambiguities or inconsistencies in the specifications.
● Test cases can be designed as soon as the specifications are complete.
Disadvantages:
● Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
● Without clear and concise specifications, test cases are hard to design.
● There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
● May leave many program paths untested.
● Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
● Most testing related research has been directed toward glass box testing.
Key takeaway:
- In Black box testing the main focus is on the information domain.
- This technique exercises the input and output domain of the program to uncover errors in program, function, behavior and performance.
White-box testing
● In this testing technique, the internal logic of software components is tested.
● It is a test case design method that uses the control structure of the procedural design to derive test cases.
● It is done in the early stages of software development.
● Using this technique, the software engineer can derive test cases that guarantee:
● All independent paths within a module have been exercised at least once.
● Both the true and false paths of every logical decision are exercised.
● All loops are executed within their boundaries.
● Internal data structures are exercised to ensure their validity.
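As a sketch, the goals above can be turned into white-box test cases for a small illustrative function (the function and the cases are hypothetical, chosen to show how the control structure drives case selection):

```python
def sum_positive(values):
    """Sum only the positive numbers in the list."""
    total = 0
    for v in values:          # loop: exercise with 0, 1, and many items
        if v > 0:             # decision: exercise both true and false paths
            total += v
    return total

# Test cases derived from the control structure, not the specification:
assert sum_positive([]) == 0            # loop body never entered
assert sum_positive([5]) == 5           # one iteration, branch true
assert sum_positive([-3]) == 0          # one iteration, branch false
assert sum_positive([1, -2, 3]) == 4    # many iterations, both branches
```

Together these four cases exercise every independent path, both outcomes of the decision, and the loop at its boundaries.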
Advantages:
● As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
● The other advantage of white box testing is that it helps in optimizing the code.
● It helps in removing the extra lines of code, which can bring in hidden defects.
● We can test the structural logic of the software.
● Every statement is tested thoroughly.
● Forces test developers to reason carefully about implementation.
● Approximate the partitioning done by execution equivalence.
● Reveals errors in "hidden" code.
Disadvantages:
● It does not ensure that the user requirements are fulfilled.
● As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
● It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.
● The tests may not be applicable in a real world situation.
● Cases omitted in the code could be missed out.
Key takeaway:
- It is a test case design method that uses the control structure of the procedural design to derive test cases.
- It does not ensure that the user requirements are fulfilled.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and of the errors they uncover is limited by the constrained scope established for unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
Unit Test Considerations
The tests that occur as part of unit tests are illustrated schematically in Figure below. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error handling paths are tested.
Fig 1: Unit Test
Tests of data flow across a module interface are required before any other test is initiated. If data does not enter and exit properly, all other tests are moot. In addition, local data structures should be exercised and the local impact on global data should be ascertained (if possible) during unit testing.
Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors.
Among the more common errors in computation are
● Misunderstood or incorrect arithmetic precedence,
● Mixed mode operations,
● Incorrect initialization,
● Precision inaccuracy,
● Incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled to one another (i.e., change of flow frequently occurs after a comparison). Test cases should uncover errors such as
- Comparison of different data types,
- Incorrect logical operators or precedence,
- Expectation of equality when precision error makes equality unlikely,
- Incorrect comparison of variables,
- Improper or nonexistent loop termination,
- Failure to exit when divergent iteration is encountered, and
- Improperly modified loop variables.
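One of these errors, expecting equality when precision error makes equality unlikely, can be demonstrated with a short sketch (both functions are invented for illustration):

```python
# A loop whose termination test expects exact floating-point equality.
def buggy_count():
    x, n = 0.0, 0
    while x != 1.0:           # precision error makes equality unlikely
        x += 0.1
        n += 1
        if n > 1000:          # safety guard: otherwise the loop never exits
            break
    return n

def fixed_count():
    x, n = 0.0, 0
    while x < 1.0 - 1e-9:     # tolerance-based comparison terminates reliably
        x += 0.1
        n += 1
    return n

# 0.1 has no exact binary representation, so ten additions give
# 0.9999999999999999; the buggy loop steps past 1.0 without ever
# being equal to it and only stops at the guard.
assert buggy_count() > 10
assert fixed_count() == 10
```

A test case comparing the loop count against the expected ten iterations uncovers the defect immediately.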
Among the potential errors that should be tested when error handling is evaluated are
● Error description is unintelligible.
● Error noted does not correspond to error encountered.
● Error condition causes system intervention prior to error handling.
● Exception-condition processing is incorrect.
● Error description does not provide enough information to assist in the location of the cause of the error.
Boundary testing is the last (and probably most important) task of the unit test step. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, when the maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.
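A minimal sketch of boundary-focused unit tests, using a hypothetical `find_max` routine (the function and cases are illustrative):

```python
def find_max(values):
    """Return the largest element; a classic site of boundary errors."""
    # A buggy variant might loop over range(len(values) - 1), silently
    # skipping the last element -- the "nth element of an n-element
    # array" failure described above.
    best = values[0]
    for i in range(len(values)):
        if values[i] > best:
            best = values[i]
    return best

# Boundary-focused cases: maximum at the first, middle, and last
# position, plus the minimum legal input size (a single element).
assert find_max([9, 2, 3]) == 9
assert find_max([2, 9, 3]) == 9
assert find_max([2, 3, 9]) == 9   # fails if the loop stops at n-1
assert find_max([7]) == 7
```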
Unit Test Procedures
Unit testing is normally considered as an adjunct to the coding step. After source level code has been developed, reviewed, and verified for correspondence to component level design, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed earlier. Each test case should be coupled with a set of expected results.
Fig 2: Unit Test Environment
Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in Figure above. In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results. Stubs serve to replace modules that are subordinate (called by) the component to be tested.
A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing. Drivers and stubs represent overhead. That is, both are software that must be written (formal design is not commonly applied) but that is not delivered with the final software product. If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many components cannot be adequately unit tested with "simple" overhead software. In such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also used).
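A driver and a stub for a hypothetical component might look like the following sketch (all names are invented for illustration; the subordinate tax-lookup module is assumed not yet to exist):

```python
# Component under test: computes a price using a subordinate
# tax-lookup module that is not yet available.
def price_with_tax(amount, tax_lookup):
    return round(amount * (1 + tax_lookup(amount)), 2)

# Stub: uses the subordinate module's interface, does minimal data
# manipulation, prints verification of entry, and returns control.
def tax_lookup_stub(amount):
    print("stub: tax_lookup called")
    return 0.10                        # fixed, predictable rate

# Driver: a minimal "main program" that feeds test case data to the
# component under test and checks the results.
def driver():
    cases = [(100.0, 110.0), (19.99, 21.99)]
    for amount, expected in cases:
        result = price_with_tax(amount, tax_lookup_stub)
        assert result == expected, (amount, result)
    return "all cases passed"

assert driver() == "all cases passed"
```

Neither the driver nor the stub ships with the product; both exist only so the component can be exercised in isolation.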
Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
Advantages of Unit Testing
● Can be applied directly to object code and does not require processing source code.
● Performance profilers commonly implement this measure.
Disadvantages of Unit Testing
● Insensitive to some control structures (number of iterations)
● Does not report whether loops reach their termination condition
● Statement coverage is completely insensitive to the logical operators (|| and &&).
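The last point, that statement coverage is blind to the operands of logical operators, can be shown with a tiny sketch (Python's `or` stands in for `||`; the function is hypothetical):

```python
def grant_access(is_admin, has_token):
    if is_admin or has_token:     # one compound condition
        return True
    return False

# A single test case executes every statement in the function...
assert grant_access(True, False) is True    # 100% statement coverage

# ...yet the has_token operand was never evaluated on its own, so a
# defect such as writing `and` instead of `or` could slip through.
# A condition-coverage suite adds cases exercising each operand:
assert grant_access(False, True) is True
assert grant_access(False, False) is False
```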
Key takeaway:
- Unit testing focuses verification effort on the smallest unit of software design—the software component or module.
- The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
- Unit testing is normally considered as an adjunct to the coding step.
Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design.
There is often a tendency to attempt non-incremental integration; that is, to construct the program using a "big bang" approach. All components are combined in advance. The entire program is tested as a whole. And chaos usually results! A set of errors is encountered.
Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied.
Top-down Integration
Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Fig 3: Top down integration
Referring to Figure above, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
Then, the central and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced. The process continues from step 2 until the entire program structure is built.
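The steps above can be sketched in miniature with a hypothetical two-level hierarchy, where a stub is later swapped for the real subordinate component (all names are invented):

```python
# Hypothetical control hierarchy: main_control -> read_input.
def stub_read_input():
    return "stub data"            # placeholder for the unfinished module

def real_read_input():
    return "real data"

def main_control(read_input):
    # The main control module is exercised first, with its
    # subordinate supplied as a parameter so it can be stubbed.
    data = read_input()
    return f"processed {data}"

# First pass: main control module tested against the stub.
assert main_control(stub_read_input) == "processed stub data"

# Later pass: the stub is replaced with the actual component and the
# same test is re-run (regression) before the next stub is replaced.
assert main_control(real_read_input) == "processed real data"
```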
The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
For example, consider a classic transaction structure in which a complex series of interactive inputs is requested, acquired, and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.
Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:
● Delay many tests until stubs are replaced with actual modules,
● Develop stubs that perform limited functions that simulate the actual module, or
● Integrate the software from the bottom of the hierarchy upward.
The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex.
Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
- Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
- A driver (a control program for testing) is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined moving upward in the program structure.
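A minimal sketch of a bottom-up cluster and its test driver (the component names and the sub-function are hypothetical):

```python
# Two low-level components combined into a cluster that performs a
# specific sub-function: turning a raw record into a total amount.
def parse_record(line):
    return line.strip().split(",")

def to_amounts(fields):
    return [float(f) for f in fields]

# Driver: a control program that coordinates test case input and
# output for the cluster; it is removed once the cluster is
# integrated with the module above it.
def cluster_driver():
    fields = parse_record(" 10.0,2.5 \n")
    amounts = to_amounts(fields)
    return sum(amounts)

assert cluster_driver() == 12.5
```

Because the components sit at the bottom of the hierarchy, everything they call already exists, so no stubs are needed.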
Fig 4: Bottom up integration
Integration follows the pattern illustrated in Figure above. Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
Key takeaway:
- Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
- Top-down integration testing is an incremental approach to construction of program structure.
- The top-down integration strategy verifies major control or decision points early in the test process.
- Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules.
Validation Testing
Validation testing, performed by QA practitioners, determines whether the software meets the specifications, performs the functions for which it is intended, and meets the objectives and user needs of the organization. This sort of testing, like verification testing, is quite important. Validation is performed at the end of the development process, after verification is finished.
Thus, developers apply validation testing to ensure customer satisfaction. The aim is to gain confidence in the product or system and to satisfy the customer's requirements. It also requires acceptance of the program by the end user.
As the software is checked, the purpose is to verify the defects and bugs found. When glitches and bugs are detected, developers patch them. The program is then reviewed again to ensure that no bugs are left. In that way, the quality of the software product scales up.
The objective of software testing is to assess software quality in terms of the number of defects found in the software, the number of tests run, and the amount of the system covered by the tests. If bugs or defects are detected with the aid of testing, they are reported and repaired by the development team. When the bugs are patched, testing is carried out again to ensure that they are truly fixed and that no new defects have been introduced. The quality of the program thus improves over the whole cycle.
Phases of the validation testing process:
● Validation Planning – To plan all the activities that need to be included during validation.
● Define Requirements – To set targets and define the validation criteria.
● Selecting a Team – To select a capable and experienced team (including any third party).
● Developing Documents – To create a user-specification document detailing the operating conditions.
● Estimation/Evaluation – To test the program and present a validation report as per the specifications.
● Fixing Bugs or Incorporating Changes – To adjust the program so that any errors detected during evaluation are removed.
Validation-Test Criteria
Validation is achieved through a series of black-box tests. The purpose of the test plan and test procedure is to check whether:
● All functional requirements are satisfied.
● All behavioural characteristics are achieved.
● All performance requirements are attained.
● The documentation is correct.
Configuration Review
● Check whether or not all software configuration elements have been properly created.
● This process is often referred to as an "audit".
Key takeaway:
● Black-box testing aims at functional requirements for a program to derive sets of input conditions which should be tested
● The quality of the program improves over the whole cycle.
● Developers apply validation testing to ensure customer satisfaction
● Validation is performed at the end of the development process, after verification is finished.
System Testing
Software is only one element of a larger computer-based system. Ultimately, software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted. These tests fall outside the scope of the software process and are not conducted solely by software engineers. However, steps taken during software design and testing can greatly improve the probability of successful software integration in the larger system.
A classic system testing problem is "finger-pointing." This occurs when an error is uncovered, and each system element developer blames the other for the problem. Rather than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and
● Design error-handling paths that test all information coming from other elements of the system,
● Conduct a series of tests that simulate bad data or other potential errors at the software interface,
● Record the results of tests to use as "evidence" if finger-pointing does occur, and
● Participate in planning and design of system tests to ensure that software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.
Key takeaway:
- Software is only one element of a larger computer-based system.
- Software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted.
- System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.
Debugging
In software engineering, debugging is the process of locating and fixing a bug in a program. In other words, it refers to error detection, analysis and removal. This activity starts when the program fails to function properly and ends with fixing the issue and successfully retesting the software. Because errors may need to be fixed at all levels, debugging is considered an extremely complex and iterative process.
Debugging process
Steps that are involved in debugging include:
● Identifying the issue and preparing a report.
● Assigning the report to a software engineer, who verifies that the defect is genuine.
● Locating the defect using modelling, documentation, candidate-defect identification and checking.
● Resolving the defect by making the required modifications to the system.
● Validating the correction.
Debugging strategies -
- Study the system to understand it. This helps debuggers build various representations of the system being debugged. The system is also analysed to detect recent changes made to the program.
- Backward analysis of the problem, which involves tracing the program backwards from the location of the failure message to identify the region of faulty code. That region is then studied thoroughly to determine the cause of the defect.
- Forward analysis of the program, which involves tracing the program forward using breakpoints or print statements and examining the results at various points. The region where wrong outputs appear is the region on which to concentrate to locate the flaw.
- Using past debugging experience with software whose problems were similar in nature. The effectiveness of this approach depends on the debugger's expertise.
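Forward analysis with print statements can be sketched as follows (the `average` function and its off-by-one defect are invented for illustration):

```python
# Forward analysis: instrument the program with print statements (or
# breakpoints) and study intermediate values to localize the fault.
def average(values):
    total = 0
    for v in values:
        total += v
        print(f"debug: after adding {v}, total={total}")  # checkpoint
    # The checkpoints show `total` is correct, so the defect must lie
    # below this point -- here, an off-by-one error in the divisor.
    return total / (len(values) - 1)      # BUG: should be len(values)

def average_fixed(values):
    return sum(values) / len(values)

assert average([2, 4, 6]) == 6.0          # wrong result: 12 / 2
assert average_fixed([2, 4, 6]) == 4.0    # correct after the fix
```

Each print checkpoint narrows the region of faulty code; once the intermediate sum is confirmed correct, only the final division remains suspect.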
Key takeaway:
● In software engineering, debugging is the process of locating and fixing a bug in a program.
● It refers to error detection, analysis and removal.
Unit - 4
Software Testing
Testing is the method of running a programme with the intention of discovering mistakes. It needs to be error-free to make our applications work well. It will delete all the errors from the programme if testing is performed successfully.
The method of identifying the accuracy and consistency of the software product and service under test is software testing. It was obviously born to verify whether the commodity meets the customer's specific prerequisites, needs, and expectations. At the end of the day, testing executes a system or programme to point out bugs, errors or faults for a particular end goal.
Software testing is a way of verifying whether the actual software product meets the expected specifications and ensuring that the software product is free of defects. It requires the execution of software/system components to test one or more properties of interest using manual or automated methods. In comparison to actual specifications, the aim of software testing is to find mistakes, gaps or missing requirements.
Some like to claim software testing as testing with a White Box and a Black Box. Software Testing, in simple words, means Program Verification Under Evaluation (AUT).
Benefits of software testing
There are some pros of using software testing:
● Cost - effective: It is one of the main benefits of software checking. Testing every IT project on time allows you to save the money for the long term. It costs less to correct if the bugs were caught in the earlier stage of software testing.
● Security: The most vulnerable and responsive advantage of software testing is that people search for trustworthy products. It helps to eliminate threats and concerns sooner.
● Product - quality: It is a necessary prerequisite for any software product. Testing ensures that consumers receive a reliable product.
● Customer - satisfaction: The main goal of every product is to provide its consumers with satisfaction. The best user experience is assured by UI/UX Checking.
Key takeaway:
● The method of identifying the accuracy and consistency of the software product and service under test is software testing.
● Some like to claim software testing as testing with a White Box and a Black Box.
● Software Testing, in simple words, means Program Verification Under Evaluation.
It is also known as “behavioral testing” which focuses on the functional requirements of the software, and is performed at later stages of the testing process unlike white box which takes place at an early stage. Black-box testing aims at functional requirements for a program to derive sets of input conditions which should be tested. Black box is not an alternative to white-box, rather, it is a complementary approach to find out a different class of errors other than white-box testing.
Black-box testing is emphasizing on different set of errors which falls under following categories:
- Incorrect or missing functions
- Interface errors
- Errors in data structures or external database access
- Behavior or performance errors
- Initialization and termination errors.
- Boundary value analysis: The input is divided into higher and lower end values. If these values pass the test, it is assumed that all values in between may pass too.
- Equivalence class testing: The input is divided into similar classes. If one element of a class passes the test, it is assumed that all the class is passed.
- Decision table testing: Decision table technique is one of the widely used case design techniques for black box testing. This is a systematic approach where various input combinations and their respective system behaviour are captured in a tabular form. That’s why it is also known as a cause-effect table. This technique is used to pick the test cases in a systematic manner; it saves the testing time and gives good coverage to the testing area of the software application. Decision table technique is appropriate for the functions that have a logical relationship between two and more than two inputs.
Advantages:
● More effective on larger units of code than glass box testing.
● Testers need no knowledge of implementation, including specific programming languages.
● Testers and programmers are independent of each other.
● Tests are done from a user's point of view.
● Will help to expose any ambiguities or inconsistencies in the specifications.
● Test cases can be designed as soon as the specifications are complete.
Disadvantages:
● Only a small number of possible inputs can actually be tested, to test every possible input stream would take nearly forever.
● Without clear and concise specifications, test cases are hard to design.
● There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
● May leave many program paths untested.
● Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
● Most testing related research has been directed toward glass box testing.
Key takeaway:
- In Black box testing the main focus is on the information domain.
- This technique exercises the input and output domain of the program to uncover errors in program, function, behavior and performance.
● In this testing technique the internal logic of software components is tested.
● It is a test case design method that uses the control structure of the procedural design test cases.
● It is done in the early stages of software development.
● Using this testing technique software engineer can derive test cases that:
● All independent paths within a module have been exercised at least once.
● Exercise true and false both the paths of logical checking.
● Execute all the loops within their boundaries.
● Exercise internal data structures to ensure their validity.
Advantages:
● As the knowledge of internal coding structure is prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
● The other advantage of white box testing is that it helps in optimizing the code.
● It helps in removing the extra lines of code, which can bring in hidden defects.
● We can test the structural logic of the software.
● Every statement is tested thoroughly.
● Forces test developers to reason carefully about implementation.
● Approximate the partitioning done by execution equivalence.
● Reveals errors in "hidden" code.
Disadvantages:
● It does not ensure that the user requirements are fulfilled.
● As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
● It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.
● The tests may not be applicable in a real world situation.
● Cases omitted in the code could be missed out.
Key takeaway:
- It is a test case design method that uses the control structure of the procedural design test cases.
- It does not ensure that the user requirements are fulfilled.
Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component- level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
Unit Test Considerations
The tests that occur as part of unit tests are illustrated schematically in Figure below. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error handling paths are tested.
Fig 1: Unit Test
Tests of data flow across a module interface are required before any other test is initiated. If data does not enter and exit properly, all other tests are moot. In addition, local data structures should be exercised and the local impact on global data should be ascertained (if possible) during unit testing.
Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors.
Among the more common errors in computation are
● Misunderstood or incorrect arithmetic precedence,
● Mixed mode operations,
● Incorrect initialization,
● Precision inaccuracy,
● Incorrect symbolic representation of an expression.
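As an illustration, a short sketch (with invented function names) shows how a misunderstood arithmetic precedence slips into a computation and how a unit test with known expected results exposes it:

```python
# Hypothetical function with a precedence bug: the author intended the
# average of a and b, but a + b / 2 is parsed as a + (b / 2).
def average_buggy(a, b):
    return a + b / 2          # bug: divides only b

def average_fixed(a, b):
    return (a + b) / 2        # intended meaning

# A unit test with an input where the two differ reveals the fault.
assert average_fixed(2, 4) == 3
assert average_buggy(2, 4) == 4   # wrong result exposes the precedence error
```

A test input chosen so that the correct and incorrect interpretations diverge (here, 2 and 4) is what makes the error visible.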
Comparison and control flow are closely coupled to one another (i.e., change of flow frequently occurs after a comparison). Test cases should uncover errors such as
- Comparison of different data types,
- Incorrect logical operators or precedence,
- Expectation of equality when precision error makes equality unlikely,
- Incorrect comparison of variables,
- Improper or nonexistent loop termination,
- Failure to exit when divergent iteration is encountered, and
- Improperly modified loop variables.
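The precision pitfall above ("expectation of equality when precision error makes equality unlikely") can be demonstrated in a few lines of Python; comparing floating-point values within a tolerance (e.g. `math.isclose`) avoids the trap:

```python
import math

# 0.1 + 0.2 is not exactly 0.3 in binary floating point, so a strict
# equality test in a branch or loop condition may never succeed.
x = 0.1 + 0.2
assert x != 0.3                # precision error makes equality unlikely
assert math.isclose(x, 0.3)    # compare within a tolerance instead
```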
Among the potential errors that should be tested when error handling is evaluated are
● Error description is unintelligible.
● Error noted does not correspond to error encountered.
● Error condition causes system intervention prior to error handling.
● Exception-condition processing is incorrect.
● Error description does not provide enough information to assist in the location of the cause of the error.
Boundary testing is the last (and probably most important) task of the unit test step. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, when the maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.
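A minimal boundary-testing sketch, assuming a hypothetical `grade` function whose specification allows scores from 0 to 100 with a pass mark of 40; the test values sit just below, at, and just above each boundary:

```python
def grade(score):
    """Map a 0-100 score to pass/fail (hypothetical module under test)."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Exercise values just below, at, and just above the boundaries.
assert grade(0) == "fail" and grade(39) == "fail"
assert grade(40) == "pass" and grade(100) == "pass"
for bad in (-1, 101):
    try:
        grade(bad)
        assert False, "expected ValueError at the boundary"
    except ValueError:
        pass
```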
Unit Test Procedures
Unit testing is normally considered as an adjunct to the coding step. After source level code has been developed, reviewed, and verified for correspondence to component level design, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed earlier. Each test case should be coupled with a set of expected results.
Fig 2: Unit Test Environment
Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in Figure above. In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results. Stubs serve to replace modules that are subordinate (called by) the component to be tested.
A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing. Drivers and stubs represent overhead. That is, both are software that must be written (formal design is not commonly applied) but that is not delivered with the final software product. If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many components cannot be adequately unit tested with "simple" overhead software. In such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also used).
Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
Advantages of Unit Testing
● Can be applied directly to object code and does not require processing source code.
● Performance profilers commonly implement this measure.
Disadvantages of Unit Testing
● Insensitive to some control structures (number of iterations)
● Does not report whether loops reach their termination condition
● Statement coverage is completely insensitive to the logical operators (|| and &&).
Key takeaway:
- Unit testing focuses verification effort on the smallest unit of software design—the software component or module.
- The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
- Unit testing is normally considered as an adjunct to the coding step.
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design.
There is often a tendency to attempt nonincremental integration; that is, to construct the program using a "big bang" approach. All components are combined in advance. The entire program is tested as a whole. And chaos usually results! A set of errors is encountered.
Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied.
Top-down Integration
Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Fig 3: Top down integration
Referring to Figure above, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
Then, the central and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.
The integration process is performed in a series of five steps:
● The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
● Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
● Tests are conducted as each component is integrated.
● On completion of each set of tests, another stub is replaced with the real component.
● Regression testing may be conducted to ensure that new errors have not been introduced. The process continues from step 2 until the entire program structure is built.
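The five steps above can be sketched with hypothetical modules: the main control module is first exercised against stubs, which are then replaced one at a time with real components, re-running the tests (regression) after each replacement:

```python
# Stubs standing in for subordinate modules.
def stub_read_input():      return "fixed input"
def stub_process(data):     return "stub result"

# Real components that will replace the stubs one at a time.
def real_read_input():      return "real input"
def real_process(data):     return data.upper()

# Main control module: dispatches to its subordinates.
def main_module(read_input, process):
    return process(read_input())

# Step 1: main control module tested with stubs only.
assert main_module(stub_read_input, stub_process) == "stub result"
# Step 2: one stub replaced with the real component; re-test.
assert main_module(real_read_input, stub_process) == "stub result"
# Step 3: remaining stub replaced; regression test the whole path.
assert main_module(real_read_input, real_process) == "REAL INPUT"
```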
The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
For example, consider a classic transaction structure in which a complex series of interactive inputs is requested, acquired, and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.
Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:
● Delay many tests until stubs are replaced with actual modules,
● Develop stubs that perform limited functions that simulate the actual module, or
● Integrate the software from the bottom of the hierarchy upward.
The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex.
Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
- Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
- A driver (a control program for testing) is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined moving upward in the program structure.
Fig 4: Bottom up integration
Integration follows the pattern illustrated in Figure above. Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
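A minimal Python sketch of the bottom-up steps, with invented component names: two low-level components are combined into a cluster performing one sub-function, and a throwaway driver coordinates test case input and output until the cluster is interfaced to the next module up:

```python
# Low-level (atomic) components.
def parse_amount(text):
    return float(text)

def apply_discount(amount):
    return amount * 0.5       # hypothetical half-price discount

# Cluster (build): the components combined into one sub-function.
def cluster_price(text):
    return apply_discount(parse_amount(text))

# Driver: a control program that coordinates test input and output;
# it is removed once the cluster is integrated upward.
def driver():
    for text, expected in [("100", 50.0), ("50", 25.0)]:
        assert cluster_price(text) == expected

driver()
```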
Key takeaway:
- Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
- Top-down integration testing is an incremental approach to construction of program structure.
- The top-down integration strategy verifies major control or decision points early in the test process.
- Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules.
Validation testing, performed by QA practitioners, determines whether the system meets its specifications, performs the functions for which it is intended, and satisfies the objectives and user needs of the organization. This sort of testing, like verification testing, is quite important. Validation is performed at the end of the development process, after verification is finished.
Thus, developers apply validation testing to ensure customer satisfaction. The aim is to build confidence in the product or system and to satisfy the customer's requirements. It also requires acceptance of the programme by the end user.
When the software is tested, the purpose is to confirm the defects and bugs that are found. When glitches and bugs are detected, developers fix them. The programme is then reviewed again to ensure that no bugs are left. In that way the quality of the software product scales up.
The objective of software testing is to assess software quality in terms of the number of defects found in the software, the number of tests run and the portion of the system covered by the tests. If bugs or defects are detected with the aid of testing, they are reported and repaired by the development team. When the bugs are fixed, testing is carried out again to ensure that they are truly fixed and that no new defects have been introduced into the programme. In this way the quality of the programme improves over the whole testing period.
Validation testing process:
● Validation Planning – To plan all the tasks that need to be included during testing.
● Define Requirements – To set targets and define the testing criteria.
● Selecting a Team – To select a capable and experienced team (third parties included).
● Developing Documents – To create a user-specification document detailing the operating conditions.
● Estimation/Evaluation – To test the programme and present a validation report as per the specifications.
● Fixing Bugs or Incorporating Changes – To adjust the programme so that any errors detected during evaluation are removed.
Validation-Test Criteria
Validation is achieved through a series of black-box tests. The object of the test plan and test procedures is to check:
● Whether or not all functional requirements are satisfied.
● Whether or not all behavioural characteristics are achieved.
● Whether or not all performance requirements are attained.
● Whether or not the documentation is correct.
Black-box testing
It is also known as "behavioural testing", which focuses on the functional requirements of the software and is performed at later stages of the testing process, unlike white-box testing, which takes place at an early stage. Black-box testing uses the functional requirements of a program to derive sets of input conditions that should be tested. Black-box testing is not an alternative to white-box testing; rather, it is a complementary approach that finds a different class of errors than white-box testing.
Black-box testing is emphasizing on different set of errors which falls under following Categories:
a) Incorrect or missing functions
b) Interface errors
c) Errors in data structures or external database access
d) Behaviour or performance errors
e) Initialization and termination errors.
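For instance, black-box test cases are derived purely from the specification, never from the code. Assuming a hypothetical requirement that valid usernames are 3–12 alphanumeric characters, the input classes come directly from that requirement:

```python
import re

# Hypothetical function tested purely through its specification:
# valid usernames are 3-12 alphanumeric characters.
def is_valid_username(name):
    return bool(re.fullmatch(r"[A-Za-z0-9]{3,12}", name))

# Input conditions derived from the requirements, not the code.
assert is_valid_username("alice")          # valid class
assert not is_valid_username("ab")         # too short
assert not is_valid_username("a" * 13)     # too long
assert not is_valid_username("bad name!")  # illegal characters
```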
Configuration Review
● Check whether or not all software configuration elements have been properly created.
● This process is often referred to as "audit"
Key takeaway:
● Black-box testing aims at functional requirements for a program to derive sets of input conditions which should be tested
● The quality of the programme improves over the whole testing period.
● Developers apply validation testing to ensure customer satisfaction
● Validation is performed at the end of the development process, after verification is finished.
Software is the only one element of a larger computer-based system. Ultimately, software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted. These tests fall outside the scope of the software process and are not conducted solely by software engineers. However, steps taken during software design and testing can greatly improve the probability of successful software integration in the larger system.
A classic system testing problem is "finger-pointing." This occurs when an error is uncovered, and each system element developer blames the other for the problem. Rather than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and
● Design error-handling paths that test all information coming from other elements of the system,
● Conduct a series of tests that simulate bad data or other potential errors at the software interface,
● Record the results of tests to use as "evidence" if finger-pointing does occur, and
● Participate in planning and design of system tests to ensure that software is adequately tested.
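The first two recommendations above can be sketched as defensive checks at a hypothetical sensor interface: every value crossing the software boundary is validated, and simulated bad data is exercised and recorded:

```python
# Hypothetical interface to an external sensor: the software element
# validates all information arriving from the other system element.
def read_temperature(raw):
    if not isinstance(raw, (int, float)):
        raise TypeError(f"non-numeric data from sensor: {raw!r}")
    if not -50 <= raw <= 150:
        raise ValueError(f"out-of-range reading from sensor: {raw}")
    return float(raw)

# Simulate bad data at the interface and record each rejection.
for bad in ("N/A", 999):
    try:
        read_temperature(bad)
    except (TypeError, ValueError) as e:
        print("rejected:", e)   # results kept as "evidence"
```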
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.
Key takeaway:
- Software is the only one element of a larger computer-based system.
- Software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted.
- System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.
In software engineering, debugging is the process of locating and fixing a bug in the programme. In other words, it refers to error detection, examination and elimination. This activity starts after the programme fails to function properly and ends when the issue has been fixed and the software tested successfully. As errors need to be fixed at all levels of debugging, it is considered an extremely complex and iterative process.
Debugging process
Steps that are involved in debugging include:
● Identification of the issue and preparation of a report.
● Assigning the report to a software engineer so that the defect can be verified as genuine.
● Defect detection using modelling, documentation, candidate-defect finding and checking, etc.
● Defect resolution by making the required modifications to the system.
● Validation of the corrections.
Debugging strategies -
- Study the system in depth in order to understand it. This helps the debugger build various representations of the system being debugged. The system is also analysed to detect recent changes made to the programme.
- Backward analysis of the problem, which involves tracing the programme backwards from the location of the failure message to identify the region of faulty code. A detailed study of that region is then conducted to determine the cause of the defect.
- Forward analysis of the programme, which involves tracing the programme forward using breakpoints or print statements and examining the results at various points. The region where the wrong outputs first appear is the region on which the search should be centred.
- Using past experience with software that had similar problems. The effectiveness of this approach depends on the debugger's expertise.
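A small sketch of forward analysis: a (hypothetical) programme is instrumented with print statements at successive checkpoints, and the first checkpoint showing a wrong value marks the region on which to centre the search; a debugger breakpoint (e.g. `pdb.set_trace()`) serves the same purpose:

```python
# Forward analysis: print intermediate values at checkpoints and
# inspect where the first wrong output appears.
def total_with_tax(prices, rate):
    subtotal = sum(prices)
    print("subtotal =", subtotal)   # checkpoint 1: is this correct?
    tax = subtotal * rate
    print("tax =", tax)             # checkpoint 2: first wrong value here
    return subtotal + tax           # would localise the fault to this region

total_with_tax([10, 20], 0.5)
```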
Key takeaway:
● Debugging is the process of locating and fixing a bug in the programme.
● It applies to error detection, examination and elimination.
References:
- Software Engineering: Theory and Practice (Fourth Edition) – Pfleeger
- Software Engineering – Mishra/Mohanty (Pearson Education)
- Software Engineering – Schaum's Series (TMH)
- Software Project Management – Sanjay Mohapatra (Cengage Learning)