14.- Testability
In the context of software, testability is the degree to which a software artifact (i.e., a software system, software module, requirements, or design document) supports testing in a given test context. Testability is not an intrinsic property of a software artifact and cannot be measured directly (unlike, say, software size). Instead, testability is an extrinsic property that results from the interdependency between the software to be tested and the test goals, test methods used, and test resources available (i.e., the test context).
Testability can be understood as visibility and control (illustrated by the code sketch after this list):
- Visibility is our ability to observe the states, outputs, resource usage, and other side effects of the software under test.
- Control is our ability to apply inputs to the software under test or place it in specified states.
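As an illustration, the following C++ sketch shows a class shaped by these two ideas. All names here are invented for this example and do not correspond to any actual receiver code: control is provided by methods that inject inputs and force the object into a known state, and visibility by accessors that expose internal state for inspection.

```cpp
#include <vector>

// Hypothetical tracking loop, sketched only to illustrate control and
// visibility; a real receiver class is far more involved.
class TrackingLoop {
 public:
  // Control: inputs can be injected directly, with no live signal source.
  void ProcessSamples(const std::vector<float>& samples) {
    // Toy update; a real loop would correlate and update its filters here.
    if (!samples.empty()) lock_detected_ = true;
  }

  // Control: the object can be placed in a specified state before a test.
  void SetCarrierPhase(double phase_rad) { carrier_phase_rad_ = phase_rad; }

  // Visibility: internal states and side effects are observable.
  double carrier_phase() const { return carrier_phase_rad_; }
  bool lock_detected() const { return lock_detected_; }

 private:
  double carrier_phase_rad_ = 0.0;
  bool lock_detected_ = false;
};
```

A class without such setters and accessors forces tests to infer internal state indirectly, which makes them slower and more brittle.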
Design for Testability is a concept that has been used in electronic hardware design for over 50 years, driven by the obvious fact that, in order to test an integrated circuit both during the design stage and later in production, it has to be designed so that it can be tested. The test points, procedures, and testing equipment requirements must be taken into account in the design, since testability cannot be added at a later stage: by then, the circuit is already in silicon. Testability is a critical non-functional requirement that affects almost every aspect of electronic hardware design. In a similar way, complex software systems require testing both during design and production, and the same principles apply. A software-defined GNSS receiver must be designed for testability.
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions but can only establish that it does not function properly under specific conditions.
Such design for testability also has an impact on the system architecture, since it typically drives a clear separation of concerns, a layered architecture or service orientation, and high cohesiveness of entities in the code. Tests behave very much like system “clients”: unit tests imitate the behavior of a corresponding client class (or classes) invoking the target class's methods; component tests imitate the behavior of client components; functional tests imitate the end user [1]. Designing for testability implies providing clear and understandable interfaces between classes, components, services, and, ultimately, between the user interface and the rest of the system. Design patterns such as façade, gateway, or observer foster testability [2], [3]. Logging, abstract interfaces, verbose output modes, and a flexible configuration system are other desirable features that enable testability.
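As a minimal sketch of how an abstract interface enables testability (the class names below are hypothetical, not taken from any existing receiver): code that consumes samples depends on the abstraction rather than on concrete RF hardware, so a test can substitute a deterministic fake.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical abstract interface: consumers of samples depend on this
// abstraction, not on a concrete RF front-end.
class SignalSource {
 public:
  virtual ~SignalSource() = default;
  virtual std::vector<float> ReadSamples(std::size_t n) = 0;
};

// Test double: a synthetic source whose output is fully under the test's
// control, making tests deterministic and repeatable.
class FakeSignalSource : public SignalSource {
 public:
  explicit FakeSignalSource(float value) : value_(value) {}
  std::vector<float> ReadSamples(std::size_t n) override {
    return std::vector<float>(n, value_);
  }

 private:
  float value_;
};
```

With this arrangement, acquisition and tracking code can be exercised with fully controlled, repeatable inputs, without a signal generator or a live RF front-end.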
Software tests have the following desirable features [4] (a minimal example follows the list):
- Tests should be independent and repeatable.
- Tests should be well organized and reflect the structure of the tested code.
- Tests should be portable and reusable.
- When tests fail, they should provide as much information about the problem as possible.
- The testing framework should liberate test writers from housekeeping chores and let them focus on the test content.
- Tests should be fast.
- Tests should be deterministic.
- Tests should be automatic: they should run with no human intervention.
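For instance, a test written with Google Test, a C++ framework that embodies many of these properties, might look as follows. It reuses the hypothetical FakeSignalSource from the sketch above:

```cpp
#include <gtest/gtest.h>
// FakeSignalSource as sketched in the previous section.

// Independent and repeatable: each TEST gets fresh state; on failure, the
// framework reports both the expected and the actual values.
TEST(FakeSignalSourceTest, ProducesRequestedNumberOfSamples) {
  FakeSignalSource source(0.5f);
  const auto samples = source.ReadSamples(100);
  EXPECT_EQ(samples.size(), 100u);
  EXPECT_FLOAT_EQ(samples.front(), 0.5f);
}

// Automatic: the whole suite runs with no human intervention.
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```

Because the suite runs unattended and reports results in a machine-readable way, it can be wired into a continuous integration system.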
Indicators of Testability
What follows is a list of possible testability indicators for a software-defined GNSS receiver (a sketch of how the TTFF indicator could be measured is given after the list):
- Unit / component / integration testing:
  - Availability of a software testing framework.
  - Number of available unit / component / integration tests.
  - Availability of an automated building tool.
- System testing:
  - Time To First Fix (TTFF) testability:
    - Possibility to set up the receiver in cold, warm, and hot starts.
  - Acquisition sensitivity testability:
    - Possibility to set up the receiver to acquire a single signal and report time-tagged \(C/N_0\) and acquisition status.
  - Tracking sensitivity testability:
    - Possibility to set up the receiver to track a single signal and report time-tagged \(C/N_0\) and tracking status.
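As an example of exercising the TTFF indicator, the sketch below measures TTFF for a given start mode. The Receiver interface is entirely hypothetical and stands in for whatever configuration mechanism a real receiver exposes:

```cpp
#include <chrono>
#include <iostream>

// Entirely hypothetical receiver interface, used only to illustrate the test.
struct Receiver {
  enum class StartMode { kCold, kWarm, kHot };
  void Start(StartMode mode) { mode_ = mode; }  // stub: would flush or keep aiding data
  bool HasFix() const { return true; }          // stub: a real receiver computes PVT
  StartMode mode_ = StartMode::kCold;
};

// Measures Time To First Fix for a given start mode.
std::chrono::milliseconds MeasureTTFF(Receiver& rx, Receiver::StartMode mode) {
  const auto t0 = std::chrono::steady_clock::now();
  rx.Start(mode);
  while (!rx.HasFix()) {
    // Poll until the first valid position fix is reported.
  }
  return std::chrono::duration_cast<std::chrono::milliseconds>(
      std::chrono::steady_clock::now() - t0);
}

int main() {
  Receiver rx;
  std::cout << "Cold-start TTFF: "
            << MeasureTTFF(rx, Receiver::StartMode::kCold).count() << " ms\n";
}
```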
References
1. A. Yakyma, Design for Testability: A Vital Aspect of the System Architect Role in SAFe, Scaled Agile, Inc., 2015.
2. E. Gamma, R. Helm, R. Johnson, J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley Professional, 1994.
3. M. Fowler, Patterns of Enterprise Application Architecture, Addison-Wesley Professional, 2002.
4. J. Whittaker, J. Arbon, J. Carollo, How Google Tests Software, Westford, Massachusetts: Pearson Education, 2012.