The goal of this course is to study the problem of ensuring that a piece of code (describing software and/or hardware) works as expected.
The first part of the course covers testing, in which test stimuli are applied to the software/hardware and the produced results are observed. This means the code's outputs are observed as it runs, i.e., the underlying software/hardware is executed, simulated, or powered on. Based on the produced results it is possible to judge whether the test was successful (i.e., the expected result was produced) or failed. A single test is usually not enough to say whether the code works as expected or not. The course thus covers different strategies to design tests and improve test coverage, i.e., ways to increase the confidence that the various tests provide meaningful information about the tested code. The course also highlights different strategies to implement and deploy test frameworks.
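As a minimal illustration (not part of the course material), the following Python sketch shows a unit test: a stimulus is applied to a hypothetical function `absolute_value`, and the produced result is compared against the expected one to decide success or failure.

```python
import unittest

def absolute_value(x):
    # Hypothetical code under test: returns the absolute value of x.
    return -x if x < 0 else x

class AbsoluteValueTest(unittest.TestCase):
    def test_negative_input(self):
        # Stimulus: a negative input; expected result: its positive counterpart.
        self.assertEqual(absolute_value(-3), 3)

    def test_zero_input(self):
        # A second stimulus improves coverage; a single test alone says little.
        self.assertEqual(absolute_value(0), 0)

if __name__ == "__main__":
    unittest.main()
```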
The second part is concerned with a complementary approach to testing, called static program analysis. The idea here is to make statements about a given piece of code without explicitly running it. This is done by deducing information about the possible intermediate results, and finally the code's outputs, through abstractions. Reasoning about all possible outcomes the code may exhibit at once makes it possible to prove definitive statements about the code. Examples of such statements are that (1) the code never performs a division by zero, or (2) the output value produced always lies in the range from 2.3 to 45.
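To give a flavor of such reasoning, here is a simplified sketch (not the tooling used in the course) of an interval abstraction: instead of concrete numbers, every value is represented by a lower and an upper bound, and propagating these bounds through the code covers all possible concrete executions at once. The input ranges below are chosen so that the result matches the example statement above.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    # Abstract value: all concrete values between lo and hi (inclusive).
    lo: float
    hi: float

    def add(self, other):
        # Sound over-approximation of + over all pairs of concrete values.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

# Inputs known only by their ranges (hypothetical specification).
x = Interval(2.0, 40.0)
y = Interval(0.3, 5.0)

result = x.add(y)              # output lies in [2.3, 45.0] in every run
print(result)                  # Interval(lo=2.3, hi=45.0)
print(result.contains_zero())  # False: dividing by `result` can never fail
```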
- Lecturer: Florian Brandner
- Lecturer responsible for the teaching unit (UE): Ulrich Kuhne