Logical Execution Time (LET)
Logical Execution Time (LET) refers to a temporal execution model in real-time systems in which the logical execution time of a task is decoupled from its actual physical execution time. A task reads all input values at a defined start time (release) and writes its output values only at a predefined end time (terminate), regardless of when the computation actually completes in between. To the outside world, the task therefore appears to take exactly its logical execution time, provided that its real runtime fits completely into this interval. This concept aims to make the timing behavior of embedded software more deterministic and predictable. LET is becoming increasingly important, particularly in safety-critical, deeply embedded applications such as in the automotive industry, as it guarantees temporal determinism and prevents undesirable fluctuations (jitter) in the execution of control functions. By abstracting the execution time from the specific hardware, LET enables stable timing behavior of software components and facilitates the integration and maintenance of complex embedded systems.[1][2]
Basics
Conventional approach: Last-Is-Best communication model
In traditional real-time systems, the Last-Is-Best (LIB) communication model is commonly used. In this approach, the most recently written values are read from and written to shared memory. Data consistency is ensured at task level by synchronization mechanisms (such as spinlocks), so that a component always reads a set of values of consistent age, e.g. temperature and speed sampled together. Under parallel processing, different components are executed on different cores or in different tasks.[3]
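For illustration, a minimal C sketch of the Last-Is-Best pattern, assuming a spinlock-protected shared structure; the signal set and all names are invented for this example:

```c
#include <stdatomic.h>

/* Illustrative shared signal set; names are hypothetical. */
typedef struct {
    float temperature;
    float speed;
} signals_t;

static signals_t shared;                     /* last written values ("last is best") */
static atomic_flag lock = ATOMIC_FLAG_INIT;  /* spinlock guarding the struct */

/* Writer task: publishes its newest results immediately. */
void sensor_task(float temp, float spd) {
    while (atomic_flag_test_and_set(&lock)) { /* spin */ }
    shared.temperature = temp;
    shared.speed = spd;
    atomic_flag_clear(&lock);
}

/* Reader task: always sees the most recent consistent pair,
   but HOW recent depends on scheduling, hence sampling jitter. */
void control_task(void) {
    signals_t local;
    while (atomic_flag_test_and_set(&lock)) { /* spin */ }
    local = shared;
    atomic_flag_clear(&lock);
    /* ... control law using local.temperature and local.speed ... */
    (void)local;
}
```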
One problem of the LIB model is sampling and response-time jitter. Under jitter, a component either accesses stale data or its control decisions become available too late; both effects degrade the performance of the control software. Another disadvantage is that the system no longer behaves deterministically: whether jitter occurs depends on the execution time of each task and on the system load. Since utilization and execution times vary, it is effectively random whether old or current data is used. The jitter problem grows with the number of cores, since more components compete simultaneously for access to shared memory. This is one reason why the LIB model scales poorly as the number of cores increases.
Logical Execution Time (LET) approach
The Logical Execution Time concept defines, for each task, a fixed time window between the reading of the inputs and the provision of the outputs. The task may perform its computation anywhere within this LET interval; the exact execution time is irrelevant to the externally visible timing behavior. All that matters is when inputs are accepted and when results are output. In contrast to the Last-Is-Best communication model, LET ensures constant and predictable times of data communication: the input values remain unchanged during the entire logical execution time, and output values are held back until its end. As a result, no random delays (jitter) occur in the data flows, which makes the system behavior deterministic and increases robustness.
A prerequisite for LET is that the defined time window is large enough to cover the worst-case execution time of the respective task. In practice, the LET interval is therefore at least as long as the Worst-Case Execution Time (WCET) of the task (possibly including communication times between control units). If the WCET is exceeded, the task cannot adhere to its logical time, and the timing specification is violated. WCET analyses and corresponding dimensioning of the LET intervals are therefore essential for a successful design. Compliance with the specified LET times can also be monitored at runtime with little overhead in order to detect deviations early.[4][5][6]
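Such a runtime check can be very lightweight. A minimal C sketch, assuming a hypothetical timestamp source now_us() and illustrative data types:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical timestamp source (e.g. a hardware timer), in microseconds. */
extern uint32_t now_us(void);

typedef struct {
    uint32_t release_us;  /* start of the LET interval */
    uint32_t let_us;      /* logical execution time, dimensioned >= WCET */
} let_interval_t;

/* Called when the task's computation finishes. Returns true if the task
   stayed within its logical execution time; a violation means the LET
   dimensioning (or the underlying WCET estimate) was too optimistic. */
bool let_check(const let_interval_t *iv) {
    uint32_t elapsed = now_us() - iv->release_us;  /* wrap-safe unsigned diff */
    return elapsed <= iv->let_us;
}
```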
Technical implementation
The implementation of LET in real-time operating systems requires mechanisms that synchronize a task's input and output accesses exactly at the defined time boundaries. Native support for this is still rare in classic automotive RTOSs; in avionics, for example, there are operating systems conforming to ARINC 653 that can directly ensure LET behavior. In automotive systems without built-in LET functionality, developers rely on software solutions to achieve the desired timing behavior. Common approaches to realizing LET include:[5]
- Double or ring buffers: Tasks first write their results to a new buffer, while other components continue to access the old values in the previous buffer until the LET interval expires. At the end time, the buffer pointer is switched so that the new data is read from then on.[5] This principle ensures that only consistent, simultaneously captured values are used during a LET interval, but it requires careful management of multiple buffers (see the sketch after this list).
- Driver tasks (wrappers): In addition to the actual application task, special tasks are scheduled in the RTOS that copy the input values into local copies at the exact release time and publish the output of the main task at the terminate time.[5] In between, the main task works only with its local variables. This pattern, often referred to as a LET wrapper or driver task, implements the LET scheme, but it increases the number of context switches and generates overhead, especially if very short LET intervals are required.[5]
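The following minimal C sketch combines both patterns: a release hook freezes the inputs, the task computes on a local copy into the inactive buffer, and a terminate hook switches the buffer index. It assumes a hypothetical RTOS that invokes the hooks exactly at the interval boundaries; all names are illustrative:

```c
/* Double-buffered output: consumers read buf[active], the task writes buf[1 - active]. */
typedef struct { float out; } result_t;

static result_t buf[2];
static volatile int active = 0;      /* buffer index currently read by consumers */

static float input_copy;             /* task-local input snapshot */
extern volatile float shared_input;  /* produced by another component */

/* Driver/wrapper part 1: executed exactly at the release time. */
void let_release(void) {
    input_copy = shared_input;       /* freeze inputs for the whole LET interval */
}

/* Application task: may run anywhere inside the LET interval. */
void let_task(void) {
    buf[1 - active].out = 2.0f * input_copy;  /* placeholder computation */
}

/* Driver/wrapper part 2: executed exactly at the terminate time. */
void let_terminate(void) {
    active = 1 - active;             /* single index switch publishes the results */
}
```

Consumers that read buf[active] thus always see a complete, unchanged result set for the full LET interval; the single index switch replaces per-signal locking.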
A central technical characteristic of such implementations is the additional memory requirement: intermediate buffers or duplicate variables must be kept for each signal exchanged on the input and output side. Studies show that the memory footprint increases only slightly due to LET, for example by approx. 7.5% in an industry-related case study, compared to direct, non-buffered communication. The runtime impact is likewise moderate: since LET does not force the processor to idle until the end of the interval but allows other tasks to execute in between, CPU utilization remains efficient. Overall, the complexity shifts towards minor scheduling and memory overhead in exchange for deterministic timing properties.[7]
AUTOSAR, as a widespread platform for automotive ECUs, so far (as of approx. 2020) supports LET only via additional concepts. In classic AUTOSAR systems with the Runtime Environment (RTE), LET can be achieved by configuring the runnables and communication types appropriately. With implicit communication, the RTE already copies all required signals at the start of a runnable and writes output values back at its end, which corresponds to the basic LET principle. For complete LET semantics across several runnables, this behavior can be enforced by additional auxiliary runnables, e.g. separate read and write routines that run synchronously with the task start and end. This is functionally equivalent to the driver-task approach mentioned above and has already been used successfully by automotive suppliers such as Continental in engine control systems on multi-core hardware. However, this approach involves effort and potential timing overhead. To reduce such workarounds, the AUTOSAR consortium has defined Timing Extensions in which LET can be described explicitly as a model element. Future AUTOSAR toolchains can build on this to take LET behavior directly into account during code generation and scheduling configuration.[2][4][5]
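In code, implicit communication surfaces as the RTE's generated Rte_IRead/Rte_IWrite accessors: the RTE snapshots the read copies before the runnable starts and flushes the write copies after it returns. A schematic C sketch; the concrete identifiers are generated per project, so the port, runnable and accessor names below are invented:

```c
/* Prototypes normally come from the generated Rte_<SWC>.h header;
   the names below are hypothetical and follow the generated naming pattern. */
extern float Rte_IRead_SpeedControl_Step_VehicleSpeed_value(void);
extern float Rte_IRead_SpeedControl_Step_SetPoint_value(void);
extern void  Rte_IWrite_SpeedControl_Step_Torque_value(float u);

/* Runnable mapped to a periodic task. With implicit communication the RTE
   fixes all inputs before this function runs and publishes all outputs
   after it returns, which matches the LET read/write discipline. */
void SpeedControl_Step(void) {
    float v  = Rte_IRead_SpeedControl_Step_VehicleSpeed_value();
    float sp = Rte_IRead_SpeedControl_Step_SetPoint_value();

    float u = 0.5f * (sp - v);   /* placeholder control law */

    Rte_IWrite_SpeedControl_Step_Torque_value(u);
}
```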
Advantages and Challenges in Deeply Embedded Systems
Advantages
In deeply embedded systems, i.e. embedded control units with high real-time requirements and often limited resources, LET offers decisive advantages. Foremost among them are the determinism and stability of the timing behavior: data-flow chains (cause-effect chains) behave identically with LET in every execution cycle, provided the tasks adhere to their time budgets. Outputs appear at strictly fixed intervals, which ensures precise control intervals in vehicle control systems, for example in engine control, vehicle dynamics or ADAS sensor fusion. Developers can reason about the timing of a function independently of the utilization of other functions, as LET creates temporal decoupling. This also facilitates the extension of systems: new software components or changes to existing functions do not affect the timing behavior of the other components as long as the LET principles are adhered to. The result is high robustness against change and better platform independence: the function design remains valid even if hardware or execution conditions vary later. In the automotive industry, where ECU software is regularly updated and ported between vehicle platforms, this stability is a major advantage.[4]
Another advantage is the possibility of lock-free communication between tasks. The clear temporal protocol of LET allows shared variables to be exchanged consistently without mutex synchronization, as read and write accesses are strictly separated into different phases. In fact, LET is already being used specifically to create lock-free data flows on multi-core ECUs. Mercedes-Benz, for example, reported the successful use of LET for multi-core processing in electric vehicle platforms in order to implement concurrent control software without race conditions. Thus, in deeply embedded automotive systems, LET helps to improve parallelism and integration without compromising timing predictability.[4]
Challenges
Challenges when using LET lie primarily in the strict time constraints and in the additional design and analysis effort. Each task must be designed so that it never exceeds its logically assigned time window; this requires careful worst-case analyses, code optimization and, if necessary, reserves for unforeseen delays. The conservative dimensioning of LET intervals (i.e. slightly longer than typical execution times) results in an average computing-time reserve that is not used by the task in question. Other tasks can use this free capacity (work-conserving scheduling), but the timing dimensioning of the system must nevertheless be carried out carefully for maximum load. In resource-constrained ECUs with limited CPU power and memory, this design process is demanding. In addition, LET slightly increases the complexity of the software architecture: buffers, additional variables or tasks are required, which makes system design and testing more involved. Thanks to the advantages mentioned (determinism, robustness, composability), however, this effort is considered justified in many safety-critical areas in order to obtain certifiable and reliable real-time systems.
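A small worked example illustrates the dimensioning trade-off (all numbers invented for illustration):

```latex
% Feasibility: each task's LET interval must cover its worst case.
\mathrm{LET}_i \ge \mathrm{WCET}_i
% Example: period T = 10\,\mathrm{ms}, WCET C = 4\,\mathrm{ms},
% chosen LET = 5\,\mathrm{ms} \ge 4\,\mathrm{ms}: feasible.
% Processor utilization is unchanged, since LET adds no execution time:
U = \sum_i \frac{C_i}{T_i} = \frac{4\,\mathrm{ms}}{10\,\mathrm{ms}} = 0.4
% The average slack, e.g. 5\,\mathrm{ms} - 2\,\mathrm{ms} = 3\,\mathrm{ms}
% for an average runtime of 2\,\mathrm{ms}, is idle for this task but
% remains usable by other tasks under work-conserving scheduling.
```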
References
- [1] AUTOSAR: Specification of Timing Extensions. https://www.autosar.org/fileadmin/standards/R20-11/CP/AUTOSAR_TPS_TimingExtensions.pdf
- [2] Logical Execution Time Implementation and Memory Optimization Issues in AUTOSAR Applications for Multicores. https://www.ecrts.org/forum/download/FMTV17_def.pdf
- [3] D. Ziegenbein and A. Hamann, "Timing-aware control software design for automotive systems", in 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), 2015, pp. 1–6.
- [4] Industry-track: System-Level Logical Execution Time for Automotive Software Development. https://www.ida.ing.tu-bs.de/index.php?eID=dumpFile&t=f&f=24920&token=04ed003e3cef30622882b964cf8ee3cfdb731cdd
- [5] Logical Execution Time in the Automotive Environment - ESE 2018 | MicroConsult Academy. https://www.microconsult.de/2149-0-Logical-Execution-Time-in-the-Automotive-Environment---ESE-2018.html
- [6] Modeling with the Timing Definition Language (TDL). https://ptolemy.berkeley.edu/projects/chess/pubs/157/Pree_Modeling_with_TDL.pdf
- [7] The Logical Execution Time Paradigm: New Perspectives for Multicore Systems. https://ckirsch.github.io/publications/invited/Dagstuhl18-LET.pdf