Worst-case execution time: Calculation
1 Calculation
1.1 Considerations
1.2 Automated approaches
1.2.1 Static analysis techniques
1.2.2 Measurement and hybrid techniques
Calculation
Since the early days of embedded computing, embedded software developers have either used:
end-to-end measurements of code, for example performed by setting an I/O pin on the device high at the start of the task and low at the end of the task, then using a logic analyzer to measure the longest pulse width, or by measuring within the software itself using the processor clock or an instruction count (see the sketch after this list), or
manual static analysis techniques, such as counting the assembler instructions for each function, loop, etc. and combining them.
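As a minimal sketch of the end-to-end approach, the following C fragment combines both variants: it toggles an I/O pin so that a logic analyzer can measure the pulse width, and it reads a processor cycle counter in software, keeping the longest observed execution as a high-water mark. The gpio_set, gpio_clear and read_cycle_counter functions and the pin number are hypothetical placeholders for whatever the target hardware actually provides.

    #include <stdint.h>

    /* Hypothetical hardware hooks: replace with the GPIO register access and
     * free-running timer/cycle counter of the actual target. */
    extern void gpio_set(int pin);
    extern void gpio_clear(int pin);
    extern uint32_t read_cycle_counter(void);

    extern void task_under_test(void);

    #define TIMING_PIN 3

    static uint32_t observed_max_cycles = 0;   /* high-water mark over all runs */

    void measure_task_once(void)
    {
        gpio_set(TIMING_PIN);                  /* logic analyzer sees the pulse start */
        uint32_t start = read_cycle_counter();

        task_under_test();

        uint32_t elapsed = read_cycle_counter() - start;
        gpio_clear(TIMING_PIN);                /* pulse end: width = execution time */

        if (elapsed > observed_max_cycles)
            observed_max_cycles = elapsed;     /* keep the longest observed run */
    }

The high-water mark recorded this way is only as trustworthy as the testing that drove the task down its longest path, which is exactly the limitation discussed next.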
Both of these techniques have limitations. End-to-end measurements place a high burden on software testing to achieve the longest path; counting instructions is only applicable to simple software and hardware. In both cases, a margin for error is often added to account for untested code, hardware performance approximations or mistakes. A margin of 20% is commonly used, although there is little justification for that figure, save historical confidence ("it worked last time").
As software and hardware have increased in complexity, they have driven the need for tool support. Complexity is increasingly becoming an issue in both static analysis and measurements. It is difficult to judge how wide the error margin should be and how well tested the software system is. System safety arguments based on a high-water mark achieved during testing are widely used, but they become harder to justify as the software and hardware become less predictable.
In the future, it is likely that a requirement for safety-critical systems will be that they are analyzed using both static and measurement-based approaches.
Considerations
The problem of finding the WCET by analysis is equivalent to the halting problem and is therefore insoluble in the general case. Fortunately, for the kind of systems that engineers typically want to find a WCET for, the software is usually well structured, will always terminate, and is analyzable.
Most methods for finding a WCET involve approximations (usually rounding upwards where there are uncertainties), and hence in practice the exact WCET is regarded as unobtainable. Instead, the different techniques for finding the WCET produce estimates of it. These estimates are typically pessimistic, meaning that the estimated WCET is known to be higher than the real WCET (which is what is desired). Much of the work on WCET analysis focuses on reducing the pessimism in the analysis so that the estimated value is low enough to be valuable to the system designer.
WCET analysis usually refers to the execution time of a single thread, task or process. However, on modern hardware, especially multi-core, other tasks in the system will impact the WCET of a given task if they share caches, memory buses and other hardware features. Further, task scheduling events such as blocking or interruptions should be considered in WCET analysis if they can occur in the particular system. Therefore, it is important to consider the context in which WCET analysis is applied.
Automated approaches
There are many automated approaches to calculating WCET beyond the manual techniques above. These include:
analytical techniques to improve test cases and increase confidence in end-to-end measurements,
static analysis of the software ("static" meaning without executing the software), and
combined approaches, often referred to as "hybrid" analysis, being a combination of measurements and structural analysis.
Static analysis techniques
A static WCET tool attempts to estimate the WCET by examining the computer software without executing it directly on the hardware. Static analysis techniques have dominated research in this area since the late 1980s, although in an industrial setting, end-to-end measurement approaches remain standard practice.
Static analysis tools work at a high level to determine the structure of a program's task, working either on a piece of source code or on a disassembled binary executable. They also work at a low level, using timing information about the real hardware the task will execute on, with all its specific features. By combining the two kinds of analysis, the tool attempts to give an upper bound on the time required to execute a given task on a given hardware platform.
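As a simplified illustration of how the two levels are combined, the C sketch below performs a tree-based combination over a toy program representation: sequences add up, branches take the slower alternative, and loops multiply the body bound by a maximum iteration count. The node types and per-block cycle costs are invented for illustration; a real tool derives them from the control-flow graph of the binary and a detailed processor timing model.

    #include <stdint.h>

    /* Toy program representation: basic blocks with worst-case cycle costs,
     * composed into sequences, two-way branches and bounded loops. */
    typedef enum { BLOCK, SEQ, BRANCH, LOOP } node_kind;

    typedef struct node {
        node_kind kind;
        uint64_t cycles;                    /* BLOCK: worst-case cost of the block */
        const struct node *first, *second;  /* SEQ/BRANCH children; LOOP body in 'first' */
        uint64_t bound;                     /* LOOP: maximum number of iterations */
    } node;

    /* Tree-based combination: yields an upper bound, not the exact WCET. */
    uint64_t wcet_bound(const node *n)
    {
        switch (n->kind) {
        case BLOCK:  return n->cycles;
        case SEQ:    return wcet_bound(n->first) + wcet_bound(n->second);
        case BRANCH: {                      /* take the slower of the two paths */
            uint64_t a = wcet_bound(n->first), b = wcet_bound(n->second);
            return a > b ? a : b;
        }
        case LOOP:   return n->bound * wcet_bound(n->first);
        }
        return 0;
    }

Because a branch always contributes its slower side, even when that side can never coincide with the worst case elsewhere in the program, this kind of combination is inherently pessimistic, which is one source of the over-estimation discussed below.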
At the low level, static WCET analysis is complicated by the presence of architectural features that improve the average-case performance of the processor: instruction/data caches, branch prediction and instruction pipelines, for example. It is possible, but increasingly difficult, to determine tight WCET bounds if these modern architectural features are taken into account in the timing model used by the analysis. For example, cache locking techniques can be used to simplify WCET estimation and provide predictability.
Certification authorities such as the European Aviation Safety Agency therefore rely on model validation suites.
Static analysis has produced good results for simpler hardware, but a possible limitation of static analysis is that the hardware (the CPU in particular) has reached a complexity which is extremely hard to model. In particular, the modelling process can introduce errors from several sources: errors in chip design, lack of documentation, errors in documentation, errors in model creation; all leading to cases where the model predicts different behavior to that observed on real hardware. Typically, where it is not possible to accurately predict the behavior, a pessimistic result is used, which can lead to the WCET estimate being much larger than anything achieved at run-time.
Obtaining a tight static WCET estimation is particularly difficult on multi-core processors.
There are a number of commercial and academic tools that implement various forms of static analysis.
Measurement and hybrid techniques
Measurement-based and hybrid approaches try to measure the execution times of short code segments on the real hardware, which are then combined in a higher-level analysis. The tools take into account the structure of the software (e.g. loops, branches) to produce an estimate of the WCET of the larger program. The rationale is that while it is hard to test the longest path in complex software, it is easier to test the longest path in each of its many smaller components. A worst-case effect needs to be seen only once during testing for the analysis to be able to combine it with other worst-case events in its analysis.
Typically, small sections of software can be measured automatically using techniques such as instrumentation (adding markers to the software) or hardware support such as debuggers and CPU hardware tracing modules. These markers result in a trace of execution, which includes both the path taken through the program and the time at which different points were executed. The trace is then analyzed to determine the maximum time each part of the program has ever taken to execute, the maximum observed iteration time of each loop, and whether there are parts of the software that remain untested (code coverage).
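A minimal sketch of software instrumentation and the corresponding trace analysis might look as follows: an instrumentation point writes a marker identifier and a timestamp into a trace buffer, and an analysis pass keeps the maximum observed time between consecutive markers, giving the per-segment high-water marks that a hybrid tool would feed into its structural analysis. The read_timestamp function, the buffer sizes and the marker scheme are assumptions for illustration; real tools usually rely on hardware trace modules and stream the data off the target.

    #include <stdint.h>
    #include <stddef.h>

    #define TRACE_CAPACITY 4096
    #define MAX_MARKERS    256

    extern uint32_t read_timestamp(void);      /* hypothetical free-running timer read */

    typedef struct { uint16_t marker; uint32_t time; } trace_entry;

    static trace_entry trace_buf[TRACE_CAPACITY];
    static size_t trace_len = 0;

    /* Instrumentation point, inserted at the start and end of each measured segment. */
    void trace_point(uint16_t marker)
    {
        if (trace_len < TRACE_CAPACITY) {
            trace_buf[trace_len].marker = marker;
            trace_buf[trace_len].time   = read_timestamp();
            trace_len++;
        }
    }

    /* Analysis pass: for the segment opened by each marker, keep the maximum
     * time observed before the next marker is reached. Entries that stay at
     * zero indicate segments that were never executed (a coverage gap). */
    void analyze_trace(uint32_t max_segment_time[MAX_MARKERS])
    {
        for (size_t i = 0; i < MAX_MARKERS; i++)
            max_segment_time[i] = 0;

        for (size_t i = 0; i + 1 < trace_len; i++) {
            uint32_t elapsed = trace_buf[i + 1].time - trace_buf[i].time;
            uint16_t seg     = trace_buf[i].marker;
            if (seg < MAX_MARKERS && elapsed > max_segment_time[seg])
                max_segment_time[seg] = elapsed;
        }
    }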
Measurement-based WCET analysis has produced good results for both simple and complex hardware, although, like static analysis, it can suffer excessive pessimism in multi-core situations, where the impact of one core on another is hard to define. A limitation of measurement is that it relies on observing the worst-case effects during testing (although not necessarily at the same time). It can be hard to determine whether the worst-case effects have actually been tested.
There are a number of commercial and academic tools that implement various forms of measurement-based analysis.