SWAT


SWAT is a collection of tools for estimating and optimizing the execution time and energy consumption of embedded software. Each tool performs an elementary operation, such as static or dynamic model construction, energy estimation, software analysis and reporting, or back-annotation.

These tools are organized into "core" flows implementing the following processes:


  • Execution time and energy consumption estimation
  • Fine-grained analysis and back-annotation
  • Instrumentation, tracing and trace post-processing
  • Target processor characterization
  • Source to source optimization
  • Optimization support engine
  • Dynamic voltage and frequency scaling optimization


Core tools provide seamless mechanisms for integration with other tools of the COMPLEX flow.

In more detail, the SWAT toolchain implements modelling, estimation and optimization techniques for embedded software applications written in pure C code. The toolchain is organized into a front-end, responsible for the modelling phase (target processor model, source static model); a set of “core” flows implementing the different functionalities of SWAT; and a post-processing engine that analyses the execution traces (event traces).

The goal of the core flows is described below:

Target processor characterization flow. This flow has the goal of expressing the execution time and energy consumption characteristics of the target core in terms of LLVM instructions. The input of this flow is an instruction-level characterization of the target processor (provided by the vendor) and the output is an abstract model of that processor expressed in terms of the LLVM instruction-set.

Estimation flow. This is the most important SWAT flow: it dynamically estimates the execution time and energy consumption of a given application stimulated with a specific set of data. The models involved in this process are data-independent, so using a different set of data does not require changing the models (in the front-end) but only re-running the instrumented application. The input is a set of C source files and the target processor model derived by the characterization flow; the output is an overall estimate of execution time and energy consumption.

Analysis and back-annotation flow. The models generated in the front-end phase of the estimation flow can be analyzed in further detail to derive different static and dynamic metrics. These metrics are then summarized either as an HTML report or as back-annotated source code. Energy and timing contributions, in particular, are reported for each line of the source code.

Optimization flow. The optimization flow performs different kinds of optimizations. These optimizations do not result in a modification of the application source code, but rather in a set of “suggestions” indicating what transformations should be applied to the code to improve its execution time and/or energy consumption. Currently an experimental C-to-C optimization flow has been implemented on top of the LLVM opt tool, integrated with MOST. This toolchain explores the optimization options available in opt to find the most beneficial combination for the specific application. A second flow performs an optimization by selecting the CPU operating mode (in terms of voltage and frequency scaling) to be assigned to each function or group of functions. The third and last flow collects a wide range of metrics on basic blocks and functions and proposes specific very high-level transformations to apply to the most critical portions of the application.

Instrumentation and trace flow. This flow has the goal of tracing specific information during the execution of the application. The flow is split into a static, rule-based instrumentation phase and an optional, dynamic execution phase. If used as a standalone tool, both phases are executed and the resulting event trace is fed to a post-processor to collect statistics. If used in conjunction with other COMPLEX tools, only the instrumentation phase is necessary. This phase produces as output a binary library (or set of object files) implementing the instrumented version of the application. Such a library is then linked with the other parts of the system’s executable models to allow complete system simulation and estimation.


Last Updated ( Saturday, 18 May 2013 22:52 )  

