Keynote Speakers
Elvira Albert, Complutense University of Madrid | Resource Analysis of Distributed and Concurrent Programs
Distributed and concurrent systems are composed of distributed nodes that communicate and coordinate their actions, and of concurrent tasks that interleave their execution within those nodes. Resource analysis of such systems must therefore take into account distribution, communication, and task interleaving. In this talk, we will describe the basic framework proposed for the resource analysis of distributed concurrent systems, together with the new notions of cost that arise in this context. In particular, we will discuss two such notions: peak cost, which captures the maximum amount of resources that each distributed node may require over the whole execution; and parallel cost, which corresponds to the maximum cost of the overall execution, taking into account that, when distributed tasks run in parallel, only the cost of the most expensive one needs to be accounted for.
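A minimal sketch of the parallel-cost notion in Python (the seq/par cost combinators and the numbers are illustrative assumptions, not the framework of the talk): sequential composition adds costs, while parallel composition keeps only the cost of the most expensive branch.

    # Toy cost combinators; all names and numbers are invented.
    def seq(*costs):
        # Tasks executed one after another contribute the sum of their costs.
        return sum(costs)

    def par(*costs):
        # Tasks running in parallel on distinct nodes: only the most
        # expensive one determines the parallel cost.
        return max(costs)

    # A main task (cost 2) spawns two remote tasks (costs 5 and 3) that
    # run in parallel, then combines their results (cost 1).
    print(seq(2, par(5, 3), 1))  # 2 + max(5, 3) + 1 = 8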
Kerstin I. Eder, University of Bristol | Whole Systems Energy Transparency: More Power to Software Developers!
Energy efficiency is now a major, if not the major, constraint in electronic systems engineering. Significant progress has been made in low-power hardware design for more than a decade; the potential for savings is now far greater at the higher levels of abstraction in the system stack, and the greatest savings are expected from energy-consumption-aware software [1]. Designing software for energy efficiency requires visibility of energy consumption from the hardware, where the energy is consumed, all the way up to the programs that ultimately control what the hardware does. This visibility is termed energy transparency [3].

My lecture emphasizes the importance of energy transparency from hardware to software as a foundation for energy-aware software engineering. Energy transparency enables a deeper understanding of how algorithms and coding impact the energy consumption of a computation when executed on hardware. It is a key prerequisite for informed design-space exploration and helps system designers find the optimal trade-off between performance, accuracy and energy consumption of a computation [2]. Promoting energy efficiency to a first-class software design goal is an urgent research challenge that must be addressed to achieve truly energy-efficient systems.

I will start by illustrating how to measure the energy consumption of software on embedded platforms [7]. This enables monitoring the energy consumption of software at runtime, providing insights into the power, energy and timing behaviour of algorithms, with some seemingly surprising results [4]. Energy models can be built to predict the energy consumption of programs based on execution statistics or trace data obtained from a simulator, or statically, by employing advanced static resource consumption analysis techniques. I will introduce our approach to energy consumption modelling at the Instruction Set Architecture (ISA) level [8], comparing the energy profiles of different instructions as well as exposing the impact of data width on energy consumption.

We then focus on two approaches to static analysis for energy consumption estimation: one based on solving recurrence equations, the other based on implicit path enumeration (IPET). With the former it is possible to extract parameterized energy consumption functions [10,6,9], while the latter produces absolute values [5]. Analysis can be performed either directly at the ISA level [10,5] or at the Intermediate Representation (IR) level of the compiler, here the LLVM IR [9,6,5]. The latter is enabled by a novel mapping technique that associates costs at the ISA level with entities at the LLVM IR level [5].

A critical review of the assumptions underlying both analysis approaches, in particular an investigation into the impact of using a cost model that assigns a single energy consumption cost to each instruction, leads to a better understanding of how the limitations of static resource bound analysis techniques affect the safety and tightness of the bounds obtained. The fact that energy consumption is inherently data dependent will be illustrated for a selected set of instructions, some with rather beautiful heat maps, with conclusions drawn on how energy consumption analysis differs from execution time analysis, and an intuition into why analysing for worst-case dynamic energy is infeasible in general [11]. This leads me to discuss new research challenges for energy consumption modelling as well as for static energy consumption analysis.

I will close with a call to give "more power" to software developers, so that "cooler" programs can be written in the future.
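As a minimal sketch of an ISA-level cost model of the kind discussed above (the instruction names and per-instruction energy values below are invented for illustration, not taken from the cited work): each instruction is assigned a single energy cost, and the energy of a run is the sum of the costs along its execution trace.

    # Hypothetical per-instruction energy costs in nanojoules. Real models
    # are derived from hardware measurements and may also account for
    # inter-instruction and data-dependent effects.
    ENERGY_NJ = {"add": 1.2, "mul": 2.5, "load": 3.1, "store": 3.4, "branch": 1.0}

    def trace_energy(trace):
        # Single-cost-per-instruction model: sum the costs over the trace.
        return sum(ENERGY_NJ[instr] for instr in trace)

    print(trace_energy(["load", "mul", "add", "store", "branch"]))  # 11.2 nJ

The critical review in the abstract applies precisely to such models: assigning one cost per instruction ignores the data dependence of energy consumption, which affects both the safety and the tightness of the bounds a static analysis can derive.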
Lars Kristiansen, University of Oslo | On Resource Analysis of Imperative Programs
To what extent is it possible to estimate a program's resource requirements by analyzing the program code? I started my research on resource analysis of imperative programs about 17 years ago, and I published my last paper on the subject about five years ago. I will try to share some of the insights I gained during those years. I will also reflect a little upon the nature of this type of research: Is this pure theoretical computer science? Should we aim for real-life applications? To what extent can we expect such applications? Recently it has turned out that the theory for resource analysis that I developed together with Amir Ben-Amram and Neil Jones may have real-life applications when re-used as compiler theory. Towards the end of my talk I will explain these applications.
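To make the opening question concrete, here is a small hypothetical example of the kind of bound such an analysis can derive from code alone (the program and the bound are illustrative, not taken from the talk): for a simple imperative loop, the growth of each variable can be bounded symbolically in terms of the initial values.

    # Toy imperative program:  for i in 1..n: x = x + y
    # An analysis of the code can certify, without running the program,
    # the symbolic bound  x_final <= x0 + n * y0  (for non-negative y0).
    def program(n, x0, y0):
        x, y = x0, y0
        for _ in range(n):
            x = x + y
        return x

    n, x0, y0 = 10, 3, 2
    assert program(n, x0, y0) <= x0 + n * y0  # the derived bound holds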
Jan Reineke, Saarland University | Challenges for Timing Analysis of Multi-Core Architectures
In real-time systems, timely computation of outputs is as important as computing the correct output values. Timing analysis is a fundamental step in proving that all timing constraints of an application are met. Given a program and a microarchitecture, the task of timing analysis is to determine an upper bound on the response time of the program on that microarchitecture under all possible circumstances. While this task is in general undecidable, it can be solved approximately using sound abstractions. Current microarchitectural developments make timing analysis increasingly difficult: contemporary processor architectures employ deep pipelines, branch predictors, and caches to improve performance, and multi-core processors share buses, caches, and other resources, introducing interference between logically independent programs. I will discuss three challenges arising from these developments, and approaches to overcome them.
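As a small hypothetical illustration of the kind of bound timing analysis computes (the control-flow structure, per-block cycle counts, and loop bound are invented): a sound upper bound on execution time is obtained by maximizing the accumulated cost over all feasible paths.

    # Toy worst-case execution time (WCET) estimate: costs of basic
    # blocks in cycles, a branch, and a loop with a known bound.
    # All numbers are invented for illustration.
    BLOCK_CYCLES = {"entry": 5, "then": 12, "else": 7, "loop_body": 20, "exit": 3}
    LOOP_BOUND = 8  # maximum loop iterations, assumed statically known

    wcet = (BLOCK_CYCLES["entry"]
            + max(BLOCK_CYCLES["then"], BLOCK_CYCLES["else"])  # costlier branch arm
            + LOOP_BOUND * BLOCK_CYCLES["loop_body"]           # full loop taken
            + BLOCK_CYCLES["exit"])
    print(wcet)  # 5 + 12 + 8*20 + 3 = 180 cycles

Real analyses perform this maximization over a full control-flow graph, for instance as an integer linear program in the IPET approach, and must additionally model pipelines, branch predictors, caches, and shared-resource interference; it is this microarchitectural modelling that makes timing analysis of multi-core architectures hard.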