07:30 - 08:15 Registration
08:15 - 08:30 Welcome
08:30 - 09:00 Tom Henzinger: Reactive Systems: A Powerful Paradigm for Representing and Analyzing Complex Dynamic Interactions
09:00 - 09:30 Thomas Pock: Learning better models for inverse problems in imaging
09:30 - 10:00 Hanspeter Mössenböck: Truffle - A Framework for Writing Self-optimizing Language Interpreters
10:00 - 10:30 Coffee Break
10:30 - 11:00 Daniel Gruss: Microarchitectural Attacks: From the Basics to Arbitrary Read and Write Primitives without any Software Bugs
11:00 - 12:00 Panel: "Requirements for a Good Informatics Education at School"
12:00 - 12:30 Awards
12:30 - 13:30 Lunch
13:30 - 14:00 Ben L. Titzer: A Little on V8 and WebAssembly
14:00 - 14:30 Alastair Reid: Creating Formal Specifications of the Arm Processor Architecture
14:30 - 15:00 Laura Kovacs: Automated Reasoning for Systems Engineering
15:00 - 15:30 Coffee Break
15:30 - 16:00 Sebastian Krinninger: Towards Optimal Dynamic Graph Compression
16:00 - 16:30 Gernot Plank: Computational Modeling of Cardiac Function - The Personalized Heartbeat
16:30 - 17:00 Christoph Dellago: Machine learning in the atomistic simulations of materials

Tom Henzinger:

Reactive Systems: A Powerful Paradigm for Representing and Analyzing Complex Dynamic Interactions

A reactive system is a dynamic system that evolves in time by reacting to external events. Computer science has developed powerful theories, algorithms, and tools for modeling and predicting the behavior of reactive systems. These techniques are based on mathematical logic, programming languages, and game theory. They were originally developed to let us build a more dependable digital infrastructure, but their utility transcends computer science: an aircraft, a network of business processes, and a living organism are all examples of reactive systems. Our understanding and design of such systems can benefit greatly from reactive modeling and analysis techniques such as execution, composition, abstraction, and state exploration.

Thomas Pock:

Learning better models for inverse problems in imaging

In this talk, I will present our recent activities in learning better models for inverse problems in imaging. We consider classical variational models used for inverse problems, but generalize these models by introducing a large number of free model parameters. We learn the free parameters by minimizing a loss function that compares the reconstructed images obtained from the variational models with ground-truth solutions from a training database. I will also show very recent results on learning "deeper" regularizers that are already able to capture semantic information of images. We show applications to different inverse problems in imaging, with a particular focus on image reconstruction from undersampled MRI data.
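To make the parameter-learning idea concrete, here is a minimal, hypothetical sketch (not the models from the talk): a toy variational denoiser with a single free regularization weight, learned by gradient descent on a loss that compares the model's reconstructions with ground-truth training images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: ground-truth "images" and noisy observations.
gt = rng.normal(size=(100, 64))            # ground-truth signals
y = gt + 0.5 * rng.normal(size=gt.shape)   # noisy measurements

def reconstruct(y, lam):
    """Minimizer of the toy variational model
    x* = argmin_x ||x - y||^2 + lam * ||x||^2, i.e. x* = y / (1 + lam)."""
    return y / (1.0 + lam)

def loss(lam):
    """Training loss: squared distance of reconstructions to ground truth."""
    return np.mean((reconstruct(y, lam) - gt) ** 2)

def loss_grad(lam):
    """d(loss)/d(lam), derived by hand from the closed-form reconstruction."""
    x = reconstruct(y, lam)
    return np.mean(2.0 * (x - gt) * (-y / (1.0 + lam) ** 2))

# Learn the free model parameter by plain gradient descent on the loss.
lam = 0.0
for _ in range(200):
    lam -= 0.5 * loss_grad(lam)

print(f"learned lambda = {lam:.3f}, loss = {loss(lam):.4f}")
```

The same bilevel structure (an inner variational reconstruction, an outer loss over a training set) carries over when the regularizer has many parameters and the inner problem is solved iteratively rather than in closed form.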

Hanspeter Mössenböck:

Truffle - A Framework for Writing Self-optimizing Language Interpreters

Truffle is a Java-based language implementation framework that is especially suited for dynamically typed languages such as JavaScript, Python, or Ruby. Programs are translated to abstract syntax trees (ASTs), which are then interpreted. From the feedback collected during interpretation, the AST is automatically transformed to adapt to the observed input. When the AST stabilizes, it is JIT-compiled to efficient machine code. Run-time feedback and speculative optimizations (which can be undone if necessary) lead to a peak performance comparable to that of statically typed programs compiled with optimizing compilers.
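As a rough, hypothetical illustration of the node-rewriting idea (in Python rather than Truffle's Java, and greatly simplified): an add node that starts out unspecialized, speculates on integer operands once it has observed them, and deoptimizes back to a generic version if the speculation fails.

```python
class AddNode:
    """A self-optimizing AST node, loosely in the Truffle style:
    `execute` points at the node's current specialization and is
    rewritten in place based on operand types observed at run time."""

    def __init__(self, left, right):
        self.left, self.right = left, right   # child nodes: env -> value
        self.execute = self._uninitialized

    def _uninitialized(self, env):
        a, b = self.left(env), self.right(env)
        # Speculate: if the first operands are ints, assume they stay ints.
        if isinstance(a, int) and isinstance(b, int):
            self.execute = self._int_add
        else:
            self.execute = self._generic_add
        return a + b

    def _int_add(self, env):
        a, b = self.left(env), self.right(env)
        if isinstance(a, int) and isinstance(b, int):
            return a + b                      # fast path
        self.execute = self._generic_add      # speculation failed: deoptimize
        return a + b

    def _generic_add(self, env):
        return self.left(env) + self.right(env)

node = AddNode(lambda env: env["x"], lambda env: env["y"])
print(node.execute({"x": 1, "y": 2}))      # specializes to _int_add, prints 3
print(node.execute({"x": "a", "y": "b"}))  # deoptimizes, prints ab
```

In Truffle the same pattern is expressed by replacing nodes in the tree; once the tree stops rewriting itself, the stable specializations are what the JIT compiler turns into machine code.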

Daniel Gruss:

Microarchitectural Attacks: From the Basics to Arbitrary Read and Write Primitives without any Software Bugs

In this talk, we will discuss microarchitectural attacks which arise from various processor optimizations. Modern processors are highly optimized systems where every single cycle of computation time matters. Many optimizations depend on the data that is being processed. Microarchitectural side-channel attacks leak secrets from cryptographic computations, from general-purpose computations, or from the kernel. This leakage even persists across all common isolation boundaries, such as processes, containers, and virtual machines. Microarchitectural fault attacks exploit the physical imperfections of modern computer systems. Shrinking process technology introduces effects between isolated hardware elements that can be exploited by attackers to take control of the entire system. These attacks are especially interesting in scenarios where the attacker is unprivileged or even sandboxed. We will investigate known and new side channels and show that microarchitectural attacks can be fully automated and run in JavaScript or other constrained environments. By the end of the talk we will have built arbitrary read and write primitives, which allow an attacker on an affected system without any software bugs to read arbitrary data through the Meltdown attack and to perform arbitrary modifications of data through the Rowhammer attack.

Ben L. Titzer: 

A Little on V8 and WebAssembly

Ever wondered how webpages do all those interesting things? How can the web offer rich user experiences across so many different types of computers without sacrificing safety or performance? Hint: there is a lot of code underneath, and the answer to running all of it is virtual machines. This talk will cover some aspects of the V8 virtual machine, which implements JavaScript and WebAssembly and offers webpages fast performance without exposing your computer to risk.

Alastair Reid:

Creating Formal Specifications of the Arm Processor Architecture

This is a talk about why creating formal specifications for real-world systems is hard and what we can do about it. Some of the key problems are the semantic gap between the architects' intention and the written specification; the challenge of persuading different groups to adopt a common specification; the number and diversity of existing implementations; and the practical impossibility of formally verifying all implementations against the specification. We discuss lessons learned when creating a formal specification of the Arm Processor Architecture and using that specification to formally validate processors against it. And we discuss how those lessons can be applied in other contexts, including the use of traditional testing, formal validation, social engineering, and building a virtuous cycle to drive up the quality of the specification.

Laura Kovacs:

Automated Reasoning for Systems Engineering

Automated reasoning, and in particular first-order theorem proving, is one of the earliest research areas within artificial intelligence and formal methods. It is undergoing rapid development thanks to its successful use in program analysis and verification, the semantic Web, database systems, symbolic computation, theorem proving in mathematics, and other related areas. Breakthrough results have been obtained in all areas of theorem proving, including improvements in theory, implementation, and the development of powerful theorem proving tools. In this talk, we give a brief overview of the main ingredients of automated theorem proving and focus on recent challenges and developments in the area. Further, we will discuss recent applications of theorem proving in rigorous systems engineering.
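To give a flavor of those "main ingredients", here is a minimal, hypothetical sketch of the resolution rule at the heart of saturation-based provers, reduced to propositional logic (real first-order provers add unification, ordering restrictions, and redundancy elimination on top of this loop):

```python
def resolve(c1, c2):
    """All resolvents of two clauses. A clause is a frozenset of literals;
    a literal is a string, with negation written as a '~' prefix."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return out

def refutes(clauses):
    """Saturation loop: keep deriving resolvents until the empty clause
    appears (input is unsatisfiable) or nothing new can be derived
    (input is satisfiable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # saturated without contradiction
            return False
        clauses |= new

# p, p -> q, and not q together are contradictory:
print(refutes([frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})]))
```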

Sebastian Krinninger:

Towards Optimal Dynamic Graph Compression 

The graph compression paradigm encompasses techniques for turning an arbitrary input graph into a significantly smaller (i.e., sparser) output graph that approximately preserves certain key properties of the input. Graph compression is a fundamental concept in modern algorithm design with applications ranging from distance oracles to solving Laplacian systems of linear equations. In this talk, I will present my recent contributions to dynamic graph compression. The goal of such dynamic algorithms is to maintain the sparse output graph as the input graph undergoes insertions and deletions of edges; the challenge here is to design algorithms with provable guarantees on the running time spent after each update performed to the input graph.
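As one classical example of the static version of this paradigm (not the dynamic algorithms from the talk, and a deliberately simplified sketch), here is the greedy (2k-1)-spanner construction in the style of Althöfer et al.: edges are processed by increasing weight, and an edge is kept only if the sparse graph built so far cannot already connect its endpoints within the allowed stretch.

```python
import heapq

def greedy_spanner(n, edges, k):
    """Greedy (2k-1)-spanner. `edges` is a list of (weight, u, v) tuples
    on vertices 0..n-1; returns the kept edges in the same format."""
    adj = [[] for _ in range(n)]

    def dist(u, v, limit):
        # Dijkstra from u, pruned once distances exceed `limit`.
        d = {u: 0.0}
        pq = [(0.0, u)]
        while pq:
            du, x = heapq.heappop(pq)
            if x == v:
                return du
            if du > d.get(x, float("inf")) or du > limit:
                continue
            for y, w in adj[x]:
                nd = du + w
                if nd <= limit and nd < d.get(y, float("inf")):
                    d[y] = nd
                    heapq.heappush(pq, (nd, y))
        return d.get(v, float("inf"))

    spanner = []
    for w, u, v in sorted(edges):
        # Keep the edge only if the current spanner stretches it too much.
        if dist(u, v, (2 * k - 1) * w) > (2 * k - 1) * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            spanner.append((w, u, v))
    return spanner
```

The dynamic setting discussed in the talk asks for the harder guarantee that such a sparse subgraph be maintained with small update time as edges are inserted and deleted, rather than recomputed from scratch.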

Gernot Plank:

Computational Modeling of Cardiac Function - The Personalized Heartbeat 

Advances in numerical techniques and ever-increasing computational power have rendered the execution of forward models of total heart function feasible. Such models, built from clinical images and parameterized to reflect a given patient's physiology, are a highly promising approach to comprehensively and quantitatively characterizing cardiovascular function in a given patient. They are anticipated to play a pivotal role in future precision medicine as a method to stratify diseases, optimize therapeutic procedures, predict outcomes, and thus better inform clinical decision making. The key challenges are twofold: expensive computational models must be made efficient enough to be compatible with clinical time frames, and generic models must be specialized based on clinical data, which requires complex parameterization and data assimilation procedures.

Christoph Dellago:

Machine learning in the atomistic simulations of materials

Atomistic computer simulations of processes occurring in condensed matter systems are challenging for several distinct but related reasons. For large systems, the accurate calculation of energies and forces needed in molecular dynamics simulations may be computationally demanding, particularly if electronic structure calculations are used for this purpose. Other difficulties arising in the dynamical simulation of condensed matter processes include detecting local structures characteristic of stable or metastable phases and identifying the important degrees of freedom that capture the essential physics of the process under study. In this talk, I will discuss how these problems can be addressed using machine learning approaches. The freezing transition will serve as an illustrative example.