Abstracts of Technical Talks
Martin Avanzini | Complexity Analysis of Functional Programs via Sized-types |
One successful approach to the automatic verification of termination properties of higher-order functional programs is based on sized types and has been shown to be quite robust and amenable to automation. In a sized-type system, a type carries not only information about the kind of each object, but also about its size, hence the name. This information is then exploited by requiring that recursive calls are made on arguments of strictly smaller size. Estimating the size of intermediate results is also a crucial aspect of complexity analysis. However, up to now, the only attempt at using sized types for complexity analysis is due to Vasconcelos, and it is confined to space complexity. The sized-type system, by itself, does not guarantee any complexity-theoretic property of the typed program, except that the size of the output is bounded by a certain function of the size of the input. Complexity analysis of a program P can, however, be seen as a size analysis of an instrumented program which computes not only P, but also its complexity. In this talk, I will introduce a novel sized-type system aimed at automated complexity analysis. In particular, I will outline a type inference procedure which reduces sized-type inference to the problem of solving a set of inequalities. These inequalities can in turn be solved by standard techniques employed, e.g., in the synthesis of ranking functions. Despite the undecidability of sized-type inference in general, the proposed inference procedure is relatively complete in the sense that a given program is typeable if and only if the generated set of inequalities can be solved. |
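A minimal Python sketch of the instrumentation idea mentioned above (illustrative only; the talk's sized-type system itself is not shown): each function returns its result together with a step count, so that a size analysis of the instrumented program yields a complexity bound for the original one.

```python
# Instead of computing only the result of a program, compute the result together
# with a cost counter. A size analysis of the instrumented program then bounds
# the cost of the original one. (Function names are illustrative.)

def append(xs, ys):
    """Instrumented list append: returns (result, number of recursive steps)."""
    if not xs:
        return ys, 0
    rest, steps = append(xs[1:], ys)
    return [xs[0]] + rest, steps + 1

def reverse(xs):
    """Instrumented naive reverse: its cost is the sum of the costs of its appends."""
    if not xs:
        return [], 0
    rev, s1 = reverse(xs[1:])
    res, s2 = append(rev, [xs[0]])
    return res, s1 + s2 + 1

if __name__ == "__main__":
    result, cost = reverse(list(range(10)))
    # The size of append's output is |xs| + |ys|, its cost is |xs|; propagating
    # these sizes yields the quadratic cost bound for reverse.
    print(result, cost)   # cost = 55, quadratic in the input length
```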
Matthias Baaz | LK-epsilon and the immediate reduction of arbitrary cuts to universal cuts |
An epsilon companion of an LK-derivation is an LK-epsilon proof in which both the end sequent and the cuts consist of epsilon-translations of first-order formulas. We show that any LK-derivation can be translated into an epsilon companion with only universal cuts, with an exponential increase in complexity. This demonstrates that Schütte-Tait elimination procedures are incompatible with the First epsilon Theorem. We discuss several additional implications. |
Uwe Egly | Quantifier Handling in QBF Calculi |
Thomas Eiter | Streaming Multi-Context Systems |
Multi-Context Systems (MCS) are a powerful framework to interlink heterogeneous knowledge bases. Recently, dynamic data in such environments has become an issue, and respective extensions of MCS, such as reactive MCS, evolving MCS, and asynchronous MCS, were proposed. However, they either lack explicit support for stream processing, abstract from computation time, or lack notions of equilibria. We present streaming MCS, which equip MCS with stream reasoning capabilities. We define a run-based semantics that accounts for asynchronous distributed execution and enables contexts in cyclic exchange to establish information equilibria in locally stable runs; this helps to avoid infinite loops. Under suitable restrictions, ad-hoc query answering is NP-complete; prediction is clearly undecidable in general, but in PSPACE for relevant settings. Joint work with Minh Dao-Tran and Harald Beck. |
Mirco Giacobbe | Counterexample-guided Refinement of Template Polyhedra |
Template polyhedra generalize intervals and octagons to polyhedra whose facets are orthogonal to a given set of arbitrary directions. They offer a very powerful framework for the reachability analysis of hybrid automata, as an appropriate choice of directions allows an effective trade-off between precision and efficiency. In this talk, I will present a method for the automatic discovery of directions that generalize and eliminate spurious counterexamples, a task which was previously left to the user or to a heuristic. I will show that for the class of convex hybrid automata, i.e., hybrid automata with (possibly non-linear) convex constraints on derivatives, such directions can be found using convex optimization. Finally, I will show our experimental results on several benchmarks, demonstrating an effective time-unbounded reachability analysis for a richer class of hybrid automata than was previously possible, and superior performance for the special case of linear hybrid automata. This is joint work with Sergiy Bogomolov, Goran Frehse, and Thomas Henzinger. |
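As a small illustration of what a template polyhedron is (a sketch under our own choice of names, not the refinement algorithm of the talk): for fixed template directions D, the tightest template over-approximation {x : D x <= c} of a convex set is obtained by solving one linear program per direction.

```python
# The tightest template bounds c for a given convex set {x : A x <= b} are the
# support values max d.x over the set, one LP per template direction d.
import numpy as np
from scipy.optimize import linprog

def template_bounds(directions, A, b):
    """Support value of {x : A x <= b} in each template direction."""
    bounds = []
    for d in directions:
        # linprog minimizes, so maximize d.x by minimizing -d.x
        res = linprog(-d, A_ub=A, b_ub=b, bounds=[(None, None)] * len(d))
        bounds.append(-res.fun)
    return np.array(bounds)

if __name__ == "__main__":
    # The unit square, and an "octagon" template (axis-aligned and diagonal directions).
    A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    b = np.array([1, 0, 1, 0], dtype=float)
    octagon = np.array([[1, 0], [-1, 0], [0, 1], [0, -1],
                        [1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
    print(template_bounds(octagon, A, b))  # e.g. the bound in direction (1,1) is 2
```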
Radu Grosu | Temporal Logic as Filtering |
We show that metric temporal logic (MTL) can be viewed as linear time-invariant filtering, by interpreting addition, multiplication, and their neutral elements, over the idempotent dioid (max,min,0,1). Moreover, by interpreting these operators over the field of reals (+,x,0,1), one can associate various quantitative semantics to a metric-temporal-logic formula, depending on the filter's kernel used: square, rounded-square, Gaussian, low-pass, band-pass, or high-pass. This remarkable connection between filtering and metric temporal logic allows us to freely navigate between the two, and to regard signal-feature detection as logical inference. To the best of our knowledge, this connection has not been established before. We prove that our qualitative, filtering semantics is identical to the classical MTL semantics. We also provide a quantitative semantics for MTL, which measures the normalized, maximum number of times a formula is satisfied within its associated kernel, by a given signal. We show that this semantics is sound, in the sense that, if its measure is 0, then the formula is not satisfied, and it is satisfied otherwise. We have implemented both of our semantics in Matlab, and illustrate their properties on various formulas and signals, by plotting their computed measures. This is joint work with A. Rodionova, E. Bartocci and D. Nickovic. |
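The following toy Python sketch (ours, not the authors' implementation) illustrates the filtering view for a bounded "eventually" operator: the qualitative semantics is a sliding-window maximum, i.e., filtering with a square kernel over the (max, min, 0, 1) dioid, and a normalized sum over the same window gives a simple quantitative measure that is zero exactly where the qualitative semantics is false.

```python
# Bounded "eventually" F_[0,k] phi of a Boolean signal as a sliding-window filter.

def eventually_bool(signal, k):
    """Qualitative semantics: maximum over the window [i, i+k]."""
    n = len(signal)
    return [max(signal[i:min(i + k + 1, n)]) for i in range(n)]

def eventually_quant(signal, k):
    """Quantitative measure: fraction of the window on which phi holds."""
    n = len(signal)
    return [sum(window) / len(window)
            for window in (signal[i:min(i + k + 1, n)] for i in range(n))]

if __name__ == "__main__":
    phi = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
    print(eventually_bool(phi, 3))   # 1 wherever phi holds within the next 3 steps
    print(eventually_quant(phi, 3))  # 0 exactly where the Boolean semantics is 0
```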
Stefan Hetzl | System Description: General Architecture for Proof Theory (GAPT) |
GAPT is a proof theory framework containing data structures, algorithms, parsers and other components common in proof theory and automated deduction. In contrast to automated and interactive theorem provers whose focus is the construction of proofs, GAPT concentrates on the transformation and further processing of proofs. In this talk, I will describe the system and give a brief demonstration. |
Cezary Kaliszyk | Proof Assistant Advice for Multiple Logical Foundations |
With proof assistants becoming more commonly used to guarantee the correctness of programs and proofs, automatically providing proof advice becomes more important. The most powerful general-purpose advice and automation for formal proofs is currently provided by systems that combine machine learning with automated reasoning. In this talk we will discuss several limitations of the current generation of such tools and propose methods that combine the knowledge and the reasoning techniques present in current systems into a learning and reasoning system that works over a large part of today's body of formalized knowledge, is able to work across logical foundations, and can even provide proof advice for semi-formal proofs. |
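As a toy illustration of the learning component of such advice systems (our own simplification, not any particular tool): candidate premises for a new goal can be ranked by the similarity of their symbol features.

```python
# Rank previously proved facts as candidate premises for a new goal by the
# similarity of their symbol features, here with plain Jaccard similarity.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_premises(goal_features, library):
    """library: mapping from fact name to its set of symbol features."""
    scores = {name: jaccard(goal_features, feats) for name, feats in library.items()}
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    library = {
        "add_comm": {"plus", "nat", "eq"},
        "mul_comm": {"times", "nat", "eq"},
        "rev_rev":  {"rev", "list", "eq"},
    }
    goal = {"plus", "nat", "le"}
    print(rank_premises(goal, library))  # add_comm is ranked first for this goal
```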
Christoph Kirsch | Teaching Computer Science Through Self-Referentiality |
Imagine a world in which virtually everyone at least intuitively understands the fundamental principles of information and computation, just like, say, Boolean and elementary algebra. Computing in such a world would be as natural to people as using a calculator app or making plans for the weekend. Computer science, however, is still a young field compared to others and lacks maturity, despite the enormous demand created by information technology. To address this problem, we would like to encourage everyone in the computer science community to go back to their favorite topic and identify the absolute basics that they feel are essential for understanding the topic. We present here our experience in trying to do just that with programming languages and runtime systems as our favorite topic. We argue that understanding the construction of their semantics, and in particular the self-referentiality involved in it, is essential for understanding basic computer science. We have developed a tiny C compiler as well as a MIPS emulator and hypervisor called selfie that can compile, execute, and host itself. Selfie serves as a fully self-contained tool for teaching how the semantics of realistic programming formalisms and concurrent systems is created on a digital computer. http://selfie.cs.uni-salzburg.at Selfie is joint work with Martin Aigner, Christian Barthel, Michael Lippautz, and Simone Oblasser. |
Igor Konnov | Verifying Safety and Liveness of Threshold-guarded Fault-Tolerant Distributed Algorithms |
Distributed algorithms have many mission-critical applications ranging from embedded systems and replicated databases to cloud computing. Due to asynchronous communication, process faults, or network failures, these algorithms are difficult to design and verify. Many algorithms achieve fault tolerance by using threshold guards that, for instance, ensure that a process waits until it has received an acknowledgment from a majority of its peers. Consequently, domain-specific languages for fault-tolerant distributed systems offer language support for threshold guards. Recently, we have introduced an automated method for model checking of safety and liveness of threshold-guarded distributed algorithms in systems where the number of processes and the fraction of faulty processes are parameters. Our method is based on a short counterexample property: if a distributed algorithm violates a temporal specification (in a fragment of LTL), then there is a counterexample whose length is bounded and independent of the parameters. We extended the Byzantine Model Checker with our technique, and verified liveness and safety of 10 prominent fault-tolerant distributed algorithms, most of which were out of reach for existing techniques. |
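A minimal sketch of a threshold guard (illustrative Python, not one of the verified algorithms): a process proceeds only after hearing from a strict majority of the n processes, so that any two such majorities intersect.

```python
# A process counts acknowledgment messages and proceeds only once it has heard
# from more than n/2 of the n processes.

class Process:
    def __init__(self, n):
        self.n = n
        self.acks = set()        # senders we have heard from
        self.decided = False

    def receive_ack(self, sender):
        self.acks.add(sender)
        # threshold guard: wait until a strict majority has acknowledged
        if len(self.acks) > self.n // 2:
            self.decided = True

if __name__ == "__main__":
    p = Process(n=5)
    for sender in [1, 2]:
        p.receive_ack(sender)
    print(p.decided)             # False: only 2 of 5 acknowledgments so far
    p.receive_ack(3)
    print(p.decided)             # True: 3 > 5 // 2
```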
Temur Kutsia | A Unification-Based Approach to the Construction of Orthogonal Designs in Powers of Two |
Orthogonal designs are an important class of combinatorial designs. We model orthogonal design construction problems as problems of solving polynomial equations of a specific form, and investigate designs of orders 16, 32, 64, and 128. The obtained polynomial system, due to its size, cannot be handled by traditional search algorithms. For instance, to construct orthogonal designs of order 128, one needs to solve a system consisting of 5088 polynomial equations of degree 2 in 126 variables. By establishing and proving connections between central concepts of the theory of orthogonal designs and equational unification, we are able to use unification techniques to tackle these systems of equations completely; their solutions give orthogonal designs. Our approach not only reports the orthogonal designs, but also constructs the corresponding design matrices. In this way, we always give a constructive solution to the problem, which is not always the case with other approaches used in design theory. In addition, we establish connections between base orthogonal designs and minimal complete sets of unifiers, which helps to use unification methods in design theory (for orthogonal design equivalence, among other topics). This is joint work with Ilias Kotsireas and Dimitris Simos. |
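For a sense of the defining equations (a toy check far below the orders 16 to 128 considered in the talk), the classical order-2 design below in variables a, b satisfies the defining equation X X^T = (a^2 + b^2) I:

```python
# An orthogonal design of order n in variables a, b, ... is a matrix X with
# entries from {0, +-a, +-b, ...} such that X * X^T is a scalar multiple of the
# identity, the scalar being a sum of weighted squares of the variables.
from sympy import symbols, Matrix, eye

a, b = symbols("a b")
X = Matrix([[a, b],
            [-b, a]])

product = X * X.T
print(product)                            # Matrix([[a**2 + b**2, 0], [0, a**2 + b**2]])
print(product == (a**2 + b**2) * eye(2))  # True: the defining equation holds
```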
Alexander Leitsch | CERES in intuitionistic logic |
The usual methods of cut-elimination (in LJ and LK) are based on rewriting of proofs, where the rewrite rules are derived from Gentzen's proof of the cut-elimination theorem. CERES (cut-elimination by resolution), originally defined for classical first-order logic and the calculus LK, is of a different type: the given proof p of S is analyzed as a whole, and a set of clauses C is extracted encoding the cut structure of p; a resolution refutation of C is combined with so-called proof projections of p to a proof p' of S with only atomic cuts (the CERES normal form). In a final step the atomic cuts in p' can be eliminated and a cut-free proof p'' of S can be obtained. It was shown that CERES is much more efficient than methods based on proof rewriting. In transferring the method to intuitionistic logic and LJ-proofs p, major problems arise: the proof projections of p are (in general) classical (they are LK- but not LJ-proofs) and so is the CERES normal form. Moreover, there are resolution refutations of C from which no intuitionistic cut-free proof can be constructed at all. We solve this problem by introducing a resolution principle for cut-free proofs which allows the proof projections to be resolved directly. The completeness of this proof resolution principle (which means that an intuitionistic proof of the end-sequent is eventually obtained) is shown via reductive cut-elimination and a subsumption principle for cut-free proofs. The new method CERESIL behaves similarly to CERES with respect to complexity; CERESIL can give a nonelementary speed-up over rewrite-based methods for LJ. |
Anela Lolic | Propositional interpolation induces first-order interpolation for finitely-valued logics |
We develop a general methodology to connect propositional and first-order interpolation. In fact, the existence of suitable Skolemizations and Herbrand expansions together with a propositional interpolant suffices to show first-order interpolation. This methodology is realized for lattice-based finitely-valued logics, with the top element representing true, and can be extended to (fragments of) infinitely-valued logics. |
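A tiny brute-force illustration of the propositional ingredient, here for the classical two-valued case (our own example, not the finitely-valued setting of the talk): an interpolant of A and B is implied by A, implies B, and uses only shared variables.

```python
# Given A implying B, a (Craig) interpolant I satisfies A -> I and I -> B and
# mentions only variables occurring in both A and B.
from itertools import product

def implies(f, g, variables):
    """Check validity of f -> g by enumerating all truth assignments."""
    return all(not f(**dict(zip(variables, vals))) or g(**dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))

if __name__ == "__main__":
    variables = ["p", "q", "r"]
    A = lambda p, q, r: p and q     # uses p, q
    B = lambda p, q, r: p or r      # uses p, r
    I = lambda p, q, r: p           # uses only the shared variable p
    print(implies(A, I, variables), implies(I, B, variables))  # True True
```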
Michael Morak | The Power of Non-Ground Rules in Answer Set Programming |
This talk is about a novel paradigm for encoding hard problems in answer set programming (ASP) by making use of large rules that depend on the actual instance of the problem. This new encoding paradigm can be used to solve problems up to the third level of the polynomial hierarchy. State-of-the-art ASP solvers are geared towards short rules; therefore, in order to deal with large rules, the technique of rule decomposition is key to the practical realization of this approach. |
Kenji Miyamoto | Implementing the Embedding Lemma |
Hilbert's epsilon calculus is a first-order logic enriched with the epsilon operator ε. The epsilon operator allows one to form, for a formula A(x), a term εxA(x) whose characteristic property is given by the critical axiom A(t)→A(εxA(x)), where t is an arbitrary term. Due to the conservativity results for Hilbert's epsilon calculus, namely the first and second epsilon theorems, occurrences of critical axioms in proofs can be eliminated, although the elimination process increases the complexity of proofs. I am going to report on my recent work in progress concerning the embedding lemma, which is a first step towards the complexity analysis of the conservativity results given by Moser and Zach. |
Georg Moser | Multivariate Amortised Resource Analysis for Term Rewrite Systems |
In this talk I will present amortised resource analysis in the context of term rewrite systems and introduce a novel amortised analysis based on the potential method. The method is represented in an inference system akin to a type system and gives rise to polynomial bounds on the innermost runtime complexity of the analysed rewrite system. The crucial feature of the inference system is the admittance of multivariate bounds in the context of arbitrary data structures in a completely uniform way. This extends earlier univariate resource analysis of typed term rewrite systems and continues our program of applying automated amortised resource analysis to rewriting. Finally, I'll comment on very recent work on logarithmic amortised complexity. This is joint work with Martin Hofmann. |
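As a reminder of the potential method in its classical, univariate form (a textbook example, not the multivariate system of the talk): a FIFO queue built from two stacks has amortised cost at most 2 per operation with the potential function given by the length of the back stack.

```python
# With potential = len(back), every operation satisfies
#   actual_cost + potential_after - potential_before <= 2,
# even though a single dequeue may be expensive.

class TwoStackQueue:
    def __init__(self):
        self.back, self.front = [], []

    def enqueue(self, x):
        self.back.append(x)          # actual cost 1, potential rises by 1
        return 1

    def dequeue(self):
        cost = 1
        if not self.front:           # the reversal is paid for by the potential
            while self.back:
                self.front.append(self.back.pop())
                cost += 1
        self.front.pop()
        return cost

if __name__ == "__main__":
    q, total = TwoStackQueue(), 0
    for i in range(100):
        total += q.enqueue(i)
    for _ in range(100):
        total += q.dequeue()
    print(total)   # 300 actual steps for 200 operations: amortised cost <= 2 each
```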
Petr Novotny | Stochastic Invariants for Probabilistic Termination |
Termination is one of the basic liveness properties, and we study the termination problem for probabilistic programs with real-valued variables. Previous works focused on the qualitative problem that asks whether an input program terminates with probability 1 (almost-sure termination). A powerful approach for this qualitative problem is the notion of ranking supermartingales with respect to a given set of invariants. The quantitative problem (probabilistic termination) asks for bounds on the termination probability, and this problem has not been addressed yet. A fundamental and conceptual drawback of the existing approaches to address probabilistic termination is that even though the supermartingales consider the probabilistic behavior of the programs, the invariants are obtained completely ignoring the probabilistic aspect (i.e., the invariants are obtained considering all behaviors with no information about the probability). In this work we address the probabilistic termination problem for linear-arithmetic probabilistic programs with nondeterminism. We formally define, for the probabilistic termination problem, the notion of stochastic invariants, which are constraints along with a probability bound that the constraints hold. We introduce a new concept of repulsing supermartingales. First, we show that repulsing supermartingales can be used to obtain bounds on the probability of the stochastic invariants. Second, we show the effectiveness of repulsing supermartingales in the following three ways: (1) with a combination of ranking and repulsing supermartingales we can compute lower bounds on the probability of termination; (2) repulsing supermartingales provide witnesses for the refutation of almost-sure termination; and (3) with a combination of ranking and repulsing supermartingales we can establish persistence properties of probabilistic programs. Joint work with K. Chatterjee and D. Zikelic. |
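A minimal illustration of the ranking-supermartingale idea the work builds on (our own toy example; the repulsing supermartingales of the talk are not shown): for a biased random walk, the function f(x) = x decreases in expectation by a fixed amount on every step, which witnesses almost-sure termination.

```python
# Program: "while x > 0: x -= 1 with prob. 3/4, x += 1 with prob. 1/4".
# f(x) = x is a ranking supermartingale: its expected value drops by 1/2 per step.
import random

def expected_next(x, p=0.75):
    return p * (x - 1) + (1 - p) * (x + 1)     # = x - 1/2 for p = 3/4

def run(x, p=0.75):
    steps = 0
    while x > 0:
        x += -1 if random.random() < p else 1
        steps += 1
    return steps

if __name__ == "__main__":
    print(expected_next(10))                           # 9.5 <= 10 - 0.5
    print(sum(run(10) for _ in range(1000)) / 1000)    # around 20 = 10 / 0.5
```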
Reinhard Pichler | CSPs and CQs: the Hardness of Identifying Easy to Solve Cases |
Hypertree decompositions, as well as the more powerful generalized hypertree decompositions (GHDs), and the yet more general fractional hypertree decompositions (FHDs) are hypergraph decomposition methods successfully used for answering conjunctive queries (CQs) and for solving constraint satisfaction problems (CSPs). Each hypergraph H has a width relative to each of these decomposition methods: its hypertree width hw(H), its generalized hypertree width ghw(H), and its fractional hypertree width fhw(H), respectively. For each constant k, classes of CQs or CSPs whose associated hypergraphs H have hw(H), ghw(H) or fhw(H) at most k can be answered/solved in polynomial time. Whether hw(H) is at most k for a fixed constant k can be checked in polynomial time. The complexity of checking whether fhw(H) is at most k for a fixed constant k was unknown and was stated by Grohe and Marx (SODA 2006) as an important open problem. We settle this problem by proving that it is NP-complete. The same construction settles another open problem by showing that deciding whether ghw(H) is at most 2 is also NP-complete. Hardness was previously known for k greater than or equal to 3, whilst the case k = 2 had remained open since 2001. Joint work with Wolfgang Fischl and Georg Gottlob. |
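A small illustration of the fractional relaxation behind fhw (ours, unrelated to the hardness construction): the fractional edge cover number of a hypergraph is the optimum of a linear program, and for the triangle it equals 1.5, strictly below any integral cover.

```python
# Fractional edge cover of the triangle hypergraph with edges {a,b}, {b,c}, {a,c}:
# minimize the total edge weight so that every vertex is covered with weight >= 1.
import numpy as np
from scipy.optimize import linprog

# columns = edges {a,b}, {b,c}, {a,c}; rows = vertices a, b, c
incidence = np.array([[1, 0, 1],
                      [1, 1, 0],
                      [0, 1, 1]], dtype=float)

res = linprog(c=np.ones(3), A_ub=-incidence, b_ub=-np.ones(3), bounds=[(0, None)] * 3)
print(res.fun, res.x)   # 1.5 with weight 0.5 on every edge; an integral cover needs 2
```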
Norbert Preining | Classification of SAT and VAL in Gödel Logics |
We characterize the recursively enumerable first-order Gödel logics with Δ with respect to validity and non-satisfiability. Validity is recursively enumerable for the finitely valued and for four infinitely valued Gödel logics with Δ, and non-satisfiability is recursively enumerable if validity is recursively enumerable. This is in contrast to first-order Gödel logics without Δ, where validity is recursively enumerable for the finitely valued and for two infinitely valued Gödel logics, and non-satisfiability is recursively enumerable if validity is recursively enumerable or 0 is isolated in the truth value set. |
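For readers unfamiliar with the logics involved, the standard truth functions over the real unit interval, including the projection operator Δ, are the following (a definitional sketch; the talk is about recursive enumerability, not about evaluating formulas):

```python
# Gödel truth functions on [0, 1] and the Delta (projection) operator.

def g_and(a, b):  return min(a, b)
def g_or(a, b):   return max(a, b)
def g_impl(a, b): return 1.0 if a <= b else b          # Gödel implication
def g_not(a):     return g_impl(a, 0.0)                # 1 if a == 0, else 0
def delta(a):     return 1.0 if a == 1.0 else 0.0      # projects onto "fully true"

if __name__ == "__main__":
    a = 0.6
    print(g_impl(a, a), g_not(a), delta(a), delta(1.0))  # 1.0 0.0 0.0 1.0
```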
Martina Seidl | QRAT Proofs for QBF Preprocessing |
For the practical evaluation of quantified Boolean formulas (QBFs), an additional preprocessing step is beneficial in many cases. The application of a preprocessor, however, has so far prevented the extraction of proofs for the original formula. Such proofs are required to independently validate the correctness of the preprocessor's rewritings and of the solver's result. To address this issue, we introduced the QRAT proof system. With QRAT it is possible to capture all state-of-the-art preprocessing techniques in a uniform manner and to check the result of a preprocessor efficiently. |
Ana Sokolova | Local Linearizability |
The semantics of concurrent data structures is usually given by a sequential specification and a consistency condition. Linearizability is the most popular consistency condition due to its simplicity and general applicability. Nevertheless, for applications that do not require all guarantees offered by linearizability, recent research has focused on improving performance and scalability of concurrent data structures by relaxing their semantics. In this talk, I will present local linearizability, a relaxed consistency condition that is applicable to container-type concurrent data structures like pools, queues, and stacks. We will briefly discuss theoretical and practical properties of local linearizability and its relationship to other existing consistency conditions. Furthermore, I will show you a generic implementation method for locally linearizable data structures that uses existing linearizable data structures as building blocks. Our implementations show performance and scalability improvements over the original building blocks and outperform the fastest existing container-type implementations. Joint work with Helmut Veith (among others). |
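One way to picture the building-block construction (a simplified sketch of ours; the talk's implementations and the precise local-linearizability guarantees are more involved): each thread inserts into its own linearizable segment, and removals may take from any segment, so every thread still observes its own insertions in FIFO order.

```python
# Per-thread linearizable queues as building blocks for a relaxed queue.
import queue
import threading

class SegmentedQueue:
    def __init__(self):
        self.segments = {}               # thread id -> linearizable FIFO segment
        self.lock = threading.Lock()

    def enqueue(self, x):
        tid = threading.get_ident()
        with self.lock:
            seg = self.segments.setdefault(tid, queue.Queue())
        seg.put(x)

    def dequeue(self):
        for seg in list(self.segments.values()):
            try:
                return seg.get_nowait()  # take from any non-empty segment
            except queue.Empty:
                continue
        raise queue.Empty

if __name__ == "__main__":
    q = SegmentedQueue()
    for i in range(3):
        q.enqueue(i)
    print([q.dequeue() for _ in range(3)])   # this thread's items come out in order
```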
René Thiemann | Certifying Termination Proofs for Programs |
CeTA is a highly reliable and powerful certifier for automatically generated termination proofs for term rewrite systems. We present the recent extension of CeTA for checking termination proofs of programs. More precisely, we consider the cooperation graph approach of Brockschmidt, Cook and Fuhs. We decompose this powerful and monolithic technique for proving termination of abstract programs (labeled transition systems) into separate modular and manageable termination techniques which all have a well-defined semantics. Soundness of all techniques has been proven in Isabelle, and as a result CeTA is now capable of checking termination proofs generated by T2. Joint work with Marc Brockschmidt, Sebastiaan Joosten and Akihisa Yamada. |
Georg Weissenbacher | Accelerating Bug Detection in C Programs |
Sarah Winkler | Ground Confluence and Maximal Ordered Completion |
Ordered completion is a standard approach to equational theorem proving. Recently, maximal completion has proved to be a simple yet efficient approach to completion, based on maxSAT/maxSMT. While it is not difficult to extend this method to ordered completion, it is not clear how to effectively test ground confluence in an ordered completion tool, as ground confluence is an undecidable property even of terminating rewrite systems. In this talk it is shown how the SAT-based setting can be exploited to strengthen the ground confluence criterion of Martin and Nipkow (1990), and how this test can be used effectively in a maximal ordered completion procedure. |