Logic and Learning

Our work on Logic and Learning integrates deductive and inductive reasoning and (partly) forms the foundation of the group's teaching in the Computer Science Master specialisation 'Logic and Learning'.

Reinforcement Learning

We study a novel view of reinforcement learning (RL), focusing on applications in which average rewards may be non-zero. While RL methodology has been researched extensively, this particular setting has received only scarce attention in the literature. Our motivation stems in part from applications in Operations Research, where rewards are typically derived from profits; similar use cases arise in economics more generally. Based on a principled study of the mathematical background of discounted reinforcement learning, we establish a novel adaptation of standard reinforcement learning, dubbed Average Reward Adjusted Discounted Reinforcement Learning. Our approach revisits the Laurent series expansion of the discounted state value and subsequently reformulates the target function guiding the learning process.
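
To fix notation (the symbols below are our illustrative rendering of a standard result, not the group's formal development): under a fixed policy, the discounted state value admits a Laurent series expansion around the discount factor \gamma = 1,

    \[ V_\gamma(s) \;=\; \frac{\rho(s)}{1-\gamma} \;+\; h(s) \;+\; e_\gamma(s), \qquad e_\gamma(s) \to 0 \text{ as } \gamma \to 1, \]

where \rho(s) is the average reward (gain) and h(s) the bias. Average Reward Adjusted Discounted Reinforcement Learning starts from a decomposition of this kind when reformulating the target function.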

Rule Learning

In many application areas of machine learning, such as the automotive, medical, health and insurance industries, security and transparency of the applied methods are not merely desirable but increasingly of utmost importance. State-of-the-art results in typical classification tasks are mostly achieved by black-box methods such as deep neural networks, which lack explainability and interpretability and often yield classifications that are incomprehensible to human beings. Thus, attempts are being made to replace pure black-box models with interpretable methods. We aim to gain insight into the application of rule learning methods and their respective advantages and disadvantages. In particular, we work on a modular methodology that combines state-of-the-art methods in (stochastic) machine learning with traditional methods in rule learning, providing efficient and scalable algorithms for the classification of vast data sets while remaining explainable.
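
As a minimal, self-contained illustration of the kind of interpretable model a rule learner produces (the Haskell types and names below are ours and purely illustrative, not the group's methodology), consider an ordered rule list over numeric features:

    module RuleList where

    import Data.Maybe (fromMaybe, listToMaybe)

    -- A condition tests a single numeric feature against a threshold.
    data Cond = Geq String Double | Lt String Double
      deriving Show

    -- A rule: if all of its conditions hold, it predicts the given class label.
    data Rule = Rule { conds :: [Cond], label :: String }
      deriving Show

    -- An example is a collection of named numeric features.
    type Example = [(String, Double)]

    holds :: Example -> Cond -> Bool
    holds ex (Geq f t) = maybe False (>= t) (lookup f ex)
    holds ex (Lt  f t) = maybe False (<  t) (lookup f ex)

    -- Ordered rule-list semantics: the first matching rule fires;
    -- if no rule matches, a default label is returned.
    classify :: [Rule] -> String -> Example -> String
    classify rules deflt ex =
      fromMaybe deflt (listToMaybe [ label r | r <- rules, all (holds ex) (conds r) ])

Every prediction of such a model is justified by the explicit conditions of the rule that fired, which is exactly the kind of transparency that black-box models lack.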

Machine Translation

In collaboration with the Ladin Cultural Institute "Micurá de Rü" we develop machine translation for low-resource languages, in particular Ladin. Our main interest here is in trustworthy applications of multilingual models. The main challenge in this endeavour is the limited availability of training data, a crucial factor for implementing high-quality translation systems with state-of-the-art techniques. To address this limitation, we have explored the effectiveness of fine-tuning pre-trained multilingual models (such as NLLB or T5).

Program Analysis

Program analysis is the algorithmic discovery of properties of a program by inspection of its source text. In our work we focus on (fully) automated methods and on cost analysis.

Functional Programs and Rewriting

Initially, our research was concerned with the derivational and runtime complexity of term rewrite systems and corresponding work in implicit complexity theory. A milestone of this work is the development of our award-winning Tyrolean Complexity Tool (TcT); TcT is open source and freely available. More recently, we have focused on the amortised cost analysis of functional data structures, with an emphasis on the fully automated inference of precise bounds. Analysing these data structures requires sophisticated potential functions, which typically contain logarithmic expressions; possibly for this reason, and despite recent progress in automated resource analysis, they have long eluded automation. A milestone of our work in this direction is the development of ATLAS, which established the first fully automated amortised complexity analysis of self-adjusting data structures. Following earlier work, our analysis is based on potential function templates with unknown coefficients.
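
To recall the shape of an amortised analysis (standard notation in the sense of Sleator and Tarjan, not a rendering of ATLAS itself): for a sequence of operations with actual costs c_i, transforming data structure states d_{i-1} into d_i, the amortised costs with respect to a potential function \Phi satisfy

    \[ a_i = c_i + \Phi(d_i) - \Phi(d_{i-1}), \qquad \sum_{i=1}^{n} c_i = \sum_{i=1}^{n} a_i + \Phi(d_0) - \Phi(d_n) \le \sum_{i=1}^{n} a_i + \Phi(d_0), \]

provided \Phi is non-negative. In our setting the potential is not fixed in advance but given as a template, say \Phi(t) = \sum_i q_i \cdot f_i(t) with logarithmic base functions f_i, whose unknown coefficients q_i are inferred automatically.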

Probabilistic and Quantum Programs

Probabilistic variants of well-known computational models such as automata, Turing machines or the lambda calculus have been studied since the early days of computer science. One of the main reasons for considering probabilistic models is that they often allow for the design of more efficient algorithms than their deterministic counterparts. Milestone achievements of our group in this area are state-of-the-art methodologies for the (fully) automated analysis of functional and imperative probabilistic programs. In short, we tackle the automated verification (and sometimes improvement) of sophisticated probabilistic programs. Our interest in the verification of probabilistic programs and stochastic processes squares nicely with our ongoing interest in the interpretability and verification of machine learning methods. Expanding on earlier improvements in the modularity and scalability of the analysis of probabilistic programs, we have also established a sound framework for the expected cost analysis of mixed classical-quantum programs.
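
As a toy illustration of expected cost reasoning (the program and constants are ours, not one of the analysed benchmarks): consider a loop that continues with probability p in each iteration and consumes one unit of cost per iteration. Its expected cost C satisfies

    \[ C = p \cdot (1 + C) + (1 - p) \cdot 0, \qquad \text{hence} \qquad C = \frac{p}{1-p}, \]

so for a fair coin (p = 1/2) the expected cost is 1, even though the worst-case cost is unbounded. The challenge addressed by our methodologies is to infer such bounds automatically for far more sophisticated programs.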

Verification

To formalise data structures, their data types, properties, behaviours and relationships must be precisely defined and represented. Formalisation provides a mathematical basis for reasoning about the behaviour, correctness and performance of data structures in software systems. Liquid Haskell (LH for short) is an extension of the Haskell programming language that uses static analysis to verify the correctness of programs. It allows the specification of predicates and invariants over data types and functions. These specifications describe the expected properties of data structures and their operations in a precise, mathematical way. LH then uses SMT solvers to automatically check that the specifications are met throughout the program. We employ formalisation in LH to provide a sound and precise rendering of textbook arguments on the cost analysis of data structures.
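
A minimal sketch of the flavour of such a formalisation (the tree type and the specification below are illustrative examples of ours, not the data structures under study): in Liquid Haskell, a refinement type can record that insertion grows a binary tree by exactly one node, and the annotation is checked automatically via an SMT solver.

    module TreeSize where

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Lift the Haskell function 'size' into the refinement logic as a measure.
    {-@ measure size @-}
    {-@ size :: Tree a -> Nat @-}
    size :: Tree a -> Int
    size Leaf         = 0
    size (Node l _ r) = 1 + size l + size r

    -- The refinement type states that the result has exactly one node more than
    -- the input; the metric '/ [size t]' certifies termination of the recursion.
    {-@ insert :: Ord a => a -> t:Tree a -> {v:Tree a | size v == size t + 1} / [size t] @-}
    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x (Node l y r)
      | x <= y    = Node (insert x l) y r
      | otherwise = Node l y (insert x r)

In the same style, more involved properties, such as the invariants needed for textbook cost arguments, can be stated as refinements and discharged mechanically.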

Additional Topics

Proof Theory

Most abstractly, proof theory asks (and often answers) the question of what more we know from a proof of a theorem than from its mere truth. A traditional, but somewhat forgotten, technique in logic (and proof theory) is Hilbert's epsilon calculus. Strikingly, the related epsilon elimination method still yields the most direct algorithm known to date for computing the Herbrand disjunction of an existential formula. A central advantage is that the upper bound on the Herbrand complexity obtained in this way is independent of the propositional structure of the proof. A milestone achievement of our group is the observation that this independence also holds for first-order logic with equality.
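
For orientation (standard notions, stated here only to fix terminology): the epsilon calculus replaces quantifiers by epsilon terms, governed by critical formulas of the form

    \[ A(t) \rightarrow A(\varepsilon_x\, A(x)), \]

and for a quantifier-free formula A, provability of \exists x\, A(x) in pure predicate logic amounts to the validity of some Herbrand disjunction A(t_1) \vee \dots \vee A(t_n); the Herbrand complexity is the least such n. Epsilon elimination computes the witnessing terms t_1, ..., t_n directly.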

Subrecursive Hierarchies

It is well known that for any arithmetical theory T with proof-theoretic ordinal alpha, the provably recursive functions of T are exactly those functions computable within complexity bounds given by the Hardy functions indexed by ordinals (strictly) less than alpha. The Hardy functions thus constitute a hierarchy of fast-growing subrecursive functions that allows us to link two separate notions of complexity: (i) the logical complexity of proving the well-definedness of a specification and (ii) the computational complexity of any program fulfilling this specification. This deep (and far-reaching) result in proof theory suggests a strong connection between termination techniques and cost analysis, a connection that is, alas, very subtle in concrete applications. Working out these subtleties is a recurrent theme of our group's research.
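
For reference, the Hardy functions H_\alpha are defined by transfinite recursion, relative to a fixed assignment of fundamental sequences \lambda[n] to limit ordinals \lambda:

    \[ H_0(n) = n, \qquad H_{\alpha+1}(n) = H_\alpha(n+1), \qquad H_\lambda(n) = H_{\lambda[n]}(n). \]

Already at small ordinals these functions grow quickly: H_\omega(n) = H_n(n) = 2n, while H_{\omega^\omega} dominates every primitive recursive function.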
