Artificial intelligence research strengthened in Paderborn

October 18, 2022

Federal Ministry of Education and Research funds two AI junior research groups at the University of Paderborn

Artificial intelligence (AI) is considered one of the key technologies of the future. Whether voice assistants, smart homes or models for predicting pandemics: AI supports us in countless areas and has become an indispensable part of our everyday lives. To comprehensively understand the rapid technical developments, advance the technology in a targeted manner and bring it into application, research in the field is more important than ever. Since September, two new AI junior research groups at the University of Paderborn have been investigating how machine learning can be optimised and used more effectively than before. The goal of the young scientists is, on the one hand, to improve the quality of models of dynamic systems by combining machine learning and expert knowledge and, on the other hand, to make the training of deep neural networks more robust, efficient and interactive through so-called multi-objective optimisation methods. The Federal Ministry of Education and Research (BMBF) is funding both groups for three years with a total of around 1.8 million euros.

The two teams plan to work closely together to pool their expertise in the AI field as effectively as possible. Paderborn offers the ideal infrastructure for this: through the Paderborn Center for Parallel Computing (PC2), the scientists can use the latest high-performance computing (HPC) hardware for functional and demonstration tests. The methods, software tools and data of the junior research groups will eventually be made available to a broad user community free of charge as open-access material.

Combining the advantages of expert and data knowledge for model building

From temperature estimation in electric motors to predicting COVID-19 dispersion dynamics or the unemployment rate: a wide range of dynamic systems in engineering, economics and the social sciences as well as physics, biology, chemistry and medicine can be described by mathematical models. However, modelling these complex dynamic systems is often a challenge. “In recent years, there has been a clear trend away from expert models towards black-box models developed using machine learning. Both have many advantages, but also disadvantages,” says Dr.-Ing. Oliver Wallscheid from the Paderborn Institute of Electrical Engineering and Information Technology, head of one of the new AI junior research groups. Wallscheid and his team aim to combine the advantages of expert-driven and data-driven modelling in order to significantly improve model quality in terms of accuracy, robustness and complexity for various applications. In the project “Automated modelling and validation of dynamic systems using machine learning and a priori expert knowledge” (ML-Expert), they are investigating how the fundamentally different modelling paradigms – developed by experts or by AI – can be combined into a hybrid modelling approach.

Wallscheid explains the current situation: “While expert-based approaches reproduce system behaviour in a robust and interpretable way, they often require considerable time and human resources and show systematic deviations due to simplifications. In contrast, black-box models, i.e. data-driven models, can sometimes be generated quickly and without significant prior knowledge. While this allows accurate and scalable models to be generated, they lack interpretability and robustness to outliers, among other things.”
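One common pattern behind such hybrid modelling, shown below purely as an illustration, is to keep the expert model as an interpretable backbone and let a data-driven component learn only the residual between its predictions and the measurements. The following Python sketch uses an invented first-order motor-temperature example; the physics, data and parameters are hypothetical and do not represent the ML-Expert project's actual methods.

```python
# Illustrative sketch only (not the ML-Expert project's method): a hybrid model
# in which a physics-based expert model is corrected by a data-driven residual
# learned from measurements. Toy example: winding temperature of an electric motor.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# --- Expert model: simplified Joule-heating temperature rise (assumed physics) ---
def expert_model(current, ambient_temp, k=0.05):
    """Predict winding temperature from current and ambient temperature."""
    return ambient_temp + k * current**2

# --- Synthetic "measurements" containing effects the expert model ignores ---
current = rng.uniform(0, 100, size=500)
ambient = rng.uniform(15, 35, size=500)
true_temp = ambient + 0.05 * current**2 + 0.3 * current - 2.0  # unmodelled terms
measured = true_temp + rng.normal(0, 1.0, size=500)            # sensor noise

# --- Hybrid step: learn only the residual the expert model leaves behind ---
residual = measured - expert_model(current, ambient)
features = np.column_stack([current, ambient])
residual_model = Ridge(alpha=1.0).fit(features, residual)

def hybrid_model(current, ambient_temp):
    """Expert prediction plus learned data-driven correction."""
    x = np.column_stack([current, ambient_temp])
    return expert_model(current, ambient_temp) + residual_model.predict(x)

test_current, test_ambient = np.array([80.0]), np.array([25.0])
print("expert :", expert_model(test_current, test_ambient))
print("hybrid :", hybrid_model(test_current, test_ambient))
```

In such an arrangement, the expert model keeps predictions physically plausible even where data are sparse, while the learned residual absorbs the systematic deviations that simplifications introduce.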

Efficient, resource-conscious data generation, model building and validation should lead to faster development and application cycles in the future. “Especially for cost-relevant industrial applications, e.g. in the automotive, energy or automation industries, this represents significant added value, also against the backdrop of a worsening shortage of skilled workers. However, our work should not be limited to these industries; it is intended to span domains,” Wallscheid emphasises.

Optimal compromise solutions for machine learning methods

Machine learning methods are also used in numerous other applications. So-called deep neural networks (DNNs) enable intelligent image recognition and speech processing, for example. The increase in available computing capacity makes it possible to construct ever larger, deeper and more complex DNNs. With these advances, the challenges also grow: ideally, the construction and training of DNNs should satisfy numerous, sometimes conflicting, goals simultaneously. “In contrast to many areas of technology and society, in which the consideration of multiple criteria is a matter of course, e.g. in cancer treatment, the enormous potential of a multi-objective approach in machine learning has so far remained largely untapped,” says Jun.-Prof. Dr. Sebastian Peitz from the Institute of Computer Science. The head of the new AI junior research group wants to change this with the project “Multicriteria Machine Learning – Efficiency, Robustness, Interactivity and System Knowledge” (MultiML). The goal is to make the training of DNNs more robust, efficient and interactive by developing multi-objective optimisation methods, and thus to improve it decisively. Furthermore, the additional integration of system knowledge should enable the construction of extremely efficient methods tailored to specific problem classes.

Peitz gives simple examples: “In almost all areas of technology, the economy and society, the dilemma arises that competing criteria are of similar importance: electric vehicles should drive fast and at the same time have a long range, a manufactured product should combine high quality with low production costs, and political decisions should take both economic and ecological aspects into account.” The challenge is always to identify and select optimal compromise solutions, so-called Pareto optima. “In machine learning, too, there are numerous criteria that should be fulfilled simultaneously in the best possible way, such as robustness to incomplete input data, generalisation beyond the training data or the best possible compliance with physical laws,” the computer scientist continues.

The development of multi-objective optimisation methods for machine learning should yield algorithms that determine the set of optimal trade-offs in a highly efficient manner. “Knowing all the trade-off solutions enables users, among other things, to make much more informed and conscious decisions and to adapt learning procedures to the situation at hand by reprioritising the individual goals,” explains Peitz.
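To make the idea concrete, the sketch below traces such a set of trade-offs for two toy objectives using classical weighted-sum scalarisation; the objectives, weights and gradient-descent settings are invented for illustration only and do not represent the algorithms developed in MultiML.

```python
# Illustrative sketch only (not the MultiML project's algorithms): approximating
# the set of Pareto-optimal trade-offs between two competing objectives by
# scanning the weighting of a weighted-sum scalarisation. The toy quadratics
# stand in for, say, "data fit" versus "model complexity" in DNN training.

import numpy as np

def f1(x):
    return np.sum((x - 1.0) ** 2)        # e.g. data-fit objective

def f2(x):
    return np.sum((x + 1.0) ** 2)        # e.g. regularisation / complexity objective

def grad_f1(x):
    return 2.0 * (x - 1.0)

def grad_f2(x):
    return 2.0 * (x + 1.0)

def minimise_weighted_sum(w, dim=2, lr=0.1, steps=200):
    """Gradient descent on w*f1 + (1-w)*f2 for one weighting w in [0, 1]."""
    x = np.zeros(dim)
    for _ in range(steps):
        x -= lr * (w * grad_f1(x) + (1.0 - w) * grad_f2(x))
    return x

# Scan the weighting to approximate the Pareto front of (f1, f2).
for w in np.linspace(0.0, 1.0, 11):
    x_star = minimise_weighted_sum(w)
    print(f"w={w:.1f}  f1={f1(x_star):6.3f}  f2={f2(x_star):6.3f}")
```

Each weighting yields one Pareto-optimal compromise, and scanning the weighting traces out the whole front, from which a user can pick the trade-off that fits the situation and reprioritise later if needed, in the spirit Peitz describes.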
