Seminar: Towards Deep Continuous-Discrete Machine Learning

12 Mar 2019 11:00 AM - 12:00 PM
Presenter/Speaker: Professor Stefan Kramer, Johannes Gutenberg University (JGU) Mainz, Germany
Location: G.1.15

Since the beginnings of machine learning - and indeed already mentioned in Alan Turing's groundbreaking 1950 paper "Computing Machinery and Intelligence" - two opposing approaches have been pursued: on the one hand, approaches that relate learning to knowledge and mostly use "discrete" formalisms of formal logic; on the other hand, approaches which, mostly motivated by biological models, investigate learning in artificial neural networks and predominantly use "continuous" methods from numerical optimization and statistics. The recent successes of deep learning can be attributed to the latter, "continuous" approach, and are currently opening up new opportunities for computers to "perceive" the world and to act, with far-reaching consequences for industry, science, and society. The massive success in recognizing "continuous" patterns is the catalyst for a new enthusiasm for artificial intelligence methods. However, today's artificial neural networks are hardly suitable for learning and understanding "discrete" logical structures, and this is one of the major hurdles to further progress.

Accordingly, one of the biggest open problems is to clarify the connection between these two learning approaches (logical-discrete and neural-continuous). In particular, the role and benefits of prior knowledge need to be reassessed and clarified. The role of formal logic in ensuring sound reasoning must be related to perception through deep networks. Further, the question of how prior knowledge can be used to make the results of deep learning more stable, and to explain and justify them, needs to be discussed. The extraction of symbolic knowledge from networks is a topic that needs to be re-examined against the background of the successes of deep learning. Finally, it is an open question whether and how the principles responsible for the success of deep learning methods can be transferred to symbolic learning. In this talk, I will discuss these topics and give examples of various recent approaches.

Stefan Kramer is professor of data mining at Johannes Gutenberg University (JGU) Mainz, Germany, and honorary professor of the University of Waikato. He is the author of more than 220 publications, including award-winning papers (ICDM, KDD, ILP, ICBK). Currently, he advises the German Federal Ministry of Education and Research (BMBF) on matters of machine learning on the boards "Plattform Lernende Systeme" and "Plattform Industrie 4.0". In 2018, Stefan Kramer and Kristian Kersting initiated the network for deep continuous-discrete machine learning (DeCoDeML) of the Rhine-Main universities (RMU): Goethe University Frankfurt am Main, Johannes Gutenberg University Mainz, and Technische Universität Darmstadt.