Invited Speakers
Abstracts
Neurosymbolic AI (NeSy) is regarded as the third wave in AI. It aims at combining knowledge representation and reasoning with neural networks. Numerous approaches to NeSy are being developed, and there exists an `alphabet soup' of different systems whose relationships are often unclear. I will discuss the state of the art in NeSy and argue that there are many similarities with statistical relational AI (StarAI).
Taking inspiration from StarAI, and exploiting these similarities, I will argue that Neurosymbolic AI = Logic + Probability + Neural Networks. I will also provide a recipe for developing NeSy approaches: start from a logic, add a probabilistic interpretation, and then turn neural networks into `neural predicates'. Probability is interpreted broadly here; it is necessary to provide a quantitative and differentiable component to the logic. At the semantic and computational levels, one can then combine logical circuits (a kind of proof structure) labeled with probabilities and neural networks in computation graphs.
I will illustrate the recipe with NeSy systems such as DeepProbLog, a deep probabilistic extension of Prolog, and DeepStochLog, a neural network extension of stochastic definite clause grammars (or stochastic logic programs).
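The recipe can be sketched in a few lines of plain Python (this is an illustrative toy, not the actual DeepProbLog API or syntax): a neural predicate such as digit(Img, D) attaches a network's softmax distribution to a logical predicate, and the probability of a derived query such as addition(Img1, Img2, Total) is the sum, over all proofs, of the product of the probabilities used in that proof. The logit values below are hypothetical stand-ins for the outputs of a trained digit classifier.

```python
import math

def softmax(logits):
    # Turn raw network outputs into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical network outputs (logits) for two digit images; in a real
# NeSy system these would come from a trained neural classifier.
logits_img1 = [0.1, 2.5, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # peaks at "1"
logits_img2 = [0.2, 0.1, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # peaks at "2"

# Neural predicates: digit(img1, D) and digit(img2, D) as distributions over 0..9.
p_digit1 = softmax(logits_img1)
p_digit2 = softmax(logits_img2)

def prob_addition(p1, p2, total):
    """P(addition(img1, img2, total)), summed over all proofs.

    Each proof is a pair of labels (d1, d2) with d1 + d2 = total; its
    probability is the product of the neural predicates' probabilities.
    This product-of-facts, sum-over-proofs structure is what the logical
    circuits in the talk compute, and it is differentiable in the logits.
    """
    return sum(p1[d1] * p2[d2]
               for d1 in range(10) for d2 in range(10)
               if d1 + d2 == total)

print(prob_addition(p_digit1, p_digit2, 3))  # the most probable sum here: 1 + 2
```

Because every step is a sum or product over softmax outputs, gradients flow from the probability of the logical query back into the network weights, which is exactly what makes end-to-end training of such systems possible.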
Bio:
Prof. Dr. Luc De Raedt is Director of Leuven.AI, the KU Leuven Institute for AI, full professor of Computer Science at KU Leuven, and guest professor at Örebro University (Sweden) at the Center for Applied Autonomous Sensor Systems in the Wallenberg AI, Autonomous Systems and Software Program. He is working on the integration of machine learning and machine reasoning techniques, also known under the term neurosymbolic AI. He has chaired the main European and International Machine Learning and Artificial Intelligence conferences (IJCAI, ECAI, ICML and ECMLPKDD) and is a fellow of EurAI, AAAI and ELLIS, and member of Royal Flemish Academy of Belgium. He received ERC Advanced Grants in 2015 and 2023.
ChatGPT and other LLMs are the most recent major outcome of the ongoing AI revolution. The talk begins with a brief discussion of such (text-based) generative AI tools and showcases instances where these models excel, namely when it comes to generating beautifully composed texts. We then discuss shortcomings of LLMs, especially where they produce erroneous information. This is often the case when they are prompted for data that are not already present in Wikipedia or other authoritative Web sources. To understand why so many errors and "hallucinations" occur, we report on our findings about the "psychopathology of everyday prompting" and identify and illustrate several key reasons for potential failures in language models, which include, but are not limited to: (i) information loss due to data compression, (ii) training bias, (iii) the incorporation of incorrect external data, (iv) the misordering of results, and (v) the failure to detect and resolve logical inconsistencies contained in a sequence of LLM-generated prompt-answers. In the second part of the talk, we give a survey of the Chat2Data project, which endeavors to leverage language models for the automated verification and enhancement of relational databases, all while mitigating the pitfalls (i)-(v) mentioned earlier.
Bio:
Georg Gottlob is a Professor of Computer Science at the University of Calabria and a Professor Emeritus at Oxford University. Until recently, he was a Royal Society Research Professor at Oxford, a Fellow of Oxford's St John's College, and an Adjunct Professor at TU Wien. His interests include knowledge representation, database theory, query processing, web data extraction, and (hyper)graph decomposition techniques. Gottlob has received the Wittgenstein Award from the Austrian National Science Fund and the Ada Lovelace Medal in the UK. He is an ACM Fellow, an ECCAI Fellow, a Fellow of the Royal Society, and a member of the Austrian Academy of Sciences, the German National Academy of Sciences, and the Academia Europaea. He chaired the Program Committees of IJCAI 2003 and ACM PODS 2000, is on the Editorial Board of JCSS, and was on the Editorial Boards of JACM and CACM. He was a founder of Lixto, a web data extraction firm acquired in 2013 by McKinsey & Company. In 2015 he co-founded Wrapidity, a spin-out of Oxford University based on fully automated web data extraction technology developed in the context of an ERC Advanced Grant. Wrapidity was acquired by Meltwater, an internationally operating media intelligence company. Gottlob then co-founded the Oxford spin-out DeepReason.AI, which provided knowledge graph and rule-based reasoning software to customers in various industries. DeepReason.AI was also acquired by Meltwater.
We argue that giving an LLM the ability to use formal reasoning tools such as theorem provers and planners is sufficient for achieving a base-level integration of Type I and Type II reasoning. We further speculate that the resulting system is an in vitro instance of a simple intelligent organism and could be studied in order to better understand intelligence in general.
Bio:
Henry Kautz is a Professor in the Department of Computer Science at the University of Virginia, Charlottesville. He formerly served as Director of Intelligent Information Systems at the National Science Foundation and on the faculty of the University of Washington and the University of Rochester. He won the AAAI Computers and Thought Award in 1989 and the ACM-AAAI Allen Newell Award in 2019.