Our Next Meetup - Neuro-symbolic methods for robust and explainable common sense reasoning
Speaker:
Filip Ilievski, Ph.D.
Research Lead, USC Information Sciences Institute
Research Assistant Professor of Computer Science, USC Viterbi
(ilievski@isi.edu)

Filip Ilievski is a Research Assistant Professor of Computer Science at the University of Southern California (USC) and a Research Lead at the Information Sciences Institute (ISI) of the USC Viterbi School of Engineering. Filip holds a Ph.D. in Natural Language Processing from the Vrije Universiteit (VU) Amsterdam, where he also worked as a postdoctoral researcher before joining USC. His research focuses on developing robust and explainable neuro-symbolic technology with positive real-world impact, based on neural methods and high-quality knowledge. Filip has made extensive contributions to identifying long-tail entities in text, performing robust and explainable commonsense reasoning, and managing large-scale knowledge resources. Over the past three years, he has mentored dozens of Master's and Ph.D. students and collaborated with researchers at USC, CMU, Bosch Research, RPI, the University of Amsterdam, and the University of Lyon. Filip has over 60 peer-reviewed publications in top-tier venues on commonsense reasoning, information extraction, and knowledge graphs. He has also actively organized workshops (AAAI'21), tutorials (AAAI'21, ISWC'20, ISWC'21, TheWebConf'22, KGC'22, AAAI'23), symposia (USC), and a special journal issue (Semantic Web Journal) on these topics.
Topic Summary:
In this talk, I will present our efforts in building natural-language AI models that use common sense to act robustly and explain their reasoning in open-world situations. State-of-the-art technology is inadequate for this purpose: background knowledge and rules provide explainability but cannot generalize to unseen situations, whereas neural models with natural generalizability are prone to making silly mistakes and cannot explain their decisions. I will describe our methods that combine the best of the symbolic and neural worlds to perform well on open-world tasks like question answering and story comprehension without the need for task-specific training data. I will then present our knowledge organization and enrichment efforts, together with our robust and explainable neuro-symbolic methods that reason over this commonsense knowledge. The models are able to explain their reasoning by drawing on procedural knowledge about participant attributes, by providing exemplar or prototypical cases, and by imagining scenes as graph structures. I will conclude with our ongoing efforts to apply these agents to real-world challenges, such as intelligent traffic monitoring, socially assistive technologies, and hate speech detection on the Web.