Workshop Description

Reasoning is a core ability of human cognition. Its power lies in the capacity to theorize about the environment, to make implicit knowledge explicit, to generalize given knowledge, and to gain new insights. Cognitive science offers many findings based on experimental data about reasoning tasks, among them models for the Wason selection task or the suppression task discussed by Byrne and others. This research is also supported by brain researchers, who aim at localizing reasoning processes within the brain. Early work often used propositional logic as a normative framework, and any deviation from it was considered an error. Central results, such as findings from the Wason selection task or the suppression task, inspired a shift away from propositional logic and the assumption of monotonicity in human reasoning towards other approaches, including, but not limited to, models based on probabilistic reasoning, mental models, or non-monotonic logics. An analysis of cognitive theories of syllogistic reasoning shows that none of the existing theories is close to the existing data, although some formally inspired cognitive complexity measures can predict human reasoning difficulty, for instance in spatial relational reasoning.
Automated deduction, on the other hand, focuses mainly on automated proof search in logical calculi, and it has seen tremendous success over the last decades. Recently, a coupling of cognitive science and automated reasoning has been addressed in several approaches. For example, there is increasing interest in modeling human reasoning within automated reasoning systems, including modeling with answer set programming, deontic logic, or abductive logic programming. There are also various approaches within AI research to common sense reasoning, and in the meantime benchmarks for commonsense reasoning exist, like the Winograd Schema Challenge and the COPA challenge.
Despite a common research interest -- reasoning -- several milestones are still necessary to foster better interdisciplinary research. First, to develop a better understanding of the methods, techniques, and approaches applied in both research fields. Second, to have a synopsis of the relevant state of the art in both research directions. Third, to combine methods and techniques from both fields and to find synergies; for example, techniques and methods from computational logic have never been applied directly to model human reasoning adequately, but have always been adapted and changed. Fourth, to gather more and better experimental data that can serve as a benchmark system. Fifth, to let cognitive theories benefit from computational modeling. Hence, both fields -- human and automated reasoning -- can contribute to these milestones and are in fact a conditio sine qua non for each other. Achievements in one field can inform the other, and deviations between the fields can inspire a new and profound understanding of the nature of reasoning. A core goal of the Bridging-the-Gap workshops is to make results from psychology, cognitive science, and AI accessible to each other. The goal is to develop systems that can adapt themselves to an individual's reasoning process and that follow the principles of explainable AI to ensure trustworthiness and to support the integration of results from other fields. We propose a human syllogistic reasoning challenge: to predict the future inferences of an individual reasoner based on previous observations. Participants can develop cognitive AI models (written in Python) that predict the next inference; these predictions are then evaluated in the CCOBRA framework.
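The prediction task can be illustrated with a framework-independent sketch: a most-frequent-answer baseline that observes an individual's previous (task, response) pairs and predicts the response seen most often for each syllogistic task. Note that the class and method names below (`MFABaseline`, `adapt`, `predict`) are illustrative assumptions and not the CCOBRA API; an actual submission would wrap such logic in the framework's model interface.

```python
from collections import Counter, defaultdict

class MFABaseline:
    """Most-frequent-answer baseline for syllogistic prediction.

    Hypothetical, framework-independent sketch: it only illustrates
    the observe-then-predict loop of the challenge, not CCOBRA itself.
    """

    def __init__(self, fallback="NVC"):
        # One response counter per task; "NVC" (no valid conclusion)
        # serves as the fallback for tasks never observed before.
        self.counts = defaultdict(Counter)
        self.fallback = fallback

    def adapt(self, task, response):
        """Record one observed inference of the individual reasoner."""
        self.counts[task][response] += 1

    def predict(self, task):
        """Predict the most frequent observed response for this task."""
        if self.counts[task]:
            return self.counts[task].most_common(1)[0][0]
        return self.fallback

# Usage: adapt on past observations, then predict the next inference.
model = MFABaseline()
for task, resp in [("AA1", "Aac"), ("AA1", "Aac"), ("AA1", "Iac")]:
    model.adapt(task, resp)

prediction_seen = model.predict("AA1")    # majority response so far
prediction_unseen = model.predict("EO3")  # unseen task -> fallback
```

Such a baseline captures only per-individual response frequencies; cognitive models submitted to the challenge would replace `predict` with theory-driven inference mechanisms while keeping the same observe-then-predict structure.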
This is the fifth workshop in the series of successful Bridging the Gap Between Human and Automated Reasoning workshops located at previous conferences: 2015 at the International Conference on Automated Deduction in Berlin (CADE-25), 2016 at the International Joint Conference on Artificial Intelligence in New York (IJCAI 2016), 2017 at the Annual Meeting of the Cognitive Science Society, and 2018 as part of the workshop program of the Federated Artificial Intelligence Meeting (FAIM), which included the major conferences IJCAI-ECAI, ICML, AAMAS, ICCBR, and SoCS. With the location at IJCAI 2019 we aim at participants from various areas of AI and autonomous agents. The goal of this workshop is to bring together leading researchers from artificial intelligence, automated deduction, computational logic, and the psychology of reasoning who are interested in the computational foundations of human reasoning -- both as speakers and as audience members. Its ultimate goal is to share knowledge, discuss open research questions, and inspire new paths. Like its preceding events, it is intended to give an overview of existing approaches and to make a step towards a cooperation between computational logic and cognitive science. Topics of interest include, but are not limited to, the following:
  • limits and differences between automated and human reasoning
  • psychology of deduction and common sense reasoning
  • logics modeling human reasoning
  • non-monotonic, defeasible, and classical reasoning
  • benchmark problems relevant in both fields
  • approaches to tackle benchmark problems like the Winograd Schema Challenge or the COPA challenge
  • a human syllogistic reasoning challenge

    The workshop is located at IJCAI-19 and is supported by IFIP TC12.

    Invited Speaker


    List of important dates

  • Full Paper submission deadline: 13th May, 2019
  • Notification: 5th June, 2019, 13th June, 2019
  • Model submission for PRECORE challenge: 15th May, 2019
  • Workshop: 12th August, 2019

    Submission and Contribution Format

    This year's Bridging workshop will accept papers and submissions to the PRECORE challenge:
  • Papers, including descriptions of work in progress, are welcome and should be formatted according to the Springer LNCS guidelines. The length should not exceed 15 pages. All papers must be submitted in PDF. Formatting instructions and the LNCS style files can be obtained at: The EasyChair submission site is available at:
  • The PRECORE challenge is based on CCOBRA, a Python framework for the behavioral analysis of reasoning models. The framework does not impose restrictions with respect to formalisms as long as individual predictions for syllogistic problems can be generated. Final model submissions are due on May 15th, 11:59 UTC-12, as a ZIP archive. Please describe your model on a conceptual level in at most two pages using the workshop template. Details on the submission of the ZIP archive can be found at:

    Proceedings

    The proceedings of the workshop will most likely be published as CEUR workshop proceedings.


    Organizers

  • Ulrich Furbach, University of Koblenz
  • Steffen Hölldobler, Technische Universität Dresden
  • Marco Ragni, University of Freiburg
  • Claudia Schon, University of Koblenz
  • Contact: Claudia Schon

    Program Committee

  • Christoph Beierle, Fernuniversität Hagen
  • Emmanuelle-Anna Dietz Saldanha, Technische Universität Dresden
  • Phan Minh Dung, Asian Institute of Technology
  • Ulrich Furbach, University of Koblenz
  • The Anh Han, Teesside University
  • Steffen Hölldobler, Technische Universität Dresden
  • Antonis C. Kakas, University of Cyprus
  • Gabriele Kern-Isberner, TU Dortmund
  • Sangeet Khemlani, Naval Research Lab, USA
  • Robert A. Kowalski, Imperial College London, GB
  • Oliver Obst, Western Sydney University
  • Luís Moniz Pereira, Universidade Nova de Lisboa, Portugal
  • Marco Ragni, University of Freiburg
  • Claudia Schon, University of Koblenz
  • Nicolas Riesterer, University of Freiburg
  • Keith Stenning, University of Edinburgh
  • Frieder Stolzenburg, Harz University of Applied Sciences

    Accepted Papers


    Program Overview