
Ilaria and Jeff at workshop in York

November 13, 2023 Philosophy


On reasoning with norms and precedents.

On November 13 and 14, Ilaria Canavotto and Jeff Horty are giving talks at a workshop on the theme of "Defeasible Reasoning: Work at the Intersection of Ethics and AI," sponsored by the Department of Philosophy at the University of York in England and supported by its Institute for Safe Autonomy. Ilaria's talk is on "Explanation through legal precedent-based reasoning," and Jeff's, which reports joint work with Ilaria, is on "Knowledge representation for computational normative reasoning." (Abstracts for both talks are below, and you can see some related work live at UMD when Ilaria speaks at the Values Centered AI seminar on November 30.) Also at the York workshop are two UMD Philosophy alums: Bijan Parsia *09, now Professor of Computer Science at Manchester, and Aleks Knoks *20, now at the University of Luxembourg, presenting "a new formal model of the way reasons and their interaction determine deontic status of actions."


Explanation through legal precedent-based reasoning / Ilaria Canavotto

Computational models of legal precedent-based reasoning developed in the field of Artificial Intelligence and Law have recently been applied to the development of explainable Artificial Intelligence methods. The key idea behind these approaches is to interpret training data as a set of precedent cases; a model of legal precedent-based reasoning can then be used to generate an argument supporting a new decision (or prediction, or classification) on the basis of its similarity to a precedent case [3,4,5]. In this talk, which builds on [1,2], I will present a model of precedent-based reasoning that provides us with an alternative way of generating arguments supporting a new decision: instead of citing similar precedent cases, the model generates arguments based on how the base-level factors present in the new case support higher-level concepts. After presenting the model, I will discuss some open questions and work in progress.


Knowledge representation for computational normative reasoning / Jeff Horty

I will talk about issues involved in designing a machine capable of acquiring, representing, and reasoning with information needed to guide everyday normative reasoning – the kind of reasoning that robotic assistants would have to engage in just to help us with simple tasks. After reviewing some current top-down, bottom-up, and hybrid approaches, I will define a new hybrid approach that generalises ideas developed in the fields of AI and law and legal theory.