Lunchtime Talks (in-person)
LTT: H. Cheon
October 19 @ 12:10 pm - 1:30 pm UTC-4
Hyundeuk Cheon, Seoul National University, Center Visiting Fellow
Explicating the Principle of Explicability
ABSTRACT: In this talk, I attempt to explicate the principle of explicability for artificial intelligence (AI). While there is widespread consensus that AI needs to be explicable (expressed in different terms such as explainability, interpretability, transparency, or accountability), there are unresolved issues concerning the WHY, WHO (and WHOM), and WHAT of the explicability principle. Following Floridi and colleagues, I take the principle of explicability to incorporate both the epistemological sense of intelligibility and the ethical sense of accountability. The following questions will be addressed: what the principle is for, to whom AI should be explicable, and what kinds of explanation the principle demands. I claim that explicability serves mainly the autonomy of algorithm-users as rational decision-makers and the trust of algorithm-patients in algorithms and their results. Thus, AI needs to be explicable to algorithm-users as well as to algorithm-patients. To satisfy intelligibility, we call for a causal explanation of a particular outcome, which can be regarded as giving reasons. To be accountable, the explanation has to be justified. Finally, I will respond to skepticism about the applicability of the principle.
Please Note: Non-Pitt individuals who want to attend our in-person talks must email Katie Labuda (firstname.lastname@example.org) in advance to request Guest Building Access; otherwise, you will not be able to enter the Cathedral of Learning.
If you’d prefer to watch online, please register here: https://pitt.zoom.us/webinar/register/WN_06Nu08_BTViLRmxZpDXhVw