Associate Professor Joanna J. Bryson
Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath. She has broad academic interests in the structure and utility of intelligence, both natural and artificial. She is best known for her work in systems AI and AI ethics, both of which she began during her PhD in the 1990s, but she and her colleagues publish broadly, in biology, anthropology, sociology, philosophy, cognitive science, and politics. Current projects include “The Limits of Transparency for Humanoid Robotics” funded by AXA Research, and “Public Goods and Artificial Intelligence” funded by Princeton’s University Center for Human Values. Other current research includes understanding the causality behind the correlation between wealth inequality and political polarization, generating transparency for AI systems, and machine prejudice deriving from human semantics. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT.
Title: “AI Is Necessarily Irresponsible”
Professor Barry O’Sullivan
Professor Barry O’Sullivan, University College Cork, Ireland, FEurAI, FIAE, FICS, MRIA, is an award-winning academic working in the fields of artificial intelligence, constraint programming, operations research, ethics, and public policy for AI and data analytics. Professor O’Sullivan is a Fellow and current President of the European Artificial Intelligence Association (EurAI), one of the world’s largest AI associations with over 4500 members in over 30 countries, and is the Vice Chair of the European Commission High-Level Expert Group on AI. He is a founding director of the Insight Centre in Ireland, which has over 450 researchers working in data science and AI. He recently established the Science Foundation Ireland Centre for Research Training in AI, which funds 30 PhD students in AI annually. He is a Member of the Royal Irish Academy.
Title: “Trustworthy AI and the Road to Satisfaction”
Professor Virginia Dignum
Professor Virginia Dignum is WASP Professor of Social and Ethical Artificial Intelligence at Umeå University, and is associated with the Delft University of Technology in the Netherlands. Her research focuses on the ethical and societal impact of AI. She is a Fellow of the European Artificial Intelligence Association (EurAI), a member of the European Commission High-Level Expert Group on Artificial Intelligence, and a member of the Executive Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Title: “Responsible Artificial Intelligence”
Professor Marcus Liwicki
Professor Marcus Liwicki is a chaired professor at Luleå University of Technology and a senior assistant at the University of Fribourg. His research interests include machine learning, pattern recognition, artificial intelligence, human-computer interaction, digital humanities, knowledge management, ubiquitous intuitive input devices, document analysis, and graph matching. In 2015, at the young age of 32, he received the ICDAR Young Investigator Award, a biennial award acknowledging outstanding achievements in pattern recognition by researchers up to the age of 40.
Title: “Conventional and Deep Machine Learning for Reading Systems”
Abstracts
Tuesday 9.00: Joanna J. Bryson – “AI Is Necessarily Irresponsible”
- fully deanthropomorphising the way we talk and think about AI
- using the window AI provides onto the impact of intelligence to better understand society and even biology
Tuesday 15.30: Barry O’Sullivan – “Trustworthy AI and the Road to Satisfaction”
There are growing concerns in society about the power of artificial intelligence and how it is put into practice. This year the European Commission adopted the “Ethics Guidelines for Trustworthy Artificial Intelligence”. However, from the perspective of the science of AI, how can we go about developing the tools necessary to live up to the demands of society? In this talk I will highlight a number of specific challenges and how we might address them in ways that help us achieve Trustworthy AI, while also raising some interesting scientific and technical challenges for the research community. I will specifically discuss some opportunities for my own discipline, constraint satisfaction.
Wednesday 9.00: Marcus Liwicki – “Conventional and Deep Machine Learning for Reading Systems”
In this presentation I will summarize the recent breakthroughs of Machine Learning and Deep Learning, particularly in the area of Reading Systems, give an outlook on future opportunities, and conclude with efforts at Luleå University of Technology to contribute to a safe and measurably strong impact of AI innovations in everyday life. Reading systems are systems concerned with the recognition and understanding of text in any form, e.g., characters printed on paper, strokes handwritten on parchment, but also ASCII text in electronic form. I will show how deep learning has opened new possibilities for recognizing and understanding these texts, but also that other machine learning and reasoning approaches are crucial to complete the overall AI pipeline. In the last part of my presentation I will speak about LTU’s applied AI research efforts and the LTU.AI Digital Innovation Hub, which aims to make the region a sustainable flagship in Applied AI at the national and European levels.
Wednesday 13.00: Virginia Dignum – “Responsible Artificial Intelligence”
As Artificial Intelligence (AI) systems increasingly make decisions that directly affect users and society, many questions arise across social, economic, political, technological, legal, ethical, and philosophical issues. Can machines make moral decisions? How should moral, societal, and legal values be part of the design process? In this talk, I look at ways to ensure that the behaviour of artificial systems is aligned with human values and ethical principles. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. We will in particular focus on the ART principles for AI: Accountability, Responsibility, Transparency.