Vatican meeting explores challenge of artificial intelligence
The Pontifical Council for Culture and the German Embassy to the Holy See host a one-day symposium on Thursday looking at “The Challenge of Artificial Intelligence for Human Society and the Idea of the Human Person”.
By Vatican News staff writer
The symposium on Artificial Intelligence – or AI – organized by the Pontifical Council for Culture, in cooperation with the German Embassy to the Holy See, will open in Rome on Thursday.
The theme for the gathering is, “The Challenge of Artificial Intelligence for Human Society and the Idea of the Human Person”. The aim of the meeting is to promote a better awareness of the profound cultural impact AI is likely to have on human society. The symposium will feature six experts from the fields of neuroscience, philosophy, Catholic theology, human rights law, ethics and electrical engineering.
Experts from the Allen Institute for Brain Science, Goethe University, Boston College, and Google will discuss whether AI can reproduce consciousness, the philosophical challenges it poses, and its implications for religion and Catholic doctrine.
The afternoon panel will focus on the ethical and legal consequences arising from AI, addressed by experts from the EU Agency for Fundamental Rights, the Pontifical University of St. Thomas Aquinas, the Institute of Electrical and Electronics Engineers (IEEE) Technology Centre, and the Dicastery for Promoting Integral Human Development.
Bishop Paul Tighe, the Secretary of the Pontifical Council for Culture, spoke to Vatican Radio’s Thaddeus Jones about the symposium.
Q: Can you tell us about the origins and purpose of this joint symposium?
This is a meeting that was originally planned about two-and-a-half or three years ago, at a time when Germany was about to take over the presidency of the European Union. The ambassador at the time was very interested in the work the Council was doing to promote a deeper and better understanding of digital culture in general, and particularly of the possible implications of the emergence of artificial intelligence. When we were planning this two-and-a-half years ago, before Covid intervened and delayed us all for two years, what we really wanted was not so much to focus on ethical takeaways or conclusions, but to promote a better awareness of the impact AI is likely to have on a whole range of human activities.
So it’s really, in a sense, about trying to understand the cultural impact, and also about alerting people who have roles in forming policy and taking leadership in society, and in the Church as well, to think about and be alert to what’s coming down the road, what’s emerging for us. Most people have a general awareness of what AI is or might be. I’m not sure how developed it is in many of us, but there’s an awareness there. This was to begin to tease out the implications in so many different aspects of our living: social issues, medical issues, political issues, economic issues. The decision in the end was to bring together six fairly significant speakers representing different disciplines with a view to getting a conversation going, a conversation which is there primarily to inform the public. It’s an invited public: mainly people working at other embassies or in State organizations, or people working in Vatican departments, a limited number obviously now with Covid. The idea is to encourage as much audience participation and engagement with the questions as we can.
The emphasis in the morning is on how AI really asks us to think again about what it means to be human. If artificial intelligence, particularly general artificial intelligence, were somewhat autonomous, able to reprogram itself and do the sorts of things that we would have thought in the past were exclusively human activities, it invites us to ask: what is it that really makes us human? What is it that makes us different? So, one person looking at it is a neuroscientist. What are we learning about consciousness? How do our brains work? Could machines imitate that and achieve the same results? Or is there something different about human consciousness that we need to reflect on? These issues will also be examined by a philosopher. How does philosophy help society in general think about how AI, which is coming, will be integrated into our ways of living, into our political processes, into our working environments? The final intervention in the morning is from a theologian, James Keenan, who will be examining how AI asks us to reflect on issues we’ve always been concerned with in Catholic doctrine and Catholic moral teaching. There are issues, then, about the applications. How do we think about the possibility that AI may be involved in situations of warfare? How does it ask us to think about lethal autonomous weapons? How could it transform the world of work? Our Catholic tradition has much to reflect on there. And maybe more importantly, if you take AI in conjunction with nanotechnologies, with our developing cognitive sciences, with gene editing, how do we think about a future where human beings may be able, in some sense, to take our evolution in hand and program it? We can see value in correcting illnesses, disease, and maybe human limitations, but what if we were to move further into the realm of enhancement?
How do we think about those questions? And in particular, how do we think about AI in a world that’s already marked by inequality, given the likelihood that AI could increase those inequalities?
In the afternoon, we move on to thinking through some of the normative and regulatory issues. Again, there we wanted a variety of inputs. So we have Michael O’Flaherty, the director of the Fundamental Rights Agency of the European Union, who is very concerned with questions of AI and human rights. We have Helen Alford from the Angelicum University, who will be looking at ethical issues and at how we do ethics in a globalized society where people come with different backgrounds and different understandings, and yet have to confront an issue that will be the same for all of us. The final intervention will be from Clara Neppel, who works for the Institute of Electrical and Electronics Engineers, which has developed a huge level of reflection, informed by its real knowledge of the technologies, on how engineers should work to ensure that the products they are developing will in fact make a positive contribution to human individuals and to human society. So, in a sense, it’s the professions taking responsibility for what they’re doing and saying we need this integrated dialogue. We can’t just leave ethics to specialists; we need the technologists present and engaged with the same issues. So that’s our aim: to have this general conversation, make it as inclusive as possible, and then from there see what issues we should take up and develop further.
Q: Developments in this area are moving so rapidly now; are you hopeful that there is a willingness and openness on the part of people to dialogue on these issues and look at the implications, both positive and potentially negative?
The positive thing I am noting is among the people working in the area: the technologists, the developers, the scientists, who may sometimes be under pressure in their own environments, particularly commercial ones, to push ahead and develop without really checking through all the issues. I think there’s a great awareness among those professionals, and even among the leaders of those industries, of their need to be more responsible and to think through the likely implications of what they’re doing. Many of the people who have seen what happened in digital culture, people who developed algorithms to facilitate human communication, are now discovering the problems that result from being, in a sense, a little naive in trusting that people would use these for good, when there were many bad actors, and when sometimes commercial considerations were driving people to make decisions that were not necessarily fully beneficial for society.
So, we’ve seen the leaders in the tech sector beginning to take time to reflect on what they’re doing. I think the people working in AI want that. They want to engage as many people as possible. It’s almost like a slogan: they want ethics by design. They want ethics built in from the beginning, not a situation where they develop a technology and then somebody comes along afterwards, a little late perhaps and a little breathless, wondering what we should do. So they’re trying to say: let’s have ethics in by design. And the way we do that is by being as inclusive as we can in our design teams, ensuring that we overcome prejudice and bias, because they’re very often working on data sets that have been developed in systems that are inherently prejudicial against certain people. How do we ensure that everything is done with an intent to be inclusive, bringing different aspects and different disciplines into the room, so that what is produced is truly of benefit to society? And, for the first time ever, we are beginning to ask whether there are certain things we might be better off not developing, because we won’t be able to ensure that they are truly at the service of humanity.
Q: Would you say that the wider public is also well informed now, in the sense that questions regarding AI are discussed in popular culture, films, television, and media, and people are aware of some of the bigger questions we need to ask ourselves?
I can see a real benefit in the fact that popular culture in particular has taken up issues about artificial intelligence and created a variety of films. Some of those, I know, horrify the people working in the area, who say: look, they are creating fictional ideas of AI and frightening people unnecessarily about things we would, thankfully, not even be able to develop. That’s one thing I think we have to say, but it does serve to raise the issue and to prompt a kind of questioning. There are now emerging some fantastically interesting responses and reflections from the world of the arts. Earlier this year, in February, there was a magnificent publication by Kazuo Ishiguro called “Klara and the Sun”, written from the perspective of an artificially intelligent companion: an artificial intelligence that had been developed to help children deal with loneliness. It is an artificial intelligence that learns to observe humans, almost with the naivety of a child or an alien, and reveals a lot about human society. So, I think what he’s saying is that before we worry too much about artificial intelligence, we need to think about the real values we have in human society. What are the things that make life worth living? What are the types of friendship and relationships that give us value and worth in our lives, and how do we promote those values? In an interesting interview, he did say that his worry is not so much about artificial intelligence taking over, but that humans would somehow become robotic, that humans would lose their capacity to be truly human. One of the issues that has emerged, which UNESCO has also raised, is that if children are raised dealing with artificial intelligences which are there to answer their whims, to respond to their every need, to answer their questions, can it condition how they think about relationality?
Might they come to think of the other as simply there to serve them, to do their will, with no autonomy of its own? And might those attitudes become ingrained in human persons and carry over into their dealings with other human beings?
https://www.vaticannews.va/en/vatican-city/news/2021-10/vatican-symposium-challenge-artificial-intelligence-society.html