Deputy Current Affairs Editor Stephen Moynihan talks with UCC’s own Professor Barry O’Sullivan on his recent tenure as the Vice-Chair of the European Commission’s High-Level Expert Group on Artificial Intelligence.
“Of course there are risks… you can’t just let any old technology in the world because you can. There are protections that you need to put in place when you’re using it, but I think by-and-large, artificial intelligence has a massive positive impact on the world.”
Professor Barry O’Sullivan is the Director of the Insight Centre for Data Analytics at the School of Computer Science and Information Technology at University College Cork. He also served as Vice-Chair of the European Commission’s High-Level Expert Group on Artificial Intelligence, a group of 52 European experts from diverse backgrounds — ranging from industry to academia, in fields such as computer science and ethics — working around the theme of “trustworthy AI”. The group released three publications in total: Policy and Investment Recommendations for Trustworthy AI; an Assessment List for Trustworthy AI; and Ethics Guidelines for Trustworthy AI, each of which is set to inform the creation of a robust regulatory framework for AI in Europe.
The Assessment List for Trustworthy AI allows organisations to determine the trustworthiness of their AI by answering a comprehensive list of questions. The Policy and Investment Recommendations for Trustworthy AI puts forward 33 recommendations to ensure that AI fosters sustainability, growth, competitiveness, inclusion and human empowerment.
However, it is the Ethics Guidelines for Trustworthy AI which is set to be most impactful on the future of artificial intelligence in the European Union.
This research aims to promote trustworthy AI by identifying four ethical principles to which the implementation of AI must adhere: respect for human autonomy; prevention of harm; fairness; and explicability. The Expert Group used these principles to inform its identification of seven key requirements for AI to be trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability. It is this publication that Prof. O’Sullivan says he is most proud of from his time with the expert group.
“There are lots of ethics codes for AI around the world, I read somewhere that there are in excess of 100 ethical codes. I think the one we produced is probably one of the best, if not the best”, he tells Motley.
“The reason I think that is because it’s very practical, not only did we come up with the principles, but we also came up with key requirements, and this tool, the assessment list, to help people actually apply them. Most of the ethics codes that you come across are very theoretical, or very philosophical, and not very practical. Our set of guidelines, the total package of work that we did, ends up as something that’s much more practical than other works”, he argues.
The practical nature of these guidelines is not just the lofty opinion of the Vice-Chair of the Expert Group itself. It is further underlined by the European Commission’s plans to adopt a directive to ensure that the seven key requirements outlined in the Ethics Guidelines are adhered to. They will become law if passed by the legislative bodies of the European Union.
“That work we did has had quite a profound impact on not only policymaking but also on the governance and regulatory framework around AI”, he said.
Without a robust regulatory framework, the use of AI could become very problematic, especially when one considers how far-reaching it is becoming in society. AI is used in seemingly innocuous settings such as recommending a film on Netflix or unlocking your phone through facial recognition, but it can also be used to automate the reading of medical scans or for decision-making in the public service. Without a strong focus on the ethics of AI, the former uses could impair people’s ability to make informed decisions for themselves or lead to privacy violations, and the latter could have even more severe real-life consequences for individuals should errors be made. According to Prof. O’Sullivan, however, these issues are being thought about very seriously in the AI community in order to minimise negative outcomes.
“There are things you need to be careful of, you need to be careful of using AI in situations where it makes very impactful decisions about the wellbeing or even life-or-death of human beings. We need to be careful of not creating situations where we have people thinking that they’re talking to real people when they’re in fact talking to really sophisticated chatbots. That’s not happening, but it could technically happen. Like with all very powerful technologies we have to be very careful with how we use it.”
“One of the great things at the moment is that there’s so much focus on ethics in the context of AI: what’s right and wrong? When should you use it? When should you not? What are the kinds of things you should be concerned about?”
“The AI community being so focused on ethics is a very positive thing, it’s being taken very seriously.”
One fear commonly cited regarding AI is lack of human oversight. It is undeniably scary or unnerving to think that a machine alone could be making life-or-death decisions about us or our loved ones, or even deciding whether one should be entitled to a social welfare payment or a tax credit. By enshrining human oversight as a fundamental aspect of trustworthy AI, one’s fears can be at least partially allayed.
“The human oversight issue is very important. It would be highly unethical to allow AI systems to make life-or-death decisions over human lives, or even before we get to death. Very impactful decisions about the wellbeing of human beings should ultimately be the responsibility of fellow humans”, he said.
“Regarding the kinds of things about automating government decision making with regards to social welfare or that sort of thing, we don’t want to be told that a person is not getting something they’re entitled to just because some AI said it’s the right thing to do”, he continued.
Separately to the High-Level Expert Group on AI, Prof. O’Sullivan is also an expert adviser to AI Watch, a group which monitors how AI is being used, examines what’s working and what isn’t, and gives advice to member states, the European Commission and the European Parliament regarding policymaking.
“One thing AI Watch does that’s great is that it demystifies AI. There are lots of people out there who think that AI is this sort of magic that is being used in all sorts of places. What AI Watch does is that it really tries to get to the facts of the matter, to understand where AI is being used, where it’s not being used, what kind of impact it is having on the public sector, on private citizens, all of these sorts of issues. That is very nice to be involved in because it allows us to really see what’s real and what’s not real. It also supports evidence-based policymaking in Europe, which is obviously extremely important”, he said.
This demystification is important because, as artificial intelligence comes to play an increasingly prominent role in our lives, it is imperative that the wider public gains an understanding of it — something Prof. O’Sullivan hopes to achieve through a free online Elements of AI course which leads to a certificate of completion from UCC.
It is clear that the powers-that-be must ensure that the use of AI is ethical and responsible in both the private and public spheres. For the uptake of AI to be supported by the wider public, and for humanity to put our trust in it, we will need to have confidence that AI practitioners will respect our privacy and our autonomy, and that AI won’t exacerbate pre-existing inequalities in society or negatively impact vulnerable groups.
This is why a robust regulatory framework is so important. It can ensure that the use of AI in industry, public service and research is compliant with ethical norms and does not infringe on our fundamental rights as human beings. Without such regulation, we could never be so sure.