Fieke Jansen is a PhD candidate at the Data Justice Lab and a Mozilla Foundation Fellow 2019-2020. Daniel Leufer, PhD, is a Mozilla Foundation Fellow 2019-2020 hosted by Access Now and a member of the Working Group on Philosophy of Technology at KU Leuven, Belgium.
Discussions of the negative impact of Artificial Intelligence on society often feature horror stories plucked either from China's high-tech surveillance state, with its controversial social credit system, or from the US, with its recidivism algorithms and predictive policing.
Europe is typically excluded from these stories, either because EU citizens are perceived to be protected from such AI-fueled nightmares by the GDPR, or because there is simply no horror-inducing AI deployed across the continent.
Contrary to this perception, journalists and NGOs have shown that imperfect and ethically questionable AI systems such as facial recognition, fraud detection and smart (a.k.a. surveillance) cities are also in use across Europe.
For example, UK police are deploying facial recognition to monitor protests and soccer matches; the Dutch government is being sued over SyRI, a risk-scoring algorithm that targets the poor; and the Polish Ministry of Labour and Social Policy introduced a controversial system that profiles unemployed people to determine the type of assistance a person can obtain from local labour offices.
Meanwhile, AI systems like these are on track to proliferate this decade: one of the three ‘pillars’ of the European Commission’s plan on AI is to boost “AI uptake across the economy, both by the private and public sector.”
To that end, the Commission is investing in the development of AI systems through funding programmes such as Horizon 2020, which will have invested nearly €80 billion over seven years (2014 to 2020), with a significant portion going to so-called 'artificial intelligence' projects.
According to last week’s leaked white paper on AI regulation, there are plans to increase this funding and further invest in “targeted cloud-based artificial intelligence services,” to “offer world-leading master programs in artificial intelligence,” and to ensure “access to finance for artificial intelligence innovators.”
Amid this proliferation, some EU residents might be comforted by the Commission’s stated commitment to ‘Trustworthy AI,’ most notably through its Ethics Guidelines for Trustworthy AI and the potential influence they might have on the fabled ‘AI Regulation’ promised to come in the first 100 days of the new Commission mandate.
Indeed, it seems clear that public procurement in general, and these EU funding mechanisms in particular, have huge potential to promote the development of 'trustworthy' AI systems: i.e. systems that respect human rights, adhere to ethical guidelines and promote human agency.
While the EU's commitment to 'trustworthy' AI sounds noble, the history of technological investments made under Horizon 2020 casts doubt on these intentions.
Take for example iBorderCtrl, a Horizon 2020-funded project that aims to create an automated border security system to detect deception based on facial recognition technology and the measurement of micro-expressions. In short, the EU spent €4.5 million on a project that ‘detects’ whether a visitor to the continent is lying or not by asking them 13 questions in front of a webcam.
The historical practice of lie detection lacks substantial scientific support, and the AI technologies used here to analyse micro-expressions are just as questionable.
To make matters worse, the Commission is ignoring the transparency criteria outlined in the Ethics Guidelines by refusing to publish certain documents, including an ethics assessment, "on the grounds that the ethics report and PR strategy are 'commercial information' of the companies involved and of 'commercial value'."
Another example of untrustworthy AI funded by Horizon 2020 is the SEWA project. This project received €3.6 million to develop technology that can read the depths of human sentiment and emotions ‘in the wild’ — for the ultimate purpose of more effectively marketing products to consumers using an ad recommendation engine.
Indeed, the SEWA project was singled out by Shoshana Zuboff in her book, The Age of Surveillance Capitalism, as a pioneer of techniques that harvest personal data for behavioural prediction and manipulation.
Not only is there doubt among the scientific community that human emotions can be reliably inferred from biometric analysis, but one can also question how 'trustworthy' it is to develop technology that exploits people's emotional states for commercial gain.
In the leaked white paper, the Commission is caught in a dilemma over whether to introduce a three-to-five-year ban on facial recognition: on the one hand, such a ban would allow time to develop safeguards against abuse; on the other, the Commission worries that a ban "might hamper the development and uptake of this technology."
Just recently, the US Chief Technology Officer Michael Kratsios urged the EU to "avoid heavy-handed innovation-killing models" in its AI regulation, but we must seriously ask whether certain forms of innovation – in the domains of mass surveillance and pseudo-scientific lie detection – might in fact require a heavy hand.
Some uses of AI may be inherently untrustworthy, and we need to see a commitment from the EU not to blindly promote 'AI uptake' without considering its impact.
Projects such as iBorderCtrl and SEWA raise a larger question: if the criteria for funding AI are not based on a technology's trustworthiness, ethics or innovative capability, then what drives these decisions?
In the orientations for the first Strategic Plan of Horizon Europe, the funding framework that succeeds Horizon 2020, the EU refers to co-design and public consultation. However, the co-design meeting was composed of speakers from universities, companies and governments across Europe, all of whom have a vested interest in Horizon Europe funding.
This process lacks critical engagement from impartial parties such as civil society, public interest groups, or the communities that would be affected by these 'innovative' projects. It suggests that the commitment to 'trustworthy' AI is toothless; what is needed is an honest, critical public debate on the emerging technologies that will shape the future of public life in Europe.
Going forward, the EU must stand behind its commitment to 'trustworthy' AI by actively soliciting independent and critical voices in the strategy development process of Horizon Europe, by disclosing how any proposed regulation on AI will be integrated into Horizon Europe's funding criteria, and by conducting a postmortem of Horizon 2020.
That postmortem should analyse the extent to which projects funded under Horizon 2020 adhered to human rights standards and relevant ethical guidelines, and should examine whether current processes and procedures sufficiently safeguard our fundamental rights.
Source: euractiv.com