Using artificial intelligence is more complicated for the government than for companies. Bram Klievink, Professor of Public Administration, aims to identify the problems and find solutions.
‘If half of the books that Amazon recommends to you aren’t interesting, it’s not really an issue. But if the government makes mistakes in just one-tenth of a per cent of cases, this can be very serious; for example, if it’s trying to identify fraudsters,’ explains Bram Klievink. And a company only needs to test which algorithm yields the most profit, within the limits of the law. ‘A government has a much more complex societal agenda.’
A striking example: in early 2020 a court declared the Dutch government’s System Risk Indicator (SyRI) unlawful. The instrument had been in use since 2014 to prevent fraudulent benefit claims by means of data linking and pattern recognition, creating risk profiles on the basis of data about fines, compliance and education, among other factors. Although the data remained anonymous and encrypted until someone emerged as a potential fraudster, the court ruled that the infringement of the right to respect for private life was too great.
Policy based on social media data
The SyRI debacle shows that although the government has considerable scope, it is more restricted than – let’s say – Facebook. Simon Vydra, a PhD candidate supervised by Klievink, is researching whether social media data are useful for analysing the effects of policies targeting young parents. There are many technical possibilities: ‘You can do sentiment analysis, for example, and try to assess the level of support for policies.’
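As a toy illustration of the kind of sentiment analysis Vydra mentions, a lexicon-based approach simply counts positive and negative words. This is a minimal sketch, not the method used in the research; the word lists, tweets and support measure below are all invented:

```python
# Illustrative only: a lexicon-based sentiment scorer for hypothetical
# tweets about a parental-leave policy. Word lists and tweets are
# invented for this sketch, not drawn from the actual research.
POSITIVE = {"good", "helpful", "support", "fair"}
NEGATIVE = {"unfair", "confusing", "bad", "bureaucratic"}

def sentiment(tweet: str) -> int:
    """Positive word count minus negative word count."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "The new leave scheme is helpful and fair",
    "Applying for the benefit is confusing and bureaucratic",
]
share = sum(sentiment(t) > 0 for t in tweets) / len(tweets)
print(f"Share of supportive tweets: {share:.0%}")  # -> 50%
```

Even in this toy version, the choice of word lists silently determines what counts as support for a policy.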
‘Minor choices and deliberations can have unexpected consequences’
Klievink: ‘When you use a technique like that, you always make choices. You have to set a lot of parameters. If your analysis system is based on Twitter data, for example, you have to set the point at which your system identifies a Tweeter as a human or a bot. Is it ten tweets a day or a hundred? And how many conversation topics does your topic model distinguish? Will it be five large, but general topics, or do you choose a refined model with twenty more specific topics? Even minor choices and deliberations can have unexpected or unintended consequences for how the outcomes will be used. These choices are never neutral, but we can’t avoid making them.’
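To make those two knobs concrete, here is a minimal sketch, assuming Python with scikit-learn, of how a bot threshold and a topic count enter such a pipeline. The accounts, activity rates and tweet texts are invented for illustration:

```python
# A sketch of the two parameter choices Klievink describes: a human/bot
# activity threshold and the number of topics in a topic model.
# All account names and texts below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical accounts: (name, tweets per day, concatenated tweet text)
accounts = [
    ("parent_a", 4, "parental leave pay childcare costs support"),
    ("parent_b", 12, "childcare benefit application rejected fraud check"),
    ("newsfeed_x", 240, "breaking news headlines link link link"),
]

# Choice 1: where is the human/bot line: ten tweets a day, or a hundred?
BOT_THRESHOLD = 100  # set this to 10 and the corpus below changes
corpus = [text for _, rate, text in accounts if rate < BOT_THRESHOLD]

# Choice 2: five broad topics, or twenty more specific ones?
N_TOPICS = 5

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=N_TOPICS, random_state=0)
lda.fit(counts)

# The "public concerns" the model reports depend on both choices above.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-3:]]
    print(f"topic {i}: {top}")
```

Rerunning this with BOT_THRESHOLD = 10 or N_TOPICS = 20 yields a different picture of public opinion from the same raw data, which is exactly the non-neutrality Klievink points to.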
Decision-makers and technicians
Dilemmas relating to these choices often stay hidden because policy-makers and the technicians who build the systems don’t speak each other’s language. ‘The AI specialist often has the technical and methodological expertise but lacks the content expertise needed to foresee the consequences of the choices that are made. Conversely, the policy-maker often doesn’t know which knobs the technician can turn, exactly what their settings are, and what this means for the outcomes.’ Klievink therefore concludes that collaboration between people from diverse disciplines on public AI projects can never be close enough.