Błażej Kuźniacki on why we need transparency around AI in tax


02 March 2023




Over the course of several years, thousands of parents were falsely accused of fraud by the Dutch tax authorities due to discriminatory algorithms. The consequences for families were devastating. But the fact that the scandal was eventually brought to light might show that the Netherlands is ahead of other countries, says Assistant Professor Błażej Kuźniacki. He calls for more transparency about the use of artificial intelligence (AI) in tax-related tasks.

The childcare benefit scandal led to allowances being taken away, debt, broken marriages and children being removed from their homes. Do we really need AI in tax?

AI cannot be ignored. It is of great importance when it comes to tax. Humans are not capable of going through massive amounts of data as quickly and accurately as algorithms. And since tax authorities have access to big data, it would be a waste not to use AI: you can train and improve algorithms on this great quantity of data. The point is to use AI in the right way, and in particular not to harm taxpayers' rights.

How do you then prevent AI from making discriminatory decisions?

We need to understand why AI makes certain decisions. You can't say: "I impose tax on you because AI suggested it". In the end there must be a human with the authority to make the decision and an understanding of the inner logic of the AI. We've seen in the childcare benefit scandal that things go wrong when the process is too automated and too secretive. The AI was allegedly able to use information that has no legal relevance to the decision, such as sex, religion, ethnicity, and address. That can lead to discriminatory treatment. Tax authorities must be able to explain their decisions, otherwise they can't justify them effectively. Trust cannot be fully, or even mainly, transferred from humans to machines (e.g. algorithms).
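To make this concrete, here is a minimal, hypothetical sketch of one way an auditor might check whether a trained risk model leans on legally irrelevant attributes. All of the data, column names (e.g. nationality_flag), and the model below are invented for illustration; this is not the system the Dutch tax authorities used. The sketch uses scikit-learn's permutation importance, one standard explainability tool.

```python
# Hypothetical audit sketch: does a benefit-fraud risk model rely on a
# protected attribute? Data and feature names are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000

# Simulated application data: one legitimate audit signal and one
# protected attribute that should play no role in the decision.
X = pd.DataFrame({
    "income_declared_gap": rng.normal(0, 1, n),  # legitimate signal
    "nationality_flag": rng.integers(0, 2, n),   # protected attribute
    "n_dependents": rng.integers(0, 4, n),
})
# Simulated labels that (wrongly) correlate with the protected attribute,
# mimicking a biased historical training set.
y = (X["income_declared_gap"] + 2 * X["nationality_flag"]
     + rng.normal(0, 0.5, n)) > 1

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when each column
# is shuffled? A large drop means the model depends on that column.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(X.columns, result.importances_mean):
    print(f"{name:>20}: {imp:.3f}")
# A high importance for nationality_flag would be a red flag: the model is
# making exactly the kind of discriminatory use of data described above.
```

If shuffling a protected column substantially degrades the model's accuracy, the model is relying on it; this is the kind of "inner logic" a human decision-maker would need to be able to see and reject.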

Do we still rely too much on AI in tax?

The problem is that many decisions and strategies are still hidden, including the use of AI. There are more and more requirements for taxpayers to be transparent. By contrast, tax authorities seem to be going in the opposite direction due to the increasing use of non-explainable AI systems. That is frightening. AI itself has become so complex that it is hard for humans to fully understand and explain the decisions made by machine learning (ML) algorithms. And on top of that there is tax secrecy, which prevents transparency, and sometimes trade secrecy too.

Is the lack of transparency what caused the Dutch childcare benefit scandal?

That was part of it. The Dutch legislation itself didn't allow the automated decision-making to be checked, and there wasn't enough room for interaction with humans. The procedures were too automated and too secretive. One of the big mistakes in this case was that, even after it was clear something had gone wrong, the authorities did not try to help immediately. But this scandal doesn't mean the Netherlands is one of the worst; it might be the opposite. It could be much worse in other countries. The fact that this scandal came to light a few years ago shows that society was able to cut through several layers that prevented transparency: it was still discovered that something was wrong. People eventually went to court over it and effectively defended their fundamental right to respect for private life.

What kind of future do you see for AI in tax?

We need more transparency upfront. Tax secrecy can be reduced by parliament; that is a matter of changing the rules. But understanding AI systems will be more difficult. There is no law that requires you to use only explainable AI. Moreover, there are laws preventing you from explaining AI because of tax secrecy. We should impose minimum legal requirements for the use of AI. That would force companies and governments to think about the explainability of the AI systems they develop, deploy, and use, because otherwise they would face legal compliance problems. The higher the risks, the higher the explainability requirements should be. We should avoid being passive until another disaster happens.




University of Amsterdam



