Artificial Intelligence and the Fiscal State: When Tax Enforcement Meets Fundamental Rights

By Taxhells, March 8, 2026

Taxation is one of the foundational mechanisms through which modern societies function. Public infrastructure, healthcare systems, education, social protection and the administration of justice all depend on the collective contribution of citizens through taxation. The legitimacy of the tax system therefore rests on a fundamental principle: everyone must contribute fairly to the financing of public goods.

For this reason, states have always possessed strong powers to investigate economic activity and ensure compliance with tax obligations. Tax administrations have historically been granted extensive authority to audit accounts, request financial information and impose penalties in cases of fraud.

However, the digital transformation of economic activity has significantly expanded the technical capabilities available to tax authorities.

Today, governments possess unprecedented volumes of data about economic activity. Financial transactions, employment records, property ownership, business operations and digital payments generate large datasets that can be analyzed through advanced computational systems. Artificial intelligence and machine learning tools allow tax administrations to process this information at a scale that would have been impossible only a few decades ago.

These technologies promise greater efficiency in the detection of tax fraud and more accurate identification of irregular economic patterns. From a public policy perspective, such tools may strengthen the ability of states to ensure that tax systems operate fairly and effectively.

Yet the use of artificial intelligence in tax enforcement also raises fundamental legal questions.

The expansion of algorithmic systems within public administration has created a new layer of decision-making that is often opaque to the citizens affected by it. When tax authorities rely on predictive models to identify high-risk taxpayers or potential fraud, individuals may become subject to investigation or administrative sanctions based on automated assessments whose internal logic is not always transparent.

In theory, these systems are designed to support human decision-making rather than replace it. In practice, however, algorithmic risk scoring can strongly influence administrative behaviour. Once a taxpayer has been flagged as suspicious by a system, the burden of disproving that suspicion may fall on the individual.
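The dynamic described above can be made concrete with a toy sketch. The feature names, weights, and threshold below are invented for illustration; real tax-administration systems are far more complex and not publicly documented. The point is structural: a weighted combination of indicators crosses a cut-off, and a taxpayer is flagged even though no single indicator proves wrongdoing.

```python
# Hypothetical illustration only: a toy risk-scoring model of the kind
# described in the text. All features, weights, and the threshold are
# invented for this example.

def risk_score(taxpayer: dict) -> float:
    """Combine weighted indicators into a single score between 0 and 1."""
    weights = {
        "late_filings": 0.3,     # proportion of returns filed late
        "cash_intensity": 0.4,   # share of revenue received in cash
        "deduction_ratio": 0.3,  # deductions as a share of income
    }
    score = 0.0
    for feature, weight in weights.items():
        # Cap each indicator at 1.0 so the total stays in [0, 1].
        score += weight * min(taxpayer.get(feature, 0.0), 1.0)
    return score

FLAG_THRESHOLD = 0.5  # arbitrary cut-off chosen for the example

def flag_for_audit(taxpayer: dict) -> bool:
    """Flag a taxpayer when the combined score crosses the threshold."""
    return risk_score(taxpayer) >= FLAG_THRESHOLD

# A taxpayer with high cash intensity and some late filings is flagged,
# even though each indicator is lawful conduct in itself.
example = {"late_filings": 0.6, "cash_intensity": 0.9, "deduction_ratio": 0.2}
print(flag_for_audit(example))  # True for these invented numbers
```

Note what the sketch makes visible: the flag is a statistical inference, not a finding of fact, yet once it is produced the administrative process may treat it as a presumption the taxpayer must rebut.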

This dynamic becomes particularly problematic when algorithmic systems generate errors.

One of the most widely discussed examples occurred in the Netherlands in what is now known as the childcare benefits scandal. Authorities used automated risk detection systems to identify alleged fraud in social benefits programmes. Thousands of families were flagged as suspicious by algorithmic models that relied on flawed assumptions and biased indicators.

Many individuals were falsely accused of fraud and forced to repay large sums of money, leading to severe financial hardship. Investigations later revealed that the systems used by authorities lacked transparency, contained discriminatory elements and operated with insufficient human oversight.

The consequences were profound. The scandal triggered a major political crisis and ultimately led to the resignation of the Dutch government in 2021. It also sparked a broader European debate about the risks associated with algorithmic decision-making within public administration.

The Dutch case illustrates an essential point. Artificial intelligence does not eliminate the risk of administrative error. In certain circumstances it may even amplify that risk if systems are deployed without appropriate safeguards.

For this reason, the expansion of algorithmic tools in tax administration must be accompanied by robust legal protections.

Within the European legal framework, several mechanisms exist to ensure that the use of automated decision-making remains compatible with fundamental rights. The European Convention on Human Rights guarantees the right to a fair process and protection against arbitrary state action. The Charter of Fundamental Rights of the European Union establishes principles related to due process, data protection and administrative fairness.

In addition, Article 22 of the General Data Protection Regulation addresses automated decision-making and profiling. Individuals have the right, under certain circumstances, not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects.

These principles are particularly relevant when artificial intelligence is used in tax enforcement.

Citizens must retain the ability to understand the basis on which administrative decisions affecting them are taken. They must be able to challenge those decisions through independent judicial mechanisms. Administrative authorities must ensure that algorithmic tools operate within frameworks that respect proportionality, transparency and accountability.

The emergence of new regulatory initiatives within the European Union further reflects this concern. The Artificial Intelligence Act, adopted in 2024, introduces a risk-based approach to the governance of AI systems. Applications of artificial intelligence used by public authorities for law enforcement or administrative decision-making may fall into categories requiring strict oversight and transparency obligations.

These regulatory developments illustrate the growing recognition that technological innovation within public administration must remain subject to legal constraints.

At the international level, institutions such as the Council of Europe, the Organisation for Economic Co-operation and Development and various United Nations agencies have also begun addressing the governance of artificial intelligence in public decision-making. Discussions increasingly focus on ensuring that algorithmic systems deployed by governments respect principles of legality, accountability and human oversight.

The objective is not to prevent states from using advanced technological tools. On the contrary, digital technologies can significantly improve administrative efficiency and help combat genuine economic crime.

The challenge lies in ensuring that the power of the state, amplified by algorithmic technologies, remains subject to democratic and legal control.

Taxation has always required a delicate balance between effective enforcement and the protection of individual rights. Artificial intelligence does not alter this fundamental principle. Instead, it intensifies the importance of maintaining robust institutional safeguards.

The legitimacy of tax systems depends not only on the obligation of citizens to contribute to public finances, but also on the obligation of states to exercise their authority within clear legal limits.

As governments continue to integrate artificial intelligence into tax administration, maintaining that balance will become one of the central governance challenges of the digital age.