What is it about?
This paper investigates the growing conflict between human decision-making and the automated, AI-driven systems used by the IRS and other tax authorities. While AI is designed to catch errors, it often produces 'Asymmetric Compliance': a situation in which the system's rigid logic fails to account for the taxpayer's actual intent. I map out how these 'automated loops' create a Systemic Intent Shadow, where legitimate financial activity is incorrectly flagged as non-compliant simply because the AI lacks the forensic context to understand the human 'why' behind the data.
Why is it important?
As tax enforcement moves toward total automation, the risk of 'systemic friction' increases sharply. This research matters because it highlights the hidden costs of removing human oversight from the compliance process. For tax professionals, it provides a vocabulary for defending clients against 'machine errors.' For policymakers, it serves as a warning: a system made too rigid causes the very behavioral collapse it was meant to prevent. It is a foundational study for anyone navigating the intersection of financial technology and forensic law.
Perspectives
I wrote this as a front-line response to the rise of 'black-box' enforcement. In my forensic practice, I increasingly see taxpayers caught in administrative cycles they cannot escape because the automated system is not programmed to listen to reason. I want this work to advocate for the 'Human-in-the-Loop', ensuring that as we modernize our tax systems, we do not lose sight of the behavioral realities that drive our economy.
Mr. Julian Rodriguez
Read the Original
This page is a summary of: Asymmetric Compliance: Behavioral Intent vs. Systemic Friction in the Era of AI-Driven IRS Collections, January 2026, Elsevier, DOI: 10.2139/ssrn.6030314.