What?
As AI-assisted tools become more prevalent in audit engagements, understanding their impact on fraud risk assessments is crucial. While AI and machine learning can help detect red flags, the timing of their introduction into the fraud risk assessment process matters. This topic is important because auditors often lack a critical mindset in fraud risk assessments and engage in motivated reasoning, underestimating or ignoring key fraud indicators. We propose a between-subjects experiment in which auditors either receive no AI-generated reports, or receive such reports, based on machine learning (ML), either before or after assessing fraud risk factors.
Why?
We predict that it will be more difficult for auditors to engage in motivated reasoning and underestimate fraud risk factors at the client site when they have access to ML models than when they are on their own. Yet we also expect that when auditors immediately resort to ML, their fraud risk assessments are less effective than when they first assess fraud risk factors themselves before receiving the ML output. Assessing fraud risks first, before receiving the ML report, may encourage greater scrutiny of AI insights, fostering sufficient professional skepticism in the fraud risk assessment. Via mindset manipulations, we also explore whether ML's influence increases when auditors feel less responsible for fraud detection. Our study offers insights into optimizing AI integration to enhance fraud risk assessments.