What?
Audit firms are “choice architects” of audit tools: they choose elements that guide auditors’ actions and can therefore act as “nudges” (Thaler and Sunstein 2021). Prior research shows that defaults in workpaper systems, a type of nudge created by prepopulating current year workpapers with prior year work, impair audit staff’s accuracy on an objective risk assessment task (Bonner, Majors, and Ritter 2018). Building on this research, we propose to examine the effects of providing an AI decision aid on more experienced auditors’ accuracy in a risk assessment task that requires greater professional judgment, namely weighting and combining conflicting cues. Importantly, we will compare the judgments of auditors who view the AI-suggested ratings and evidence (i.e., the AI decision aid content) prepopulated within the current year workpaper, which creates an AI default and possibly an inference that their audit firm implicitly endorses use of the AI-suggested ratings, with those of auditors who access the AI content in a separate file. We will conduct a 2 x 2 + 1 between-participants experiment in which we manipulate (1) whether auditors receive the AI decision aid and (2) whether workpapers are prepopulated or blank.
Why?
In the prepopulation condition, auditors without the AI decision aid will use workpapers prepopulated with prior year work, while auditors with the AI decision aid will use workpapers prepopulated with the AI ratings and evidence. In the additional condition, auditors will use workpapers prepopulated with prior year work and also receive the AI decision aid in a separate file. This condition allows us to examine auditors’ potential inference that it is acceptable to “stick with last year” when uncertain, the consequent sticking behavior in response to prepopulation of prior year work, and whether the AI decision aid serves as an intervention. We will also manipulate, within participants, whether the AI-suggested judgments are accurate or inaccurate, i.e., whether they mirror human judgment. In addition, we will examine auditors’ inference from prepopulation that their audit firm prioritizes task efficiency, as reflected in the time they spend on the task. Our findings will speak to the audit effectiveness and efficiency implications of integrating AI decision aids directly into auditors’ work tools, and to whether AI decision aids can serve as an intervention against the effects of prepopulating prior year work. Overall, our proposed study will provide evidence illuminating how a “smart” default option might shape auditor behavior in the increasingly automated audit environment.
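To keep the experimental cells straight, the sketch below enumerates the five between-participants conditions as we read the design described above. The labels are ours and purely illustrative; they are not the project’s own terminology.

```python
# Illustrative enumeration (ours, not the project's) of the 2 x 2 + 1
# between-participants design described above.
# Each tuple: (AI decision aid?, workpaper state, where AI content appears)
conditions = [
    ("no AI aid", "blank workpaper", "n/a"),
    ("no AI aid", "prepopulated with prior year work", "n/a"),
    ("AI aid", "blank workpaper", "separate file"),
    ("AI aid", "prepopulated with AI ratings and evidence", "in workpaper (AI default)"),
    # The "+1" cell: prior year prepopulation plus the AI aid in a separate file
    ("AI aid", "prepopulated with prior year work", "separate file"),
]

for aid, workpaper, ai_location in conditions:
    print(f"{aid:9} | {workpaper:41} | {ai_location}")
```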