Ben Commerford

Professor

Ben Commerford, PhD, is the Arthur Andersen Professor of Accountancy in the Von Allmen School of Accountancy at the University of Kentucky’s Gatton College of Business and Economics. He joined UK in 2015 after earning his PhD in Accounting from the University of Alabama. His research focuses on judgment and decision-making in auditing and financial reporting, including topics such as earnings management, professional skepticism, evidence collection, and audit technologies. His work has been published in The Accounting Review, Journal of Accounting Research, and Contemporary Accounting Research.

Generative AI systems (“GenAI”) can provide auditors with natural-language recommendations that resemble professional advice. Such tools have the potential to support audit judgments. However, a lack of transparency in their processes and reasoning raises practical questions: when, and to what extent, should auditors rely on AI-generated advice? Because GenAI recommendations are not directly explainable, auditors must rely on indirect cues to assess their credibility. In practice, a key indirect cue is stated AI performance. Firms and software providers commonly disclose performance information, framed either in terms of accuracy (“the AI system is 95% accurate”) or in terms of error (“the AI system has a 5% error rate”). Framing AI performance as “accuracy” or “error” may affect auditors’ reliance in unanticipated ways. The key issue for audit practice is whether these performance cues support appropriate calibration, that is, whether they help auditors use sound advice while remaining skeptical of weak output. In this practitioner report, we summarize evidence from an experimental study with practicing auditors examining how stated accuracy and performance framing affect reliance on high-quality and low-quality GenAI advice. Our findings show that performance communication influences reliance decisions, with implications for the design and implementation of GenAI in judgment-intensive audit tasks.
Audit firms are rapidly integrating Generative AI (GenAI) into their workflows. While these tools can enhance efficiency and support complex judgments, the key challenge is not whether AI provides useful input, but whether auditors use it appropriately. The literature shows that auditors’ reliance on AI is shaped more by behavioral responses, system design, and organizational context than by the underlying technology. Three insights emerge.

First, auditors face a calibration problem. They may under-rely on AI due to algorithm aversion, discounting AI-based evidence relative to evidence from human experts even when it is equally reliable. At the same time, they may over-rely on AI when outputs appear authoritative, fluent, or easy to use. Both problems impair audit quality: under-reliance biases judgments toward management, while over-reliance reduces professional skepticism.

Second, reliance depends critically on how AI is designed and embedded in the audit process. Features such as perceived control (e.g., the ability to provide input), adaptability of algorithms, and task–technology fit influence whether auditors trust and use AI outputs. AI is more effective when it aligns with task uncertainty and complexity, and when auditors can meaningfully engage with the system. Poorly designed or poorly communicated tools risk being ignored or misused.

Third, AI affects not only decisions but also how auditors think about decisions. GenAI can improve understanding of complex evidence and help auditors better identify when to raise issues, particularly in remote settings. However, AI can also inflate confidence while reducing self-monitoring, making auditors less aware of when they may be wrong. This creates a risk of overconfidence and inappropriate reliance.

Overall, the literature highlights that successful AI adoption is a behavioral and organizational challenge, not just a technological one. To realize the benefits of AI, audit firms should consider three key levers. First, governance: providing clear guidance on when and how AI should be used and evaluated. Second, design and communication: ensuring that tools align with task demands and enable auditors to meaningfully engage with the system. Third, training and oversight: developing auditors’ ability to critically assess AI outputs and appropriately calibrate their reliance.