2021B01 – Audit technologies and auditor judgment (PhD project X. Li)
Project Number – 2021B01

What?

In this proposal, we posit that when analytic tools are well calibrated (false positive rates are low or hit rates are high), expressing these rates to audit staff will improve their responses to fraud red flags identified by such tools and, in turn, their skeptical actions, relative to not expressing the rates. However, when analytic tools are effective but less well calibrated (false positive rates are higher or hit rates are lower), we expect the framing of the calibration rate to affect auditor skepticism when an analytic tool identifies a fraud red flag. The costs associated with skepticism (e.g., budget overages), particularly when skepticism does not uncover a misstatement, can be a barrier to its application. Framing the calibration of data analytic tools in terms of “false positive rates” could therefore reduce skepticism, as it highlights the costs of skepticism. Conversely, framing calibration in terms of “hit rates” could improve skepticism, as it highlights the benefits of skepticism (e.g., identifying a misstatement).

We propose to examine the effects of conveying Audit Data Analytic (ADA) calibration (explicit conveyance versus no conveyance) and the framing of such conveyance (hit rates versus false positive rates) on auditor skepticism.
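The complementarity of the two framings can be shown with a small arithmetic sketch. This is illustrative only: the numbers and the `describe_calibration` function are hypothetical, not drawn from the proposal, and “false positive rate” is used here in the proposal's loose sense (the share of raised flags that turn out to be benign), not the classical FP/(FP+TN) definition.

```python
def describe_calibration(true_positives: int, false_positives: int) -> dict:
    """Frame the same calibration of an ADA tool's flags in two ways.

    Among the red flags the tool raises, the hit rate is the share that
    point to a real misstatement; the false positive rate (as used here)
    is the share that turn out to be explained variation in the data.
    The two shares describe the same underlying calibration and sum to 1.
    """
    flagged = true_positives + false_positives
    return {
        "hit_rate": true_positives / flagged,
        "false_positive_rate": false_positives / flagged,
    }

# A well-calibrated tool: 90 of 100 flags are real issues.
well = describe_calibration(true_positives=90, false_positives=10)
print(f"{well['hit_rate']:.0%} hit rate vs. {well['false_positive_rate']:.0%} false positives")

# A less well-calibrated (but still useful) tool: 30 of 100 flags are real.
weak = describe_calibration(true_positives=30, false_positives=70)
print(f"{weak['hit_rate']:.0%} hit rate vs. {weak['false_positive_rate']:.0%} false positives")
```

The same calibration thus reads very differently depending on which share is emphasized to the auditor (“90% of flags are real” versus “10% of flags are false alarms”), which is precisely the framing manipulation the proposal sets out to test.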

Why?

The emergence of new audit data analytics (ADA) in the audit environment allows auditors to gain deeper insights into their clients’ data, but simultaneously creates unique challenges for auditors when exercising skepticism. Because data analytic approaches enable auditors to examine full populations (rather than samples) or to incorporate more diverse data into their testing, auditors often face a larger number of anomalies or inconsistencies that must be investigated and evaluated. This creates a unique dilemma for auditors as they determine not only how best to utilize information from data analytic tools, but also how to manage the investigation of red flags identified by these tools.

One concern regarding the use of data analytics is the presence of false positives: transactions or relationships that these tools flag as potential anomalies but that, after further investigation, are determined to be reasonable, explained variations in the data. The frequency of false positives is likely to increase proportionately with the size and complexity of the data extracted from the population. Despite concerns over false positive rates, few studies have addressed the problems auditors face when processing the outliers identified by analytic tools. When false positive rates increase, auditors may be apt to ignore or dismiss red flags identified by analytic tools.

Knowledge dissemination:

FAR Literature Review – Involvement in the development of data analytics and auditors’ application of professional skepticism

FARview #24 with Xiaoxing Li

FAR Practice Note – Inheriting vs. Developing Data Analytic Tests and Auditors’ Professional Skepticism

FAR Working Paper 2023/09 – 21: Inheriting vs. Developing Data Analytic Tests and Auditors’ Professional Skepticism

FAR Practice Note – An Unintended Consequence of Full Population Testing on Auditors’ Professional Skepticism

Publication in JAR by FAR PhD student Xiaoxing Li

A paper from the thesis of former FAR PhD-student Xiaoxing Li got published in the prestigious Journal of Accounting Research. This paper, co-authored with Joe Brazel, Anna Gold…
This study examines how the use of full population testing (FPT), enabled by data analytics, affects auditors’ professional skepticism. While FPT improves the sufficiency (quantity) of audit evidence by testing entire populations, it often relies on client-internal data, which may lack appropriateness (quality) and be vulnerable to management manipulation. Auditing standards emphasize that more evidence cannot compensate for poor quality, making external evidence critical for fraud detection.
The authors hypothesize that auditors using FPT may exhibit attribute substitution bias, substituting their judgment of evidence sufficiency for appropriateness. This bias could reduce skeptical actions when external evidence later reveals fraud red flags. In an experiment with 125 auditors, results show:
  • Auditors using FPT were 52% less likely to act skeptically (e.g., inquire about inconsistencies or alert managers) compared to those using sample testing when confronted with an external fraud indicator.
  • FPT inflates perceptions of evidence appropriateness because auditors perceive it as more sufficient.
  • Contrary to expectations, presenting FPT results visually (graphs) versus in tables did not significantly worsen the effect.
  • Experience with FPT amplifies the bias, meaning more experienced auditors are even less skeptical after using FPT.
The findings highlight a critical unintended consequence of advanced audit technologies: auditors may underreact to fraud risks when over-relying on internal evidence tested via FPT. Audit firms and regulators should address this through training and quality controls, emphasizing the distinction between evidence sufficiency and appropriateness and reinforcing the importance of external evidence.
This commemorative booklet marks the tenth anniversary of the Foundation for Auditing Research (FAR). It reflects on FAR’s journey as a unique platform where academic research and audit practice meet to advance audit quality. The publication highlights:
  • FAR’s Mission and Impact: How FAR evolved from an ambition into a reality, fostering collaboration between researchers and practitioners through access to real-world audit data.
  • Insights from Leadership: An interview with founding academic director Jan Bouwens and his successor Anna Gold on FAR’s achievements, challenges, and future priorities.
  • Research Highlights: Four featured studies on topics such as auditors’ commercial efforts, student expectations versus auditor experiences, data analytics and professional skepticism, and learning within audit teams.
  • Key Figures and Projects: An overview of FAR’s outputs, including practice notes, masterclasses, conferences, and a growing portfolio of research projects.
The booklet not only looks back with pride but also outlines ambitions for the future of strengthening knowledge transfer, increasing practical usability of research, and deepening engagement across audit firms and academia.
Audit firms around the globe have invested heavily in a variety of audit technologies (e.g., Alles & Gray 2016; Deloitte 2016; KPMG 2016, 2019; EY 2017, 2018; PwC 2019; Bloomberg 2020; Eilifsen, Kinserdal, Messier, & McKee 2020; Austin, Carpenter, Christ, & Nielson 2021). Of these technological developments, audit data analytics (ADA) are receiving increased attention because they enable auditors to incorporate more diverse data and visualizations into their testing (i.e., graphical representations such as charts, scatter diagrams, trend lines, or maps). The American Institute of Certified Public Accountants (AICPA) defines ADA as “the science and art of discovering and analyzing patterns, identifying anomalies, and extracting other useful information in data underlying or related to the subject matter of an audit through analysis, modeling, and visualization for the purpose of planning or performing the audit” (AICPA 2015, p.92; 2017, p.1). The current study focuses on ADA visualizations, which can aid auditors when scrutinizing audit evidence and ultimately improve audit quality (e.g., AICPA 2017; Anders 2017; FRC 2017; O’Donnell, O’Mara, Rast, & Sand 2017).  
Auditors’ use of audit data analytic (ADA) tests carries tremendous potential for the quality of financial statement audits and auditors’ application of professional skepticism (e.g., Austin, Carpenter, Christ, and Nielson 2021). As the use of ADA tests becomes increasingly established in practice, auditors will likely transition from developing ADA tests themselves to a situation where they typically inherit ADA tests developed by others. For example, auditors may inherit ADA tests that are developed by other members of their audit team or their firm’s centralized analytics team. In this study, we argue that inheriting ADA tests, as opposed to developing ADA tests by themselves, hinders auditors’ application of professional skepticism because inheriting decreases auditors’ psychological ownership of the tests. In an experiment where an ADA test identifies a fraud red flag, we find that auditors who inherited the ADA test are less skeptical than those who personally developed the ADA test. We further provide evidence that informing auditors who inherited the ADA test about the test development activities can substantially boost auditors’ skepticism levels. In practice, this development-related information could be conveyed via an ADA test development memorandum preceding the workpapers containing the ADA test. Informing auditors about ADA test development activities will likely become more important as auditors inherit more advanced forms of ADA tests, such as tests employing artificial intelligence technology.  
As the use of audit data analytic (ADA) tests matures and becomes increasingly common in practice, auditors will transition to a situation where they typically inherit ADA tests developed by others (e.g., other audit team members or a centralized data analytics team). Despite the potential benefits of ADA, using ADA tests inherited from others, rather than developed by auditors themselves, could hinder auditors’ application of professional skepticism due to their lack of psychological ownership of the ADA tests. In an experiment where an ADA test identifies a fraud red flag, we find that auditors who inherited the ADA test are less likely to exercise professional skepticism compared to those who were personally involved in the development of the ADA test. We then provide evidence that informing auditors who inherited the ADA test about the test development activities (e.g., a brief ADA memorandum documenting the ADA’s development) boosts their skepticism levels.  
Key take-aways

The emergence of data analytics allows auditors to test entire populations of data drawn from clients’ information systems, rather than relying solely on sampling methods. While full population testing increases the sufficiency – or quantity – of evidence examined, it typically relies heavily on client-internal data. Therefore, auditors must remain skeptical when subsequent, more appropriate evidence from external sources contradicts a client’s financial reporting. In an experiment, we find that auditors using full population testing, compared to sample testing, are less likely to subsequently exercise skeptical actions when an external industry growth trend reveals a fraud red flag. We do not find that this unintended consequence is exacerbated when full population testing results are visualized (versus tabulated), a typical format used for presenting data analytic tests in practice.

Main takeaways
  • Auditors using full population testing, compared to sample testing, are less likely to exercise skeptical actions when subsequently confronted with a fraud red flag revealed by an external industry growth trend.
  • Auditors using full population testing, compared to sample testing, overestimate the appropriateness of client-internal evidence.
  • Presenting the testing results in a visualized rather than tabulated form does not exacerbate the negative effect of full population testing on auditors’ skeptical actions.
 

Project info

Project Lead

Xiaoxing Li

Research team

Xiaoxing Li
Dr. Justin Leiby
Prof. Dr. Anna Gold
Prof. Joseph Brazel

