
Xiaoxing Li is an Assistant Professor in the Department of Accounting, Auditing and Law at the Norwegian School of Economics (NHH). Her research focuses on auditing, data analytics, and professional skepticism, examining how technology influences auditor judgment and decision-making. She recently published a paper in the Journal of Accounting Research titled “Inheriting Versus Developing Data Analytic Tests and Auditors’ Professional Skepticism,” co-authored with Joe Brazel, Anna Gold, and Justin Leiby. This study explores how auditors interact with data analytic tools and identifies strategies to improve audit performance when using externally developed tests.

Xiaoxing earned her PhD in Accounting through the Foundation for Auditing Research program and has presented her work at leading international conferences. She teaches courses in auditing and accounting analytics at NHH and actively collaborates on projects that integrate behavioral research with technological innovation in auditing.
KEY TAKE-AWAYS
The emergence of data analytics allows auditors to test entire populations of data rather than relying solely on sampling methods. While full population testing increases the sufficiency, or quantity, of the evidence examined, it does not necessarily improve the evidence’s appropriateness, or quality. In particular, full population testing typically relies on client-internal data, which are vulnerable to management manipulation and may therefore be less appropriate. Auditors must thus remain skeptical when subsequent, more appropriate evidence from external sources contradicts a client’s financial reporting.

We examine whether auditors employing full population testing mistakenly substitute their assessment of evidence sufficiency for their evaluation of evidence appropriateness, leading them to view client-internal evidence as more appropriate than auditors using sample testing do. If so, auditors using full population testing may be less likely to act skeptically when subsequent, more appropriate external evidence reveals a fraud red flag. In an experiment, we find that auditors using full population testing, compared to sample testing, are indeed less likely to take skeptical actions when a subsequent external industry growth trend reveals a fraud red flag. We also posit that this unintended consequence is exacerbated when full population testing results are visualized rather than tabulated; however, our findings do not support this prediction.