
Joseph Brazel is the Jenkins Distinguished Professor of Accounting and a University Faculty Scholar at North Carolina State University, where he teaches undergraduate and graduate courses in auditing and assurance services. His research focuses on professional skepticism, fraud detection, data analytics, non-financial measures, investor and CFO responses to fraud red flags, fraud brainstorming, and judgment and decision-making in auditing. He has published in The Accounting Review, Journal of Accounting Research, Contemporary Accounting Research, Accounting, Organizations and Society, Review of Accounting Studies, Auditing: A Journal of Practice & Theory, and the Journal of Business Ethics. Dr. Brazel is also a monthly contributor at Forbes.com. His research has been supported by grants from the Center for Audit Quality (CAQ), Foundation for Auditing Research (FAR), Association of Certified Fraud Examiners (ACFE) Research Institute, International Association for Accounting Education and Research, Institute for Fraud Prevention, Financial Industry Regulatory Authority (FINRA) Investor Education Foundation, Institute of Management Accountants, Institute of Internal Auditors, Ernst & Young, KPMG, and North Carolina State University. Prior to obtaining his Ph.D., Dr. Brazel was an audit manager with Deloitte.
KEY TAKE-AWAYS
The emergence of data analytics allows auditors to test entire populations of data, rather than relying solely on sampling methods. While full population testing increases the sufficiency, or quantity, of evidence examined, it does not necessarily improve the evidence's appropriateness, or quality. In particular, full population testing typically relies on client-internal data, which are vulnerable to management manipulation, potentially reducing their appropriateness. Therefore, auditors must remain skeptical when subsequent, more appropriate evidence from external sources contradicts a client's financial reporting. We examine whether auditors employing full population testing mistakenly substitute their assessment of evidence sufficiency for their evaluation of evidence appropriateness, leading them to view client-internal evidence as more appropriate than auditors using sample testing do. Consequently, auditors using full population testing may be less likely to act skeptically when subsequent, more appropriate external evidence reveals a fraud red flag. In an experiment, we find that auditors using full population testing, compared to sample testing, are less likely to exercise skeptical actions when a subsequent external industry growth trend reveals a fraud red flag. We also posit that this unintended consequence is exacerbated when full population testing results are visualized (versus tabulated). However, our findings do not support this prediction.