Prof. Joseph Brazel

Professor

Joseph Brazel is the Jenkins Distinguished Professor of Accounting and a University Faculty Scholar at North Carolina State University, where he teaches undergraduate and graduate courses in auditing and assurance services. His research focuses on professional skepticism, fraud detection, data analytics, non-financial measures, investor and CFO responses to fraud red flags, fraud brainstorming, and judgment and decision-making in auditing. He has published in The Accounting Review, Journal of Accounting Research, Contemporary Accounting Research, Accounting, Organizations and Society, Review of Accounting Studies, Auditing: A Journal of Practice & Theory, and the Journal of Business Ethics. Dr. Brazel is also a monthly contributor at Forbes.com. His research has been supported by grants from the Center for Audit Quality (CAQ), the Foundation for Auditing Research (FAR), the Association of Certified Fraud Examiners (ACFE) Research Institute, the International Association for Accounting Education and Research, the Institute for Fraud Prevention, the Financial Industry Regulatory Authority (FINRA) Investor Education Foundation, the Institute of Management Accountants, the Institute of Internal Auditors, Ernst & Young, KPMG, and North Carolina State University. Prior to obtaining his Ph.D., Dr. Brazel was an audit manager with Deloitte.

This literature review explores how audit committee involvement can enhance auditors’ application of professional skepticism, a critical factor for audit quality. It synthesizes research on auditor traits, knowledge, and incentives, highlighting barriers such as time pressure, budget constraints, and outcome bias that discourage skepticism.
The review also examines how audit committees can support auditors beyond oversight, including through direct communication and fostering a supportive culture, to reduce these barriers. Findings suggest that audit committee support, when effectively conveyed, may strengthen skepticism and improve audit quality.
Despite the recognized importance of professional skepticism, auditors’ failure to consistently exercise a sufficient level of professional skepticism remains a globally recognized issue. In this study, we seek to better understand the role audit committees, who oversee the audit process, can play in improving auditors’ application of skepticism. In a survey of audit practitioners, we found that audit committee support varies substantially between audit engagements, that it is multifaceted, and that it is often not conveyed to the lower-level members of the engagement team. Given these survey findings, we experimentally investigated whether and how audit committee support explicitly conveyed to the entire engagement team (by either the partner or the audit committee chair) affects the skeptical judgments and actions of auditors. We find that an expression of audit committee support conveyed explicitly by the audit partner can increase the skeptical actions of auditors, whereas the same expression of support by the audit committee chair does not. Our findings point to the crucial role audit partners can play in improving auditors’ application of professional skepticism.
KEY TAKE-AWAYS

The team explores how audit committees (ACs) support audit engagement teams and whether AC support can improve auditors’ professional skepticism. First, they surveyed audit practitioners and found that AC support is multidimensional, varies between engagements, and often is not communicated to the entire engagement team. They then experimentally investigated whether explicit communication of AC support to the entire engagement team (by the partner vs. the AC chair) affects the skepticism of auditors. While skeptical judgments are consistently high, auditors vary in their skeptical actions. When management’s attitude toward the engagement team is poor, AC support communicated by the audit partner increases skeptical actions. Direct communication of support by the AC chair does not increase skepticism relative to when the partner conveys AC support. The team’s findings highlight the importance of AC support for audit teams, and the lack of AC support (or communication thereof) on many audit engagements.
Audit firms around the globe have invested heavily in a variety of audit technologies. Of these technological developments, audit data analytics (ADA) are receiving increased attention because they enable auditors to incorporate more diverse data and visualizations into their testing (i.e., graphical representations such as charts, scatter diagrams, trend lines, or maps). The American Institute of Certified Public Accountants (AICPA) defines ADA as “the science and art of discovering and analyzing patterns, identifying anomalies, and extracting other useful information in data underlying or related to the subject matter of an audit through analysis, modeling, and visualization for the purpose of planning or performing the audit”. The current study focuses on ADA visualizations, which can aid auditors when scrutinizing audit evidence and ultimately improve audit quality.
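To make the idea of an ADA visualization concrete, here is a minimal Python sketch of such a test. The data, column names, and the two-standard-deviation anomaly rule are hypothetical illustrations, not drawn from the study or from any firm’s actual tooling; the sketch simply plots a monthly revenue trend line and highlights outlying months an auditor might scrutinize.

```python
# Minimal sketch of an ADA visualization test (hypothetical data).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical monthly revenue for one audit client.
rng = np.random.default_rng(seed=1)
months = pd.period_range("2023-01", periods=12, freq="M")
revenue = 100 + rng.normal(0, 3, 12)
revenue[10] += 25  # injected anomaly to illustrate a red flag

df = pd.DataFrame({"month": months.astype(str), "revenue": revenue})

# Flag months deviating more than 2 standard deviations from the mean,
# a simple anomaly rule an ADA test might apply.
z = (df["revenue"] - df["revenue"].mean()) / df["revenue"].std()
df["flagged"] = z.abs() > 2

# Trend-line visualization with flagged anomalies highlighted.
plt.plot(df["month"], df["revenue"], marker="o", label="monthly revenue")
plt.scatter(df.loc[df["flagged"], "month"],
            df.loc[df["flagged"], "revenue"],
            color="red", zorder=3, label="flagged anomaly")
plt.xticks(rotation=45)
plt.ylabel("Revenue")
plt.legend()
plt.tight_layout()
plt.show()
```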
Auditors’ use of audit data analytic (ADA) tests carries tremendous potential for the quality of financial statement audits and auditors’ application of professional skepticism (e.g., Austin, Carpenter, Christ, and Nielson 2021). As the use of ADA tests becomes increasingly established in practice, auditors will likely transition from developing ADA tests themselves to a situation where they typically inherit ADA tests developed by others. For example, auditors may inherit ADA tests that are developed by other members of their audit team or their firm’s centralized analytics team. In this study, we argue that inheriting ADA tests, as opposed to developing the tests themselves, hinders auditors’ application of professional skepticism because inheriting decreases auditors’ psychological ownership of the tests. In an experiment where an ADA test identifies a fraud red flag, we find that auditors who inherited the ADA test are less skeptical than those who personally developed the ADA test. We further provide evidence that informing auditors who inherited the ADA test about the test development activities can substantially boost auditors’ skepticism levels. In practice, this development-related information could be conveyed via an ADA test development memorandum preceding the workpapers containing the ADA test. Informing auditors about ADA test development activities will likely become more important as auditors inherit more advanced forms of ADA tests, such as tests employing artificial intelligence technology.
As the use of audit data analytic (ADA) tests matures and becomes increasingly common in practice, auditors will transition to a situation where they typically inherit ADA tests developed by others (e.g., other audit team members or a centralized data analytics team). Despite the potential benefits of ADA, using ADA tests inherited from others, rather than developed by auditors themselves, could hinder auditors’ application of professional skepticism due to their lack of psychological ownership of the ADA tests. In an experiment where an ADA test identifies a fraud red flag, we find that auditors who inherited the ADA test are less likely to exercise professional skepticism compared to those who were personally involved in the development of the ADA test. We then provide evidence that informing auditors who inherited the ADA test about the test development activities (e.g., a brief ADA memorandum documenting the ADA’s development) boosts their skepticism levels.  
The emergence of data analytics allows auditors to test entire populations of data drawn from clients’ information systems, rather than relying solely on sampling methods. While full population testing increases the sufficiency, or quantity, of evidence examined, it typically relies heavily on client-internal data. Therefore, auditors must remain skeptical when subsequent, more appropriate evidence from external sources contradicts a client’s financial reporting. In an experiment, we find that auditors using full population testing, compared to sample testing, are less likely to subsequently exercise skeptical actions when an external industry growth trend reveals a fraud red flag. We do not find that this unintended consequence is exacerbated when full population testing results are visualized (versus tabulated), a typical format used for presenting data analytic tests in practice.

Main Takeaways
  • Auditors using full population testing, compared to sample testing, are less likely to exercise skeptical actions when subsequently confronted with a fraud red flag revealed by an external industry growth trend.
  • Auditors using full population testing, compared to sample testing, overestimate the appropriateness of client-internal evidence.
  • Presenting the testing results in visualized rather than tabulated form does not exacerbate the negative effect of full population testing on auditors’ skeptical actions.
 

KEY TAKE-AWAYS

The emergence of data analytics allows auditors to test entire populations of data, rather than relying solely on sampling methods. While full population testing increases the sufficiency, or quantity, of evidence examined, it does not necessarily improve the appropriateness, or quality, of that evidence. In particular, full population testing typically relies on client-internal data, which are vulnerable to management manipulation, potentially reducing their appropriateness. Therefore, auditors must remain skeptical when subsequent, more appropriate evidence from external sources contradicts a client’s financial reporting. We examine whether auditors employing full population testing mistakenly substitute their assessment of evidence sufficiency for their evaluation of evidence appropriateness, leading them to view client-internal evidence as more appropriate than auditors using sample testing do. Consequently, auditors using full population testing may be less likely to act skeptically when subsequent, more appropriate external evidence reveals a fraud red flag. In an experiment, we find that auditors using full population testing, compared to sample testing, are less likely to exercise skeptical actions when a subsequent external industry growth trend reveals a fraud red flag. We also posited that this unintended consequence would be exacerbated when full population testing results are visualized (versus tabulated); however, our findings do not support this prediction.
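As a rough illustration of the sufficiency-versus-appropriateness distinction, the Python sketch below contrasts sample testing with full population testing and then checks the client-internal growth figure against an external industry trend. All figures, sizes, and thresholds are hypothetical, chosen only to mirror the red-flag pattern described above; they are not taken from the study.

```python
# Sketch: sample vs. full population testing, plus an external-evidence
# check. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical population of 10,000 client-internal sales transactions.
population = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)

# Sample testing examines a subset; full population testing examines all.
sample = rng.choice(population, size=60, replace=False)
print(f"Sample testing:          {sample.size:>6} items examined")
print(f"Full population testing: {population.size:>6} items examined")

# Both approaches rest on client-internal data. Sufficiency (quantity)
# rises with full population testing, but appropriateness (quality) does
# not: the data remain vulnerable to management manipulation.

# External-evidence check: compare the client's reported revenue growth
# with an external industry growth trend (hypothetical values).
client_growth = 0.18    # 18% growth per client-internal records
industry_growth = 0.03  # 3% growth per external industry data

if client_growth - industry_growth > 0.10:  # illustrative threshold
    print("Red flag: client growth far exceeds the external industry "
          "trend; skeptical follow-up is warranted regardless of how "
          "many internal items were tested.")
```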

This study examines how the use of full population testing (FPT), enabled by data analytics, affects auditors’ professional skepticism. While FPT improves the sufficiency (quantity) of audit evidence by testing entire populations, it often relies on client-internal data, which may lack appropriateness (quality) and be vulnerable to management manipulation. Auditing standards emphasize that more evidence cannot compensate for poor quality, making external evidence critical for fraud detection.
The authors hypothesize that auditors using FPT may exhibit attribute substitution bias, substituting their judgment of evidence sufficiency for their judgment of its appropriateness. This bias could reduce skeptical actions when external evidence later reveals fraud red flags. In an experiment with 125 auditors, results show:
  • Auditors using FPT were 52% less likely to act skeptically (e.g., inquire about inconsistencies or alert managers) compared to those using sample testing when confronted with an external fraud indicator.
  • FPT inflates perceptions of evidence appropriateness because auditors perceive it as more sufficient.
  • Contrary to expectations, presenting FPT results visually (graphs) versus in tables did not significantly worsen the effect.
  • Experience with FPT amplifies the bias, meaning more experienced auditors are even less skeptical after using FPT.
The findings highlight a critical unintended consequence of advanced audit technologies: auditors may underreact to fraud risks when over-relying on internal evidence tested via FPT. Audit firms and regulators should address this through training and quality controls, emphasizing the distinction between evidence sufficiency and appropriateness and reinforcing the importance of external evidence.