CAQ study: Deploying AI in accounting poses 12 audit risks

Written on Apr 12, 2024

Companies that deploy generative artificial intelligence in financial reporting face 12 audit risks that range from flaws in governance and compliance to failures in preventing fraud and cyberattacks, according to the Center for Audit Quality. 

Auditors for companies that use generative AI often confront a “black box” challenge when they can neither interpret nor explain how the technology generates information, CAQ said. The problem grows when “financial reporting processes and ICFR [internal control over financial reporting] become more sophisticated and outputs from the technology are unable to be independently replicated.” 

One in three auditors sees companies in their industry deploying or planning to deploy AI in financial reporting, CAQ found in a 2023 survey. The proportion will likely grow as companies explore how AI “can streamline or enhance accounting and financial reporting,” the center said. 

Accountants are increasingly using generative and other forms of AI to prepare technical accounting memos, to smooth routine tasks such as writing Excel formulas or footing financial statements, and to analyze large volumes of data for unusual transactions or unauthorized system access, the CAQ said. 
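
The anomaly-screening use case can be made concrete with a short, hypothetical Python sketch; it is not drawn from the CAQ study, and the `flag_unusual` function, ledger data and threshold are all illustrative assumptions. It uses a median-absolute-deviation test, which stays robust to the very outliers it is trying to find.

```python
# Hypothetical sketch of an "unusual transactions" screen; not the CAQ's
# method. Flags amounts whose modified z-score (MAD-based) is extreme.
from statistics import median

def flag_unusual(transactions, threshold=3.5):
    """Return (id, amount) pairs whose modified z-score exceeds `threshold`."""
    amounts = [amt for _, amt in transactions]
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all amounts identical: nothing stands out
        return []
    return [(txn_id, amt) for txn_id, amt in transactions
            if 0.6745 * abs(amt - med) / mad > threshold]

ledger = [("T001", 120.00), ("T002", 95.50), ("T003", 101.25),
          ("T004", 98.75), ("T005", 87_500.00)]
print(flag_unusual(ledger))  # -> [('T005', 87500.0)]
```

A screen like this would be one small control among many; the point is that its logic, unlike a generative model's, can be read and independently replicated.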

Financial executives also use AI to summarize contract terms, prepare initial suggestions for the accounting treatment of transactions, and analyze data for budgeting or for understanding variances and costs, CAQ said in an email reply to questions. 
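
As a hedged illustration of the budgeting and variance point (again, not from CAQ; the line items and figures are invented), the underlying arithmetic is simple enough to compute directly, which keeps the analysis replicable:

```python
# Hypothetical budget-vs-actual variance report; all figures invented.
budget = {"salaries": 50_000, "software": 8_000, "travel": 5_000}
actual = {"salaries": 52_300, "software": 7_100, "travel": 6_400}

for line, planned in budget.items():
    variance = actual[line] - planned          # positive = over budget
    pct = 100 * variance / planned
    print(f"{line:>9}: {variance:+,} ({pct:+.1f}%)")
# ->  salaries: +2,300 (+4.6%)
#     software: -900 (-11.2%)
#       travel: +1,400 (+28.0%)
```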

CFOs should consider focusing on 12 hazards from generative AI, CAQ said: 

  • Governance – the failure to identify and manage AI applications throughout a company 

  • Regulation – use of generative AI in ways that violate regulations, laws or contracts 

  • Skills – employees lack the knowledge to oversee or use generative AI effectively and safely 

  • Fraud – management, employees or third parties use generative AI to commit or conceal crimes 

  • Data privacy – confidential data is erroneously entered into a generative AI application 

  • Security – generative AI is vulnerable to cyberattacks, the intentional insertion of flawed data, or deliberate efforts to prompt bogus conclusions from the applications 

  • Flawed selection or design – the choice of a generative AI application that does not achieve the desired objective 

  • Error-prone foundation model – the company adopts an unreliable large language model that generates inaccuracies or biased information 

  • Flawed training – faulty training of the generative AI model generates repeated output errors 

  • Weak performance – due to inadequate testing, the generative AI application “hallucinates,” or provides incomplete, inaccurate, unreliable or irrelevant information 

  • Defective prompts – employees write imprecise or inaccurate prompts, yielding unintended or irrelevant information 

  • Inadequate monitoring – after deploying generative AI, companies fail to closely track output to ensure the technology is functioning as intended.
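
On that last point, one concrete form of monitoring is to independently recompute figures the model produced. A minimal sketch, assuming a hypothetical `verify_footing` helper and tolerance (neither is from the CAQ study):

```python
# Hypothetical monitoring check: re-foot (re-sum) AI-generated statement
# line items and log any disagreement with the model's reported total.
import logging

logging.basicConfig(level=logging.WARNING)

def verify_footing(line_items, ai_reported_total, tolerance=0.01):
    """Return True if the AI's total matches an independent re-summation."""
    independent_total = round(sum(line_items), 2)
    if abs(independent_total - ai_reported_total) > tolerance:
        logging.warning("AI total %.2f disagrees with re-footed total %.2f",
                        ai_reported_total, independent_total)
        return False
    return True

# The AI-generated statement reports 1,250.00, but the items actually
# foot to 1,250.75: the discrepancy is caught and logged for review.
verify_footing([400.25, 350.50, 500.00], ai_reported_total=1250.00)
```

Checks like this do not open the black box, but they bound what an erroneous or hallucinated output can slip past.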