Accountants using generative artificial intelligence (AI) need to embrace risk protection policies.
Accounting regularly finds itself at the forefront of technical advances as it considers how to incorporate them into everyday operations.
Each new tool or platform promises efficiency and accuracy, but it also brings with it new risks and vulnerabilities.
This time around it is generative AI, a topic that has received extensive publicity since the launch of a leading AI tool that has drawn mixed reactions.
Results accuracy
To help generate responses to a user’s questions, generative AI models are trained on a vast body of data. The current leading generative AI model has been trained on a massive corpus of data consisting of books, articles and web pages covering a very wide array of subjects, including accounting.
The main concern with such models is that the accuracy of the output greatly depends on the training data being up to date and of appropriate quality.
While the model will likely provide a correct response when queried about very basic accounting questions, such as what accounting standards govern revenue transactions, it may falter in providing more technical responses.
Consider, for example, a query about a specific accounting treatment of a transaction that requires the application of a recent IFRS standard. The model will draw information from a range of online resources, but it cannot always distinguish credible sources from unreliable ones.
It may also draw from discussions on the draft version of the standard, which may differ from the version that is subsequently issued, leading to an inaccurate response.
Moreover, generative AI systems currently fall short in grasping context and nuance, both of which are critical for reaching accurate conclusions.
These systems cannot match the depth of human understanding or professional scepticism, and the proper application of professional standards and legislation requires precisely these human skills.
Relying solely on AI without human oversight can be risky. Misunderstandings or misapplications in professional settings, especially without human verification, can lead to significant mistakes and potential professional liability repercussions. It is essential to critically assess AI-derived results before taking action.
Data confidentiality
The vast majority of data processed by accountants and other finance professionals is personal and proprietary. It therefore has to be protected under the relevant data-protection legislation.
Once data has been entered into a generative AI system, all control over where that data is shared and stored is lost.
The AI system may have its own data privacy policy, but it will not protect any of its users from liability in the event of a data breach, as the system’s users are still ultimately responsible for that data. Doubtless aware of such possibilities, hackers will likely target AI systems in the future, further increasing the risk.
Protection
Since the current generative A