CONTRARY to popular belief, artificial intelligence (AI) has not yet surpassed human cognitive capabilities and likely will not do so for another few years.
Yet even in these early stages of AI adoption in workplaces, grave cybersecurity challenges have emerged.
Currently, open AI tools like ChatGPT, Google Gemini, and many others can perform specific tasks, like producing minutes from a meeting or responding to an email. These are tasks that regular employees at some of the world's biggest banks and financial firms frequently ask them to perform, at great risk.
The Central Bank recently hosted the Bank for International Settlements (BIS), a bank serving about 70 central banks and other institutions globally, to speak about the risks associated with the increasing adoption of AI in sensitive environments, like central banks.
Sukhvin Notra, senior security specialist at BIS, delivered a presentation outlining the stages of AI development, the risks AI tools currently pose, and their misuse.
AI has not yet developed beyond Artificial Narrow Intelligence, a model or program designed to perform a specific set of functions with a limited range, and the only form currently in operation.
“The second stage, Artificial General Intelligence (AGI), is just as smart as human beings across a broad spectrum of topics,” Notra explained. In concept, AGI would be able to think on the same level as a human.
“Few of us here would argue (they are) as smart as human beings… We’re not quite there yet,” Notra said. Some experts at Google predict its advent in three to five years.
Artificial Super Intelligence, the third and most powerful form of AI, would be far more intelligent than human beings in all ways. Notra described it as a hypothetical model, not expected in the foreseeable future.
Though theoretically in its infancy, AI and open-source tools are becoming significantly more powerful with each new model. Between November 2022 and March 2023, when ChatGPT introduced its fourth model, GPT-4, its capacity, measured by the number of "parameters" used to train it, reportedly grew from 175 billion to about 1.76 trillion.
ChatGPT is commonly used in large corporations and financial institutions in TT and around the world, usually by employees seeking to improve efficiency.
BIS, Notra said, conducted a survey in which it found that about 70 per cent of its member central banks allow employees to use open-source AI tools. Although BIS is not a regulatory body, it engages in exercises with its members and outside institutions to help them combat threats.
Notra identified five major risks posed by the adoption of commercial AI tools, beginning with the threat to data and confidentiality, which he described as one of the biggest risks facing central banks.
Employees use the tools to save time on tasks ranging from generating minutes of meetings to summarizing exhaustive documents. "ChatGPT will do an incredible job (but there's) a huge amount of risk