Watch out! AI is a risk to financial systems!
BankInfoSecurity.com reported that “In its annual report, the Financial Stability Oversight Council - a team made up mostly of financial regulators and chaired by the secretary of the Department of the Treasury - highlighted AI's potential to spur innovation but flagged its ability to introduce ‘certain risks.’” The December 18, 2023 article, entitled “US Regulators Warn of AI Risk to Financial Systems” (http://tinyurl.com/fjhp5z5f), included these comments:
Generative AI models use large datasets to identify patterns that allow the generation of new content including text, software code, images and other media, introducing operational risks related to data controls, privacy and cybersecurity.
Many AI approaches have an explainability challenge, which means that humans have a tough time reverse-engineering how the AI came to a certain conclusion. This "black box" approach can make it difficult for organizations to understand the source of the information an AI model uses, and therefore to assess where and how to use the information and the model, how much to rely on them, and how to determine the accuracy and potential bias of the output the model generates.
The council also warned of "complicating factors" associated with generative AI, such as hallucinations - output that is flawed but is presented in a convincing narrative - and added that assessing the performance of such output may require specific expertise. Some generative AI outputs may be inconsistent over time, even when posed with the same prompts. Users may not know the sources used to produce the output, and the financial institutions using these tools may not have transparency into or control over the dataset the underlying model uses, the council said.
I’m sure no one is surprised by this report!