The Chair of the US Securities and Exchange Commission (SEC), Gary Gensler, has issued a stark warning about the urgent need for financial regulators to address the risks associated with the increasing concentration of power within artificial intelligence (AI) platforms. Gensler emphasized that without prompt intervention, the financial industry could face an AI-driven crisis within the next decade.
Gensler sounded the alarm during a recent address to financial stakeholders, as reported by InfoStride News, in which he stressed that regulating AI presents a formidable challenge for US regulators. The risks associated with AI are not confined to any single sector of the financial markets; they extend across many domains and are compounded by models built by tech companies that fall outside the purview of traditional Wall Street regulators.
Gensler underscored the critical issue of power concentration in a handful of AI platforms, highlighting the systemic risks this poses. He warned that if one of these platforms were to fail or be compromised by a cyberattack, it could have a cascading impact on the entire financial system.
The adoption of AI in the financial sector has grown rapidly, with applications ranging from fraud detection and risk assessment to investment management. However, as Gensler noted, regulating AI's role in finance is a complex task, particularly where financial stability is concerned. Existing regulations focus primarily on individual financial institutions, such as banks, money market funds, and brokers, an approach that does not account for the interconnected nature of AI systems.
Even the SEC’s earlier proposal for a rule addressing potential conflicts of interest in predictive data analytics predominantly targeted individual models deployed by broker-dealers and investment advisers. Gensler pointed out that these measures, while important, do not address the broader issue of horizontal risk, where multiple institutions may rely on the same underlying model or data aggregator. Moreover, the underlying AI models and data aggregators often reside with large tech companies, not within the realm of traditional financial institutions.
Gensler's concerns also extend to the cloud providers that offer AI as a service in the United States. These providers often form the backbone of AI systems, and the concentration of this service among a few major companies further amplifies the risk. The multifaceted nature of this challenge has led Gensler to raise the issue at prominent regulatory forums, including the Financial Stability Board and the Financial Stability Oversight Council, as he views it as a challenge that necessitates a collaborative, cross-regulatory approach.
Regulators worldwide are grappling with the complex task of policing AI, as technology companies and their AI models often elude the grasp of conventional watchdogs. The European Union (EU) has been proactive in this regard, drafting stringent rules on the use of AI in a groundbreaking law that is expected to be fully approved by the end of the year. The United States, by contrast, is still evaluating AI technology to determine which aspects require new regulations and which are covered by existing laws.
Gensler’s primary concern revolves around the potential for herd behavior and systemic risk resulting from financial decision-making based on the same data model. He fears that such a scenario could undermine financial stability and potentially trigger the next financial crisis. Gensler noted that, in the aftermath of such a crisis, people might identify a single data aggregator or model that the financial industry heavily relied upon, whether in the mortgage market or a specific sector of the equity market.
He stressed that AI’s “economics of networks” are so powerful that a financial crisis linked to AI is “nearly unavoidable.” He warned that such a crisis could emerge as early as the late 2020s or early 2030s if the necessary regulatory safeguards are not put in place promptly. Gensler’s remarks underscore the growing urgency for regulators to grapple with the intricacies of AI and its implications for the financial sector to ensure the stability of the global financial system.