The Securities and Exchange Commission (SEC) is moving to restrict how investment firms use artificial intelligence (AI) in order to protect investors. In a move that underscores the growing scrutiny of AI in the financial industry, the SEC has proposed new rules to prevent online brokerages, such as Robinhood, from using AI to prioritize generating more business over their customers' best interests.
This development follows a wide-ranging SEC review of the 2021 meme-stock frenzy. Regulators grew concerned about the tactics investment platforms used to gamify the trading experience, deploying colorful graphics and other behavioral nudges to encourage retail investors to make riskier trades that were not necessarily in their best interests but generated fees for the platforms.
While investment advisers are already obligated to recommend only what is in their clients' best interests, the proposed rule would extend those conflict-of-interest restrictions to features that use individuals' data to steer their behavior. SEC Chair Gary Gensler said the deployment of AI raises the question of whether firms are optimizing solely for investors or also for the robo-adviser or brokerage app itself, a clear conflict of interest.
Under the proposed rules, investment firms would be required to identify and eliminate any conflicts of interest arising from their use of AI. They would also need to adopt written policies and procedures, and maintain records, to prevent violations.
However, Robinhood's chief brokerage officer, Steve Quirk, countered that the SEC's proposal would make it harder for individuals to invest in stocks. Quirk argued that the rules would return U.S. financial markets to an earlier era, when retail investors could interact with brokers or advisers mainly by phone or at branch offices, an outcome he said serves no one's best interests, least of all the new generation of retail investors.
The proposal has also drawn criticism from the SEC's two Republican commissioners. One of them, Mark Uyeda, described it as breathtakingly broad and wholly unnecessary, warning that, if adopted, its regulatory vagueness and compliance burdens could discourage innovation on Wall Street.
To ensure public input, the proposed rules will be subject to a 60-day comment period before they are voted on by the five-member commission.
Gensler has repeatedly voiced concerns about the risks AI poses to financial stability. He recently warned that AI could introduce new systemic risks by promoting herd behavior among investors who rely on the same data to make trading decisions, potentially destabilizing markets.
The promise and perils of AI have become major concerns in Washington this year, though policymaking on the issue has so far been piecemeal. The Federal Trade Commission (FTC) has taken the lead, opening an investigation into OpenAI, the company behind ChatGPT, over possible violations of consumer protection laws.
As regulators continue to grapple with AI's implications for the financial sector, the SEC's proposed rules aim to balance innovation with the protection of customers' best interests. By establishing clear guidelines and oversight, the SEC hopes to create a safer environment for retail investors trading online. The proposal, however, has sparked debate over potential unintended consequences and its impact on market dynamics.