Recent developments in artificial intelligence have created a hotbed of conversation about the technology's potential risks. Microsoft-backed OpenAI's ChatGPT has been in the spotlight, with the public voicing concerns that it could be used in ways that violate laws against discrimination and deceptive practices. The United States Federal Trade Commission (FTC), among other agencies, is now formally proceeding with its own investigations to catch companies that misuse AI and break these laws.
Lina Khan, the Chair of the FTC, along with Commissioners Rebecca Slaughter and Alvaro Bedoya, addressed these concerns in a recent congressional hearing. Bedoya emphasized the risks associated with AI, saying companies cannot treat it as a "black box" to avoid explaining how they use it. He noted that AI can make scams more effective, leaving victims less able to recognize fraudulent acts. He also cautioned against overstating the technology's value in protecting people's civil rights, warning instead about the frauds it could enable.
The prospect of AI being used to produce deepfakes has also sparked alarm, as it could make it easier for fraudsters to deceive victims with convincing audio and visuals. Jerry Bui, a Managing Director at FTI, explained just how easily deepfakes can be made: only about three minutes of audio is needed to create a highly convincing fake.
Khan said that, given the seriousness of evolving AI technology, wrongful use should not be tolerated, and any companies that misuse it and break the law should be dealt with accordingly. Though it may take time, the FTC and other agencies are actively monitoring the technology's rapid progress and hope to nip any wrongdoing in the bud.