The Federal Trade Commission (FTC) has opened an investigation into OpenAI, a prominent artificial intelligence (AI) company, over concerns that its language model, ChatGPT, may violate consumer protection laws by generating false and potentially defamatory statements. The agency is also examining whether OpenAI broke the law when a software bug exposed users’ payment details and chat histories. OpenAI has said it intends to comply with the investigation, but the regulatory uncertainty surrounding AI has prompted other AI vendors to offer assurances to their customers.
At Fortune’s Brainstorm Tech conference and in discussions around the Bay Area, it became evident that many companies are grappling with how best to use generative AI. Sean Scott, Chief Product Officer at PagerDuty, observed that businesses often throw large language models (LLMs) at problems without considering whether a smaller AI model or rule-based code would be cheaper and more efficient. The challenge lies in identifying the right use cases for generative AI. Organizations are also struggling to put effective governance controls around its use within their operations.
The FTC’s investigation into OpenAI aligns with its commitment to curbing potentially deceptive practices among AI companies. FTC Chair Lina Khan aims to show that the agency can regulate AI through existing consumer protection laws, even as lawmakers weigh new regulations and the possibility of a federal entity to oversee AI. However, if the FTC insists on ironclad guarantees from creators of LLM-based AI systems against any potential reputational harm or data breach, it could significantly slow the deployment of generative AI. Hence the rush among AI vendors to offer assurances that ease their business customers’ legal and ethical concerns.
One major concern surrounding generative AI is copyright infringement. Several companies, including OpenAI and Stability AI, have faced lawsuits over their use of copyrighted material in AI training. To address this, Adobe has offered to indemnify users of its text-to-image generation system, Firefly, against copyright infringement claims. But indemnification does not resolve the ethical questions: creators argue that explicit consent and compensation should be mandatory when their copyrighted work is used for AI training. Some creators consider Adobe’s response inadequate, despite its pledge to introduce tags that let creators control how their work is used and to compensate them accordingly.
In contrast, Microsoft is taking steps to make its generative AI offerings more appealing and secure for its cloud customers. The company plans to share its expertise in setting up responsible AI frameworks and procedures, giving customers access to the same responsible AI training curriculum used by Microsoft employees. Microsoft also intends to attest to its implementation of the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and has partnered with consulting firms PwC and EY to help customers establish responsible AI programs. These efforts aim to address customer concerns and build confidence in the use of generative AI.
One might wonder whether the current uncertainty around the commercial safety of generative AI is actually good for business. That uncertainty and anxiety create opportunities for companies to upsell premium consulting and support services to customers seeking guidance. Microsoft’s approach shows how the challenges generative AI raises can be turned into opportunities to sell valuable services and support.
In conclusion, the FTC’s investigation into OpenAI highlights the need for regulation and assurance within the AI industry. As companies struggle to navigate the best applications for generative AI and establish governance controls, vendors are scrambling to offer customers the assurance they need to overcome legal and ethical challenges. Issues such as copyright infringement and data privacy continue to pose significant concerns, but companies like Adobe and Microsoft are actively working to address these issues and foster responsible use of generative AI.