OpenAI’s ChatGPT Upgraded with Access to Up-to-Date Information

OpenAI’s ChatGPT, a widely popular chatbot, has recently been updated to access more current information, extending its knowledge up to April 2023. Despite launching with a dated knowledge base, ChatGPT gained one million users within five days. However, the use of chatbots in the workplace introduces risks that employers must address, particularly when unsanctioned use occurs.

As companies around the world, including major tech giants, have experienced conversational AI leaks, it is crucial for employers to be vigilant. These leaks involve instances where sensitive data provided to chatbots like ChatGPT inadvertently becomes exposed. The information shared with chatbots is transmitted to third-party servers for training purposes, potentially compromising confidential or personal data.

Numerous incidents have occurred where employees unintentionally disclosed sensitive employer information while using publicly available chatbots for work-related tasks. Examples include using chatbots to identify code errors, optimize source code, or generate meeting notes from recorded files.

With the rising popularity of chatbots, the occurrence of conversational AI leaks has increased. According to IBM’s Data Breach Report 2023, the global average cost of a data breach reached an all-time high of $4.45 million between March 2022 and March 2023, with South Africa’s costs exceeding ZAR 50 million. These leaks can have severe financial implications for employers, emphasizing the need for proactive measures.

Employers may be tempted to ban the use of ChatGPT for specific tasks or queries to address the risks. However, alternative options exist to responsibly leverage the benefits of ChatGPT and implement Responsible AI in the workplace. Options include obtaining an enterprise license for ChatGPT or, where no specific laws or regulations apply, self-regulating AI tool usage through policies and training.

Recognizing the need for enhanced privacy, OpenAI has launched an enterprise version of ChatGPT, allowing individuals and employers to create their own chatbots while safeguarding the training information. The enterprise version promises enterprise-standard security and privacy, reducing the likelihood of conversational AI leaks.

Managing AI risks in the workplace requires a tailored approach based on each employer’s level of AI integration. However, allowing unregulated use of ChatGPT may lead to a situation known as shadow IT. Shadow IT refers to unsanctioned software or tool usage by employees, creating unapproved IT infrastructure and systems parallel to the employer’s official infrastructure. This lack of internal regulation creates security vulnerabilities and risks of data leaks, intellectual property exposure, and more. Both employers and employees should exercise caution when utilizing generative AI tools, carefully considering how information is sourced and shared.

To navigate the burgeoning field of AI while upholding data privacy, security, and intellectual property, ENS’ team of expert lawyers specializing in Technology, Media, and Telecommunications as well as labor law have developed a Responsible AI toolkit. This toolkit assists clients in swiftly entering and navigating the world of AI, ensuring responsible and compliant practices.

Employers must prioritize mitigating the risks posed by conversational AI leaks. By being proactive, implementing responsible AI practices, and leveraging tools like ChatGPT’s enterprise version, employers can harness the potential of AI while safeguarding critical information.

Frequently Asked Questions (FAQs)

What is OpenAI's ChatGPT?

OpenAI's ChatGPT is a popular chatbot that utilizes artificial intelligence (AI) to engage in conversations with users.

How has ChatGPT been upgraded recently?

ChatGPT has been upgraded to have access to more up-to-date information, expanding its knowledge base up until April 2023.

What has been the initial success of ChatGPT?

Within just five days of its launch, ChatGPT gained one million users, indicating its widespread popularity.

What risks do employers face when using chatbots like ChatGPT in the workplace?

The use of chatbots in the workplace introduces risks of conversational AI leaks, where sensitive data provided to the chatbot may inadvertently become exposed. Information shared with the chatbot is transmitted to third-party servers for training purposes, potentially compromising confidential or personal data.

Can you provide examples of workplace incidents involving chatbots?

Workplace incidents have occurred where employees unintentionally disclosed sensitive employer information while using publicly available chatbots for tasks such as identifying code errors, optimizing source code, or generating meeting notes from recorded files.

Has there been an increase in conversational AI leaks with the rising popularity of chatbots?

Yes, the occurrence of conversational AI leaks has increased as chatbot usage grows. IBM's Data Breach Report 2023 reveals that the global average cost of data breaches reached a record high of $4.45 million between March 2022 and March 2023.

How can employers address the risks associated with chatbot usage?

Employers can consider options such as obtaining an enterprise license for ChatGPT or implementing policies and training to self-regulate AI tool usage when specific laws or regulations are absent.

What does OpenAI offer to enhance privacy and security in chatbot usage?

OpenAI has released an enterprise version of ChatGPT that allows individuals and employers to create their own chatbots while prioritizing the security and privacy of training information.

What is shadow IT, and why should employers be cautious of it?

Shadow IT refers to the unsanctioned usage of software or tools by employees, creating unofficial IT infrastructure parallel to the employer's approved infrastructure. Employers should be cautious of it, as it can introduce security vulnerabilities, data leaks, and intellectual property exposure.

How can employers navigate AI risks while maintaining data privacy and security?

Employers can adopt a tailored approach based on their level of AI integration, prioritize responsible AI practices, and utilize resources like ChatGPT's enterprise version to protect critical information.

Is there any legal assistance available for employers dealing with AI-related challenges?

Yes, ENS' team of expert lawyers specializing in Technology, Media, and Telecommunications, as well as labor law, have developed a Responsible AI toolkit to help clients navigate the world of AI while ensuring responsible and compliant practices.
