Apple Prohibits Internal Use of ChatGPT Due to Risk of Leaks

Leaked internal documents confirm that Apple, the world-renowned tech giant, has prohibited employees from using ChatGPT and similar tools due to an increased risk of data leaks.

Large language models (LLMs) have surged in popularity recently, so it should come as no surprise that Apple is exploring its own version. After assessing the risks associated with such powerful and unpredictable technology, however, the company has decided to bar employees from using rival models.

According to The Wall Street Journal, the leaked documents detail Apple's restriction of ChatGPT, Bard, and comparable LLMs for its employees. Anonymous sources have also revealed that Apple has begun working on its own LLM system, but no further details have been provided yet.

Several other large corporations, including Amazon and Verizon, have taken similar measures with LLMs, which may retain and learn from the data users submit. ChatGPT lets users disable chat history so that conversations are not kept for training, but because these controls are imperfect, organizations still fear potential security issues.

OpenAI recently released an official ChatGPT app on the App Store, giving users direct access to the technology. However, numerous imitation apps are already available, some aimed at scamming users, and Apple has advised employees to be careful when looking for ways to access the software.

Apple has long been at the forefront of the AI movement, with its first foray into intelligent computing being the launch of Siri back in 2011. Advancements in computational photography on the iPhone further demonstrated how integral machine learning is to the company. LLMs are the newest evolution of this technology, building on the same concepts and strategies but at a far larger scale.


Recent reports have suggested that Apple is behind in the "AI race," but that is not necessarily the case. It is entirely plausible that Apple will reveal its LLM system at WWDC in June, though given how quickly the technology is developing, nothing is certain.

Apple's fears over data leaking through LLMs are understandable; however, the root cause of most leaks remains the human element. Ensuring the most secure system possible would mean eliminating any potential for human error.
