Apple Prohibits Internal Use of ChatGPT Due to Risk of Leaks

According to leaked documents, Apple, the world-renowned tech giant, has prohibited internal use of ChatGPT and similar tools, citing an increased risk of data leaks.

Large Language Models (LLMs) have surged in popularity recently, so it should come as no surprise that Apple is looking into developing its own version. However, after assessing the risks associated with such powerful and unpredictable technology, the company has decided to ban employees from accessing rival models.

According to The Wall Street Journal, the leaked documents detail Apple’s decision to withhold ChatGPT, Bard, and comparable LLMs from its employees. Anonymous sources have also revealed that Apple has begun work on its own LLM system, though no further details are available as of yet.

Several other large corporations, including Amazon and Verizon, have taken similar measures against LLMs, which process and retain extensive amounts of user data. ChatGPT offers users the option to disable chat history to limit information leaks, but as these safeguards are imperfect, organizations remain wary of potential security issues.

The official ChatGPT app has recently been released on the App Store, granting users direct access to the technology. However, numerous imitation apps are already available, some aimed at scamming users, and Apple has advised employees to be careful when looking for ways to access the software.

Apple has long been at the forefront of the AI movement, with its first foray into intelligent computing being the launch of Siri back in 2011. Advancements in computational photography on the iPhone further demonstrated how integral machine learning is to the company. LLMs are the newest evolution of this technology, as they build on the same concepts and strategies, but on a far larger scale.

Recent reports have suggested that Apple is behind in the “AI race”, but that is not necessarily true. It is entirely plausible that Apple will reveal its own LLM system at WWDC in June, but given the speed at which the technology is developing, nothing is certain.

Apple’s fears over data leaking from LLMs are understandable; however, the primary cause of data leakage remains the human element. Building the most secure system possible would mean eliminating any potential for human error.
