Leaked documents have confirmed that Apple, the world-renowned tech giant, has prohibited the internal use of ChatGPT-like technology due to an increased risk of data leaks.
Large Language Models (LLMs) have surged in popularity recently, so it should come as no surprise that Apple is looking into developing its own version. However, after assessing the dangers associated with such powerful and unpredictable technology, the company has decided to bar employees from accessing rival models.
According to The Wall Street Journal, the leaked documents detail Apple’s decision to withhold ChatGPT, Bard, and comparable LLMs from its employees. Anonymous sources have also revealed that Apple has begun work on its own LLM system, though no further details have been provided as of yet.
Several other large corporations, including Amazon and Verizon, have taken the same measures in regard to LLMs, which transmit whatever users type into them to external servers, where it may be retained or used for training. ChatGPT lets users disable its chat history feature to reduce the risk of information leaks, but because such safeguards are imperfect, organizations remain wary of potential security issues.
OpenAI has recently released an official ChatGPT app on the App Store, granting iPhone users access to the technology. However, numerous imitation apps are already available, some aimed at scamming users, and Apple has advised users to be careful when looking for ways to access the software.
Apple has long been at the forefront of the AI movement; its first foray into intelligent computing was the launch of Siri back in 2011. Advancements in computational photography on the iPhone further demonstrated how integral machine learning is to the company. LLMs are the newest evolution of this technology, building on the same concepts and strategies but at a far larger scale.
Recent reports have suggested that Apple is behind in the “AI race”, but that simply is not true. It is entirely plausible that Apple will reveal its own LLM system at WWDC in June, though given the speed at which the technology is developing, nothing is certain.
Apple’s fears over data leaking through LLMs are understandable; however, the root cause of data leakage remains the human element. Ensuring the most secure system possible would mean eliminating any potential for human error.