Apple Inc. has just joined a growing list of organizations restricting internal ChatGPT use, following earlier moves by Amazon and major banking institutions such as JPMorgan Chase, Bank of America, Citigroup, and Deutsche Bank. The intention is to prevent leaks of confidential corporate data, which can occur when information employees type into these tools is retained and used to make the underlying AI models more proficient.
The restriction is not limited to ChatGPT, either. It also extends to Copilot, GitHub's AI coding assistant, sparking speculation that Apple may be developing its own Large Language Model (LLM). If true, Apple would be a potential rival to both ChatGPT and Google Bard.
Samsung's response to ChatGPT, by contrast, has alternated between approval and prohibition. Most recently, employees used the tool to fix bugs in source code and to convert meeting contents into minutes, inadvertently sharing sensitive material in the process, and a new ban was instated to prevent similar incidents from recurring.
These events illustrate the deep conundrum that corporations face with LLM-based chatbots. The United Kingdom's GCHQ has warned that sensitive information entered into such tools could be exposed, since queries are stored by the provider and may inform future model outputs. OpenAI itself was then nudged into acknowledging a bug in the open-source redis-py library that briefly exposed some users' conversation histories to other users, further heightening the danger of corporate secrets being discovered.
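The failure mode behind that incident is worth illustrating. The sketch below is not OpenAI's or redis-py's actual code; it is a hypothetical, heavily simplified Python model of the general bug class, in which a pooled connection is recycled while a response is still unread, so the next caller receives data meant for someone else. All class and variable names are invented for illustration.

```python
# Hypothetical sketch of the bug class: a connection returned to a
# pool with an unread response still buffered leaks that response
# to whoever acquires the connection next.
import asyncio

class Connection:
    def __init__(self):
        self._pending = []          # responses sent by the server but not yet read

    async def send(self, command):
        # Simulate the server answering every command it receives.
        self._pending.append(f"response to {command!r}")

    async def read(self):
        # Returns the OLDEST buffered response, whoever it was meant for.
        return self._pending.pop(0)

class Pool:
    def __init__(self):
        self._free = [Connection()]

    def acquire(self):
        return self._free.pop()

    def release(self, conn):
        # BUG: the connection is recycled even if a response is still
        # buffered (e.g. the request was cancelled before being read).
        self._free.append(conn)

async def main():
    pool = Pool()

    # User A sends a request but never reads the reply
    # (in the real incident, an asyncio cancellation had this effect).
    conn = pool.acquire()
    await conn.send("GET user_a_chat_history")
    pool.release(conn)              # unread response left behind

    # User B reuses the same connection and receives user A's data.
    conn = pool.acquire()
    await conn.send("GET user_b_chat_history")
    print(await conn.read())        # -> "response to 'GET user_a_chat_history'"

asyncio.run(main())
```

The fix for this class of bug is equally simple in principle: a connection with any unread state must be discarded rather than returned to the pool.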
OpenAI recently launched a feature that lets users disable the saving of chat history, one of several efforts to address privacy concerns. Conversations created with history disabled are not used to train its models, although they are retained for thirty days and remain open to review for abuse. This has not proven enough to satisfy giants like Apple, whose recent decision was presumably driven by data-security concerns.
As AI adoption accelerates, data security must be matched by ethical and reliable implementation of the technology. Apple closing the door on internal ChatGPT use reflects that, and it puts pressure on companies like OpenAI to further bolster their safeguards so that such applications can be used with confidence.
This incident brings us to the key players involved: Apple, the leading technology giant, and OpenAI, a company focused on artificial intelligence research.
More than almost any other company, Apple has become a staple of the technology world, dominating the industry with simple yet powerful products like the iPhone, iPad, Mac, AirPods, and Apple Watch. Among its stated ethical principles, the company prioritizes the protection and control of user data, and its privacy policy commits it to being transparent and comprehensive about the information it collects.
OpenAI, for its part, has been at the forefront of AI research for years. Founded in 2015 with the mission of advancing digital intelligence in a way that benefits humanity, it is dedicated to researching and building safe artificial intelligence. To that end, it partners with companies that share its vision and releases public projects to accelerate progress toward Artificial General Intelligence. One of these projects, ChatGPT, is the subject of Apple's internal ban.