Tech giant Apple has reportedly banned the internal use of ChatGPT, OpenAI’s popular chatbot, according to an internal document obtained by The Wall Street Journal. The move comes amid concerns that information employees enter into the chatbot could expose the company’s trade secrets, putting Apple’s brand and bottom line at risk. ChatGPT uses natural language processing to respond to queries, and there are additional fears that inaccurate or misleading responses could disrupt the company’s business operations.
Apple has also restricted the use of another product, GitHub’s automatic coding tool Copilot, as part of its effort to maintain data secrecy and minimize the risk of leaks. The tech giant’s decision follows similar caution at Google, which has warned its employees to use AI chatbots, including its own Bard, with extreme care. In April, Samsung employees reportedly leaked sensitive information about the company’s semiconductors by entering it into ChatGPT.
Sources claim that Apple is quietly working on a ChatGPT competitor, though the company has yet to announce any plans for chatbots or AI. In the meantime, the ban on external AI tools such as ChatGPT and Copilot signals the company’s commitment to data privacy and protection. The move is likely to be welcomed by data privacy advocates, who have grown increasingly concerned about the use of AI tools in sensitive industries.
As AI chatbots become more widespread, companies must take steps to safeguard their data and minimize the risk of leaks. The restrictions imposed by Apple, and the caution urged at Google, reflect this growing concern and highlight the need for robust data protection measures wherever AI tools are used.