Apple is continuing to prioritize data privacy and has banned employee use of external AI tools such as ChatGPT and Copilot. The decision is intended to protect the company's confidential data from being exposed through these outside services. It follows the lead of tech giants like Samsung, which halted the use of such tools after a data security incident. Other major companies, including JPMorgan Chase and Verizon, have also imposed restrictions on these AI tools.
Morgan Stanley, however, has offered a workaround for employees who want to use these AI tools in their work: a secure, private version of ChatGPT. This lets staff use the service without worrying about exposing sensitive data.
Apple is also developing its own technology in the AI space, led by John Giannandrea, senior vice president of Machine Learning and AI Strategy. Giannandrea, a relative newcomer to Apple, has a background in artificial intelligence and previously worked at Google. The scope and potential of his project are still unknown, but it is likely to become an important part of Apple's product suite and future strategy.
Putting AI to use in a secure environment is becoming an increasingly common challenge for companies, and Apple is taking extra precautions to ensure its data remains safe while it integrates AI into its workflow. The new policies may create roadblocks in the workplace, but they appear to be in the best interest of the company and its security.