When Apple hosts its annual Worldwide Developers Conference, the software announcements usually generate excitement among technology enthusiasts. This year, however, there was a notable exception: Elon Musk.
Musk’s Concerns Over Privacy
The CEO of Tesla and SpaceX threatened to ban all Apple devices from his companies, claiming that a new partnership between Apple and the Microsoft-backed startup OpenAI could pose significant security risks. Apple announced that with its upcoming operating system update, Siri would be able to pull additional information from ChatGPT with the user’s permission.
“Apple has no idea what’s actually going on when they hand your data over to OpenAI,” Musk wrote on X. “They’re selling you out.”
Apple’s AI Integration
The partnership, part of the operating system update coming later this year, will allow Siri to ask iPhone, Mac, and iPad users for permission to use ChatGPT to answer their questions. Musk, however, considers this a major security breach.
“If Apple integrates OpenAI at the operating system level, Apple devices will be banned at my companies,” Musk wrote. “That is an unacceptable breach of safety.”
Apple’s Response and Data Protection
Apple has assured users that privacy protections are built into this feature. According to the company, IP addresses are hidden and OpenAI does not store requests. Apple also emphasizes that independent experts can inspect the code running on its servers to verify these protections.
Many of Apple’s AI features, collectively known as “Apple Intelligence,” run on the device itself, but some queries are processed in the cloud. Apple says this cloud-based processing is secure because the data is neither stored by nor accessible to Apple.
Mixed Reactions from Experts
Technology and security experts have mixed opinions about Musk’s claims. Some argue that there is no concrete evidence that the Apple-OpenAI partnership poses security risks.
“Like many things Elon Musk says, his claims are not based on technical reality, but rather on his political beliefs,” said Alex Stamos, Chief Trust Officer at SentinelOne and former Chief Security Officer at Facebook. Stamos praised Apple’s data protection efforts and the transparency they promise.
Rayid Ghani, a professor of machine learning and public policy at Carnegie Mellon University, agrees that the concerns are worth raising but says Apple’s claims should be verified. “We need to ask for evidence of how Apple ensures these protections are in place,” Ghani said. “Who is liable if something goes wrong: Apple or OpenAI?”
Implications for Apple Users
The ability for Apple users to link their ChatGPT subscription accounts to their devices raises additional questions about data collection. Pam Dixon, executive director of the World Privacy Forum, advises caution. “Linking your account to your cell phone is a big deal,” Dixon said. “I would only connect if there is more clarity about what happens to the data.”
Looking Ahead
As AI becomes more integrated into everyday life, the transparency and reliability of AI tools will be critical. “AI will be built into our devices and it will be everywhere,” said Dixon. “We need to be able to trust and verify these systems.”
Apple will need to ensure that its AI products avoid the kinds of problems other companies have encountered, such as the security concerns surrounding Microsoft’s Recall feature or the factual errors produced by Google’s AI search summaries. How Apple Intelligence performs during its beta phase and subsequent release will be critical in determining whether users adopt the new technology and upgrade their devices.