Elon Musk, who founded his own AI company, has criticized Apple's recent decision to integrate ChatGPT into its AI offerings as an "unacceptable" security risk.
In a series of posts on X, formerly known as Twitter, Musk said Apple's Monday announcement posed a significant security violation.
On Monday, Apple revealed a suite of highly anticipated AI features — including ChatGPT — that it will soon integrate into its devices. But not everyone was thrilled at the news.
While some observers were excited at the prospect of, for example, drawing math equations on an iPad that could then be solved by AI, billionaire tech mogul Elon Musk called Apple’s inclusion of ChatGPT — which is developed by OpenAI, not Apple — an “unacceptable security violation.”
“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” he wrote in a post on X, formerly Twitter. Musk co-founded OpenAI, but stepped down from its board in 2018 and launched a competing AI company.
He said visitors to his companies “will have to check their Apple devices at the door, where they will be stored in a Faraday cage,” which is a shield that blocks phones from sending or receiving signals.
“Apple has no clue what’s actually going on once they hand your data over to OpenAI,” he wrote in a separate post. “They’re selling you down the river.”
Musk's posts also contained inaccuracies — he claimed Apple was "not smart enough" to build its own AI models, when in fact it has — prompting a community fact-check on X. Still, his privacy concerns spread far and wide.
But are those concerns valid? When it comes to Apple’s AI, do you need to worry about your privacy?
How privacy is built into Apple’s AI approach
Apple emphasized during Monday’s announcement at its annual developer conference that its approach to AI is designed with privacy in mind.
Apple Intelligence is the company’s name for its own AI models, which run on the devices themselves and don’t send information over the internet to do things like generate images and predict text.
But some tasks need beefier AI, meaning some information must be sent over the internet to Apple’s servers, where more powerful models exist. To make this process more private, Apple also introduced Private Cloud Compute.
When a device connects to one of Apple’s AI servers, the connection will be encrypted — meaning nobody can listen in — and the server will delete any user data after the task is finished. The company says not even its own employees can see the data that is sent to its AI servers.
The servers are built on Apple’s chips and use Secure Enclave, an isolated system that handles things like encryption keys, among other in-house privacy tech.
Anticipating that people might not take it at its word, Apple also announced that it will release some of the code powering its servers for security researchers to pick apart.
In a thread on X, Johns Hopkins computer science professor Matthew Green praised the company’s “very thoughtful design,” but also raised some concerns. Researchers won’t see the source code running on servers, for example, which Green wrote is “a little suboptimal” when it comes to investigating how the software behaves.
Importantly, users won’t be able to choose when their device sends information to Apple’s servers. “You won’t opt into this, you won’t necessarily even be told it’s happening. It will just happen. Magically. I don’t love that part,” Green wrote.
He explained that there may be many other flaws and issues that would be hard for security researchers to detect, but that ultimately, it “represents a real commitment by Apple not to ‘peek’ at your data.”
Could ChatGPT be a weak link?
Musk’s main point of contention was Apple’s upcoming integration of ChatGPT, the popular chatbot from OpenAI. While Apple’s own models will power most of what happens on your device, users can also choose to let ChatGPT handle some tasks.
ChatGPT has been the focus of privacy concerns from experts and regulators. Research has found, for example, that an earlier iteration of ChatGPT could be forced to divulge personal information scraped from the internet — such as names, phone numbers and email addresses — and included in its training data.
Anything a user asks ChatGPT is also vacuumed up by OpenAI and used to train the chatbot, unless they opt out. This has prompted major companies, including Apple, to ban or restrict the use of ChatGPT by employees. ChatGPT is also the subject of multiple regulatory probes, including by the Office of the Privacy Commissioner of Canada.
When reached for comment via email, Apple said that ChatGPT is separate from Apple Intelligence and that it is not on by default.
Additionally, as the company showed during Monday’s announcement, people who turn on the ChatGPT option are asked via pop-up notification every time if they’re sure they want to use it. As an extra layer of privacy, Apple says it “obscures” users’ IP addresses, and that OpenAI will delete user data and not use it to improve the chatbot.
Apple did not respond to questions about how it will verify that OpenAI is deleting user data sent to its servers.
In an emailed statement to CBC News, Apple said that people will be able to use the free version of ChatGPT “anonymously” and “without their requests being stored or trained on.”
However, Apple said users can choose to link their ChatGPT account to access paid features, in which case their data is covered under OpenAI’s policies, meaning requests will be stored by the company and used for training unless the user opts out.
"The data the AI receives is used to train the model," wrote Cat Coode, a Waterloo, Ont.-based data privacy expert and founder of the cybersecurity firm BinaryTattoo, in an email. "If you are feeding it personal information then it will take it."
Coode noted that Apple also collects data from users, but “historically ChatGPT has been less secure.”
When reached for comment, OpenAI spokesperson Niko Felix said that “customers are informed and in control of their data when using ChatGPT.”
“IP addresses are obscured and we don’t store [data] without user permissions,” Felix said. “Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.”
ChatGPT users with an account can opt out of their data being used for training purposes.
Apple Intelligence and ChatGPT on Apple devices aren’t just a test for AI tech, but also for new privacy approaches that are necessary to safely use large AI models over the internet.
Green, the computer science professor, wrote in his thread that this is the world of on-device AI we're moving toward.
“Your phone might seem to be in your pocket, but a part of it lives 2,000 miles away in a data center.”
Source: CBC