Elon Musk Might Be Wrong About the Risks of AI in Your Phone & Computer

Elon Musk raised concerns about Apple’s partnership with OpenAI, citing potential security and data privacy issues. However, his remarks stemmed from a misunderstanding of the partnership and Apple’s approach to artificial intelligence.

Musk criticized Apple on his social media platform X, claiming that Apple lacks the capability to develop its own AI and stating that he would not allow Apple devices in his facilities, fearing the deal would lead to a privacy disaster. His comments, however, appeared to overlook key details from the WWDC keynote, where Apple emphasized that requests are processed on-device or in its own private cloud, that interactions with ChatGPT are strictly opt-in, and that only individual prompts are sent to ChatGPT, not full conversations.

Users on X have noted and discussed Musk’s posts, flagging the misinformation and clarifying how the partnership actually works. Musk’s viewpoint may also be colored by his ongoing dispute with OpenAI, an organization he helped fund in its early years.

Musk treats every move by OpenAI as a potential data privacy risk, believing that anything shared with ChatGPT is used to further train the model. Both OpenAI and Apple, however, have clarified that this is not the case for requests sent through Siri.

Musk stated in his post that he would ban Apple devices from his companies if OpenAI is integrated at the operating system level, calling it an “unacceptable security breach.” Despite community efforts to correct this misconception, Musk has held firm to his claim of deep OS-level integration, and his supporters continue to repeat it.


Elon Musk also described Apple in a post on X as “too incompetent” to develop its own AI models. The critique came despite Apple’s research teams having published multiple models and the company stating that it built its entire Apple Intelligence framework on its own models. Musk argued that it is absurd to think Apple is incapable of creating its own AI models yet somehow capable of ensuring that OpenAI protects user security and privacy.

According to Musk, once data is handed over to OpenAI, Apple loses control over what happens to it, compromising user privacy. Apple, however, has explained in detail how its local models work, how it secures data sent to its cloud servers, and how it has optimized its models for efficiency. It has also addressed the security of ChatGPT interactions.

Apple’s collaboration with OpenAI works through an API that lets users hand off certain requests. Contrary to speculation from Musk and others, however, Apple does not use OpenAI’s servers or models to handle users’ data, including phone call transcripts or any of the other Siri tasks shown during the keynote or the State of the Union.

The ChatGPT integration is just one component, and it is opt-in. When a user asks Siri for something that cannot be handled on-device or by Apple’s private cloud model, Siri asks for permission to send the request to ChatGPT, and the user can accept or decline.
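To make that flow concrete, here is a minimal Swift sketch of how such a consent gate could work. It is purely illustrative: the type names, the routing heuristic, and the permission callback are assumptions made for the example, not Apple’s actual implementation or API.

```swift
import Foundation

// Hypothetical sketch of the opt-in flow described above. None of these types
// or functions are Apple's real API; all names are illustrative assumptions.

enum HandlingTier {
    case onDevice            // processed locally, never leaves the device
    case privateCloudCompute // Apple's own cloud model
    case chatGPT             // third-party handoff, requires explicit consent
}

struct SiriRequest {
    let prompt: String
}

// Decide where a request can be handled. The routing heuristic here is a
// placeholder; Apple has not published its actual criteria.
func classify(_ request: SiriRequest) -> HandlingTier {
    return request.prompt.count < 200 ? .onDevice : .chatGPT
}

// Gate any third-party handoff behind an explicit user prompt, mirroring the
// opt-in behaviour Apple described: only the prompt itself is forwarded.
func handle(_ request: SiriRequest,
            askPermission: (String) -> Bool,
            sendToChatGPT: (String) -> String) -> String {
    switch classify(request) {
    case .onDevice, .privateCloudCompute:
        return "Handled by Apple's own models."
    case .chatGPT:
        guard askPermission("Send this request to ChatGPT?") else {
            return "Request not sent; the user declined."
        }
        return sendToChatGPT(request.prompt)
    }
}

// Example usage with stubbed consent and ChatGPT calls.
let longPrompt = String(repeating: "draft a detailed reply to this email ", count: 10)
let reply = handle(SiriRequest(prompt: longPrompt),
                   askPermission: { question in
                       print(question)
                       return true // stand-in for the user tapping "Allow"
                   },
                   sendToChatGPT: { prompt in
                       "ChatGPT response to: \(prompt.prefix(40))…"
                   })
print(reply)
```

The point of this kind of design is that the third-party handoff sits behind an explicit branch that cannot be reached without the user’s approval, and only the prompt itself crosses that boundary.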

According to Apple and OpenAI, only the exact query is transmitted to ChatGPT, and the model is not trained on queries received through Siri or Apple Intelligence. This differs if the standalone ChatGPT app is used. With Siri, users stay in control and must grant permission before any of their information is shared. The integration is free to use without an account, and ChatGPT subscribers can link their accounts to access premium features within it.
