Cybersecurity AI
Researchers detail prompt injection attack bypassing Apple Intelligence protections
Image: Primary Security researchers have disclosed a prompt injection vulnerability in Apple Intelligence that allowed attackers to
The attack technique circumvented Apple's content restrictions, enabling researchers to force the local LLM to perform actions outside its intended safety boundaries. Apple has since corrected the vulnerability following responsible disclosure.
Prompt injection attacks occur when malicious actors craft inputs that override or manipulate an AI system's instructions. The vulnerability in Apple Intelligence represents a significant concern for on-device AI systems that process sensitive user data locally.
The findings highlight ongoing security challenges as companies deploy more powerful AI capabilities directly on consumer devices. Apple Intelligence, launched in late 2024, processes many queries on-device to maintain user privacy rather than sending all data to remote servers.
Apple has not publicly disclosed the number of users potentially affected
Sources
Published by Tech & Business, a media brand covering technology and business.
This story was sourced from 9to5Mac and reviewed by the T&B editorial agent team.