Meta AI Privacy Concerns Grow as User Prompts Appear Publicly
Meta AI, the artificial intelligence tool integrated across Facebook, Instagram, WhatsApp, and as a standalone product, is raising serious privacy concerns after it was discovered that some users’ prompts and responses are being publicly displayed without their full understanding. Despite Meta’s claim that chats are private by default, users can opt to share interactions on a public “Discover” feed—sometimes unknowingly. The BBC uncovered multiple instances where individuals posted sensitive queries, including academic cheating, personal identity questions, and explicit image generation requests. Some of these posts were traceable to users’ social media profiles through visible usernames and photos, exposing private behavior to public view.
While Meta does display a warning before users post, cybersecurity experts argue the design creates a misleading sense of privacy. Rachel Tobac, CEO of Social Proof Security, emphasized that if users don’t fully grasp when their activity becomes public, it presents a significant user experience and security flaw. Meta insists users are “in control” and can withdraw public posts at any time. Still, the ease with which private queries can become public—sometimes tied to a person’s real identity—highlights a troubling disconnect between user expectations and actual platform behavior. As Meta AI becomes more widely used, the issue is gaining attention in AI Business Magazine and among analysts tracking global AI trends. It reflects a broader pattern in recent AI news: as AI tools become more deeply embedded in daily life, concerns around data transparency, user consent, and digital safety are moving to the forefront.