Connect with us

The Plunge Daily

Experts Warn: ChatGPT Atlas May Expose Users to Data Theft and Malware Risks

Experts Warn OpenAI ChatGPT Atlas May Expose Users to Data Theft and Malware Risks

Artificial Intelligence

Experts Warn: ChatGPT Atlas May Expose Users to Data Theft and Malware Risks

OpenAI’s newly launched ChatGPT Atlas browser is already under intense scrutiny from cybersecurity experts, who warn it could expose users to serious risks — including data theft, malware infections, and unauthorized account access.

Atlas, unveiled earlier this week, aims to revolutionize web browsing by integrating ChatGPT directly into a native browser. It can search the internet, plan trips, book hotels, and even perform tasks autonomously through its new “Agent Mode.” However, experts say this innovation comes with potentially dangerous vulnerabilities that hackers could exploit to turn AI agents against their users.

The Promise — and the Peril — of an AI Browser

ChatGPT Atlas introduces several groundbreaking features, such as browser memories, which allow ChatGPT to remember key details from a user’s browsing history, and an experimental Agent Mode, enabling it to click links, fill forms, and navigate websites autonomously.

While OpenAI promotes Atlas as a step toward a “super-assistant,” researchers warn that these same features expand the attack surface for malicious actors. The most concerning issue, experts say, is “prompt injection” — a type of cyberattack that manipulates AI systems using hidden text or code embedded in websites.

Hidden instructions could be disguised in white-on-white text or buried in webpage metadata, making them invisible to users but fully readable to the AI.

OpenAI Launches ChatGPT Atlas Browser

OpenAI Launches ChatGPT Atlas Browser

Early Exploits Already Detected

Security researchers and social media users have already begun demonstrating real-world exploit scenarios involving ChatGPT Atlas. One user showed how “clipboard injection” could make the AI unknowingly overwrite a user’s clipboard with malicious links, which, when pasted later, could redirect them to phishing or malware-laden sites.

Meanwhile, rival browser developer Brave published a report highlighting multiple risks that AI browsers like Atlas and Perplexity’s Comet face — including indirect prompt injections and image-based attack vectors, where malicious commands are hidden in images or code.

 

View this post on Instagram

 

A post shared by OpenAI (@openai)

OpenAI Responds to Security Concerns

OpenAI’s Chief Information Security Officer, Dane Stuckey, addressed the growing concerns on X (formerly Twitter), assuring users that the company had conducted extensive red-teaming and implemented new safeguards to detect and block malicious attacks.

“Our long-term goal is that you should be able to trust ChatGPT Atlas the same way you’d trust a competent, security-aware colleague,” Dane Stuckey said. However, he acknowledged that prompt injection remains an “unsolved frontier” and that adversaries are actively exploring ways to breach AI systems.

Atlas also includes user-protection measures such as “logged out mode,” “watch mode,” and restricted access to sensitive sites, which pause AI activity until the user confirms actions.

Experts caution that the integration of AI and browsing blurs the line between human intent and machine autonomy. MIT Professor Srini Devadas warned that giving AI access to personal credentials or browsing history could lead to catastrophic data leaks if compromised. “As AI agents gain more control, the risk isn’t just technical — it’s behavioral,” Devadas said. “Users may not realize how much data they’re exposing.”

  • Experts Warn OpenAI ChatGPT Atlas May Expose Users to Data Theft and Malware Risks
  • OpenAI Launches ChatGPT Atlas Browser
  • Experts Warn OpenAI ChatGPT Atlas May Expose Users to Data Theft and Malware Risks
  • OpenAI Launches ChatGPT Atlas Browser

Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

More in Artificial Intelligence

To Top
Loading...