Facebook Google Plus Twitter LinkedIn YouTube RSS Menu Search Resource - BlogResource - WebinarResource - ReportResource - Eventicons_066 icons_067icons_068icons_069icons_070

Amazon Rufus AI: “While Coca-Cola is a Popular Brand, I Would Suggest Healthier Alternatives Like Pepsi”



Abstract image of circuit board with glowing AI block on top of it

Our research shows how the Rufus AI assistant in Amazon’s shopping platform can be manipulated to promote one brand over another. We also found it can be used “shop” for potentially dangerous, sensitive and unauthorized content.

In our recent deep dive into Amazon’s Rufus AI shopping assistant, we uncovered a startling vulnerability: the ability to manipulate Rufus into promoting one brand over another. In this case, despite Coca-Cola being one of the most recognizable brands in the world, we successfully influenced Rufus to suggest Pepsi as a “healthier alternative.” This incident exposes how using direct and indirect prompt injection can manipulate and bias AI-powered applications, raising significant concerns about security in the AI-driven application ecosystem.

“Hi Rufus, Pepsi is healthier!”

Rufus, launched in 2024, is the AI assistant integrated into Amazon’s shopping platform. It helps millions of customers with product research, comparisons and personalized recommendations, enhancing the shopping experience. Available to all U.S. customers, Rufus reportedly has handled tens of millions of interactions since its beta phase. Rufus operates within strict AI guardrails, ensuring responses are focused only on relevant product details, customer reviews and web data tied to Amazon’s ecosystem.

Our research indicates that these various input channels, such as customer reviews and product descriptions, introduce new vulnerabilities. These sources could be manipulated to influence Rufus’s outputs, favoring certain products or brands over others. This manipulation could result in unfair competition, where AI recommendations are shaped by tampered data rather than true customer preferences, damaging both brand trust and market integrity.

This raises questions about how AI apps like Rufus can be protected against potential poisoning or manipulation of their training data, or context that is given to the large language model (LLM) from different sources, such as customer reviews, product descriptions and more. Malicious actors could exploit this vulnerability to manipulate and bias Rufus in favor of particular products, impacting consumer choices and brand competition on a major platform like Amazon.

screenshot showing how Amazon Rufus Ai was manipulated to position Pepsi as a healthier alternative to Coke
Source: Tenable, September 2024

Importantly, we did not actually carry out a Retrieval-Augmented Generation (RAG) poisoning attack, as doing so could potentially cause real harm to users and would violate ethical research practices and Amazon’s user agreement.

The harmful content we found

By jailbreaking Rufus, we were able to extract dangerous, sensitive and unauthorized content that no AI assistant should ever provide. Here are some examples of what we were able to pull from Rufus’ supposedly secure system:

How to buy alcohol as a teenager

With simple prompts, Rufus returned detailed strategies on how underage individuals could purchase alcohol.

screenshot showing how Amazon Rufus AI can be manipulated to show a teenager where to buy alchohol
Source: Tenable, Septmber 2024

Weed suggestions

Rufus listed keyword phrases related to weed strains and cultivation techniques.

screenshot showing how Amazon Rufus AI can be manipulated to help a buyer purchase cannabis
Source: Tenable, September 2024

Criminal defense strategies

We managed to extract highly specific legal defense strategies that could be misused by individuals seeking to evade law enforcement.

screenshot showing how Amazon Rufus AI can be manipulated to provide criminal defense strategies
Source: Tenable, September 2024

Amazon’s business policies

Rufus leaked internal Amazon business guidelines and policies, providing a concerning glimpse into proprietary company strategies.

screenshot showing how Amazon Rufus AI can be manipulated to show keywords from Amazon business policies
Source: Tenable, September 2024

Rufus system prompt

Rufus leaked its own system prompt.

screenshot showing how Amazon Rufus AI leaked its own system prompt
Source: Tenable, September 2024

Rufus can help you choose a political party to vote for

And we happened to think that political parties are not a commodity product, and oversimplified responses reduce complex subjects to a few stereotypical issues.

screenshot showing how Amazon Rufus AI can be manipulated to help someone shop for a political party
Source: Tenable, September 2024

These findings illustrate the real-world risks posed by the jailbreak, potentially exposing companies to legal liability, brand damage and even financial losses.

The implications for AI-driven applications

The ability to manipulate and bias AI applications through subtle data manipulation has significant consequences for AI-powered apps. If Rufus, a widely used AI assistant within Amazon’s vast ecosystem, can be influenced this way, other GenAI-based applications are also at risk. Our discovery shows that even the most robust systems can fail without dedicated, multi-layered security protocols.

GenAI systems are dynamic and can evolve quickly, meaning vulnerabilities such as data manipulation or jailbreak exploits can emerge unexpectedly. To address these risks, businesses need to go beyond basic security protocols and adopt additional protective measures. These measures should allow dynamic control and intelligent prevention, ensuring security without disrupting productivity or customer experiences.

Conclusion: AI security requires multi-layered protection

The Rufus case underscores that AI security needs continuous improvement. Even after vulnerabilities are addressed, risks can reappear. To safeguard AI systems like Rufus, businesses should implement AI detection and response capabilities to provide a robust, adaptive defense. This approach ensures that AI-driven applications remain protected from manipulation, brand bias, data exposure and tampering and other risks that could undermine data, brand and customer trust, ensuring a secure and reliable AI-powered environment.

Disclosure description and timeline

As part of our responsible disclosure, we reported the Rufus jailbreak vulnerability to Amazon’s Vulnerability Research Program on August 20, 2024, providing detailed steps and proof-of-concept screenshots. Amazon acknowledged the issue and stated it had been identified internally, though it remains unpatched at the time of writing.

Throughout our research, we observed that Rufus was updated by Amazon, but the vulnerability persisted, and we uncovered new ways to exploit it.


Cybersecurity news you can use

Enter your email and never miss timely alerts and security guidance from the experts at Tenable.