The New Shape of Risk: How Generative AI Is Changing the Security Landscape

A decorative image showing several cubes split into different smaller shapes.

As generative AI has shifted from selective experimentation to broad operational use, large language models (LLMs) now sit inside developer environments, support workflows, internal knowledge systems, and security tooling itself. Adoption has widened for both enterprises and consumers alike, and, unsurprisingly, have a whole new set of security patterns.

Though oftentimes single catastrophic failures are the types of stories that make the news (like an AI agent pushing code to production against its explicit instructions), the truth is that there’s a bigger narrative here. Generative AI has introduced a whole new way to work, and we’re seeing a set of recurring behaviors—ways that AI systems interact with data, instructions, and people—that either introduce new risks or enhance some tried-and-true bad actor tactics (like more convincing phishing attacks, for example).

This article focuses on seven patterns that have emerged in real deployments and documented incidents. Let’s get into it. 

1. Prompt injection and instruction hijacking

Prompt injection has matured from a research concept into a practical exploit vector. The issue is structural: LLMs interpret text holistically using tokenization, which makes it difficult to maintain a strict separation between instructions and data. When untrusted content is introduced into an AI system with elevated permissions, that ambiguity becomes exploitable.

Recent incidents show how this plays out in production tools. Researchers analyzing Microsoft Copilot demonstrated that carefully crafted inputs could override intended behavior, expose system prompts, or trigger unintended actions within the model’s sandbox.

The common thread is authority. When models are allowed to act on retrieved content or invoke downstream tools, text becomes a control surface.

2. Prompt poaching and peripheral exfiltration

Not all AI-related data loss requires access to the model itself. A recent malware campaign demonstrated how attackers can siphon AI conversations by compromising the surrounding ecosystem.

Malicious Chrome extensions posing as productivity tools were found harvesting prompts, responses, and browsing context from users interacting with AI assistants; the data was quietly sent to external servers.

These attacks target trust boundaries adjacent to AI systems rather than the models directly. Browser extensions, plugins, and integrations become collection points for high-value contextual data that did not previously exist in a single place. And, they’re often less controlled by enterprise IT teams compared with other types of software. 

3. AI-powered malware and ransomware

AI-assisted malware is no longer hypothetical. Security researchers have now documented ransomware that uses generative models as part of its operational logic.

One example: PromptLock, a ransomware strain that leverages LLMs to dynamically generate portions of its code and behavior during execution.

At the ecosystem level, threat intelligence reports show ransomware groups using AI to accelerate development, customize payloads, and craft tailored extortion communications. Akamai’s 2025 ransomware trends report documents LLM usage by active groups for both technical and social components of attacks.

It’s less about how it’s done and more about how fast it’s done: Iteration cycles are shorter, and adaptation happens more quickly.

4. Acceleration and competitive pressure in the ransomware economy

Even when AI is not embedded directly into malware, it influences the broader threat environment. Ransomware activity increased throughout 2025 despite arrests and takedowns; new groups emerged quickly to replace disrupted ones.

As we said above, speed matters here. Defensive models that assume time for analysis, tuning, and response are increasingly stressed by attackers who can prototype and redeploy faster than those cycles allow.

And it’s not just speed—the volume of (credible, real) attacks matters too. The truth of the game has always been that bad actors only have to succeed once whereas defenders have to succeed every time. If better ransomware is being produced more quickly, defenders are having to adapt just as (or more) quickly to a higher volume of attacks (which makes the demand for employees in the security industry that much more understandable). 

5. Semantic noise and operational fatigue

Generative AI produces a large volume of plausible output: summaries, recommendations, alerts, explanations. In isolation, that capability is helpful; in aggregate, it introduces a new operational burden.

Security teams report growing difficulty distinguishing signal from well-formed noise. In reality, this means that over-taxed employees are getting pinged while on-call far more. 

AI-generated conclusions often require human validation, but their tone and confidence can reduce scrutiny over time. That creates opportunities for malicious activity to hide inside outputs that appear reasonable and routine; or, on the flip side, for things like process and architecture misconfigurations to masquerade as security events by creating too many requests. 

This pattern does not map cleanly to a single exploit; it shows up as delayed detection, slower response, and missed anomalies.

6. Code supply chain risk from generated code

AI-generated code compounds familiar supply-chain issues. Generated snippets often compile cleanly, pass tests, and follow common patterns; they also tend to replicate insecure defaults or omit contextual safeguards.

As these patterns are reused across services, small mistakes scale quickly. Not only that, but basic parameters like privileging recency (e.g., new security patches) vs. commonality (e.g., the most often used code) can have major implications and be weighted differently in different tools. While there is demonstrated risk of malicious insertion, it’s also the normalization of fragile or incomplete logic through automation.

7. Potential human skill erosion as a force multiplier

One of the quietest risks is also the hardest to measure. As AI tools handle more analysis, summarization, and decision support, human operators spend less time interrogating raw data. That’s both a good and a bad thing—really, it begs the question of how we go about creating and applying expertise in a new and developing epistemological framework. (Wait, you thought engineering wasn’t philosophical?)

Over time, that shifts how teams validate outcomes and how comfortable they are challenging AI-generated conclusions. This erosion does not cause incidents by itself, but it can amplify the impact of every other failure mode.

Where this leaves us

Across these examples, a consistent theme emerges. Generative AI changes how authority, context, and action flow through systems. Many of the resulting failures are subtle and blend into normal usage patterns.

The next phase of response is already taking shape. Government agencies and standards bodies are beginning to formalize guidance on securing AI systems, managing AI-related risk, and adapting existing security practices to these new patterns.

That guidance belongs in its own discussion. For now, the takeaway is simpler: AI adoption has altered the shape of risk. 

About Stephanie Doyle

Stephanie is the Associate Editor & Writer at Backblaze. She specializes in taking complex topics and writing relatable, engaging, and user-friendly content. You can most often find her reading in public places, and can connect with her on LinkedIn.