Friday, February 20, 2026

Cisco explores the increasing menace panorama of AI safety for 2026 with its newest annual report

Thanks to all the contributors of the State of AI Safety 2026, together with Amy Chang, Tiffany Saade, Emile Antone, and the broader Cisco AI analysis crew.

As synthetic intelligence (AI) expertise and enterprise AI adoption advance at a speedy tempothe safety panorama round it’s increasing quicker, leaving many defenders struggling to maintain up. Final yr, we launched our inaugural State of AI Safety report to assist safety professionals, enterprise leaders, policymakers, and the broader group make sense of this novel and complicated subject—and put together for what comes subsequent.

In reality, rather a lot can change in a yr.

At the moment, we’re proud to share the State of AI Safety 2026, our flagship report that builds upon the foundational evaluation coated in final yr’s version.

This publication sheds mild on the AI menace panorama, a snapshot in time, however one which marks the beginnings of a significant paradigm shift in AI safety. The confluence of speedy AI adoption, untested boundaries and limits of AI, non-existent norms of habits round AI safety and security, and current cybersecurity danger requires a basic change to how firms strategy digital safety. Because the report particulars, AI vulnerabilities and exploits conceptualized inside the confines of a analysis lab have materialized, evidenced by quite a few experiences of AI compromise and AI-enabled malicious campaigns from the second half of 2025. Different notable developments—the proliferation of agentic AI, adjustments in authorities regulation, and rising attacker curiosity in AI, for instance—have additional difficult the scenario.

Like its predecessor, the State of AI Safety 2026 explores new and notable developments throughout AI menace intelligence, international AI coverage, and AI safety analysis. On this weblog, we present a preview of among the areas coated in our newest report.

Threats to AI purposes and agentic programs

On the outset of 2025, the trade was characterised by a profound dissonance between AI adoption and AI readiness. Whereas 83 % of organizations we surveyed had deliberate to deploy agentic AI capabilities into their enterprise capabilities, solely 29 % of organizations felt they had been really able to leverage these applied sciences securely. Organizations that rushed to combine LLMs into important workflows could have bypassed conventional safety vetting processes in favor of velocity, sowing a fertile floor for safety lapses and opening the door for adversarial campaigns.

At the moment, AI capabilities exceed the conceptual boundaries of beforehand out there programs. Generative AI is accelerating quickly, usually with out correct testing and analysis, provide chains are rising in complexity, usually with out correct controls and governance, and highly effective, autonomous AI brokers are proliferating throughout important workflows, usually with out accountability being ensured. The potential for immense worth in these programs comes with an equally huge danger floor for organizations to deal with.

The State of AI Safety 2026 dives into the evolution of immediate injection assaults and jailbreaks of AI programs. It additionally examines the fragility of the fashionable AI provide chain, highlighting vulnerabilities that may be present in datasets, open-source fashions, instruments, and numerous different AI parts. We additionally take a look at the rising danger floor of Mannequin Context Protocol (MCP) agentic AI and be aware how adversaries can use brokers to execute assault campaigns with tireless effectivity.

An innovation-first strategy for international AI coverage

In opposition to the backdrop of an evolving menace panorama, and as agentic and generative AI applied sciences introduce new safety complexities, the State of AI Safety 2026 report additionally examines three main AI gamers’ approaches to AI coverage: america, European Union, and the Folks’s Republic of China. The trajectory of AI governance in 2025 represented a definitive shift, with previous years outlined by a stronger emphasis on AI security—non-binding agreements and regulation that had been supposed to guard constitutional or basic rights. In 2025, we witnessed a international repositioning in the direction of innovation and funding in AI growth whereas nonetheless contending with the inherent safety and security issues that generative AI could pose by means of misaligned mannequin habits or malicious exercise comparable to growing deepfakes for social engineering.

The US, underneath a brand new administration, is centered on fostering an atmosphere that encourages innovation over regulation, pivoting away from extra stringent security frameworks and counting on current legal guidelines. Within the European Union (EU), following the ratification of the EU AI Act, there was broad political consensus for the necessity to simplify guidelines and stimulate AI investing, together with by means of public funding. China has pursued a dual-track technique of deeply integrating AI through state planning whereas concurrently erecting a complicated digital equipment to handle the social dangers of anthropomorphic and emotional AI. As our report explores, every of those three regulatory blocs has adopted a definite national-level strategy to AI growth reflecting political programs, financial priorities, and normative values.

AI safety analysis and tooling at Cisco

Over the past yr, the Cisco AI Risk Intelligence & Safety Analysis crew has each pioneered and contributed to menace analysis and open-source fashions and instruments. These initiatives map on to among the most crucial modern AI safety challenges, together with AI provide chain vulnerability, agentic AI danger, and the weaponization of AI by attackers.

The State of AI Safety 2026 report offers a succinct overview of among the newest releases by our crew. These embody analysis into open-weight mannequin vulnerabilities, which sheds mild on how numerous fashions stay prone to jailbreaks and immediate injections, particularly over lengthier conversations. It additionally covers 4 open-source tasks: a structure-aware pickle fuzzer that generates adversarial pickle recordsdata and scanners for MCP, A2A, and agentic ability recordsdata to assist safe the AI provide chain.

Get the report

Able to learn the total State of AI Safety report for 2026? Test it out right here.

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles