SecurityWeek’s Cyber Insights 2026 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their insights. Here we explore the cyber regulations and compliance outlook for 2026, with the purpose of preparing cybersecurity teams for what lies ahead this year and beyond.
A Gordian Knot is a puzzle that cannot be unraveled, only destroyed. Our own Gordian Mess is an ever-growing tangle of regulations that can be neither unraveled nor destroyed.
Cyber regulations are where politics meets business – where business becomes subject to political realities.
For the last few years, politics has been shaped by geopolitical tension. Different regions and countries have become more nationalist in both politics and attitudes. Even the EU, which has traditionally been ‘liberal’, is now better described as center-right. The overall effect of this global growth in nationalism is that different regions, countries and states are increasingly assertive about their own digital sovereignty.
Regulations are how they create and maintain this digital sovereignty.
Strictly speaking, a jurisdiction’s authority ends at its own borders, but this principle is challenged by the global nature of the internet. Jurisdictions consequently claim authority over foreign firms that have any cyber presence, even without a physical one. The result is that a US company wishing to sell to an EU country must conform to the cyber regulations of the EU. The same is true for EU companies selling into the US – and indeed for any company that wishes to sell into any other foreign jurisdiction.
The result is that many hundreds of legal requirements, which overlap and sometimes conflict with each other, must be honored by all international organizations. It is a modern Gordian Knot that cannot be unraveled – which explains why we describe the current state of cyber regulations as a Gordian Mess.
The only alternative to successfully managing international cyber regulations would be an increasing balkanization of the internet. That would be equivalent to cutting the Gordian Knot.
Regulations: a moving Gordian Mess of requirements
Geopolitical tension has killed globalization. It has been replaced by national and local digital sovereignty, with national governments and local states focused on protecting their own citizens and their own digital assets in their own way.

“Across governments worldwide, national security, sovereignty and interventionism are dominating cyber policy and regulatory agendas,” explains Verona Johnston-Hulse, government affairs lead at NCC Group.
That’s at the political level. At the cyber level, the internet remains a global phenomenon offering a global market. Any organization wishing to offer its goods or services to this global market must necessarily conform to the regulations of multiple jurisdictions. These regulations overlap, are rarely identical, and sometimes conflict.
“For organizations operating internationally, the landscape is complex. Multiple jurisdictions mean multiple sets of rules and guidance, increasing the risk of non-compliance,” comments Craig Ingham, group information security & compliance director at Xalient.
The resulting complexity is severe. For example, “A healthcare data exchange must simultaneously satisfy HIPAA security requirements, GDPR data minimization, California CPPA rules, and conflicting breach notification timelines across dozens of jurisdictions. The disjointed landscape represents both reasonable sovereign authority (nations legitimately differ on privacy-security tradeoffs) and an unreasonable burden on global commerce where compliance costs favor incumbents over startups,” explains Dario Perfettibile, VP and GM of European operations at Kiteworks.
“They turn compliance into an engineering challenge by handling multi-jurisdiction data residency, model governance, and audit pipelines, which increases cost and latency but doesn’t necessarily improve security,” adds George Gerchow, faculty at IANS Research & CSO at Bedrock Data.

The difference between regulations and security is important. Regulations are a political means to protect people or advance economies, while cybersecurity is a commercial means to protect business. “Regulation can help drive behavior, but it cannot prevent breaches, and it cannot take the place of skilled cyber professionals doing what they know is necessary,” comments Marie Wilcox, VP of market strategy at Binalyze.
Regulations have become a Gordian Mess that can be neither untangled nor destroyed.
This basic mess is further complicated by vacillating national politics. A prime example, at the time of writing, is the increasing US political concern over US Big Tech being forced to comply with EU regulations, and being fined by Europe if they fail to do so. The US government has threatened to retaliate against European companies such as Spotify (headquartered in Sweden and registered in Luxembourg).
This raises a further question: can one jurisdiction enforce a ruling on an organization that has no formal presence beyond internet availability? Probably, but only partially – and 4chan is currently testing the limits.
A primary tenet of 4chan’s operation is anonymity. This makes it effectively impossible for the organization to police its content, which is a requirement for conformance with the UK’s Online Safety Act. The US-based 4chan ignored requests from the UK regulator, Ofcom, which eventually fined the organization £20,000 (not for its content, but for refusing to cooperate with the regulator).
4chan also ignored the fine, but has meanwhile launched a retaliatory lawsuit in the US, arguing that the regulation forces the organization to contravene the First Amendment (the internet itself being a US invention) and that the UK has no jurisdiction over 4chan.
“The refusal to pay the fine is pretty straightforward,” says Joe Kaufmann, global head of privacy & DPO at Jumio, “but Ofcom can eventually require UK internet service providers to block traffic to 4chan if they continue to refuse payment.”
He continues, “The law will almost certainly be enforced in the UK as any national law would. But 4chan has also brought a suit against Ofcom in US Federal court. It’s a somewhat interesting challenge to the extraterritorial applicability of international laws to US companies, particularly because it involves a constitutional rights defense. However, a precedent with international efficacy is relatively unlikely.”
A similar concern exists with the UK’s new age-verification requirement.
Potential deregulation is a further complicating factor. In November 2025, the FCC voted 2-1 to rescind a January 2025 ruling that interpreted the Communications Assistance for Law Enforcement Act (CALEA) as a legal mandate for US carriers to secure their networks. The original ruling was a response to the Chinese state-sponsored Salt Typhoon espionage campaign discovered in late 2024.
The rescinding vote, incidentally, was along party lines, further demonstrating that politics increasingly has the final word on regulations, and highlighting the separation between politics and cybersecurity.
“The FCC’s vote to dismantle baseline cybersecurity requirements for U.S. telecom carriers is a textbook example of policymaking completely divorced from operational reality. After a multi-year campaign like Salt Typhoon, where a state-sponsored threat actor silently compromised more than 200 telcos, the last thing the sector needs is a regulatory vacuum disguised as ‘deregulation’,” says Gabrielle Hempel, security operations strategist at Exabeam.
The injection of politics into cybersecurity regulation demonstrates that regulations aren’t simply a Gordian Mess; they are a moving target that commercial enterprises must somehow navigate.
Outliers
Age verification
Age verification is now a requirement in the UK for sites providing pornography or self-harm content. It originates from the Online Safety Act but was formalized when the ‘Protection of Children Codes of Practice’ came into force in July 2025.
The EU is in the process of implementing a similar but more extensive and formal age verification requirement, including a total ban on accessing social media for under-13s. It is in place, or being piloted, in several European countries, and is expected to be mandatory across all EU countries by the end of 2026.
Ransomware payments
Payment of ransoms has long been discouraged, but there is a growing trend to make it illegal. In the US, there is no federal law preventing ransom payment (unless the recipient is a sanctioned entity), while a few states have their own specific bans for some sectors.
The UK is progressing an outright ban on ransom payments by the public sector and critical national infrastructure (CNI). This was confirmed in July 2025 and will likely come into force during 2026.
The argument in favor of banning ransom payment is simple. If the criminals cannot make money from ransomware (extortion), they will stop doing it. But it is a delicate and difficult area. “Prohibiting the payment of ransom sounds good in theory,” says Pierre Samson, CRO at Hackuity. “But there is a real risk that this would drive the market underground with a black market for payment services, rather than eliminate it.”
E2EE backdoors
Governments have for several years been demanding the insertion of, and access to, backdoors in E2EE (end-to-end encrypted) services. The argument is that law enforcement agency (LEA) access to encrypted messages is necessary for national security and the prevention of serious crime. Most technologists dislike the concept, believing that any backdoor will inevitably fall into the hands of bad actors.
Ilia Kolochenko, CEO at Immuniweb, and cybersecurity partner at Platt Law, takes a pragmatic view.
“It is unlikely that countries will pass laws requiring mandatory backdoors, since most vendors would simply leave the market, and the country would revert to the Middle Ages. Instead of backdoors, law enforcement should use the currently available techniques of lawful hacking, cost-efficient bugging techniques, and time-tested oppressive interrogations to make suspects give up their passcodes. In most cases, including serious crime, this works fairly well,” he says.
But he also points out that while backdoors would simply make life easier for law enforcement, the lack of them won’t protect people if law enforcement really wants to get the data.
Regulating AI: a very knotty Gordian Mess
“In 2026, regulation will be one of the biggest forces shaping the future of AI, yet it will also be one of the messiest,” comments Chris Tait, Principal at Baker Tilly.
(We’ll largely ignore Trump’s EO, ‘Ensuring a National Policy Framework for Artificial Intelligence’, signed on December 11, 2025, since it includes specific carve-outs and is likely to be legally challenged by multiple states – especially California and Colorado. Interestingly, California’s governor has voiced a defense similar in concept to part of 4chan’s argument: basically, ‘we invented it, so we have the right to control its use in our own state’. We don’t yet know how the EO will pan out over 2026, so we’ll set it aside for now.)
The two primary problems in regulating AI as we know it today are that it is probabilistic in nature (meaning you cannot guarantee how it will respond to any specific input) and that it is advancing (and changing) with incredible speed. And yet regulate it we must (or at least we should). There have already been several cases, some involving minors, where chatbot output is implicated in subsequent (perhaps even consequent) suicide.
Tait summarizes the problems for AI regulation. “No single authority ‘owns’ AI oversight and the technology’s rapid spread is outpacing the ability to legislate effectively. Consumer-facing tools highlight the problem: from unregulated content generation to platforms like Grok AI producing inappropriate responses, the lack of guardrails is creating societal risks, especially for the younger generation.”
He continues, “Global inconsistencies only make things worse; what one country restricts, another allows. Add to that the privacy nightmare of users pasting sensitive data into public AI tools, with no clear framework for controlling where that information goes, it’s easy to see why regulators are scrambling.”
Kolochenko believes AI will create problems for governments. “Gen-AI currently cannot be effectively censored,” he says, “…but it can and does spread a lot of harmful, illicit and dangerous materials.”
This can, and already does, include the mass dissemination of disinformation by bots, aiming to cause social disarray and potential regime change.
The problem is similar to the use of E2EE – once available and distributed it is very hard to control. Governments have attempted to persuade the manufacturers to insert controls at source, and Kolochenko sees a potentially similar approach to regulating AI: “A de facto monopolization of governmental control over AI vendors, ensuring that no chatbot will ever do something that is prohibited by local law or unwritten custom.”
Agentic AI will be problematic. It is designed to be autonomous: to make its own decisions and, eventually, to carry out those decisions automatically without human intervention. But the developers of agentic systems don’t always know when their agents connect to which internal or external data sources; and agentic systems could, potentially, change themselves.
“The various approaches to enforcing AI security baselines range from regulation-first in the EU, to recommended guidelines in the UK, to an innovation-first federal stance followed by dispersed State led regulation in the United States,” says Kayla Underkoffler, director of AI security and policy advocacy at Zenity. “And the truth is, even with all this, no one is truly addressing the reality of autonomous agents already operating inside enterprises.”
First out of the block with major international AI regulation was the EU with the AI Act. “The EU AI Act is a huge step forward,” comments Martin Davies, Senior Audit Alliance Manager at Drata, “but I think 2026 will show just how unprepared a lot of organizations still are. We saw it with GDPR and NIS2 where businesses waited until the last minute, then realized how complex compliance really is. The difference this time is that AI is changing month to month, so the goalposts are constantly moving.”
Like most major regulations, it comes with significant extraterritorial reach – so US AI developers need to be aware of its applicability if they sell into the EU, or even use their AI’s output within it.
One area that remains confused is what the act terms ‘high-risk AI’. While much of the Act is already in force, this area is currently pending, and not due to become active until August 2, 2026. However, the November 2025 publication of a Digital Omnibus implies that work on aligning AI ‘high risk’ with GDPR ‘high risk’ is still ongoing and may not be complete before the end of 2027.
“There’s still a real lack of clarity around what counts as ‘high-risk AI’,” continues Davies. “The EU Code of Practice has been delayed, and member states will interpret the rules differently which will create a compliance divide across Europe. Some may over comply; others might take a wait-and-see approach.”
Given the global importance of using AI in business, and the political divide between a traditionally liberal EU and the almost extreme free market attitude of the US administration, AI compliance for international firms is going to be complex.
Managing compliance, now and in the future
Compliance, the demonstrable conformance with cyber laws and regulations, is getting more difficult – and it will continue to get harder for the foreseeable future. A primary cause is the geopolitical retreat from globalization into nationalism and the growing discordance between national data sovereignty and the global trading medium that is the internet. Every region, nation, and state wishes to protect its own citizens and its own economy in its own way.

“Over 160 privacy laws now exist globally, 18 US states have comprehensive privacy legislation, and 69% of organizations report regulations as too complex. All at a time when GDPR fines alone exceed $5 billion per annum,” comments Kiteworks’ Perfettibile. “Without international harmonization, organizations will increasingly need to rely on automated compliance technologies while facing the fundamental problem that contradictory legal obligations across jurisdictions have no technical solution.”
Meaningful international harmonization of cyber regulations is unlikely. “Governments and regulators will continue to tighten and diversify cyber and privacy rules, whether that be restrictions on ransomware payments, age-based AI access or simply updating the myriad of existing processes,” adds Michael Downs, VP at SecurEnvoy. “The difficulty comes with the patchwork nature of these laws and the independent structure with which they exist. National and sector-specific policies will continue to force multinational organizations to navigate overlapping and often conflicting mandates.”
One hundred per cent continuous global compliance is effectively impossible. “It is definitely getting more difficult for companies to manage compliance,” agrees Sharon Klein, partner and co-chair of privacy, security & data protection at law firm Blank Rome. “Companies often take an 80/20 approach, complying with the general principles of the various laws, or by attempting to comply with the more protective law and using that as the gold standard,” she continues. “For data security purposes, we have also seen a push to comply with industry accepted information security standards, such as NIST CSF, or obtaining third-party certifications for standards such as SOC 2 Type 2, ISO 27001, 27002 or 27017.”

The move toward focusing compliance on the major standards, and trusting they will satisfy the bulk of individual regulations, is common. “The constant churn of evolving regulations is driving businesses to take control by aligning to cyber, privacy, and AI frameworks, such as ISO 27001, 27701, and 42001, to provide a blueprint for scalable, internationally recognized compliance,” says Chris Newton-Smith, CEO at compliance platform IO. “This enables them to operate globally with only minor adaptations to meet local, regional or geographic differences.”
Perfettibile agrees: “Companies manage compliance through unified frameworks (ISO 27001, NIST, SOC 2) supplemented by jurisdiction-specific adaptations. Yet, this is growing exponentially harder.”
Xalient’s Ingham adds, “Multiple jurisdictions mean multiple sets of rules and guidance, increasing the risk of non-compliance. Standards such as ISO 27001, NIST, MITRE ATT&CK, and D3FEND provide structured, auditable, and adaptable frameworks to help guide organizations.”
After conforming to the standards, the question then becomes one of managing the necessary adaptations to comply with the specific regulations most pertinent to one’s own organization. “The storm is only building. AI, privacy, and cybersecurity mandates are colliding, creating a new era of regulatory complexity, one where even algorithms must explain themselves,” says Asha Kalyur, VP of marketing at Zenarmor. “The advantage will belong to those who turn compliance into a living system: continuous, adaptive, and coded into the fabric of their architecture.”
Murat Balaban, CEO at Zenarmor, adds, “Forward-looking teams are embracing ‘compliance-as-code’.”
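The idea behind ‘compliance-as-code’ is that policies are expressed as machine-evaluable rules rather than documents, so adherence can be checked continuously and produce auditable evidence. As a minimal illustration only – the policy names, resource fields, and rules below are hypothetical, not taken from any real framework or vendor tool – it might look something like this:

```python
# Hypothetical sketch of 'compliance-as-code': policies expressed as data,
# evaluated automatically against resource configurations (e.g. in CI or at
# runtime). All policy IDs, fields, and rules here are illustrative assumptions.

POLICIES = [
    {"id": "enc-at-rest",
     "description": "Storage must be encrypted at rest",
     "check": lambda r: r.get("encrypted", False)},
    {"id": "eu-residency",
     "description": "Resources holding EU personal data must live in EU regions",
     "check": lambda r: not r.get("holds_eu_pii") or r.get("region", "").startswith("eu-")},
]

def evaluate(resources):
    """Return a machine-readable audit report: one finding per resource/policy pair."""
    findings = []
    for res in resources:
        for policy in POLICIES:
            findings.append({
                "resource": res["name"],
                "policy": policy["id"],
                "compliant": bool(policy["check"](res)),
            })
    return findings

if __name__ == "__main__":
    inventory = [
        {"name": "db-1", "encrypted": True, "holds_eu_pii": True, "region": "eu-west-1"},
        {"name": "bucket-7", "encrypted": False, "holds_eu_pii": True, "region": "us-east-1"},
    ]
    for finding in evaluate(inventory):
        print(finding)
```

The point of the pattern is the output: a continuous stream of structured findings that can serve as the “real-time proof” regulators increasingly expect, rather than a snapshot produced once a year for an audit.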
Larry Chinski, chief strategy officer at One Identity, continues: “Compliance teams can no longer rely on static annual audits. Instead, they need living systems that show governance in action. The same technologies used for identity management, like least privilege, and continuous verification, will naturally become thought of as compliance tools in their own right, capable of producing the real-time proof regulators are asking for. By the end of 2026, ‘proof-based governance’ will be the new standard.”
This last comment hints at a future, albeit ironic, solution to the worsening compliance situation: AI – or more specifically, agentic AI. Artificial intelligence is disrupting business everywhere, causing new problems and solving others. AI regulation will be problematic, both in its framing and in its enforcement. But while AI creates one compliance problem, it also offers a solution to its own and the wider complexity of regulatory compliance.
Moiz Virani, CTO and co-founder at Momentum, explains. “The future points toward increased use of AI-driven compliance tools that will make the management of this complexity easier.” Firstly, he suggests AI for regulatory mapping: “LLMs can ingest new regulations and automatically map specific requirements to existing internal controls, identifying gaps in real-time.”
Secondly, continuous auditing: “Agentic systems can continuously monitor infrastructure and data flows to ensure ongoing adherence to policies – for example, checking data residency for GDPR – and generate instant, auditable reports.”
Thirdly, automated policy enforcement: “AI-native security tools will enforce controls based on detected regulatory context; for example, auto-redacting PII when data is moved across jurisdictions that forbid it.”
Ultimately, he claims, “While the regulatory landscape itself will get harder and more fragmented, the tools and processes for managing compliance will become significantly easier, faster, and more accurate due to AI and automation.”
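Virani’s third point – enforcing controls based on detected regulatory context, such as auto-redacting PII before data crosses a jurisdiction that forbids its transfer – can be sketched very simply. The jurisdiction rules and regex patterns below are illustrative assumptions; a production system would use far richer detection (for example, ML-based entity recognition) and a real policy source:

```python
import re

# Minimal rule-based sketch of context-aware policy enforcement: redact PII
# classes that are banned from a given cross-border transfer route. The
# transfer-ban table and patterns are hypothetical, for illustration only.

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical policy table: which PII classes may not leave on each route.
TRANSFER_BANS = {
    "eu->us": {"email", "ssn"},
    "us->us": set(),  # domestic transfer: nothing redacted in this sketch
}

def redact_for_transfer(text, source, destination):
    """Redact any PII classes banned for this source->destination route."""
    banned = TRANSFER_BANS.get(f"{source}->{destination}", set())
    for pii_class in sorted(banned):  # deterministic order of application
        text = PII_PATTERNS[pii_class].sub(f"[REDACTED:{pii_class}]", text)
    return text

print(redact_for_transfer("Contact alice@example.com, SSN 123-45-6789", "eu", "us"))
```

The same gate could sit in front of an outbound API, a file-transfer pipeline, or an AI agent’s tool calls, with the route context supplied at runtime rather than hard-coded.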
Related: Trump Signs Executive Order to Block State AI Regulations
Related: New York Seeking Public Opinion on Water Systems Cyber Regulations
Related: The Hidden Cost of Compliance: When Regulations Weaken Security
Related: California Advances Unique Safety Regulations for AI Companies

