An Impact Team White Paper

How to AI.

How open-weight AI turned every laptop into a potential weapon of mass deception—and why no one can put the genie back in the bottle.

A few years ago, AI still felt like sci-fi. Now it’s just… Tuesday.

Models can pass exams, write code, solve your kid’s maths homework, and spit out film-quality images and video from a two-line prompt. The glossy AI assistants we used to see in movies? They’re basically here.

And the honest answer to “Can I have my own?” is: yes. If you’ve got the right hardware, anyone can. That’s because of a flood of open-weight and open-source models you can download, run locally, and fine-tune on your own data for very specific jobs.

That part is already happening at scale.
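
To make “run locally” concrete: below is a minimal Python sketch using the Hugging Face transformers library. The model name is purely illustrative – any open-weight checkpoint you can download and fit on your hardware works the same way.

    # Minimal sketch: running an open-weight model on your own machine with the
    # Hugging Face transformers library. The model name is illustrative; swap in
    # any open-weight checkpoint your hardware can hold.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open-weight model
        device_map="auto",  # place layers on available GPU(s), spill to CPU if needed
    )

    result = generator(
        "Explain the difference between open-weight and open-source AI models.",
        max_new_tokens=120,
    )
    print(result[0]["generated_text"])

A dozen lines, consumer hardware, no permission needed from anyone.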

But before we get into the fallout, it’s worth being precise about what we’re talking about.

Open-weight vs open-source: same hype, different risk

People tend to mash these terms together, but they’re not the same thing.

  • Open-weight models – like Llama, Mistral, Gemma, DeepSeek, Falcon – ship the final trained weights and usually the inference code. You can run them, customise them, and deploy them. What you don’t get is the original training code or the full training dataset – the guts of how they were built often remain proprietary.
  • Open-source models, strictly speaking, would expose the whole stack: code, data (at least in principle), and training recipes.

Right now, the energy – and the risk – sits with open weights, not true open source. There are no genuinely frontier-class fully open-source models in the wild. But there are plenty of powerful open-weight ones you can grab today.

Because they’re so capable and so widely available, they’re creating very real, very current problems.

The upside: an entire industry on top of open weights

Let’s be fair: this isn’t all doom.

The open-weight boom has enabled a legitimate ecosystem of companies that specialise in slices of the AI stack instead of burning billions training their own frontier model.

  • Fine-tuning specialists like Nous Research take base models such as Llama and push them hard for better dialogue, reasoning, or coding performance.
  • Product players like Perplexity combine these models with live web search and retrieval to build tools that feel more like “AI browsers” than chatbots.
  • Infra unicorns like Groq and Together AI solve the nasty problems of running these models at speed and scale, whether that’s with custom chips (Groq’s LPUs) or highly optimised cloud platforms.

In other words: open weights have democratised access to serious AI. You no longer need to be OpenAI-sized to build something impressive.
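
To give a sense of how light that “building” can be, here is a hedged sketch of the standard parameter-efficient fine-tuning (LoRA) pattern on top of an open-weight base model, using the Hugging Face transformers and peft libraries. The model name, target modules, and hyperparameters are placeholders, not any lab’s actual recipe.

    # Illustrative sketch: adapting an open-weight base model with LoRA so that
    # only a tiny set of adapter weights is trained. All names and numbers here
    # are placeholders, not a recipe from any specific company.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-3.1-8B"          # illustrative open-weight base model
    tokenizer = AutoTokenizer.from_pretrained(base)  # used to tokenise your own data
    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

    lora = LoraConfig(
        r=16,                                 # rank of the low-rank adapter matrices
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()        # typically well under 1% of all weights

    # From here, a standard supervised training loop on your own domain data
    # updates only the small adapter matrices – affordable for a small team.

The economics are the point: someone else has already paid for the pretraining, and what remains is a fine-tune that a modest GPU budget can cover.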

But the exact same openness that powers all this innovation also enables something much darker.

When “anyone can build with AI” includes the worst people

So what happens when the same models are pointed, deliberately, at harm?

Picture an election year.

Your feed is full of angry, emotional posts. Hundreds of thousands of accounts hammering the same talking points, sharing “leaks,” pushing clips that feel designed to make you furious. It looks like a huge grassroots movement.

It doesn’t have to be.

It could be one well-funded organisation running a customised open-weight model as a digital propaganda engine. That system spins up and manages an army of fake accounts, each with its own personality, posting schedule, and language style.

The AI understands context. It replies to comments with tailored arguments. It cites “sources” that look credible unless you dig several layers deep. It’s tuned to ride right up to the edge of platform rules – just toxic enough to shift opinion, not quite bad enough to trigger bans.

And then it goes further.

These systems can generate deepfake video and audio in local accents, mirroring the slang, humour, and cultural cues of the exact group they’re trying to influence. They can scrape your public social media and run hyper-personalised psychological operations against you and people like you.

At that point, this isn’t just spam. It’s cognitive warfare.

From propaganda to cognitive warfare

Traditional propaganda tries to change what you think, whereas cognitive warfare aims to change how you think.

It exploits bugs in the human operating system: our biases, our fear of missing out, our tendency to trust familiar faces, our inability to fact-check a firehose of information in real time. The goal isn’t just to sell you a story – it’s to erode your ability to trust anything.

And open-weight AI is the missing piece that makes this scalable.

For years, this kind of operation was constrained by human effort. You needed legions of trolls, content farms, and call centres. Now, one well-engineered system can impersonate thousands of “real people” at once, 24/7.

We’re not theory-crafting. We’re already seeing early versions of this.

Case studies: when the threat stopped being hypothetical

United States: the “phantom” candidate

Ahead of the 2024 New Hampshire primary, voters received robocalls in which President Biden appeared to tell them not to vote. It sounded like him. It wasn’t. It was a cheap AI voice clone – and it still took federal action to shut down.

At the same time, a Russian “Doppelganger” campaign went beyond fake articles. It used AI to recreate the entire look and feel of major news sites – think cloned versions of The Washington Post or Fox News – and filled them with anti-Ukraine stories that looked indistinguishable from the real thing at a glance.

Russia–Ukraine: the first AI war

Early in the invasion, hacked Ukrainian TV stations briefly ran a video of President Zelenskyy at a lectern, instructing his troops to surrender.

It never happened. It was a deepfake.

By today’s standards it was clunky, but it proved a chilling point: you can hijack the face and voice of a head of state and use it to try to break a country’s will.

Israel–Palestine: the “liar’s dividend” in action

During the Israel–Gaza conflict, reality and fabrication began to blur completely.

The “All Eyes on Rafah” image — a pristine, AI-generated camp scene — went mega-viral, shared tens of millions of times, shaping emotion and opinion around an event that never looked like that.

At the same time, genuine images of horrific violence were dismissed by many as “AI fakes.” That’s the liar’s dividend: once the public knows deepfakes exist, anyone can claim that inconvenient real footage is “just AI.”

The weapon is no longer just the fake. It’s the collapse of trust in anything that looks like evidence.

Influence operations at industrial scale

Major powers have noticed.

Open-weight models are being folded into state-sponsored influence operations to build what some analysts call “synthetic consensus”: flood the information space with bots until fringe views feel like the majority.

Examples:

  • The Russia-linked CopyCop network uses AI to rewrite legitimate news articles with a pro-Kremlin slant, then publish them on lookalike sites.
  • The China-linked Spamouflage network spews out endless comment spam to harass critics and amplify pro-China narratives. In some cases, they’ve used AI to create sexually explicit deepfakes of female politicians and journalists as a form of intimidation and reputational attack.

This isn’t sci-fi. It’s already part of the day-to-day information environment.

Cybercrime: when open weights arm tiny teams

If disinformation is the visible side, cybersecurity is the quiet, arguably more dangerous flank.

Open-weight models are force multipliers for attackers. A small Advanced Persistent Threat (APT) group no longer needs a floor of elite hackers. With the right model and training data, they can:

  • Generate sophisticated phishing campaigns tailored to specific industries, companies, or individuals.
  • Write, mutate, and obfuscate malware at speed.
  • Spin up convincing fake websites and payment portals that adapt on the fly to evade detection and takedown tooling.

What used to take months of R&D and significant money can now be packaged into “Crime-as-a-Service” offerings on the dark web.

We’re already seeing products like WormGPT and FraudGPT – chatbots built on open-weight models, tuned for criminal use, and sold on subscription. They help criminals write better scam emails, build more convincing fraud sites, and automate parts of their attacks.

Open weights have effectively put advanced offensive capability on the shelf.

How regulators are trying (and failing) to keep up

Different regions are approaching this tension between “open” and “safe” in very different ways.

European Union & United States

  • The EU AI Act carves out an apparent “safe harbour” for open-source models to support research – but that exemption disappears if a model is powerful enough to be considered “systemic risk” or is used in high-risk applications. In practice, serious open-weight developers are still facing heavy transparency and documentation requirements.
  • The US, via the NTIA and others, is currently resisting hard limits on open weights. The argument: the innovation upside still outweighs the documented risks. Instead, they’re increasingly using tools like export controls to stop powerful dual-use models from landing in the hands of adversaries.

China

China has chosen a very different path: tight domestic control, aggressive global release.

  • At home, “Interim Measures” require that all public generative models – including open-weight – align with “core socialist values” and pass security checks before release. That makes true fast-moving open development difficult.
  • Abroad, Chinese labs are pushing open-weight families like Qwen and DeepSeek into the global market, particularly in the Global South, as a way of setting technical standards and embedding themselves into other countries’ digital infrastructure.

Open weights have become a geopolitical instrument, not just a technical choice.

The security calculus has changed

Once you accept that powerful open-weight models are out in the wild, you’re forced into a new kind of realism.

You can’t regulate them out of existence without cutting yourself off from the global AI economy. They are essential building blocks for local innovation – the only way many countries and companies can realistically build AI tailored to their own languages, laws, and industries.

But that open door lets a cold wind in.

The same tools that power local startups and research labs also give small, hostile teams the ability to run operations that once required nation-state level capability. The buffer between “breakthrough” and “weapon” has basically disappeared.

From access control to digital immunity

So where does that leave policymakers and builders?

The focus can’t just be “who’s allowed to download a model” anymore. That ship has sailed.

The priority has to shift from controlling access to building resilience:

  • Invest in detection, attribution, and rapid response for deepfakes and influence operations (a toy provenance-signing sketch follows this list).
  • Treat disinformation defence and cyber-resilience as hard national security problems, not PR or comms challenges.
  • Support a healthy open-source / open-weight ecosystem domestically, so you’re not just consuming someone else’s infrastructure but shaping it.
  • Build norms, tooling, and regulation around how powerful open-weight models are trained, audited, documented, and released – even if you can’t fully control their spread once they’re out.
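
On the detection-and-attribution point, one useful building block is content provenance: signing media cryptographically at the point of release so its origin can later be verified. Real schemes (such as C2PA-style content credentials) carry far richer metadata; the toy Python sketch below, using the cryptography library, shows only the underlying primitive, with stand-in data and key handling.

    # Toy sketch of content provenance: a publisher signs a media file at release
    # so anyone can later check that a circulating copy matches what was actually
    # published. The payload and key handling are stand-ins, not a production design.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # Publisher side: generate a long-lived keypair once, sign each released asset.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    payload = b"raw bytes of the released video or image"  # stand-in for the real file
    signature = private_key.sign(payload)

    # Consumer side: verify a copy against the publisher's published public key.
    try:
        public_key.verify(signature, payload)
        print("Match: this copy is byte-for-byte what the publisher signed.")
    except InvalidSignature:
        print("No match: treat this copy as unverified.")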

We’re not going back to a world where only a handful of companies can train or run powerful models.

The question now is whether we grow the immune system to match the power of the tools we’ve just handed to everyone.