fectly homemade guns or bombs. Nor was the internet always at the root of radicalisation. What distinguished attackers was the speed with which they moved from embracing a new cause to acting violently.
These latest threats were harder to spot. Victims died without warning. Their killer might be a lone fanatic who had gathered weapons and tweeted threats in hiding.
The struggle to find a successful method of “triaging” radicals was more urgent than ever. The state had to prevent attacks while abiding by the rule of law, intervening only when legally justified and necessary. And politicians grappled with where to draw the line.
Last year a lone attacker stabbed people on a tram in Utrecht, the Netherlands
But the rise of ISIS and its offshoots led to a landmark development in counter-terrorism: it ushered in the era of predictive artificial intelligence, or AI.
Gradually refined by private-sector companies such as Google and Amazon to anticipate what their customers might want to buy, the technology was adapted to predict criminal activity. Security bodies began to experiment with companies offering the promise of identifying potential mass killers among the masses of posts and tweets they pumped out.
With social media platforms flooded by violent outbursts from young potential mass killers, and with terrorists themselves baiting policy-makers and the authorities, there was no longer room to ignore the firehose of information.
What finally happened in the 2020s was a major shift toward more preventative counter-terrorism and less reactive policing. Intelligence agencies did not rely on AI alone; in the rush to prevent a tragedy, they still depended on human analysts, as they always had.
As Peter Clarke says, the biggest lesson learned from the carnage of 7/7 was clear: “If you don’t have the information, you can’t make the choice for what action to take.”
Read the full article from the BBC.