Dr. Usama M. Fayyad, Executive Director of the Institute for Experiential AI at Northeastern University
Doha, Qatar: As artificial intelligence (AI) reshapes industries, it also fuels sophisticated cyber threats, warned Dr. Usama M. Fayyad, Executive Director of the Institute for Experiential AI at Northeastern University.
Speaking to The Peninsula on the sidelines of the Global Security Forum, Dr. Fayyad, one of the panellists at the three-day event that ended yesterday, noted that the public faces unprecedented challenges in distinguishing real from fake, from deepfake Zoom calls to AI-generated misinformation, prompting calls for robust solutions to safeguard digital interactions.
“Verifying identity is the cornerstone of combating AI-driven cyber threats. If you resolve reliable identity, you resolve 80 to 90 percent of the problems,” he said.
He added that current methods, like banks sending authentication codes to phones, are a start, but more advanced, real-time verification systems are needed.
“These could certify that a person, not an AI, is on the other end of a communication, thwarting scams that exploit replicated voices or video feeds. Cybersecurity, however, lags behind other fields in adopting AI,” he said. The IT expert, who is also a former chief data officer at Barclays Bank, noted that while malicious actors leverage AI to orchestrate attacks, security teams rely heavily on manual processes.
“Existing AI-based monitoring tools often overwhelm teams with alerts, making it hard to distinguish genuine threats from false alarms. There’s a huge opportunity to prioritize AI in elevating cybersecurity operations,” Fayyad urged, advocating for smarter tools to filter and prioritize alerts effectively.
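The triage problem Fayyad describes can be sketched in a few lines: score each alert, decay repeated or familiar signals, and surface only the highest-risk items. The fields and weights below are illustrative assumptions, not a real SIEM interface.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # e.g. "ids", "auth", "endpoint"
    severity: int    # 1 (low) .. 5 (critical)
    novelty: float   # 0.0 = seen many times before, 1.0 = never seen

def triage(alerts: list[Alert], top_n: int = 3) -> list[Alert]:
    # Weighted score: severity dominates, but frequently repeated
    # (low-novelty) alerts are down-weighted to cut false-alarm noise.
    scored = sorted(alerts,
                    key=lambda a: a.severity * (0.5 + 0.5 * a.novelty),
                    reverse=True)
    return scored[:top_n]
```

In production the score would come from a learned model rather than a fixed formula, but the principle, ranking instead of raising every alert equally, is the same.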
He noted that for now, statistical analysis can still detect AI-generated content, such as fabricated videos or text, but he cautioned that this advantage is temporary. “As AI evolves, its outputs will increasingly mimic human creations, closing the detection gap,” he said.
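One simple statistical signal of the kind alluded to here is lexical variety: machine-generated text has often been more repetitive than human prose. Real detectors rely on stronger measures such as language-model perplexity; the type-token ratio below is only a toy stand-in to illustrate the idea.

```python
def type_token_ratio(text: str) -> float:
    """Fraction of distinct words: a crude proxy for lexical variety."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0
```

As Fayyad cautions, any such fixed statistic is a temporary advantage: once generators are tuned to match human distributions, the signal disappears.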
He stressed the need to “use AI to fight AI,” urging investment in advanced cybersecurity tools to keep pace with rapidly improving threats.
For the average person, Fayyad offered practical advice: approach online content with skepticism. “Algorithms are getting better at simulating misinformation,” he warned, suggesting that offline media, like print, may regain trust as digital channels grow riskier. He also highlighted AI’s current limitations, describing it as a “stochastic parrot” that mimics without understanding. While AI is not inherently dangerous, he said, its capacity to automate routine tasks can be misused, from phishing emails to fraudulent transactions.