FIU Researchers Develop AI Protection Against Data Poisoning

August 13, 2025
Researchers at Florida International University have introduced a new method to protect AI systems from data poisoning attacks by combining federated learning and blockchain technology.

The approach, announced in a university press release, integrates federated learning with blockchain technology to detect and discard malicious data before it can compromise AI models.

Data poisoning involves inserting false information into datasets used for training AI, potentially leading to dangerous outcomes such as autonomous vehicles ignoring traffic signals. The FIU team, led by Hadi Amini, has addressed this threat by using federated learning, which allows AI models to train across multiple devices without centralizing sensitive data. However, federated learning alone is vulnerable to poisoned updates.
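The federated setup described above can be sketched in a few lines: each client takes a training step on its own private data, and only the resulting model weights, never the raw data, are sent to a server that averages them. This is a minimal illustration of plain federated averaging (FedAvg), not FIU's specific system; the linear model, learning rate, and client data here are invented for the example.

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1):
    """One local training step on a client's private data
    (a single gradient step for a linear model, for illustration)."""
    preds = features @ weights
    grad = features.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(client_weights):
    """Server-side aggregation: plain FedAvg is the mean of client models."""
    return np.mean(client_weights, axis=0)

# Each client trains locally; only the updated weights leave the device.
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
updates = [local_update(global_w, X, y) for X, y in clients]
global_w = federated_average(updates)
```

Note that nothing in this basic scheme checks the updates themselves, which is exactly the gap a poisoned client exploits.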

To close that gap, the researchers incorporated blockchain technology, whose tamper-evident ledger provides verifiable records of model updates. This addition helps flag and discard outlying updates, preventing poisoned contributions from reaching the global model. The research is being further developed with partners at the National Center for Transportation Cybersecurity and Resiliency, with the aim of integrating quantum encryption for even stronger data protection.
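The outlier-screening idea can be illustrated with a generic robust-aggregation heuristic: drop any client update whose distance from the coordinate-wise median of all updates is far larger than typical, then average only the survivors. This is a simplified sketch of the general technique; the FIU team's actual scoring rule and blockchain-based verification are not detailed in the release, and the threshold `k`, the toy updates, and the helper name are assumptions for the example.

```python
import numpy as np

def filter_outlier_updates(updates, k=3.0):
    """Discard client updates that deviate sharply from the group
    before aggregation (generic heuristic, not FIU's exact method).
    Returns the mean of the kept updates and the number dropped."""
    updates = np.asarray(updates, dtype=float)
    center = np.median(updates, axis=0)          # robust center of all updates
    dists = np.linalg.norm(updates - center, axis=1)
    cutoff = k * np.median(dists) + 1e-12        # typical distance sets the scale
    kept = updates[dists <= cutoff]
    return kept.mean(axis=0), int(len(updates) - len(kept))

# Four honest updates clustered near [1, 1]; one poisoned update far away.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]),
          np.array([1.1, 0.9]), np.array([1.0, 1.0])]
poisoned = np.array([50.0, -40.0])
agg, dropped = filter_outlier_updates(honest + [poisoned])
# The poisoned update is discarded; agg stays close to the honest cluster.
```

In a full system, the flagged update and the verification result would also be recorded, which is where a tamper-evident ledger like a blockchain comes in.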

