White Paper

AI under Attack


7 pages

Kaspersky’s white paper AI under Attack warns that machine learning in cybersecurity, while powerful, is vulnerable to adversarial manipulation. Common threats include label poisoning, dataset poisoning, and white- and black-box attacks in which adversaries reverse-engineer or brute-force models. Other risks stem from outsourced ML models, data leaks, and hardware-based inconsistencies. Proposed defenses include multi-layered protection, robust in-house training, cloud-based ML, anomaly detection, and provably stable models. The paper stresses that ML is not a silver bullet: it must be part of a layered security strategy with human oversight, low false-positive rates, and regular red-teaming.
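To make the label-poisoning threat concrete, here is a minimal sketch (not from the paper; the toy data, classifier, and threshold values are all illustrative assumptions): a nearest-centroid classifier is trained twice, once on clean labels and once after an attacker relabels some "malicious" samples as "benign", and the poisoned model then misclassifies a borderline malicious sample.

```python
def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label); returns per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    return min(model, key=lambda y: abs(model[y] - x))

# Hypothetical toy data: low scores are benign, high scores are malicious.
clean = [(0.1, "benign"), (0.2, "benign"), (0.3, "benign"),
         (0.7, "malicious"), (0.8, "malicious"), (0.9, "malicious")]

# Label-poisoning attack: flip two malicious labels to benign,
# which drags the "benign" centroid toward the malicious region.
poisoned = [(x, "benign") if x in (0.7, 0.8) else (x, y)
            for x, y in clean]

clean_model = train(clean)
poisoned_model = train(poisoned)

sample = 0.65  # a borderline malicious sample
print(predict(clean_model, sample))     # -> malicious
print(predict(poisoned_model, sample))  # -> benign (attack succeeded)
```

The same failure mode scales to real detectors: an attacker who can influence the labels in a training feed shifts the decision boundary without ever touching the model itself, which is why the paper emphasizes curated in-house training data.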
