When we hear about the risks of AI, we mostly hear about hallucinations. The risks go much further than that.
Cisco's AI Defense offers security teams visibility into AI use, access control and protection against AI-specific security threats.
[Related: The AI Danger Zone: ‘Data Poisoning’ Targets LLMs] Notably, given that the Change Healthcare incident was just one of the many attacks to disrupt health care and other critical ...
Adding just a little medical misinformation to an AI model’s training data increases the chances that chatbots will spew harmful false content about vaccines and other topics ...
While the paper focuses on the intentional "poisoning" of an LLM during training, it also has implications for the body of ...
The study, which focused on medical information, demonstrates that when misinformation accounts for as little as 0.001 ...
In a controlled experimental scenario, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easy it is to taint the data pool used to train LLMs.
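To make the scale of such an attack concrete, here is a minimal Python sketch of the general idea: mixing a handful of poisoned documents into a large clean corpus until they account for a tiny target share of tokens. This is an illustrative toy, not the NYU Langone team's actual pipeline; the corpus, the poisoned text, the whitespace token counting and the 0.001% example rate are all assumptions made for demonstration.

```python
# Illustrative sketch only -- not the study's actual method.
# Shows how a tiny poisoning rate translates into very few injected
# documents inside a large training corpus.
import random


def inject_poison(corpus: list[str], poison_docs: list[str],
                  target_rate: float, seed: int = 0) -> list[str]:
    """Append poisoned documents to a clean corpus until the poisoned
    share of total tokens reaches roughly `target_rate`
    (e.g. 0.00001 for 0.001%). Token counts are naive whitespace
    splits, purely for illustration."""
    rng = random.Random(seed)
    total_tokens = sum(len(doc.split()) for doc in corpus)
    poisoned = list(corpus)
    poisoned_tokens = 0
    while poisoned_tokens / (total_tokens + poisoned_tokens or 1) < target_rate:
        doc = rng.choice(poison_docs)
        poisoned.append(doc)
        poisoned_tokens += len(doc.split())
    rng.shuffle(poisoned)  # poisoned text blends in; position gives nothing away
    return poisoned


if __name__ == "__main__":
    # Hypothetical numbers: 100,000 clean documents of ~150 tokens each.
    clean = ["clean medical text " * 50] * 100_000
    poison = ["fabricated claim about vaccine safety " * 30]
    mixed = inject_poison(clean, poison, target_rate=0.00001)  # 0.001%
    print(f"corpus grew from {len(clean):,} to {len(mixed):,} documents")
```

Run against these hypothetical numbers, the corpus reaches the 0.001% example rate after a single injected document, which is the point the reporting above makes: an attacker does not need to rewrite the internet, only to slip a sliver of false content into the data pool.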
Cisco AI Defense is the company’s proprietary AI tool that can validate enterprise AI models and protect them against attacks ...