
Hackers Developing Malicious LLMs After WormGPT Falls Flat

March 27, 2024

Cybercrooks are exploring ways to develop custom, malicious large language models after existing tools such as WormGPT failed to cater to their demands for advanced intrusion capabilities, security researchers said.

Underground forums teem with hackers' discussions about how to bypass the guardrails built into artificial intelligence-powered chatbots such as OpenAI's ChatGPT and Google's Gemini, said Etay Maor, senior director of security strategy at Cato Networks.
