A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Security, Spoken - A podcast by WIRED

Categories: Technology

Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.
