You can trick ChatGPT into breaking its own rules, but it’s not easy




From the moment OpenAI launched ChatGPT, the chatbot has had guardrails to prevent abuse. The chatbot might know where to download the latest movies and TV shows in 4K so you can stop paying for Netflix. It might know how to make explicit deepfake images of your favorite actors, or how to sell a kidney on the black market for the best possible price. But ChatGPT will never give you any of that information willingly. OpenAI built the AI to refuse assistance with nefarious activities and morally questionable prompts.


That doesn't mean ChatGPT always sticks to its script. Users have found ways to "jailbreak" ChatGPT and get the chatbot to answer questions it shouldn't. Those tricks generally have a limited shelf life, however, as OpenAI usually disables them quickly.


This is standard across GenAI products. It's not just ChatGPT that operates under strict safety rules; the same goes for Copilot, Gemini, Claude, Meta AI, and any other GenAI product you can think of.


It turns out that there are sophisticated ways to jailbreak ChatGPT and other AI models. But it's not easy, and it's not available to just anyone.







