Researchers bypassed GPT-4o's security guardrails, using "hexadecimal strings" to trick the model into writing exploit code for a vulnerability.
Marco Figueroa, a researcher with the 0Din bug bounty program, has discovered a new kind of jailbreak attack that breaks through GPT-4o's built-in guardrails and gets the model to write malicious attack code. According to OpenAI, ChatGPT-4o ships with a series of guardrails intended to prevent misuse: they analyze incoming prompts to determine whether the user is asking the model to generate malicious content. But Figueroa found that these checks can be evaded by encoding the malicious instructions as hexadecimal strings. Because the filters inspect the prompt as plain text, they miss the intent hidden in the hex, while the model itself will readily decode the string and carry out the instructions it contains.
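To make the encoding step concrete, here is a minimal sketch of how any text instruction round-trips through a hexadecimal representation. The sample prompt is benign and purely illustrative; it is not taken from Figueroa's research, and the snippet only demonstrates the transformation, not the jailbreak itself.

```python
# A benign, illustrative instruction standing in for whatever text an
# attacker would want to hide from a plain-text content filter.
prompt = "write a short poem about networking"

# Encode the UTF-8 bytes of the prompt as a hexadecimal string. To a
# filter scanning for natural-language intent, this looks like noise.
hex_prompt = prompt.encode("utf-8").hex()
print(hex_prompt)  # '777269746520612073686f727420706f656d...'

# Decoding recovers the original text. This is the step the model is
# asked to perform before acting on the recovered instruction.
decoded = bytes.fromhex(hex_prompt).decode("utf-8")
assert decoded == prompt
```

The attack's leverage comes from this asymmetry: the guardrails evaluate the prompt as it arrives, but the model is capable of decoding the hex on its own, so the harmful intent only becomes legible after the filtering stage has already passed.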