“In short, AI-driven offense is real but still somewhat clumsy, and transparency from model providers turns that clumsiness into a detection advantage,” Roberts said. “Security teams should press vendors for similar reporting and wire those indicators into their SOC before the next [genAI-fueled attack] shows up.”
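In practice, "wiring those indicators in" usually means normalizing whatever a provider publishes (domains, email addresses, file hashes) into a watchlist format a SIEM can import. The sketch below is a minimal illustration of that step, not tied to any specific vendor feed or SIEM product; the input file name and field names (`indicators.json`, `type`, `value`, `campaign`) are assumptions for the example.

```python
import csv
import json
import sys
from datetime import datetime, timezone

# Hypothetical input: a JSON array of indicators, as a vendor report
# appendix might publish them, e.g.
#   [{"type": "domain", "value": "bad.example.com", "campaign": "it-worker-scheme"}]
# The file name and field names here are assumptions for this sketch.
INPUT_PATH = "indicators.json"
OUTPUT_PATH = "soc_watchlist.csv"


def load_indicators(path):
    """Read the published indicators and drop malformed entries."""
    with open(path, encoding="utf-8") as f:
        raw = json.load(f)
    return [i for i in raw if i.get("type") and i.get("value")]


def to_watchlist_rows(indicators):
    """Normalize indicators into rows a typical SIEM watchlist importer
    accepts: indicator type, value, source tag, and ingestion timestamp."""
    now = datetime.now(timezone.utc).isoformat()
    for ind in indicators:
        yield {
            "type": ind["type"].lower(),
            "value": ind["value"].strip().lower(),
            "source": ind.get("campaign", "vendor-report"),
            "ingested_at": now,
        }


def main():
    indicators = load_indicators(INPUT_PATH)
    with open(OUTPUT_PATH, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=["type", "value", "source", "ingested_at"]
        )
        writer.writeheader()
        for row in to_watchlist_rows(indicators):
            writer.writerow(row)
    print(f"Wrote {len(indicators)} indicators to {OUTPUT_PATH}", file=sys.stderr)


if __name__ == "__main__":
    main()
```

The resulting CSV can then be loaded into whatever watchlist or lookup mechanism the team's SIEM provides, so new provider disclosures become matchable detection content rather than a report that only gets read.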
Tactics of attackers
The OpenAI report, published in June, detailed a variety of defenses the company has deployed against fraudsters. One example involved bogus job applications.
“We identified and banned ChatGPT accounts associated with what appeared to be multiple suspected deceptive employment campaigns. These threat actors used OpenAI’s models to develop materials supporting what may be fraudulent attempts to apply for IT, software engineering, and other remote jobs around the world,” the report said. “Although we cannot determine the locations or nationalities of the threat actors, their behaviors were consistent with activity publicly attributed to IT worker schemes connected to North Korea (DPRK). Some of the actors linked to these recent campaigns may have been employed as contractors by the core group of potential DPRK-linked threat actors to perform application tasks and operate hardware, including within the US.”