Leaders Opinion: After WormGPT, FraudGPT Makes it Easier for Cybercriminals

The emergence of these AI-powered tools poses a significant threat to enterprises.

In the dark corners of the internet, a disturbing development has recently come to light, thanks to the efforts of Netenrich security researcher Rakesh Krishnan. It’s a model known as FraudGPT, and it has been circulating on darknet forums and Telegram channels since July 22, 2023. What sets this model apart is that it’s available through a subscription pricing model: $200 per month, $1,000 for six months, or $1,700 for a year.

Aswin Sreenivas, Head of Data Science/Business Intelligence/Customer Insights at StarHub, offers a thought-provoking insight into this disturbing development: “The evolution of Generative AI has led to concerns over criminal use, exemplified by WormGPT and FraudGPT. Born from GPT-J, WormGPT creates malware without limitations, while FraudGPT crafts undetectable malware and malicious content.”

“The emergence of these AI-powered tools poses a significant threat to enterprises,” Aswin Sreenivas warns. “Many companies have been slow to adopt Generative AI due to concerns about security infrastructure. Educating the workforce about these threats is crucial, as is developing robust cybersecurity measures to safeguard against data breaches and other cyber threats.”

FraudGPT’s capabilities are alarming. They include generating malicious code to exploit system vulnerabilities, creating undetectable malware, identifying Non-Verified by Visa (Non-VBV) bins for unauthorized transactions, crafting convincing phishing pages, locating hidden hacker groups and black markets, generating scam content, finding data leaks, and aiding in learning coding and hacking techniques.

To quote Aswin Sreenivas again, “Additionally, it assists in identifying cardable sites for fraudulent credit card transactions. WormGPT, another tool launched in July 2023, specializes in crafting convincing fake emails for business email compromise (BEC) attacks, bypassing spam filters.”

The rapid advancement of AI models has made it challenging for security experts to combat automated machine-generated outputs, providing cybercriminals with more efficient ways to defraud and target victims. While detection tools for AI-generated text exist, their effectiveness has been questioned, and the cybersecurity landscape remains challenging to navigate.

As Aswin Sreenivas emphasizes, “Proactive measures encompassing technology, collaboration, education, and ethics are imperative to ensure responsible AI advancement and curb potential misuse.” The growing interest in AI within the underground community amplifies the need for vigilance. While current capabilities may not be groundbreaking, these models signify a concerning step towards AI weaponization.


Bhasker Gupta is a seasoned technology leader and entrepreneur, recognized for building platforms and communities at the intersection of AI, data, and innovation. With over two decades of experience, he has consistently driven impactful initiatives empowering enterprises and tech ecosystems worldwide. Reach out to him at bhasker.gupta@aim.media
