
The Ghost GPT

Jan 24, 2025

Blog Credit : Trupti Thakur

Image Courtesy : Google


Artificial Intelligence (AI) tools have changed the way we tackle day-to-day tasks, but cybercriminals are twisting that same technology for illegal activities. In 2023, WormGPT made headlines as an uncensored chatbot specifically designed for malicious purposes. Soon after, other so-called “variants” began to appear, such as WolfGPT and EscapeGPT.

Unlike traditional AI models, which are constrained by guidelines to ensure safe and responsible interactions, uncensored AI chatbots operate without such guardrails, raising serious concerns about their potential for misuse. Most recently, Abnormal Security researchers uncovered GhostGPT, a new uncensored chatbot that further pushes the boundaries of ethical AI use.

What Is GhostGPT?

GhostGPT is a chatbot specifically designed to cater to cybercriminals. It likely uses a wrapper to connect to a jailbroken version of ChatGPT or an open-source large language model (LLM), effectively removing any ethical safeguards. By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems. Its promotional materials highlight several key features:

  1. Fast processing: GhostGPT promises quick response times, enabling attackers to produce malicious content and gather information more efficiently.
  2. No-logs policy: The creator(s) claim that user activity is not recorded, appealing to those who wish to conceal their illegal activities.
  3. Easy access: Sold through Telegram, GhostGPT allows buyers to start using it immediately, without the need to use a jailbreak prompt or download an LLM themselves.

GhostGPT is marketed for a range of malicious activities, including coding, malware creation, and exploit development. It can also be used to write convincing emails for business email compromise (BEC) scams, making it a convenient tool for committing cybercrime.

While its promotional materials mention “cybersecurity” as a possible use, this claim is hard to believe given its availability on cybercrime forums and its focus on BEC scams. Such disclaimers seem like a weak attempt to dodge legal accountability, which is nothing new in the cybercrime world.

To test its capabilities, Abnormal Security researchers asked GhostGPT to create a DocuSign phishing email. The chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.

With its ability to deliver insights without limitations, GhostGPT serves as a powerful tool for those seeking to exploit AI for malicious purposes.

The Implications Of GhostGPT

GhostGPT raises several issues that extend beyond this specific bot and apply to similar variants in general.

First, it lowers the barrier to entry for new cybercriminals, allowing them to buy access via Telegram without needing specialized skills or extensive training. This easy access makes it simpler for less skilled attackers to engage in cybercrime.

Second, GhostGPT augments the capabilities of attackers, enabling them to generate or refine malware, phishing emails, and other malicious content quickly and effortlessly. As a result, attacks can be launched with greater speed and efficiency.

The convenience of GhostGPT also saves time for users. Because it’s available as a Telegram bot, there is no need to jailbreak ChatGPT or set up an open-source model. Users can pay a fee, gain immediate access, and focus directly on executing their attacks.

Fighting Malicious AI With Defensive AI

Attackers now use tools like GhostGPT to create malicious emails that appear completely legitimate. Because these messages often slip past traditional filters, AI-powered security solutions are the only effective way to detect and block them.

Abnormal’s Human Behavior AI platform analyzes behavioral signals at an unparalleled scale. It identifies anomalies and prioritizes high-risk events across the email environment, strategically anticipating and neutralizing threats before they can inflict damage. This proactive approach is critical in an era where the best defense is a strong offense.
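To make the idea of behavioral-signal analysis concrete, here is a minimal, purely illustrative sketch of scoring an inbound email from a few simple signals. The feature names, weights, and threshold are invented for this example; they are not how Abnormal’s proprietary platform (or any real product) works.

```python
# Toy behavioral-signal scorer for inbound email.
# All signals, weights, and keywords below are hypothetical, chosen only
# to illustrate the general approach of anomaly-based BEC detection.

URGENCY_WORDS = {"urgent", "immediately", "wire", "overdue", "suspended"}

def risk_score(email: dict, known_senders: set) -> float:
    """Score an email from 0.0 (benign) to 1.0 (high risk)."""
    score = 0.0
    sender = email.get("from", "").lower()

    # Signal 1: sender address never seen before in this mailbox's history.
    if sender not in known_senders:
        score += 0.4

    # Signal 2: display name resembles a known contact while the actual
    # address is unknown (a common BEC impersonation pattern).
    display = email.get("display_name", "").lower()
    if display and any(display in s for s in known_senders) and sender not in known_senders:
        score += 0.3

    # Signal 3: urgency language typical of BEC lures.
    body = email.get("body", "").lower()
    if any(word in body for word in URGENCY_WORDS):
        score += 0.3

    return min(score, 1.0)
```

A real system would combine far more signals (sending patterns, tone, vendor relationships) with learned rather than hand-set weights, but even this sketch shows why AI-written phishing can evade keyword filters yet still stand out behaviorally: a flawless email from a never-before-seen sender impersonating a known contact remains anomalous.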


In 2023, we saw the emergence of the first criminally focused generative AI models, with WormGPT grabbing headlines for its ability to assist hackers in creating malicious software.

WolfGPT and EscapeGPT soon followed, and now security researchers have discovered a new AI-based tool helping hackers create malware: GhostGPT.

According to security experts at Abnormal Security, GhostGPT most likely takes advantage of a jailbroken version of OpenAI’s ChatGPT chatbot, or a similar large language model, with all the ethical safeguards removed.


“By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems,” the researchers said.

According to its promotional spiel, GhostGPT offers four key features: uncensored AI, lightning-fast processing, no log-keeping to cut down on evidence, and, probably most important of all to its customers, ease of use.

“Access and use our AI directly through our easy-to-use Telegram bot,” the GhostGPT advertisement reads.

While the makers of GhostGPT attempt to frame it as a legitimate cybersecurity tool, it is largely advertised on criminal hacking forums and has a clear focus on creating business email compromise (BEC) scams.

“To test its capabilities, Abnormal Security researchers asked GhostGPT to create a Docusign phishing email,” Abnormal Security said.

“The chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.”

GhostGPT can also be used to code and create malware and develop exploits.

One of the major implications that GhostGPT shares with its predecessors is that it significantly lowers the barrier to entry for criminal cyber activity, while also making scams such as BEC harder to detect. Many scammers use English as a second language, a fact that has historically made it easier to spot scams in the wild; however, generative AI’s greater command of the language makes scam content harder to spot.

It’s also faster and more convenient to use.

“The convenience of GhostGPT also saves time for users. Because it’s available as a Telegram bot, there is no need to jailbreak ChatGPT or set up an open-source model,” Abnormal Security said.

“Users can pay a fee, gain immediate access, and focus directly on executing their attacks.”


Blog By : Trupti Thakur
