- Attackers are using an old tool with a new attack method.
- PipeMagic, a backdoor, is being hidden inside a fake AI app, giving attackers a vector they can abuse.
- Kaspersky advises businesses to avoid AI apps from untrusted third parties, as attackers are leveraging AI as a lure with increasing regularity.
Nearly half of local businesses are making use of AI in some way or another. These solutions can get costly quickly, so scammers are leveraging the desire for AI as a lure to hook new targets.
The Kaspersky Global Research and Analysis Team has uncovered a campaign in which attackers use a fake ChatGPT application to plant a backdoor in a network. The attackers leverage the PipeMagic Trojan, and of late they have shifted their focus from Asia to Saudi Arabia.
Kaspersky first spotted the backdoor in 2022, describing it as a custom modular backdoor. As mentioned, attacks were initially limited to SMEs in Asia, but it now appears the attackers are trawling new waters.
The latest version of the attack sees cybercriminals hiding a malicious payload in an app that looks like a ChatGPT application. The cybersecurity firm says that at first blush the file contains the correct libraries and applications, but once executed it delivers the payload. The second stage of the attack then launches, and the malware searches for Windows API functions using a name-hashing algorithm. From there, settings are adjusted as required and PipeMagic is installed.
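API name hashing is a common trick for hiding which Windows functions malware intends to call: instead of storing readable strings like "VirtualAlloc", the code stores a numeric hash and walks a DLL's export table until a name matches. The sketch below is only illustrative; the specific hash algorithm PipeMagic uses is not described in the report, so a generic ROR-13-style hash is assumed here.

```c
#include <windows.h>
#include <stdint.h>

/* Assumed ROR-13-style hash for illustration only; PipeMagic's actual
 * algorithm is not documented in this article. */
static uint32_t hash_name(const char *name)
{
    uint32_t h = 0;
    while (*name) {
        h = (h >> 13) | (h << 19);   /* rotate right by 13 bits */
        h += (unsigned char)*name++;
    }
    return h;
}

/* Walk a loaded module's export table and return the function whose
 * exported name hashes to the wanted value, avoiding a call to
 * GetProcAddress with a readable string. */
static FARPROC resolve_by_hash(HMODULE mod, uint32_t wanted)
{
    BYTE *base = (BYTE *)mod;
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    IMAGE_EXPORT_DIRECTORY *exp =
        (IMAGE_EXPORT_DIRECTORY *)(base + dir.VirtualAddress);

    DWORD *names = (DWORD *)(base + exp->AddressOfNames);
    WORD  *ords  = (WORD  *)(base + exp->AddressOfNameOrdinals);
    DWORD *funcs = (DWORD *)(base + exp->AddressOfFunctions);

    for (DWORD i = 0; i < exp->NumberOfNames; i++) {
        const char *name = (const char *)(base + names[i]);
        if (hash_name(name) == wanted)
            return (FARPROC)(base + funcs[ords[i]]);
    }
    return NULL;
}
```

A caller would resolve something like `resolve_by_hash(GetModuleHandleA("kernel32.dll"), hash_name("VirtualAlloc"))`, which is why defenders often flag binaries with few or no imported API names.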
“One of the unique features of PipeMagic is that it generates a 16-byte random array to create a named pipe in the format \\.\pipe\1. It spawns a thread that continuously creates this pipe, reads data from it, and then destroys it. This pipe is used for receiving encoded payloads and stop signals via the default local interface. PipeMagic usually works with multiple plugins downloaded from a command-and-control (C2) server,” Kaspersky explains.
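The loop Kaspersky describes maps onto standard Win32 named-pipe APIs. The sketch below shows the general pattern under stated assumptions: how the 16 random bytes are encoded into the pipe name is not given in the article (hex is used here for readability), `rand()` stands in for whatever random source the malware actually uses, and the buffer sizes are arbitrary.

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of the create/read/destroy pipe loop described by
 * Kaspersky. The name encoding, random source, and buffer sizes are
 * assumptions, not details from the report. */
static DWORD WINAPI pipe_loop(LPVOID unused)
{
    (void)unused;
    char name[MAX_PATH];
    char *p = name + sprintf(name, "\\\\.\\pipe\\1.");

    /* 16-byte random array appended to the pipe name as hex. */
    for (int i = 0; i < 16; i++)
        p += sprintf(p, "%02x", (unsigned char)rand());

    for (;;) {
        /* Create the pipe anew on every iteration. */
        HANDLE pipe = CreateNamedPipeA(
            name, PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            1, 4096, 4096, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE)
            break;

        if (ConnectNamedPipe(pipe, NULL) ||
            GetLastError() == ERROR_PIPE_CONNECTED) {
            char buf[4096];
            DWORD bytes = 0;
            /* Encoded payloads or stop signals would arrive here. */
            ReadFile(pipe, buf, sizeof(buf), &bytes, NULL);
        }

        /* Destroy the pipe, then loop and recreate it. */
        CloseHandle(pipe);
    }
    return 0;
}

/* Usage: CreateThread(NULL, 0, pipe_loop, NULL, 0, NULL); */
```

Because the pipe only listens on the local interface, this channel is about staging plugins and commands on the infected host rather than exposing a network service, which is part of why it is hard to spot from the outside.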
The firm warns that the focus on Saudi Arabia could point to a new wave of attacks leveraging the backdoor, and as such, caution is advised when shopping for AI applications. If AI is something you want to explore, look for reputable solutions from trusted names.
Taking a chance on an unknown AI application could not only open you up to a malware attack, it could also put your data at risk and expose you to potential legal problems. While reputable solutions do cost money, that price comes with peace of mind that the developer is accountable and dependable.