How ChatGPT—and Bots Like It—Can Spread Malware

Are chatbots evil? Well-known chatbots like ChatGPT present a very polite, goody-two-shoes public image to the world. But in the wrong hands, they could be used to turn the internet into a hellscape.

There’s a lot of product development going on behind the scenes. Researchers are racing to fix weaknesses, expand access, and improve security, resulting in regular upgrades and integrations into a range of other software tools. But Pandora’s box has been opened. We’ve already seen numerous ways in which chatbots can be used against us. In particular, they are being used to spread malware.

How are bots being used by criminals? Is the future of the internet in the hands of better malware removal tools? What else should we be considering?

What are chatbots, and how do chatbots work?

Chatbots are designed to simulate conversations with human users. They can’t “think” in the same way that humans do. Instead, they use pre-programmed responses and algorithms to generate responses to user input. Some chatbots use machine learning (ML) and natural language processing (NLP) to improve their responses over time. However, they still rely on pre-existing data and algorithms to generate their responses.
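The simplest chatbots of the "pre-programmed responses" kind work roughly like this: match keywords in the user's input against a fixed rule table and return the canned reply. The rules and replies below are made up purely for illustration, and a real product would be far more sophisticated, but the sketch shows why a chatbot only ever "knows" what it was given:

```python
# Minimal sketch of a rule-based chatbot: the reply is chosen by matching
# keywords in the user's input against pre-programmed rules. All rules and
# replies here are illustrative, not taken from any real product.

RULES = {
    ("hello", "hi"): "Hello! How can I help you today?",
    ("price", "cost"): "Our plans start at $9.99 per month.",
    ("refund",): "Refunds are processed within 5 business days.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keywords, response in RULES.items():  # rules checked in order
        if any(word in text for word in keywords):
            return response
    return FALLBACK

print(reply("Hi there!"))               # greeting rule fires
print(reply("How much does it cost?"))  # price rule fires
```

ML- and NLP-based chatbots replace the hand-written rule table with patterns learned from training data, but the principle stands: the output is only as trustworthy as the data and rules behind it.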

Can chatbots be evil?

Chatbots cannot function until they have been trained on data appropriate to their intended use. For example, you’d train a customer service chatbot on frequently asked questions, such as how to regain access to an account. Using the data it has been trained on, the chatbot reads the user’s prompt, analyzes it, and replies using the words or phrases that its training has provided. A live chatbot on a website might, for example, provide instructions and a link to help you regain access to your account.
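A toy sketch of that "trained on FAQs" behavior: score each trained question by how many words it shares with the user's prompt, and return the answer attached to the best match. The FAQ entries below are invented for illustration; notice that the bot can only hand back whatever links and text were placed in its training data:

```python
# Hypothetical FAQ "training data": question -> canned answer.
# Both entries are made up for this example.
FAQ = {
    "how do i regain access to my account":
        "Use the 'Forgot password' link on the sign-in page to reset it.",
    "how do i update my billing details":
        "Open Settings > Billing and edit your payment method.",
}

def answer(prompt: str) -> str:
    """Return the answer whose trained question overlaps the prompt most."""
    words = set(prompt.lower().split())
    best_question = max(FAQ, key=lambda q: len(words & set(q.split())))
    return FAQ[best_question]

print(answer("I lost access to my account, help!"))
```

Swap one of those canned answers for a malicious link and the bot will dutifully serve it to anyone whose question matches, which is exactly the risk described next.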

If the chatbot’s owner has ill intent, they could seed the original training data with dangerous links and misleading information, steering users toward malicious sites or tricking them into exposing private information.

Can a chatbot be used to infect your computer with malware?

Cybercriminals create dangerous websites intending to spread malware or steal data. They use phishing and even black hat SEO to entice people to visit their sites to trigger an automatic malware download. Normally, antivirus software can identify such dangerous links and prevent you from clicking on them.

But your antivirus software may miss the danger if criminals hide dangerous links within their chatbots. That’s why it’s better to also use a VPN with a link checker to screen a chatbot’s output.
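In essence, a link checker extracts the URLs from a reply and compares them against known-bad indicators before you click. Commercial tools use live threat-intelligence feeds; the blocklist and domains below are invented for illustration only:

```python
import re

# Hypothetical sketch of screening a chatbot's reply for dangerous links:
# pull out URL hostnames and compare them to a blocklist. Real link
# checkers use constantly updated threat-intel feeds; this blocklist and
# these domains are made up for illustration.

BLOCKLIST = {"evil-support-portal.example", "free-prizes.example"}

URL_RE = re.compile(r"https?://([^/\s]+)")  # capture the hostname part

def flag_dangerous_links(chatbot_output: str) -> list:
    """Return any blocklisted hostnames found in the chatbot's reply."""
    hosts = URL_RE.findall(chatbot_output)
    return [h for h in hosts if h.lower() in BLOCKLIST]

reply = "Reset your password here: https://evil-support-portal.example/reset"
print(flag_dangerous_links(reply))  # ['evil-support-portal.example']
```

A static blocklist like this only catches domains already known to be bad, which is why layered defenses (antivirus, VPN link checking, and your own caution) matter.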

Examples of criminal use of chatbots

Besides the danger of flat-out evil chatbots, there are plenty of other ways in which AI can turbo-charge the spreading of malware. Remember that chatbots are tools, and anyone can buy access. That means cybercriminals can use them too.


ChatGPT can write undetectable malware

Even inexperienced coders can write hard-to-detect malware with ChatGPT if they know how to get past its built-in limits and rules. There’s at least one such self-reported case, but there are bound to be many others for sale on the dark web.

Celebrity Scams

By asking ChatGPT to phrase its output the way a well-known person, brand, group, or company would, scammers can imitate that party’s communication style, or “voice.” This capability can be abused to mimic celebrities and lure their followers into scams.

Voice cloning and deep-fake sound clips

In a recent fake-kidnapping scam attempt, criminals cloned a 15-year-old girl’s voice and sent her mother a harrowing plea for help. And things are set to get much worse. Meta has just introduced Voicebox, an AI tool for producing and editing high-quality audio clips, but even Meta considers it too dangerous for public release, because Voicebox could create realistic “deep fake” sound clips of famous or influential people saying things they never said.

Phishing

In the past, it was often possible to identify phishing attacks because they were frequently riddled with spelling and grammar mistakes. Now, criminals use ChatGPT as a writing aid to write persuasive, convincing emails that can entice people into clicking on phishing links.

Propaganda and disinformation

AI writing tools are already being used on a massive scale to generate fake news and spread disinformation. Deep fakes are particularly dangerous: unscrupulous people can use them to provoke outrage, fear, and public reaction, as when a fake picture of the pope in a designer jacket circulated on social media. Rampant disinformation is a grave threat to the stability of society.

Fake apps

There are already thousands of fake apps in the Google and Apple app stores that spread malware, steal data, and spy on their victims. Examples include fake VPNs, PDF viewers, games, and even fake antivirus tools. Google and Apple try to police their stores, but they can’t vet every app, leaving users to do their own research. Security researchers are racing to discover and report them.


How can I protect myself from dangerous chatbots?

All of the old cybersecurity rules still apply, but we should add an extra level of watchfulness:

  • Double your guard against phishing: Regard every item in your inbox with suspicion. If you receive a message that seems slightly off, do not reply or click on any links. If the message appears to be from a legitimate organization, call them using a phone number from their official website to confirm that they sent it.
  • Stop sharing personal information online: Don’t add fuel to the phishing and social engineering fire by being an open book on social media.
  • Navigate the internet cautiously: Criminals are always finding new ways to lure unsuspecting people to their dangerous websites. Resist sensationalist headlines and extravagant offers. They’re called “clickbait” for good reason!
  • Use a VPN to protect your network and data: Protect your privacy with a VPN. A VPN encrypts your data, shielding it from eavesdroppers, and guards you against invasive cookies and trackers that follow you around the internet.
  • Install antivirus software: Add an additional security layer. The newest trend is to either add antimalware functions to a trusted VPN or enhance antivirus tools with a VPN. Stay safe, and get both! Keep your software up to date with the latest security patches.

Conclusion

AI tools like chatbots may be the sharpest double-edged sword humanity has ever faced. While they can make our lives easier and better, they can also be used to outwit our defenses, automate cyberattacks, and turn the internet against its users.
