ETV Bharat / technology

Hackers Using ChatGPT to Improve Cyber Attacks, Reveal Microsoft and OpenAI

By ETV Bharat Tech Team

Published : Feb 15, 2024, 10:19 AM IST

Microsoft said that adversaries of the US, chiefly Iran and North Korea and, to a lesser extent, Russia and China, are beginning to use its generative artificial intelligence to mount or organise offensive cyber operations. The activity was detected jointly with the tech major's business partner OpenAI, the maker of ChatGPT.

Hyderabad: Microsoft and OpenAI revealed on Wednesday that hackers are taking advantage of large language models (LLMs) like ChatGPT to refine and improve their existing cyber attacks. The revelation is considered critical because it comes in a year when over 50 countries, including India, will hold elections.

The tech giant and its business partner OpenAI said they had jointly detected and disrupted malicious cyber actors' use of their AI technologies, shutting down the associated accounts. Both companies detected attempts by Russian, North Korean, Iranian, and Chinese state-backed groups to use tools like ChatGPT to research targets, improve scripts, and develop social engineering techniques.

“We disrupted two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard,” said the Sam Altman-run company.

In its research paper, Microsoft detailed how a number of adversaries are incorporating AI into their tactics, techniques, and procedures. The OpenAI accounts identified as being associated with these actors were terminated. These bad actors had sought to use OpenAI services to query open-source information, translate text, find coding errors, and run basic coding tasks.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a statement.

Here are some of the cases that Microsoft mentioned in its research:

Russia: Fancy Bear, a Russian military intelligence unit, has been using the models to investigate satellite and radar technology that could be related to the Ukraine war.

China: Aquatic Panda, a Chinese cyber-espionage group that targets a variety of industries, universities, and governments around the world, has been interacting with the models in a way that suggests a limited examination of how LLMs can improve its technical capabilities.

North Korea: Kimsuky, a North Korean cyber-espionage group, used the models to study foreign think tanks that conduct research on the country and to create content likely intended for spear-phishing attacks.

Iran: The Iranian Revolutionary Guard used large language models for social engineering, software error troubleshooting, and even to study how intruders might avoid detection in a compromised network. The AI helped speed up and enhance email production, generating phishing emails such as one pretending to be from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.

While attackers will remain interested in AI and will probe the technologies' current capabilities and security controls, it is important to keep these risks in context, the company said.

“As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts,” the tech giant noted.

“AI can help attackers bring more sophistication to their attacks, and they have resources to throw at it. We’ve seen this with the 300+ threat actors Microsoft tracks, and we use AI to protect, detect, and respond,” said Homa Hayatyfar, principal detection analytics manager for Microsoft.

You can read the entire research paper here.


Copyright © 2024 Ushodaya Enterprises Pvt. Ltd., All Rights Reserved.