Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyberattacks


Feb 14, 2024 | Newsroom | Artificial Intelligence / Cyber Attack

Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations.

The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted the efforts of five state-affiliated actors that used OpenAI's services to perform malicious cyber activities, terminating the actors' assets and accounts.

“Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships,” Microsoft said in a report shared with The Hacker News.

While no significant or novel attacks employing the LLMs have been detected to date, adversarial exploration of AI technologies has spanned various phases of the attack chain, such as reconnaissance, coding assistance, and malware development.

“These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks,” the AI firm said.

For instance, the Russian nation-state group tracked as Forest Blizzard (aka APT28) is said to have used its offerings to conduct open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Some of the other notable hacking crews are listed below –

  • Emerald Sleet (aka Kimsuky), a North Korean threat actor, has used LLMs to identify experts, think tanks, and organizations focused on defense issues in the Asia-Pacific region, understand publicly available flaws, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
  • Crimson Sandstorm (aka Imperial Kitten), an Iranian threat actor, has used LLMs to create code snippets related to app and web development, generate phishing emails, and research common ways malware could evade detection.
  • Charcoal Typhoon (aka Aquatic Panda), a Chinese threat actor, has used LLMs to research various companies and vulnerabilities, generate scripts, create content likely for use in phishing campaigns, and identify techniques for post-compromise behavior.
  • Salmon Typhoon (aka Maverick Panda), a Chinese threat actor, has used LLMs to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, resolve coding errors, and find concealment tactics to evade detection.

Microsoft said it's also formulating a set of principles to mitigate the risks posed by the malicious use of AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates, and to conceive effective guardrails and safety mechanisms around its models.

“These principles include identification and action against malicious threat actors' use, notification to other AI service providers, collaboration with other stakeholders, and transparency,” Redmond said.

