Microsoft Claims Hackers From China, Russia and Iran Are Using its AI Tools

Navid

Feb 14, 2024

According to Reuters reporting, state-backed hackers from Russia, China, and Iran have been using tools from Microsoft-backed OpenAI to refine their cyberattack techniques and deceive their targets, a recent Microsoft report reveals.

The report identifies hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea that have been employing large language models to enhance their hacking campaigns. These AI models, which generate human-like responses by analyzing vast amounts of text, have become a tool for state-backed groups to refine their cyber operations. In response, Microsoft has banned the identified hacking groups from using its AI products, emphasizing a zero-tolerance policy toward misuse of its technology by known threat actors.

China responded by rejecting what it called “groundless smears and accusations” and advocating for the “safe, reliable and controllable” use of AI technology to promote global well-being. The report highlights growing concerns about the proliferation of AI technology and its potential for abuse by state-backed entities for espionage.

Western cybersecurity officials have previously warned about the misuse of AI tools by malicious actors, but specific instances of such abuse had been largely unreported until now. OpenAI and Microsoft characterized the hackers’ use of AI as “early-stage” and “incremental,” indicating that while there have been no significant breakthroughs, the potential for misuse remains a concern.

The report details various uses of AI tools by hackers: Russian groups researching military technologies relevant to operations in Ukraine, North Korean hackers generating content for spear-phishing campaigns, and Iranian hackers crafting more convincing phishing emails. Chinese hackers were also reported to be experimenting with large language models for intelligence gathering. While Microsoft has broadly banned the identified hacking groups from its AI products, it has not extended the prohibition to other offerings such as Bing, citing the novelty and power of AI technology as reasons for its cautious approach. The stance reflects growing awareness of, and concern over, the role of AI technologies in cybersecurity threats and espionage.
