A list of 100 Greek names. That was one of the prompts.
It sounds trivial. It isn’t. According to a Microsoft Threat Intelligence report, North Korean threat actors tracked as Jasper Sleet (Storm-0287) are feeding exactly these kinds of requests into generative AI platforms to construct fake identities — complete with culturally plausible names, email address formats, and tailored resumes — then using those personas to infiltrate Western technology companies as remote IT workers.
The report documents how AI now touches every phase of a cyberattack, not just the headline-grabbing malware stage. Reconnaissance, phishing, infrastructure setup, post-compromise data handling: the technology is present throughout, functioning as what Microsoft calls "a force multiplier that reduces technical friction and accelerates execution."
From Fake Resumes to Live Infrastructure
Jasper Sleet actors use AI to scan job postings on professional platforms, prompting tools to extract and summarize required skills, then shaping fake identities around those outputs to match specific roles. A second North Korean group, Coral Sleet (Storm-1877), takes a different approach: the report says the actors use AI to rapidly generate fake company websites, provision infrastructure, and troubleshoot deployments.
Neither group is operating autonomously. The report is explicit on this point. “Human operators retain control over objectives, targeting, and deployment decisions,” Microsoft writes. AI handles the labor-intensive middle layer — drafting lures, translating content, debugging code, summarizing stolen data — while people make the strategic calls.
Where AI safeguards get in the way, threat actors are using jailbreaking techniques to push past them, tricking large language models into producing malicious code or content the systems were designed to block.
Agentic AI on the Horizon
The report also flags something newer. Microsoft researchers have begun observing threat actors experimenting with agentic AI — systems capable of performing tasks autonomously and adjusting based on results. For now, the company says this use remains primarily in the decision-support category rather than fully autonomous attack execution. But the experimentation is happening.
Some malware samples the firm examined show signs of AI-enabled code that can dynamically generate scripts or modify behavior at runtime, suggesting the tooling is still evolving.
Because many of the IT worker schemes depend on abusing legitimate access rather than forcing entry, Microsoft advises organizations to treat this activity as an insider risk problem. Defenders, the report says, should focus on detecting abnormal credential use, hardening identity systems against phishing, and securing AI infrastructure that may itself become a target.
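One of the signals the report points to, abnormal credential use, can be approximated with a simple "impossible travel" heuristic: flag any account whose consecutive sign-ins come from different countries within an implausibly short window. The sketch below is illustrative only; the `events` records, the `flag_impossible_travel` function, and the two-hour threshold are assumptions for demonstration, not anything specified in the Microsoft report.

```python
from datetime import datetime, timedelta

# Hypothetical sign-in log: (user, timestamp, country of origin).
events = [
    ("alice", datetime(2025, 6, 1, 9, 0), "US"),
    ("alice", datetime(2025, 6, 1, 9, 30), "KP"),  # 30 minutes later, different country
    ("bob", datetime(2025, 6, 1, 10, 0), "US"),
]

def flag_impossible_travel(events, window=timedelta(hours=2)):
    """Flag users whose consecutive sign-ins come from different
    countries within an implausibly short time window."""
    last_seen = {}   # user -> (timestamp, country) of most recent sign-in
    flagged = []
    for user, ts, country in sorted(events, key=lambda e: e[1]):
        if user in last_seen:
            prev_ts, prev_country = last_seen[user]
            if country != prev_country and ts - prev_ts < window:
                flagged.append((user, prev_country, country, ts))
        last_seen[user] = (ts, country)
    return flagged

print(flag_impossible_travel(events))
```

In a real deployment this kind of rule would be one signal among many in an identity-protection or SIEM pipeline, combined with device, token, and behavioral telemetry rather than country codes alone.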
Microsoft is not the only firm tracking this pattern. Google recently reported that threat actors are abusing its Gemini AI across all stages of cyberattacks. Separately, Amazon and a cybersecurity blog documented a campaign in which a threat actor used multiple generative AI services as part of an operation that breached more than 600 FortiGate firewalls.
This article is a curated summary based on third-party sources.