Microsoft prevented $4 billion in fraud attempts between April 2024 and April 2025, according to the latest issue of its Cyber Signals report. The company warned that cybercriminals are now using artificial intelligence (AI) tools to run scams more efficiently and with less technical knowledge.
Microsoft said scammers are turning to AI software, ranging from commonly used tools to more targeted apps developed within the cybercrime space, to create fake content for phishing, scams, and other forms of online fraud. Over the past year, Microsoft blocked about 1.6 million bot-driven sign-up attempts per hour and rejected 49,000 fraudulent partnership applications.
AI makes scam setup faster and cheaper
The Microsoft Anti-Fraud Team reported that AI has helped cybercriminals set up fake websites in just minutes, compared to days or weeks in the past. Many of these websites closely resemble legitimate e-commerce sites, tricking consumers into giving away personal or financial information.
Much of this fraud activity originates in regions with strong digital economies, such as Germany and parts of China. Germany is also a frequent target because of its large e-commerce and online services market.
Rise of fake job offers
Generative AI has also played a role in job scams. Scammers now create fake employer profiles, job postings, and even emails and interviews using AI. These fake listings often appear on popular job platforms, making it harder for job seekers to tell what’s real.
Microsoft recommends that job platforms implement stronger security measures, like multifactor authentication for employer accounts, to prevent misuse.
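To make the multifactor authentication recommendation concrete, here is a minimal sketch of how a time-based one-time password (TOTP) is derived per RFC 6238, the scheme behind most authenticator apps. This is a generic illustration, not any platform's actual implementation; the secret used in the test values is the RFC's own example.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1).

    secret_b32: shared secret, Base32-encoded (as in authenticator-app QR codes)
    at:         Unix timestamp to evaluate at (defaults to now)
    """
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last MAC byte, mask the sign bit.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

A server verifying a login would recompute the code for the current time window (and usually the adjacent windows, to tolerate clock skew) and compare it to what the user submitted.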
Social engineering remains a major threat
Some attacks, like tech support scams, don't rely on AI but still present serious risks. In April 2024, Microsoft observed a group known as Storm-1811 impersonating IT support and abusing Quick Assist, a remote-assistance tool built into Windows. The scammers tricked victims over the phone into granting access to their devices, a method known as vishing (voice phishing).
Storm-1811 also relied on social engineering: manipulating victims, often with gathered personal details, to make a scam look more believable. AI helps speed up this process, but traditional defenses still work well. These include Zero Trust principles, which limit how much access each user and account has, and training employees to verify tech support contacts before granting access.
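One building block of the Zero Trust approach is deny-by-default, least-privilege authorization: no action is permitted unless it is explicitly granted. A minimal sketch of that idea, with entirely hypothetical role names (real deployments layer on identity verification, device health checks, and continuous re-evaluation):

```python
# Deny-by-default permission map: each role lists only the actions it is
# explicitly granted. Role and action names here are illustrative.
ROLE_PERMISSIONS = {
    "helpdesk": {"read_tickets"},
    "it_admin": {"read_tickets", "remote_assist"},
}

def is_allowed(role, action):
    """Allow an action only if it is explicitly granted to the role.

    Unknown roles and unlisted actions are denied, so a compromised or
    impersonated account gets no access beyond its narrow grant.
    """
    return action in ROLE_PERMISSIONS.get(role, set())
```

Under this model, a scammer who tricks a helpdesk user into running a remote-assistance session still hits a wall, because that action was never granted to the role.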
Microsoft continues to monitor such threats and has taken down several fake accounts and services used by attackers. The company urges users to treat any unsolicited tech support offer with suspicion and verify claims through official channels.
Microsoft tools fight back
To help defend users, Microsoft uses several layers of protection across its products. Microsoft Defender for Cloud offers security for Azure-based resources such as virtual machines and container images. Microsoft Edge includes features like domain impersonation detection and typo protection to warn users before they click on suspicious links.
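Edge's exact detection logic is not public, but the core idea behind typo protection can be illustrated simply: flag a domain that is very similar to, yet not exactly, a trusted name. A sketch using Python's standard `difflib` (the allow-list and similarity threshold here are hypothetical):

```python
import difflib

# Hypothetical allow-list of trusted domains; a browser would use a much
# larger, curated dataset.
TRUSTED_DOMAINS = ["microsoft.com", "paypal.com", "amazon.com"]

def looks_like_typosquat(domain, threshold=0.85):
    """Return the trusted domain this one closely imitates, or None.

    Flags lookalikes such as 'micros0ft.com' or 'rnicrosoft.com' that sit
    just under an exact match in string similarity.
    """
    domain = domain.lower().strip(".")
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return None  # exact match: the legitimate site itself
        similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return trusted  # near-match: likely impersonation
    return None
```

A browser applying this kind of check can warn the user before the page loads, which is the behavior the Edge features described above provide.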
Edge also blocks fake support pop-ups using a Scareware Blocker. These deceptive screens often show false system alerts to scare users into calling fake tech support lines or installing malware.