Google suspended 39.2 million malicious advertiser accounts in 2024 thanks to AI
Google may have finally found an application of large language models (LLMs) that even AI skeptics can get behind. The company just released its 2024 Ads Safety report, confirming that it used a collection of newly upgraded AI models to scan for bad ads. The result is a huge increase in suspended spammer and scammer accounts, with fewer malicious ads in front of your eyeballs.
While stressing that it was not asleep at the switch in past years, Google reports that it deployed more than 50 enhanced LLMs to help enforce its ad policies in 2024. Some 97 percent of Google's advertising enforcement involved these AI models, which reportedly require less data to make a determination, making it feasible to keep up with rapidly evolving scam tactics.
Google says that its efforts in 2024 resulted in 39.2 million US ad accounts being suspended for fraudulent activity. That's more than three times the 12.7 million accounts suspended in 2023. The violations that trigger a suspension typically include ad network abuse, improper use of personalization data, false medical claims, trademark infringement, or some combination of these.
Despite these efforts, some bad ads still make it through. Google says it identified and removed 1.8 billion bad ads in the US and 5.1 billion globally. That's a modest drop from the 5.5 billion ads removed in 2023, but the implication is that Google had fewer ads to remove because it stopped fraudulent accounts before they could spread. The company claims most of the 39.2 million suspended accounts were caught before they ran a single ad.