Fake Celebrity Ads Fuel $5.7 Billion Social Media Scams

Social media scams cost billions yearly. States push Meta to act, but can tech curb fraud without stifling freedom?

Social media scams cost $5.7 billion in 2024, exploiting fake endorsements to drain victims' savings. NewsVane

Published: June 11, 2025

Written by Miguel Maguire

A Flood of Fraud Hits Hard

Social media has become a hunting ground for investment scams, with losses soaring to $5.7 billion in 2024, according to the Federal Trade Commission. Ads featuring fake endorsements from figures like Warren Buffett or Cathie Wood draw users into traps that empty bank accounts. For many, the impact is catastrophic, erasing life savings and leaving emotional scars that linger long after the money is gone.

These schemes follow a ruthless pattern. A slick ad on platforms like Facebook promises easy wealth and funnels users into WhatsApp groups where scammers tout thinly traded penny stocks. The scammers inflate a stock's price in a "pump and dump" ploy, then sell off their own shares, leaving victims holding worthless stock. State attorneys general, including California's Rob Bonta, have raised urgent concerns, noting a sharp rise in complaints about these devastating cons.

States Unite to Demand Change

In June 2025, a bipartisan coalition of 42 state attorneys general sent a letter to Meta pressing for stronger defenses against scam ads. They criticized the company's automated filters for letting fraudulent promotions slip through and urged better human oversight or an outright ban on investment ads. Representing states that cover more than 90% of the U.S. population, the coalition wields significant clout, and its letter signals that Meta must act swiftly or risk potential legal action.

This push reflects a broader trend of state enforcers tackling tech's blind spots. Recent victories include California's $6.75 million cybersecurity settlement with Blackbaud in 2024 and Massachusetts' $165 million judgment against insurers for deceptive practices in 2025. Yet, with Meta still grappling with unresolved issues like account takeovers, the coalition's demand underscores a growing impatience with the industry's pace of reform.

Platforms face intense pressure to protect users from scams that hit vulnerable groups especially hard; seniors lost nearly $5 billion to online fraud in 2024, per the Consumer Federation of America. At the same time, excessive moderation could silence legitimate advertisers or content creators, a worry for those who value open digital spaces. The debate hinges on who bears responsibility: platforms, users, or regulators.

Legally, the issue is thorny. Section 230 of the Communications Decency Act protects platforms from liability for user content, but a 2024 TikTok ruling hinted that algorithmic promotion of harm could weaken this shield. Ideas to limit immunity for paid ads are circulating, though some fear this could lead to over-censorship or favor deep-pocketed firms. The tension between safety and freedom remains a sticking point.

Technology offers both hope and hurdles. One bank has cut fraud losses by 30% using AI-driven detection, but scammers exploit similar tools to create deepfakes that evade filters. Meta blocks millions of bad ads weekly, yet persistent gaps allow scams to reach users. Experts advocate for hybrid human-AI systems and rigorous advertiser checks, but implementing these at scale is a daunting challenge.

Perspectives in the Debate

Those pushing for tougher rules argue platforms profit from ads while sidestepping the consequences. They cite FTC data showing $1.9 billion in social-media fraud losses in 2024 and back state-led efforts to enforce accountability. Ongoing lawsuits, like one targeting Meta's impact on youth mental health, reflect a view that voluntary fixes fall short and legal pressure is essential.

Others emphasize the need for open platforms and user vigilance. They argue fraud is a matter for law enforcement, and they favor industry-led solutions or narrow liability tweaks over tech mandates. Some state attorneys general, while supporting anti-scam efforts, express caution about broad regulations that could limit innovation or expression, highlighting a nuanced divide.

The Road Ahead

The battle against social media scams demands sustained effort. With state attorneys general united, platforms like Meta face mounting pressure to overhaul their systems. Yet the problem's scale, $16 billion in online scam losses in 2024, suggests no single fix will suffice. Collaboration among tech firms, regulators, and users is critical to stem the tide.

Victims' stories, from lost retirement funds to shattered dreams, underscore the urgency. As AI fuels both scams and defenses, platforms need to outpace fraudsters without compromising what makes social media valuable. The challenge is steep, but the coalition's resolve signals a turning point in the fight for a safer online world.

Uncertainty looms over the outcome. Will state demands spark lasting change, or will scams continue to evolve faster than solutions? The conversation is heating up, and its resolution will shape how we navigate the digital age, balancing trust, innovation, and accountability.