Ben Sperry: Congress should focus on protecting teens from real harms, not targeted ads


The topic of social media’s impact on childhood mental health has rapidly emerged as a hot-button political debate, becoming the subject of a hearing of the Senate Judiciary Committee and earning a mention in President Joe Biden’s State of the Union address.

And indeed, there is a growing body of research that shows children are increasingly struggling with mental health issues. That is a real problem, but it’s one that shouldn’t be unfairly conflated with the practice of data collection for targeted advertising.

There is much that should be done to protect teens, both online and offline. This includes some of what Biden proposed, like more access to mental health care in schools. But the evidence that social-media usage causes more mental health issues for teens is mixed, at best. While further research is certainly welcome, it is highly unlikely that all the blame for bullying, depression, anxiety, and other forms of trauma can be laid at the feet of Big Tech.

The biggest problem with the recent push against Big Tech is that it seeks to link the problem of children’s online safety (and child sexual exploitation material, or “CSAM”) with data collection for targeted advertising. The argument is that Big Tech platforms allow so much harmful content and conduct because it keeps users engaged and thus delivers more eyeballs for targeted advertising. But there are several problems with this logic.

First, it ignores that Big Tech social-media companies are what economists call multi-sided platforms.

On one side are users, including teens and adults, who use the platforms to connect with others and share content. On the other side are advertisers, who fund users’ free use of the platform by looking to sell their products and services to them, sometimes based on their stated preferences and browsing histories.

In the middle are the platforms themselves, which must balance the interests of users and advertisers in a way that maximizes the platform’s value.

To the extent that harassment and bullying make users less likely to stay online, platforms have a strong reason to moderate such abuses in order to keep engagement high and create more value for the advertisers who fund the platform. Moreover, since most advertisers don’t want to be associated with a platform that hosts CSAM, bullying, harassment, or fat-shaming, there is an even stronger incentive for the platforms to moderate such content. This is particularly true given the very limited monetary benefits that can be derived from targeting advertising to children or teens, who generally lack the bank accounts or payment cards needed for online transactions.

Second, consistent with what you would expect from the incentives they face, social-media platforms do, in fact, offer a number of features designed to protect the mental health of teenagers who use the platforms.

For instance, Instagram announced a number of initiatives to help those struggling with body image, including surfacing resources from local eating-disorder hotlines in search results for terms related to such problems. It also announced stricter penalties for abusive speech and introduced a new feature to filter abusive messages. Instagram also created a “take a break” function, allows users to set daily time limits, and offers a “Quiet Mode” that lets other users know you are not using the app. By default, Instagram limits the amount of “sensitive content” that teens can access. Meta, which owns Facebook and Instagram, has an entire “Family Center” to provide resources and tools to limit harm to teen users.

Snapchat introduced a set of new features called “Here for You,” designed to help those experiencing a “mental health or emotional crisis.” This includes safety resources from experts that are shown when users of the platform search for topics associated with “anxiety, depression, stress, grief, suicidal thoughts, and bullying.”

Twitter introduced a “Safety Mode” that allows users to limit contact with abusive posters. When turned on, the feature “temporarily blocks accounts for seven days for using potentially harmful language – such as insults or hateful remarks – or sending repetitive and uninvited replies or mentions.”

TikTok has also introduced a “Family Safety Mode” that links a parent’s account with a teen’s and allows for screen-time management, limits on direct messages, and the restriction of inappropriate content. Much like Snapchat and Instagram, TikTok also offers support when users search for terms associated with eating disorders.

In sum, far from “doing nothing to protect children,” major social-media platforms have all created tools to help protect teens’ mental health. The economics of multi-sided platforms explain why it’s actually in the platforms’ interest to do so. Many of these tools may actually limit how much teens use the platforms, which would make no sense if the platforms simply wanted to maximize the time teens spend on them in order to sell them products. Policymakers in Congress should focus on protecting children and teens from real harms, online and offline, and avoid the temptation to regulate based on theories that don’t stand up to scrutiny.

Online bullying and harassment, unsuitable online content, and getting kids the mental-health help they need are all important topics. The myopic focus on targeted advertising is a distraction from tackling these important issues.

Ben Sperry is associate director of legal research with the International Center for Law & Economics.