Thursday, July 4, 2024

New York aims to ban algorithmic feeds to teens

As concerns continue about the harms that social media poses, especially to young children, various states in the United States are now implementing their own laws and regulations aimed at curbing such harms wherever possible.

But the different approaches highlight the broader challenge of policing social media misuse and protecting children online.

New York is the latest state to enact child protection laws, with Governor Kathy Hochul today signing two bills into law: the “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” and the New York Child Data Protection Act.

The SAFE for Kids Act is the more controversial of the two bills; its stated purpose is to prohibit social media platforms from serving “addictive feeds” to users under 18 without parental consent.

In the bill, “addictive feeds” seems to refer to any algorithmically curated news feed within social apps.

From the bill:

Addictive feeds are a relatively new technology used primarily by social media companies. Addictive feeds display a customized feed of media for users in an effort to capture their interest and keep them watching for longer. They began to be used by social media platforms in 2011 and have become the primary way people experience social media. As addictive feeds have proliferated, companies have developed advanced machine learning algorithms. The algorithms automatically process data about user behavior, including tens or hundreds of thousands of data points, such as not only what a user formally “liked,” but also how long a user looked at a particular post. The machine learning algorithms then predict mood and what is most likely to capture each user’s interest for as long as possible, creating a customized feed to keep each user on the platform at the expense of everything else.
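The engagement-prediction ranking the bill describes can be illustrated with a minimal sketch. This is not any platform's actual system; the signal names, weights, and data below are all hypothetical, and real systems combine thousands of signals rather than two:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    # Hypothetical behavioral predictions of the kind the bill describes:
    predicted_dwell_seconds: float  # how long the model expects the user to look
    predicted_like_prob: float      # probability the user will "like" the post

def engagement_score(post: Post) -> float:
    """Toy scoring function: blend predicted watch time and like probability.
    The weights here are arbitrary, chosen only for illustration."""
    return 0.7 * post.predicted_dwell_seconds + 30.0 * post.predicted_like_prob

def rank_feed(posts: list[Post]) -> list[Post]:
    """An 'addictive feed' in the bill's sense: order posts to maximize
    predicted engagement rather than, say, recency."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("a", predicted_dwell_seconds=4.0, predicted_like_prob=0.9),
    Post("b", predicted_dwell_seconds=45.0, predicted_like_prob=0.1),
    Post("c", predicted_dwell_seconds=12.0, predicted_like_prob=0.5),
]
print([p.post_id for p in rank_feed(posts)])  # → ['b', 'a', 'c']
```

The key point, which the bill's definition turns on, is that the ordering is driven entirely by predicted engagement per user, not by chronology or explicit user choice.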

If these new regulations go into effect, social media platforms operating in New York state will no longer be able to offer algorithmic news feeds to teen users and will instead have to offer algorithm-free versions of their apps.

Additionally, social platforms will be prohibited from sending notifications to minors between the hours of 12am and 6am.
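Enforcing that window amounts to a local-time check before dispatching a notification. A minimal sketch, assuming the platform already knows the user's age and local time (the function and field names are illustrative, not from any real system):

```python
from datetime import datetime, time

QUIET_START = time(0, 0)  # 12am, start of the proposed quiet window
QUIET_END = time(6, 0)    # 6am, end of the proposed quiet window

def may_notify(user_age: int, local_now: datetime) -> bool:
    """Return False for minors during the 12am-6am quiet window
    described in the bill; adult users are unaffected."""
    if user_age >= 18:
        return True
    return not (QUIET_START <= local_now.time() < QUIET_END)

print(may_notify(16, datetime(2024, 6, 21, 3, 30)))  # minor at 3:30am → False
print(may_notify(16, datetime(2024, 6, 21, 14, 0)))  # minor at 2pm → True
```

In practice the hard part is not the check itself but reliably knowing a user's age and timezone, which is exactly where the age-verification questions discussed below come in.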

To be clear, the law has not yet taken effect and will likely face legal challenges, but the proposal aims to provide additional protections for teenagers and shield them from the harmful effects of social apps.

Various reports have shown that social media use can be especially harmful to younger people, and Meta’s own research has shown that Instagram can have a negative impact on teenagers’ mental health.

Meta subsequently disputed these findings (its own research), stating that the only area where teenage girls who reported struggling said Instagram made things worse was body image. However, many other reports have also pointed to social media as a contributing factor in affecting teenagers’ mental health, with negative comparison and bullying cited as major concerns.

So while it is natural for regulators to take action, the concern here is that in the absence of comprehensive federal regulation, individual state-based actions could create an increasingly complicated landscape for social platforms to operate in.

In fact, Florida already has a law in place requiring 14- and 15-year-olds to obtain parental consent to create or maintain social media accounts, while proposed regulations in Maryland would limit the data that can be collected from young people online and put further safeguards in place.

On a related regulatory front, Montana also sought to ban TikTok last year, citing national security concerns, but the ban was blocked before it could take effect.

But it’s also an example of state lawmakers stepping in to protect their constituents in areas where they feel federal policymakers are falling short.

In Europe, EU policy groups have drawn up extensive regulations on data use and child protection, with all EU member states falling under their purview.

This has also caused headaches for the social media giants operating in the region, though they have managed to meet these demands, including algorithm-free user experiences and ad-free options.

As a result, U.S. regulators know that such requests are possible, and ultimately state pressure will likely force similar restrictions or alternative solutions here.

But really, this needs to be a national approach.

For example, there needs to be national regulation of permitted age-verification processes, national agreement on the impact of algorithmic amplification on teenagers and whether it should be allowed, and consistent notification and usage limits.

Banning push notifications seems like a good step in this regard, but it should be up to the White House to set acceptable rules on this, not state governments.

But in the absence of action, states are trying to come up with their own solutions, most of which will be challenged and defeated. While the Senate debates a more universal solution, it seems like a lot of the responsibility is being pushed onto lower levels of government, which are wasting time and resources on problems they shouldn’t be responsible for solving.

In essence, these announcements are more a reflection of dissatisfaction with federal inaction, and the Senate should take note.
