Product Policy Review: Pinterest’s (fairly) Recent Safety Policy for Teenagers

Recently, Pinterest rolled out new safety features and policies for its teenage users. In the article below, I review these fairly recent policies.

Controlled Audience: Private Accounts by Default for Users Under 16

For users under 16, Pinterest now sets their accounts to private by default. This ensures a controlled audience, as only approved followers can view these teenagers’ profiles, boards, and pins. Users can share a private link to invite people they know to follow them, with each new link expiring in three days or after being used by five people.
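
As a rough illustration of these mechanics, here is a minimal sketch of how such invite-link limits could be modelled; all names are hypothetical and this is not Pinterest's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical model of the invite-link limits described above: each link
# expires after three days or after five uses, whichever comes first.
LINK_TTL = timedelta(days=3)
MAX_USES = 5

@dataclass
class InviteLink:
    created_at: datetime
    uses: int = 0

    def is_valid(self, now: datetime) -> bool:
        # A link is dead once it ages out or hits its usage cap.
        return now - self.created_at < LINK_TTL and self.uses < MAX_USES

    def redeem(self, now: datetime) -> bool:
        # Count a follow invitation only if the link is still live.
        if not self.is_valid(now):
            return False
        self.uses += 1
        return True
```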

The concept of a “controlled audience” in trust and safety for child users is important because of the simple arithmetic of reach: the larger one’s audience, the greater the potential for exposure. The algorithms of many social platforms are designed to promote exposure in order to increase connection and engagement. Children, however, need their exposure controlled rather than promoted, and that is exactly what Pinterest does with this new feature: accounts of teenagers under 16 are not even discoverable by ‘strangers’. While most platforms now offer a private-account option, it is typically opt-in rather than the default, as it is on Pinterest. Moreover, users under 16 on Pinterest cannot change their accounts to public even if they wanted to. I believe this policy goes a long way towards minimising the risks of cyberbullying, grooming and exposure to indecent content and characters.

Question for thought: 

  • Should the company consider reviewing and updating the age threshold for default private accounts as children evolve and become more digitally literate? Or do some things never change?

Online Oversight: Parental Support

Pinterest offers a range of controls to parents of teenagers under 18. The primary feature is a four-digit passcode that parents can set to lock changes to settings such as email, password, profile visibility, ad personalisation, messaging, mentions, comments, and shopping recommendations.
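
For illustration, here is a minimal sketch of how a parental passcode could gate edits to locked settings, assuming hypothetical names and a standard salted-hash scheme rather than Pinterest's actual implementation:

```python
import hashlib
import hmac
import os

# Hypothetical sketch of a parental passcode gate over sensitive settings;
# names and flow are assumptions for illustration, not Pinterest's API.
LOCKED_SETTINGS = {
    "email", "password", "profile_visibility", "ad_personalisation",
    "messaging", "mentions", "comments", "shopping_recommendations",
}

def hash_passcode(passcode: str, salt: bytes) -> bytes:
    # Never store the raw four-digit code; derive a salted hash instead.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

class ParentalLock:
    def __init__(self, passcode: str):
        self.salt = os.urandom(16)
        self.digest = hash_passcode(passcode, self.salt)

    def verify(self, attempt: str) -> bool:
        # Constant-time comparison avoids leaking timing information.
        return hmac.compare_digest(self.digest, hash_passcode(attempt, self.salt))

def change_setting(lock: ParentalLock, setting: str, passcode: str) -> bool:
    # Edits to locked settings require the parent's passcode first.
    if setting in LOCKED_SETTINGS and not lock.verify(passcode):
        return False
    return True  # the actual settings update would happen here
```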

These features support online oversight, another critical component of online safety policies for children. Oversight provides an additional layer of protection, controlling children’s exposure to potentially harmful content and individuals.

However, this approach comes with inherent risks, such as the potential for overly restrictive access to information and the suppression of free speech. The fundamental policy question is: what constitutes the greater harm? While this might seem straightforward, it is a complex issue because the greater harm for each child can be subjective. Policymakers must skilfully balance rights such as access to information, free speech, and privacy with the need for online safety, as both are essential.

The education industry offers valuable insights for striking this balance. For many years, educational institutions have navigated the fine line between granting children access to information and creative expression and upholding parents’ right to know and responsibility to oversee their children’s learning.

Some key elements of their approach include:

  • Inclusive Policy Development: Educational institutions often see parents as policy collaborators rather than mere policy consumers. Initiatives like parent-teacher forums allow schools to gather diverse perspectives from a broad range of parents. This engagement provides evidence-based insights into what primary caregivers consider safe. While a widely accepted policy may occasionally be detrimental to a few children, the goal is to first establish a sound general rule and then address exceptions as needed.
  • Age-Appropriate Intervention: Schools typically adjust their education frameworks and the level of autonomy given to students based on age and maturity. Younger children are generally more supervised than older ones. Similarly, online parental controls should be dynamic and tailored to the child’s level of autonomy.
  • Parent Education: Although parents are experts on their children, they may lack technical or ideological knowledge of the online space. Mirroring initiatives such as parent-teacher conferences, to inform parents about their children’s digital activities and empower them to guide their use of digital technologies, may prove helpful.

Some extra thought gleaned from the education industry:
Curriculum Integration: Schools often integrate skills learning into existing courses and classes. A similar approach can be applied to digital literacy for children. By embedding digital literacy tips into the platform’s user experience, children can learn best practices in real time. For example, before a child account sends or posts a picture, a warning prompt could be displayed: “Hey! Have you double-checked that this picture does not contain any private imagery?”
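
A minimal sketch of how such a just-in-time prompt could hook into a posting flow; the function names are assumptions and the UI is simulated, purely for illustration:

```python
# Hypothetical sketch of a just-in-time digital literacy prompt in a
# posting flow for child accounts; the UI hook is simulated via input().
def confirm_with_user(message: str) -> bool:
    # Stand-in for a UI dialog; a real product would render a prompt.
    return input(f"{message} (y/n) ").strip().lower() == "y"

def post_image(account_age: int, image_path: str) -> bool:
    if account_age < 16:
        prompt = ("Hey! Have you double-checked that this picture "
                  "does not contain any private imagery?")
        if not confirm_with_user(prompt):
            return False  # the child chose to review the picture first
    print(f"Posting {image_path}...")
    return True
```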

Curbing Cross-Cutting Risks: No Fake Filters on Beauty. No Body Shaming. 

Pinterest is ensuring that its filters do not alter users’ appearance. The company is also addressing body shaming by, for instance, disallowing any weight loss ads. 

This is interesting because, historically, our approach to online harms has focused on protecting children from more explicit and overt harms such as cyberbullying, online predators, and exposure to inappropriate content. While these concerns remain crucial, acknowledging newer risks that cut across other categories or manifest in different dimensions, such as mental health disruptions or addictions, extends the scope of protection.

I foresee these types of policies being amplified in the coming years to address issues such as unrealistic beauty standards, curated lifestyles, body image, self-esteem and so on. There will be evidence-gathering and research initiatives in collaboration with mental health experts to curb the impact of these kinds of risks. 

My caution to policymakers, however, would be to ensure that these policies are not so restrictive or broad that they limit access to genuinely relevant information. Disallowing any form of weight loss ads, for instance, may prevent access to beneficial and even life-saving health resources. A more nuanced approach that distinguishes between harmful and helpful resources may prove more effective.

And of course… Age Verification

Pinterest has implemented an age verification process that requires users to enter their date of birth for new accounts, coupled with enhanced measures to verify changes to birthdates for those initially registered as under 18.
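
For illustration, here is a minimal sketch of self-declared age gating along these lines, with hypothetical names rather than Pinterest's actual implementation:

```python
from datetime import date

# Hypothetical sketch of self-declared age gating: compute age from the
# declared date of birth, and route birthdate edits that would move an
# under-18 account past 18 to an enhanced verification step.
def age_on(dob: date, today: date) -> int:
    # Subtract one if this year's birthday has not happened yet.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def birthdate_change_allowed(current_dob: date, new_dob: date, today: date) -> bool:
    if age_on(current_dob, today) < 18 and age_on(new_dob, today) >= 18:
        return False  # require enhanced verification instead of a silent edit
    return True
```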

Age verification is a critical topic that raises many concerns about privacy, feasibility and effectiveness, and compatibility with a non-standardised regulatory landscape. Common age verification methods include self-declaration, credit cards, biometrics and AI-backed facial recognition, parental consent, offline verification, government-issued digital ID and so on. More recently, governments and regulators (such as France and the EU) have been exploring interoperable tools that can aid verification.

While the jurisprudence in this area is still evolving, the priority must remain children’s online safety and protection.

Conclusion

Pinterest’s recent updates to its safety and privacy policies for teenage users are commendable and, perhaps, exemplary. As with all policies, there will always be room for improvement, for stakeholder feedback, for testing, for evidence-gathering – because the work of keeping children and young people safe online never really ends.
