How Platforms are Promoting Safety and Mental Health

A growing number of platforms are making good on a fundamental responsibility: to prioritize and invest in the health and wellness of their users.

Pinterest recently introduced a series of emotional wellness activities, including deep-breathing exercises, for users searching for ways to better manage their stress and anxiety. Instagram has also made strides in this regard, releasing a tool called Restrict and expanding its ban on suicide and self-harm content.

Additional apps are taking cues from these efforts, including Snapchat, TikTok, and Facebook. Here’s a look at the latest developments and why they matter in the grand scheme of our industry.

Snapchat: ‘Here For You’

Ninety percent of 13- to 24-year-olds use Snapchat to engage with their friends. This demographic is especially vulnerable: internal company research found that stress, depression, and anxiety are the top mental health issues reported by users and their close friends.

Like Pinterest’s effort, ‘Here For You’ is geared more toward offering resources and starting important conversations than toward overhauling the product. Specifically, it works by linking users to a “special section within Snapchat’s search results” when they search for terms indicating they need support around issues such as anxiety, depression, stress, grief, suicidal thoughts, and bullying.

For example, if a user types in the word “anxiety,” they’ll be given a selection of short shows to pick from, including the series “Chill Pill.” Original programming produced with support from local experts will also be available, targeting topics such as suicide, depression, and eating disorders.
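To make the mechanics concrete, here is a minimal, hypothetical sketch of this kind of keyword-triggered support surfacing. The topic list, resource names, and function below are invented for illustration; Snapchat has not published its actual implementation.

```python
# Hypothetical sketch only: topics, resources, and function names are
# invented for illustration, not Snapchat's actual implementation.

SUPPORT_TOPICS = {
    "anxiety": ["'Chill Pill' (series)", "deep-breathing exercises"],
    "depression": ["expert-supported episodes on depression"],
    "grief": ["coping-with-loss resources"],
}

def surface_support_section(query: str):
    """Return support resources when a search query matches a sensitive topic."""
    normalized = query.strip().lower()
    for topic, resources in SUPPORT_TOPICS.items():
        if topic in normalized:
            # A real product would render a dedicated section above the
            # ordinary search results rather than return a dict.
            return {"topic": topic, "resources": resources}
    return None  # no match: fall through to normal search results

print(surface_support_section("how to deal with anxiety"))
```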

According to the announcement, proactive in-app support is just one step towards “what will be a bigger health and wellness push from Snapchat” to be rolled out over the next few months.

“We feel a real responsibility to try to make a positive impact with some of our youngest, sometimes most vulnerable users on our platform,” said Jen Stout, Vice President of Global Policy, in a statement to Fast Company. “We know this is the first step of a lot of work we want to do to provide the right resources to our users.”

‘TikTok Tips’: An Influencer-Led Safety and Well-Being Advice Account

Last month, TikTok updated its Community Guidelines to address potential issues with misinformation and expanded its rules around acceptable in-app behavior. Today, the company is taking yet another step toward helping users make decisions that are safer and better for their mental health and wellbeing through a new influencer-led account dubbed TikTok Tips.

The premise is to have familiar TikTokers run the feed and dish out fun, friendly reminders to fellow users about how to manage their privacy settings and when to take a break from the app. Some messages encourage simply getting more rest, while others reinforce the value of participating in IRL activities with family and friends as crucial for building memories.

“We’re on a mission to promote privacy, safety, and positive vibes!” states the account’s description, one that aligns with the platform’s broader mission to serve as a positive, safe space free of judgment.

While it’s too early to declare how effective the account will be in getting people to recheck their usage and take mental breaks from constant scrolling, initial video uploads show promising engagement: two, in particular, have garnered 16.9 million and 17.2 million views, respectively.

Facebook’s ‘Hobbi’: An App Dedicated to Tracking Your Personal Progress

In a nod to Pinterest, Facebook is looking to help users focus on their personal growth and development with a new app, Hobbi, from its New Product Experimentation (NPE) team.

As you might guess from its name, Hobbi is dedicated to giving users an outlet to collect images of their hobbies and interests and sort them into boards so they can easily map their progress. Themed collections can include gardening, cooking, DIY arts and crafts, and more. Beyond the ability to create video highlight reels of your work to share on other platforms, Hobbi is not a social networking app but rather an editor and organizer. Its editing options and controls are limited, a stark contrast to the likes of Instagram. It’s unique in that, rather than serving as an outlet to broadcast, its intended use is as a personal log of your achievements, a resource for reflection, and a compass for growth.

“You might just surprise yourself with how much you have done,” the app description states, encouraging people to push the boundaries and meaningfully engage in the activities that bring them joy, relief, and happiness.

If you’re a company that caters to younger demographics, especially Gen Z, you’ll want to keep tabs on these initiatives and fundamental shifts. Why? Because they sit at the heart of what these audiences care about, are interested in, and expect when deciding which brands to buy from and which apps to spend their time on.

Learn more about Empathy Economics as part of our 2020 global theme, HUMAN.X, and help us establish a human-first, experience-driven approach to digital marketing. Read the official announcement here and secure your early-bird discount today to save 10% on your full-conference pass to #SMWNYC (May 5-7, 2020).


http://socialmediaweek.org/blog/2020/02/how-platforms-are-promoting-safety-and-mental-health/

YouTube Promises “Blunt Actions” to Secure Child Safety and Soothe Advertisers

A 2018 survey by CHEQ and IPG Media Lab revealed that most consumers believe ad placement is intentional; further, they are 2.8 times less willing to associate with a brand whose ads are displayed in environments that are unsafe or unsavory. Given that finding, it’s no surprise to see the exodus of advertisers from YouTube in the wake of parallel scandals over sexually inappropriate and anti-vaccination content.

This week, the platform accelerated a number of measures already in the works to combat this content and addressed its advertisers in the most comprehensive way yet.

A copy of the letter obtained by Adweek revealed a combination of contrition and concentration, as the company works to crack down on offenders and demonetize content that presents a danger to its users. “Because of the importance of getting child safety right,” the company wrote, “we announced a series of blunt actions to sharpen our ability to act more precisely.”

Among the measures swiftly undertaken this week:

  • The disabling of comments on “tens of millions of videos” that could attract predatory behavior
  • “Reducing the discoverability” of content that has been flagged (a strategy that Pinterest has used with some success)
  • Terminating offending accounts
  • “Increasing accountability” within the community of creators on the site, a community that uploads a staggering 400 hours of video each minute
  • “A more unforgiving stance” for creators who post inappropriate children’s content
  • The hastening of development on a machine-learning-powered “comments classifier,” which will ultimately be able to flag and remove twice as many comments as the existing system, a combination of algorithms and manual reviewers (a toy sketch of the general idea follows this list)
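
For readers curious how such a classifier might work in principle, here is a toy sketch of ML-based comment flagging. The training data, labels, and model choice are invented for illustration; this is not YouTube’s actual system, which operates at vastly larger scale alongside human review.

```python
# Toy comment classifier illustrating the general idea of ML-based comment
# moderation. NOT YouTube's classifier: the data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "great video, thanks for sharing",
    "loved the tutorial, very clear",
    "send me your address kid",   # predatory -> flag
    "how old are you? dm me",     # predatory -> flag
]
labels = [0, 0, 1, 1]  # 1 = flag for review/removal

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# A production system would route high-probability comments to removal
# or human review rather than deleting outright.
proba = model.predict_proba(["what school do you go to? dm me"])[0][1]
print(f"flag probability: {proba:.2f}")
```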

These measures have been deployed in hopes of stanching YouTube’s proverbial bleeding of advertisers, an exodus that in recent days has included AT&T, Disney, Nestlé, McDonald’s, Epic Games, and several others. AT&T reportedly told the New York Times, “until Google (YouTube’s parent company) can protect our brand from offensive content of any kind, we are removing all advertising from YouTube.” A similar exodus took place back in 2017, when ads from AT&T, Johnson & Johnson, and Verizon were placed alongside racist content and videos posted by terrorist groups. This time, the company’s countermeasures have been swift and public.

The problem is a complicated one, as one executive (speaking anonymously) shared with Fast Company.

“There is no such thing as 100% safety when it comes to user-generated content, and marketers need to know that although there can be a zero-tolerance effort, there’s no such thing as 100% brand safety or 0% risk.”

The measures above, along with demonetizing videos in certain categories, can move the platform toward being a safer place. The 10,000 content reviewers YouTube brought on in the wake of its 2017 scandal can also make a dent in the problem. But it still may not be enough.

In a statement, YouTube acknowledged, “there’s more to be done, and we continue to work to improve and catch abuse more quickly.” With any luck, this aggressive and comprehensive attention to these concerns, for the sake of advertisers and the viewing public alike, will continue long after the media firestorm dies down.

Join 100,000+ fellow marketers who advance their skills and knowledge by subscribing to our weekly newsletter.


http://socialmediaweek.org/blog/2019/03/youtube-promises-blunt-actions-to-secure-child-safety-and-soothe-advertisers/
