Facebook, YouTube share how they remove offensive posts

Facebook and YouTube are asking for consumer trust—and taking steps to
improve their transparency to earn it.

The organizations are responding to complaints that they don’t do enough to
remove abusive content from their sites. Whether it’s Facebook’s fake news
problem or disrespectful YouTube videos, consumers have clamored for
regulation—and Silicon Valley is trying to get a head start.

Facebook publishes guidelines

Facebook published 25 pages of new guidelines for acceptable posts,
including the process for how posts will be removed and how users can
appeal a removal. The rules had not been made public before, a choice
intended to keep users from gaming the system.


TechCrunch wrote:

Facebook is effectively shifting where it will be criticized to the
underlying policy instead of individual incidents of enforcement mistakes
like when it
took down posts
of the newsworthy “Napalm Girl” historical photo because it contains child
nudity before eventually restoring them. Some groups will surely find
points to take issue with, but Facebook has made some significant
improvements. Most notably, it no longer disqualifies minorities from
shielding from hate speech because an unprotected characteristic like
“children” is appended to a protected characteristic like “black”.

While users might still find loopholes, Facebook hopes that announcing its
guidelines will change user behavior.

TechCrunch continued:

Facebook’s VP of Global Product Management, Monika Bickert, who has been
coordinating the release of the guidelines since September, told reporters
at Facebook’s Menlo Park HQ last week that “There’s been a lot of research
about how when institutions put their policies out there, people change
their behavior, and that’s a good thing.” She admits there’s still the
concern that terrorists or hate groups will get better at developing
“workarounds” to evade Facebook’s moderators, “but the benefits of being
more open about what’s happening behind the scenes outweighs that.”

Facebook created a website
detailing its many efforts to curb abusive content, but it admits that it
can still do more.

In a blog post, Facebook wrote:

We’re under no illusion that the job is done or that the progress we have
made is enough. Terrorist groups are always trying to circumvent our
systems, so we must constantly improve. Researchers and our own teams of
reviewers regularly find material that our technology misses. But we learn
from every misstep, experiment with new detection methods and work to
expand what terrorist groups we target.

Facebook hopes that by offering more transparency, it can avoid the kind of
criticism and backlash it faced when it took down a famous war photograph.

Ultimately, Facebook says it wants to allow as much speech as possible.

NPR reported:

For the social media giant, it’s a question of balance. Balance between
free speech and user safety. Balance between curbing “fake news” and
encouraging open political discourse. And balance between Facebook’s
obligation to serve as a steward of a welcoming environment and the
realities of running a for-profit, publicly owned corporation.

“We do try to allow as much speech as possible,” Bickert said, “and we know
sometimes that might make people uncomfortable.”

YouTube recruits reviewers and AI

YouTube has also faced criticism for insensitive or abusive content, even
from established YouTubers. Recently, Logan Paul was harshly criticized for
a video that showed the body of an apparent suicide victim in Japan.

In December, YouTube announced tougher actions to police content on its
site, including hiring more human reviewers and being more transparent in
how artificial intelligence was flagging bad actors on the platform.

Now, YouTube is backing up that promise with its first-ever community
guidelines enforcement report. The report shows that the platform removed
more than 8 million videos, 6.6 million of which were flagged by automated
systems.

Although Google wants consumers to focus on the capabilities of its AI,
some still wonder whether the platform is doing enough to remove harmful
content.

Recode wrote:

The glass-half-empty argument would go something like this: YouTube’s
massive scale, combined with its platform philosophy—let users upload
whatever they want, then review it when necessary—means that it will never
be able to get a complete handle on problematic clips.

Another way of putting it: YouTube says it intercepted some 4.5 million bad
videos before anyone saw them. But some of its billion-plus users saw
another 1.5 million clips—in just three months—that ultimately needed to be
pulled off the site. And even if most of those clips only generated a few
views, any one of them has the potential to upset a user, or an
advertiser—or a government official who thinks the site needs more
oversight.

What do you think of these attempts to offer more transparency, PR Daily readers?
