MICHELLE: Apps with user-generated content often rely on user policing. On Facebook, you can report inappropriate or abusive content. The content review team reviews the offending content and takes disciplinary action if it is deemed to violate the policies. The burden of finding harmful content is shifted to users because Facebook doesn't want the responsibility of proactively policing it themselves.

But content moderation is a colossal task, and unsafe content can traumatize the people reporting it. In the book Behind the Screen, Sarah T. Roberts explores the emotional toll that content moderators endure as they review thousands of pieces of offensive material daily.

According to a speech Facebook CEO Mark Zuckerberg gave in 2019, there is some preemptive action. Facebook's AI systems identify 99% of terrorist content, which is blocked before anyone sees it. Per Facebook's reporting, 80% of all hate speech acted on by the platform in 2019 was flagged by algorithms. Granted, that doesn't include user-flagged content that results in no action, and the algorithms only cover 40 languages worldwide. For the remaining languages, moderation depends entirely on users reporting content and human moderators reviewing those flags.
When asked why Facebook doesn't do more to moderate its content, Zuckerberg cites the danger of limiting free speech: heavier moderation could silence potentially positive voices for social change. Instead, he focuses on eliminating fake accounts, known as bots, that spread misinformation and hate. Facebook's AI systems can now detect clusters of fake accounts that aren't behaving like humans.

Government regulation can force companies to do more to manage their platforms' content. Germany has a law against hate speech, and this has resulted in Facebook doing extra work to filter hateful content from users in Germany.

HOPE: Content-heavy apps also struggle with the spread of misinformation. Twitter has begun evaluating tweets based on their propensity for harm. Tweets that contain potentially misleading content, such as this one mentioning COVID-19, are labeled with a link to trusted information. Tweets that are suspected to be inaccurate or not credible, or that have a higher propensity for harm, are marked with a warning. Tweets confirmed to be false are removed.

On a final note, online harassment is rampant across social media apps and forums. We don't have time to get into it all, but there's cyberstalking, impersonation, catfishing, doxxing, trolling, and so on.
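Twitter's tiered response to misinformation (label, warn, remove) amounts to a severity-based policy. The sketch below is a hypothetical illustration of such a tiered decision, not Twitter's actual implementation: the `credibility` and `harm` scores, the thresholds, and the `moderate` function are all invented for this example, and the scores would have to come from some upstream classifier not shown here.

```python
# Hypothetical sketch of a tiered misinformation policy: label, warn, or remove.
# Scores, thresholds, and function names are invented for illustration; this is
# not Twitter's actual criteria or code.
from enum import Enum


class Action(Enum):
    NONE = "no action"
    LABEL = "label with a link to trusted information"
    WARN = "mark with a warning"
    REMOVE = "remove the tweet"


def moderate(misleading: bool, credibility: float, harm: float,
             confirmed_false: bool) -> Action:
    """Choose a moderation action from tiered severity.

    credibility and harm are assumed scores in [0, 1] produced by an
    upstream classifier (hypothetical).
    """
    if confirmed_false:
        return Action.REMOVE            # confirmed false -> removed
    if credibility < 0.5 or harm > 0.7:
        return Action.WARN              # suspect or higher-harm -> warning
    if misleading:
        return Action.LABEL             # potentially misleading -> label
    return Action.NONE
```

The point of the tiers is proportionality: the response escalates with confidence and potential harm, rather than treating every flagged tweet the same way.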
Block Party is an app that helps with online harassment. It mutes accounts on social media that are likely to send unwanted content. The Lockout Folder contains the filtered content and is accessible anytime. There's also the Helper View, which enables a user's friends to review their Lockout Folder content and take action on their behalf, such as blocking users. Check out the teacher's notes for further reading.

While many apps have anti-harassment rules, those rules are inconsistently enforced. We need better policies, procedures, and tools to mitigate and prevent harassment and abuse. Even more, we need people like you to advocate for change from within.