Looking Under the Hood: How Facebook Builds AI with Privacy and Ethics by Design
AI Under the Hood workshop series: Hate Speech Interactive
Facebook launched the AI Under the Hood program at the 2018 General Privacy Assembly for data policy commissioners and has since expanded it to Brazil, Uruguay, Colombia, Spain, Singapore and the United States. Not only have these sessions helped regulators better understand how this technology works, they have also helped them understand the challenges that companies face when building AI systems and the crucial role that humans play in these processes. These sessions have helped to demystify and humanize the technology, showing the extensive human work (in terms of assumptions, discussions, compromises, balancing between competing interests and decisions) that goes into building and deploying AI.
In this interactive, imagine you're building your own platform and ML classifier to detect hate speech. For that, you'll first need to find examples of content that should be allowed and examples that should be deleted from your platform. These will be the examples you'll use to train a machine learning classifier. Based on that data and its corresponding classification, you will then formulate rules justifying those allow/delete decisions.
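The workflow the interactive describes (collect labeled examples, train a classifier, then examine the decisions) can be sketched in a few lines of code. This is a minimal illustration only: the example posts, the allow/delete labels, and the model choice (a bag-of-words classifier built with scikit-learn) are all assumptions for demonstration, not Facebook's actual system or policies.

```python
# Illustrative sketch of the interactive's workflow: label a handful of
# example posts as "allow" or "delete", train a simple text classifier,
# and ask it to classify a new post. All data and model choices here are
# hypothetical, chosen only to make the steps concrete.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: hypothetical training examples, each paired with a decision.
posts = [
    "I love this community",
    "Great photo, thanks for sharing",
    "People from that group are all criminals",
    "Get those people out of our country",
    "Happy to see everyone at the meetup",
    "That group does not deserve to live here",
]
labels = ["allow", "allow", "delete", "delete", "allow", "delete"]

# Step 2: train a classifier that learns word patterns correlated
# with each allow/delete decision.
classifier = make_pipeline(CountVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

# Step 3: classify unseen content. With so few examples the predictions
# are unreliable, which is exactly why the interactive then asks you to
# articulate explicit rules justifying each decision.
print(classifier.predict(["Thanks for the great meetup"]))
```

Even this toy version surfaces the human judgment the workshops emphasize: someone had to decide which posts count as hate speech before the model could learn anything.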