Facebook acknowledged that its artificial intelligence detection technology 'still doesn't work that well,' particularly when it comes to hate speech, and that it needs to be checked by human moderators.
'It's important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works,' said Guy Rosen, vice president of Product Management at Facebook, in a statement.
'...We have a lot of work still to do to prevent abuse,' he added.
The firm has said previously that it plans to hire thousands more human moderators to 'make Facebook safer for everyone'.
Facebook moderated 2.5 million posts for violating its hate speech rules, but only 38% of these were flagged by automated systems, which struggle to interpret nuances such as counter-speech, self-referential comments, and sarcasm.