Facebook AI equated Black men with 'primates'. Cue a toothless apology.


Some Facebook users who recently watched a Daily Mail video depicting Black men reported seeing a label from Facebook asking if they were interested in watching more videos about "primates."

The label appeared in bold text under the video, stating "Keep seeing videos about Primates?" next to "Yes" and "Dismiss" buttons that users could click to answer the prompt. It's part of an AI-powered Facebook process that attempts to gather information on users' personal interests in order to deliver relevant content into their News Feed.

The video in question showed several instances of white men calling the police on Black men and the resulting events, and had nothing to do with primates. Facebook issued an apology, telling the New York Times that it was an "unacceptable error" and that it was looking into ways to prevent this from happening in the future.

The label came to Facebook's attention when Darci Groves, a former Facebook content design manager, posted it to a product feedback forum for current and former Facebook employees and shared it on Twitter. Groves said that a friend came across the label, took a screenshot, and shared it with her.


The offensive label feels particularly unacceptable considering the extremely expansive database of user-uploaded photos that Facebook has access to, and could presumably use to ensure proper facial recognition by its tools. While AI can always make mistakes, it is the company's responsibility to properly train its algorithms, and this misstep cannot be blamed on a lack of resources.

In addition to mishandling past racial justice issues within the company, Facebook's lack of a transparent plan to address its AI problem continues to sow distrust. While the apology was needed, the company's lack of apparent actionable steps beyond disabling the feature and a vague promise to "prevent this from happening again" doesn't cut it.


The approach is especially lackluster following Facebook's recent move to cut off researchers' access to tools and accounts used to explore user data and ad activity on the platform, citing possible violation of a settlement with the Federal Trade Commission. The FTC has directly disputed that defense.

Combining a vague response with decreased access to facts makes it rather hard to simply trust that Facebook will handle this inappropriate AI gaffe with any kind of immediacy or results. If Facebook is committed to creating and using AI tools in an inclusive manner, it needs to specify exactly how it plans to fix this issue, and it needs to do so soon.
