Nearly three weeks after the abrupt exit of Black artificial intelligence ethicist Timnit Gebru, more details are emerging about the shady new set of policies Google has rolled out for its research team.
After reviewing internal communications and speaking to researchers affected by the rule change, Reuters reported on Wednesday that the tech giant recently added a “sensitive topics” review process for its scientists’ papers, and on at least three occasions explicitly requested that scientists abstain from casting Google’s technology in a negative light.
Under the new procedure, scientists are required to meet with special legal, policy and public relations teams before pursuing AI research related to so-called controversial topics that might include facial analysis and categorizations of race, gender or political affiliation.
In one example reviewed by Reuters, scientists who had studied the recommendation AI used to populate user feeds on platforms like YouTube — a Google-owned property — drafted a paper detailing concerns that the tech could be used to promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.” A senior manager who reviewed the paper instructed the researchers to strike a more positive tone, and the final publication instead suggests that the systems can promote “accurate information, fairness, and diversity of content.”
“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one internal webpage outlining the policy reportedly states.
In recent weeks — and particularly after the departure of Gebru, a widely renowned researcher who reportedly fell out of favor with higher-ups after she raised the alarm about censorship infiltrating the research process — Google has faced increased scrutiny over the potential biases in its internal research division.
Four staff researchers who spoke to Reuters validated Gebru’s claims, saying that they too believe that Google is beginning to interfere with critical studies of technology’s potential to do harm.
“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Margaret Mitchell, a senior scientist at the company, said.
In early December, Gebru claimed that she had been fired by Google after she pushed back against an order not to publish research claiming that AI capable of mimicking speech could put marginalized populations at a disadvantage.