Another AI ethics specialist fired from Google. Does artificial intelligence not have to be ethical?

photo: Fitore F | Unsplash

Google is unlucky with its artificial intelligence specialists. For the second time in just three months, an AI ethics researcher claims Google fired her. Margaret Mitchell says she was dismissed from the company’s artificial intelligence lab, Google Brain, where she co-led the Ethical AI team. Her former co-lead, Timnit Gebru, left Google in December, in circumstances that raise considerable doubts about the work culture that has prevailed in Mountain View in recent years.

Gebru says she was fired after refusing to retract a paper, or remove her name from it, that advised caution with AI language-processing systems, including technology Google uses in its search engine. That sounds bad enough, but it doesn’t stop there. Gebru believes the dispute may have been used as a pretext, and that the real reason for her firing was her criticism of Google’s treatment of Black employees and women.

Mitchell learned by email on Friday afternoon that she had been fired; her team was told she would not be returning after last month’s suspension. Mitchell herself let the world know in three words:

Mitchell went on to write that after her criticism of Google, the company locked her corporate accounts in January. According to Axios, Google did this after discovering that Mitchell had used “automated scripts” to search for instances of discrimination within the company during Dr. Gebru’s time there.

What does Google say? The company claims Mitchell took “confidential business documents and private data of other employees” outside the company.

After reviewing the conduct of this manager, we confirmed that there were numerous violations of our code of conduct as well as our security policies, including the extraction of confidential business documents and personal data of other employees.

The matter therefore looks very worrying. Gebru, Mitchell, and the entire Ethical AI team at Google studied AI development in order to understand and mitigate the technology’s potential downsides. Among other things, they contributed to the decision to limit certain features, for example the image-recognition function that tried to identify the gender of people in photos.

The two women’s departures from the company thus highlight the tensions inherent in companies looking for ways to monetize solutions based on artificial intelligence. Experts studying what restrictions should be placed on this technology can potentially block new revenue streams.

After Gebru’s departure, some AI experts commented that Google had long since ceased to be a trusted partner, and the words of AI research chief Jeff Dean, who said the research paper that led to Gebru’s exit was simply substandard, did not help clear up the situation. Unsurprisingly, more than 2,600 Google employees signed a letter protesting Gebru’s treatment.

Dr. Piotr Kaczmarek-Kurczak
Department of Entrepreneurship and Business Ethics
Kozminski University

Why would an ethics specialist be hired on an AI team? What problems could they point out?

Dr. Timnit Gebru and Dr. Margaret Mitchell worked at Google as ethics specialists; their task was to oversee work on artificial intelligence systems so that the resulting algorithms did not contain the biases and prejudices of their creators. Algorithms can exhibit preferences or tendencies that result from the rules software developers have written into them. The two experts’ job was to ensure that these algorithms did not discriminate against anyone. One widely publicized case was an image-recognition algorithm that identified Black people as gorillas (a result of image-identification rules that did not anticipate that people could have a particular face shape together with a particular skin color). Algorithms outside Google have likewise tended to reject non-white candidates, lower their rankings, apply different criteria when assessing women’s performance, and so on.
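To make the kind of audit such a team performs concrete, here is a minimal sketch of one common check: comparing a model’s positive-outcome rates across demographic groups. This is an illustration only, not Google’s actual tooling; the group labels and decisions are invented.

```python
# Minimal sketch of a disparate-impact check (hypothetical data,
# not any company's real tooling or figures).

def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> acceptance rate per group."""
    totals, accepted = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        accepted[group] = accepted.get(group, 0) + (1 if ok else 0)
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.
    The common 'four-fifths rule' treats values below 0.8 as a red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening decisions: (group, was the candidate accepted?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(decisions)   # A: 0.75, B: 0.25
ratio = disparate_impact(rates)      # 0.25 / 0.75 ≈ 0.33 -> flagged
```

A ratio this far below 0.8 would prompt an auditor to ask whether the training data or decision rules encode a bias against group B, which is precisely the kind of question the Ethical AI team was hired to raise.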

It was the dispute over how biases arising from organizational culture influence how AI algorithms work that sparked the argument between the two researchers and the company’s management.*

* The conflict was not strictly about AI design work but about the deterioration of Google’s organizational culture. The company once proclaimed the slogan “Don’t be evil” and built its image as one constructing a better future for all humankind. That image became increasingly questionable with the company’s concessions to the Chinese authorities, whom Google helped build the famous “Great Firewall of China”, i.e., various solutions that cut Chinese citizens off from information deemed “reckless” (inconsistent with the authorities’ expectations). This shattered trust in the company, especially among specialists and programmers, who also began to question its demanding working conditions (long hours, high intensity, heavy responsibility, internal competition), which, in their view, were no longer justified by the organization’s social mission. As a result, internal disputes erupted, including attempts to form a union of Google employees.

The dispute with Dr. Gebru and Dr. Mitchell was tied precisely to the organization’s culture: the two experts pointed to the company’s low level of ethnic diversity, its low proportion of women, and so on, which cannot be explained by a lack of suitable candidates.

Should Google’s decisions be seen as disturbing?

Yes, because the company wields enormous power, derived from the data it collects and the role its services play. Would we want, for example, the artificial intelligence guiding us in a navigation system, knowing our financial status, to choose a route that keeps us out of neighborhoods inhabited by the rich? Or to withhold certain offers from us in advance, having deemed us unsuitable customers? Imagine that an algorithm recommending medical treatment excludes certain therapies up front, assuming (as its creators did) that we cannot afford them or that applying them would be a waste of resources and energy. Journalists and politicians interpret the company’s actions as hypocrisy: in its official communication the company stresses its commitment to solving global problems (for example, climate and the environment), while paying disturbingly little attention to social and ethical issues in its own organizational culture, focusing primarily on profits.
