Big Tech bankrolling AI ethics research and events seems very familiar. Ah, yes, Big Tobacco all over again

Who knows whether algorithms really harm society?

Analysis + update Big Tech's approach to avoiding AI regulation looks a lot like Big Tobacco's campaign to shape smoking rules, according to academics who say machine-learning ethics standards need to be developed free from the influence of corporate sponsors.

In a paper accepted at the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21), taking place next month, Mohamed Abdalla, a doctoral student in computer science at the University of Toronto, and Moustafa Abdalla, a doctoral student on deferral from Harvard Medical School, explore how Big Tech has adopted strategies similar to those used by Big Tobacco.

The analogy "is not perfect," the two brothers acknowledge, but is intended to provide a historical touchstone and "to leverage the negative gut reaction to Big Tobacco’s funding of academia to enable a more critical examination of Big Tech." The comparison is also not an assertion that Big Tech is deliberately buying off researchers; rather, the researchers argue that "industry funding warps academia regardless of intentionality due to perverse incentives."

The authors point out that Google itself has made this argument about unwelcome research, insisting that criticism from the Oracle-funded advocacy group Campaign for Accountability should be discounted because it is financed by a hostile competitor. Coincidentally, the Campaign for Accountability in 2017 published a post that begins, "Google has paid scholars millions to produce hundreds of papers supporting its policy interests, following in the footsteps of the oil and tobacco industries."

Big Tech in this instance is defined as: Google, Amazon, Facebook, Microsoft, Apple, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, and OpenAI. But the boffins' argument applies to a far larger set of companies that have a commercial interest in AI-powered systems.

Sigh, oh for the Noughties

The brothers Abdalla cite the mid-2010s as the point at which public attitudes toward Big Tech began to sour. And they liken Facebook CEO Mark Zuckerberg's 2018 acknowledgement that "it’s clear now that we didn’t do enough" to prevent interference in the 2016 US election to "A Frank Statement to Cigarette Smokers," Big Tobacco's 1954 acknowledgement that smoking has health implications.

"Just like Big Tobacco, in response to a worsening public image, Big Tech had started to fund various institutions and causes to 'ensure the ethical development of AI,' and to focus on 'responsible development,'" they state in their paper.


"Facebook promised its 'commitment to the ethical development and deployment of AI.' Google published its best practices for the 'ethical' development of AI. Microsoft has claimed to be developing an ethical checklist, a claim that has recently been called into question. Amazon co-sponsored, alongside the National Science Foundation, a $20m program on 'fairness in AI.'"

The researchers see parallels between the way Big Tech funds academic research and conferences and the way Big Tobacco funded the Tobacco Industry Research Committee, later renamed the Council for Tobacco Research.

Big Tech gains influence over AI ethicists through the selective funding of research projects, they contend. And they show that 58 per cent of AI ethics faculty members have received funding from Big Tech, which they say can influence their work.

"This is because, to bring in research funding, faculty will be pressured to modify their work to be more amenable to the views of Big Tech," they state in their paper. "This influence can occur even without the explicit intention of manipulation, if those applying for awards and those deciding who deserve funding do not share the same underlying views of what ethics is or how it 'should be solved.'"

They point to the Partnership on AI, founded in 2016 by Amazon, Facebook, Google, and Microsoft, among others, to formulate AI best practices as a group. They say it has shown little interest in engaging with civil society, citing the departure of human rights group Access Now from the organization as a sign of its narrow focus on corporate concerns.

The researchers also point to the problematic nature of conference funding, noting that NeurIPS, a leading machine-learning conference, has had at least two Big Tech sponsors at the highest funding tier every year since 2015, and more in recent years.

"When considering workshops relating to ethics or fairness, all but one have at least one organizer who is affiliated or was recently affiliated with Big Tech," the paper says. "For example, there was a workshop about 'Responsible and Reproducible AI' sponsored solely by Facebook."

The brothers Abdalla acknowledge that many remedies have been proposed to deal with Big Tech's influence on society, and they leave those to policymakers. But they do ask academics to consider adopting a stricter code of ethics for AI research, and housing that work separately from the traditional computer science department.

"Such a separation would permit academia-industry relationships for technical problems where such funding is likely more acceptable, while ensuring that our development of ethics remains free of influence from Big Tech money," they argue.

Exploiting uncertainty

In a phone interview with The Register, Frank Pasquale, professor of law at Brooklyn Law School and author of The Black Box Society: The Secret Algorithms That Control Money and Information, suggested the comparison between Big Tobacco and Big Tech, while provocative, has some merit.

"I think it really is important that we find concrete metaphors that represent to people the type of harms that are at stake online," he said, noting that it's difficult to illustrate to people the impact of irresponsible or malicious decisions by tech firms.

Pasquale said he's seen a draft of the paper, and observed that the parallel that struck him most was the way tobacco companies and tech companies alike have weaponized uncertainty.

Tobacco firms, he said, would raise doubts by saying things like, "Who knows whether smoking really causes cancer?"

I think the merchants-of-doubt approach successfully deflected a lot of lawmaking

"I think the merchants-of-doubt approach successfully deflected a lot of lawmaking," he said, noting that a lot of academics today say the same thing about potential harms from YouTube and other online platforms, as a justification for further funding and study.

Pasquale argues that the key is to have more support for public and private sector researchers so they don't have to depend on funding from the firms they're investigating. He also stressed the importance of making data from these firms available to unaffiliated, independent researchers.

Nobody at Amazon, Facebook, or Microsoft wanted to comment.

“Academic collaborations have always been part of Google’s DNA," a spokesperson for the internet giant told The Register. "In the past 15 years, we’ve provided more than 6,500 grants to academic and external research communities, and we’re committed to continuing these important collaborations.

“Partnering with the external research ecosystem brings fresh perspective to shared problems, and supporting their research helps advance critical areas of computer science. We support these collaborations through a variety of open-application programs including the Google Faculty Research Award Program, the PhD Fellowship Program, the Visiting Researcher Program, the Research Scholar Program and Award for Inclusion Research Program which give unrestricted funding to faculty and graduate students.” ®

Updated to add

After this article was filed, Mohamed Abdalla, of the University of Toronto, got in touch to tell us it's difficult to gauge whether other academics want to rethink how AI research is funded.

"While there are some researchers who have been receptive to our call for more attention, we believe that many of them also fundamentally disagree with the thesis of the article (or refuse to talk about it)," he said. "The paper is clearly divisive; we submitted this paper to FAccT and got the lowest possible scores from all reviewers (quite rare given the usual variation in reviews), while 2/3 of our reviews in AIES got the highest possible scores (the remaining one got one from best).

"Excluding Max Tegmark (Professor at MIT) and our supervisors, we have not directly received any affirmation of the work regarding our ideas (from professors). However, some researchers (professors and students alike) have expressed agreement on social media. We believe that, given the sensitive topic, some may be wary to air their opinions."

Mohamed added that social media, and academic Twitter in particular, tends to skew toward the negative, making it difficult to assess how the paper has been received. He also expressed skepticism that regulators have the technical resources to adequately deal with AI regulation.

"I do not believe that existing governance structures are sufficient to enable governments to do their job," he said. "This is a statement which I believe holds true for the US, Canada, and the EU.

"As such, I believe that a novel regulatory body that possesses expert knowledge and the ability (by governments) to demand companies open their trade-secreted algorithms (both code and data) for investigation is something that will eventually happen. My only fear is that such a regulatory body will be ineffective at pushing for real change because of industrial co-option."

What's more, while he believes EU lawmakers may actually be ready to address the issue, he expressed doubt that regulators in the US and Canada have the will to take on Big Tech.

"I do not see regulators in Canada or the US being ready to address this," he said. "It seems to me (though I am not the most politically informed analyst) that regulators in these two nations are scared to impact the tech sector negatively in any way lest they lose their golden egg (prioritizing growth/GDP/money over public good).

"Furthermore, these companies are able to run very large propaganda campaigns against any politicians who may wish to push bills against them thereby dissuading all but the most dedicated politicians from acting."
