It was déjà vu all over again.
The chief executives of Google parent Alphabet Inc., Facebook Inc., and Twitter Inc. once again faced hours of sharp questioning from federal lawmakers over disinformation and extremism on their platforms.
“It is now painfully clear that neither the market nor public pressure will force these social-media companies to take the aggressive action they need to take to eliminate disinformation and extremism from their platforms,” committee chairman Frank Pallone Jr. (D., N.J.) said in opening remarks during the hearing. “And, therefore, it is time for Congress and this Committee to legislate and realign these companies’ incentives to effectively deal with disinformation and extremism.”
“These platforms are hotbeds of disinformation despite new policies,” committee member Jan Schakowsky (D., Ill.), who has introduced a bill to protect consumers online, said in a lacerating statement. “Disinformation was rampant” during the 2020 election and pandemic, she added.
Federal lawmakers’ distrust of tech may have taken on a personal edge following the Jan. 6 attack on the Capitol, fomented in large part by far-right vitriol on Facebook, Google’s YouTube, and Twitter.
Twitter CEO Jack Dorsey acknowledged that social media bore some responsibility for spreading the false information that led to the insurrection, but said the problem was “more complex,” part of a larger information ecosystem and an overheated political climate.
Whether all the talk and hand-wringing leads to legislation is the next question, antitrust and tech-law experts say.
“If Jan. 6 wasn’t enough to get them to acknowledge their role, it’s unclear anything ever will be,” Elizabeth Renieris, founding director of the Notre Dame-IBM Technology Ethics Lab at the University of Notre Dame, told MarketWatch. “It’s time to move beyond self-regulation and pass meaningful legislation to limit the power of these platforms, including through comprehensive federal privacy legislation, competition-related measures, and new consumer protection rules, among others.”
Facebook CEO Mark Zuckerberg said the lashing tone of a small percentage of the platform’s content reflects a bitterly divided nation, and that the company had taken several steps to remove it. He embraced the idea of accountability standards for clearly illegal types of content such as child trafficking and terrorism.
But his testimony did little to mollify the committee, particularly Reps. Mike Doyle (D., Pa.) and Cathy McMorris Rodgers (R., Wash.), who relentlessly pressed the executives on the effects of social media on children and underrepresented groups, among others.
Social-media companies have been vilified for spreading falsehoods about the 2020 presidential election result, COVID, and vaccinations, as well as helping organizers of the Jan. 6 assault on the Capitol recruit attendees and stoke violent unrest.
Nearly two-thirds of anti-vaccine content on major social-media platforms is linked to a dozen individuals or organizations, according to a new report by the Center for Countering Digital Hate and Anti-Vax Watch. The report is based on analysis of a sample of anti-vaccine content shared or posted on Facebook and Twitter more than 812,000 times between Feb. 1 and March 16.
“The platforms’ inability to deal with the violence, hate and disinformation they promote on their platforms shows that these companies are failing to regulate themselves,” said Emma Ruby-Sachs, executive director of SumOfUs, a nonprofit advocacy organization. “After the past five years of manipulation, data harvesting, and surveillance, the time has come to rein in Big Tech.”
Still, the “half measures” floated during the hearings “aren’t going to help,” Ben Pring, co-founder of the Center for the Future of Work at Cognizant, told MarketWatch.
“Repealing Section 230, prohibiting political advertising on social media, and making people own what they say on social media are steps that need to be taken ASAP, before things get really out of hand,” Pring said.