Social media websites, including Facebook . . . ESPECIALLY Facebook . . . are not the place to get news and information.
(Really, who would ever think that they are???)

YOU DON'T HAVE TO BE ON FACEBOOK!
At a White House briefing last week, Mr. Trump suggested that disinfectants and ultraviolet light were possible treatments for the virus. His remarks immediately found their way onto Facebook, Instagram and other social media sites, and people rushed to defend the president’s statements as well as mock them.

But Facebook, Twitter and YouTube have declined to remove Mr. Trump’s statements posted online in video clips and transcriptions of the briefing, saying he did not specifically direct people to pursue the unproven treatments. That has led to a mushrooming of other posts, videos and comments about false virus cures with UV lights and disinfectants that the companies have largely left up.

A New York Times analysis found 768 Facebook groups, 277 Facebook pages, nine Instagram accounts and thousands of tweets pushing UV light therapies that were posted after Mr. Trump’s comments and that remained on the sites as of Wednesday. More than 5,000 other posts, videos and comments promoting disinfectants as a virus cure were also on Facebook, Instagram, Twitter and YouTube this week. Only a few of the posts have been taken down.

The social media companies have always trod delicately when it comes to President Trump. Yet their inaction on posts echoing his remarks on UV lights and disinfectants stands out because the companies have said for weeks that they would not permit false information about the coronavirus to proliferate.

Renee DiResta, a technical research manager at the Stanford Internet Observatory, said most of the tech companies developed health misinformation policies “with the expectation that there would be a competent government and reputable health authority to point to.” Given that false information is coming from the White House, the companies have been thrown for a loop, she said.
Source: "Trump’s Disinfectant Talk Trips Up Sites’ Vows Against Misinformation" By Sheera Frenkel and Davey Alba - NY Times - April 30, 2020



On the day of the school shooting last month in Parkland, Fla., a screenshot of a BuzzFeed News article, “Why We Need to Take Away White People’s Guns Now More Than Ever,” written by a reporter named Richie Horowitz, began making the rounds on social media.

The whole thing was fake. No BuzzFeed employee named Richie Horowitz exists, and no article with that title was ever published on the site. But the doctored image pulsed through right-wing outrage channels and was boosted by activists on Twitter.

Online misinformation, no matter how sleekly produced, spreads through a familiar process once it enters our social distribution channels. The hoax gets 50,000 shares, and the debunking an hour later gets 200. The carnival barker gets an algorithmic boost on services like Facebook and YouTube, while the expert screams into the void.

There’s no reason to believe that deepfake videos will operate any differently. People will share them when they’re ideologically convenient and dismiss them when they’re not. The dupes who fall for satirical stories from The Onion will be fooled by deepfakes, and the scrupulous people who care about the truth will find ways to detect and debunk them.
Source: "Here Come the Fake Videos, Too" By Kevin Roose - NY Times - March 4, 2018



If Facebook were a religion, it would be the world’s largest faith, with just under a third of the planet, 2.4 billion people, tied into a network controlled largely by one man. It’s a business, of course, with a business model that makes much of its money by channeling tidings of sludge around, often to great harm.

After helping give us Donald Trump as president and mass killings in places where a lie can dash around the world before truth puts on its pants, Facebook has lately been playacting as a good citizen. Which is to say, it’s courting lawmakers and regulators in a bid to remain the world’s most powerful gatekeeper of information.

“I think we’re in the right place on this,” {Facebook’s chief executive, Mark} Zuckerberg said, explaining why the company would not stop politicians from using Facebook to spread lies. “I think people should be able to hear for themselves what politicians are saying.”

Yes, of course — let the people hear for themselves, no matter if it’s true or not. They can decide. Except, they can’t, especially older users. A study in Science recently found that it’s possible “an entire cohort of Americans” lacks the digital smarts to distinguish made-up garbage from the truth on Facebook.

And what if these politicians push their exemption from fact-checking to devious extremes — a bundle of lies that results in voter suppression, or confusion on polling places and time, or violence?

Facebook is one of the main reasons democracy is in such peril. The company’s algorithms favor the echo chamber, backing a user’s bias. That black hole is so full of fantasies and half-truths that it’s impossible for millions of people to have a basic grasp of the facts needed to make informed decisions.

What are {Facebook users} sharing? The sources for top 10 news stories across Facebook from a given day this week included an assortment of far-right and truth-challenged sites. The latest virus planted in the bloodstream of public opinion by the Trump campaign was a Facebook ad, with false and debunked information about Joe Biden, viewed by more than five million people.

The Georgetown speech showed just how much Zuckerberg fears offending the politicians who depend on Facebook’s liars’ market. It was a profile in cowardice.

{C}oordinated authentic lies spoken by political actors are still welcome — so much so that Facebook has become perhaps the world’s leading incubator of falsehoods hatched by those who want to govern.

Every newspaper, online news site, radio or television station that feels any responsibility to its community of consumers will generally do the right thing and refuse a political ad that is defiantly and provably false. It’s not that difficult.

At Georgetown, Zuckerberg said Facebook’s focus was to “bring people together.” He can start by disarming the missiles of misinformation, under his control, that tear people apart.
Source: "Why Doesn’t Mark Zuckerberg Get It?" By Timothy Egan - NY Times - Oct. 25, 2019



This week, Facebook is embroiled in a different kind of election interference scandal.

The current controversy stems from two separate but related events. The first revolves around leaked audio of Mark Zuckerberg speaking privately to employees at recent town hall meetings, where he called Senator Elizabeth Warren’s plans to break up Facebook an “existential” threat to the company and one he would fight. The second is a recent announcement by Facebook that it is exempting political figures from its policy forbidding spreading misinformation in advertisements (yes, politicians spreading false claims in their ads is just a part of the political conversation, according to Facebook).

Both developments have attracted the ire of Warren who, in a series of tweets this week, argued that the public has a right to know how Facebook “intends to use their influence in this election” and implied that Mr. Zuckerberg and Facebook might be intentionally emboldening the Trump campaign by relaxing advertising rules.

Yes, Facebook’s willingness to let politicians lie sets a worrying precedent. And yes, lack of oversight into the platform’s decisions opens up a host of plausible election interference conspiracies. But Facebook’s essential threat to democracy isn’t that Mr. Zuckerberg will intervene on behalf of his preferred candidate — it’s more fundamental than that. Mark Zuckerberg need not intervene, because Facebook, the platform, will do so instinctively. With its algorithmic mandate of engagement over all else, Facebook has redefined what it means to be a good candidate — and provided a distinct natural advantage to those who distort the truth and seek to divide.

In the Philippines, the autocratic leader Rodrigo Duterte utilized a similar Facebook campaign in 2016 to win the presidency. A Duterte campaign document, obtained by BuzzFeed News and titled “Winning the Social Media Wars of 2016,” detailed how the campaign used Facebook’s algorithms to gin up anger, hope and pride with an onslaught of fake news and incendiary memes. “To fight with limited funds, the campaign must organize a series of dramatic events that stoke these emotions in escalating fashion,” Pompee La Viña, a Duterte supporter and social media director for his 2016 campaign, told BuzzFeed News. Mr. Duterte was lauded by Facebook itself as the “undisputed king of Facebook conversations.”

Once campaigns realize that divisive rhetoric pays, the incentive to up the ante with hyperpartisan ads and misinformation grows. And the campaigns have certainly taken notice.

{T}he Trump campaign spent over $1.5 million for ads in just the past week, some of which featured debunked or misleading claims, according to Popular Information’s Judd Legum.

{T}here’s a compelling case to be made that Facebook should be forced out of the game entirely to counter the spread of divisive, toxic content. But Facebook’s “incredible power,” as Ms. Warren tweeted, has less to do with Facebook’s opaque moderation policy and far more to do with a structural flaw in the platform’s original architecture.
Source: "Could Facebook Actually Nuke Elizabeth Warren’s Campaign?" By Charlie Warzel - NY Times - Oct. 10, 2019



In Vietnam, citizens were enlisted to post pro-government messages on their personal Facebook pages. The Guatemalan government used hacked and stolen social media accounts to silence dissenting opinions. Ethiopia’s ruling party hired people to influence social media conversations in its favor.

Despite increased efforts by internet platforms like Facebook to combat internet disinformation, the use of these techniques by governments around the world is growing, according to a report released Thursday by researchers at Oxford University. Governments are spreading disinformation to discredit political opponents, bury opposing views and interfere in foreign affairs.

The researchers compiled information from news organizations, civil society groups and governments to create one of the most comprehensive inventories of disinformation practices by governments around the world. They found that the number of countries with political disinformation campaigns more than doubled to 70 in the last two years, with evidence of at least one political party or government entity in each of those countries engaging in social media manipulation.

In addition, Facebook remains the No. 1 social network for disinformation, the report said. Organized propaganda campaigns were found on the platform in 56 countries.

Philip N. Howard, director of the Oxford Internet Institute and one of the authors of the report, said that such online disinformation campaigns can no longer be understood to be the work of “lone hackers, or individual activists, or teenagers in the basement doing things for clickbait.”

There is a new professionalism to the activity, with formal organizations that use hiring plans, performance bonuses and receptionists, he said.

“But from our research, we know that this problem of microtargeting ads is actually only a very small part of the problems,” {Samantha Bradshaw, a researcher at the Oxford Internet Institute, a department at Oxford University, and co-author of the study} said. Facebook has not addressed deeper structural problems that make it easy to spread false and misleading information, she said.

Most government-linked disinformation efforts were focused domestically, researchers concluded. But at least seven countries had tried to influence views outside their borders: China, India, Iran, Pakistan, Russia, Saudi Arabia and Venezuela.
Source: "At Least 70 Countries Have Had Disinformation Campaigns, Study Finds" By Davey Alba and Adam Satariano - NY Times - Sept. 26, 2019



Last week, a series of manipulated videos — subtly slowed down and then pitch-corrected to make it appear as if the House speaker, Nancy Pelosi, was drunk or incapacitated — were published across Facebook and other social networks, including YouTube and Twitter.

The swift spread of agitation propaganda and the creep of hyperpartisanship across social media isn't a bug; it's a feature.

The videos were viewed millions of times. They were shared by the president’s personal lawyer, Rudolph W. Giuliani (the tweet was later deleted) as well as dozens of supporters in the pro-Trump media. The president didn’t share the agitprop, but he did bang out a tweet questioning the speaker’s well-being.

Twitter and Facebook did not remove the video (Facebook eventually added “fact check” links to the clips).

It’s easy to fall back on the notion that the Pelosi viral videos are an example of a broken system. But that’s not exactly true. Many of the forces that led this particular doctored video to become news are part of an efficient machine designed to do exactly this. Our media distribution systems are working just as intended.

Facebook, the platform of origin for this video, did exactly what it was designed to do. It brought people with similar interests together, incubated vibrant communities that spawned intense discussion and then gave those communities the tools to amplify their messages loudly across the internet. The rest of the social media ecosystem followed suit. The world is a little smaller, just as the platforms intended.

The distribution mechanics, rules and terms of service of Facebook’s platform — and the rest of social media — are no match for professional propagandists, trolls, charlatans, political operatives and hostile foreign actors looking to sow division and blur the lines of reality.

Facebook was designed to connect the world and shrink it — and to elevate new voices. Unfortunately, that design was dreamed up by naïve and well-intentioned engineers in a vacuum. Baby photos, weddings, bird-watching communities, Game of Thrones memes and the Ice Bucket Challenge: yes! White nationalists, race and I.Q. debates, clandestine prescription drug and weapons selling groups and Pepe the Frog memes: unthinkable!

This disconnect between the platform ideal and the platform reality is why Facebook’s rules are arbitrarily enforced. It’s why Facebook’s fact-checking system doesn’t take effect until it’s too late and a piece of content has achieved massive distribution.

Similarly, the press has few answers for how to cover propaganda in an online ecosystem that is designed to spread hoaxes. The heart of the reporting process breaks down when your adversaries’ only goal is to hijack attention.

Yet it’s malpractice to ignore a false narrative that’s reaching millions. There are no easy answers, no obvious fixes.

And then there’s the political reality: the media has even fewer answers for how to deal with a president and his associates who are prone to trafficking in conspiracies. The media becomes trapped in a vicious cycle of newsworthiness, diverting attention and outrage to false claims and viral hoaxes. After all, the Pelosi fakes weren’t newsworthy because they were high-tech, but because the lie was so blatant and spread by powerful individuals.
Source: "The Fake Nancy Pelosi Video Hijacked Our Attention. Just as Intended." By Charlie Warzel - NY Times - May 26, 2019



While television remains the main source of news for most Americans, viewers today tend to select a network in line with their political preferences. Even more significantly, The Pew Research Center has found that two-thirds of Americans are getting at least some of their news through social media.

The sheer quantity of shares that misleading stories get on Facebook is staggering. Using a database of 156 election-related news stories that fact-checking websites deemed false, economists from New York University and Stanford University determined that these false stories had been shared by American social media users 38 million times in the three months before the 2016 presidential election.

Russia has keenly exploited our growing reliance on new media — and the absence of real umpires. Last year the Russian government supplemented the growing reach of its state-owned, English-language media outlets — RT and Sputnik — by employing a network of trolls, bots and thousands of fake Twitter and Facebook accounts that amplified damaging stories on Hillary Clinton.

Russia has had time to refine these strategies in the oft-overlooked disinformation campaigns that accompanied Russia’s military incursions into Georgia and Ukraine, and the now-familiar mix of trolls, bots and state-sponsored journalism was also used in attempts to deflect responsibility onto the United States for the 2014 downing of Malaysia Airlines Flight 17.

Most worrisome, many Americans are questioning not only whether they are obtaining objective facts — 60 percent believe news stories today are “often inaccurate,” according to Gallup, a major increase from 34 percent in 1985 — but also whether objective facts exist at all. The sense of an epistemological free-for-all provides an opening to all comers.

Another reason for concern is that in 2017 any well-financed actor — political campaigns, companies, foreign governments — can harvest data (location, age, gender, likes, shares) on its target audience, personalizing messages to suit the taste of those it aims to reach and employing this customized propaganda to skew the political debate. Kremlin-linked ads have likely reached millions of Americans, and some were geographically directed.

{The bipartisan Alliance for Securing Democracy} showed how Russia-linked accounts promoted alt-right conspiracies about the violence in Charlottesville, Va., as well as stories that slammed those — like Senator John McCain — who had criticized Mr. Trump’s equivocal response.

It is a testament to our times that it now seems unthinkable that the State Department — much less the president — would publicly call out the misinformation being spread. But now that there is a genuine risk of foreign powers who, in George Washington’s words, “practice the arts of seduction, to mislead public opinion,” it is incumbent on the rest of us to enhance our vigilance.
Source: "Why Foreign Propaganda Is More Dangerous Now" By Samantha Power - NY Times - Sept. 19, 2017



In a largely automated platform like Facebook, what matters most is not the political beliefs of the employees but the structures, algorithms and incentives they set up, as well as what oversight, if any, they employ to guard against deception, misinformation and illegitimate meddling. And the unfortunate truth is that by design, business model and algorithm, Facebook has made it easy for it to be weaponized to spread misinformation and fraudulent content. Sadly, this business model is also lucrative, especially during elections. Sheryl Sandberg, Facebook’s chief operating officer, called the 2016 election “a big deal in terms of ad spend” for the company, and it was. No wonder there has been increasing scrutiny of the platform.

This right-wing strategy has been used to pressure Facebook since before the presidential election. It was revealed in April 2016, for example, that Facebook was employing a small team of contractors to vet its “trending topics,” providing quality control such as weeding out blatant fake news. A single source from that team claimed it had censored right-wing content, and a conservative uproar ensued, led by organizations like Breitbart. {Mark Zuckerberg, the chief executive of Facebook} promptly convened an apologetic meeting with right-wing media personalities and other prominent conservatives to assure them the site was not biased against them.

Facebook got rid of those contractors, who were already too few for meaningful quality control.
Source: "Zuckerberg’s Preposterous Defense of Facebook" By Zeynep Tufekci - NY Times - Sept. 29, 2017



