Ep. 75 The national security state is the main driver of censorship and election interference in the United States. "What I’m describing is military rule," says Mike Benz. "It’s the inversion of democracy."
UNREAL: The censorship technologies originally intended for terrorist organizations have been weaponized against the American people.
TUCKER CARLSON: “So you’re saying the Pentagon, our Pentagon, the US Department of Defense, censored Americans during the 2020 election cycle?”
MIKE BENZ: “Yes. The two most censored events in human history, I would argue, to date, are the 2020 election and the COVID-19 pandemic.”
Benz calls artificial intelligence-based censorship technologies, which were created by DARPA to take on ISIS, “WEAPONS OF MASS DELETION.”
That technology, Benz says, has “the ability to censor tens of millions of posts with just a few lines of code.”
Benz detailed how the government-funded Virality Project identified 66 dissident narratives related to COVID-19 and broke them down into sub-claims, which machine-learning models then monitored and censored, with the aim of controlling the spread of information deemed harmful to official narratives or to individuals like Tony Fauci.
“And whenever something started to trend that was bad for what the Pentagon wanted or was bad for what Tony Fauci wanted, they were able to take down tens of millions of posts. They did this in the 2020 election with mail-in ballots.”
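The mechanics Benz describes — narratives decomposed into sub-claims, then matched against posts at scale — can be illustrated with a minimal, hypothetical sketch. The narrative labels and keywords below are invented for illustration; a real system would use trained classifiers rather than keyword matching, but the scaling property is the same: once the rules exist, one short loop can screen millions of posts.

```python
# Hypothetical sketch of narrative-based content flagging (invented labels
# and keywords; NOT the Virality Project's actual code or data).

NARRATIVES = {
    "mail-in-ballot-claim": {"ballots", "mail-in", "harvesting"},
    "vaccine-sub-claim": {"vaccine", "microchip"},
}

def flag_posts(posts, narratives=NARRATIVES):
    """Return (post, narrative_label) pairs for posts matching any sub-claim."""
    flagged = []
    for post in posts:
        tokens = set(post.lower().split())
        for label, keywords in narratives.items():
            if tokens & keywords:  # any keyword overlap triggers a flag
                flagged.append((post, label))
                break
    return flagged

posts = [
    "Lost mail-in ballots found in a ditch",
    "Nice weather today",
]
# flag_posts(posts) flags only the first post
```

The point of the sketch is the asymmetry: the per-post cost is trivial, so the same few lines apply equally to ten posts or tens of millions.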
WaPo | Today, the world faces an inflection point, where the choices we make — including in the crises in Europe and the Middle East — will determine the direction of our future for generations to come. What will our world look like on the other side of these conflicts?
Will we deny Hamas the ability to carry out pure, unadulterated evil? Will Israelis and Palestinians one day live side by side in peace, with two states for two peoples?
Will we hold Vladimir Putin accountable for his aggression, so the people of Ukraine can live free and Europe remains an anchor for global peace and security?
And the overarching question: Will we relentlessly pursue our positive vision for the future, or will we allow those who do not share our values to drag the world to a more dangerous and divided place?
Both Putin and Hamas are fighting to wipe a neighboring democracy off the map. And both Putin and Hamas hope to collapse broader regional stability and integration and take advantage of the ensuing disorder. America cannot, and will not, let that happen. For our own national security interests — and for the good of the entire world.
The United States is the essential nation. We rally allies and partners to stand up to aggressors and make progress toward a brighter, more peaceful future. The world looks to us to solve the problems of our time. That is the duty of leadership, and America will lead. For if we walk away from the challenges of today, the risk of conflict could spread, and the costs to address them will only rise. We will not let that happen.
We have also seen throughout history how conflicts in the Middle East can unleash consequences around the globe.
We stand firmly with the Israeli people as they defend themselves against the murderous nihilism of Hamas. On Oct. 7, Hamas slaughtered 1,200 people, including 35 American citizens, in the worst atrocity committed against the Jewish people in a single day since the Holocaust. Infants and toddlers, mothers and fathers, grandparents, people with disabilities, even Holocaust survivors were maimed and murdered. Entire families were massacred in their homes. Young people were gunned down at a music festival. Bodies riddled with bullets and burned beyond recognition. And for over a month, the families of more than 200 hostages taken by Hamas, including babies and Americans, have been living in hell, anxiously waiting to discover whether their loved ones are alive or dead. At the time of this writing, my team and I are working hour by hour, doing everything we can to get the hostages released.
jonathanturley | Below is my column in The Messenger on the view of diplomats in the
Biden Administration that the President is spreading “misinformation.”
My interest in the story is less the merits than the allegation. The
President is facing the same allegation of ignoring fact and spreading
disinformation that has resulted in thousands being banned or
blacklisted on social media. The Biden Administration has pushed for
such censorship in areas where doctors and pundits held opposing views
on subjects ranging from Covid-19 to climate change. The question is
whether Joe Biden himself should be banned under the standards
promulgated by his own Administration.
Here is the column:
An internal State Department dissent memo was
leaked this past week, opposing the Biden administration’s position on
the war between Israel and Hamas. What was most notable about the memo
is that some administration staffers accused President Joe Biden of “spreading misinformation.”
It was a moment of crushing irony for some of us who have written and testified against
the Biden administration’s censorship efforts. The question is whether,
under the administration’s own standards, President Biden should now be
banned or blacklisted to protect what his administration has called our
“cognitive infrastructure.”
For years, the administration and many
Democrats in Congress have resisted every effort to expose the sprawling
government censorship program that one federal judge described as an “Orwellian ‘Ministry of Truth.'” As I have written previously, it included grants to academic and third-party organizations to create a global system of blacklists and to pressure advertisers to withdraw support from conservative sites.
As a result, over the last four years,
researchers, politicians, and even satirical sites have been banned or
blacklisted for offering dissenting views of COVID measures, climate
change, gender identity or social justice, according to the House
Judiciary report. No level of censorship seemed to be sufficient for
President Biden, who once claimed that social media companies were “killing people” by not silencing more dissenting voices.
Now, though, President Biden himself is
accused — by some in his own administration — of spreading
misinformation and supporting war criminals.
CTH | According to a recent media report, Senator Chuck Schumer led an AI
insight forum that included tech industry leaders: Google CEO Sundar
Pichai, Tesla, X and SpaceX CEO Elon Musk, NVIDIA President Jensen
Huang, Meta founder and CEO Mark Zuckerberg, technologist and Google
alum Eric Schmidt, OpenAI CEO Sam Altman and Microsoft CEO Satya
Nadella.
Additionally, representatives from labor and civil rights advocacy
groups attended, including AFL-CIO President Liz Shuler, Leadership
Conference on Civil and Human Rights President and CEO Maya Wiley, and
AI accountability researcher Deb Raji.
Notably absent from the Sept 13th forum was anyone with any
real-world experience that is not a beneficiary of government spending.
This is not accidental. Technocracy advances regardless of the citizen
impact. Technocrats advance their common interests, not the interests of
the ordinary citizen.
That meeting comes after DHS established independent guidelines we previously discussed {GO DEEP}.
DHS’ AI task force is coordinating with the Cybersecurity and
Infrastructure Security Agency on how the department can partner with critical infrastructure organizations
“on safeguarding their uses of AI and strengthening their cybersecurity
practices writ large to defend against evolving threats.”
Remember, in addition to these groups assembling, the Dept of Defense
(DoD) will now conduct online monitoring operations, using enhanced AI
to protect the U.S. internet from “disinformation” under the auspices of
national security. {link}
So, the question becomes, what was Chuck Schumer’s primary reference for this forum?
(FED NEWS)
[…] Schumer said that tackling issues around AI-generated content that
is fake or deceptive that can lead to widespread misinformation and
disinformation was the most time-sensitive problem to solve due to the
upcoming 2024 presidential election.
[…] The top Democrat in the Senate
said there was much discussion during the meeting about the creation of a
new AI agency and that there was also debate about how to use some of
the existing federal agencies to regulate AI.
South Dakota Sen. Mike Rounds,
Schumer’s Republican counterpart in leading the bipartisan AI forums,
said: “We’ve got to have the ability to provide good information to
regulators. And it doesn’t mean that every single agency has to have all
of the top-end, high-quality of professionals but we need that group of
professionals who can be shared across the different agencies when it
comes to AI.”
Although there were no significant
voluntary commitments made during the first AI insight forum, tech
leaders who participated in the forum said there was much debate around
how open and transparent AI developers and those using AI in the federal
government will be required to be. (read more)
There isn’t anything that is going to stop the rapid deployment of AI in
the tech space. The concern for the larger American population, the
group unrepresented in the forum, is the use of AI to identify, control,
and impede information distribution that runs against the interests of
the government and the public-private partnership the technocrats are
assembling.
The words “disinformation” and “deep fakes” are as disingenuous as
the term “Patriot Act.” The definitions of disinformation and deep
fakes are where the government regulations step in, using their portals
into Big Tech, to identify content on platforms that is deemed in
violation.
It doesn’t take a deep political thinker to predict that memes and
video segments against the interests of the state will be defined for
removal.
CTH | The US Special Operations Command (USSOCOM) has contracted New York-based Accrete AI to deploy software that detects “real time” disinformation threats on social media.
The company’s Argus anomaly detection
AI software analyzes social media data, accurately capturing “emerging
narratives” and generating intelligence reports for military forces to
speedily neutralize disinformation threats.
“Synthetic media, including
AI-generated viral narratives, deep fakes, and other harmful social
media-based applications of AI, pose a serious threat to US national
security and civil society,” Accrete founder and CEO Prashant Bhuyan said.
“Social media is widely recognized as
an unregulated environment where adversaries routinely exploit reasoning
vulnerabilities and manipulate behavior through the intentional spread
of disinformation.
“USSOCOM is at the tip of the spear in
recognizing the critical need to identify and analytically predict
social media narratives at an embryonic stage before those narratives
evolve and gain traction. Accrete is proud to support USSOCOM’s
mission.”
But wait… It gets worse!
[PRIVATE SECTOR VERSION]
– The company also revealed that it will launch an enterprise version
of Argus Social for disinformation detection later this year.
The AI software will provide
protection for “urgent customer pain points” against AI-generated
synthetic media, such as viral disinformation and deep fakes.
Providing this protection requires AI
that can automatically “learn” what is most important to an enterprise
and predict the likely social media narratives that will emerge before
they influence behavior. (read more)
Now, take a deep breath…. Let me explain.
The goal is the “PRIVATE SECTOR VERSION.” USSOCOM is the funding
mechanism for deployment, because the system itself is too costly for a
private-sector launch. The Defense Dept budget is used to
contract an artificial intelligence system, the Argus anomaly detection AI, to monitor social media under the auspices of national security.
Once the DoD-funded system is created, the “Argus detection protocol”
– the name given to the AI monitoring and control system – will then be
made available to the private sector. “Enterprise Argus” is then the
commercial product, created with DoD funding, which the U.S.-based tech
sector can deploy.
The DoD cannot independently contract for the launch of an operation
against a U.S. internet network, because of constitutional limits via
The Posse Comitatus Act, which limits the powers of the federal
government in the use of federal military personnel to enforce domestic
policies within the United States. However, the DoD can fund the
creation of the system under the auspices of national defense, and then
allow the private sector to launch for the same intents and purposes. See how that works?
theintercept | On Tuesday evening, Ross Coulthart, an Australian independent journalist who covers UFOs and has interviewed Grusch, posted a statement attributed to Grusch on X, the platform formerly known as Twitter.
“It has come to my attention that The Intercept intends to
publish an article about two incidents in 2014 and 2018 that highlights
previous personal struggles I had with Post Traumatic Stress Disorder
(PTSD), Grief and Depression,” the statement reads. “As I stated under
oath in my congressional testimony, over 40 credentialed intelligence
and military personnel provided myself and my colleagues the information
I transmitted to the Intelligence Community Inspector General (ICIG)
and I took the leadership role to represent the concerns of these
distinguished and patriotic individuals.”
Grusch’s wife, Jessica Grusch, did not respond to several requests for comment.
A former colleague of Grusch’s expressed shock that he retained his
clearance after the 2014 incident, which was also documented in public
records obtained by The Intercept.
“I think it’s like any insular group: Once you’re in, they generally
protect their own,” said the former colleague, who asked not to be named
because they feared professional reprisals.
The former colleague said that the 2014 incident was known to
Grusch’s superiors, a claim that Coulthart appeared to confirm in an
interview on NewsNation, a subscription television network owned by
Nexstar Media.
“The intelligence community and the Defense Department clearly
accepted there was no issue because he was allowed to keep his security
clearance,” Coulthart told Chris Cuomo Tuesday night.
Two Republican members of the House Oversight Committee, Reps. Anna
Paulina Luna and Tim Burchett, were tasked with organizing the July 26
hearing after Grusch’s whistleblower claims became public. Not all House
Republicans are supportive of the effort. Rep. Mike Turner, chair of
the House Intelligence Committee, has taken a dim view of Grusch’s
claims.
“Every decade there’s been individuals who’ve said the United States
has such pieces of unidentified flying objects that are from outer
space,” Turner said.
“There’s no evidence of this and certainly it would be quite a
conspiracy for this to be maintained, especially at this level.”
Grusch emerged as the hearing’s star witness, but his evidence was largely secondhand: When asked, Grusch said
he hasn’t seen any of the recovered alien vehicles or bodies himself.
While two former Navy fighter pilots testified about encounters with
unidentified aerial phenomena, neither said anything about their provenance. Grusch was alone among the witnesses in attributing them to extraterrestrials.
“My testimony is based on information I have been given by
individuals with a longstanding track record of legitimacy,” Grusch said
in his opening statement.
Shortly after The Intercept reached out to Grusch for comment for
this story, Coulthart went on Cuomo’s show and said that The Intercept
was planning to publish “confidential medical records” about Grusch that
had been leaked by the intelligence community. Coulthart, an ardent
defender of Grusch, told NewsNation that “Grusch believes the government
may now be behind an effort to release his medical records in an effort
to smear his credibility.”
“This is a document that would be, if the media had done the right
thing, it would be in his police department file, in the file in the
county sheriff’s office,” Coulthart said in his interview with Cuomo.
“But Dave has checked today, because he assumed that the journalist had
done his homework and just asked the local sheriff for the files. The
sheriff has confirmed it did not come from him. The only other place
that had this information is the intelligence community, Dave’s personal
files inside the intelligence community, where quite properly, when
anybody is security assessed, things like this have to be looked at, and
somebody inside the intelligence community leaked it.”
Coulthart went on to compare the purported leak to Richard Nixon’s
attempts to discredit Daniel Ellsberg, who shared the Pentagon Papers
with the New York Times.
“I think there should be an inquiry into the circumstances of how
sensitive records pertaining to a decorated combat veteran’s file found
their way to a journalist not through the proper channels,” Coulthart
said. “This could’ve been requested under FOI, as is normal, but the
county sheriff has confirmed that did not happen.”
theintercept | While perception management
involves denying, or blocking, propaganda, it can also entail advancing
the U.S.’s own narrative. The Defense Department defines perception
management in its official dictionary
as “[a]ctions to convey and/or deny selected information and indicators
to foreign audiences to influence their emotions, motives, and
objective reasoning.” This is the part that has, historically, tended to
raise the public’s skepticism of the Pentagon’s work.
The term “perception management” hearkens back to
the Reagan administration’s attempts to shape the narrative around the
Contras in Nicaragua. The Reagan administration sought to kick what
Vice President George H.W. Bush would later call the “Vietnam syndrome,”
which it believed was driving American public opposition to support for
the Contras. Ronald Reagan’s CIA director, William Casey, directed
the agency’s leading propaganda specialist to oversee an interagency
effort to portray the Contras — who had been implicated in grisly
atrocities — as noble freedom fighters.
“An elaborate system of inter-agency committees was eventually formed
and charged with the task of working closely with private groups and
individuals involved in fundraising, lobbying campaigns and
propagandistic activities aimed at influencing public opinion and
governmental action,” an unpublished draft chapter of Congress’s
investigation into Iran-Contra states. (Democrats dropped the chapter in
order to get several Republicans to sign the report.)
The Smith-Mundt Act, passed in 1948 in the wake of the Second World
War, prohibits the State Department from disseminating “public
diplomacy” — i.e., propaganda — domestically, instead requiring that
those materials be targeted at foreign audiences. The Defense Department
considered itself bound by this requirement as well.
After the invasion of Iraq, the Pentagon triggered backlash after
U.S. propaganda was disseminated in the U.S. In 2004, the military signaled that it had begun its siege on Fallujah. Just hours later, CNN discovered that this was not true.
But in 2012, the law was amended to allow propaganda to be circulated
domestically, under the bipartisan Smith-Mundt Modernization Act,
introduced by Reps. Adam Smith, D-Wash., and Mac Thornberry, R-Texas,
which was later rolled into the National Defense Authorization Act.
“Proponents of amending these two sections argue that the ban on
domestic dissemination of public diplomacy information is impractical
given the global reach of modern communications, especially the
Internet, and that it unnecessarily prevents valid U.S. government
communications with foreign publics due to U.S. officials’ fear of
violating the ban,” a congressional research service report said
at the time of the proposed amendments. “Critics of lifting the ban
state that it may open the door to more aggressive U.S. government
activities to persuade U.S. citizens to support government policies, and
might also divert the focus of State Department and the BBG
[Broadcasting Board of Governors] communications from foreign publics,
reducing their effectiveness.”
The Obama administration subsequently approved a highly classified
covert action finding designed to counter foreign malign influence
activities, a finding renewed and updated by the Biden administration,
as The Intercept has reported.
The IPMO memo produced for the academic institution hints at its role
in such propagandistic efforts now. “Among other things, the IPMO is
tasked with the development of broad thematic messaging guidance and
specific strategies for the execution of DoD activities designed to
influence foreign defense-related decision-makers to behave in a manner
beneficial to U.S. interests,” the memo states.
As the global war on terror
draws to a close, the Pentagon has turned its attention to so-called
great power adversaries like Russia and China. Following Russia’s
meddling in the 2016 election, which in part involved state-backed
efforts to disseminate falsehoods on social media, offices tasked with
combating disinformation started springing up all over the U.S.
government, as The Intercept has reported.
The director of national intelligence last year established a new
center to oversee all the various efforts, including the Department of
Homeland Security’s Countering Foreign Influence Task Force and the
FBI’s Foreign Influence Task Force.
The Pentagon’s IPMO differs from the others in one key respect:
secrecy. Whereas most of the Department of Homeland Security’s
counter-disinformation efforts are unclassified in nature — as one
former DHS contractor not authorized to speak publicly explained to The
Intercept — the IPMO involves a great deal of highly classified work.
That the office’s work goes beyond simple messaging into the rarefied
world of intelligence is clear from its location within the Pentagon
hierarchy. “The Influence and Perception Management Office will serve as
the senior advisor to the USD(I&S) [Undersecretary of Defense for
Intelligence and Security] for strategic operational influence and
perception management (reveal and conceal) matters,” the budget notes.
When asked about the intelligence community’s counter-disinformation
efforts, Lt. Gen. Scott Berrier, director of the Defense Intelligence
Agency, told Congress this month, “I think DIA’s perspective on this,
senator, is really speed: We want to be able to detect that and it’s
really with our open-source collection capability working with our
combatant command partners where this is happening all over the world —
and then the ability to turn something quickly with them, under the
right authorities, to counter that disinformation, misinformation.”
racket | Years ago, when I first began to have doubts about the Trump-Russia
story, I struggled to come up with a word to articulate my suspicions.
If
the story was wrong, and Trump wasn’t a Russian spy, there wasn’t a
word for what was being perpetrated. This was a system-wide effort to
re-frame reality itself, which was both too intellectually ambitious to
fit in a word like “hoax,” but also probably not against any one law,
either. New language would have to be invented just to define the
wrongdoing, which not only meant whatever this was would likely go
unpunished, but that it could be years before the public was ready to
talk about it.
Around that same time, writer Jacob Siegel — a former army infantry and intelligence officer who edits Tablet’s afternoon digest, The Scroll—
was beginning the job of putting key concepts on paper. As far back as
2019, he sketched out the core ideas for a sprawling, illuminating
13,000-word piece that just came out this week. Called “A Guide to Understanding the Hoax of the Century: Thirteen ways of looking at disinformation,” Siegel’s Tablet article
is the enterprise effort at describing the whole anti-disinformation
elephant I’ve been hoping for years someone in journalism would take on.
It will escape no one’s notice that Siegel’s lede recounts the Hamilton 68 story
from the Twitter Files. Siegel says the internal dialogues of Twitter
executives about the infamous Russia-tracking “dashboard” helped him
frame the piece he’d been working on for so long. Which is great, I’m
glad about that, but he goes far deeper into the topic than I have, and
in a way that has a real chance to be accessible to all political
audiences.
Siegel threads together all the disparate
strands of a very complex story, in which the sheer quantity of themes
is daunting: the roots in counter-terrorism strategy, Russiagate as a
first great test case, the rise of a public-private
“counter-disinformation complex” nurturing an “NGO Borg,” the importance
of Trump and “domestic extremism” as organizing targets, the
development of a new uniparty politics anointing itself “protector” of
things like elections, amid many other things.
He concludes
with an escalating string of anxiety-provoking propositions. One is
that our first windows into this new censorship system, like Stanford’s Election Integrity Partnership,
might also be our last, as AI and machine learning appear ready to step
in to do the job at scale. The National Science Foundation just
announced it was “building a set of use cases”
to enable ChatGPT to “further automate” the propaganda mechanism, as
Siegel puts it. The messy process people like me got to see, just
barely, in the outlines of Twitter emails made public by a
one-in-a-million lucky strike, may not appear in recorded human
conversations going forward. “Future battles fought through AI
technologies,” says Siegel, “will be harder to see.”
More
unnerving is the portion near the end describing how seemingly smart
people are fast constructing an ideology of mass surrender. Siegel
recounts the horrible New York Times Magazine article (how did I forget it?) written by Yale law graduate Emily Bazelon just before the 2020 election, whose URL is titled “The Problem of Free Speech in an Age of Disinformation.” Shorter Bazelon could have been Fox Nazis Censorship Derp: the article the Times
really ran was insanely long and ended with flourishes like, “It’s time
to ask whether the American way of protecting free speech is actually
keeping us free.”
Both the actors in the Twitter Files and
the multitudinous papers produced by groups like the Aspen Institute
and Harvard’s Shorenstein Center are perpetually concerned with
re-thinking the “problem” of the First Amendment, which of course is not
popularly thought of as a problem. It’s notable that the
Anti-Disinformation machine, a clear sequel to the Military-Industrial
Complex, doesn’t trumpet the virtues of the “free world” but rather the
“rules-based international order,” within which (as Siegel points out)
people like former Labor Secretary Robert Reich talk about digital
deletion as “necessary to protect American democracy.” This idea of
pruning fingers off democracy to save it is increasingly popular; we
await the arrival of the Jerzy Kosinski character who’ll propound this
political gardening metaphor to the smart set.
WaPo | The Kremlin’s disinformation
casts the United States — and Ukraine — as villains for creating germ
warfare laboratories, giving Mr. Putin another pretext for a war that
lacks all justification. The disinformation undermines the biological
weapons treaty, showing that Mr. Putin has little regard for maintaining
the integrity of this international agreement. The disinformation
attempts to divert attention from Russia’s barbaric onslaught against
civilians in Ukraine. In 2018, the Kremlin may have been seeking to
shift attention from the attempted assassination of former double agent
Sergei Skripal in Britain, or from the Robert S. Mueller III
investigation that year of Russian meddling in the U.S. presidential
campaign.
The biological laboratories are just one example of Russia’s wider disinformation campaigns.
Data shared by Facebook shows Russians “built manipulative Black Lives
Matter and Blue Lives Matter pages, created pro-Muslim and pro-Christian
groups, and let them expand via growth from real users,” says author Samuel Woolley in “The Reality Game.”
He adds, “The goal was to divide and conquer as much as it was to dupe
and convince.” During the pandemic, Russia similarly attempted to aggravate existing tensions
over public health measures in the United States and Europe. It has
also spread lies about the use of chemical weapons, undermining the
treaty that prohibits them and the organization that enforces it. In the
Ukraine war, Russia has fired off broadsides of disinformation, such as
claiming
the victims of the Mariupol massacre were “crisis actors.” Russia used
disinformation to mask its responsibility for the shoot-down of the
Malaysia Airlines flight MH-17 over Ukraine in 2014.
The
disinformation over Ukraine, repeated widely in the Russian media,
plays well with social groups that support Putin: the poor, those
living in rural areas and small towns, and those being asked to send
young men to the front. Mr. Putin so tightly controls the news media
that it is difficult for alternative news and messages to break through.
Disinformation
is a venom. It does not need to flip everyone’s, or even most people’s,
views. Its methods are to creep into the lifeblood, create uncertainty,
enhance established fears and sow confusion.
The best way to strike back is with the facts, and fast. Thomas Kent, the former president of Radio Free Europe/Radio Liberty, has pointed out
that the first hours are critical in such an asymmetrical conflict:
Spreaders of disinformation push out lies without worrying about their
integrity, while governments and the news media try to verify
everything, and take more time to do so. Mr. Kent suggests speeding the
release of information that is highly likely to be true, rather than
waiting. For example, it took 13 days for the British government to
reach a formal conclusion that Russia was behind the poisoning of Mr.
Skripal, but within 48 hours of the attack, then-Foreign Secretary Boris
Johnson told Parliament that it appeared to be Russia, which helped tip
the balance in the press and public opinion.
In
Ukraine, when Russia was on the threshold of invasion, government and
civil society organizations rapidly coordinated an informal “early
warning system” to detect and identify Russia’s false claims and
narratives. It was successful when the war began, especially with use of
the Telegram app. In a short time, Telegram use leapt from 12 percent
adoption to 65 percent, according to those involved in the effort.
Also
in Ukraine, more than 20 organizations, along with the National
Democratic Institute in Washington, had created a disinformation
debunking hub in 2019 that has played a key role in the battle against the onslaught of lies. A recent report
from the International Forum for Democratic Studies at the National
Endowment for Democracy identified three major efforts that paid off for
Ukraine in the fight against Russian disinformation as war began:
“deep preparation” (since Russia was recycling old claims from 2014,
they were ready); active and rapid cooperation of civil society groups;
and the use of technology, such as artificial intelligence and machine
learning, to help sift through the torrents of Russian disinformation
and rapidly spot malign narratives.
Governments can’t do this on their own. Free societies have an advantage that autocrats don’t: authentic civil society that can be agile and innovative. In the run-up to the Ukraine war, all across Central and Eastern Europe, civil society groups were sharpening techniques for spotting and countering Russian disinformation.
Plain old media literacy among readers and viewers — knowing how to discriminate among sources, for example — is also essential.
Open societies are vulnerable because
they are open. The asymmetries in favor of malign use of information
are sizable. Democracies must find a way to adapt. The dark actors morph
constantly, so the response needs to be systematic and resilient.
Slate | Carlson also made a big show of his “exclusive” interview with Tarik Johnson, a former Capitol officer who has actually been interviewed before by NPR.
The House’s select committee on Jan. 6 did a fine job of connecting
larger dots, drawing a straight line from the Stop the Steal rhetoric
through to the insurrection. But though it interviewed Capitol police
officers, it skipped an interview with Johnson, who was pictured that
day wearing a MAGA hat. “The frontline officers and supervisors were not
prepared at all,” Johnson said on the air. He told Carlson he asked
leadership for direction after the Capitol was breached. “I got no
response,” he said. (He said he wore the MAGA hat to avoid being assaulted by the crowds of rioters himself; the Capitol police have denied that no one responded to Johnson.) Johnson offered seemingly sincere
answers to Carlson’s leading and partisan questions, and gave Carlson’s
audience a fair representation of the riot: “They focused on Donald
Trump, not the failures of the Capitol police,” he said of the
committee. “Some people there had planned on being violent. Some people
may have turned violent after what they were going through. I think
people wanted to support their president. Some of those people just
wanted to support him, and some of those people didn’t commit violence,
and some of those people didn’t plan on it.”
Rand | This week marks one year since Russia's full-scale invasion of
Ukraine began, igniting the largest armed conflict in Europe since World
War II.
RAND researchers have been analyzing the war from countless angles, providing insights on Russian and Ukrainian capabilities, the potential for diplomacy, refugee assistance, and much more.
What have we learned? And what might lie ahead?
We asked nearly 30 RAND experts to reflect on this
grim anniversary by highlighting notable takeaways from the first year
of Russia's all-out war—and sharing what they're watching as the
conflict in Ukraine grinds on. Here's what they said.
“Russia seems poised to resume limited offensives. Ukraine also seeks
another successful counteroffensive. Yet both sides' capabilities are
being worn down. Ukraine will need continued and predictable support as
Russia digs deep into its reserves.”
What stood out in Year One
“The trajectory of Russia-Ukraine negotiations seems odd in retrospect. The sides came closest to outlining the contours of a settlement in the first six weeks of the conflict. What was nearly agreed to then would be inconceivable now.”
What to watch in Year Two
“I will be watching closely to see if Russia is learning from its mistakes or just perpetuating them.”
What stood out in Year One
“Of the war's many takeaways, perhaps the most fundamental is that
large, conventional wars are not just confined to history books. It's a
lesson that many only half-believed until February 24, and one that the
world must never forget going forward.”
What to watch in Year Two
“The big strategic question is whether the front lines will stagnate
and eventually turn the war into a frozen conflict. The answer will
ultimately come down to whether Western military aid or the ongoing
Russian mobilization gains the upper hand.”
What stood out in Year One
“The strategic failure of the Russian leadership and the incompetence of the Russian military.”
What to watch in Year Two
“The evolving views of the Russian elite and the Russian populace toward Putin and the war.”
NYTimes | Soon after ChatGPT debuted last year, researchers tested what the artificial intelligence chatbot would write after it was asked questions peppered with conspiracy theories and false narratives.
The results — in writings formatted as news articles, essays and television scripts — were so troubling that the researchers minced no words.
“This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” said Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation and conducted the experiment last month. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.”
Disinformation is difficult to wrangle when it’s created manually by humans. Researchers predict
that generative technology could make disinformation cheaper and easier
to produce for an even larger number of conspiracy theorists and
spreaders of disinformation.
Personalized, real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways, researchers say, smoothing out human errors like poor syntax and mistranslations and advancing beyond easily discoverable copy-paste jobs. And they say that no available mitigation tactics can effectively combat it.
Predecessors to ChatGPT, which was created by the San Francisco artificial intelligence company OpenAI, have been used for years to pepper online forums and social media platforms with (often grammatically suspect) comments and spam. Microsoft had to halt activity from its Tay chatbot within 24 hours of introducing it on Twitter in 2016 after trolls taught it to spew racist and xenophobic language.
ChatGPT is far more powerful and sophisticated. Supplied with questions loaded with disinformation, it can produce convincing, clean variations on the content en masse within seconds, without disclosing its sources. On Tuesday, Microsoft and OpenAI introduced a new Bing search engine and web browser that can use chatbot technology to plan vacations, translate texts or conduct research.
MIT | Since 2014, viral images of Black people being
killed at the hands of the police—Michael Brown, Eric Garner, Breonna
Taylor, and many, many others—have convinced much of the public that the
American criminal legal system is broken. In the summer of 2020,
nationwide protests against police racism and violence in the wake of
George Floyd’s murder were, according to some analysts, the largest
social movement in the history of the United States.2 Activists and academics have demanded defunding the police and reallocating the funds to substitutes or alternatives.3 And others have called for abolishing the police altogether.4
Here is the article in which Harvard profs call for greatest expansion of militarized police surveillance bureaucracy in Western history. Below I discuss the flagrant ethical and intellectual problems and how elite academia can be so dangerous. https://t.co/Cq1fFCdWR2
It has become common knowledge that the police do not solve serious
crime, they focus far too much on petty offenses, and they are far too
heavy-handed and brutal in their treatment of Americans—especially poor,
Black people. This is the so-called paradox of under-protection and
over-policing that has characterized American law enforcement since
emancipation.5
The American criminal legal system is unjust and
inefficient. But, as we argue in this essay, over-policing is not the
problem. In fact, the American criminal legal system is characterized by
an exceptional kind of under-policing, and a heavy reliance on
long prison sentences, compared to other developed nations. In this
country, roughly three people are incarcerated per police officer
employed. The rest of the developed world strikes a diametrically
opposite balance between these twin arms of the penal state, employing
roughly three and a half times more police officers than the number of
people they incarcerate. We argue that the United States has it
backward. Justice and efficiency demand that we strike a balance between
policing and incarceration more like that of the rest of the developed
world. We call this the “First World Balance.”
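The ratio comparison at the heart of the authors’ argument is easy to misread, because the two ratios run in opposite directions: the U.S. figure is incarcerated people per officer, while the comparison figure is officers per incarcerated person. A quick sketch with illustrative round numbers (chosen only to match the essay’s stated ratios, not drawn from official statistics) makes the inversion concrete:

```python
# Illustrative figures only -- round numbers chosen to match the essay's
# stated ratios, not official statistics.
us_incarcerated = 2_100_000
us_police = 700_000

# Hypothetical comparison bloc standing in for "the rest of the
# developed world" in the essay's framing.
other_incarcerated = 100_000
other_police = 350_000

# United States: about 3 people incarcerated per police officer.
us_ratio = us_incarcerated / us_police              # 3.0

# Comparison: about 3.5 officers per incarcerated person.
other_ratio = other_police / other_incarcerated     # 3.5

print(f"US: {us_ratio:.1f} incarcerated per officer")
print(f"Comparison: {other_ratio:.1f} officers per incarcerated person")
```

Put on the same scale, the gap is roughly tenfold: the U.S. balance is about 0.33 officers per incarcerated person versus about 3.5 elsewhere, which is the asymmetry the “First World Balance” label refers to.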
We defend this idea in much more detail in a forthcoming book titled What’s Wrong with Mass Incarceration.
This essay offers a preliminary sketch of some of the arguments in the
book. In the spirit of conversation and debate, in this essay we err
deliberately on the side of comprehensiveness rather than argumentative
rigor. One of us is a social scientist, and the other is a philosopher
and legal scholar. Our primary goal for this research project, and
especially in this essay, is not to convince readers that we are
correct—but rather to encourage a more explicit discussion of the
empirical and normative bases of some pressing debates about the
American criminal legal system. Even if our answers prove unsound, we
hope that the combination of empirical social science and analytic moral
and political philosophy we contribute can help illuminate what
alternative answers to those questions might have to look like to be
sound. In fact, because much of this essay (and the underlying book
project) strikes a pessimistic tone, we would be quite happy to be wrong
about much of what we argue here.
In the first part of this essay, we outline five
comparative facts that contradict much of the prevailing way of thinking
about what is distinctive about the American criminal legal system. In
the second part, we draw out the normative implications of those facts
and make the case for the First World Balance.
At the end of the call, I remind her that she is engaging in illegal acts by telling social media companies what content to censor. See page 7 for her message to Twitter showing them what to censor.
Twitter follows orders, even though Carol is breaking the law. They aren’t going to turn her in. On the contrary, they are in on it. This is collusion to censor free speech.
If you’d like to tell her you support my suggestion, you can reach her at ccrawford@cdc.gov.
I’m sure she’d be delighted to hear from you.
YouTube censored my video within minutes of posting
I also uploaded the video on YouTube, but it was censored after just 6 views! YouTube will censor anything that makes the government look bad. So if you document government corruption, YouTube is not the place to post it.
caitlinjohnstone | The empire has had mixed feelings about the internet since its creation. On one hand it allows for unprecedented surveillance and information gathering and the rapid distribution of propaganda, which it likes, but on the other it allows for the unprecedented democratization of information, which it doesn’t like.
Its answer to this quandary has been to come up with “fact checking” services and Silicon Valley censorship protocols for restricting “misinformation” (with “facts” and “information” defined as “whatever advances imperial interests”). That’s all we’re seeing with continually expanding online censorship policies, and with government-tied oligarchic narrative management operations like NewsGuard.
Twitter has imposed a weeklong suspension on the account of writer and political activist Danny Haiphong for a thread he made on the platform disputing the mainstream Tiananmen Square massacre narrative.
The notification Haiphong received informed him that Twitter had locked his account for “Violating our rules against abuse and harassment,” presumably in reference to a rule the platform put in place a year ago which prohibits “content that denies that mass murder or other mass casualty events took place, where we can verify that the event occurred, and when the content is shared with abusive intent.”
“This may include references to such an event as a ‘hoax’ or claims that victims or survivors are fake or ‘actors,’” Twitter said
of the new rule. “It includes, but is not limited to, events like the
Holocaust, school shootings, terrorist attacks, and natural disasters.”
That we are now seeing this rule applied to protect narratives which support the geostrategic interests of the US-centralized empire is not in the least bit surprising.
Haiphong is far from the first
to dispute the mainstream western narrative about exactly what happened
around Tiananmen Square in June of 1989 as the Soviet Union was
crumbling and Washington’s temporary Cold War alignment with Beijing was
losing its strategic usefulness.
But we can expect more acts of online censorship like this as Silicon
Valley continues to expand into its role as guardian of imperial
historic records.
This idea that government-tied Silicon Valley institutions should act as arbiters of history on behalf of the public consumer is gaining steadily increasing acceptance in the artificially manufactured echo chamber of mainstream public opinion. We saw another example of this recently in Joe Lauria’s excellent refutation of accusations against Consortium News of historic inaccuracy by the imperial narrative management firm NewsGuard.
As journalists like Whitney Webb and Mnar Adley
noted years ago, NewsGuard markets itself as a “news rating agency”
designed to help people sort out good from bad sources of information
online, but in reality functions as an empire-backed weapon against
media who question imperial narratives about what’s happening in the
world. The Grayzone’s Max Blumenthal outlined the company’s many partnerships with imperial swamp monsters like former NATO Secretary General Anders Fogh Rasmussen and “chief propagandist”
Richard Stengel as well as “imperialist cutouts like the German
Marshall Fund” when its operatives contacted his outlet for comment on
their accusations.
ukraina.ru | Opening the meeting, Alexei Pushkov, Chairman of the Federation Council Commission on Information Policy and Interaction with the Media, noted the importance of information policy today, and offered a number of considerations on what should be taken into account in shaping it for the foreseeable period. According to Pushkov, we already live in a qualitatively changed world, and we should not expect that after the end or settlement of the conflict in Ukraine everything will return to normal.
“We see a big reformatting of the geopolitical space, the geo-economic space, transport links, energy supplies, and so on; we are entering a qualitatively new world,” the politician said, emphasizing that there is also a geopolitical gap between Russia and the West: “These are no longer fluctuations; this is a different state of relations, and if it is less pronounced on our part, the West has clearly taken a line toward a maximum break in relations and contacts with Russia.”
The politician noted that this big gap is accompanied by “a fundamental ideological and informational demarcation, at least as it concerns the break with the official line of Western states and the mainstream media, which determine the informational and propaganda agenda.”
“This big gap was a matter of time, since Russia fully decided on its choice to develop as an independent center of power. This was not provided for in Western doctrine and Western policy,” Pushkov said.
He added that the doctrine of the “end of history,” which justifies the triumph of the Western liberal model of the world order, is unacceptable to Russia.
The vaunted pluralism is over: the West now has one doctrine that everyone must obey, and a number of tools are used to enforce it (cancel culture, censorship, blocking of objectionable resources, banning of journalists, and so on). All of this is clearly felt in Russia’s information field, which, of course, cannot be tolerated.
The disengagement manifests itself in various forms: the West has rejected the principle of equal security, is ready to impose its interests by force, and imposes a “deliberately muddy” system based on rules that are constantly changing and have no logical justification. In effect, a new international law is being introduced, a “law de facto.”
In this regard, Russia’s information policy faces a set of important tasks, the senator continued. There are several directions: in particular, developing and approving its own criteria for assessing what is happening in the world; gaining an independent view of itself; and reconsidering ideas of the superiority of the Western liberal model. “It turned out that the Soviet Union was not so bad,” and Western democracy, in its current form, clearly cannot be a model ...
In a word, in the Russian Federation “it is necessary to form a new informational, educational, and cultural space in which the Western world will no longer be a subject of fetishization ... but also not a subject of rejection,” the politician said.
British empire smut rag The Times has a new article out titled “Azov Battalion drops neo-Nazi symbol exploited by Russian propagandists,”
which has got to be the most hilarious headline of 2022 so far (and I’m
including The Onion and other intentionally funny headlines in the
running).
“The Azov Battalion has removed a neo-Nazi symbol from
its insignia that has helped perpetuate Russian propaganda about Ukraine
being in the grip of far-right nationalism,” The Times informs us. “At
the unveiling of a new special forces unit in Kharkiv, patches handed to
soldiers did not feature the wolfsangel, a medieval German symbol that
was adopted by the Nazis and which has been used by the battalion since
2014. Instead, they featured a golden trident, the Ukrainian national
symbol worn by other regiments.”
Yeah that’s how you solve Ukraine’s Nazi problem. A logo change.
Claiming it’s “Russian propaganda” to say the Azov Battalion uses neo-Nazi insignia, and is ideologically neo-Nazi, is itself propaganda. A month ago Moon of Alabama published an incomplete list of the many mainstream western outlets who have described various Ukrainian paramilitaries as such, so if it’s only “Russian propagandists” who’ve been saying the Azov Battalion is neo-Nazi then Silicon Valley social media platforms should immediately ban outlets like NBC News, the BBC, The Guardian, and Reuters.
Before this war started this past February it wasn’t seriously controversial to say that Ukraine has a Nazi problem except in the very most virulent of empire spinmeister echo chambers. Even in the early days of the conflict it was still happening with mainstream publications who hadn’t yet gotten the memo that history had been rewritten, like this NBC News article from March titled “Ukraine’s Nazi problem is real, even if Putin’s ‘denazification’ claim isn’t.”
So plainly it is not “Russian propaganda” to highlight the
established fact that there are neo-Nazi paramilitaries in Ukraine who
are receiving weapons from the US and its allies. The change in insignia
isn’t being made to correct a misperception, it’s being made to obscure
a correct perception.
The change in insignia is a rebranding to a more mainstream-friendly logo, very much like Aunt Jemima rebranding to Pearl Milling Company due to the Jim Crow racism
the previous branding evoked. The primary difference is that the
corporate executives of Pearl Milling probably aren’t still interested
in turning America back into an apartheid state.
As journalist Alex Rubenstein noted on Twitter, al Qaeda in Syria went through a similar rebranding not long ago for the exact same reasons: