Wednesday, April 05, 2023

SSRI Antidepressants Cause Mass Shootings

amidwesterndoctor |  Much like the vaccine industry, the psychiatric industry will always try to absolve their dangerous medications of responsibility and will aggressively gaslight their victims. Despite these criticisms, there are three facts that can be consistently found throughout the literature on akathisia homicides, which Gøtzsche argues irrefutably implicate psychiatric medications as the cause of violent homicides:

• These violent events occur in people of all ages, who by all objective and subjective measures were completely normal before the act, and in whom no precipitating factors besides the psychiatric medication could be identified.
• The events were preceded by clear symptoms of akathisia.
• The violent offenders returned to their normal personality when they came off the antidepressant.

Numerous cases where this has happened are summarized within this article from the Palm Beach Post. Across those cases, a common trend emerges: the spontaneous act of violence was immediately preceded by a significant change in the psychiatric medications used by the individual. In one case, shortly before committing murder, the perpetrator wrote on a blog that, while taking Prozac, he felt as if he were observing himself "from above."

Individuals with a mutation in the gene encoding the enzyme that metabolizes psychiatric drugs are much more vulnerable to accumulating excessive levels of these drugs and developing severe symptoms such as akathisia and psychosis. There is a good case to be made that individuals with this mutation are responsible for many of the horrific acts of iatrogenic (medically induced) violence that occur; however, to my knowledge, this is never considered when psychiatric medications are prescribed. Gøtzsche summarized a peer-reviewed forensic investigation of 10 cases where this happened (all but one of these involved an SSRI or an SNRI).

Note: The original version of this article (which has been revised and updated) was published a year ago, but sadly it is just as pertinent now as it was then. Each time one of these shootings happens, I watch people get up in arms over what needs to be done to stop our children from being murdered, yet the elephant in the room, the clear and irrefutable evidence linking psychiatric medications to homicidal violence, is never discussed (which I believe is due to their sales generating approximately 40 billion dollars a year).

Many of the stories here are quite heart-wrenching, and I humbly request that you make the effort to bear witness to these tragic events.

Prior to the Covid vaccinations, psychiatric medications were the mass-prescribed drugs with the worst risk-to-benefit ratio on the market. In addition to rarely providing benefits to patients, they commonly produce a wide range of severe complications. Likewise, I and many colleagues believe the widespread adoption of psychotropic drugs has distorted the cognition of the demographic of the country which frequently utilizes them (which to some extent stratifies by political orientation) and has created a wide range of detrimental shifts in our society.

Selective serotonin reuptake inhibitors (SSRIs) have a primary mechanism of action similar to cocaine's. SSRIs block the reuptake of serotonin; SNRIs, also commonly prescribed, block the reuptake of serotonin and norepinephrine (henceforth "SSRI" refers to both SSRIs and SNRIs); cocaine blocks the reuptake of serotonin, norepinephrine, and dopamine. SSRIs were originally used as antidepressants, then gradually had their use marketed into other areas, and along the way have amassed a massive body count.

Once the first SSRI, Prozac, entered the market in 1988, it quickly distinguished itself as a particularly dangerous medication; within nine years, the FDA had received 39,000 adverse event reports for Prozac, a number far greater than for any other drug. These included hundreds of suicides, atrocious violent crimes, hostility and aggression, psychosis, confusion, distorted thinking, convulsions, amnesia, and sexual dysfunction (long-term or permanent sexual dysfunction is one of the most commonly reported side effects of antidepressants, which is ironic given that the medication is supposed to make you less, not more, depressed).

SSRI homicides are common, and a website exists that has compiled thousands upon thousands of documented occurrences. As far as I know (there are most likely a few exceptions), in every case where a mass school shooting has happened and it was possible to know the medical history of the shooter, the shooter was taking a psychiatric medication known for causing these behavioral changes. After each mass shooting, memes illustrating this topic typically circulate online, and the recent events in Texas [this article was written shortly after the shooting last year] are no exception. I found one of these and made an updated version of it (the one I originally used contained some inaccuracies).

Oftentimes, “SSRIs cause mass shootings” is treated as just another crazy conspiracy theory. However, much in the same way the claim “COVID Vaccines are NOT safe and effective” is typically written off as a conspiracy theory, if you go past these labels and dig into the actual data, an abundantly clear and highly concerning picture emerges.

There are many serious issues with psychiatric medications. For brevity, this article will exclusively focus on their tendency to cause horrific violent crimes, a tendency that was known to both the drug companies and the FDA long before these drugs entered the market. While there is a large amount of evidence for this correlation, it is the one topic that is never up for debate when a mass shooting occurs. I have a lot of flexibility to discuss highly controversial topics with my colleagues, but this topic is met with so much hostility that I can never bring it up. It is for this reason that I am immensely grateful to have an anonymous forum I can use.

 

How Big Pharma And The FDA Buried The Dangers Of SSRI Antidepressants

pierrekory |   One of the pharmaceutical executives directly involved in obtaining approval for the original SSRI antidepressant, Prozac, developed a great deal of guilt over what he was complicit in once a large number of SSRI-linked deaths occurred. John Virapen and Peter Rost are the only pharmaceutical executives I know of who have become whistleblowers and shared the intimate details of how these companies actually operate. Although the events Virapen alleged seem hard to believe, other whistleblowers have made similar observations (the accounts of the Pfizer whistleblowers can be found in this article and this article).

John Virapen chronicled the events in which he was complicit in “Side Effects: Death—Confessions of a Pharma Insider.” These included outrageous acts of bribery to get his drugs approved, and photographing physicians with prostitutes provided by Eli Lilly so that they could be blackmailed into serving the company. For those interested, this is a brief talk that Virapen gave about his experiences. I greatly appreciate the fact that he used candid language rather than euphemisms like almost everyone else does:

At the start of the saga, Lilly was in dire financial straits and the company’s survival hinged on the approval of Prozac. Prozac had initially been proposed as a treatment for weight loss (as this side effect of Prozac had been observed in treatment subjects), but Lilly subsequently concluded it would be easier to get approval for treating depression and then get a post-marketing approval for the treatment of weight loss.

As Prozac took off, it became clear that depression was a much better market, and the obesity aspect was forgotten. Lilly then used a common industry tactic: it worked tirelessly to expand the definition of depression so that everyone could become eligible for the drug, and aggressively marketed this need for happiness to the public, before long transforming depression from a rare condition into a common one. For those wishing to learn more, Peter Gøtzsche has extensively documented how this fraud transpired, and both this brief documentary and this article show how depression became popularized in Japan so that treatments for it could be sold.

Unfortunately, while the marketing machine had no difficulty creating demand for Prozac, the initial data made it abundantly clear that the drug was dangerous and ineffective. Lilly settled on the strategy of obtaining regulatory approval in Sweden and then using this approval as a precedent to obtain approval in other countries. Virapen was assigned to this task and told by his superiors that if he failed, his career was over. Virapen, unfortunately, discovered that whenever he provided Lilly’s clinical trial data to experts, they had trouble believing he was actually seeking regulatory approval, as Prozac’s trial data was just that bad.

Sweden (following its regulatory procedures) elected to allow an outside independent expert to make the final determination on whether Prozac should be approved. The identity of this expert was concealed, but Virapen was able to determine that it was Anders Forsman, a forensic psychiatrist and member of the legal council on the Swedish National Board of Health. After meeting with Virapen, Forsman proposed an untraceable bribe. Then, upon receiving payment, Forsman wrote a glowing letter in support of Prozac, fully reversing his position on the drug (he had ridiculed it two weeks before), and guided Virapen through rewriting the trial data to conceal the five suicide attempts (four of which were successful) in Lilly’s trial.

Forsman’s expert opinion resulted in Prozac being partially approved and formally priced for reimbursement in Sweden, which was used as a precedent to market it around the world at that same lucrative price. Virapen noted that during this time, German drug regulators who had clearly and unambiguously stated that Prozac was “totally unsuitable for the treatment of depression” suddenly reversed their position, leading Virapen to suspect that similar under-the-table activity must have occurred in Germany. David Healy, a doctor and director of the North Wales School of Psychological Medicine, likewise concluded that the German approval was due to “unorthodox lobbying methods exercised on independent members of the regulatory authorities.”

Not long after saving Eli Lilly, Virapen was fired. Virapen believes this was because he was a man of color in an otherwise Caucasian company (he was told this by his supervisor). Gøtzsche, a leading expert in pharmaceutical research fraud and meta-analyses, on the other hand, attributed it to typical organized-crime tactics: Lilly sought to conceal its illegal activity by firing Virapen and the two assistants who had helped him bribe Forsman (immediately afterwards, none of them were permitted to access their offices, and thus they could not obtain any of the files that proved the bribery occurred). Fortunately, as happened with Peter Rost, this unjust firing eventually motivated Virapen to become an invaluable whistleblower.

Heavily Abused Legal Drugs Adderall And Xanax Blocked By "Secret Limits"

Word on the street, and what I've witnessed with my very own lying eyes: information technology CHUDs and medical students alike have been crying like little bishes about the market's failure to keep them supplied with their longtime legal drugs of dependency.

Bloomberg  |  Patients diagnosed with conditions like anxiety and sleep disorders have become caught in the crosshairs of America’s opioid crisis, as secret policies mandated by a national opioid settlement have turned filling legitimate prescriptions into a major headache.

In July, limits went into effect that flag and sometimes block pharmacies’ orders of controlled substances such as Adderall and Xanax when they exceed a certain threshold. The requirement stems from a 2021 settlement with the US’s three largest drug distributors — AmerisourceBergen Corp., Cardinal Health Inc. and McKesson Corp. But pharmacists said it curtails their ability to fill prescriptions for many different types of controlled substances — not just opioids.

Independent pharmacists said the rules force them to come up with creative workarounds. Sometimes, they must send patients on frustrating journeys to find pharmacies that haven’t yet exceeded their caps in order to buy prescribed medicines.

“I understand the intention of this policy is to have control of controlled substances so they don’t get abused, but it’s not working,” said Richard Glotzer, an independent pharmacist in Millwood, New York. “There’s no reason I should be cut off from ordering these products to dispense to my legitimate patients that need it.”

It's unclear how the thresholds are impacting major chain pharmacies. CVS Health Corp. didn’t provide comment. A spokesperson for Walgreens Boots Alliance Inc. said its pharmacists “work to resolve any specific issues when possible, in coordination with our distributors.” 

The Drug Enforcement Administration regulates the manufacturing, distribution and sale of controlled substances, which can be dangerous when used improperly. Drugmakers and wholesalers were always supposed to keep an eye out for suspicious purchases and have long had systems to catch, report and halt these orders. The prescription opioid crisis, enabled by irresponsible drug company marketing and prescribing, led to a slew of lawsuits and tighter regulations on many parts of the health system, including monitoring of suspicious orders. One major settlement required the three largest distributors to set thresholds on orders of controlled substances starting last July.

The “suspicious order” terminology is a bit of a misnomer, pharmacists said. The orders themselves aren't suspicious, it's just that the pharmacy has exceeded its limit for a specific drug over a certain time period. Any order that puts the pharmacy over its limit can be stopped. As a result, patients with legitimate prescriptions get caught up in the dragnet.

Adding to the confusion, the limits themselves are secret. Drug wholesalers are barred by the settlement agreement from telling pharmacists what the thresholds are, how they’re determined or when the pharmacy is getting close to hitting them.
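
Taken together, what the pharmacists describe amounts to a simple rolling-cap rule: any order that would push a pharmacy's recent total for a drug over a hidden threshold gets blocked, regardless of the prescriptions behind it. The sketch below is a toy illustration of that logic only; since the settlement keeps the real caps, windows, and formulas secret, the class name and every number here are invented assumptions.

```python
# Toy sketch of the rolling-cap rule described above. The real thresholds,
# windows, and formulas are secret under the settlement; everything here
# is invented for illustration.
from collections import deque
from datetime import datetime, timedelta

class DrugOrderLimiter:
    def __init__(self, cap_units: int, window_days: int):
        self.cap = cap_units
        self.window = timedelta(days=window_days)
        self.history = deque()  # (order time, units) within the window

    def try_order(self, when: datetime, units: int) -> bool:
        """Block any order that would push the rolling total over the cap."""
        while self.history and when - self.history[0][0] > self.window:
            self.history.popleft()  # forget orders older than the window
        running_total = sum(u for _, u in self.history)
        if running_total + units > self.cap:
            return False  # flagged: the pharmacy exceeded its limit
        self.history.append((when, units))
        return True  # the order itself was never "suspicious"

limiter = DrugOrderLimiter(cap_units=1000, window_days=30)  # made-up numbers
print(limiter.try_order(datetime(2023, 7, 1), 600))   # True: under the cap
print(limiter.try_order(datetime(2023, 7, 10), 600))  # False: would exceed it
```

Note that nothing distinguishes the blocked order from the accepted one except the hidden running total, which is why patients with legitimate prescriptions get caught in the dragnet.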

Tuesday, April 04, 2023

Physics From Computation

00:00:00 Introduction 

00:02:58 Physics from computation 

00:11:30 Generalizing Turing machines  

00:17:34 Dark matter as indicating "atoms of space"

00:22:13 Energy as density of space itself  

00:30:30 Entanglement limit of all possible computations  

00:34:53 What persists across the universe are "concepts"  

00:40:09 How does ChatGPT work?  

00:41:41 Irreducible computation, ChatGPT, and AI  

00:49:20 Recovering general relativity from the ruliad (Wolfram Physics Project)  

00:58:38 Coming up: David Chalmers, Ben Goertzel, and more Wolfram

India Beware: ChatGPT Is A Missile Aimed Directly At Low-Cost Software Production

theguardian  | “And so for me,” he concluded, “a computer has always been a bicycle of the mind – something that takes us far beyond our inherent abilities. And I think we’re just at the early stages of this tool – very early stages – and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, [but] that’s nothing to what’s coming in the next 100 years.”

Well, that was 1990 and here we are, three decades later, with a mighty powerful bicycle. Quite how powerful it is becomes clear when one inspects how the technology (not just ChatGPT) tackles particular tasks that humans find difficult.

Writing computer programs, for instance.

Last week, Steve Yegge, a renowned software engineer who – like all uber-geeks – uses the ultra-programmable Emacs text editor, conducted an instructive experiment. He typed the following prompt into ChatGPT: “Write an interactive Emacs Lisp function that pops to a new buffer, prints out the first paragraph of A Tale of Two Cities, and changes all words with ‘i’ in them red. Just print the code without explanation.”

ChatGPT did its stuff and spat out the code. Yegge copied and pasted it into his Emacs session and published a screenshot of the result. “In one shot,” he writes, “ChatGPT has produced completely working code from a sloppy English description! With voice input wired up, I could have written this program by asking my computer to do it. And not only does it work correctly, the code that it wrote is actually pretty decent Emacs Lisp code. It’s not complicated, sure. But it’s good code.”
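
For readers who want to try the same thing, here is a rough sketch of how Yegge's prompt could be replayed programmatically, assuming the OpenAI Python client as it existed in early 2023 and an API key in the environment; Yegge himself simply typed the prompt into the ChatGPT web interface.

```python
# Sketch of reproducing Yegge's experiment via the API (openai-python ~0.27,
# early 2023); an assumption about setup, not what Yegge actually did.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed to be set

prompt = (
    "Write an interactive Emacs Lisp function that pops to a new buffer, "
    "prints out the first paragraph of A Tale of Two Cities, and changes "
    "all words with 'i' in them red. Just print the code without explanation."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # the generated Emacs Lisp
```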

Ponder the significance of this for a moment, as tech investors such as Paul Kedrosky are already doing. He likens tools such as ChatGPT to “a missile aimed, however unintentionally, directly at software production itself. Sure, chat AIs can perform swimmingly at producing undergraduate essays, or spinning up marketing materials and blog posts (like we need more of either), but such technologies are terrific to the point of dark magic at producing, debugging, and accelerating software production quickly and almost costlessly.”

Since, ultimately, our networked world runs on software, suddenly having tools that can write it – and that could be available to anyone, not just geeks – marks an important moment. Programmers have always seemed like magicians: they can make an inanimate object do something useful. I once wrote that they must sometimes feel like Napoleon – who was able to order legions, at a stroke, to do his bidding. After all, computers – like troops – obey orders. But to become masters of their virtual universe, programmers had to possess arcane knowledge, and learn specialist languages to converse with their electronic servants. For most people, that was a pretty high threshold to cross. ChatGPT and its ilk have just lowered it.

Monday, April 03, 2023

Transformers: Robots In Disguise?

quantamagazine |  Recent investigations like the one Dyer worked on have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors, including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

“We don’t know how to tell in which sort of application is the capability of harm going to arise, either smoothly or unpredictably,” said Deep Ganguli, a computer scientist at the AI startup Anthropic.

The Emergence of Emergence

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

Language models have been around for decades. Until about five years ago, the most powerful were based on what’s called a recurrent neural network. These essentially take a string of text and predict what the next word will be. What makes a model “recurrent” is that it learns from its own output: Its predictions feed back into the network to improve future performance.
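
To make the recurrence concrete, here is a minimal NumPy sketch, assumed rather than taken from any real system, of the loop described above: words are consumed one at a time, and a hidden state that feeds back into itself carries context into each next-word prediction. All sizes, weights, and token ids are toy values.

```python
# Minimal sketch of a recurrent next-word predictor. Toy sizes and random
# weights throughout; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 50, 16                      # toy vocabulary and state sizes
Wxh = rng.normal(0, 0.1, (hidden, vocab))   # input -> hidden weights
Whh = rng.normal(0, 0.1, (hidden, hidden))  # hidden -> hidden: the recurrence
Why = rng.normal(0, 0.1, (vocab, hidden))   # hidden -> next-word scores

def step(token_id, h):
    """Consume one token, update the state, and score the next token."""
    x = np.zeros(vocab)
    x[token_id] = 1.0                               # one-hot encode the token
    h = np.tanh(Wxh @ x + Whh @ h)                  # old state feeds back in
    scores = Why @ h
    probs = np.exp(scores - scores.max())
    return probs / probs.sum(), h                   # softmax over vocabulary

h = np.zeros(hidden)
for token in [3, 14, 7]:        # a toy "sentence" of word ids
    probs, h = step(token, h)   # strictly one word at a time
print(int(probs.argmax()))      # the model's guess for the next word
```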

In 2017, researchers at Google Brain introduced a new kind of architecture called a transformer. While a recurrent network analyzes a sentence word by word, the transformer processes all the words at the same time. This means transformers can process big bodies of text in parallel.

Transformers enabled a rapid scaling up of the complexity of language models by increasing the number of parameters in the model, as well as other factors. The parameters can be thought of as connections between words, and models improve by adjusting these connections as they churn through text during training. The more parameters in a model, the more accurately it can make connections, and the closer it comes to passably mimicking human language. As expected, a 2020 analysis by OpenAI researchers found that models improve in accuracy and ability as they scale up.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
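
The article doesn't show what code the simulated "terminal" was asked to run; as a stand-in, "simple mathematical code to compute the first 10 prime numbers" could be as little as the following trial-division sketch.

```python
# A plausible Python stand-in for the task described above, not the
# engineer's actual prompt or output.
def first_primes(n: int) -> list[int]:
    """Return the first n primes by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):  # no earlier prime divides it
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```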

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

He wasn’t alone. A raft of researchers, detecting the first hints that LLMs could reach beyond the constraints of their training data, are striving for a better grasp of what emergence looks like and how it happens. The first step was to thoroughly document it.

Transformers: More Than Meets The Eye?

quantamagazine  |  Imagine going to your local hardware store and seeing a new kind of hammer on the shelf. You’ve heard about this hammer: It pounds faster and more accurately than others, and in the last few years it’s rendered many other hammers obsolete, at least for most uses. And there’s more! With a few tweaks — an attachment here, a twist there — the tool changes into a saw that can cut at least as fast and as accurately as any other option out there. In fact, some experts at the frontiers of tool development say this hammer might just herald the convergence of all tools into a single device.

A similar story is playing out among the tools of artificial intelligence. That versatile new hammer is a kind of artificial neural network — a network of nodes that “learn” how to do some task by training on existing data — called a transformer. It was originally designed to handle language, but has recently begun impacting other AI domains.

The transformer first appeared in 2017 in a paper that cryptically declared that “Attention Is All You Need.” In other approaches to AI, the system would first focus on local patches of input data and then build up to the whole. In a language model, for example, nearby words would first get grouped together. The transformer, by contrast, runs processes so that every element in the input data connects, or pays attention, to every other element. Researchers refer to this as “self-attention.” This means that as soon as it starts training, the transformer can see traces of the entire data set.
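
Here is a minimal NumPy sketch of the self-attention computation just described: every token scores its relevance against every other token, and each output vector is a relevance-weighted mix of the whole sequence. The sizes and random weights are toy assumptions; real transformers add learned multi-head projections, positional information, and feed-forward layers on top of this core.

```python
# Minimal self-attention sketch; toy dimensions and random weights,
# illustrative only.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d) token embeddings -> (seq_len, d) attended outputs."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # every token vs. every token
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                              # mix values by attention

rng = np.random.default_rng(0)
seq_len, d = 4, 8                                   # four tokens, toy width
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)          # (4, 8): one vector per token
```

Because the score matrix pairs every position with every other in one matrix multiplication, the whole sequence is processed at once rather than word by word, which is what lets transformers train on big bodies of text in parallel.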

Before transformers came along, progress on AI language tasks largely lagged behind developments in other areas. “In this deep learning revolution that happened in the past 10 years or so, natural language processing was sort of a latecomer,” said the computer scientist Anna Rumshisky of the University of Massachusetts, Lowell. “So NLP was, in a sense, behind computer vision. Transformers changed that.”

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. This led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates consistent new text to an unsettling degree.

The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more.

“Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich.

Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence. “I think the transformer is so popular because it implies the potential to become universal,” said the computer scientist Atlas Wang of the University of Texas, Austin. “We have good reason to want to try transformers for the entire spectrum” of AI tasks.

Sunday, April 02, 2023

Unaccountable Algorithmic Tyranny

alt-market |  In this article I want to stress the issue of AI governance and how it might be made to appeal to the masses. In order to achieve the dystopian future the globalists want, they still have to convince a large percentage of the population to applaud it and embrace it.

The comfort of having a system that makes difficult decisions for us is an obvious factor, as mentioned above. But, AI governance is not just about removing choice, it’s also about removing the information we might need to be educated enough to make choices. We saw this recently with the covid pandemic restrictions and the collusion between governments, corporate media and social media. Algorithms were widely used by web media conglomerates from Facebook to YouTube to disrupt the flow of information that might run contrary to the official narrative.

In some cases the censorship targeted people merely asking pertinent questions or fielding alternative theories. In other cases, the censorship outright targeted provably factual data that was contrary to government policies. A multitude of government claims on covid origins, masking, lockdowns and vaccines have been proven false over the past few years, and yet millions of people still blindly believe the original narrative because they were bombarded with it nonstop by the algorithms. They were never exposed to the conflicting information, so they were never able to come to their own conclusions.

Luckily, unlike bots, human intelligence is filled with anomalies – people who act on intuition and skepticism in order to question preconceived or fabricated assertions. The lack of contrary information immediately causes suspicion for many, and this is what authoritarian governments often refuse to grasp.

The great promise globalists hold up in the name of AI is the idea of a purely objective state; a social and governmental system without biases and without emotional content. It’s the notion that society can be run by machine thinking in order to “save human beings from themselves” and their own frailties. It is a false promise, because there will never be such a thing as objective AI, nor any AI that understands the complexities of human psychological development.

Furthermore, the globalist dream of AI is driven not by adventure, but by fear. It’s about the fear of responsibility, the fear of merit, the fear of inferiority, the fear of struggle and the fear of freedom. The greatest accomplishments of mankind are admirable because they are achieved with emotional content, not in spite of it. It is that content that inspires us to delve into the unknown and overcome our fears. AI governance and an AI integrated society would be nothing more than a desperate action to deny the necessity of struggle and the will to overcome.

Globalists are more than happy to offer a way out of the struggle, and they will do it with AI as the face of their benevolence. All you will have to do is trade your freedoms and perhaps your soul in exchange for never having to face the sheer terror of your own quiet thoughts. Some people, sadly, believe this is a fair trade.

The elites will present AI as the great adjudicator, the pure and logical intercessor of the correct path; not just for nations and for populations at large but for each individual life. With the algorithm falsely accepted as infallible and purely unbiased, the elites can then rule the world through their faceless creation without any oversight – For they can then claim that it’s not them making decisions, it’s the AI.  How does one question or even punish an AI for being wrong, or causing disaster? And, if the AI happens to make all its decisions in favor of the globalist agenda, well, that will be treated as merely coincidental.

Disingenuously Shaping The Narrative Around Large Language Model Computing

vice  |  More than 30,000 people—including Tesla’s Elon Musk, Apple co-founder Steve Wozniak, politician Andrew Yang, and a few leading AI researchers—have signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. 

The letter immediately caused a furor as signatories walked back their positions, some notable signatories turned out to be fake, and many more AI researchers and experts vocally disagreed with the letter’s proposal and approach. 

The letter was penned by the Future of Life Institute, a nonprofit organization with the stated mission to “reduce global catastrophic and existential risk from powerful technologies.” It is also host to some of the biggest proponents of longtermism, a kind of secular religion boosted by many members of the Silicon Valley tech elite since it preaches seeking massive wealth to direct towards problems facing humans in the far future. One notable recent adherent to this idea is disgraced FTX CEO Sam Bankman-Fried. 

Specifically, the institute focuses on mitigating long-term "existential" risks to humanity such as superintelligent AI. Musk, who has expressed longtermist beliefs, donated $10 million to the institute in 2015.  

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter states. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter clarifies, referring to the arms race between big tech companies like Microsoft and Google, who in the past year have released a number of new AI products. 

Other notable signatories include Stability AI CEO Emad Mostaque, author and historian Yuval Noah Harari, and Pinterest co-founder Evan Sharp. There are also a number of signatories who work for the companies participating in the AI arms race, including Google DeepMind and Microsoft. All signatories were confirmed to Motherboard by the Future of Life Institute to be “independently verified through direct communication.” No one from OpenAI, which develops and commercializes the GPT series of AI models, has signed the letter.

Despite this verification process, the letter started out with a number of false signatories, including people impersonating OpenAI CEO Sam Altman, Chinese president Xi Jinping, and Chief AI Scientist at Meta, Yann LeCun, before the institute cleaned the list up and paused the appearance of signatures on the letter as they verify each one. 

The letter has been scrutinized by many AI researchers and even its own signatories since it was published on Tuesday. Gary Marcus, a professor of psychology and neural science at New York University, told Reuters, “the letter isn’t perfect, but the spirit is right.” Similarly, Emad Mostaque, the CEO of Stability.AI, who has pitted his firm against OpenAI as a truly "open" AI company, tweeted, “So yeah I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter.”

AI experts criticize the letter as furthering the “AI hype” cycle, rather than listing or calling for concrete action on harms that exist today. Some argued that it promotes a longtermist perspective, which is a worldview that has been criticized as harmful and anti-democratic because it valorizes the uber-wealthy and allows for morally dubious actions under certain justifications.

Emily M. Bender, a Professor in the Department of Linguistics at the University of Washington and the co-author of the first paper the letter cites, tweeted that this open letter is “dripping with #Aihype” and that the letter misuses her research. The letter says, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research,” but Bender counters that her research specifically points to current large language models and their use within oppressive systems—which is much more concrete and pressing than hypothetical future AI. 

“We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about ‘too powerful AI’,” she tweeted. “Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).” 

“It's essentially misdirection: bringing everyone's attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a Research Scientist and Climate Lead at Hugging Face, told Motherboard.


Saturday, April 01, 2023

Don't Sleep On That Tablet Anti-Disinformation Grand Opus

racket  |  Years ago, when I first began to have doubts about the Trump-Russia story, I struggled to come up with a word to articulate my suspicions.

If the story was wrong, and Trump wasn’t a Russian spy, there wasn’t a word for what was being perpetrated. This was a system-wide effort to re-frame reality itself, which was too intellectually ambitious to fit in a word like “hoax,” but also probably not against any one law. New language would have to be invented just to define the wrongdoing, which not only meant whatever this was would likely go unpunished, but that it could be years before the public was ready to talk about it.

Around that same time, writer Jacob Siegel — a former army infantry and intelligence officer who edits Tablet’s afternoon digest, The Scroll — was beginning the job of putting key concepts on paper. As far back as 2019, he sketched out the core ideas for a sprawling, illuminating 13,000-word piece that just came out this week. Called “A Guide to Understanding the Hoax of the Century: Thirteen ways of looking at disinformation,” Siegel’s Tablet article is the enterprise effort at describing the whole anti-disinformation elephant I’ve been hoping for years someone in journalism would take on.

It will escape no one’s notice that Siegel’s lede recounts the Hamilton 68 story from the Twitter Files. Siegel says the internal dialogues of Twitter executives about the infamous Russia-tracking “dashboard” helped him frame the piece he’d been working on for so long. Which is great, I’m glad about that, but he goes far deeper into the topic than I have, and in a way that has a real chance to be accessible to all political audiences.

Siegel threads together all the disparate strands of a very complex story, in which the sheer quantity of themes is daunting: the roots in counter-terrorism strategy, Russiagate as a first great test case, the rise of a public-private “counter-disinformation complex” nurturing an “NGO Borg,” the importance of Trump and “domestic extremism” as organizing targets, the development of a new uniparty politics anointing itself “protector” of things like elections, amid many other things.

He concludes with an escalating string of anxiety-provoking propositions. One is that our first windows into this new censorship system, like Stanford’s Election Integrity Partnership, might also be our last, as AI and machine learning appear ready to step in to do the job at scale. The National Science Foundation just announced it was “building a set of use cases” to enable ChatGPT to “further automate” the propaganda mechanism, as Siegel puts it. The messy process people like me got to see, just barely, in the outlines of Twitter emails made public by a one-in-a-million lucky strike, may not appear in recorded human conversations going forward. “Future battles fought through AI technologies,” says Siegel, “will be harder to see.”

More unnerving is the portion near the end describing how seemingly smart people are fast constructing an ideology of mass surrender. Siegel recounts the horrible New York Times Magazine article (how did I forget it?) written by Yale law graduate Emily Bazelon just before the 2020 election, whose URL is titled “The Problem of Free Speech in an Age of Disinformation.” Shorter Bazelon could have been Fox Nazis Censorship Derp: the article the Times really ran was insanely long and ended with flourishes like, “It’s time to ask whether the American way of protecting free speech is actually keeping us free.”

Both the actors in the Twitter Files and the multitudinous papers produced by groups like the Aspen Institute and Harvard’s Shorenstein Center are perpetually concerned with re-thinking the “problem” of the First Amendment, which of course is not popularly thought of as a problem. It’s notable that the Anti-Disinformation machine, a clear sequel to the Military-Industrial Complex, doesn’t trumpet the virtues of the “free world” but rather the “rules-based international order,” within which (as Siegel points out) people like former Labor Secretary Robert Reich talk about digital deletion as “necessary to protect American democracy.” This idea of pruning fingers off democracy to save it is increasingly popular; we await the arrival of the Jerzy Kosinski character who’ll propound this political gardening metaphor to the smart set.

Biden Administration Leads Massive Speech Censorship Operation

foxnews  |  EXCLUSIVE: The Biden administration has led "the largest speech censorship operation in recent history" by working with social media companies to suppress and censor information later acknowledged as truthful, former Missouri attorney general Eric Schmitt will tell the House Weaponization Committee Thursday.

Schmitt, now a Republican senator from Missouri, is expected to testify alongside Louisiana Attorney General Jeff Landry and former Missouri deputy attorney general for special litigation, D. John Sauer.

The three witnesses will discuss the findings of their federal government censorship lawsuit, Louisiana and Missouri v. Biden et al—which they filed in May 2022 and which they describe as "the most important free speech lawsuit of this generation."

The testimony comes after Missouri and Louisiana filed a lawsuit against the Biden administration, alleging that President Biden and members of his team "colluded with social media giants Meta, Twitter, and YouTube to censor free speech in the name of combating so-called ‘disinformation’ and ‘misinformation.’"

The lawsuit alleges that coordination led to the suppression and censorship of truthful information "on a scale never before seen" using examples of the COVID lab-leak theory, information about COVID vaccinations, Hunter Biden’s laptop, and more.

The lawsuit is currently in discovery, and Thursday’s hearing is expected to feature witness testimony that will detail evidence collected to show the Biden administration has "coerced social media companies to censor disfavored speech."

"Discovery obtained by Missouri and Louisiana demonstrated that the Biden administration’s coordination with social media companies and collusion with non-governmental organizations to censor speech was far more pervasive and destructive than ever known," Schmitt will testify, according to prepared testimony obtained by Fox News Digital.

 

 

Friday, March 31, 2023

What Is The Restrict Act And Why Is It Bad?

A short booster thread on this issue 👇

The RESTRICT ACT did not surface in a vacuum. It was preceded by Biden groundwork that is much deeper.

2) The “TikTok ban” legislation (SB686), which is a fraudulent auspice for total internet control by the intelligence community, comes from within bipartisan legislation spearheaded by the aligned interests of Senator Warner, the SSCI and DHS.

3) None of this is accidental, and the legislative branch is walking into the creation of an online control mechanism that has nothing whatsoever to do with banning TikTok.

5) If you have followed the history of how the Fourth Branch of Government has been created, you will immediately recognize the intent of this new framework.
6) The “National Cybersecurity Strategy” aligns with, supports, and works in concert with a total U.S. surveillance system, where definitions of information are then applied to “cybersecurity” and communication vectors.
7) This policy is both a surveillance system and an information filtration prism where the government will decide what is information, disinformation, misinformation and malinformation, then act upon it.
8) Now put the March 2nd announcement, the executive branch fiat, together with Senate Bill 686 “The Restrict Act” also known as the bipartisan bill to empower the executive branch to shut down TikTok.

10) /END

Tablet Calls The American "Disinformation Regime" The Hoax Of The Century

tablet  |  It was not enough for a few powerful agencies to combat disinformation. The strategy of national mobilization called for “not only the whole-of-government, but also whole-of-society” approach, according to a document released by the GEC in 2018. “To counter propaganda and disinformation,” the agency stated, “will require leveraging expertise from across government, tech and marketing sectors, academia, and NGOs.”

This is how the government-created “war against disinformation” became the great moral crusade of its time. CIA officers at Langley came to share a cause with hip young journalists in Brooklyn, progressive nonprofits in D.C., George Soros-funded think tanks in Prague, racial equity consultants, private equity consultants, tech company staffers in Silicon Valley, Ivy League researchers, and failed British royals. Never Trump Republicans joined forces with the Democratic National Committee, which declared online disinformation “a whole-of-society problem that requires a whole-of-society response.”

Even trenchant critics of the phenomenon—including Taibbi and the Columbia Journalism Review’s Jeff Gerth, who recently published a dissection of the press’s role in promoting false Trump-Russia collusion claims—have focused on the media’s failures, a framing largely shared by conservative publications, which treat disinformation as an issue of partisan censorship bias. But while there’s no question that the media has utterly disgraced itself, it’s also a convenient fall guy—by far the weakest player in the counter-disinformation complex. The American press, once the guardian of democracy, was hollowed out to the point that it could be worn like a hand puppet by the U.S. security agencies and party operatives.

It would be nice to call what has taken place a tragedy, but an audience is meant to learn something from a tragedy. As a nation, America not only has learned nothing, it has been deliberately prevented from learning anything while being made to chase after shadows. This is not because Americans are stupid; it’s because what has taken place is not a tragedy but something closer to a crime. Disinformation is both the name of the crime and the means of covering it up; a weapon that doubles as a disguise.

The crime is the information war itself, which was launched under false pretenses and by its nature destroys the essential boundaries between the public and private and between the foreign and domestic, on which peace and democracy depend. By conflating the anti-establishment politics of domestic populists with acts of war by foreign enemies, it justified turning weapons of war against American citizens. It turned the public arenas where social and political life take place into surveillance traps and targets for mass psychological operations. The crime is the routine violation of Americans’ rights by unelected officials who secretly control what individuals can think and say.

What we are seeing now, in the revelations exposing the inner workings of the state-corporate censorship regime, is only the end of the beginning. The United States is still in the earliest stages of a mass mobilization that aims to harness every sector of society under a singular technocratic rule. The mobilization, which began as a response to the supposedly urgent menace of Russian interference, now evolves into a regime of total information control that has arrogated to itself the mission of eradicating abstract dangers such as error, injustice, and harm—a goal worthy only of leaders who believe themselves to be infallible, or comic-book supervillains.

The first phase of the information war was marked by distinctively human displays of incompetence and brute-force intimidation. But the next stage, already underway, is being carried out through both scalable processes of artificial intelligence and algorithmic pre-censorship that are invisibly encoded into the infrastructure of the internet, where they can alter the perceptions of billions of people.

Something monstrous is taking shape in America. Formally, it exhibits the synergy of state and corporate power in service of a tribal zeal that is the hallmark of fascism. Yet anyone who spends time in America and is not a brainwashed zealot can tell that it is not a fascist country. What is coming into being is a new form of government and social organization that is as different from mid-twentieth century liberal democracy as the early American republic was from the British monarchism that it grew out of and eventually supplanted. A state organized on the principle that it exists to protect the sovereign rights of individuals is being replaced by a digital leviathan that wields power through opaque algorithms and the manipulation of digital swarms. It resembles the Chinese system of social credit and one-party state control, and yet that, too, misses the distinctively American and providential character of the control system. In the time we lose trying to name it, the thing itself may disappear back into the bureaucratic shadows, covering up any trace of it with automated deletions from the top-secret data centers of Amazon Web Services, “the trusted cloud for government.”

When the blackbird flew out of sight,
It marked the edge
Of one of many circles.

In a technical or structural sense, the censorship regime’s aim is not to censor or to oppress, but to rule. That’s why the authorities can never be labeled as guilty of disinformation. Not when they lied about Hunter Biden’s laptops, not when they claimed that the lab leak was a racist conspiracy, not when they said that vaccines stopped transmission of the novel coronavirus. Disinformation, now and for all time, is whatever they say it is. That is not a sign that the concept is being misused or corrupted; it is the precise functioning of a totalitarian system.

If the underlying philosophy of the war against disinformation can be expressed in a single claim, it is this: You cannot be trusted with your own mind. What follows is an attempt to see how this philosophy has manifested in reality. It approaches the subject of disinformation from 13 angles—like the “Thirteen Ways of Looking at a Blackbird,” Wallace Stevens’ 1917 poem—with the aim that the composite of these partial views will provide a useful impression of disinformation’s true shape and ultimate design.

Less than three weeks before the 2020 presidential election, The New York Times published an important article titled “The First Amendment in the age of disinformation.” The essay’s author, Times staff writer and Yale Law School graduate Emily Bazelon, argued that the United States was “in the midst of an information crisis caused by the spread of viral disinformation” that she compares to the “catastrophic” health effects of the novel coronavirus. She quotes from a book by Yale philosopher Jason Stanley and linguist David Beaver: “Free speech threatens democracy as much as it also provides for its flourishing.”

So the problem of disinformation is also a problem of democracy itself—specifically, that there’s too much of it. To save liberal democracy, the experts prescribed two critical steps: America must become less free and less democratic. This necessary evolution will mean shutting out the voices of certain rabble-rousers in the online crowd who have forfeited the privilege of speaking freely. It will require following the wisdom of disinformation experts and outgrowing our parochial attachment to the Bill of Rights. This view may be jarring to people who are still attached to the American heritage of liberty and self-government, but it has become the official policy of the country’s ruling party and much of the American intelligentsia.

Former Clinton Labor Secretary Robert Reich responded to the news that Elon Musk was purchasing Twitter by declaring that preserving free speech online was “Musk’s dream. And Trump’s. And Putin’s. And the dream of every dictator, strongman, demagogue, and modern-day robber baron on Earth. For the rest of us, it would be a brave new nightmare.” According to Reich, censorship is “necessary to protect American democracy.”

To a ruling class that had already grown tired of democracy’s demand that freedom be granted to its subjects, disinformation provided a regulatory framework to replace the U.S. Constitution. By aiming at the impossible, the elimination of all error and deviation from party orthodoxy, the ruling class ensures that it will always be able to point to a looming threat from extremists—a threat that justifies its own iron grip on power.

A siren song calls on those of us alive at the dawn of the digital age to submit to the authority of machines that promise to optimize our lives and make us safer. Faced with the apocalyptic threat of the “infodemic,” we are led to believe that only superintelligent algorithms can protect us from the crushingly inhuman scale of the digital information assault. The old human arts of conversation, disagreement, and irony, on which democracy and much else depend, are subjected to a withering machinery of military-grade surveillance—surveillance that nothing can withstand and that aims to make us fearful of our capacity for reason.

 

Why Does The US Have 30 Biolabs Inside Ukraine Controlled By The US Department Of Defense?

WaPo  |  The Kremlin’s disinformation casts the United States — and Ukraine — as villains for creating germ warfare laboratories, giving Mr. Putin another pretext for a war that lacks all justification. The disinformation undermines the biological weapons treaty, showing that Mr. Putin has little regard for maintaining the integrity of this international agreement. The disinformation attempts to divert attention from Russia’s barbaric onslaught against civilians in Ukraine. In 2018, the Kremlin may have been seeking to shift attention from the attempted assassination of former double agent Sergei Skripal in Britain, or from the Robert S. Mueller III investigation that year of Russian meddling in the U.S. presidential campaign.

The biological laboratories are just one example of Russia’s wider disinformation campaigns. Data shared by Facebook shows Russians “built manipulative Black Lives Matter and Blue Lives Matter pages, created pro-Muslim and pro-Christian groups, and let them expand via growth from real users,” says author Samuel Woolley in “The Reality Game.” He adds, “The goal was to divide and conquer as much as it was to dupe and convince.” During the pandemic, Russia similarly attempted to aggravate existing tensions over public health measures in the United States and Europe. It has also spread lies about the use of chemical weapons, undermining the treaty that prohibits them and the organization that enforces it. In the Ukraine war, Russia has fired off broadsides of disinformation, such as claiming the victims of the Mariupol massacre were “crisis actors.” Russia used disinformation to mask its responsibility for the shoot-down of the Malaysia Airlines flight MH-17 over Ukraine in 2014.

The disinformation over Ukraine, repeated widely in the Russian media, plays well with social groups that support Putin: the poor, those living in rural areas and small towns, and those being asked to send young men to the front. Mr. Putin so tightly controls the news media that it is difficult for alternative news and messages to break through.

Disinformation is a venom. It does not need to flip everyone’s, or even most people’s, views. Its methods are to creep into the lifeblood, create uncertainty, enhance established fears and sow confusion.

The best way to strike back is with the facts, and fast. Thomas Kent, the former president of Radio Free Europe/Radio Liberty, has pointed out that the first hours are critical in such an asymmetrical conflict: Spreaders of disinformation push out lies without worrying about their integrity, while governments and the news media try to verify everything, and take more time to do so. Mr. Kent suggests speeding the release of information that is highly likely to be true, rather than waiting. For example, it took 13 days for the British government to reach a formal conclusion that Russia was behind the poisoning of Mr. Skripal, but within 48 hours of the attack, then-Foreign Secretary Boris Johnson told Parliament that it appeared to be Russia, which helped tip the balance in the press and public opinion.

In Ukraine, when Russia was on the threshold of invasion, government and civil society organizations rapidly coordinated an informal “early warning system” to detect and identify Russia’s false claims and narratives. It was successful when the war began, especially with use of the Telegram app. In a short time, Telegram use leapt from 12 percent adoption to 65 percent, according to those involved in the effort.

Also in Ukraine, more than 20 organizations, along with the National Democratic Institute in Washington, had created a disinformation debunking hub in 2019 that has played a key role in the battle against the onslaught of lies. A recent report from the International Forum for Democratic Studies at the National Endowment for Democracy identified three major efforts that paid off for Ukraine in the fight against Russian disinformation as war began: “deep preparation” (since Russia was recycling old claims from 2014, they were ready); active and rapid cooperation of civil society groups; and use of technology, such as artificial intelligence and machine learning, to help sift through the torrents of Russian disinformation and rapidly spot malign narratives.

Governments can’t do this on their own. Free societies have an advantage that autocrats don’t: authentic civil society that can be agile and innovative. In the run-up to the Ukraine war, all across Central and Eastern Europe, civil society groups were sharpening techniques for spotting and countering Russian disinformation.

Plain old media literacy among readers and viewers — knowing how to discriminate among sources, for example — is also essential.

Open societies are vulnerable because they are open. The asymmetries in favor of malign use of information are sizable. Democracies must find a way to adapt. The dark actors morph constantly, so the response needs to be systematic and resilient.

 

Thursday, March 30, 2023

How Concerned Will The Anglo Establishment Become About Democracy In Israel?

korybko  |  America believes that it must, at all costs, do whatever’s necessary to prevent the Israeli state from exercising its sovereign right under Bibi’s restored leadership to balance between the US-led West’s Golden Billion and the Sino-Russo Entente instead of decisively taking the former’s side against the latter. Most immediately, its “deep state” wants Israel to arm Kiev, which Bibi himself warned earlier this month could abruptly catalyze a crisis with Russia in Syria.

It's precisely this outcome that the US wants to have happen because it could open a so-called “second front” in its Eurasian-wide “containment” campaign against Russia after the most recent efforts to do so in Georgia and Moldova have thus far failed. Furthermore, a major crisis in West Asia could impede the region’s accelerated rise as an independent pole of influence in the emerging Multipolar World Order, the scenario of which became viable after the Chinese-mediated Iranian-Saudi rapprochement.

That aforementioned development coupled with Bibi’s envisaged multi-alignment between the US-led West’s Golden Billion and the Sino-Russo Entente could lead to the near-total loss of American influence over West Asia, especially if Israel starts de-dollarizing its trade like Saudi Arabia is soon expected to do. Simply put, the entire region’s future role in the ongoing global systemic transition is at stake, thus explaining the grand strategic significance of Israel’s US-exacerbated crisis.

The socio-political (soft security) dynamics aren’t in Bibi’s favor, which could lead to him either backing down or being overthrown, with either of those outcomes raising the chances that Israel submits to being the US’ New Cold War vassal instead of continuing its trajectory as an independent player. If the military (hard security) dynamics become more difficult such as in the event of a tacitly US-approved Intifada, then his removal could be a fait accompli unless he succeeds in imposing a military dictatorship.

So as not to be misunderstood, the preceding scenario doesn’t imply that the Palestinian cause is illegitimate, but just that it can be exploited by the US like all others in advance of its larger interests. In any case, the situation is extremely combustible and it’s difficult to predict what’ll happen next. Nothing like this has ever happened before in Israel, neither domestically nor in terms of its ties with the US. This is literally unprecedented, especially in terms of its impact on International Relations as explained.

Elite Donor Level Conflicts Openly Waged On The National Political Stage

thehill  |   House Ways and Means Committee Chair Jason Smith (R-Mo.) has demanded the U.S. Chamber of Commerce answer questions about th...