Saturday, April 08, 2023

The U.S. And Canada Will Sue And Sanction Mexico For Taking Care Of Mexicans

NC  |  But all of that changed when AMLO came to power in late 2018. For the first time in 30 years Mexico had a government that was not only determined to halt the privatisation and liberalisation of Mexico’s energy market but to begin dialling it back. Allegations of corrupt practices and price gouging by Iberdrola and other energy companies became a popular talking point at AMLO’s morning press conferences. The juicy contracts began drying up. Instead, a range of obstacles began forming, from disconnections to nonrenewal of permits and fines for price gouging.

The times of plenty had come to an end. And not a moment too soon.

At the rate things were going, the CFE would be generating just 15% of Mexico’s electricity by the end of this decade, says Ángel Barreras Puga, a professor of engineering at the University of Querétaro; the rest would be generated exclusively by private, foreign companies.

“Who was going to control prices in the market? Foreign companies, with all that entails. Behind the foreign companies are their national governments. And we have seen how the US government, the US Ambassador and US legislators came to Mexico to try to pressure AMLO to change his policies. Ultimately, they are all lobbyists of private companies.”

There are few better examples of this than US Ambassador to Mexico Ken Salazar, as Kurt Hackbarth reported for Jacobin at the time of Salazar’s appointment in 2021:

Upon leaving [the US Interior Department] in 2013, Salazar went through the revolving door to work for WilmerHale, a law and lobbying firm with close ties to the Trump family, whose roster of drilling- and mining-related clients included none other than — you guessed it — BP. From his lucrative new perch in the private sector, Salazar used his clout to support the Keystone Pipeline and the Trans-Pacific Partnership (TPP), whose “investor-state” provisions would let corporations challenge environmental regulations in private tribunals; fought against ballot initiatives that would limit fracking and distance oil wells from buildings and bodies of water; opposed climate lawsuits against the fossil fuel sector; and, in a highly questionable skirting of ethics rules, provided legal counsel to the same company, Anadarko Petroleum, that benefitted on multiple occasions from his stint in government…

The fact of sending an oil and gas lobbyist to lecture Mexico on renewable energy — one, moreover, representing an administration that just opened 80 million acres for drilling in the Gulf of Mexico and is approving drilling permits on public lands at a faster rate than Trump — would be comical if it were not so revealing of the ugly underbelly of US-Mexico relations.

More to Come?

The AMLO-Iberdrola deal has raised concerns in business circles that other foreign energy companies could face a similar fate to the Spanish utility’s, as the AMLO government pushes to expand the state’s role in the energy sector. Bloomberg describes it as a warning shot for international energy companies.

“The choice of words and messages is deliberate,” said John Padilla, managing director of energy consultancy IPD Latin America, adding that such moves could be intentionally sending a warning to foreign companies amid protracted trade disputes with the USA on energy policy. “The main message for private sector investors, at least on the electricity side, is certainly not a good one.”

Mexico’s nationalist energy policies have already stoked the ire of its North American trade partners, Canada and the US, which argue that they violate the USMCA regional trade agreement by discriminating against Canadian and US companies. As Reuters reported a week ago, the Office of the United States Trade Representative (USTR) is considering making a “final offer” to Mexican negotiators to open its markets and agree to some increased oversight.

Failing that, USTR will initiate a dispute settlement against its southern neighbour. If the panel rules against Mexico and the Mexican government refuses to rectify its behaviour, Washington and Ottawa could impose billions of dollars in retaliatory tariffs on Mexican goods.

When Americans Are In Trouble - Mexico Is Always Quick To Help

undrr  | Shortly after Hurricane Katrina struck the southern USA, 200 Mexican troops crossed the US border outside Laredo, Texas, and made their way towards San Antonio. It was the first time a Mexican army contingent had entered Texas since the Battle of the Alamo in 1836.

In 2005, the Mexican soldiers were on a relief mission to feed tens of thousands of homeless and hungry Americans displaced by Hurricane Katrina. They stayed 20 days at the former Kelly Air Force Base in Texas, one of the first states to take in thousands of Hurricane Katrina refugees.

“We served more than 170,000 meals and distributed more than 184,000 tons of supplies including medical supplies,” recalled Colonel Ignacio Murillo Rodriguez of the Mexican Ministry of Defense SEDENA.

“We came with a big tractor trailer that we immediately converted into a huge field kitchen. At the time, thousands of hurricane survivors had moved to Texas and were living in a very precarious situation with no job and no revenues, and we were able to help them by serving meals and water and generally assisting them. It was quite an incredible experience that really made our reputation abroad. Our food trucks are very well known by now and today constitute a major element of our emergency capacities,” said Colonel Rodriguez.

Created in 1966, the Mexican plan to aid civilians in disasters, known as DN-III-E, is a series of measures implemented primarily by the Mexican Army and the Mexican Air Force, organized as a body under the name Support Force for Disaster. It operates mostly, but not exclusively, in disaster emergencies occurring in Mexico.

“We have now trained many troops in Spain, Belize, Venezuela, and Ecuador and our force has acquired a very established reputation in terms of capacity building,” says Captain Alejandro Velasquez Valdicisco.

The DN-III-E has three main roles: prevention, protection and recovery. It is part of the Federal Response Master Plan dealing with major contingencies and emergencies in Mexico.

The prevention plan, better known as the MX Plan, coordinates and articulates the response of all national agencies when an emergency happens. It embraces the Navy Plan and the Civilian Population Support Plan of the Federal Police, as well as the plans of government agencies and public entities such as PEMEX, the Federal Electricity Commission and CONAGUA (the national water agency).

"We have the responsibility to rescue people, to manage shelters, to make recommendations to populations at risk and to guarantee the safety and security of affected disaster areas. Every soldier or person working for the Mexican army receives a special training to protect civilians. We actually do not have a special unit to deal with emergency situations as armed forces are all trained to protect civilians when disasters happen,” said Captain Alejandro Velasquez Valdicisco.

Mexicans remember the role played by the Ministry of Defense when the Colima volcano erupted in October 2016, forcing hundreds of people to evacuate. Soldiers worked long hours with Civil Protection and were able to relocate hundreds of people at risk.

The same happened during the 2007 floods that affected more than 1 million people in the south-eastern Mexican state of Tabasco. More than 13,000 soldiers were deployed in the flooded areas to help evacuate populations from 13 municipalities.

The Ministry of Defense is also involved in the surveillance of the Popocatépetl volcano and plays a direct early warning role to alert and protect the main communities of Puebla, Morelos, State of México, Tlaxcala and Mexico City when volcano activities increase.



Friday, April 07, 2023

Mexico Continues Nationalizing Key Industries Despite U.S. Objections

qz  |  With AMLO's purchase of 13 Spanish-owned power plants, the majority of Mexico's electricity production is now state-controlled.

The Mexican government agreed to purchase 13 power plants from the Spanish energy company Iberdrola for $6 billion on Tuesday (April 4), giving its state-owned power company, the Comisión Federal de Electricidad (CFE), majority control over the country’s electricity market.

Mexican president Andrés Manuel López Obrador (AMLO) called the decision part of a “new nationalization” of some of the country’s major industries, including mineral and oil production, according to Reuters.

The acquisition of the power plants will give CFE control of more than 56% of Mexico’s total production—up from approximately 40%, and surpassing AMLO’s previously stated goal of 54%.

The US and Canada have strongly opposed AMLO’s actions, and have threatened a trade war if Mexico continues to roll back access for international corporations in Mexico’s power and oil markets.

Iberdrola said the power plants would be taken over by CFE within five months as it looks to reduce its operations in Mexican energy markets. The company’s CEO, Ignacio Galan, said that the deal was a win-win.

“That energy policy has moved us to look for a situation that’s good for the people of Mexico, and at the same time, that complies with the interests of our shareholders,” Galan said after a joint appearance with AMLO announcing the deal.

AMLO has repeatedly compared Iberdrola’s power over Mexican resources to the Spanish conquistadors of the 16th century, even threatening to pause diplomatic relations with Spain over perceived neo-colonial actions by foreign energy firms.

Less than a month ago, more than 500,000 people flooded Mexico City to commemorate the 85th anniversary of the nationalization of the oil industry by president Lázaro Cárdenas del Río in the aftermath of the Mexican Revolution.

AMLO addressed the crowd, promising to carry on Cárdenas’s legacy, specifically highlighting his decision to nationalize the country’s energy and mining sectors, including Mexico’s burgeoning lithium reserves in the Sonora desert.

“Mexico is an independent and free country, not a colony or a protectorate of the United States,” AMLO said in a forceful rebuke of American influence in the country’s economy. “Cooperation? Yes. Submission? No. Long live the oil expropriation.”


Why Doesn't Mexico Have A Fentanyl Problem?

theguardian  | Mexico’s president has written to his Chinese counterpart, Xi Jinping, urging him to help control shipments of fentanyl, while also complaining of “rude” US pressure to curb the drug trade.

President Andrés Manuel López Obrador has previously said that fentanyl is the US’s problem and is caused by “a lack of hugs” in US families. On Tuesday he read out the letter to Xi dated 22 March in which he defended efforts to curb supply of the deadly drug, while rounding on US critics.

López Obrador complained about calls in the US to designate Mexican drug gangs as terrorist organisations. Some Republicans have said they favour using the US military to crack down on Mexican cartels.

“Unjustly, they are blaming us for problems that in large measure have to do with their loss of values, their welfare crisis,” López Obrador wrote to Xi in the letter.

“These positions are in themselves a lack of respect and a threat to our sovereignty, and moreover they are based on an absurd, manipulative, propagandistic and demagogic attitude.”

Only after several paragraphs of venting did López Obrador bring up China’s exports of fentanyl precursors, asking Xi to help stop shipments of the chemicals that Mexican cartels import from China.

“I write to you, President Xi Jinping, not to ask your help on these rude threats, but to ask you for humanitarian reasons to help us by controlling the shipments of fentanyl,” the Mexican president wrote.

China has taken some steps to limit fentanyl exports, but mislabelled or harder-to-detect precursor chemicals continue to pour out of Chinese factories.

It was not immediately clear if Xi had received the letter or if he had responded to it. López Obrador has a history of writing confrontational letters to world leaders without getting a response.

López Obrador has angrily denied that fentanyl is produced in Mexico. However, his own administration has acknowledged finding dozens of labs where it is produced, mainly in the northern state of Sinaloa.

Thursday, April 06, 2023

Valodya Talm'Wit His People About Educating The Russian People

kremlin.ru  |   Another basic area is the training of qualified engineers, technicians and workers. We have been short of these people for many years and we need to make cardinal changes and achieve tangible results in this respect. The goals facing the industry and the economy as a whole will not be achieved by themselves. They are achieved by the people, the specialists working at the companies.

By and large, we have determined the areas for developing vocational education. We must update academic programmes and the material, technical and laboratory facilities of universities, colleges, technical and vocational schools. I have just discussed this with Mr Levitin. Obviously, we must double-check their departmental affiliation. We need to find out whether everything meets the latest requirements and if the regions are able to run college education effectively in certain areas. Possibly, we should consider a vertical organisational structure for this – in the framework of certain production sectors – as we did in the past.

Industry badly needs highly qualified workers now. They study at secondary special education institutions, which are the responsibility of the regions, as I have said. I think we should return to the discussion of departmental affiliation. We have already developed good practices in this respect. I would like to ask the regional governors to share their experience, monitor these issues and resolve them in close contact with the relevant departments and ministries.

I know that at yesterday’s seminar you discussed in detail, with Government representatives, the measures I mentioned and the regional governors’ initiatives, and mapped out specific proposals and steps. Let us analyse all of these again. I would like to ask you to tell me about the course of your discussions and the proposals and ideas that you came up with in the process.

Mr Dyumin, you have the floor, please.....

.....And the fifth issue, which you also touched upon and which is, of course, the main one, is personnel. The shortage of engineering personnel arises for various reasons - we all know them.

There are not enough applicants entering technical universities. These problems begin at school. The reason lies both in the shortage of mathematics and physics teachers - a problem that can be solved - and in students’ own fear of failing these subjects at the Unified State Examination. When a student takes up mathematics and physics and starts preparing for the Unified State Exam, [he] understands perfectly well that it is easier to pass the exam in the humanities, and switches over to the humanities.

Vladimir Putin:  It happens in different ways.

Alexei Dyumin:  As a result, the number of applicants who can become engineers is significantly reduced. There are statistics on this, Vladimir Vladimirovich.

Vladimir Putin:  Clearly, yes. I understand.

Alexei Dyumin:  Even at school, students choose the humanities instead of specialized mathematics and physics. This problem must be solved comprehensively: by strengthening the training of teachers of these subjects and by motivating schoolchildren with interesting curricula.

One of the proposals discussed within the framework of the commission - this issue was raised yesterday, and I just want to draw attention to it - is to give universities that train students in technical specialties the right to accept applicants not only on the basis of the Unified State Examination, but also on entrance exams in their specialized disciplines. If not that, then some other option.

Vladimir Putin:  You can, Alexei Gennadyevich.

I met with entrepreneurs - you probably saw it - and they also said that it is easier to pass in the humanities, especially for girls, while the natural sciences and mathematics are harder. It depends on how you interest the person.

I'll tell you later - I know a girl who graduated from a higher educational institution in the humanities, also studying foreign languages. Then she became interested in other disciplines and defended a PhD thesis in higher mathematics. It depends on how the person is motivated.

Alexei Dyumin:  That is a star.

Vladimir Putin:  These "stars" are created by teachers and those people who work on a person's professional orientation.

Alexei Dyumin:  Mr Putin, the tasks you have set require not stars, but a starfall.

Vladimir Putin:  All right, all right.

Alexei Dyumin:  Mr Putin, and another important issue, which is understandable, is housing, which is relevant in every industry. An effective mechanism, which was adopted by the Government of Russia, was preferential mortgages for the IT sector.

It is proposed to consider extending this measure to industry and, of course, primarily to the rocket industry. We can talk about both federal and regional backbone enterprises. And of course, we are well aware that this would be a serious additional incentive for our young people to choose and follow the professions that are in demand and necessary for the state. I ask that the Government be instructed to study this issue and pay attention to it.

Vladimir Vladimirovich, and of course, after all that has been said - it may sound like irony, but it is not - serving at my post in a developed industrial region (chemistry, metallurgy, the defense industry), I hear engineers, designers, technologists, and even teachers at technical and flagship universities all asking for drafting to be returned to school. It is the beginning of the basics of engineering knowledge.

It is clear that there is now plenty of software that draws, rotates and creates in 3D, but this is not just my opinion - designers, young engineers and technologists across all industries say: please return drafting to school education. I would like to ask you to consider this issue at a high level and make an appropriate decision.

Vladimir Vladimirovich, thank you for your attention. The report is finished.

Vladimir Putin:  Thank you very much.

The Social Cost Of Using AI In Human Conversation

phys.org  |  People have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool, a group of Cornell researchers has found.

Postdoctoral researcher Jess Hohenstein is lead author of "Artificial Intelligence in Communication Impacts Language and Social Relationships," published in Scientific Reports.

Co-authors include Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), and Rene Kizilcec, assistant professor of information science (Cornell Bowers CIS).

Generative AI is poised to impact all aspects of society, communication and work. Every day brings new evidence of the technical capabilities of large language models (LLMs) like ChatGPT and GPT-4, but the social consequences of integrating these technologies into our daily lives are still poorly understood.

AI tools have potential to improve efficiency, but they may have negative social side effects. Hohenstein and colleagues examined how the use of AI in conversations impacts the way that people express themselves and view each other.

"Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension," Jung said. "We do not live and work in isolation, and the systems we use impact our interactions with others."

In addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.

"I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are," Hohenstein said. "This illustrates the persistent overall suspicion that people seem to have around AI."

For their first experiment, co-author Dominic DiFranzo, a former postdoctoral researcher in the Cornell Robots and Groups Lab and now an assistant professor at Lehigh University, developed a smart-reply platform the group called "Moshi" (Japanese for "hello"), patterned after the now-defunct Google "Allo" (French for "hello"), the first smart-reply platform, unveiled in 2016. Smart replies are generated from LLMs to predict plausible next responses in chat-based interactions.

A total of 219 pairs of participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.

The researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).

But participants whom their partners suspected of responding with smart replies were evaluated more negatively than those thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.

Wednesday, April 05, 2023

SSRI Antidepressants Cause Mass Shootings

amidwesterndoctor |  Much like the vaccine industry, the psychiatric industry will always try to absolve its dangerous medications of responsibility and will aggressively gaslight its victims. Despite these criticisms, there are three facts that can be consistently found throughout the literature on akathisia homicides, which Gøtzsche argues irrefutably implicate psychiatric medications as the cause of violent homicides:

• These violent events occur in people of all ages, who by all objective and subjective measures were completely normal before the act and where no precipitating factors besides the psychiatric medication could be identified.
• The events were preceded by clear symptoms of akathisia.
• The violent offenders returned to their normal personality when they came off the antidepressant.

Numerous cases where this has happened are summarized in this article from the Palm Beach Post. In most of those cases, a common trend among these spontaneous acts of violence emerges: the act was immediately preceded by a significant change in the psychiatric medications the individual was taking. In one case, shortly before committing murder, a perpetrator wrote on a blog that, while taking Prozac, he felt as if he were observing himself “from above.”

Individuals with a mutation in the gene that metabolizes psychiatric drugs are much more vulnerable to developing excessive levels of these drugs and triggering severe symptoms such as akathisia and psychosis. There is a good case to be made that individuals with this mutation are responsible for many of the horrific acts of iatrogenic (medically induced) violence that occur; however, to my knowledge, this is never considered when psychiatric medications are prescribed. Gøtzsche summarized a peer-reviewed forensic investigation of 10 cases where this happened (all but one of which involved an SSRI or an SNRI):

Note: The original version of this article (which has been revised and updated) was published a year ago, but sadly it is just as pertinent now as it was then. Each time one of these shootings happens, I watch people get up in arms over what needs to be done to stop the murder of our children, yet the elephant in the room, the clear and irrefutable evidence linking psychiatric medications to homicidal violence, is never discussed (which I believe is because these drugs generate approximately 40 billion dollars a year in sales).

Many of the stories in here are quite heart wrenching, and I humbly request that you make the effort to bear witness to these tragic events.

Prior to the Covid vaccinations, psychiatric medications were the mass-prescribed medications with the worst risk-to-benefit ratio on the market. In addition to rarely providing benefits to patients, there is a wide range of severe complications that commonly result from them. Likewise, I and many colleagues believe the widespread adoption of psychotropic drugs has distorted the cognition of the demographic that frequently uses them (which to some extent stratifies by political orientation) and has created a wide range of detrimental shifts in our society.

Selective serotonin reuptake inhibitors (SSRIs) have a primary mechanism of action similar to cocaine’s. SSRIs block the reuptake of serotonin; SNRIs, also commonly prescribed, block the reuptake of serotonin and norepinephrine (henceforth “SSRI” refers to both SSRIs and SNRIs); and cocaine blocks the reuptake of serotonin, norepinephrine and dopamine. SSRIs were originally used as antidepressants, then gradually had their use marketed into other areas, and along the way have amassed a massive body count.

When Prozac, the first SSRI, entered the market in 1988, it quickly distinguished itself as a particularly dangerous medication: within nine years, the FDA had received 39,000 adverse event reports for it, a number far greater than for any other drug. These included hundreds of suicides, atrocious violent crimes, hostility and aggression, psychosis, confusion, distorted thinking, convulsions, amnesia, and sexual dysfunction (long-term or permanent sexual dysfunction is one of the most commonly reported side effects of antidepressants, which is ironic given that the medication is supposed to make you less, not more, depressed).

SSRI homicides are common, and a website exists that has compiled thousands upon thousands of documented occurrences. As far as I know (there are most likely a few exceptions), in every case where a mass school shooting has happened and it was possible to know the shooter’s medical history, the shooter was taking a psychiatric medication known for causing these behavioral changes. After each mass shooting, memes illustrating this topic typically circulate online, and the recent events in Texas [this article was written shortly after the shooting last year] are no exception. I found one of these and made an updated version of it (the one I originally used contained some inaccuracies).

Oftentimes, “SSRIs cause mass shootings” is treated as just another crazy conspiracy theory. However, much in the same way the claim “COVID Vaccines are NOT safe and effective” is typically written off as a conspiracy theory, if you go past these labels and dig into the actual data, an abundantly clear and highly concerning picture emerges.

There are many serious issues with psychiatric medications. For brevity, this article will focus exclusively on their tendency to cause horrific violent crimes, a tendency known to both the drug companies and the FDA long before these drugs entered the market. While there is a large amount of evidence for this correlation, it is the one topic that is never up for debate when a mass shooting occurs. I have a lot of flexibility to discuss highly controversial topics with my colleagues, but this topic is met with so much hostility that I can never bring it up. It is for this reason that I am immensely grateful to have an anonymous forum I can use.


How Big Pharma And The FDA Buried The Dangers Of SSRI Antidepressants

pierrekory |   One of the pharmaceutical executives directly involved in obtaining approval for the original SSRI antidepressant, Prozac, developed a great deal of guilt over what he had been complicit in once a large number of SSRI-linked deaths occurred. John Virapen and Peter Rost are the only pharmaceutical executives I know of who have become whistleblowers and shared the intimate details of how these companies actually operate. Although the events Virapen alleged seem hard to believe, other whistleblowers have made similar observations (the accounts of the Pfizer whistleblowers can be found in this article and this article).

John Virapen chronicled the events in which he was complicit in “Side Effects: Death—Confessions of a Pharma Insider.” These included outrageous acts of bribery to get his drugs approved, and photographing physicians with prostitutes provided by Eli Lilly so that they could be blackmailed into serving Eli Lilly. For those interested, this is a brief talk that Virapen gave about his experiences. I greatly appreciate the fact he used candid language rather than euphemisms like almost everyone else does:

At the start of the saga, Lilly was in dire financial straits and the company’s survival hinged on the approval of Prozac. Prozac had initially been proposed as a treatment for weight loss (as this side effect of Prozac had been observed in treatment subjects), but Lilly subsequently concluded it would be easier to get approval for treating depression and then get a post-marketing approval for the treatment of weight loss.

As Prozac took off, it became clear that depression was a much better market, and the obesity angle was forgotten. Lilly then used a common industry tactic: it worked tirelessly to expand the definition of depression so that everyone could become eligible for the drug, and aggressively marketed this need for happiness to the public, before long transforming depression from a rare condition into a common one. For those wishing to learn more, Peter Gøtzsche has extensively documented how this fraud transpired, and both this brief documentary and this article show how depression was popularized in Japan so that treatments for it could be sold.

Unfortunately, while the marketing machine had no difficulties creating a demand for Prozac, the initial data made it abundantly clear that the first SSRI, Prozac, was dangerous and ineffective. Lilly settled on the strategy of obtaining regulatory approval in Sweden, and using this approval as a precedent to obtain approval in other countries. Virapen was assigned to this task and told by his superiors that if he failed, his career was over. Virapen, unfortunately, discovered that whenever he provided Lilly’s clinical trial data to experts, they had trouble believing he was actually seeking regulatory approval, as Prozac’s trial data was just that bad. 

Sweden (following its regulatory procedures) elected to allow an outside independent expert to make the final determination on whether Prozac should be approved. The identity of this expert was concealed, but Virapen was able to determine that it was Anders Forsman, a forensic psychiatrist and member of the legal council of the Swedish National Board of Health. After meeting with Virapen, Forsman proposed an untraceable bribe. Then, upon receiving payment, he wrote a glowing letter in support of Prozac, fully reversing his position (he had ridiculed the drug two weeks before), and guided Virapen through rewriting the trial to conceal the five SSRI suicide attempts (four of them fatal) in Lilly’s trial.

Forsman’s expert opinion resulted in Prozac being partially approved and formally priced for reimbursement in Sweden, which was then used as a precedent to market it around the world at that same lucrative price. Virapen noted that during this time, German drug regulators who had clearly and unambiguously stated that Prozac was “totally unsuitable for the treatment of depression” suddenly reversed their position, leading Virapen to suspect that similar under-the-table activity had occurred in Germany. David Healy, a doctor and director of the North Wales School of Psychological Medicine, likewise concluded that the German approval was due to “unorthodox lobbying methods exercised on independent members of the regulatory authorities.”

Not long after saving Eli Lilly, Virapen was fired. Virapen believes he was fired because he was a man of color in an otherwise Caucasian company (he was told this by his supervisor). Gøtzsche, a leading expert in pharmaceutical research fraud and meta-analyses, on the other hand, attributed the firing to typical organized-crime tactics: Lilly sought to conceal its illegal activity by firing Virapen along with the two assistants who had helped arrange the bribe to Forsman (immediately afterwards, none of them were permitted to access their offices, and thus none could obtain the files proving the bribery had occurred). Fortunately, as happened with Peter Rost, this unjust firing eventually motivated Virapen to become an invaluable whistleblower.

Heavily Abused Legal Drugs Adderall And Xanax Blocked By "Secret Limits"

Word on the street, and what I've witnessed with my very own lying eyes: information technology CHUDs and medical students alike have been crying like little bishes about the market's failure to keep them supplied with their longtime legal drugs of dependency.

Bloomberg  |  Patients diagnosed with conditions like anxiety and sleep disorders have become caught in the crosshairs of America’s opioid crisis, as secret policies mandated by a national opioid settlement have turned filling legitimate prescriptions into a major headache.

In July, limits went into effect that flag and sometimes block pharmacies’ orders of controlled substances such as Adderall and Xanax when they exceed a certain threshold. The requirement stems from a 2021 settlement with the US’s three largest drug distributors — AmerisourceBergen Corp., Cardinal Health Inc. and McKesson Corp. But pharmacists said it curtails their ability to fill prescriptions for many different types of controlled substances — not just opioids.

Independent pharmacists said the rules force them to come up with creative workarounds. Sometimes, they must send patients on frustrating journeys to find pharmacies that haven’t yet exceeded their caps in order to buy prescribed medicines.

“I understand the intention of this policy is to have control of controlled substances so they don’t get abused, but it’s not working,” said Richard Glotzer, an independent pharmacist in Millwood, New York. “There’s no reason I should be cut off from ordering these products to dispense to my legitimate patients that need it.”

It's unclear how the thresholds are impacting major chain pharmacies. CVS Health Corp. didn’t provide comment. A spokesperson for Walgreens Boots Alliance Inc. said its pharmacists “work to resolve any specific issues when possible, in coordination with our distributors.” 

The Drug Enforcement Administration regulates the manufacturing, distribution and sale of controlled substances, which can be dangerous when used improperly. Drugmakers and wholesalers were always supposed to keep an eye out for suspicious purchases and have long had systems to catch, report and halt these orders. The prescription opioid crisis, enabled by irresponsible drug company marketing and prescribing, led to a slew of lawsuits and tighter regulations on many parts of the health system, including monitoring of suspicious orders. One major settlement required the three largest distributors to set thresholds on orders of controlled substances starting last July.

The “suspicious order” terminology is a bit of a misnomer, pharmacists said. The orders themselves aren't suspicious, it's just that the pharmacy has exceeded its limit for a specific drug over a certain time period. Any order that puts the pharmacy over its limit can be stopped. As a result, patients with legitimate prescriptions get caught up in the dragnet.

Adding to the confusion, the limits themselves are secret. Drug wholesalers are barred by the settlement agreement from telling pharmacists what the thresholds are, how they’re determined or when the pharmacy is getting close to hitting them.
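As described, the blocking mechanism is just a running total checked against a hidden cap, not a judgment about any individual order. A toy sketch of that logic (the `DistributorCap` class, drug names, and limit figures are all hypothetical; the real thresholds and the formula behind them are secret under the settlement):

```python
from collections import defaultdict

class DistributorCap:
    """Toy model of a per-pharmacy, per-drug ordering threshold.
    The limit is hidden from the pharmacy; any order that would push
    the running total past it is blocked outright."""
    def __init__(self, secret_limits):
        self._limits = secret_limits          # hypothetical: units per period
        self._ordered = defaultdict(int)      # running totals per (pharmacy, drug)

    def place_order(self, pharmacy, drug, units):
        total = self._ordered[(pharmacy, drug)] + units
        if total > self._limits.get(drug, float("inf")):
            return "blocked"                  # pharmacy isn't told how close it was
        self._ordered[(pharmacy, drug)] = total
        return "filled"

cap = DistributorCap({"adderall": 100})
print(cap.place_order("millwood_rx", "adderall", 80))   # prints "filled"
print(cap.place_order("millwood_rx", "adderall", 30))   # prints "blocked"
```

Note that in this model the second order is refused even though it is perfectly legitimate on its own, which is exactly the dragnet effect the pharmacists complain about.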

Tuesday, April 04, 2023

Physics From Computation

00:00:00 Introduction 

00:02:58 Physics from computation 

00:11:30 Generalizing Turing machines  

00:17:34 Dark matter as indicating "atoms of space"  

00:22:13 Energy as density of space itself  

00:30:30 Entanglement limit of all possible computations  

00:34:53 What persists across the universe are "concepts"  

00:40:09 How does ChatGPT work?  

00:41:41 Irreducible computation, ChatGPT, and AI  

00:49:20 Recovering general relativity from the ruliad (Wolfram Physics Project)  

00:58:38 Coming up: David Chalmers, Ben Goertzel, and more Wolfram

India Beware: ChatGPT Is A Missile Aimed Directly At Low-Cost Software Production

theguardian  | “And so for me,” he concluded, “a computer has always been a bicycle of the mind – something that takes us far beyond our inherent abilities. And I think we’re just at the early stages of this tool – very early stages – and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, [but] that’s nothing to what’s coming in the next 100 years.”

Well, that was 1990 and here we are, three decades later, with a mighty powerful bicycle. Quite how powerful it is becomes clear when one inspects how the technology (not just ChatGPT) tackles particular tasks that humans find difficult.

Writing computer programs, for instance.

Last week, Steve Yegge, a renowned software engineer who – like all uber-geeks – uses the ultra-programmable Emacs text editor, conducted an instructive experiment. He typed the following prompt into ChatGPT: “Write an interactive Emacs Lisp function that pops to a new buffer, prints out the first paragraph of A Tale of Two Cities, and changes all words with ‘i’ in them red. Just print the code without explanation.”

ChatGPT did its stuff and spat out the code. Yegge copied and pasted it into his Emacs session and published a screenshot of the result. “In one shot,” he writes, “ChatGPT has produced completely working code from a sloppy English description! With voice input wired up, I could have written this program by asking my computer to do it. And not only does it work correctly, the code that it wrote is actually pretty decent Emacs Lisp code. It’s not complicated, sure. But it’s good code.”

Ponder the significance of this for a moment, as tech investors such as Paul Kedrosky are already doing. He likens tools such as ChatGPT to “a missile aimed, however unintentionally, directly at software production itself. Sure, chat AIs can perform swimmingly at producing undergraduate essays, or spinning up marketing materials and blog posts (like we need more of either), but such technologies are terrific to the point of dark magic at producing, debugging, and accelerating software production quickly and almost costlessly.”

Since, ultimately, our networked world runs on software, suddenly having tools that can write it – and that could be available to anyone, not just geeks – marks an important moment. Programmers have always seemed like magicians: they can make an inanimate object do something useful. I once wrote that they must sometimes feel like Napoleon – who was able to order legions, at a stroke, to do his bidding. After all, computers – like troops – obey orders. But to become masters of their virtual universe, programmers had to possess arcane knowledge, and learn specialist languages to converse with their electronic servants. For most people, that was a pretty high threshold to cross. ChatGPT and its ilk have just lowered it.

Monday, April 03, 2023

Transformers: Robots In Disguise?

quantamagazine |  Recent investigations like the one Dyer worked on have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors, including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

“We don’t know how to tell in which sort of application is the capability of harm going to arise, either smoothly or unpredictably,” said Deep Ganguli, a computer scientist at the AI startup Anthropic.

The Emergence of Emergence

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

Language models have been around for decades. Until about five years ago, the most powerful were based on what’s called a recurrent neural network. These essentially take a string of text and predict what the next word will be. What makes a model “recurrent” is that it learns from its own output: Its predictions feed back into the network to improve future performance.
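The next-word recurrence described above can be sketched in a few lines. This is a minimal illustration, not a trained model: the sizes, the randomly initialized weight matrices, and the `predict_next` helper are all stand-ins, chosen only to show how the hidden state feeds back into the network at each step.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 10, 8                      # toy sizes; real models are vastly larger

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(size=(vocab, hidden))    # current word -> hidden state
W_hh = rng.normal(size=(hidden, hidden))   # previous hidden -> hidden (the recurrence)
W_hy = rng.normal(size=(hidden, vocab))    # hidden state -> next-word scores

def predict_next(word_ids):
    """Feed a word sequence through the recurrence; return next-word probabilities."""
    h = np.zeros(hidden)
    for w in word_ids:                     # strictly one word at a time
        x = np.eye(vocab)[w]               # one-hot encoding of the current word
        h = np.tanh(x @ W_xh + h @ W_hh)   # state carries context from earlier words
    scores = h @ W_hy
    p = np.exp(scores - scores.max())      # softmax over the vocabulary
    return p / p.sum()

probs = predict_next([3, 1, 4])
print(probs.shape)                         # prints (10,): one probability per word
```

The word-by-word loop is the key contrast with the transformer discussed below: a recurrent model cannot see token five until it has processed tokens one through four.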

In 2017, researchers at Google Brain introduced a new kind of architecture called a transformer. While a recurrent network analyzes a sentence word by word, the transformer processes all the words at the same time. This means transformers can process big bodies of text in parallel.

Transformers enabled a rapid scaling up of the complexity of language models by increasing the number of parameters in the model, as well as other factors. The parameters can be thought of as connections between words, and models improve by adjusting these connections as they churn through text during training. The more parameters in a model, the more accurately it can make connections, and the closer it comes to passably mimicking human language. As expected, a 2020 analysis by OpenAI researchers found that models improve in accuracy and ability as they scale up.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

He wasn’t alone. A raft of researchers, detecting the first hints that LLMs could reach beyond the constraints of their training data, are striving for a better grasp of what emergence looks like and how it happens. The first step was to thoroughly document it.

Transformers: More Than Meets The Eye?

quantamagazine  |  Imagine going to your local hardware store and seeing a new kind of hammer on the shelf. You’ve heard about this hammer: It pounds faster and more accurately than others, and in the last few years it’s rendered many other hammers obsolete, at least for most uses. And there’s more! With a few tweaks — an attachment here, a twist there — the tool changes into a saw that can cut at least as fast and as accurately as any other option out there. In fact, some experts at the frontiers of tool development say this hammer might just herald the convergence of all tools into a single device.

A similar story is playing out among the tools of artificial intelligence. That versatile new hammer is a kind of artificial neural network — a network of nodes that “learn” how to do some task by training on existing data — called a transformer. It was originally designed to handle language, but has recently begun impacting other AI domains.

The transformer first appeared in 2017 in a paper that cryptically declared that “Attention Is All You Need.” In other approaches to AI, the system would first focus on local patches of input data and then build up to the whole. In a language model, for example, nearby words would first get grouped together. The transformer, by contrast, runs processes so that every element in the input data connects, or pays attention, to every other element. Researchers refer to this as “self-attention.” This means that as soon as it starts training, the transformer can see traces of the entire data set.
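The all-pairs "self-attention" step the article describes can be sketched as scaled dot-product attention. This simplified version omits the learned query/key/value projections and multiple heads of a real transformer, keeping only the core idea: every token's output is a weighted mix of every token's input.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of embeddings.
    X: (seq_len, d) array. Each output row attends to every input row."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                        # all-pairs similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # softmax over each row
    return weights @ X                                   # mix all tokens at once

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])      # three toy token embeddings
out = self_attention(X)
print(out.shape)                                         # prints (3, 2)
```

Because the score matrix is computed for the whole sequence in one matrix product, nothing here is sequential: this is why transformers can train on large bodies of text in parallel, where a recurrent network must step through them word by word.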

Before transformers came along, progress on AI language tasks largely lagged behind developments in other areas. “In this deep learning revolution that happened in the past 10 years or so, natural language processing was sort of a latecomer,” said the computer scientist Anna Rumshisky of the University of Massachusetts, Lowell. “So NLP was, in a sense, behind computer vision. Transformers changed that.”

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. This led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates consistent new text to an unsettling degree.

The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more.

“Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich.

Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence. “I think the transformer is so popular because it implies the potential to become universal,” said the computer scientist Atlas Wang of the University of Texas, Austin. “We have good reason to want to try transformers for the entire spectrum” of AI tasks.

Sunday, April 02, 2023

Unaccountable Algorithmic Tyranny

alt-market |  In this article I want to stress the issue of AI governance and how it might be made to appeal to the masses. In order to achieve the dystopian future the globalists want, they still have to convince a large percentage of the population to applaud it and embrace it.

The comfort of having a system that makes difficult decisions for us is an obvious factor, as mentioned above. But, AI governance is not just about removing choice, it’s also about removing the information we might need to be educated enough to make choices. We saw this recently with the covid pandemic restrictions and the collusion between governments, corporate media and social media. Algorithms were widely used by web media conglomerates from Facebook to YouTube to disrupt the flow of information that might run contrary to the official narrative.

In some cases the censorship targeted people merely asking pertinent questions or fielding alternative theories. In other cases, the censorship outright targeted provably factual data that was contrary to government policies. A multitude of government claims on covid origins, masking, lockdowns and vaccines have been proven false over the past few years, and yet millions of people still blindly believe the original narrative because they were bombarded with it nonstop by the algorithms. They were never exposed to the conflicting information, so they were never able to come to their own conclusions.

Luckily, unlike bots, human intelligence is filled with anomalies – People who act on intuition and skepticism in order to question preconceived or fabricated assertions. The lack of contrary information immediately causes suspicion for many, and this is what authoritarian governments often refuse to grasp.

The great promise globalists hold up in the name of AI is the idea of a purely objective state; a social and governmental system without biases and without emotional content. It’s the notion that society can be run by machine thinking in order to “save human beings from themselves” and their own frailties. It is a false promise, because there will never be such a thing as objective AI, nor any AI that understands the complexities of human psychological development.

Furthermore, the globalist dream of AI is driven not by adventure, but by fear. It’s about the fear of responsibility, the fear of merit, the fear of inferiority, the fear of struggle and the fear of freedom. The greatest accomplishments of mankind are admirable because they are achieved with emotional content, not in spite of it. It is that content that inspires us to delve into the unknown and overcome our fears. AI governance and an AI integrated society would be nothing more than a desperate action to deny the necessity of struggle and the will to overcome.

Globalists are more than happy to offer a way out of the struggle, and they will do it with AI as the face of their benevolence. All you will have to do is trade your freedoms and perhaps your soul in exchange for never having to face the sheer terror of your own quiet thoughts. Some people, sadly, believe this is a fair trade.

The elites will present AI as the great adjudicator, the pure and logical intercessor of the correct path; not just for nations and for populations at large but for each individual life. With the algorithm falsely accepted as infallible and purely unbiased, the elites can then rule the world through their faceless creation without any oversight – For they can then claim that it’s not them making decisions, it’s the AI.  How does one question or even punish an AI for being wrong, or causing disaster? And, if the AI happens to make all its decisions in favor of the globalist agenda, well, that will be treated as merely coincidental.

Disingenuously Shaping The Narrative Around Large Language Model Computing

vice  |  More than 30,000 people—including Tesla’s Elon Musk, Apple co-founder Steve Wozniak, politician Andrew Yang, and a few leading AI researchers—have signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. 

The letter immediately caused a furor as signatories walked back their positions, some notable signatories turned out to be fake, and many more AI researchers and experts vocally disagreed with the letter’s proposal and approach. 

The letter was penned by the Future of Life Institute, a nonprofit organization with the stated mission to “reduce global catastrophic and existential risk from powerful technologies.” It is also host to some of the biggest proponents of longtermism, a kind of secular religion boosted by many members of the Silicon Valley tech elite since it preaches seeking massive wealth to direct towards problems facing humans in the far future. One notable recent adherent to this idea is disgraced FTX CEO Sam Bankman-Fried. 

Specifically, the institute focuses on mitigating long-term "existential" risks to humanity such as superintelligent AI. Musk, who has expressed longtermist beliefs, donated $10 million to the institute in 2015.  

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter states. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter clarifies, referring to the arms race between big tech companies like Microsoft and Google, who in the past year have released a number of new AI products. 

Other notable signatories include Stability AI CEO Emad Mostaque, author and historian Yuval Noah Harari, and Pinterest co-founder Evan Sharp. A number of people who work for the companies participating in the AI arms race, including Google DeepMind and Microsoft, have also signed. All signatories were confirmed to Motherboard by the Future of Life Institute to be “independently verified through direct communication.” No one from OpenAI, which develops and commercializes the GPT series of AI models, has signed the letter. 

Despite this verification process, the letter started out with a number of false signatories, including people impersonating OpenAI CEO Sam Altman, Chinese president Xi Jinping, and Chief AI Scientist at Meta, Yann LeCun, before the institute cleaned the list up and paused the appearance of signatures on the letter as they verify each one. 

The letter has been scrutinized by many AI researchers and even its own signatories since it was published on Tuesday. Gary Marcus, a professor of psychology and neural science at New York University, told Reuters “the letter isn’t perfect, but the spirit is right.” Similarly, Emad Mostaque, the CEO of Stability.AI, who has pitted his firm against OpenAI as a truly "open" AI company, tweeted, “So yeah I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter.” 

AI experts criticize the letter as furthering the “AI hype” cycle, rather than listing or calling for concrete action on harms that exist today. Some argued that it promotes a longtermist perspective, which is a worldview that has been criticized as harmful and anti-democratic because it valorizes the uber-wealthy and allows for morally dubious actions under certain justifications.

Emily M. Bender, a Professor in the Department of Linguistics at the University of Washington and the co-author of the first paper the letter cites, tweeted that this open letter is “dripping with #Aihype” and that the letter misuses her research. The letter says, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research,” but Bender counters that her research specifically points to current large language models and their use within oppressive systems—which is much more concrete and pressing than hypothetical future AI. 

“We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about ‘too powerful AI’,” she tweeted. “Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).” 

“It's essentially misdirection: bringing everyone's attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a Research Scientist and Climate Lead at Hugging Face, told Motherboard.


Saturday, April 01, 2023

Don't Sleep On That Tablet Anti-Disinformation Grand Opus

racket  |  Years ago, when I first began to have doubts about the Trump-Russia story, I struggled to come up with a word to articulate my suspicions.

If the story was wrong, and Trump wasn’t a Russian spy, there wasn’t a word for what was being perpetrated. This was a system-wide effort to re-frame reality itself, which was both too intellectually ambitious to fit in a word like “hoax,” but also probably not against any one law, either. New language would have to be invented just to define the wrongdoing, which not only meant whatever this was would likely go unpunished, but that it could be years before the public was ready to talk about it.

Around that same time, writer Jacob Siegel — a former army infantry and intelligence officer who edits Tablet’s afternoon digest, The Scroll — was beginning the job of putting key concepts on paper. As far back as 2019, he sketched out the core ideas for a sprawling, illuminating 13,000-word piece that just came out this week. Called “A Guide to Understanding the Hoax of the Century: Thirteen ways of looking at disinformation,” Siegel’s Tablet article is the enterprise effort at describing the whole anti-disinformation elephant I’ve been hoping for years someone in journalism would take on.

It will escape no one’s notice that Siegel’s lede recounts the Hamilton 68 story from the Twitter Files. Siegel says the internal dialogues of Twitter executives about the infamous Russia-tracking “dashboard” helped him frame the piece he’d been working on for so long. Which is great, I’m glad about that, but he goes far deeper into the topic than I have, and in a way that has a real chance to be accessible to all political audiences.

Siegel threads together all the disparate strands of a very complex story, in which the sheer quantity of themes is daunting: the roots in counter-terrorism strategy, Russiagate as a first great test case, the rise of a public-private “counter-disinformation complex” nurturing an “NGO Borg,” the importance of Trump and “domestic extremism” as organizing targets, the development of a new uniparty politics anointing itself “protector” of things like elections, amid many other things.

He concludes with an escalating string of anxiety-provoking propositions. One is that our first windows into this new censorship system, like Stanford’s Election Integrity Partnership, might also be our last, as AI and machine learning appear ready to step in to do the job at scale. The National Science Foundation just announced it was “building a set of use cases” to enable ChatGPT to “further automate” the propaganda mechanism, as Siegel puts it. The messy process people like me got to see, just barely, in the outlines of Twitter emails made public by a one-in-a-million lucky strike, may not appear in recorded human conversations going forward. “Future battles fought through AI technologies,” says Siegel, “will be harder to see.”

More unnerving is the portion near the end describing how seemingly smart people are fast constructing an ideology of mass surrender. Siegel recounts the horrible New York Times Magazine article (how did I forget it?) written by Yale law graduate Emily Bazelon just before the 2020 election, whose URL is titled “The Problem of Free Speech in an Age of Disinformation.” Shorter Bazelon could have been Fox Nazis Censorship Derp: the article the Times really ran was insanely long and ended with flourishes like, “It’s time to ask whether the American way of protecting free speech is actually keeping us free.”

Both the actors in the Twitter Files and the multitudinous papers produced by groups like the Aspen Institute and Harvard’s Shorenstein Center are perpetually concerned with re-thinking the “problem” of the First Amendment, which of course is not popularly thought of as a problem. It’s notable that the Anti-Disinformation machine, a clear sequel to the Military-Industrial Complex, doesn’t trumpet the virtues of the “free world” but rather the “rules-based international order,” within which (as Siegel points out) people like former Labor Secretary Robert Reich talk about digital deletion as “necessary to protect American democracy.” This idea of pruning fingers off democracy to save it is increasingly popular; we await the arrival of the Jerzy Kosinski character who’ll propound this political gardening metaphor to the smart set.

Biden Administration Leads Massive Speech Censorship Operation

foxnews  |  EXCLUSIVE: The Biden administration has led "the largest speech censorship operation in recent history" by working with social media companies to suppress and censor information later acknowledged as truthful, former Missouri attorney general Eric Schmitt will tell the House Weaponization Committee Thursday.

Schmitt, now a Republican senator from Missouri, is expected to testify alongside Louisiana Attorney General Jeff Landry and former Missouri deputy attorney general for special litigation, D. John Sauer.

LAWSUIT FILED AGAINST BIDEN, TOP OFFICIALS FOR 'COLLUDING' WITH BIG TECH TO CENSOR SPEECH ON HUNTER, COVID

The three witnesses will discuss the findings of their federal government censorship lawsuit, Louisiana and Missouri v. Biden et al—which they filed in May 2022 and which they describe as "the most important free speech lawsuit of this generation."

The testimony comes after Missouri and Louisiana filed a lawsuit against the Biden administration, alleging that President Biden and members of his team "colluded with social media giants Meta, Twitter, and YouTube to censor free speech in the name of combating so-called ‘disinformation’ and ‘misinformation.’"

The lawsuit alleges that coordination led to the suppression and censorship of truthful information "on a scale never before seen" using examples of the COVID lab-leak theory, information about COVID vaccinations, Hunter Biden’s laptop, and more.

The lawsuit is currently in discovery, and Thursday’s hearing is expected to feature witness testimony that will detail evidence collected to show the Biden administration has "coerced social media companies to censor disfavored speech."

"Discovery obtained by Missouri and Louisiana demonstrated that the Biden administration’s coordination with social media companies and collusion with non-governmental organizations to censor speech was far more pervasive and destructive than ever known," Schmitt will testify, according to prepared testimony obtained by Fox News Digital.

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...