Showing posts with label scientific morality. Show all posts

Friday, March 15, 2024

Dr. Martin Kulldorff Did Nothing Wrong

childrenshealthdefense  |  Martin Kulldorff, Ph.D., an epidemiologist and professor of Medicine at Harvard University, on Monday confirmed the university fired him.

Kulldorff has been a critic of lockdown policies, school closures and vaccine mandates since early in the COVID-19 pandemic. In October 2020, he published the Great Barrington Declaration, along with co-authors Oxford epidemiologist Sunetra Gupta, Ph.D., and Stanford epidemiologist and health economist Jay Bhattacharya, M.D., Ph.D.

In an essay published Monday in City Journal, Kulldorff wrote that his anti-mandate position got him fired from the Mass General Brigham hospital system, where he also worked, and consequently from his Harvard faculty position.

Kulldorff detailed how his commitment to scientific inquiry put him at odds with a system that he alleged had “lost its way.”

“I am no longer a professor of medicine at Harvard,” Kulldorff wrote. “The Harvard motto is Veritas, Latin for truth. But, as I discovered, truth can get you fired.”

He noted that it was clear from early 2020 that lockdowns would be futile for controlling the pandemic.

“It was also clear that lockdowns would inflict enormous collateral damage, not only on education but also on public health, including treatment for cancer, cardiovascular disease, and mental health,” Kulldorff wrote.

“We will be dealing with the harm done for decades. Our children, the elderly, the middle class, the working class, and the poor around the world — all will suffer.”

That viewpoint got little debate in the mainstream media until the epidemiologist and his colleagues published the Great Barrington Declaration, signed by nearly 1 million public health professionals from across the world.

The document made clear that no scientific consensus existed for lockdown measures in a pandemic. It argued instead for a “focused protection” approach for pandemic management that would protect high-risk populations, such as elderly or medically compromised people, and otherwise allow the COVID-19 virus to circulate among the healthy population.

Although the declaration merely summed up what previously had been conventional wisdom in public health, it was subject to tremendous backlash. Emails obtained through a Freedom of Information Act request revealed that Dr. Francis Collins, then-director of the National Institutes of Health, called for a “devastating published takedown” of the declaration and of its authors, who were subsequently slandered in mainstream and social media.

 

 

Friday, February 10, 2023

Chatbots Replace Clinicians In Therapeutic Contexts?

medpagetoday  |  Within a week of its Nov. 30, 2022 release by OpenAI, ChatGPT was the most widely used and influential artificial intelligence (AI) chatbot in history, with over a million registered users. Like other chatbots built on large language models, ChatGPT is capable of accepting natural language text inputs and producing novel text responses based on probabilistic analyses of enormous bodies, or corpora, of pre-existing text. ChatGPT has been praised for producing particularly articulate and detailed text in many domains and formats, including not only casual conversation but also expository essays, fiction, song, poetry, and computer programming languages. ChatGPT has displayed enough domain knowledge to narrowly miss passing a certifying exam for accountants, to earn C+ grades on law school exams and B- grades on business school exams, and to pass parts of the U.S. Medical Licensing Exams. It has been listed as a co-author on at least four scientific publications.

At the same time, like other large language model chatbots, ChatGPT regularly makes misleading or flagrantly false statements with great confidence (sometimes referred to as "AI hallucinations"). Despite significant improvements over earlier models, it has at times shown evidence of algorithmic racial, gender, and religious bias. Additionally, data entered into ChatGPT is explicitly stored by OpenAI and used in training, threatening user privacy. In my experience, I've asked ChatGPT to evaluate hypothetical clinical cases and found that it can generate reasonable but inexpert differential diagnoses, diagnostic workups, and treatment plans. Its responses are comparable to those of a well-read and overly confident medical student with poor recognition of important clinical details.

This suddenly widespread use of large language model chatbots has brought new urgency to questions of artificial intelligence ethics in education, law, cybersecurity, journalism, politics -- and, of course, healthcare.

As a case study on ethics, let's examine the results of a pilot program from the free peer-to-peer therapy platform Koko. The program used the same GPT-3 large language model that powers ChatGPT to generate therapeutic comments for users experiencing psychological distress. Users on the platform who wished to send supportive comments to other users had the option of sending AI-generated comments rather than formulating their own messages. Koko's co-founder Rob Morris reported: "Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own," and "Response times went down 50%, to well under a minute." However, the experiment was quickly discontinued because "once people learned the messages were co-created by a machine, it didn't work." Koko has made ambiguous and conflicting statements about whether users understood that they were receiving AI-generated therapeutic messages but has consistently reported that there was no formal informed consent process or review by an independent institutional review board.

ChatGPT and Koko's therapeutic messages raise an urgent question for clinicians and clinical researchers: Can large language models be used in standard medical care or should they be restricted to clinical research settings?

In terms of the benefits, ChatGPT and its large language model cousins might offer guidance to clinicians and even participate directly in some forms of healthcare screening and psychotherapeutic treatment, potentially increasing access to specialist expertise, reducing error rates, lowering costs, and improving outcomes for patients. On the other hand, they entail currently unknown and potentially large risks of false information and algorithmic bias. Depending on their configuration, they can also be enormously invasive to their users' privacy. These risks may be especially harmful to vulnerable individuals with medical or psychiatric illness.

As researchers and clinicians begin to explore the potential use of large language model artificial intelligence in healthcare, applying principles of clinical research will be key. As most readers will know, clinical research is work with human participants that is intended primarily to develop generalizable knowledge about health, disease, or its treatment. Determining whether and how artificial intelligence chatbots can safely and effectively participate in clinical care would prima facie appear to fit perfectly within this category of clinical research. Unlike standard medical care, clinical research can involve deviations from the standard of care and additional risks to participants that are not necessary for their treatment but are vital for generating new generalizable knowledge about their illness or treatments. Because of this flexibility, clinical research is subject to additional ethical (and, for federally funded research, legal) requirements that do not apply to standard medical care but are necessary to protect research participants from exploitation. In addition to informed consent, clinical research is subject to independent review by knowledgeable individuals not affiliated with the research effort, usually an institutional review board. Both clinical researchers and independent reviewers are responsible for ensuring the proposed research has a favorable risk-benefit ratio, with potential benefits for society and participants that outweigh the risks to participants, and minimization of risks to participants wherever possible. These informed consent and independent review processes, while imperfect, are enormously important to protect the safety of vulnerable patient populations.

There is another newer and evolving category of clinical work known as quality improvement or quality assurance, which uses data-driven methods to improve healthcare delivery. Some tests of artificial intelligence chatbots in clinical care might be considered quality improvement. Should these projects be subjected to informed consent and independent review? The NIH lays out a number of criteria for determining whether such efforts should be subjected to the added protections of clinical research. Among these, two key questions are whether techniques deviate from standard practice, and whether the test increases the risk to participants. For now, the use of large language model chatbots clearly both deviates from standard practice and introduces novel, uncertain risks to participants. It is possible that in the near future, as AI hallucinations and algorithmic bias are reduced and as AI chatbots are more widely adopted, their use may no longer require the protections of clinical research. At present, informed consent and institutional review remain critical to the safe and ethical use of large language model chatbots in clinical practice.

Thursday, February 09, 2023

The Application Of Machine Learning To Evidence Based Medicine

 
What if, bear with me now, what if the phase 3 clinical trials for mRNA therapeutics, conducted on billions of unsuspecting, hoodwinked and bamboozled humans, were a new kind of research done to yield a new depth and breadth of clinical data exceptionally useful toward breaking up logjams in clinical terminology as well as experimental sample size? Vaxxed vs. unvaxxed is the subject of long-term gubmint surveillance now. To what end?

Nature  | Recently, advances in wearable technologies, data science and machine learning have begun to transform evidence-based medicine, offering a tantalizing glimpse into a future of next-generation ‘deep’ medicine. Despite stunning advances in basic science and technology, clinical translations in major areas of medicine are lagging. While the COVID-19 pandemic exposed inherent systemic limitations of the clinical trial landscape, it also spurred some positive changes, including new trial designs and a shift toward a more patient-centric and intuitive evidence-generation system. In this Perspective, I share my heuristic vision of the future of clinical trials and evidence-based medicine.

Main

The last 30 years have witnessed breathtaking, unparalleled advancements in scientific research—from a better understanding of the pathophysiology of basic disease processes and unraveling the cellular machinery at atomic resolution to developing therapies that alter the course and outcome of diseases in all areas of medicine. Moreover, exponential gains in genomics, immunology, proteomics, metabolomics, gut microbiomes, epigenetics and virology in parallel with big data science, computational biology and artificial intelligence (AI) have propelled these advances. In addition, the dawn of CRISPR–Cas9 technologies has opened a tantalizing array of opportunities in personalized medicine.

Despite these advances, their rapid translation from bench to bedside is lagging in most areas of medicine and clinical research remains outpaced. The drug development and clinical trial landscape continues to be expensive for all stakeholders, with a very high failure rate. In particular, the attrition rate for early-stage developmental therapeutics is quite high, as more than two-thirds of compounds succumb in the ‘valley of death’ between bench and bedside1,2. To bring a drug successfully through all phases of drug development into the clinic costs more than 1.5–2.5 billion dollars (refs. 3, 4). This, combined with the inherent inefficiencies and deficiencies that plague the healthcare system, is leading to a crisis in clinical research. Therefore, innovative strategies are needed to engage patients and generate the necessary evidence to propel new advances into the clinic, so that they may improve public health. To achieve this, traditional clinical research models should make way for avant-garde ideas and trial designs.

Before the COVID-19 pandemic, the conduct of clinical research had remained almost unchanged for 30 years and some of the trial conduct norms and rules, although archaic, were unquestioned. The pandemic exposed many of the inherent systemic limitations in the conduct of trials5 and forced the clinical trial research enterprise to reevaluate all processes—it has therefore disrupted, catalyzed and accelerated innovation in this domain6,7. The lessons learned should help researchers to design and implement next-generation ‘patient-centric’ clinical trials.

Chronic diseases continue to impact millions of lives and cause major financial strain to society8, but research is hampered by the fact that most of the data reside in data silos. The subspecialization of the clinical profession has led to silos within and among specialties; every major disease area seems to work completely independently. However, the best clinical care is provided in a multidisciplinary manner with all relevant information available and accessible. Better clinical research should harness the knowledge gained from each of the specialties to achieve a collaborative model enabling multidisciplinary, high-quality care and continued innovation in medicine. Because many disciplines in medicine view the same diseases differently—for example, infectious disease specialists view COVID-19 as a viral disease while cardiology experts view it as an inflammatory one—cross-discipline approaches will need to respect the approaches of other disciplines. Although a single model may not be appropriate for all diseases, cross-disciplinary collaboration will make the system more efficient to generate the best evidence.

Over the next decade, the application of machine learning, deep neural networks and multimodal biomedical AI is poised to reinvigorate clinical research from all angles, including drug discovery, image interpretation, streamlining electronic health records, improving workflow and, over time, advancing public health (Fig. 1). In addition, innovations in wearables, sensor technology and Internet of Medical Things (IoMT) architectures offer many opportunities (and challenges) to acquire data9. In this Perspective, I share my heuristic vision of the future of clinical trials and evidence generation and deliberate on the main areas that need improvement in the domains of clinical trial design, clinical trial conduct and evidence generation.

Fig. 1: Timeline of drug development from the present to the future.

The figure represents the timeline from drug discovery to first-in-human phase 1 trials and ultimately FDA approval. Phase 4 studies occur after FDA approval and can go on for several years. There is an urgent need to reinvigorate clinical trials through improved drug discovery, image interpretation, streamlined electronic health records, and better workflow, over time advancing public health. AI can aid in many of these aspects at all stages of drug development. DNN, deep neural network; EHR, electronic health records; IoMT, internet of medical things; ML, machine learning.

Clinical trial design

Trial design is one of the most important steps in clinical research—better protocol designs lead to better clinical trial conduct and faster ‘go/no-go’ decisions. Moreover, losses from poorly designed, failed trials are not only financial but also societal.

Challenges with randomized controlled trials

Randomized controlled trials (RCTs) have been the gold standard for evidence generation across all areas of medicine, as they allow unbiased estimates of treatment effect without confounders. Ideally, every medical treatment or intervention should be tested via a well-powered and well-controlled RCT. However, conducting RCTs is not always feasible owing to challenges in generating evidence in a timely manner, cost, design on narrow populations precluding generalizability, ethical barriers and the time taken to conduct these trials. By the time they are completed and published, RCTs become quickly outdated and, in some cases, irrelevant to the current context. In the field of cardiology alone, 30,000 RCTs have not been completed owing to recruitment challenges10. Moreover, trials are being designed in isolation and within silos, with many clinical questions remaining unanswered. Thus, traditional trial design paradigms must adapt to contemporary rapid advances in genomics, immunology and precision medicine11.

The Application Of Machine Learning To Osgood's Affect Control Theory

Over the weekend, I chatted with an AI specialist and got to thinking A LOT about possible applications of large language models and their potential specialized uses for governance. The CIA studied language very extensively under MKUltra as part of its larger Human Ecology project. Charles E. Osgood was a long-term recipient of considerable CIA largesse. This topic was a priority for the Agency. It boggles the mind to consider what kind of clandestine leaps have taken place in this specialty through the use of contemporary computational methods.

wikipedia |  In control theory, affect control theory proposes that individuals maintain affective meanings through their actions and interpretations of events. The activity of social institutions occurs through maintenance of culturally based affective meanings.

Affective meaning

Besides a denotative meaning, every concept has an affective meaning, or connotation, that varies along three dimensions:[1] evaluation – goodness versus badness, potency – powerfulness versus powerlessness, and activity – liveliness versus torpidity. Affective meanings can be measured with semantic differentials yielding a three-number profile indicating how the concept is positioned on evaluation, potency, and activity (EPA). Osgood[2] demonstrated that an elementary concept conveyed by a word or idiom has a normative affective meaning within a particular culture.

A stable affective meaning derived either from personal experience or from cultural inculcation is called a sentiment, or fundamental affective meaning, in affect control theory. Affect control theory has inspired assembly of dictionaries of EPA sentiments for thousands of concepts involved in social life – identities, behaviours, settings, personal attributes, and emotions. Sentiment dictionaries have been constructed with ratings of respondents from the US, Canada, Northern Ireland, Germany, Japan, China and Taiwan.[3]

Impression formation

Each concept that is in play in a situation has a transient affective meaning in addition to an associated sentiment. The transient corresponds to an impression created by recent events.[4]

Events modify impressions on all three EPA dimensions in complex ways that are described with non-linear equations obtained through empirical studies.[5]

Here are two examples of impression-formation processes.

  • An actor who behaves disagreeably seems less good, especially if the object of the behavior is innocent and powerless, like a child.
  • A powerful person seems desperate when performing extremely forceful acts on another, and the object person may seem invincible.

A social action creates impressions of the actor, the object person, the behavior, and the setting.[6]

Deflections

Deflections are the distances in the EPA space between transient and fundamental affective meanings. For example, a mother complimented by a stranger feels that the unknown individual is much nicer than a stranger is supposed to be, and a bit too potent and active as well – thus there is a moderate distance between the impression created and the mother's sentiment about strangers. High deflections in a situation produce an aura of unlikeliness or uncanniness.[7] It is theorized that high deflections maintained over time generate psychological stress.[8]

The basic cybernetic idea of affect control theory can be stated in terms of deflections. An individual selects a behavior that produces the minimum deflections for concepts involved in the action. Minimization of deflections is described by equations derived with calculus from empirical impression-formation equations.[9]
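The deflection idea lends itself to a compact sketch. Below is a toy Python illustration, not the empirically estimated equations the theory actually uses: the EPA values are hypothetical, and a plain squared Euclidean distance stands in for the weighted, non-linear forms in the literature.

```python
def deflection(fundamental, transient):
    """Squared Euclidean distance between fundamental and transient EPA profiles."""
    return sum((f - t) ** 2 for f, t in zip(fundamental, transient))

# Hypothetical fundamental sentiment for "stranger" and the transient
# impression after the stranger compliments the mother (nicer, a bit
# more potent and active than strangers are "supposed" to be).
stranger_fundamental = (0.2, 0.1, 0.5)
stranger_transient = (1.8, 0.9, 1.2)
print(round(deflection(stranger_fundamental, stranger_transient), 2))  # 3.69

# Behavior selection: among candidate behaviors (with hypothetical
# predicted transients), pick the one minimizing deflection.
candidates = {
    "greet": (0.5, 0.2, 0.6),
    "snub": (-1.5, 0.8, 0.3),
}
best = min(candidates, key=lambda b: deflection(stranger_fundamental, candidates[b]))
print(best)  # "greet" produces the smaller deflection
```

The moderate deflection in the compliment example matches the stranger scenario described above, and the minimization step mirrors the cybernetic selection of behavior.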

Action

On entering a scene an individual defines the situation by assigning identities to each participant, frequently in accord with an encompassing social institution.[10] While defining the situation, the individual tries to maintain the affective meaning of self through adoption of an identity whose sentiment serves as a surrogate for the individual's self-sentiment.[11] The identities assembled in the definition of the situation determine the sentiments that the individual tries to maintain behaviorally.

Confirming sentiments associated with institutional identities – like doctor–patient, lawyer–client, or professor–student – creates institutionally relevant role behavior.[12]

Confirming sentiments associated with negatively evaluated identities – like bully, glutton, loafer, or scatterbrain – generates deviant behavior.[13] Affect control theory's sentiment databases and mathematical model are combined in a computer simulation program[14] for analyzing social interaction in various cultures.

Emotions

According to affect control theory, an event generates emotions for the individuals involved in the event by changing impressions of the individuals. The emotion is a function of the impression created of the individual and of the difference between that impression and the sentiment attached to the individual's identity.[15] Thus, for example, an event that creates a negative impression of an individual generates unpleasant emotion for that person, and the unpleasantness is worse if the individual believes she has a highly valued identity. Similarly, an event creating a positive impression generates a pleasant emotion, all the more pleasant if the individual believes he has a disvalued identity in the situation.

Non-linear equations describing how transients and fundamentals combine to produce emotions have been derived in empirical studies.[16] Affect control theory's computer simulation program[17] uses these equations to predict emotions that arise in social interaction, and displays the predictions via computer-drawn facial expressions[18] as well as in terms of emotion words.

Based on cybernetic studies by Pavloski[19] and Goldstein,[20] that utilise perceptual control theory, Heise[21] hypothesizes that emotion is distinct from stress. For example, a parent enjoying intensely pleasant emotions while interacting with an offspring suffers no stress. A homeowner attending to a sponging house guest may feel no emotion and yet be experiencing substantial stress.
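The qualitative pattern in the Emotions section can be mocked up in a few lines. This is a schematic stand-in, not Heise's empirically derived non-linear equations: the hypothetical rule below simply combines the transient impression with its deviation from the fundamental sentiment on each EPA dimension.

```python
def emotion_profile(fundamental, transient):
    """Toy emotion signal: transient impression plus its deviation from
    the fundamental sentiment, per EPA dimension (schematic only)."""
    return tuple(t + (t - f) for f, t in zip(fundamental, transient))

# Hypothetical case: an event creates a negative impression (transient)
# of a person holding a highly valued identity (high fundamental evaluation).
fundamental = (2.5, 1.0, 0.5)   # valued identity: good, fairly potent
transient = (-1.0, 0.8, 0.4)    # negative impression created by the event

emotion = emotion_profile(fundamental, transient)
print(emotion[0])  # strongly negative evaluation: unpleasant emotion,
                   # amplified because the fundamental evaluation was high
```

Even this crude rule reproduces the stated pattern: the more valued the identity, the more unpleasant the emotion generated by the same negative impression.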

Interpretations

Others' behaviors are interpreted so as to minimize the deflections they cause.[22] For example, a man turning away from another and exiting through a doorway could be engaged in several different actions, like departing from, deserting, or escaping from the other. Observers choose among the alternatives so as to minimize deflections associated with their definitions of the situation. Observers who assigned different identities to the observed individuals could have different interpretations of the behavior.

Re-definition of the situation may follow an event that causes large deflections which cannot be resolved by reinterpreting the behavior. In this case, observers assign new identities that are confirmed by the behavior.[23] For example, seeing a father slap a son, one might re-define the father as an abusive parent, or perhaps as a strict disciplinarian; or one might re-define the son as an arrogant brat. Affect control theory's computer program predicts the plausible re-identifications, thereby providing a formal model for labeling theory.

The sentiment associated with an identity can change to befit the kinds of events in which that identity is involved, when situations keep arising where the identity is deflected in the same way, especially when identities are informal and non-institutionalized.[24]

Applications

Affect control theory has been used in research on emotions, gender, social structure, politics, deviance and law, the arts, and business. The theory was originally analyzed through quantitative methods, using mathematical models to interpret data. More recent applications have explored it through qualitative research methods, obtaining data through interviews, observations, and questionnaires. For example, qualitative interviews with the family, friends, and loved ones of murder victims have examined how forgiveness changes based on their interpretation of the situation.[25] Computer programs have also been an important part of understanding affect control theory, beginning with "Interact," a program designed to create social situations with the user in order to understand how an individual will react based on what is happening in the moment. "Interact" has been an essential research tool for understanding social interaction and the maintenance of affect between individuals.[26] A bibliography of research studies in these areas is provided by David R. Heise[27] and at the research program's website.

Monday, December 13, 2021

“It Is Dangerous To Be Right In Matters On Which The Established Authorities Are Wrong.” ― Voltaire

americanthinker |  I just finished reading an article on the Big Think website titled "When science mixes with politics, all we get is politics," by Professor Marcelo Gleiser, theoretical physicist, Dartmouth College.  I mistakenly thought the commentary would decry the misuse of science by politicians, but no.  Instead, it decries the mistrust that we, the unwashed masses, have developed for the science establishment in recent years.  Unwittingly, the eminent professor gives us yet more reasons to regard science insiders with skepticism.

He does what so many of his colleagues do, which is to equate science itself with the institutions that purport to advance science.  To question politicized scientists, then, is supposedly unscientific.

Censorship of actual science has been heavy-handed, both by Democrats and by their Big Tech acolytes.  Epidemiologists, virologists, and physicians who do not toe the party line regarding COVID have been intimidated and silenced.  Science that cannot be openly questioned is not science, since the heart and soul of science are to scrutinize every claim from every angle.  If we are to be told we must follow the science, then scientists must explain to us the inductive reasoning that was applied to exclude members of Congress, and their staffs, from the COVID restrictions they imposed on the rest of us.  If scientists are to decry those of us who doubt their word, then they must equally decry the policy of distributing unvaccinated, untested illegal aliens to every state, while denying entry to legal travelers.

To decry only the skeptics, while ignoring the egregious anti-science of many politicians, does nothing to engender trust in the institutions of science.  It does the opposite.

 

Wednesday, November 10, 2021

Public Pushback At The House That Lil'Fauci Built...,

wsj  |  The sprawling federal research agency has led government efforts studying and battling Covid-19, including funding the development and testing of vaccines. Anthony Fauci, a top NIH scientist, has been a public face of the Biden administration’s case for wider vaccine mandates, including a federal one affecting the NIH’s own staff.

But just like at workplaces across the country, vaccine mandates are sparking controversy at the NIH. The agency’s main bioethics department has scheduled a Dec. 1 live-streamed roundtable session over the ethics of mandates. The seminar is one of four agencywide ethics debates this year, accessible to all of the NIH’s nearly 20,000 staff, as well as patients and the public, organizers say. It was set up after a senior infectious-disease researcher at the institute pushed back against broadening discussion of mandates this summer and requested an agency ethics review.

“There’s a lot of debate within the NIH about whether [a vaccine mandate] is appropriate,” said David Wendler, the senior NIH bioethicist who is in charge of planning the session. “It’s an important, hot topic.”

A federal appeals court on Saturday temporarily blocked Biden administration rules issued last week by the U.S. Labor Department requiring many private employers to ensure workers are vaccinated or tested weekly for Covid-19. The Labor Department’s top legal adviser said the administration was confident in its authority to issue the mandate and prepared to defend the rules.

In the NIH-scheduled roundtable next month, Matthew Memoli, who runs a clinical studies unit within the NIH’s National Institute of Allergy and Infectious Diseases, will make the case against mandates. Dr. Memoli, 48 years old, opposes mandatory Covid-19 vaccination with currently available shots, and he has declined to be vaccinated.

“I think the way we are using the vaccines is wrong,” he said. In a July 30 email to Dr. Fauci and two of his lieutenants, Dr. Memoli called mandated vaccination “extraordinarily problematic.” He says one of Dr. Fauci’s colleagues thanked him for his email. Dr. Fauci and a NIAID spokeswoman declined to comment.

Dr. Memoli said he supports Covid-19 vaccination in high-risk populations including the elderly and obese. But he argues that with existing vaccines, blanket vaccination of people at low risk of severe illness could hamper the development of more-robust immunity gained across a population from infection.

 

 

Sunday, October 31, 2021

VAERS Is Supposedly Used By Public Health Officials To Detect Signals

The signals began ringing loudly in December 2020, when the first covid shots were administered, and quickly became deafening. They were that loud, yet their extraordinary magnitude has been, and continues to be, ignored by our government in Washington DC at all levels, all levels!

Senators this week demonstrated that they can put the heat on when they want to take something seriously; witness AG Merrick Garland.

However, despite the Loudoun rape fiasco, which senators used to slap Garland around, Garland the Magnificent remains in office. And his order directing the FBI to treat parents who complain about public school corruption as "domestic terrorists" remains in place, as far as I know.

Save for Wisconsin Sen. Ron Johnson, none of them have lifted a finger, while thousands upon tens of thousands are dead, permanently disabled, maimed, injured, or blinded; cancers that were in remission have come back, ditto herpes; there have been thousands of miscarriages, and who knows how many thousands of women are now permanently sterilized by these poisons sold as preventive medicine. Among many, many other injuries. And hospitals across the land fire skilled medical staff for saying anything about this grotesque bestiality. These are not hospitals; they are charnel houses!

What are these shots actually preventing, if 85% of those who died after a covid shot got the disease anyway?

Below is data indicating how out of control the covid shot injuries are, which also indicates the moral turpitude of Congress, Biden, and his men [and Trump’s advocacy of “warp speed” vax], et al.

Note what happens in Dec. 2020.

all adverse events reported to VAERS
Dec., 2019 3,455
Jan., 2020 3,082
Feb., 2020 2,986
Mar., 2020 2,232
Apr., 2020 2,022
May, 2020 1,946
Jun., 2020 1,844
Jul., 2020 2,186
Aug., 2020 2,961
Sep., 2020 4,576
Oct., 2020 6,265
Nov., 2020 4,510
Dec., 2020 15,594
Jan., 2021 70,266
Feb., 2021 57,719
Mar., 2021 78,168
Apr., 2021 105,689
May, 2021 63,606
Jun., 2021 44,649
Jul., 2021 36,000
Aug., 2021 103,533
Sep., 2021 49,428

after covid shot-only reports to VAERS
Dec., 2020 10,891
Jan., 2021 66,581
Feb., 2021 54,550
Mar., 2021 74,461
Apr., 2021 102,189
May, 2021 61,113
Jun., 2021 42,374
Jul., 2021 33,564
Aug., 2021 100,718
Sep., 2021 47,158
Oct., 2021 29,144
Total 622,743

In the 11 months preceding covid shot rollout, Jan — Nov 2020,
34,701 adverse events reported to VAERS — for ALL vaccines combined.

In the 11 months since, Dec 2020 to Oct 2021,
622,743 adverse events for covid-only shots

Nearly 18 times as many, or roughly 1,695% more.
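The before/after comparison can be rechecked with a short script, taking the monthly figures listed above as given. (Note that the listed 2020 months actually sum to 34,610, slightly below the 34,701 quoted in this post, which presumably reflects a different data pull.)

```python
# Recompute the pre-rollout vs. post-rollout VAERS totals
# from the monthly figures listed in this post.

# All-vaccine reports, Jan - Nov 2020 (before covid shot rollout)
pre_rollout = [3082, 2986, 2232, 2022, 1946, 1844,
               2186, 2961, 4576, 6265, 4510]

# Covid-shot-only reports, Dec 2020 - Oct 2021
covid_only = [10891, 66581, 54550, 74461, 102189, 61113,
              42374, 33564, 100718, 47158, 29144]

pre_total = sum(pre_rollout)      # sums to 34,610
covid_total = sum(covid_only)     # sums to 622,743
ratio = covid_total / pre_total   # roughly 18x

print(f"pre-rollout total: {pre_total:,}")
print(f"covid-only total:  {covid_total:,}")
print(f"ratio: {ratio:.1f}x ({(ratio - 1) * 100:,.0f}% more)")
```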

This is what vaccine failure looks like.

This is what government failure looks like.

Had the CDC and the US Food and Drug Administration been serious about adverse events, and in particular about the percentage of reports involving people who did or did not have covid, they would have ensured that this data was captured for each and every VAERS report submitted.

In particular, regarding VAERS reports in which death occurs after covid shots.

While 100% data on this may seem like pie in the sky, the least we should expect is the 95% vaccine effectiveness (VE) that Pfizer and Moderna claimed, which, as we now know, was base, rank propaganda and deception at best.

Irrespective of the fact that the actual VE of these poisons tends toward zero, one could at the very least expect the percentage of VAERS reports recording who did and did not test positive for this disease to have been, at a minimum, ~43%.

43% is the CDC's estimated VE for influenza shots, averaged over the previous decade (through the 2019/2020 flu season).

Instead, only 16.42% is actually reported in VAERS data bank. That’s bad, that’s really unconscionably bad.

VAERS data shows that of all the after covid shot deaths,
2.54 % reported “SARS-COV-2 TEST NEGATIVE”
13.88% reported “SARS-COV-2 TEST POSITIVE”

Where are the other 83.58% ???

Thus, only 16.42% of this essential data is actually, as of Oct. 29 data, known via VAERS.

Assuming these proportions are at least in the ball park, this means
~85% of after covid shot deaths tested positive
~15% tested negative.
[13.88/16.42 = 84.531, or 85% rounding; 100-84.531 = ~15%]

8,086 deaths after covid shot reported to VAERS × 21 (the assumed underreporting factor) ≈ 169,800 actual deaths.

169,800 × 0.845 ≈ 143,500 died with a positive test
169,800 × 0.155 ≈ 26,300 died with a negative test.

Total ≈ 169,800
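The extrapolation can be rechecked from the stated inputs. The 21× underreporting multiplier is this post's assumption (attributed to Rose), and the positive/negative split comes from the 13.88% / 2.54% VAERS percentages above; the exact products differ slightly from the rounded figures quoted in the post, since 8,086 × 21 = 169,806.

```python
# Recheck the death extrapolation using the inputs stated in this post.
reported_deaths = 8_086        # deaths after covid shot reported to VAERS
underreporting_factor = 21     # assumed multiplier (Rose)

pct_positive = 13.88           # % of VAERS deaths with a positive test
pct_negative = 2.54            # % with a negative test
pct_known = pct_positive + pct_negative  # 16.42% with any test result

est_total = reported_deaths * underreporting_factor  # 169,806

# Apportion the estimated total by the observed positive/negative split
share_positive = pct_positive / pct_known  # ~0.845
died_positive = est_total * share_positive
died_negative = est_total * (1 - share_positive)

print(f"estimated total deaths:  {est_total:,}")
print(f"died with positive test: {died_positive:,.0f}")  # roughly 143,500
print(f"died with negative test: {died_negative:,.0f}")  # roughly 26,300
```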

Rose says that these deaths are caused by covid shots.


Sunday, October 24, 2021

The Nazi Past Shaped Our Dystopian Present

Annie Jacobsen's Operation Paperclip Focused On Chemical, Biological, and Medical Weaponeers

historynewsnetwork |   The journalist Annie Jacobsen recently published Operation Paperclip: The Secret Intelligence Program that Brought Nazi Scientists to America (Little, Brown, 2014). Scouring the archives and unearthing previously undisclosed records, as well as drawing on earlier work, Jacobsen recounts in chilling detail a very peculiar effort on the part of the U.S. military to utilize the very scientists who had been essential to Hitler’s war effort. 

As I read your book I started thinking about the various Nazi-genre films such as The Boys from Brazil, The Odessa File, and Marathon Man — they all hold to a similar premise: key Nazis escape Germany after the war and plot in various ways to do bad things. Apparently truth is stranger than fiction. What was Operation Paperclip?

Operation Paperclip was a classified program to bring Nazi scientists to America right after World War II. It had, however, a benign public face. The war department had issued a press release saying that good German scientists would be coming to America to help out in our scientific endeavors.

But it was not benign at all, as seen in the character of Otto Ambros, a man who, as you explain, was keen on helping U.S. soldiers in matters of hygiene by offering them soap, soon after they had conquered Germany. Who was Ambros?

Otto Ambros I must say was one of the most dark-hearted characters that I wrote about in this book. He was Hitler’s favorite chemist, and I don’t say that lightly. I found a document in the National Archives, I don’t believe it had ever been revealed before, that showed that during the war Hitler gave Ambros a one million Reichsmark bonus for his scientific acumen. The reason was two-fold. Ambros worked on the Reich’s secret nerve agent program, but he also invented synthetic rubber, that was called buna. The reason rubber was so important — if you think about the Reich’s war-machine and how tanks need treads, aircraft need wheels — the Reich needed rubber. By inventing synthetic rubber, Ambros became Hitler’s favorite chemist.

Not only that: when the Reich decided to develop a factory at Auschwitz — the death camp complex comprised three territories: there was Auschwitz, there was Birkenau, and there was a third called Auschwitz III, also known as Monowitz-Buna — this third territory was where synthetic rubber was going to be manufactured, using prisoners who would be spared the gas chamber so long as they could be put to work, and who were most often worked to death by the Reich war machine. The general manager there at Auschwitz III was Otto Ambros. Ambros was one of the last individuals to leave Auschwitz, in the last days of January 1945, as the Russians were about to liberate the death camp. According to documents I located in Germany, Ambros was there destroying evidence right up until the very end.

After the war, Ambros was sought by the Allies and later found, interrogated and put on trial at Nuremberg, where he was convicted of mass-murder and slavery. He was sentenced to prison, but in the early 1950s as the Cold War became elevated he was given clemency by the U.S. High Commissioner John McCloy and released from prison. When he was sentenced, the Nuremberg judges took away all his finances, including that one million Reichsmark bonus from Hitler. When McCloy gave him clemency he also restored Otto Ambros’ finances, so he got back what was left of that money. He was then given a contract with the U.S. Department of Energy.

He actually came to work in the United States?

Otto Ambros remains one of the most difficult cases to crack in terms of Paperclip. While I was able to unearth some new and horrifying information about his postwar life, most of it remains, “lost or missing,” which I take to mean classified. We do know for a fact that Ambros came to the United States two, possibly three times. As a convicted war criminal traveling to the United States he would have needed special papers from the U.S. State Department. The State Department, however, informed me through the Freedom of Information Act that those documents are lost or missing.

Wednesday, September 08, 2021

The "Scientific Consensus" Is About To Rapidly Solidify Behind The "Political Mandates"

jonathan-cook  |  In some of these blogs I have been trying to gently highlight what should be a very obvious fact: that “the science” we are being constantly told to follow is not quite as scientific as is being claimed.

That is inevitable in the context of a new virus about which much is still not known. And it is all the more so given that our main response to the pandemic – vaccination – while being a relatively effective tool against the worst disease outcomes is nonetheless an exceedingly blunt one. Vaccines are the epitome of the one-size-fits-all approach of modern medicine.

Into the void between our scientific knowledge and our fear of mortality has rushed politics. It is a refusal to admit that “the science” is necessarily compromised by political and commercial considerations that has led to an increasingly polarised – and unreasonable – confrontation between what have become two sides of the Covid divide. Doubt and curiosity have been squeezed out by the bogus certainties of each faction.

All of this has been underscored by the latest decision of the Joint Committee on Vaccinations and Immunisation, the British government’s official advisory body on vaccinations. Unexpectedly, it has defied political pressure and demurred, for the time being at least, on extending the vaccination programme to children aged between 12 and 15.

The British government appears to be furious. Ministers who have been constantly demanding that we “follow the science” are reportedly ready to ignore the advice – or more likely, bully the JCVI into hastily changing its mind over the coming days.

Over the weekend, the vaccines minister, Nadhim Zahawi, even suggested, in a potentially radical overhaul of traditional ideas of medical consent, that doctors – and presumably schools – might soon be allowed to persuade children as young as 12 to get vaccinated against their parents’ wishes.

And liberal media outlets like the Guardian, which have been so careful until now to avoid giving a platform to “dissident” scientists, are suddenly subjecting the great and the good of the vaccination establishment to harsh criticism from doctors who want children vaccinated as quickly as possible.

Watching this confected “row” unfold, one thing is clear: “the science” is getting another political pummelling.

Sunday, July 25, 2021

With Benefit Of Hindsight Dr. Walter Freeman Couldn't Hold A Candle To Dr. Anthony Fauci...,

undark |  Walter Freeman was itching for a shortcut. Since the 1930s, the Washington, D.C. neurologist had been drilling through the skulls of psychiatric patients to scoop out brain chunks in the hopes of calming their mental torment. But Freeman decided he wanted something simpler than a bone drill — he wanted a rod-like implement that could pass directly through the eye socket to penetrate the brain. He’d then swirl the rod around to scramble the patient’s frontal lobes, the brain regions that control higher-level thinking and judgment.

Rummaging in his kitchen drawer, Freeman found the perfect tool: a sharp pick of the sort used to shear ice from large blocks. He knew his close colleague, surgeon James Watts, wouldn’t sanction his new approach, so he closed the office door and did his “ice-pick lobotomies” — more formally, transorbital lobotomies — without Watts’ knowledge. 

Though the amoral scientist has been a familiar trope since Victor Frankenstein, we seldom consider what sets these technicians on the path to iniquity. Journalist Sam Kean’s “The Icepick Surgeon: Murder, Fraud, Sabotage, Piracy, and Other Dastardly Deeds Perpetrated in the Name of Science,” helps fill that void, describing how dozens of promising scientists broke bad throughout history — and arguing that the better we understand their moral decay, the more prepared we’ll be to quash the next Freeman. “Understanding what good and evil look like in science — and the path from one to the other — is more vital than ever,” Kean writes. “Science has its own sins to answer for.”  

Expert at spinning historical science yarns — his last book, “The Bastard Brigade,” was about the failed Nazi atom bomb — Kean presents a scientific rogues’ gallery that’s both entertaining and chilling. Naturalist William Dampier, who influenced Charles Darwin’s work, resorted to piracy to fund his fieldwork in the 17th century. He joined a band of buccaneers that seized gems, scads of valuable silk, and stocks of perfume in raids throughout Central and South America.

A century later, celebrated Scottish surgeon John Hunter worked with grave robbers to obtain bodies so he could study human anatomy. His colleagues emulated his approach, and the pipeline from corpse-snatchers to anatomists continued for decades. The practice was tacitly accepted because it could yield valuable insights — Hunter discovered the tear ducts and the olfactory nerve, among other things — but the human toll was horrifying nonetheless. At public hangings, so-called sack-‘em-up men “sometimes even yanked people off the gibbet who weren’t quite dead yet,” Kean writes. “They’d merely passed out from lack of air — only to pop awake later on the dissection table.”

In a way, though, the gruesome endpoints Kean describes — the scrambled brains, the ransacked ships, the deathbeds — are the least interesting part of his story. They mostly confirm philosopher Simone Weil’s impression that real-world evil is “gloomy, monotonous, barren, boring.”

What’s more compelling is Kean’s take on how the scientists justified their actions. They pushed aside thoughts of collateral damage — the lives they disrespected and damaged — by rationalizing that their contributions outweighed any harm they were doing. Freeman’s work at an early 20th-century psychiatric asylum convinced him of the unalloyed good of calming agitated patients via lobotomy. “The ward could be brightened when curtains and flowerpots were no longer in danger of being used as weapons,” Freeman observed.

Thursday, May 13, 2021

Smart, Scientifically Literate Folks Can Interpret Data For Themselves And Disagree With "Experts"

arxiv |  Controversial understandings of the coronavirus pandemic have turned data visualizations into a battleground. Defying public health officials, coronavirus skeptics on US social media spent much of 2020 creating data visualizations showing that the government’s pandemic response was excessive and that the crisis was over. This paper investigates how pandemic visualizations circulated on social media, and shows that people who mistrust the scientific establishment often deploy the same rhetorics of data-driven decision-making used by experts, but to advocate for radical policy changes. Using a quantitative analysis of how visualizations spread on Twitter and an ethnographic approach to analyzing conversations about COVID data on Facebook, we document an epistemological gap that leads pro- and anti-mask groups to draw drastically different inferences from similar data. Ultimately, we argue that the deployment of COVID data visualizations reflects a deeper sociopolitical rift regarding the place of science in public life.

This paper has investigated anti-mask counter-visualizations on social media in two ways: quantitatively, we identify the main types of visualizations that are present within different networks (e.g., pro- and anti-mask users), and we show that anti-mask users are prolific and skilled purveyors of data visualizations. These visualizations are popular, use orthodox visualization methods, and are promulgated as a way to convince others that public health measures are unnecessary. In our qualitative analysis, we use an ethnographic approach to illustrate how COVID counter-visualizations actually reflect a deeper epistemological rift about the role of data in public life, and that the practice of making counter-visualizations reflects a participatory, heterodox approach to information sharing. Convincing anti-maskers to support public health measures in the age of COVID-19 will require more than “better” visualizations, data literacy campaigns, or increased public access to data. Rather, it requires a sustained engagement with the social world of visualizations and the people who make or interpret them.

While academic science is traditionally a system for producing knowledge within a laboratory, validating it through peer review, and sharing results within subsidiary communities, anti-maskers reject this hierarchical social model. They espouse a vision of science that is radically egalitarian and individualist. This study forces us to see that coronavirus skeptics champion science as a personal practice that prizes rationality and autonomy; for them, it is not a body of knowledge certified by an institution of experts. Calls for data or scientific literacy therefore risk recapitulating narratives that anti-mask views are the product of individual ignorance rather than coordinated information campaigns that rely heavily on networked participation. 

Recognizing the systemic dynamics that contribute to this epistemological rift is the first step towards grappling with this phenomenon, and the findings presented in this paper corroborate similar studies about the impact of fake news on American evangelical voters [98] and about the limitations of fact-checking climate change denialism [42]. Calls for media literacy—especially as an ethics smokescreen to avoid talking about larger structural problems like white supremacy—are problematic when these approaches are deficit-focused and trained primarily on individual responsibility. Powerful research and media organizations paid for by the tobacco or fossil fuel industries [79, 86] have historically capitalized on the skeptical impulse that the “science simply isn’t settled,” prompting people to simply “think for themselves” to horrifying ends. The attempted coup on January 6, 2021 has similarly illustrated that well-calibrated, well-funded systems of coordinated disinformation can be particularly dangerous when they are designed to appeal to skeptical people. While individual insurrectionists are no doubt to blame for their own acts of violence, the coup relied on a collective effort fanned by people questioning, interacting, and sharing these ideas with other people. These skeptical narratives are powerful because they resonate with these people’s lived experience and—crucially—because they are posted by influential accounts across influential platforms. Broadly, the findings presented in this paper also challenge conventional assumptions in human-computer interaction research about who imagined users might be: visualization experts traditionally design systems for scientists, business analysts, or journalists. 

Researchers create systems intended to democratize processes of data analysis and inform a broader public about how to use data, often in the clean, sand-boxed environment of an academic lab. However, this literature often focuses narrowly on promoting expressivity (either of current or new visualization techniques), assuming that improving visualization tools will lead to improving public understanding of data. This paper presents a community of users that researchers might not consider in the systems-building process (i.e., supposedly “data illiterate” anti-maskers), and we show how the binary opposition of literacy/illiteracy is insufficient for describing how orthodox visualizations can be used to promote unorthodox science. Understanding how these groups skillfully manipulate data to undermine mainstream science requires us to adjust the theoretical assumptions in HCI research about how data can be leveraged in public discourse. What, then, are visualization researchers and social scientists to do? One step might be to grapple with the social and political dimensions of visualizations at the beginning, rather than the end, of projects [31]. This involves in part a shift from positivist to interpretivist frameworks in visualization research, where we recognize that the knowledge we produce in visualization systems is fundamentally “multiple, subjective, and socially constructed” [73]. A secondary issue is one of uncertainty: Jessica Hullman and Zeynep Tufekci…


Thursday, April 08, 2021

Luc Montagnier: mRNA Therapeutic A "Sorcerer's Apprentice" With Heritable Consequences..,

francesoir  |  In a letter dated March 21, 2021, published on the Nakim.org website, Professor Montagnier, Nobel Prize winner in medicine, supports the request by Dr Seligmann and engineer Haim Yativ, addressed to the judges of the Supreme Court of the State of Israel, for the suspension of vaccination against Covid-19.

This letter is in support of the petition for the suspension of vaccination against covid-19 which was presented to you by Messrs. Yativ and Seligmann.

I am Luc Montagnier, doctor of medicine, professor emeritus at the Institut Pasteur in Paris, director of research emeritus at CNRS, Nobel Prize in physiology or medicine for the discovery of the AIDS virus.

I am an expert in virology, having devoted a large part of my research to RNA viruses, in particular mouse encephalomyocarditis, Rous sarcoma virus, HIV 1 and HIV 2 virus.

Considerable effort has been devoted to vaccination against the coronavirus covid-19, responsible for a global pandemic. In particular, the State of Israel has organized a mass vaccination of its population: so far, 49% of its total population has received two doses of the Pfizer vaccine. 

First of all, I would like to stress the novelty of this type of vaccine. 

  • In conventional vaccines, the genetic information carried by viral DNA or RNA is inactivated and virus proteins are used to induce vaccine antibodies. In some cases, the virus remains alive, but is attenuated by successive passages in vitro. 
  • In the case of so-called messenger RNA vaccines, these vaccines are made from an active fraction of the virus's RNA, which is injected into the vaccinated person. It penetrates that person's cells, which then manufacture the vaccine proteins from the code of the injected RNA.
    We immediately see that the success of this last step depends greatly on the physiological state of the recipient.

I would like to summarize the potential dangers of these vaccines in a mass vaccination policy.

1. Short-term side effects: these are not the normal local reactions found with any vaccination, but serious reactions that are life-threatening to the recipient, such as anaphylactic shock linked to a component of the vaccine mixture, severe allergies, or an autoimmune reaction up to cell aplasia.

2. Lack of vaccine protection:

2.1  induction of facilitating antibodies  - the induced antibodies do not neutralize a viral infection, but on the contrary facilitate it depending on the recipient. The latter may have already been exposed to the virus asymptomatically. A low level of naturally induced antibodies may compete with the antibodies induced by the vaccine.

2.2 The production of antibodies induced by vaccination in a population highly exposed to the virus will lead to the selection of variants resistant to these antibodies. These variants can be more virulent or more transmissible. This is what we are seeing now: an endless virus-vaccine race that will always turn to the advantage of the virus.

3. Long-term effects: Contrary to the claims of the manufacturers of messenger RNA vaccines, there is a risk of integration of viral RNA into the human genome. Indeed, each of our cells harbors endogenous retroviruses with the ability to reverse-transcribe RNA into DNA. Although this is a rare event, its passage into the DNA of germ cells and its transmission to future generations cannot be excluded.

“Faced with an unpredictable future, it is better to abstain.”

Professor Luc Montagnier
