
Thursday, December 03, 2020

With COVID-19 Americans Don't Know The Difference Between Science And Scientism

tomluongo |  And with COVID-19 we’ve reached the height of this practice of imbuing scientists with a god-like knowledge of what we should do given any thorny political problem.

That’s why pseudo-intellectuals and midwits in white suburbia bought into the lies of Anthony Fauci, while ignoring his flip-flopping and that of the CDC, the WHO, and every other ‘expert.’

This science worship lets people neatly bypass politicians they don’t like and support whatever argument they already want to believe. It doesn’t matter that it’s now just as much a religion as Christianity or Islam.

If the high priest of ‘science’ says masks are necessary on Tuesdays but not Thursdays, then his followers simply go along with it, because the alternative is admitting that their priests are just hucksters with fancy government titles.

It also absolves people of the responsibility of making the hard decisions. The experts have all that worked out.

Which brings me to what actually started this blog post.

One of these true high priests of ‘scientism,’ the straight-out-of-central-casting Neil deGrasse Tyson, opined recently on RT about how disappointed he was that humanity did not come together over COVID-19.

“I thought that when the coronavirus landed that we would’ve all banded together and say: ‘We’re all human and that’s a common enemy, like an alien invasion. We’ve all seen it in the movies. We got to be together on this one.’ But it didn’t happen to my great disappointment in our species.”

At this late date, for a guy like Mr. Tyson to go on thinking COVID-19 was as much an existential threat to humanity as an alien invasion is really stunning.

I thought this guy was supposed to be smart? Like really smart? 

He goes on further:

“I don’t mind political fights. Political fights are fine when you’re talking about policy and legislation. But you should never have a political fight about…scientific research that has been objectively shown to be true in peer-reviewed journals,” Tyson said, adding that doing so is a “recipe for disaster.”

Now, this I somewhat agree with, which is why I consider this more like Coronapocalypse: The Movie than a true existential threat to humanity requiring the kind of policy decisions that sparked this political fight he’s crying crocodile tears over.

Because, and I’m sure Mr. Tyson would agree with this if he were a scientist, there is little “…scientific research that has been objectively shown to be true in peer-reviewed journals…” about COVID-19 which has been properly discussed in the public sphere.

And yet very polarizing policies are in place depriving people not only of their rights, about which he seems cavalier, but also of their future prosperity.

The ‘science’ has been used by governments to assume a level of control over our movements and activities far beyond the scope of what the ‘science’ has actually shown. And when the science isn’t settled, shouldn’t we fall back on first principles and minimize human suffering along all vectors, not just the one variable, virus transmission, that we think we’re controlling, especially when for most people the survival rate is greater than 99.9%?

And even this position undermines the basic framework of human rights by placing a cost/benefit analytic overlay on society, giving the social engineers more credit than they deserve.

 

Monday, November 23, 2020

Lockdowns Are Politics Masquerading As "Science" And "Morality"

unz  |  Nearly one year into the COVID pandemic, even a modicum of critical thinking should tell us that lockdown politics as practiced in the United States is an unmitigated disaster, and with no end in sight. The reference here to lockdown politics is meant to signify a particularly assaultive, tyrannical set of government policies that in less than a year have brought severe harm to millions, more likely tens of millions of Americans and others across the world. Sadly, a Joe Biden presidency is only bound to aggravate this already intolerable repression and misery.

One grievous problem with the lockdown mania is that by obsessively fixating on the virus, a power-mad elite has ignored what must drive any public intervention: the need for a comprehensive, detailed cost-benefit analysis informing social policy. Stale rhetoric about “following the science” turns out to be one-dimensional and useless, yet it remains a justification for continuing mass shutdowns in state after state. The worst consequences include millions of lost jobs and businesses, escalating poverty, record numbers of bankruptcies, educational chaos, new health crises, a sharp rise in addictions, and myriad psychological problems.

Meanwhile, it has become abundantly clear that lockdown rules – the very rules overlooked at times of street demonstrations and upheavals – apply only to Trump supporters, the great “super-spreaders”, wherever they gather. Those arbitrary directives have been cynically used by Democratic governors, mayors, and their health czars as a dictatorial political weapon – in part to bolster their own power, in part to subvert Trump’s second presidential run. For them, the pandemic is welcomed as a godsend, to be leveraged for a “global reset” on the road to maximum power, an incipient fascism. What we have here is what C. Wright Mills long ago called the “higher immorality” in his classic The Power Elite.

Entirely predictable fallouts from months of destructive lockdowns were recently acknowledged by even the staid World Health Organization, which urged a worldwide end to the shutdowns – a message, however, never processed by an insular political/medical/media establishment in the U.S. The WHO projects a future of intensified global poverty, food insecurity, disease spread, and other health crises so long as the lockdown remains in place. Food-supply chains have already been harshly disrupted from the combined effects of COVID and harmful government controls. What leading Democrats such as Berkeley professor Robert Reich and California Governor Gavin Newsom commonly (and senselessly) refer to as an “inconvenience” will, as WHO leaders stress, bring added impoverishment to possibly hundreds of millions of people in lesser-developed nations already trapped in endless cycles of social misery. Such damage scarcely registers across the corporate media, where the horrors are casually written off as “collateral damage”.

The WHO warning has been reaffirmed by thousands of medical professionals and scientists aligned with the “Great Barrington Declaration” – a well-grounded denunciation of the lockdown politics that retains a dogmatic hold on Biden and the Dems. The “Declaration” was orchestrated by three world-respected scientists: Jay Bhattacharya of Stanford, Martin Kulldorff of Harvard, and Sunetra Gupta of Oxford. Their message, drawn from a painstaking assemblage of international research, is clear and urgent – end draconian restrictions in favor of “focused protection”, which sensibly allows those (the vast majority) at minimum risk of extreme sickness to return to normal social lives. Those least threatened (under age 50) have a 99.98 percent likelihood of surviving any bout with COVID – less risky than the ordinary flu. The “Barrington” scientists urge a shift toward what in fact has been the historical norm for virus mitigation: policies taking into account the full range of economic and social as well as medical factors, logically necessary to curtail the amount of total harm.

The nonstop political/media fearmongering behind mass shutdowns assumes, wrongly, that this particular virus (unlike most others) can somehow be banished from human existence, never to return. They further believe, against all logic and experience, that lockdowns must be imposed until a vaccine is discovered and administered (by mandate?) to entire populations, the ostensible goal being some type of general immunity. Generally forgotten is the poor efficacy of so many vaccines that are promoted as uniform remedies. In fact a vaccine has long been available for influenza, yet the success rate hovers between 20 and 60 percent while hundreds of thousands of people die yearly (roughly 650,000 on average) across the world from that stubborn virus.

The lofty medical experts have little to say, moreover, about the state of public health in general. In the U.S., deaths for 2018 totaled nearly three million, with heart disease (655,000) and cancer (600,000) topping the list. What particularly stands out, however, are the mortality levels for all respiratory diseases, including influenza and pneumonia (both viral and bacterial): roughly 220,000, close to the yearly average and little more than the current COVID death toll. Never in 2018 nor at any time in the past has any government, health, or media figure called for mass lockdowns to either “flatten the curve” or “destroy the virus” in response to such health challenges. Not even a murmur in that direction, much less moral panic.

Thursday, June 14, 2018

Time and its Structure (Chronotopology)


intuition |  MISHLOVE: I should mention here, since you've used the term, that chronotopology is the name of the discipline which you founded, which is the study of the structure of time. 

MUSES: Do you want me to comment on that? 

MISHLOVE: Yes, please. 

MUSES: In a way, yes, but in a way I didn't found it. I was thinking cybernetics, for instance, was started formally by Norbert Wiener, but it began with the toilet tank that controlled itself. When I was talking with Wiener at Ravello, he happily agreed with this. 

MISHLOVE: The toilet tank. 

MUSES: He says, "Oh yes." The self-shutting-off toilet tank is the first cybernetic advance of mankind. 

MISHLOVE: Oh. And I suppose chronotopology has an illustrious beginning like this also. 

MUSES: Well, better than the toilet tank, actually. It has a better beginning than cybernetics. 

MISHLOVE: In effect, does it go back to the study of the ancient astrologers? 

MUSES: Well, it goes back to the study of almost all traditional cultures. The word astronomia, even the word mathematicus, meant someone who studied the stars, and in Kepler's sense they calculated the positions to know the qualities of time. But that's an independent hypothesis. The hypothesis of chronotopology is whether you have pointers of any kind -- ionospheric disturbances, planetary orbits, or whatnot -- independently of those pointers, time itself has a flux, has a wave motion, the object being to surf on time. 

MISHLOVE: Now, when you talk about the wave motion of time, I'm getting real interested and excited, because in quantum physics there's this notion that the underlying basis for the physical universe are these waves, even probability waves -- nonphysical, nonmaterial waves -- sort of underlying everything. 

MUSES: Very, very astute, because these waves are standing waves. Actually the wave-particle so-called paradox isn't that bad, when you consider that a particle is a wave packet, a packet of standing waves. That's why an electron can go through a plate and leave wavelike things. Actually our bodies are like fountains. The fountain has a shape only because it's being renewed every minute, and our bodies are being renewed. So we are standing waves; we're no exception. 

MISHLOVE: This deep structure of matter, where we can say what we really are in our bodies is not where we appear to be -- you're saying the same thing is true of time. It's not quite what it appears to be. 

MUSES: No, we're a part of this wave structure, and matter and energy all occur in waves, and time is the master control. I will give you an illustration of that. If you'll take a moment of time, this moment cuts through the entire physical universe as we're talking. It holds all of space in itself. But one point of space doesn't hold all of time. In other words, time is much bigger than space. 

MISHLOVE: That thought sort of made me gasp a second -- all of physical space in each now moment -- 

MUSES: Is contained in a point of time, which is a moment. And of course, a line of time is then an occurrence, and a wave of time is a recurrence. And then if you get out from the circle of time, which Nietzsche saw, the eternal recurrence -- if you break that, as we know we do, we develop, and then we're on a helix, because we come around but it's a little different each time. 

MISHLOVE: Well, now you're beginning to introduce the notion of symbols -- point, line, wave, helix, and so on. 

MUSES: Yes, the dimensions of time. 

MISHLOVE: One of the interesting points that you seem to make in your book is that symbols themselves -- words, pictures -- point to the deeper structure of things, including the deeper structure of time. 

MUSES: Yes. Symbols I would regard as pointers to their meanings, like revolving doors. There are some people, however, who have spent their whole lives walking in the revolving door and never getting out of it. 

Time and its Structure (Chronotopology)
Foreword by Charles A. Muses to "Communication, Organization, And Science" by Jerome Rothstein - 1958 

Your Genetic Presence Through Time


counterpunch |  The propagation through time of your personal genetic presence within the genetic sea of humanity can be visualized as a wave that arises out of the pre-conscious past before your birth, moves through the streaming present of your conscious life, and dissipates into the post-conscious future after your death.

You are a pre-conscious genetic concentration drawn out of the genetic diffusion of your ancestors. If you have children who survive you then your conscious life is the time of increase of your genetic presence within the living population. Since your progeny are unlikely to reproduce exponentially, as viruses and bacteria do, your post-conscious genetic presence is only a diffusion to insignificance within the genetic sea of humanity.

During your conscious life, you develop a historical awareness of your pre-conscious past, with a personal interest that fades with receding generations. Also during your conscious life, you can develop a projective concern about your post-conscious future, with a personal interest that fades with succeeding generations and with increasing predictive uncertainty.

Your conscious present is the sum of: your immediate conscious awareness, your reflections on your prior conscious life, your historical awareness of your pre-conscious past, and your concerns about your post-conscious future.

Your time of conscious present becomes increasingly remote in the historical awareness of your succeeding generations.

Your loneliness in old age is just your sensed awareness of your genetic diffusion into the living population of your conscious present and post-conscious future.

Tuesday, June 12, 2018

"Privacy" Isn't What's Really At Stake...,


NewYorker  |  The question about national security and personal convenience is always: At what price? What do we have to give up? On the criminal-justice side, law enforcement is in an arms race with lawbreakers. Timothy Carpenter was allegedly able to orchestrate an armed-robbery gang in two states because he had a cell phone; the law makes it difficult for police to learn how he used it. Thanks to lobbying by the National Rifle Association, federal law prohibits the National Tracing Center from using a searchable database to identify the owners of guns seized at crime scenes. Whose privacy is being protected there?

Most citizens feel glad for privacy protections like the one in Griswold, but are less invested in protections like the one in Katz. In “Habeas Data,” Farivar analyzes ten Fourth Amendment cases; all ten of the plaintiffs were criminals. We want their rights to be observed, but we also want them locked up.

On the commercial side, are the trade-offs equivalent? The market-theory expectation is that if there is demand for greater privacy then competition will arise to offer it. Services like Signal and WhatsApp already do this. Consumers will, of course, have to balance privacy with convenience. The question is: Can they really? The General Data Protection Regulation went into effect on May 25th, and privacy-advocacy groups in Europe are already filing lawsuits claiming that the policy updates circulated by companies like Facebook and Google are not in compliance. How can you ever be sure who is eating your cookies?

Possibly the discussion is using the wrong vocabulary. “Privacy” is an odd name for the good that is being threatened by commercial exploitation and state surveillance. Privacy implies “It’s nobody’s business,” and that is not really what Roe v. Wade is about, or what the E.U. regulations are about, or even what Katz and Carpenter are about. The real issue is the one that Pollak and Martin, in their suit against the District of Columbia in the Muzak case, said it was: liberty. This means the freedom to choose what to do with your body, or who can see your personal information, or who can monitor your movements and record your calls—who gets to surveil your life and on what grounds.

As we are learning, the danger of data collection by online companies is not that they will use it to try to sell you stuff. The danger is that that information can so easily fall into the hands of parties whose motives are much less benign. A government, for example. A typical reaction to worries about the police listening to your phone conversations is the one Gary Hart had when it was suggested that reporters might tail him to see if he was having affairs: “You’d be bored.” They were not, as it turned out. We all may underestimate our susceptibility to persecution. “We were just talking about hardwood floors!” we say. But authorities who feel emboldened by the promise of a Presidential pardon or by a Justice Department that looks the other way may feel less inhibited about invading the spaces of people who belong to groups that the government has singled out as unpatriotic or undesirable. And we now have a government that does that. 


Smarter ____________ WILL NOT Take You With Them....,



nautilus  |  When it comes to artificial intelligence, we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us. In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after [1]. But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated.

AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.

But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.
It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability.

The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.
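The “100 standard deviations” figure falls out of simple additive-model arithmetic. Below is a minimal back-of-envelope sketch, assuming 10,000 causal loci of equal effect at allele frequency 0.5 and treating the genetic standard deviation as the population standard deviation; the numbers are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope check of the "~100 standard deviations" claim under a
# purely additive model. The inputs (10,000 causal loci, allele frequency
# 0.5, equal effects) are illustrative assumptions.
import math

n_loci = 10_000  # assumed number of causal variants
p = 0.5          # assumed frequency of each IQ-raising allele

# Each person carries 2 * n_loci alleles; the count of "plus" alleles is
# roughly binomial: mean 2*n_loci*p, variance 2*n_loci*p*(1-p).
mean_plus = 2 * n_loci * p
sd_plus = math.sqrt(2 * n_loci * p * (1 - p))

# Fixing every locus to the plus allele moves you from the mean to the maximum.
gain_in_sd = (2 * n_loci - mean_plus) / sd_plus
print(f"Gain if all loci were improved: {gain_in_sd:.0f} population SDs")
# ~141 SDs with these inputs; at ~15 IQ points per SD, that is the origin
# of headline figures like "IQ over 1,000".
```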

 

Proposed Policies For Advancing Embryonic Cell Germline-Editing Technology


niskanencenter |  In a previous post, I touched on the potential social and ethical consequences that will likely emerge in the wake of Dr. Shoukhrat Mitalipov’s recent experiment in germline-edited embryos. The short version: there’s probably no stopping the genetic freight train. However, there are steps we can take to minimize the potential costs, while capitalizing on the many benefits these advancements have to offer us. In order to do that, however, we need to turn our attention away from hyperbolic rhetoric of “designer babies” and focus on the near-term practical considerations—mainly, how we will govern the research, development, and application of these procedures.
Before addressing the policy concerns, however, it’s important to understand the fundamentals of what is being discussed in this debate. In the previous blog, I noted the difference between somatic cell editing and germline editing—one of the major ethical fault lines in this issue space. In order to have a clear perspective on the future possibilities and current limitations of genetic modification, let’s briefly examine how CRISPR actually works in practice. 

CRISPR stands for “clustered regularly interspaced short palindromic repeats”—a reference to segments of DNA that function as a defense used by bacteria to ward off foreign infections. That defense system essentially targets specific patterns of DNA in a virus, bacteria, or other threat and destroys it. This approach uses Cas9—an RNA-guided protein—to search through a cell’s genetic material until it finds a genetic sequence that matches the sequence programmed into its guide RNA. Once it finds its target, the protein cuts both strands of the DNA helix. Repair enzymes can then heal the gap in the broken DNA, or the gap can be filled using new genetic information introduced into the sequence. Conceptually, we can think of CRISPR as the geneticist’s variation of a “surgical laser knife, which allows a surgeon to cut out precisely defective body parts and replace them with new or repaired ones.”
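The targeting step described above (scan until the guide’s sequence matches, adjacent to a PAM motif) can be made concrete with a toy sketch. This is conceptual only: the sequences are invented, the spacer is shorter than the real 20 nucleotides, and real Cas9 tolerates some mismatches.

```python
# Toy model of CRISPR/Cas9 target search: find positions where a guide RNA's
# spacer sequence matches the DNA and is immediately followed by an NGG PAM
# (the motif SpCas9 requires next to its target). Sequences are invented.

def find_cas9_sites(genome: str, spacer: str) -> list[int]:
    """Return start positions where `spacer` matches and an NGG PAM follows."""
    hits = []
    for i in range(len(genome) - len(spacer) - 2):
        target = genome[i : i + len(spacer)]
        pam = genome[i + len(spacer) : i + len(spacer) + 3]
        if target == spacer and pam[1:] == "GG":  # the N in NGG can be any base
            hits.append(i)  # Cas9 cuts a few bp upstream of the PAM
    return hits

genome = "ACGTTGATCCGATTACAGGTTTGGAAC"
spacer = "GATCCGATTACAGGTT"  # toy 16-nt spacer; real SpCas9 spacers are 20 nt
print(find_cas9_sites(genome, spacer))  # [5]
```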

The technology is still cutting edge, and most researchers are still trying to get a handle on the technical difficulties associated with its use. Right now, we’re still in the Stone Age of genetic research. Even though we’ve made significant advancements in recent years, we’re still a long, long way from editing the IQs of our children on demand. That technology is much further in the future, and some doubt that we’ll ever be able to “program” inheritable traits into our individual genomes. In short, don’t expect any superhumanly intelligent, disease-resistant super soldiers any time soon.

The Parallels Between Artificial Intelligence and Genetic Modification
Few technologies inspire as much fantastical embellishment in popular media as genetic modification. In fact, the only technology that comes close—and indeed, actually parallels the rhetoric quite closely—is artificial intelligence (AI).

Monday, June 11, 2018

Office 365 CRISPR Editing Suite


gizmodo |  The gene-editing technology CRISPR could very well one day rid the world of its most devastating diseases, allowing us to simply edit away the genetic code responsible for an illness. One of the things standing in the way of turning that fantasy into reality, though, is the problem of off-target effects. Now Microsoft is hoping to use artificial intelligence to fix this problem. 

You see, CRISPR is fawned over for its precision. More so than earlier genetic technologies, it can accurately target and alter a tiny fragment of genetic code. But it’s still not always as accurate as we’d like it to be. Estimates of how often this happens vary, but at least some of the time, CRISPR makes changes to DNA it was intended to leave alone. Depending on what those changes are, they could inadvertently result in new health problems, such as cancer.

Scientists have long been working on ways to fine-tune CRISPR so that fewer of these unintended effects occur. Microsoft thinks that artificial intelligence might be one way to do it. Working with computer scientists and biologists from research institutions across the U.S., the company has developed a new tool called Elevation that predicts off-target effects when editing genes with CRISPR. 

It works like this: If a scientist is planning to alter a specific gene, they enter its name into Elevation. The CRISPR system is made up of two parts, a protein that does the cutting and a synthetic guide RNA designed to match a DNA sequence in the gene they want to edit. Different guides can have different off-target effects depending on how they are used. Elevation will suggest which guide is least likely to result in off-target effects for a particular gene, using machine learning to figure it out. It also provides general feedback on how likely off-target effects are for the gene being targeted. The platform bases its learning both on Microsoft research and publicly available data about how different genetic targets and guides interact. 
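As a rough sketch of that guide-selection workflow, the snippet below ranks candidate guides by a predicted off-target risk and returns the safest one. The scoring function is a crude stand-in for a trained model like Elevation, not Microsoft’s actual API.

```python
# Hypothetical guide-ranking workflow: score each candidate guide RNA for
# off-target risk and pick the lowest-risk one. `predict_offtarget_risk` is
# a placeholder; real tools use machine-learned models trained on measured
# guide/target interactions, not a GC-content heuristic like this.

def predict_offtarget_risk(guide: str) -> float:
    gc = (guide.count("G") + guide.count("C")) / len(guide)
    return abs(gc - 0.5)  # toy proxy: penalize extreme GC content

def best_guide(candidates: list[str]) -> str:
    """Return the candidate guide with the lowest predicted off-target risk."""
    return min(candidates, key=predict_offtarget_risk)

guides = [
    "GACGTAGCTAGCTAGGCTAG",
    "GGGGCCCCGGGGCCCCGGGG",
    "GATTACAGATTACAGATTAC",
]
print(best_guide(guides))  # GACGTAGCTAGCTAGGCTAG
```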

The work is detailed in a paper published Wednesday in the journal Nature Biomedical Engineering. The tool is publicly available for researchers to use for free. It works alongside a tool released by Microsoft in 2016 called Azimuth that predicts on-target effects.

There is lots of debate over how problematic the off-target effects of CRISPR really are, as well as over how to fix them. Microsoft’s new tool, though, will certainly be a welcome addition to the toolbox. Over the past year, Microsoft has doubled down on efforts to use AI to attack health care problems.

Who Will Have Access To Advanced Reproductive Technology?



futurism |  In November 2017, a baby named Emma Gibson was born in the state of Tennessee. Her birth, to a 25-year-old woman, was fairly typical, but one aspect made her story unique: she was conceived 24 years prior from anonymous donors, when Emma’s mother was just a year old.  The embryo had been frozen for more than two decades before it was implanted into her mother’s uterus and grew into the baby who would be named Emma.

Most media coverage hailed Emma’s birth as a medical marvel, an example of just how far reproductive technology has come in allowing people with fertility issues to start a family.

Yet, the news held a small detail that gave others pause. The organization that provided baby Emma’s embryo to her parents, the National Embryo Donation Center (NEDC), has policies that state they will only provide embryos to married, heterosexual couples, in addition to several health requirements. Single women and non-heterosexual couples are not eligible.

In other industries, this policy would be labeled discriminatory. But for reproductive procedures in the United States, such a policy is completely legal. Because insurers do not consider reproductive procedures to be medically necessary, the U.S. is one of the few developed nations without formal regulations or ethical requirements for fertility medicine. This loose legal climate also gives providers the power to provide or deny reproductive services at will.

The future of reproductive technology has many excited about its potential to allow biological birth for those who might not otherwise have been capable of it. Experiments going on today, such as testing functional 3D-printed ovaries and incubating animal fetuses in artificial wombs, seem to suggest that future is well on its way, that fertility medicine has already entered the realm of what was once science fiction.

Yet, who will have access to these advances? Current trends seem to suggest that this will depend on the actions of regulators and insurance agencies, rather than the people who are affected the most.

Cognitive Enhancement In The Context Of Neurodiversity And Psychopathology


ama-assn |  In the basement of the Bureau International des Poids et Mesures (BIPM) headquarters in Sèvres, France, a suburb of Paris, there lies a piece of metal that has been secured since 1889 in an environmentally controlled chamber under three bell jars. It represents the world standard for the kilogram, and all other kilo measurements around the world must be compared and calibrated to this one prototype. There is no such standard for the human brain. Search as you might, there is no brain that has been pickled in a jar in the basement of the Smithsonian Museum or the National Institutes of Health or elsewhere in the world that represents the standard to which all other human brains must be compared. Given that this is the case, how do we decide whether any individual human brain or mind is abnormal or normal? To be sure, psychiatrists have their diagnostic manuals. But when it comes to mental disorders, including autism, dyslexia, attention deficit hyperactivity disorder, intellectual disabilities, and even emotional and behavioral disorders, there appears to be substantial uncertainty concerning when a neurologically based human behavior crosses the critical threshold from normal human variation to pathology.

A major cause of this ambiguity is the emergence over the past two decades of studies suggesting that many disorders of the brain or mind bring with them strengths as well as weaknesses. People diagnosed with autism spectrum disorder (ASD), for example, appear to have strengths related to working with systems (e.g., computer languages, mathematical systems, machines) and in experiments are better than control subjects at identifying tiny details in complex patterns [1]. They also score significantly higher on the nonverbal Raven’s Matrices intelligence test than on the verbal Wechsler Scales [2]. A practical outcome of this new recognition of ASD-related strengths is that technology companies have been aggressively recruiting people with ASD for occupations that involve systemizing tasks such as writing computer manuals, managing databases, and searching for bugs in computer code [3].

Valued traits have also been identified in people with other mental disorders. People with dyslexia have been found to possess global visual-spatial abilities, including the capacity to identify “impossible objects” (of the kind popularized by M. C. Escher) [4], process low-definition or blurred visual scenes [5], and perceive peripheral or diffused visual information more quickly and efficiently than participants without dyslexia [6]. Such visual-spatial gifts may be advantageous in jobs requiring three-dimensional thinking such as astrophysics, molecular biology, genetics, engineering, and computer graphics [7, 8]. In the field of intellectual disabilities, studies have noted heightened musical abilities in people with Williams syndrome, the warmth and friendliness of individuals with Down syndrome, and the nurturing behaviors of persons with Prader-Willi syndrome [9, 10]. Finally, researchers have observed that subjects with attention deficit hyperactivity disorder (ADHD) and bipolar disorder display greater levels of novelty-seeking and creativity than matched controls [11-13].

Such strengths may suggest an evolutionary explanation for why these disorders are still in the gene pool. A growing number of scientists are suggesting that psychopathologies may have conferred specific evolutionary advantages in the past as well as in the present [14]. The systemizing abilities of individuals with autism spectrum disorder might have been highly adaptive for the survival of prehistoric humans. As autism activist Temple Grandin, who herself has autism, surmised: “Some guy with high-functioning Asperger’s developed the first stone spear; it wasn’t developed by the social ones yakking around the campfire” [15].

Sunday, June 10, 2018

Cognitive Enhancement of Other Species?



singularityhub |  Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.

Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans. 

Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced. 

The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
 
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some of which are well below our own capabilities and others which are superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.

Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.

 

Saturday, June 09, 2018

Genetics in the Madhouse: The Unknown History of Human Heredity


nature  |  Who founded genetics? The line-up usually numbers four. William Bateson and Wilhelm Johannsen coined the terms genetics and gene, respectively, at the turn of the twentieth century. In 1910, Thomas Hunt Morgan began showing genetics at work in fruit flies (see E. Callaway Nature 516, 169; 2014). The runaway favourite is generally Gregor Mendel, who, in the mid-nineteenth century, crossbred pea plants to discover the basic rules of heredity.

Bosh, says historian Theodore Porter. These works are not the fount of genetics, but a rill distracting us from a much darker source: the statistical study of heredity in asylums for people with mental illnesses in late-eighteenth- and early-nineteenth-century Britain, wider Europe and the United States. There, “amid the moans, stench, and unruly despair of mostly hidden places where data were recorded, combined, and grouped into tables and graphs”, the first systematic theory of mental illness as hereditary emerged.

For more than 200 years, Porter argues in Genetics in the Madhouse, we have failed to recognize this wellspring of genetics — and thus to fully understand this discipline, which still dominates many individual and societal responses to mental illness and diversity.

The study of heredity emerged, Porter argues, not as a science drawn to statistics, but as an international endeavour to mine data for associations to explain mental illness. Few recall most of the discipline’s early leaders, such as the French psychiatrist, or ‘alienist’, Étienne Esquirol, and the physician John Thurnam, who made the York Retreat in England a “model of statistical recording”. Better-known figures, such as statistician Karl Pearson and zoologist Charles Davenport — both ardent eugenicists — come later.

Inevitably, study methods changed over time. The early handwritten correlation tables and pedigrees of patients gave way to more elaborate statistical tools, genetic theory and today’s massive gene-association studies. Yet the imperatives and assumptions of that scattered early network of alienists remain intact in the big-data genomics of precision medicine, asserts Porter. And whether applied in 1820 or 2018, this approach too readily elevates biology over culture and statistics over context — and opens the door to eugenics.

Wednesday, May 16, 2018

Did Autistic Attention To Detail And Collaborative Morality Drive Human Evolution?


tandfonline |  Selection pressures to better understand others’ thoughts and feelings are seen as a primary driving force in human cognitive evolution. Yet might the evolution of social cognition be more complex than we assume, with more than one strategy towards social understanding and developing a positive pro-social reputation? Here we argue that social buffering of vulnerabilities through the emergence of collaborative morality will have opened new niches for adaptive cognitive strategies and widened personality variation. Such strategies include those that do not depend on astute social perception or abilities to think recursively about others’ thoughts and feelings. We particularly consider how a perceptual style based on logic and detail, bringing certain enhanced technical and social abilities which compensate for deficits in complex social understanding, could be advantageous at low levels in certain ecological and cultural contexts. ‘Traits of autism’ may have promoted innovation in archaeological material culture during the late Palaeolithic in the context of the mutual interdependence of different social strategies, which in turn contributed to the rise of innovation and large-scale social networks.

physorg | The ability to focus on detail, a common trait among people with autism, allowed realism to flourish in Ice Age art, according to researchers at the University of York. 



Around 30,000 years ago realistic art suddenly flourished in Europe. Extremely accurate depictions of bears, bison, horses and lions decorate the walls of Ice Age archaeological sites such as Chauvet Cave in southern France.

Why our ice age ancestors created exceptionally realistic art rather than the very simple or stylised art of earlier modern humans has long perplexed researchers.

Many have argued that psychotropic drugs were behind the detailed illustrations. The popular idea that drugs might make people better at art led to a number of ethically-dubious studies in the 60s where participants were given art materials and LSD.

The authors of the new study discount that theory, arguing instead that individuals with "detail focus", a trait linked to autism, kicked off an artistic movement that led to the proliferation of realistic cave drawings across Europe.

Monday, May 07, 2018

Creating Racism: Psychiatry's Betrayal


cchr |  Is racism alive today?

In the United States, African-American and Hispanic children in predominantly white school districts are classified as “learning disabled” more often than whites. This leads to millions of minority children being hooked onto prescribed mind-altering drugs—some more potent than cocaine—to “treat” this “mental disorder.” And yet, with early reading instruction, the number of students so classified could be reduced by up to 70 percent.

African-Americans and Hispanics are also significantly over-represented in US prisons.

In Britain, black men are ten times more likely than white men to be diagnosed as “schizophrenic,” and more likely to be prescribed and given higher doses of powerful psychotropic (mind-altering) drugs. They are also more likely to receive electroshock treatment (over 400 volts of electricity sent searing through the brain to control or alter a person’s behavior) and to be subjected to physical and chemical restraints.

Around the world, racial minority groups continue to come under assault. The effects are obvious: poverty, broken families, ruined youth, and even genocide (deliberate destruction of a race or culture). No matter how loud the pleadings or sincere the efforts of our religious leaders, our politicians and our teachers, racism just seems to persist.

Yes, racism persists. But why? Rather than struggle unsuccessfully with the answer to this question, there is a better question to ask. Who?

The truth is we will not fully understand racism until we recognize that two largely unsuspected groups are actively and deceptively fostering racism throughout the world. The legacy of these groups includes such large-scale tragedies as the Nazi Holocaust, South Africa’s apartheid and today, the widespread disabling of millions of schoolchildren with harmful, addictive drugs. These groups are psychiatry and psychology.

In 1983, a World Health Organization report stated, “…in no other medical field in South Africa is the contempt of the person, cultivated by racism, more concisely portrayed than in psychiatry.”

Professor of Community Psychiatry, Dr. S. P. Sashidharan, stated, “Psychiatry comes closest to the police…in pursuing practices and procedures that…discriminate against minority ethnic groups in the United Kingdom.”

Dr. Karen Wren and Professor Paul Boyle of the University of St. Andrews, Scotland, concluded that the role of scientific racism in psychiatry throughout Europe is well established historically and continues today.

Since 1969, CCHR has worked in the field of human rights and mental health reform, and has investigated the racist influence of the “mental health” professions on the Nazi Holocaust, apartheid, the cultural assault on the Australian Aboriginal people, New Zealand Maoris, and Native American Indians, and the current discrimination against Blacks across the world.

Psychiatry and psychology’s racist ideologies continue to light the fires of racism locally and internationally to this day.

This report is designed to raise awareness among individuals about this harmful influence. Not only can racism be defeated, but it must be, if man is to live in true harmony.

Monday, January 01, 2018

Hating These Humans Is The Easiest Thing To Do...,


nautil.us |  Considerable evidence suggests that dividing the world into Us and Them is deeply hard-wired in our brains, with an ancient evolutionary legacy. For starters, we detect Us/Them differences with stunning speed. Stick someone in a “functional MRI”—a brain scanner that indicates activity in various brain regions under particular circumstances. Flash up pictures of faces for 50 milliseconds—a 20th of a second—barely at the level of detection. And remarkably, with even such minimal exposure, the brain processes faces of Thems differently than Us-es.

This has been studied extensively with the inflammatory Us/Them of race. Briefly flash up the face of someone of a different race (compared with a same-race face) and, on average, there is preferential activation of the amygdala, a brain region associated with fear, anxiety, and aggression. Moreover, other-race faces cause less activation than do same-race faces in the fusiform cortex, a region specializing in facial recognition; along with that comes less accuracy at remembering other-race faces. Watching a film of a hand being poked with a needle causes an “isomorphic reflex,” where the part of the motor cortex corresponding to your own hand activates, and your hand clenches—unless the hand is of another race, in which case less of this effect is produced.

The brain’s fault lines dividing Us from Them are also shown with the hormone oxytocin. It’s famed for its pro-social effects—oxytocin prompts people to be more trusting, cooperative, and generous. But, crucially, this is how oxytocin influences behavior toward members of your own group. When it comes to outgroup members, it does the opposite.

The automatic, unconscious nature of Us/Them-ing attests to its depth. This can be demonstrated with the fiendishly clever Implicit Association Test. Suppose you’re deeply prejudiced against trolls and consider them inferior to humans. To simplify, subjects look at pictures of humans or trolls, coupled with words with positive or negative connotations. The couplings can support the direction of your biases (e.g., a human face and the word “honest,” a troll face and the word “deceitful”), or can run counter to your biases. And people take slightly longer, a fraction of a second, to process discordant pairings. It’s automatic—you’re not fuming about clannish troll business practices or troll brutality in the Battle of Somewhere in 1523. You’re processing words and pictures, and your anti-troll bias makes you unconsciously pause, stopped by the dissonance linking troll with “lovely,” or human with “malodorous.”
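A toy calculation illustrates the signal the test extracts: the extra fraction of a second on bias-discordant pairings. The reaction times below are invented for illustration.

```python
# Illustration of the IAT's core measurement: responses to pairings that
# run against a subject's bias are slightly slower than responses to
# pairings that agree with it. Times are invented for this example.
from statistics import mean

# (pairing type, reaction time in ms)
trials = [
    ("concordant", 612), ("concordant", 598), ("concordant", 631),
    ("discordant", 655), ("discordant", 689), ("discordant", 671),
]

concordant = mean(t for kind, t in trials if kind == "concordant")
discordant = mean(t for kind, t in trials if kind == "discordant")
print(f"Mean extra latency on discordant pairings: {discordant - concordant:.0f} ms")
# Real IAT scoring uses a standardized difference (the "D score") rather
# than a raw gap in means, but the principle is the same.
```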

We’re not alone in Us/Them-ing. It’s no news that other primates can make violent Us/Them distinctions; after all, chimps band together and systematically kill the males in a neighboring group. Recent work, adapting the Implicit Association Test to another species, suggests that even other primates have implicit negative associations with Others. Rhesus monkeys would look at pictures either of members of their own group or strangers, coupled with pictures of things with positive or negative connotations. And monkeys would look longer at pairings discordant with their biases (e.g., pictures of members of their own group with pictures of spiders). These monkeys don’t just fight neighbors over resources. They have negative associations about them—“Those guys are like yucky spiders, but us, us, we’re like luscious fruit.”

Thus, the strength of Us/Them-ing is shown by: the speed and minimal sensory stimuli required for the brain to process group differences; the tendency to group according to arbitrary differences, and then imbue those differences with supposedly rational power; the unconscious automaticity of such processes; and the rudiments of it in other primates. As we’ll see now, we tend to think of Us, but not Thems, fairly straightforwardly.

Monday, December 11, 2017

Moral Dependency: #ThatAss = Truth Every Time


motherjones |  Later in the review, Magnet summarizes The Dream and the Nightmare, which he wrote in the 90s:
In that book, I argued that the counterculture’s remaking of mainstream white American culture in the 1960s — the sexual revolution; the fling with drugs…the belief that in racist America, the criminal was really the victim of society…[etc.] — all these attitudes that devalued traditional mainstream values trickled down from young people and their teachers in the universities, to the media, to the mainstream Protestant churches, to the ed schools, to the high schools, and finally to American culture at large.
And when these attitudes made their way to the ghetto, they destigmatized and validated the already-existing disproportionate illegitimacy, drug use, crime, school dropout, non-work, and welfare dependency there, and caused the rate of all these pathologies to skyrocket startlingly in the 1960s and beyond.
….Aghast at the minority-crime explosion that rocked not just the ghettoes but much of urban America, voters began electing officials, especially in New York, who believed that the real victim of a crime was the victim, not the criminal — who ought to be arrested and jailed — and crime fell accordingly.
In other words, blacks today have no cause to blame their troubles on anyone but themselves. Unless they want to blame it on lefty counterculture. This is pretty putrid stuff, and I don’t feel like taking it on right now. Instead, I’m going to change the subject so suddenly you might get whiplash.

Here we go: it’s hardened beliefs like this that make it so hard for many people to accept the lead-crime hypothesis that I’ve written about frequently and at length. A lot of teen pathologies did start to skyrocket in the 60s, but the primary cause was almost certainly lead poisoning. Certainly lead was the proximate cause of increases in crime, teen pregnancy, and school dropout rates. And these effects were more pronounced among blacks than whites, because blacks lived disproportionately in areas with high levels of lead. The opposite is true too: the decline in these pathologies starting in the 90s was due to the phaseout of lead in gasoline.


In theory, none of this should be too hard to accept. The evidence is strong, and given what we know about the effects of lead on brain development, it makes perfect sense. In practice, though, if lead poisoning was the primary cause of the increase in various pathologies in the 60s and beyond, then the counterculture wasn’t. And if the phaseout of leaded gasoline was responsible for the subsequent decline, then the EPA gets the credit, not tough-on-crime policies. And that can’t be tolerated.

On the left, the problems are similar. Liberals tend to dislike “essentialist” explanations of things like crime rates because that opens the door to noxious arguments that blacks are biologically more crime prone than whites. As it happens, lead poisoning isn’t truly an essentialist explanation, but for many it’s too close for comfort. And anyway, liberals have their own explanations for the crime wave of the 60s: poverty, racism, easy availability of guns, and so forth.

Friday, October 27, 2017

Having Nothing to Hide - Kaspersky Opens Transparency Centers


theintercept |  Responding to U.S. government suggestions that its antivirus software has been used for surveillance of customers, Moscow-based Kaspersky Lab is launching what it’s calling a transparency initiative to allow independent third parties to review its source code and business practices and to assure the information security community that it can be trusted.

The company plans to begin the code review before the end of the year and establish a process for conducting ongoing reviews, of both the updates it makes to software and the threat-detection rules it uses to detect malware and upload suspicious files from customer machines. The latter refers to signatures — search terms used to detect potential malware — which are the focus of recent allegations.
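A signature in this sense is just a pattern scanned for across file contents, with matches flagged. A minimal sketch, with invented patterns standing in for real detection rules:

```python
# Minimal signature-scanning sketch. The signature names and byte patterns
# are invented examples, not real detection rules from any vendor.
import re

SIGNATURES = {
    "Demo.Trojan.A": re.compile(rb"EVIL_PAYLOAD_v[0-9]+"),
    "Demo.Keylogger.B": re.compile(rb"hook_keyboard_input"),
}

def scan(blob: bytes) -> list[str]:
    """Return the names of all signatures that match the given file contents."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(blob)]

sample = b"...binary junk...EVIL_PAYLOAD_v2...more junk..."
print(scan(sample))  # ['Demo.Trojan.A']
```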

The company will open three “transparency centers” in the U.S., Europe, and Asia, where trusted partners will be able to access the third-party reviews of its code and rules. It will also commission an independent assessment of its development processes and work with an independent party to develop security controls for how it processes data uploaded from customer machines.

“[W]e want to show how we’re completely open and transparent. We’ve nothing to hide,” Eugene Kaspersky, the company’s chair and CEO, said in a written statement.

The moves follow a company offer in July to allow the U.S. government to review its source code.

Although critics say the transparency project is a good idea, some add that it is insufficient to instill trust in Kaspersky going forward.

“The thing [they’re] talking about is something that the entire antivirus industry should adopt and should have adopted in the beginning,” said Dave Aitel, a former NSA analyst and founder of security firm Immunity. But in the case of Kaspersky, “the reality is … you can’t trust them, so why would you trust the process they set up?”

Kaspersky has come under intense scrutiny after its antivirus software was linked to the breach of an NSA employee’s home computer in 2015 by Russian government hackers who stole classified documents or tools from the worker’s machine. News reports, quoting U.S. government sources, have suggested Kaspersky colluded with the hackers to steal the documents from the NSA worker’s machine, or at least turned a blind eye to the activity.

Friday, September 29, 2017

Why the Future Doesn't Need Us


ecosophia |  Let’s start with the concept of the division of labor. One of the great distinctions between a modern industrial society and other modes of human social organization is that in the former, very few activities are taken from beginning to end by the same person. A woman in a hunter-gatherer community, as she is getting ready for the autumn tuber-digging season, chooses a piece of wood, cuts it, shapes it into a digging stick, carefully hardens the business end in hot coals, and then puts it to work getting tubers out of the ground. Once she carries the tubers back to camp, what’s more, she’s far more likely than not to take part in cleaning them, roasting them, and sharing them out to the members of the band.

A woman in a modern industrial society who wants to have potatoes for dinner, by contrast, may do no more of the total labor involved in that process than sticking a package in the microwave. Even if she has potatoes growing in a container garden out back, say, and serves up potatoes she grew, harvested, and cooked herself, odds are she didn’t make the gardening tools, the cookware, or the stove she uses. That’s division of labor: the social process by which most members of an industrial society specialize in one or another narrow economic niche, and use the money they earn from their work in that niche to buy the products of other economic niches.

Let’s say it up front: there are huge advantages to the division of labor.  It’s more efficient in almost every sense, whether you’re measuring efficiency in terms of output per person per hour, skill level per dollar invested in education, or what have you. What’s more, when it’s combined with a social structure that isn’t too rigidly deterministic, it’s at least possible for people to find their way to occupational specialties for which they’re actually suited, and in which they will be more productive than otherwise. Yet it bears recalling that every good thing has its downsides, especially when it’s pushed to extremes, and the division of labor is no exception.

Crackpot realism is one of the downsides of the division of labor. It emerges reliably whenever two conditions are in effect. The first condition is that the task of choosing goals for an activity is assigned to one group of people and the task of finding means to achieve those goals is left to a different group of people. The second condition is that the first group needs to be enough higher in social status than the second group that members of the first group need pay no attention to the concerns of the second group.

Consider, as an example, the plight of a team of engineers tasked with designing a flying car.  People have been trying to do this for more than a century now, and the results are in: it’s a really dumb idea. It so happens that a great many of the engineering features that make a good car make a bad aircraft, and vice versa; for instance, an auto engine needs to be optimized for torque rather than speed, while an aircraft engine needs to be optimized for speed rather than torque. Thus every flying car ever built—and there have been plenty of them—performed just as poorly as a car as it did as a plane, and cost so much that for the same price you could buy a good car, a good airplane, and enough fuel to keep both of them running for a good long time.

Engineers know this. Still, if you’re an engineer and you’ve been hired by some clueless tech-industry godzillionaire who wants a flying car, you probably don’t have the option of telling your employer the truth about his pet project—that is, that no matter how much of his money he plows into the project, he’s going to get a clunker of a vehicle that won’t be any good at either of its two incompatible roles—because he’ll simply fire you and hire someone who will tell him what he wants to hear. Nor do you have the option of sitting him down and getting him to face what’s behind his own unexamined desires and expectations, so that he might notice that his fixation on having a flying car is an emotionally charged hangover from age eight, when he daydreamed about having one to help him cope with the miserable, bully-ridden public school system in which he was trapped for so many wretched years. So you devote your working hours to finding the most rational, scientific, and utilitarian means to accomplish a pointless, useless, and self-defeating end. That’s crackpot realism.

You can make a great party game out of identifying crackpot realism—try it sometime—but I’ll leave that to my more enterprising readers. What I want to talk about right now is one of the most glaring examples of crackpot realism in contemporary industrial society. Yes, we’re going to talk about space travel again.

Sunday, September 24, 2017

Hurricane Damaged Arecibo....,


NationalGeographic |  “Great news! [Princeton University professor] Joe Taylor talked to Angel Vazquez, who made contact with the observatory via ham radio. Everybody there is safe and sound,” reported Arecibo deputy director Joan Schmelz.

However, it’s not yet clear how staff who weathered the storm in town are doing, or what conditions are like for local communities. Reports suggest that the road up to the facility is covered in debris and is largely inaccessible. 

Still, according to the National Science Foundation, which funds the majority of the telescope’s operations, the observatory is well stocked with food, well water, and fuel for generators. As of Thursday night, there are enough supplies for the staff hunkered down there to survive for at least a week, although Vazquez reports that it’s not clear how long the generators will be working.

“As soon as the roads are physically passable, a team will try to get up to the observatory,” the NSF statement says.

Because of its deep water well and generator, the observatory has been a place for those in nearby towns to gather, shower, and cook after past hurricanes. It also has an on-site helicopter landing pad, so making sure the facility is safe in general is not just of scientific importance, but is also relevant for local relief efforts.


Built in 1963, the Arecibo Observatory has become a cultural icon, known both for its size and for its science. For most of its 54-year existence, Arecibo was the largest radio telescope in the world, but in 2016, a Chinese telescope called FAST—with a dish measuring 1,600 feet across—surpassed Arecibo in size, although it’s not yet fully operational.

The observatory was originally designed for national defense during the Cold War, when the U.S. wanted to see if it could detect Soviet satellites (and maybe missiles and bombs) based on how they alter the portion of Earth’s atmosphere called the ionosphere. Later, the telescope became instrumental in the search for extraterrestrial intelligence (SETI) programs and in other aspects of radio astronomy.

Wednesday, April 12, 2017

The Content Of Sci-Hub And Its Usage


biorxiv |  Despite the growth of Open Access, illegally circumventing paywalls to access scholarly publications is becoming a more mainstream phenomenon. The web service Sci-Hub is amongst the biggest facilitators of this, offering free access to around 62 million publications. So far it is not well studied how and why its users are accessing publications through Sci-Hub. By utilizing the recently released corpus of Sci-Hub and comparing it to the data of ~28 million downloads done through the service, this study tries to address some of these questions. The comparative analysis shows that both the usage and the complete corpus are largely made up of recently published articles, with users disproportionately favoring newer articles and 35% of downloaded articles being published after 2013. These results hint that embargo periods before publications become Open Access are frequently circumvented using Guerilla Open Access approaches like Sci-Hub. On a journal level, the downloads show a bias towards some scholarly disciplines, especially Chemistry, suggesting increased barriers to access for these. Comparing the use and corpus on a publisher level, it becomes clear that only 11% of publishers are highly requested in comparison to the baseline frequency, while 45% of all publishers are accessed significantly less than expected. Despite this, the oligopoly of publishers is even more remarkable on the level of content consumption, with 80% of all downloads coming from only 9 publishers. All of this suggests that Sci-Hub is used by different populations and for a number of different reasons, and that there is still a lack of access to the published scientific record. A further analysis of these openly available data resources will undoubtedly be valuable for the investigation of academic publishing.
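For a sense of how such a comparison works in practice, here is a sketch of the two headline computations (share of recently published articles, publisher concentration) over the released download logs. The file name and column names are assumptions for illustration; the actual data dumps differ in format.

```python
# Sketch of the paper's two headline statistics over the released Sci-Hub
# download logs. The file path and column names (doi, year_published,
# publisher) are assumed for illustration.
import pandas as pd

downloads = pd.read_csv("scihub_downloads.csv")  # one row per download

# Share of downloaded articles published after 2013 (the paper reports ~35%).
recent_share = (downloads["year_published"] > 2013).mean()
print(f"Downloads of post-2013 articles: {recent_share:.0%}")

# Publisher concentration: how many publishers account for 80% of downloads
# (the paper reports just 9).
shares = downloads["publisher"].value_counts(normalize=True)
n_for_80 = (shares.cumsum() < 0.80).sum() + 1
print(f"Publishers covering 80% of downloads: {n_for_80}")
```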

H.R. 6408 Terminating The Tax Exempt Status Of Organizations We Don't Like

nakedcapitalism  |  This measure is so far under the radar that only Friedman and Matthew Petti at Reason seem to have noticed it...