Showing posts with label Genetic Omni Determinism GOD. Show all posts

Wednesday, December 18, 2019

This year’s gene-write with “ready for space” focus


gpwrite-2019  |  P23. Genetic Engineering of Human Cells for Radiotolerance
Craig Westover, Sherry Yang, Sonia Iosim, Deena Najjar, Daniel Butler, Daniela Bezdan, Christopher E. Mason. Weill Cornell, New York, New York, United States

Space flight has been documented to produce a number of detrimental physiological effects as a result of cosmic radiation. The radiation dose in space is about 100 times the average effective dose per year from natural radiation on Earth and can produce DNA double-stranded breaks, leading to increased chromosomal aberrations. The harsh environmental effects of space on organisms have also been studied at the molecular level and have shed light on some of the underlying mechanisms that give rise to space-induced alterations of cellular functions such as proliferation, differentiation, maturation, and cell survival. Our lab was recently involved in the NASA Twin Project, where we analyzed Scott Kelly’s genome, transcriptome, and corresponding epigenetic modifications in response to one year of space flight. With this information in mind, we are now moving on to genetically engineering HEK293 cells to survive ionizing cosmic radiation.

P28. Detecting evidence of genetic engineering
Yuchen Ge, Jitong Cai, Joel S. Bader. Johns Hopkins University, Baltimore, Maryland, United States

Detecting evidence of genetic engineering is important for biosecurity, provenance, and intellectual property rights. The need for monitoring and detection is growing with the contemplated release of gene drive systems. We describe results of a computational system designed to detect engineering from DNA sequencing of biological samples, including automated identification of host strains, detection of foreign gene content, and detection of watermarks. Our results demonstrate near-perfect identification of foreign gene content in blinded samples, but less ability to detect more subtle engineering associated with watermarks that blend in with natural variation. We describe plans for future improvements.
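
The abstract describes the detection system only at a high level. As a rough, hypothetical illustration of one common way foreign gene content can be screened for (not the authors' actual pipeline), a sequencing read can be flagged when most of its k-mers never occur in the claimed host genome:

```python
# Toy k-mer screen for foreign sequence content. Illustration only, not the
# authors' system; host genome and reads below are hypothetical inputs.

def kmers(seq, k=31):
    """Yield every k-mer (length-k substring) of a DNA sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def build_host_index(host_genome, k=31):
    """Set of all k-mers present in the putative host strain's genome."""
    return set(kmers(host_genome, k))

def foreign_fraction(read, host_index, k=31):
    """Fraction of a read's k-mers that never occur in the host genome."""
    read_kmers = list(kmers(read, k))
    if not read_kmers:
        return 0.0
    missing = sum(1 for km in read_kmers if km not in host_index)
    return missing / len(read_kmers)

def flag_foreign_reads(sample_reads, host_index, k=31, threshold=0.5):
    """Return reads that look like inserted (non-host) sequence."""
    return [r for r in sample_reads
            if foreign_fraction(r, host_index, k) > threshold]

host = "ATGCCGTACGTTAGGCTAACGTTAGGCAGG"          # hypothetical host genome
reads = ["ACGTTAGGCTAACG", "GGGGTTTTCCCCAAAA"]   # second read is "foreign"
idx = build_host_index(host, k=8)
print(flag_foreign_reads(reads, idx, k=8))       # flags only the second read
```

Watermarks that mimic natural variation are the harder problem the abstract alludes to: single-base recodings leave most k-mers intact, so a simple presence/absence screen like this one has little power against them.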

GP-Write: Quest for the Synthetic Human Genome


cen.acs.org |  In the 15 years since the Human Genome Project was declared finished in 2003, the cost of reading a whole human genome has plummeted to about $1,000. Scientists estimate that writing a full human genome with today’s DNA synthesis technology could cost upward of $100 million. That number was the group’s declared fundraising goal in 2016, but it still doesn’t have centralized funding dedicated to the task. This year, some GP-write participants suggested that patenting the ultrasafe cell line or technologies developed along the way could encourage financial support from investors.

“It may be essential,” said Kristin Neuman, executive director for biotechnology licensing at the patent firm MPEG LA. “Some of the scientists want to see everything open access. Others recognize the importance of intellectual property protection to incentivize private investment,” she observed. During the meeting, Neuman encouraged the group to consider patents for cells and technology developed by the group while still making the ultrasafe cell line available to researchers doing basic science.

GP-write cofounder Nancy Kelley said a systematic fundraising effort will begin soon. “A couple years ago we had a rocky beginning, and we really needed to do some work on straightening out the message,” she said. “I now believe we have something serious to talk about.”

Church added that more than 100 research groups involved in GP-write have their own significant funding. “I don’t think we are underfunded at this point; I think we just need to execute,” he said. Teams can now begin signing up for a chromosome, or part of a chromosome, to recode or help with technology development. “There are plenty of things for people to do today.”

At the end of the GP-write meeting, the group’s goals seemed at once more focused and much broader. Church said the group is not backing down from synthesizing a full human genome and that the ultrasafe cell line gives the consortium an immediate task with a clear payoff. But in the end, the GP-write story may be less about completing a project and more about uniting a multidisciplinary cohort of scientists behind something big.

“Our goals aren’t fixed in stone yet,” Church said. “Hopefully they won’t be fixed in stone even at the finish line.”

Friday, October 05, 2018

Directed Evolution Via Phage Display


thescientist |  Caltech’s Frances Arnold, who advanced a technique called directed evolution to shape the function of enzymes, has received the Nobel Prize in Chemistry today (October 3). She shares the honor with George Smith, now an emeritus professor at the University of Missouri, and Gregory Winter, emeritus group leader at the Medical Research Council Laboratory of Molecular Biology (LMB) in Cambridge, UK. Smith and Winter are both recognized for their work on a lab technique known as phage display in the directed evolution of new proteins—in particular, for the production of antibody therapeutics.

“I’d like to congratulate this year’s laureates for their tremendous breakthrough work in using chemistry to speed nature's own processes,” Peter Dorhout, president of the American Chemical Society, says in a statement. “The breakthroughs from these researchers enable that to occur thousands of times faster than nature to improve medicines, fuels and other products. This is truly directed evolution using chemistry.”

First reported by Smith in 1985, phage display involves the introduction of foreign DNA coding for a protein, such as an antibody, into a bacteriophage—a virus that infects bacteria. That protein is then displayed on the surface of the phage. Researchers can use these protein-displaying phages to screen for interactions with other proteins, DNA sequences, and small molecules. 

Speaking to the Associated Press this morning, Smith emphasized the role of others’ work in his achievement. “Very few research breakthroughs are novel,” he says. “Virtually all of them build on what went before. . . . That was certainly the case with my work.”

Winter, who cofounded the biotech company Cambridge Antibody Technology in 1989, developed the technique for the purpose of finding novel therapeutics. In 1993, his research group used phage display to successfully isolate fragments of human antibodies that could bind specific antigens. The genes for these fragments could be expressed in bacteria, the team reported, and could offer a “promising alternative” to mouse-based methods for the “production of antibodies against cell surface molecules.”

In 2002, adalimumab (Humira), a therapeutic produced by this approach, was approved by European and US regulators for the treatment of rheumatoid arthritis. Speaking in 2006, Winter called the approval “the sort of thing I’m most proud of.” The technique has since been used to isolate molecules against autoimmune diseases, multiple cancers, and bacteria such as Bacillus anthracis—the cause of anthrax.

Saturday, July 28, 2018

The Inflammasome and Beyond...,


elysiumhealth |  The Takeaway: In his lab at Yale School of Medicine, immunobiologist Vishwa Deep Dixit and his team are researching the ties between the immune system, metabolism, and aging-related diseases, with a specific focus on an oft-misunderstood biological phenomenon — inflammation.

We all know inflammation: the painful red swelling that happens when we are injured or a wound becomes infected. But why would a Yale scientist interested in the mechanisms of aging and age-related disease be leading a lab researching such a thing?

Turns out there’s a lot more to the condition than most people realize. “‘Inflammation’ is not just a word not understood properly by the lay public, it’s often not properly understood by scientists,” said Vishwa Deep Dixit, a professor of comparative medicine and immunobiology at Yale’s School of Medicine. Dixit and eight other students, postdoctoral fellows, and professors study the intersection between the immune system and metabolism at Dixit Lab. Their focus is not these signs of “classic” inflammation, like redness, swelling, pain, and loss of function. Instead, they believe a different, underlying condition, “low-grade chronic inflammation,” is part of a wider immune system process linked to aging and age-related diseases. By studying the connections between inflammation and other bodily systems, like metabolism and the immune system, they hope to help humans live longer. We asked Dixit about his lab’s work, the future of immunobiological research, and the potential for effective interventions in human health.

Before You Start: Terms to Understand

Inflammation: The immune system’s local, short-term response to cellular damage, marked by increased blood flow and the delivery of repair-focused compounds.

Low-grade chronic inflammation: A “slow drip” response to widespread cell damage caused by aging, with the byproduct of impairing the function of cells and organs.

Inflammasome: A multiprotein intracellular complex that regulates inflammatory responses.

Metabolism: The sum of every chemical reaction that happens in the body. It breaks down food for energy (catabolism) and rebuilds those basic molecules into cells (anabolism).

Macrophage: Immune cells that reside in every organ in the body and are critical to maintaining organ function.



Wednesday, July 25, 2018

Father Of Synthetic Genomics Better Be Careful Tampering With Whydte Folks Money....,


Genomeweb |  Human Longevity (HLI) is suing the J. Craig Venter Institute (JCVI) and a number of unknown defendants over the misappropriation and use of trade secrets passed along by Craig Venter, the founder of both the company and the institute that bears his name.

In a complaint filed last Friday with the US District Court for the Southern District of California, Human Longevity alleges that upon his termination from HLI on May 24, Venter took a company-owned laptop with trade secrets and passed on protected information to the Venter Institute, of which he is chairman and CEO. HLI also claims that the institute is working on a product that will compete with its own business.

According to the complaint, Venter was CEO of Human Longevity from 2014 until January 2017, when he became the firm's executive chairman and signed a "proprietary information and inventions" agreement. He assumed the role of interim CEO in November of 2017 until his employment was terminated in May of this year. During his time at HLI, Venter used a company-owned laptop computer, the contents of which were backed up in the cloud, and consistently used his JCVI email address rather than his HLI email to conduct company business, the complaint states.

In the spring of this year, Venter "withheld critical information from the board and the HLI investors regarding the conduct of an HLI key executive which would likely result in termination," the complaint says. Further, in May, Venter had an HLI-paid counsel "draft a Venter-favorable employment contract" and appointed a new interim president without conferring with the HLI board first.

On May 24, the HLI board "considered a rushed investor deal which Venter presented to them only less than two weeks earlier," the terms of which the board considered one-sided. The deal would have provided financial incentives to Venter and offered the new investor rights that had already been granted to another party, according to the complaint. "At that point, the HLI board voted to terminate Venter from HLI," it states.

Following his termination, Venter left the HLI offices with the company-owned laptop and "immediately began using the HLI computer and server to communicate to the public, solicit HLI investors and employees," the complaint says. In a Twitter message on May 24, Venter said that he was retiring from HLI and returning to JCVI.

His access to the HLI server and HLI emails was disabled the next day, but the company alleges that "even after his HLI termination, Venter used the HLI computer, accessed and sent HLI proprietary information and trade secrets," including communications involving Series C and Asia JV Series A documents.

Tampering With Mitochondrial DNA Demonstrates Interesting Aging Reversibility


UAB |  Wrinkled skin and hair loss are hallmarks of aging. What if they could be reversed?

Keshav Singh, Ph.D., and colleagues have done just that, in a mouse model developed at the University of Alabama at Birmingham. When a mutation leading to mitochondrial dysfunction is induced, the mouse develops wrinkled skin and extensive, visible hair loss in a matter of weeks. When the mitochondrial function is restored by turning off the gene responsible for mitochondrial dysfunction, the mouse returns to smooth skin and thick fur, indistinguishable from a healthy mouse of the same age.

“To our knowledge, this observation is unprecedented,” said Singh, a professor of genetics in the UAB School of Medicine.

Importantly, the mutation that does this is in a nuclear gene affecting the function of mitochondria, the tiny organelles known as the powerhouses of the cell. The numerous mitochondria in cells produce 90 percent of the chemical energy cells need to survive.

In humans, a decline in mitochondrial function is seen during aging, and mitochondrial dysfunction can drive age-related diseases. A depletion of the DNA in mitochondria is also implicated in human mitochondrial diseases, cardiovascular disease, diabetes, age-associated neurological disorders and cancer.

“This mouse model,” Singh said, “should provide an unprecedented opportunity for the development of preventive and therapeutic drug development strategies to augment the mitochondrial functions for the treatment of aging-associated skin and hair pathology and other human diseases in which mitochondrial dysfunction plays a significant role.”

Friday, July 13, 2018

The Future of Humankind


medium |  Okay, so we’re not talking about entire brain transplants. There’s a joke that the only organ that’s better to donate than to receive is the brain.

No, no, no, just pieces.

Might people add brain tissue for extra IQ points?

For it to be used in healthy people, it has to be exceptionally safe. But I could imagine that being quite safe.

Are there applications of these brain organoids to artificial intelligence?

Oh, that’s the fourth category. The human brain is pretty far ahead of any silicon-based computing system, except for very specialized tasks like information retrieval or math or chess. And we do it at 20 watts of power for the brain, relative to, say, 100,000 watts for a computer doing a very specialized task like chess. So, we’re ahead both in the energy category and in versatility and out-of-the-box thinking. Also, Moore’s law is reaching a plateau, while biotechnology is going through super-exponential growth, where it’s improving by factors of 10 per year in cost/benefit.

Currently, computers have a central processing unit (CPU), often accompanied by specialized chips for particular tasks, like a graphical processing unit (GPU). Might a computer someday have an NPU, or neural processing unit — a bit of brain matter plugged into it?

Yeah, it could. Hybrid systems, such as humans using smartphones, are very valuable, because there are specialized tasks that computers are very good at, like retrieval and math. Although even that could change. For example, now there’s a big effort to store information in DNA. It’s about a million times higher-density than current silicon or other inorganic storage media. That could conceivably in the future be something where biological systems could be better than inorganic or even hybrid systems.

At what size should we think about whether lab brains deserve rights?

All of these things will at some point be capable of all kinds of advanced thinking. I think doing experiments on humanlike artificial intelligence would be unethical as well. There’s this growing tendency of computer scientists to want to make them general purpose. Even if they’re what we would call intellectually challenged, they would have some rights. We may want ways of asking them questions, as in a Turing test, but in this case, to make sure we’re not doing something that would cause pain or anxiety.

Will we ever develop into something that calls itself a new species? And could there be branching of the species tree?

It’s a little hard to predict whether we’ll go toward a monoculture or whether we’ll go toward high diversity. Even if we go toward high diversity, they could still be interbreedable. You look at dogs, for example. Very high diversity, but in principle, any breed of dog can mate with any other dog and produce hybrid puppies. My guess is that we will go toward greater diversity and yet greater interoperability. I think that’s kind of the tendency. We want all of our systems to interoperate. If you look into big cities, you’re getting more and more ability to bridge languages, to bridge cultures. I think that will also be true for species.

Do you think your greatest contribution to humanity will be something you’ve done, or something you’ve yet to do?

Well, I hope it’s something I have yet to do. I think I’m just now getting up to speed after 63 years of education. Aging reversal is something that will buy me and many of my colleagues a lot more time to make many more contributions, so you might consider that a meta-level contribution, if we can pull that off. The sort of things we’re doing with brains and new ways of computing could again be a meta thing. In other words, if we can think in new ways or scale up new forms of intelligence, that would lead to a whole new set of enabling technologies.

Thursday, July 12, 2018

Survival of the Richest


Grinnell |  Officially, the eugenics movement ended for the most part by the end of the Baby Boom, as proven by the closure of most official eugenics organizations. Unfortunately, the eugenics movement has been replaced by a slightly modified neo-eugenics movement, which also believes that characteristics or traits such as poverty, criminality, and illegitimacy are signs that a person is unfit to reproduce. The difference is that neo-eugenicists believe that these traits are passed on not genetically, but through culture and environment. This movement recognizes that traits like poverty and illegitimacy are not actually included in the genetic code, but it has many of the same effects as the original eugenics movement.

Neo-eugenics developed during the Civil Rights Movement, a time when white privilege was clearly threatened in the United States.[3] These neo-eugenicists were concerned with preserving the white race, which ironically now included southern and eastern Europeans, who had earlier been considered the greatest threat to the purity of white America. Currently, neo-eugenics rarely targets white women, regardless of their socioeconomic status, but instead focuses its attention on recent immigrants, blacks, and Mexicans, among others.

In the 1970s, the eugenics movement began to focus its attention on other underprivileged groups of people. Physicians employed by the Indian Health Service, who were supposed to be providing medical care for Native American women, forcibly sterilized somewhere between 25 and 42 percent of Native American women of childbearing age. At the same time, women on welfare who had an illegitimate child were often punished by forced sterilizations immediately after giving birth. The eugenicists and physicians who performed this procedure justified it by saying that “those who accepted government assistance should submit to government oversight and conform to mainstream, white middle-class values and gender roles.”[4] Anyone who did not follow the social rules of middle-class white men could be subject to forced sterilization.

Unfortunately, the neo-eugenics movement has not disappeared from the American consciousness. Between 2006 and 2010, 148 women incarcerated in California prisons were illegally and forcibly sterilized through the use of tubal ligations.[5] Only since 1979 have forced sterilizations been forbidden in California, and although these women were clearly wronged, there are still many supporters of these practices for women in prison.[6]

Despite the fact that eugenic ideas still permeate much of American society, statistics show that fertility levels are declining in most of the world. If current trends continue, in the near future half of the human population will be at the replacement level of fertility, or 2.1 children per set of parents.[7] If all humans eventually began to reproduce at exactly the replacement level of fertility, the entire world population would stabilize and we would not see the exponential human population growth that we are currently experiencing. The United States is currently at almost exactly the replacement level of 2.1 children per family, and any increases in the national population are due almost exclusively to immigration and higher life expectancies, not incredibly high birth rates.

Also Sprach Zarathustra


jstor |  Eugenics straddles the line between repellent Nazi ideas of racial purity and real knowledge of genetics. Scientists eventually dismissed it as pseudo-scientific racism, but it has never completely faded away. In 1994, the book The Bell Curve generated great controversy when its authors Charles Murray and Richard J. Herrnstein argued that test scores showed black people to be less intelligent than white people. In early 2017, Murray’s public appearance at Middlebury College elicited protests, showing that eugenic ideas still have power and can evoke strong reactions.

But now, these disreputable ideas could be supported by new methods of manipulating human DNA. The revolutionary CRISPR genome-editing technique, called the scientific breakthrough of 2015, makes it relatively simple to alter the genetic code. And 2016 saw the announcement of the “Human Genome Project–write,” an effort to design and build an entire artificial human genome in the lab.

These advances led to calls for a complete moratorium on human genetic experimentation until it has been more fully examined. The moratorium took effect in 2015. In early 2017, however, a report by the National Academies of Sciences and National Academy of Medicine, “Human Genome Editing: Science, Ethics, and Governance,” modified this absolute ban. The report called for further study, but also proposed that clinical trials of embryo editing could be allowed if both parents have a serious disease that could be passed on to the child. Some critics condemned even this first step as vastly premature.

Nevertheless, gene editing potentially provides great benefits in combatting disease and improving human lives and longevity. But could this technology also be pushing us toward a neo-eugenic world?

As ever, science fiction can suggest answers. The year 2017 is the 85th anniversary of Brave New World, Aldous Huxley’s vision of a eugenics-based society and one of the great twentieth-century novels. Likewise, 2017 will bring the 20th anniversary of the release of the sci-fi film Gattaca, written and directed by Andrew Niccol, about a future society based on genetic destiny. NASA has called Gattaca the most plausible science fiction film ever made.

In 1932, Huxley’s novel, written when the eugenics movement still flourished, imagined an advanced biological science. Huxley knew about heredity and eugenics through his own distinguished family: His grandfather Thomas Huxley was the Victorian biologist who defended Darwin’s theory of evolution, and his evolutionary biologist brother Julian was a leading proponent of eugenics.

Brave New World takes place in the year 2540. People are bred to order through artificial fertilization and put into higher or lower classes in order to maintain the dominant World State. The highest castes, the physically and intellectually superior Alphas and Betas, direct and control everything. The lower Gammas, Deltas, and Epsilons, many of them clones, are limited in mind and body and exist only to perform necessary menial tasks. To maintain this system, the World State chemically processes human embryos and fetuses to create people with either enlarged or diminished capacities. The latter are kept docile by large doses of propaganda and a powerful pleasure drug, soma.

Reviewers continue to find Huxley’s novel, like George Orwell’s 1984, deeply unsettling. To Bob Barr, writing in the Michigan Law Review, it is “a chilling vision,” and R. S. Deese, in We Are Amphibians, calls its premise “the mass production of human beings.”

Genetic Analysis Of Social Class Mobility


pnas |  Our analysis suggests three take-home messages. The first take-home message is that genetics research should incorporate information about social origins. For genetics, our findings suggest that estimates of genetic associations with socioeconomic achievement reflect direct genetic effects as well as the effects of social inheritance correlated with genetics. Future genetic studies of social attainment can refine inferences about direct genetic effects by including measures of social origins in their study designs. The same is true for genetic studies of other phenotypes, because childhood socioeconomic circumstances are implicated in the etiology of many different traits and health conditions (54–56). Such analysis will help clarify interpretation of studies that analyzed GWAS data and found evidence of genetic overlap between educational attainment and several biomedical phenotypes (57, 58). The advent of national biobanks and other large genetic datasets is increasing the power of GWAS to map genetic risks. Research is needed to investigate how much of the genetic risk measured from GWAS discoveries arises within a single generation and how much accrues from social inheritance correlated with genetics across successive generations.

The second take-home message is that social science research should incorporate information about genetic inheritance. For the social sciences, our findings provide molecular evidence across birth cohorts and countries of genetic influence on social attainment and social mobility. This evidence supports theory in the social sciences that frames genetics as one mechanism among several through which social position is transmitted across generations (9, 20, 21, 59). These theories imply that genetic factors can confound estimates of social environmental effects. However, because genetics have been difficult to measure, studies addressing these theories have had to estimate genetic contributions to attainment indirectly, while other social science research has simply ignored the problem. Now, genetically informed theories of social attainment and mobility can be revisited, tested, and elaborated using molecular genetic data available in an ever-growing array of genetically informed social surveys and longitudinal cohort studies.

Beyond theory, integration of measured genetic inheritance into research on social mobility can add value in at least three ways. First, genetic controls can improve the precision of estimates of environmental effects (11, 14), e.g., of how features of parents’ social circumstances shape children’s development. Second, genetic measurements can provide a starting point for developmental investigations of pathways to social mobility (16, 60), e.g., to identify skills and behaviors that can serve as targets for environmental interventions to lift children out of poverty. Third, genetic measurements can be used to study gene–environment interplay; e.g., how policies and programs may strengthen or weaken genetic and nongenetic mechanisms of intergenerational transmission (61). In our analysis, modeling effects of social origins attenuated genetic-effect sizes by 10–50%, depending on the outcome and cohort. This variation is consistent with evidence that genetic influences on individual differences may vary across cultures and cohorts and across stages of the life course (62, 63). Research is needed to understand how molecular genetic effects on socioeconomic attainment may operate differently across environmental, historical, or economic contexts and the extent to which they may wax or wane across adult development.
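
The 10–50% attenuation figure is the paper's own empirical result. Purely to illustrate the kind of comparison involved (a hypothetical simulation, not the PNAS analysis), it corresponds to contrasting the coefficient on a polygenic score before and after adding a social-origins control that is correlated with the score:

```python
# Hypothetical simulation: how controlling for social origins can attenuate an
# estimated polygenic-score effect. All numbers are invented, not the PNAS data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
pgs = rng.normal(size=n)                      # child's education-linked polygenic score
parent_ses = 0.5 * pgs + rng.normal(size=n)   # social origins, correlated with the score
attainment = 0.3 * pgs + 0.4 * parent_ses + rng.normal(size=n)

def ols_slopes(y, *predictors):
    """Ordinary least squares with an intercept; returns slopes for the predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_unadjusted = ols_slopes(attainment, pgs)[0]             # genetic effect, no control
b_adjusted = ols_slopes(attainment, pgs, parent_ses)[0]   # after adding social origins
print(f"unadjusted: {b_unadjusted:.2f}, adjusted: {b_adjusted:.2f}")
print(f"attenuation: {1 - b_adjusted / b_unadjusted:.0%}")  # roughly 40% in this toy setup
```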

The third take-home message is that genetic analysis of social mobility can inform programs and policies that change children’s environments as a way to promote positive development. The genetics we studied are related to socioeconomic attainment and mobility partly through channels that are policy-malleable. Personal characteristics linked with the attainment-related genetics we studied involve early-emerging cognitive and noncognitive skills, including learning to talk and read, act planfully, delay gratification, and get along with others (10, 16). These skills represent intervention targets in their own right, for example by policies and programs that safeguard perinatal development and provide enriching, stable family and educational environments (64). A significant contribution of our study is that the nongenetic social and material resources children inherit from their parents represent a further mechanism linking genetics and attainment over the life course. Policies and programs cannot change children’s genes, but they can help give them more of the resources that children who inherit more education-linked genetics tend to grow up with. Our findings suggest that such interventions could help close the gap. The next step is to find out precisely what those resources are.

Conclusion
A long-term goal of our sociogenomic research is to use genetics to reveal novel environmental intervention approaches to mitigating socioeconomic disadvantage. The analysis reported here takes one step toward enabling a study design to accomplish this. We found that measured genetics related to patterns of social attainment and mobility, partly through direct influences on individuals and partly through predicting the environments in which they grew up. Specifically, parents’ genetics influence the environments that give children their start in life, while children’s own genetics influence their social mobility across adult life. As we learn more about how genetics discovered in GWAS of education influence processes of human development that generate and maintain wealth and poverty, we can identify specific environments that shape those processes. Ultimately, this research approach can suggest interventions that change children’s environments to promote positive development across the life-course.

Sunday, July 01, 2018

The Omnigenic Model Of Complex Human Traits


quantamagazine |  The question most of genetics tries to answer is how genes connect to the traits we see. One person has red hair, another blonde hair; one dies at age 30 of Huntington’s disease, another lives to celebrate a 102nd birthday. Knowing what in the vast expanse of the genetic code is behind traits can fuel better treatments and information about future risks and illuminate how biology and evolution work. For some traits, the connection to certain genes is clear: Mutations of a single gene are behind sickle cell anemia, for instance, and mutations in another are behind cystic fibrosis.

But unfortunately for those who like things simple, these conditions are the exceptions. The roots of many traits, from how tall you are to your susceptibility to schizophrenia, are far more tangled. In fact, they may be so complex that almost the entire genome may be involved in some way, an idea formalized in a theory put forward last year.

Starting about 15 years ago, geneticists began to collect DNA from thousands of people who shared traits, to look for clues to each trait’s cause in commonalities between their genomes, a kind of analysis called a genome-wide association study (GWAS). What they found, first, was that you need an enormous number of people to get statistically significant results — one recent GWAS seeking correlations between genetics and insomnia, for instance, included more than a million people. 

Second, in study after study, even the most significant genetic connections turned out to have surprisingly small effects. The conclusion, sometimes called the polygenic hypothesis, was that multiple loci, or positions in the genome, were likely to be involved in every trait, with each contributing just a small part. (A single large gene can contain several loci, each representing a distinct part of the DNA where mutations make a detectable difference.)
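
In practice the polygenic hypothesis is operationalized as a polygenic score: a weighted sum of a person's allele counts across many loci, using GWAS effect-size estimates as the weights. A minimal sketch with invented numbers (hypothetical loci and effect sizes, not real GWAS output):

```python
# Minimal polygenic-score sketch: predicted trait contribution as a weighted sum
# of allele counts over many loci. All loci and effect sizes below are invented.

effect_sizes = {"rs001": 0.02, "rs002": -0.01, "rs003": 0.005}  # per-allele effects
genotypes = {"rs001": 2, "rs002": 0, "rs003": 1}                # allele counts (0, 1, or 2)

def polygenic_score(effect_sizes, genotypes):
    """Sum of (effect size x allele count) over every measured locus."""
    return sum(beta * genotypes.get(locus, 0)
               for locus, beta in effect_sizes.items())

print(polygenic_score(effect_sizes, genotypes))  # 0.045 (up to floating-point rounding)
```

A real score of this kind sums over hundreds of thousands or millions of loci, which is why each individual effect can be tiny while the aggregate still predicts something.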

How many loci that “multiple” description might mean was not defined precisely. One very early genetic mapping study in 1999 suggested that “a large number of loci (perhaps > 15)” might contribute to autism risk, recalled Jonathan Pritchard, now a geneticist at Stanford University. “That’s a lot!” he remembered thinking when the paper came out.

Over the years, however, what scientists might consider “a lot” in this context has quietly inflated. Last June, Pritchard and his Stanford colleagues Evan Boyle and Yang Li (now at the University of Chicago) published a paper about this in Cell that immediately sparked controversy, although it also had many people nodding in cautious agreement. The authors described what they called the “omnigenic” model of complex traits. Drawing on GWAS analyses of three diseases, they concluded that in the cell types that are relevant to a disease, it appears that not 15, not 100, but essentially all genes contribute to the condition. The authors suggested that for some traits, “multiple” loci could mean more than 100,000.

Evolution Is Cleverer Than You Are


theatlantic |  But chimeras are not just oddities. You surely know one. In pregnant women, fetal stem cells can cross the placenta to enter the mother’s bloodstream, where they may persist for years. If Mom gets pregnant again, the stem cells of her firstborn, still circulating in her blood, can cross the placenta in the other direction, commingling with those of the younger sibling. Heredity can thus flow “upstream,” from child to parent—and then over and down to future siblings.

The genome, Zimmer goes on to reveal, eludes tidy boundaries too. Forget the notion that your genome is just the DNA in your chromosomes. We have another genome, small but vital, in our cells’ mitochondria—the tiny powerhouses that supply energy to the cell. Though the mitochondrial genes are few, damage to them can lead to disorders of the brain, muscles, internal organs, sensory systems, and more. At fertilization, an embryo receives both chromosomes and mitochondria from the egg, and only chromosomes from the sperm. Mitochondrial heredity thus flows strictly through the maternal line; every boy is an evolutionary dead end, as far as mitochondria are concerned.

Beyond the genome are more surprises. Schoolchildren learn that Darwin’s predecessor, the great French naturalist Jean-Baptiste Lamarck, got heredity wrong when he suggested that traits acquired through experience—like the giraffe’s neck, elongated by straining and stretching to reach higher, perhaps tenderer, leaves—could be passed down. The biologist August Weismann famously gave the lie to such theories, which collectively are known as “soft” heredity. If Lamarckism were true, he said, chopping the tail off mice and breeding them, generation after generation, should eventually produce tailless mice. It didn’t. Lamarck wasn’t lurking in the details.

Recent research, however, is giving Lamarck a measure of redemption. A subtle regulatory system has been shown to silence or mute the effects of genes without changing the DNA itself. Environmental stresses such as heat, salt, toxins, and infection can trigger so-called epigenetic responses, turning genes on and off to stimulate or restrict growth, initiate immune reactions, and much more. These alterations in gene activity, which are reversible, can be passed down to offspring. They are hitchhikers on the chromosomes, riding along for a while, but able to hop on and off. Harnessing epigenetics, some speculate, could enable us to create Lamarckian crops, which would adapt to a disease in one or two generations and then pass the acquired resistance down to their offspring. If the disease left the area, so would the resistance.

All of these heredities—chromosomal, mitochondrial, epigenetic—still don’t add up to your entire you. Not even close. Every one of us carries a unique flora of hundreds if not thousands of microbes, each with its own genome, without which we cannot feel healthy—cannot be “us.” These too can be passed down from parent to child—but may also move from child to adult, child to child, stranger to stranger. Always a willing volunteer, Zimmer allowed a researcher to sample the microbes living in his belly-button lint. Zimmer’s “navelome” included 53 species of bacteria. One microbe had been known, until then, only from the Mariana Trench. “You, my friend,” the scientist said, “are a wonderland.” Indeed, we all are.

With this in mind, reconsider the ongoing effort to engineer heredity. The motto of the Second International Eugenics Congress, in 1921, was “Eugenics is the self-direction of human evolution.” Since then, controlling heredity has become technically much easier and philosophically more complicated. When, in the 1970s, the first genetic engineering made medical gene therapy feasible, many of its pioneers urged caution, lest some government try to create a genetic Fourth Reich. In particular, two taboos seemed commonsense: no enhancement, only therapy (thou shalt not create a master race); and no alterations in germ-line tissues, only in somatic cells (thou shalt not make heritable modifications).

Friday, June 22, 2018

DoD Ranks Top Threats From Synthetic Biology

npr  |  New genetic tools are making it easier and cheaper to engineer viruses and bacteria, and a report commissioned by the Department of Defense has now ranked the top threats posed by the rapidly advancing field of "synthetic biology."

One of the biggest concerns is the ability to recreate known viruses from scratch in the lab. That means a lab could make a deadly virus that is normally kept under lock and key, such as smallpox.

"Right now, recreating pretty much any virus can be done relatively easily. It requires a certain amount of expertise and resources and knowledge," says Michael Imperiale, a microbiologist at the University of Michigan who chaired the committee convened by the National Academies of Sciences, Engineering, and Medicine to assess the state of synthetic biology and offer advice to defense officials.

As an example of what's possible, Imperiale pointed to the recent and controversial creation of horsepox, a cousin of smallpox, in a Canadian laboratory. "These things can now be done," he said.

Another top danger listed in the report, which was released Tuesday, is making existing bacteria or viruses more dangerous. That could happen by, say, giving them antibiotic resistance or altering them so that they produce toxins or evade vaccines.




And one scenario pondered by the experts is the creation of microbes that would produce harmful biochemicals in humans while living on the skin or in the gut. This possibility, the report notes, "is of high concern because its novelty challenges potential mitigation options." Public health officials might not even recognize that they were witnessing a biological attack if the dangerous material was delivered to victims in such an unusual way.

All in all, the committee examined about a dozen different synthetic biology technologies that could be potentially misused. For each, they considered how likely it was to be usable as a weapon, how much expertise or resources would be needed, and how well governments would be able to recognize and manage an attack.

Thursday, June 14, 2018

Time and its Structure (Chronotopology)


intuition |  MISHLOVE: I should mention here, since you've used the term, that chronotopology is the name of the discipline which you founded, which is the study of the structure of time. 

MUSES: Do you want me to comment on that? 

MISHLOVE: Yes, please. 

MUSES: In a way, yes, but in a way I didn't found it. I was thinking cybernetics, for instance, was started formally by Norbert Wiener, but it began with the toilet tank that controlled itself. When I was talking with Wiener at Ravello, he happily agreed with this. 

MISHLOVE: The toilet tank. 

MUSES: He says, "Oh yes." The self-shutting-off toilet tank is the first cybernetic advance of mankind. 

MISHLOVE: Oh. And I suppose chronotopology has an illustrious beginning like this also. 

MUSES: Well, better than the toilet tank, actually. It has a better beginning than cybernetics. 

MISHLOVE: In effect, does it go back to the study of the ancient astrologers? 

MUSES: Well, it goes back to the study of almost all traditional cultures. The word astronomia, even the word mathematicus, meant someone who studied the stars, and in Kepler's sense they calculated the positions to know the qualities of time. But that's an independent hypothesis. The hypothesis of chronotopology is whether you have pointers of any kind -- ionospheric disturbances, planetary orbits, or whatnot -- independently of those pointers, time itself has a flux, has a wave motion, the object being to surf on time. 

MISHLOVE: Now, when you talk about the wave motion of time, I'm getting real interested and excited, because in quantum physics there's this notion that the underlying basis for the physical universe are these waves, even probability waves -- nonphysical, nonmaterial waves -- sort of underlying everything. 

MUSES: Very, very astute, because these waves are standing waves. Actually the wave-particle so-called paradox isn't that bad, when you consider that a particle is a wave packet, a packet of standing waves. That's why an electron can go through a plate and leave wavelike things. Actually our bodies are like fountains. The fountain has a shape only because it's being renewed every minute, and our bodies are being renewed. So we are standing waves; we're no exception. 

MISHLOVE: This deep structure of matter, where we can say what we really are in our bodies is not where we appear to be -- you're saying the same thing is true of time. It's not quite what it appears to be. 

MUSES: No, we're a part of this wave structure, and matter and energy all occur in waves, and time is the master control. I will give you an illustration of that. If you'll take a moment of time, this moment cuts through the entire physical universe as we're talking. It holds all of space in itself. But one point of space doesn't hold all of time. In other words, time is much bigger than space. 

MISHLOVE: That thought sort of made me gasp a second -- all of physical space in each now moment -- 

MUSES: Is contained in a point of time, which is a moment. And of course, a line of time is then an occurrence, and a wave of time is a recurrence. And then if you get out from the circle of time, which Nietzsche saw, the eternal recurrence -- if you break that, as we know we do, we develop, and then we're on a helix, because we come around but it's a little different each time. 

MISHLOVE: Well, now you're beginning to introduce the notion of symbols -- point, line, wave, helix, and so on. 

MUSES: Yes, the dimensions of time. 

MISHLOVE: One of the interesting points that you seem to make in your book is that symbols themselves -- words, pictures -- point to the deeper structure of things, including the deeper structure of time. 

MUSES: Yes. Symbols I would regard as pointers to their meanings, like revolving doors. There are some people, however, who have spent their whole lives walking in the revolving door and never getting out of it. 

Time and its Structure (Chronotopology)
Foreword by Charles A. Muses to "Communication, Organization, And Science" by Jerome Rothstein - 1958 

Your Genetic Presence Through Time


counterpunch |  The propagation through time of your personal genetic presence within the genetic sea of humanity can be visualized as a wave that arises out of the pre-conscious past before your birth, moves through the streaming present of your conscious life, and dissipates into the post-conscious future after your death.

You are a pre-conscious genetic concentration drawn out of the genetic diffusion of your ancestors. If you have children who survive you then your conscious life is the time of increase of your genetic presence within the living population. Since your progeny are unlikely to reproduce exponentially, as viruses and bacteria do, your post-conscious genetic presence is only a diffusion to insignificance within the genetic sea of humanity.
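
The "diffusion to insignificance" has a simple expected-value form: on average, your autosomal contribution to a descendant halves with each generation, so a descendant n generations removed carries roughly (1/2)^n of your genome in expectation. A quick sketch of that decay (an idealized average; after eight or ten generations the real contribution comes in discrete chromosome segments and can be exactly zero):

```python
# Expected share of your autosomal genome carried by a descendant n generations
# removed: (1/2)**n on average. Idealized illustration; actual inheritance is
# chunked into segments and can drop to zero after enough generations.
for n in range(1, 11):
    print(f"generation {n:2d}: {0.5 ** n:.4%}")
```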

During your conscious life, you develop a historical awareness of your pre-conscious past, with a personal interest that fades with receding generations. Also during your conscious life, you can develop a projective concern about your post-conscious future, with a personal interest that fades with succeeding generations and with increasing predictive uncertainty.

Your conscious present is the sum of: your immediate conscious awareness, your reflections on your prior conscious life, your historical awareness of your pre-conscious past, and your concerns about your post-conscious future.

Your time of conscious present becomes increasingly remote in the historical awareness of your succeeding generations.

Your loneliness in old age is just your sensed awareness of your genetic diffusion into the living population of your conscious present and post-conscious future.

Tuesday, June 12, 2018

Smarter ____________ WILL NOT Take You With Them....,



nautilus  |  When it comes to artificial intelligence, we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us. In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after. But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated.

AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.

But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.

It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability.

The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.
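
The "100 standard deviations" figure is a back-of-envelope extrapolation, not a measurement. The rough logic: if a trait's genetic variance is spread across N loci of comparable small effect, each effect is on the order of 1/sqrt(N) standard deviations, so flipping all N loci to the favorable variant shifts the trait by about N * (1/sqrt(N)) = sqrt(N) standard deviations; with N around 10,000 that is about 100 SD. A sketch of that arithmetic under these (strong) additivity assumptions:

```python
# Back-of-envelope reconstruction of the ~100 SD claim under a crude additive model.
# Assumes ~10,000 loci of equal effect jointly explaining the trait's genetic variance;
# real genetic architecture is not this tidy.
import math

N = 10_000                                 # assumed number of relevant loci
per_locus_effect_sd = 1 / math.sqrt(N)     # effect of one locus, in SD units
max_shift_sd = N * per_locus_effect_sd     # set every locus to the favorable variant

print(max_shift_sd)                        # 100.0 standard deviations
```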

 

Proposed Policies For Advancing Embryonic Cell Germline-Editing Technology


niskanencenter |  In a previous post, I touched on the potential social and ethical consequences that will likely emerge in the wake of Dr. Shoukhrat Mitalipov’s recent experiment in germline-edited embryos. The short version: there’s probably no stopping the genetic freight train. However, there are steps we can take to minimize the potential costs, while capitalizing on the many benefits these advancements have to offer us. In order to do that, however, we need to turn our attention away from hyperbolic rhetoric of “designer babies” and focus on the near-term practical considerations—mainly, how we will govern the research, development, and application of these procedures.

Before addressing the policy concerns, however, it’s important to understand the fundamentals of what is being discussed in this debate. In the previous blog, I noted the difference between somatic cell editing and germline editing—one of the major ethical faultlines in this issue space. In order to have a clear perspective of the future possibilities, and current limitations, of genetic modification, let’s briefly examine how CRISPR actually works in practice.

CRISPR stands for “clustered regularly interspaced short palindromic repeats”—a reference to segments of DNA that function as a defense used by bacteria to ward off foreign infections. That defense system essentially targets specific patterns of DNA in a virus, bacterium, or other threat and destroys it. This approach uses Cas9—an RNA-guided protein—to search through a cell’s genetic material until it finds a genetic sequence that matches the sequence programmed into its guide RNA. Once it finds its target, the protein cuts both strands of the DNA helix. Repair enzymes can then heal the break, or the gap can be filled using new genetic information introduced into the sequence. Conceptually, we can think of CRISPR as the geneticist’s variation of a “surgical laser knife, which allows a surgeon to cut out precisely defective body parts and replace them with new or repaired ones.”
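
As a purely conceptual picture of the "search for a matching sequence" step (not a biologically faithful model: real Cas9 targeting requires an adjacent PAM motif such as NGG, tolerates some mismatches, and acts on chromatin rather than on a text string), the matching can be imagined as a substring search:

```python
# Toy picture of CRISPR-Cas9 target search as string matching. Not a design tool:
# real targeting involves an ~20-nt guide, mismatch tolerance, and chromatin context.

def find_cut_sites(genome, guide):
    """Indices where the guide matches exactly and is followed by an N-G-G PAM."""
    sites = []
    for i in range(len(genome) - len(guide) - 3 + 1):
        if genome[i:i + len(guide)] == guide:
            # PAM check: any base, then "GG", immediately 3' of the match
            if genome[i + len(guide) + 1:i + len(guide) + 3] == "GG":
                sites.append(i)
    return sites

genome = "ATGCCGTACGTTAGGCTAACGTTAGGCAGG"  # hypothetical sequence
guide = "ACGTTAGGC"                        # hypothetical short guide (real guides ~20 nt)
print(find_cut_sites(genome, guide))       # [18]: only the match followed by a PAM is cut
```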

The technology is still cutting edge, and most researchers are still trying to get a handle on the technical difficulties associated with its use. Right now, we’re still in the Stone Age of genetic research. Even though we’ve made significant advancements in recent years, we’re still a long, long way from editing the IQs of our children on demand. That technology is much further in the future, and some doubt that we’ll ever be able to “program” inheritable traits into our individual genomes. In short, don’t expect any superhumanly intelligent, disease-resistant super soldiers any time soon.

The Parallels Between Artificial Intelligence and Genetic Modification
There are few technologies that inspire as much fantastical embellishment in popular media as genetic modification. In fact, the only technology that comes close—and indeed, actually parallels the rhetoric quite closely—is artificial intelligence (AI).

Monday, June 11, 2018

Office 365 CRISPR Editing Suite


gizmodo |  The gene-editing technology CRISPR could very well one day rid the world of its most devastating diseases, allowing us to simply edit away the genetic code responsible for an illness. One of the things standing in the way of turning that fantasy into reality, though, is the problem of off-target effects. Now Microsoft is hoping to use artificial intelligence to fix this problem. 

You see, CRISPR is fawned over for its precision. More so than earlier genetic technologies, it can accurately target and alter a tiny fragment of genetic code. But it’s still not always as accurate as we’d like it to be. Thoughts on how often this happens vary, but at least some of the time, CRISPR makes changes to DNA it was intended to leave alone. Depending on what those changes are, they could inadvertently result in new health problems, such as cancer.

Scientists have long been working on ways to fine-tune CRISPR so that fewer of these unintended effects occur. Microsoft thinks that artificial intelligence might be one way to do it. Working with computer scientists and biologists from research institutions across the U.S., the company has developed a new tool called Elevation that predicts off-target effects when editing genes with CRISPR.

It works like this: If a scientist is planning to alter a specific gene, they enter its name into Elevation. The CRISPR system is made up of two parts, a protein that does the cutting and a synthetic guide RNA designed to match a DNA sequence in the gene they want to edit. Different guides can have different off-target effects depending on how they are used. Elevation will suggest which guide is least likely to result in off-target effects for a particular gene, using machine learning to figure it out. It also provides general feedback on how likely off-target effects are for the gene being targeted. The platform bases its learning both on Microsoft research and publicly available data about how different genetic targets and guides interact. 
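
Elevation's actual scoring is a trained machine-learning model described in the paper; as a much cruder, hypothetical stand-in that only conveys the idea of ranking guides by off-target liability, one could count near-matching sites elsewhere in a genome and prefer the guide with the fewest:

```python
# Crude, hypothetical stand-in for off-target ranking (NOT Elevation's model):
# score each candidate guide by the number of genomic sites that match it with
# at most a couple of mismatches, then prefer guides with fewer such sites.

def mismatches(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def near_match_count(genome, guide, max_mismatches=2):
    """Count sites (including the intended one) within max_mismatches of the guide."""
    k = len(guide)
    return sum(mismatches(genome[i:i + k], guide) <= max_mismatches
               for i in range(len(genome) - k + 1))

def rank_guides(genome, candidate_guides):
    """Sort candidates from fewest to most potential off-target sites."""
    return sorted(candidate_guides, key=lambda g: near_match_count(genome, g))

genome = "ATGCCGTACGTTAGGCTAACGTTAGGCAGGTTACCGGA"  # hypothetical sequence
print(rank_guides(genome, ["ACGTTAGGC", "TTACCGGA"]))
```

A real tool has to model which mismatches Cas9 actually tolerates and where, which is exactly the part Elevation learns from experimental data.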

The work is detailed in a paper published Wednesday in the journal Nature Biomedical Engineering. The tool is publicly available for researchers to use for free. It works alongside a tool released by Microsoft in 2016 called Azimuth that predicts on-target effects.

There is lots of debate over how problematic the off-target effects of CRISPR really are, as well as over how to fix them. Microsoft’s new tool, though, will certainly be a welcome addition to the toolbox. Over the past year, Microsoft has doubled down on efforts to use AI to attack health care problems.

Who Will Have Access To Advanced Reproductive Technology?



futurism |  In November 2017, a baby named Emma Gibson was born in the state of Tennessee. Her birth, to a 25-year-old woman, was fairly typical, but one aspect made her story unique: she was conceived 24 years prior from anonymous donors, when Emma’s mother was just a year old.  The embryo had been frozen for more than two decades before it was implanted into her mother’s uterus and grew into the baby who would be named Emma.

Most media coverage hailed Emma’s birth as a medical marvel, an example of just how far reproductive technology has come in allowing people with fertility issues to start a family.

Yet, the news held a small detail that gave others pause. The organization that provided baby Emma’s embryo to her parents, the National Embryo Donation Center (NEDC), has policies that state they will only provide embryos to married, heterosexual couples, in addition to several health requirements. Single women and non-heterosexual couples are not eligible.

In other industries, this policy would effectively be labeled as discriminatory. But for reproductive procedures in the United States, such a policy is completely legal. Because insurers do not consider reproductive procedures to be medically necessary, the U.S. is one of the few developed nations without formal regulations or ethical requirements for fertility medicine. This loose legal climate also gives providers the power to provide or deny reproductive services at will.

The future of reproductive technology has many excited about its potential to allow biological birth for those who might not otherwise have been capable of it. Experiments going on today, such as testing functional 3D-printed ovaries and incubating animal fetuses in artificial wombs, seem to suggest that future is well on its way, that fertility medicine has already entered the realm of what was once science fiction.

Yet, who will have access to these advances? Current trends seem to suggest that this will depend on the actions of regulators and insurance agencies, rather than the people who are affected the most.

Sunday, June 10, 2018

Cognitive Enhancement of Other Species?



singularityhub |  Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.

Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans. 

Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.

The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
 
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some of which are well below our own and others of which are superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.

Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.

 

AIPAC Powered By Weak, Shameful, American Ejaculations

All filthy weird pathetic things belongs to the Z I O N N I I S S T S it’s in their blood pic.twitter.com/YKFjNmOyrQ — Syed M Khurram Zahoor...