Showing posts with label Noo/Nano/Geno/Thermo. Show all posts

Wednesday, August 30, 2017

The Weaponization of Artificial Intelligence


acq |  Recognizing that no machine—and no person—is truly autonomous in the strict sense of the word, we will sometimes speak of autonomous capabilities rather than autonomous systems.2 The primary intellectual foundation for autonomy stems from artificial intelligence (AI), the capability of computer systems to perform tasks that normally require human intelligence (e.g., perception, conversation, decision-making).

Advances in AI are making it possible to cede to machines many tasks long regarded as impossible for machines to perform. Intelligent systems aim to apply AI to a particular problem or domain—the
implication being that the system is programmed or trained to operate within the bounds of a defined knowledge base. Autonomous function is at a system level rather than a component level. The study considered two categories of intelligent systems: those employing autonomy at rest and those employing autonomy in motion. In broad terms, systems incorporating autonomy at rest operate virtually, in software, and include planning and expert advisory systems, whereas systems incorporating autonomy in motion have a presence in the physical world and include robotics and autonomous vehicles. 

As illustrated in Figure 1, many DoD and commercial systems are already operating with varying kinds of autonomous capability. Robotics typically adds sensors, actuators, and mobility to intelligent systems. While early robots were largely automated, recent advances in AI are enabling increases in autonomous functionality.

One of the less well-known ways that autonomy is changing the world is in applications that include data compilation, data analysis, web search, recommendation engines, and forecasting. Given the limitations of human abilities to rapidly process the vast amounts of data available today, autonomous systems are now required to find trends and analyze patterns. There is no need to solve the long-term AI problem of general intelligence in order to build high-value applications that exploit limited-scope autonomous capabilities dedicated to specific purposes. DoD’s nascent Memex program is one of many examples in this category.3

Rapid global market expansion for robotics and other intelligent systems to address consumer and industrial applications is stimulating increasing commercial investment and delivering a diverse array of products. At the same time, autonomy is being embedded in a growing array of software systems to enhance speed and consistency of decision-making, among other benefits. Likewise, governmental entities, motivated by economic development opportunities in addition to security missions and other public sector applications, are investing in related basic and applied research.

Applications include commercial endeavors, such as IBM’s Watson, the use of robotics in ports and mines worldwide, autonomous vehicles (from autopilot drones to self-driving cars), automated logistics and supply chain management, and many more. Japanese and U.S. companies invested more than $2 billion in autonomous systems in 2014, led by Apple, Facebook, Google, Hitachi, IBM, Intel, LinkedIn, NEC, Yahoo, and Twitter.4

A vibrant startup ecosystem is spawning advances in response to commercial market opportunities; innovations are occurring globally, as illustrated in Figure 2 (top). Startups are targeting opportunities that drive advances in critical underlying technologies. As illustrated in Figure 2 (bottom), machine learning—both application-specific and general purpose—is of high interest. The market-pull for machine learning stems from a diverse array of applications across an equally diverse spectrum of industries, as illustrated in Figure 3.


Sunday, August 27, 2017

Internet: Subverting Democracy? Nah.., Subverting Status Quo Hegemony? Maybe...,


TheNewYorker |  On the night of November 7, 1876, Rutherford B. Hayes’s wife, Lucy, took to her bed with a headache. The returns from the Presidential election were trickling in, and the Hayeses, who had been spending the evening in their parlor, in Columbus, Ohio, were dismayed. Hayes himself remained up until midnight; then he, too, retired, convinced that his Democratic opponent, Samuel J. Tilden, would become the next President.

Hayes had indeed lost the popular vote, by more than two hundred and fifty thousand ballots. And he might have lost the Electoral College as well had it not been for the machinations of journalists working in the shady corners of what’s been called “the Victorian Internet.”

Chief among the plotters was an Ohioan named William Henry Smith. Smith ran the western arm of the Associated Press, and in this way controlled the bulk of the copy that ran in many small-town newspapers. The Western A.P. operated in tight affiliation—some would say collusion—with Western Union, which exercised a near-monopoly over the nation’s telegraph lines. Early in the campaign, Smith decided that he would employ any means necessary to assure a victory for Hayes, who, at the time, was serving a third term as Ohio’s governor. In the run-up to the Republican National Convention, Smith orchestrated the release of damaging information about the Governor’s rivals. Then he had the Western A.P. blare Hayes’s campaign statements and mute Tilden’s. At one point, an unflattering piece about Hayes appeared in the Chicago Times, a Democratic paper. (The piece claimed that Hayes, who had been a general in the Union Army, had accepted money from a soldier to give to the man’s family, but had failed to pass it on when the soldier died.) The A.P. flooded the wires with articles discrediting the story.

Once the votes had been counted, attention shifted to South Carolina, Florida, and Louisiana—states where the results were disputed. Both parties dispatched emissaries to the three states to try to influence the Electoral College outcome. Telegrams sent by Tilden’s representatives were passed on to Smith, courtesy of Western Union. Smith, in turn, shared the contents of these dispatches with the Hayes forces. This proto-hack of the Democrats’ private communications gave the Republicans an obvious edge. Meanwhile, the A.P. sought and distributed legal opinions supporting Hayes. (Outraged Tilden supporters took to calling it the “Hayesociated Press.”) As Democrats watched what they considered to be the theft of the election, they fell into a funk.

“They are full of passion and want to do something desperate but hardly know how to,” one observer noted. Two days before Hayes was inaugurated, on March 5, 1877, the New York Sun appeared with a black border on the front page. “These are days of humiliation, shame and mourning for every patriotic American,” the paper’s editor wrote.

History, Mark Twain is supposed to have said, doesn’t repeat itself, but it does rhyme. Once again, the President of the United States is a Republican who lost the popular vote. Once again, he was abetted by shadowy agents who manipulated the news. And once again Democrats are in a finger-pointing funk.

Journalists, congressional committees, and a special counsel are probing the details of what happened last fall. But two new books contend that the large lines of the problem are already clear. As in the eighteen-seventies, we are in the midst of a technological revolution that has altered the flow of information. Now, as then, just a few companies have taken control, and this concentration of power—which Americans have acquiesced to without ever really intending to, simply by clicking away—is subverting our democracy.

Thirty years ago, almost no one used the Internet for anything. Today, just about everybody uses it for everything. Even as the Web has grown, however, it has narrowed. Google now controls nearly ninety per cent of search advertising, Facebook almost eighty per cent of mobile social traffic, and Amazon about seventy-five per cent of e-book sales. Such dominance, Jonathan Taplin argues, in “Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy” (Little, Brown), is essentially monopolistic. In his account, the new monopolies are even more powerful than the old ones, which tended to be limited to a single product or service. Carnegie, Taplin suggests, would have been envious of the reach of Mark Zuckerberg and Jeff Bezos.

Friday, August 25, 2017

Carbon Based: Carbon Fiber Chassis Electric Vehicle on Goodyear 360's


PopularMechanics |  The tire of the future is a ball. An unbelievably sophisticated, nature-inspired, magnetic-levitation-infused ball. Goodyear just revealed its vision for a concept tire that's intended for the self-driving car of tomorrow. It's called Eagle-360, and it's totally round.

Why put a car on a quartet of glorified mouse trackballs? Goodyear says the 3D-printed tires will have a larger contact patch with the ground, allowing for more control. The design lets the tires hurl water away via centrifugal force. But the big reason is that spherical tires can be essentially omnidirectional.


PopularMechanics |  Stronger than steel and a fraction of the weight, carbon fiber is a brilliant invention. Has been for decades. Junior Johnson was building rule-bending Nascar racers out of the stuff back in the '80s. But even with all that time to come up with new sourcing and production methods, carbon fiber just won't stop being expensive. The cheapest new car with a carbon-fiber tub, the Alfa Romeo 4C, is sized for Stuart Little, yet costs as much as a Mercedes E-Class. And the real chariots of the carbon gods, the McLarens and Koenigseggs and Lamborghini Aventadors of the world, are strictly six-figure propositions. We still haven't managed to mass-produce the stuff at anything approaching the price of aluminum, let alone steel. Why hasn't anyone figured out how to make this stuff cost less?

That question is why I'm here in Sant'Agata Bolognese, Italy, at Lamborghini's carbon-fiber facility, laboriously squeegeeing air bubbles out of a sheet of carbon weave. I want to ask the guys in (black) lab coats who make this material: Why aren't we rolling around in carbon-monocoque Hyundais?

Carbon Based: Nanotubes and Vantablack


wikipedia |  Carbon nanotubes (CNTs) are allotropes of carbon with a cylindrical nanostructure. These cylindrical carbon molecules have unusual properties, which are valuable for nanotechnology, electronics, optics and other fields of materials science and technology. Owing to the material's exceptional strength and stiffness, nanotubes have been constructed with length-to-diameter ratio of up to 132,000,000:1,[1] significantly larger than for any other material.

In addition, owing to their extraordinary thermal conductivity, mechanical, and electrical properties, carbon nanotubes find applications as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts or Damascus steel.[2][3]

Nanotubes are members of the fullerene structural family. Their name is derived from their long, hollow structure with the walls formed by one-atom-thick sheets of carbon, called graphene. These sheets are rolled at specific and discrete ("chiral") angles, and the combination of the rolling angle and radius decides the nanotube properties; for example, whether the individual nanotube shell is a metal or semiconductor. Nanotubes are categorized as single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs). Individual nanotubes naturally align themselves into "ropes" held together by van der Waals forces, more specifically, pi-stacking.

Applied quantum chemistry, specifically orbital hybridization, best describes chemical bonding in nanotubes. The chemical bonding of nanotubes involves entirely sp2-hybrid carbon atoms. These bonds, which are similar to those of graphite and stronger than those found in alkanes and diamond (which employ sp3-hybrid carbon atoms), provide nanotubes with their unique strength.
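The metal-versus-semiconductor distinction mentioned above has a well-known first-order form from tight-binding theory: a (n, m) nanotube is metallic when n - m is divisible by 3, and semiconducting otherwise (very small-diameter tubes can deviate due to curvature effects). A minimal sketch; the function name is ours for illustration:

```python
# Classify a carbon nanotube by its chiral indices (n, m).
# First-order tight-binding rule: metallic when (n - m) is a
# multiple of 3, otherwise semiconducting.

def nanotube_type(n: int, m: int) -> str:
    if (n - m) % 3 == 0:
        return "metallic"
    return "semiconducting"

# Armchair tubes (n == m) always satisfy the rule and are metallic;
# zigzag tubes (m == 0) can be either, depending on n.
print(nanotube_type(5, 5))   # armchair
print(nanotube_type(9, 0))   # zigzag, 9 divisible by 3
print(nanotube_type(10, 0))  # zigzag, 10 not divisible by 3
```

By this rule roughly one third of possible chiralities are metallic, which is why bulk-grown nanotube samples are mixtures of metallic and semiconducting tubes.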

wikipedia |   Vantablack is a substance made of vertically aligned carbon nanotube arrays[1] and is one of the blackest artificial substances[2] known, absorbing up to 99.965% of radiation in the visible spectrum.[3][4]

Vantablack is composed of a forest of vertical tubes which are "grown" on a substrate using a modified chemical vapor deposition process (CVD). When light strikes Vantablack, instead of bouncing off, it becomes trapped and is continually deflected among the tubes, eventually becoming absorbed and dissipating into heat.[1]

Vantablack was an improvement over similar substances developed at the time. Vantablack absorbs 99.965% of visible light. It can be created at 400 °C (752 °F); NASA had previously developed a similar substance, but one that could only be grown at 750 °C (1,380 °F). For this reason, Vantablack can be grown on materials that cannot withstand higher temperatures.[1]
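Put in terms of reflected light, the 99.965% absorption figure above means only 0.035% of visible light comes back out. A quick sketch of the arithmetic; the 2.5% reflectance assumed here for ordinary matte black paint is an illustrative figure, not from the source:

```python
# Fraction of visible light reflected by Vantablack, from the
# absorption figure quoted above.
vantablack_absorbed = 0.99965
vantablack_reflected = 1 - vantablack_absorbed  # 0.00035, i.e. 0.035%

# For scale: assume a typical matte black paint reflects ~2.5% of
# visible light (illustrative assumption, not a sourced figure).
black_paint_reflected = 0.025

ratio = black_paint_reflected / vantablack_reflected
print(round(ratio))  # prints 71
```

Under that assumption, ordinary black paint bounces back roughly seventy times more visible light than Vantablack, which is why coated objects read to the eye as featureless voids.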

The outgassing and particle fallout levels of Vantablack are low. The high levels in similar substances in the past had prevented their commercial usefulness. Vantablack also has greater resistance to mechanical vibration, and has greater thermal stability.[6]


Carbon-Based: Graphene


wikipedia |  Graphene (/ˈɡræf.iːn/)[1][2] is an allotrope of carbon in the form of a two-dimensional, atomic-scale, hexagonal lattice in which one atom forms each vertex. It is the basic structural element of other allotropes, including graphite, charcoal, carbon nanotubes and fullerenes. It can be considered as an indefinitely large aromatic molecule, the ultimate case of the family of flat polycyclic aromatic hydrocarbons.

Graphene has many unusual properties. It is about 200 times stronger than the strongest steel. It efficiently conducts heat and electricity and is nearly transparent.[3] Graphene shows a large and nonlinear diamagnetism,[4] greater than graphite and can be levitated by neodymium magnets.

Scientists have theorized about graphene for years. It has unintentionally been produced in small quantities for centuries, through the use of pencils and other similar graphite applications. It was originally observed in electron microscopes in 1962, but it was studied only while supported on metal surfaces.[5] The material was later rediscovered, isolated, and characterized in 2004 by Andre Geim and Konstantin Novoselov at the University of Manchester.[6][7] Research was informed by existing theoretical descriptions of its composition, structure, and properties.[8] This work resulted in the two winning the Nobel Prize in Physics in 2010 "for groundbreaking experiments regarding the two-dimensional material graphene."[9]

The global market for graphene reached $9 million by 2012 with most sales in the semiconductor, electronics, battery energy, and composites industries.[10]
 
NewYorker |  Perhaps the most expansive thinker about the material’s potential is Tomas Palacios, a Spanish scientist who runs the Center for Graphene Devices and 2D Systems, at M.I.T. Rather than using graphene to improve existing applications, as Tour’s lab mostly does, Palacios is trying to build devices for a future world.

At thirty-six, Palacios has an undergraduate’s reedy build and a gentle way of speaking that makes wildly ambitious notions seem plausible. As an electrical engineer, he aspires to “ubiquitous electronics,” increasing “by a factor of one hundred” the number of electronic devices in our lives. From the perspective of his lab, the world would be greatly enhanced if every object, from windows to coffee cups, paper currency, and shoes, were embedded with energy harvesters, sensors, and light-emitting diodes, which allowed them to cheaply collect and transmit information. “Basically, everything around us will be able to convert itself into a display on demand,” he told me, when I visited him recently. Palacios says that graphene could make all this possible; first, though, it must be integrated into those coffee cups and shoes.

As Mody pointed out, radical innovation often has to wait for the right environment. “It’s less about a disruptive technology and more about moments when the linkages among a set of technologies reach a point where it’s feasible for them to change lots of practices,” he said. “Steam engines had been around a long time before they became really disruptive. What needed to happen were changes in other parts of the economy, other technologies linking up with the steam engine to make it more efficient and desirable.”

For Palacios, the crucial technological complement is an advance in 3-D printing. In his lab, four students were developing an early prototype of a printer that would allow them to create graphene-based objects with electrical “intelligence” built into them. Along with Marco de Fazio, a scientist from STMicroelectronics, a firm that manufactures ink-jet print heads, they were clustered around a small, half-built device that looked a little like a Tinkertoy contraption on a mirrored base. “We just got the printer a couple of weeks ago,” Maddy Aby, a ponytailed master’s student, said. “It came with a kit. We need to add all the electronics.” She pointed to a nozzle lying on the table. “This just shoots plastic now, but Marco gave us these print heads that will print the graphene and other types of inks.”

Monday, August 21, 2017

Human Design: Humans Can Look And Perform Any Way You Want Them To


rantt |  But as you saw, eye color and hair color are controlled by a lot more than a few genes and those genes can be altered by everything from hormones in the womb to environmental pollutants. Our genome didn’t evolve for easy, modular editing in the future. It evolved in response to diet and stressors in our ancient past. If you wanted to make sure that your child was 6' 3" tall, weighed no more than 200 pounds, and was really good at football, that’s going to involve total 24/7 control over thousands of genes and the child’s environment from the moment of conception.

Maybe this could be possible one day, but it certainly won’t be any day in the foreseeable future, and it definitely wouldn’t be practical if it was ever possible, or even remotely advisable. The kind of eugenic thought which gripped the world in the early 20th century and kicked off the Holocaust was actually based on a profound misunderstanding of statistics, and a very pseudoscientific approach to evolution. Basically, Francis Galton and his followers mistook more people becoming literate and educated for a rise in mediocrity through a mathematical concept known as regression toward the mean, triggering a wave of racist and classist alarmism.

Eugenicists were worried that their “superior” genes were being corrupted by interbreeding between classes and races, that genetic diversity was just dragging them down towards brutish mediocrity. It’s a train of thought you can still find resonating among today’s racists, or ethno-nationalists as they like to call themselves. But this worry reveals a profound lack of scientific understanding that’s fairly critical to any future effort to modify DNA, and shows they’re using the wrong ways to measure human progress.

Genetic diversity is essential for any species to survive and adapt to its new environment. Without a significant enough library of genes that can help us deal with a future stressor, we may be unable to cope with drastic changes in diet or new diseases that come at us. Similarity in genes results in severe inbreeding, making us a lot more vulnerable to an environmental blow that could kill off an entire population without giving it a chance to develop any useful mutations. History is replete with examples of inbred organisms dying off when climates changed or during disease outbreaks.

Ultimately, this is why even in a far future where we can customize children, we have to be extremely mindful of allowing diversity and not messing with too many genes which could one day contribute to disease resistance, or give us the ability to adapt to a new diet. Nature doesn’t necessarily care if we’re getting high IQ scores because those are fairly arbitrary, and are much closer correlated to household values and income than biology. It’s also completely disinterested in our athletic prowess or how conventionally attractive we are to a particular culture. It only cares about reproduction rates.

In fact, in the grandest scheme of them all, nature is a series of trials which test random organisms with random genetic make-up in different climates with different resources and against different stressors. The ones able to live long enough to reproduce and pass down their genes are successful, even if they don’t end up with long lives and building civilizations that explore new worlds. Evolutionarily speaking, we’re pretty successful, but nowhere near as successful as insects or bacteria which typically live fast, die young, and are constantly reproducing in large numbers.

Generative Design: The World Can Look and Perform Any Way You Want It To


newatlas |  One little button in a piece of CAD software is threatening to fundamentally change the way we design, as well as what the built world looks like in the near future. Inspired by evolution, generative design produces extremely strong, efficient and lightweight shapes. And boy do they look weird.

Straight lines, geometric curves, solid surfaces. The constructed world as we know it is made out of them. Why? Nature rarely uses straight lines. Evolution itself is one of the toughest product tests imaginable, and you don't have a straight bone in your body, no matter how much you might like one. 

Simple shapes are popular in human designs because they're easy. Easy to design, especially with CAD, and easy to manufacture in a world where manufacturing means taking a big block or sheet of something, and machining a shape out of it, or pouring metals into a mold.

But manufacturing is starting to undergo a revolutionary change as 3D printing moves toward commercially competitive speeds and costs. And where traditional manufacturing incentivizes the simplest shapes, additive manufacturing is at its fastest and cheapest when you use the least possible material for the job.

That's a really difficult way for a human to design – but fairly easy, as it turns out, for a computer. And super easy for a giant network of computers. And now, exceptionally easy for a human designer with access to Autodesk Fusion 360 software, which has it built right in.

Saturday, December 10, 2016

Permission? To Do GOD's Work?



nature |  Scientists in London have been granted permission to edit the genomes of human embryos for research, UK fertility regulators announced. The 1 February approval by the UK Human Fertilisation and Embryology Authority (HFEA) represents the world's first endorsement of such research by a national regulatory authority.

"It’s an important first. The HFEA has been a very thoughtful, deliberative body that has provided rational oversight of sensitive research areas, and this establishes a strong precedent for allowing this type of research to go forward," says George Daley, a stem-cell biologist at Boston Children's Hospital in Massachusetts.

The HFEA has approved an application by developmental biologist Kathy Niakan, at the Francis Crick Institute in London, to use the genome-editing technique CRISPR–Cas9 in healthy human embryos. Niakan’s team is interested in early development, and it plans to alter genes that are active in the first few days after fertilization. The researchers will stop the experiments after seven days, after which the embryos will be destroyed.

Friday, December 09, 2016

Experiences Leave Behind Epigenetic Traces in Our Genetic Material



phys.org |  An ideological dispute is taking place in biology. And it's about a big topic that's central to everything: heredity. In his epoch-making book On the Origin of Species of 1859, Darwin wrote of the reigning ignorance about how differences between individuals come about. It was only with 'modern evolutionary synthesis' in the 1940s that people became convinced that heredity functions through genetics – in other words, that the characteristics of living creatures are passed on to the next generations through their genetic substance, DNA.

This perspective was helpful in providing a focus for research in the ensuing decades, which brought about extraordinary discoveries. As a result, many aspects of the form and function of living creatures can now be explained. But already in the 1950s, different observations called into question the seemingly exclusive control of the genes. For example, maize kernels can have different colours even if their DNA sequence is identical.

Plants remember aridity
Further investigations brought to light the fact that when individuals with identical genetic material have a different outward appearance, this can be traced back to different degrees of activity on the part of the genes. Whether a particular section of DNA is active or not – i.e., whether it is read – depends to a decisive degree on how densely packed the DNA is.

This packing density is influenced by several so-called epigenetic mechanisms. They form a complex machinery that can affix or detach tiny chemical attachments to the DNA. Here, the rule applies that the tighter packed the DNA, the more difficult it is to read – and this means that a particular gene will be more inactive.

Living creatures can adjust to a volatile environment by steering their epigenetic mechanisms. In this manner, for example, the epigenetic machinery can ensure that plants can deal better with a hot or arid climate if at some point they have already had to live through a similar situation. So in this sense, the epigenetic markings in the genetic material form a kind of 'stress memory' of the plants. This much is today a matter of consensus among biologists.

Doubts on heredity over generations
Several studies, however, suggest that the descendants of stressed plants are also better prepared against the dangers already faced by their ancestors. "However, these studies are a matter of controversial debate," says Ueli Grossniklaus, the director of the Department of Plant and Microbial Biology at the University of Zurich. Like many other epigeneticists who are involved in deciphering these mechanisms, he believes that, "since the evidence is patchy, we can't yet say to what degree acquired characteristics can be transmitted in stable form over several generations." So it still remains to be proven whether epigenetics actually brings organisms long-lasting advantages and thus plays a role in evolution. It's an attractive idea, thinks Grossniklaus, but it's still to be demonstrated.

It's not just in plants that results on the heredity of epigenetic markings are causing a stir – the same is true in mice. In order to investigate the possible long-term effects of severe childhood trauma, for example, the research group led by Isabelle Mansuy, a professor of neuro-epigenetics at the University of Zurich and ETH Zurich, has been taking mouse offspring away from their mothers for three hours each day, just a few days after being born.




This is Not a Question to be Left to Scientists Alone - ROTFLMBAO!!!


telegraph |  An ethical debate over how long human embryos can be grown in a lab has erupted after Cambridge University announced it had allowed fertilised eggs to mature for 13 days – just one day short of the legal limit.

In groundbreaking research, scientists invented a thick soup of nutrients which mimics conditions in the womb, and keeps an embryo alive for days longer than it could previously survive without being implanted into a mother.

Currently UK law bans laboratories from growing embryos for longer than 14 days because after two weeks, twins can no longer form, and so it is deemed that an individual has started to develop.

But scientists have now suggested that the deadline should be extended to allow for more research into the development of embryos.

Professor Magdalena Zernicka-Goetz, who led the research, suggested it would be useful to extend the limit by a few days, while Professor Robert Lovell-Badge of London’s Francis Crick Institute said an extra week might be useful, but admitted it could ‘open a can of worms.’

“Proposing to extend the 14-day limit might be opening a can of worms, but would it lead to Pandora’s box, or a treasure chest of valuable information?” said Professor Lovell-Badge.

 “This is not a question to be left to scientists alone.”

Where in the World Will the First CRISPR Baby be Born?


nature |  They are meeting in China; they are meeting in the United Kingdom; and they met in the United States last week. Around the world, scientists are gathering to discuss the promise and perils of editing the genome of a human embryo. Should it be allowed — and if so, under what circumstances?

The meetings have been prompted by an explosion of interest in the powerful technology known as CRISPR/Cas9, which has brought unprecedented ease and precision to genetic engineering. This tool, and others like it, could be used to manipulate the DNA of embryos in a dish to learn about the earliest stages of human development. In theory, genome editing could also be used to 'fix' the mutations responsible for heritable human diseases. If done in embryos, this could prevent such diseases from being passed on.

The prospects have prompted widespread concern and discussion among scientists, ethicists and patients. Fears loom that if genome editing becomes acceptable in the clinic to stave off disease, it will inevitably come to be used to introduce, enhance or eliminate traits for non-medical reasons.

Ethicists are concerned that unequal access to such technologies could lead to genetic classism. And targeted changes to a person's genome would be passed on for generations, through the germ line (sperm and eggs), fuelling fears that embryo editing could have lasting, unintended consequences.

Adding to these concerns, the regulations in many countries have not kept pace with the science.

Nature has tried to capture a snapshot of the legal landscape by querying experts and government agencies in 12 countries with histories of well-funded biological research. The responses reveal a wide range of approaches. In some countries, experimenting with human embryos at all would be a criminal offence, whereas in others, almost anything would be permissible.

Thursday, December 08, 2016

Is The U.S. Worried That China Will ‘Win’ With World’s First GMO Humans?


wakingscience |  Rejecting the inherent ability of the human immune system to naturally fight disease on its own, researchers out of China have taken nature to task by introducing a new set of genetic modification techniques that they claim will “enhance” the ability of the human body to attack and destroy cancer cells.

According to reports, the procedure involves injecting extracted immune cells with so-called “CRISPR” technology, which essentially reprograms the ways in which they handle foreign invaders. CRISPR combines a DNA-cutting enzyme with a specific molecular guide that, in essence, changes the way genes express themselves.

As reported in Nature, a team of scientists led by Lu You, an oncologist from Sichuan University in China, have already used CRISPR to “treat” a patient suffering from an aggressive form of lung cancer, which is part of a larger clinical trial currently taking place at West China Hospital.

Previous trials have taken place with similar technologies, but those pushing CRISPR claim that it’s simpler and more efficient than its predecessors. If eventually approved for commercial use, CRISPR would become the world’s first form of genetic modification for humans, opening a Pandora’s box of biotechnology that threatens to further syncretize man and machine.

You’s trial received ethical approval from the hospital board back in July, and so far the results have met his expectations. Immune cells extracted from the test patient’s blood were injected with CRISPR, which in effect disabled the gene coding for certain proteins including PD-1, a protein that under normal circumstances halts the body’s immune response, allowing cancer cells to proliferate.

After the reprogramming process was complete, You and his team cultured these cells, replicated them into much larger quantities, and re-injected them into the patient. Now they wait to see whether the genetically modified (GM) cells successfully overcome the patient’s metastatic non-small-cell lung cancer.

Friday, October 28, 2016

Whatever Became of the Lebensborn Children?


nanalyze |  We now have three genetic engineering companies that have had an IPO and which you can now invest in: Editas Medicine (NASDAQ:EDIT), Intellia Therapeutics (NASDAQ:NTLA), and CRISPR Therapeutics (NASDAQ:CRSP). Sure, they’re involved in “gene editing,” but the label “genetic engineering” is much more appropriate for this article because we’re going to talk about something that makes people feel uncomfortable: genetic engineering in humans, and in particular germline genetic engineering, which we can now do using the gene editing technologies offered by all three of these companies. “Germline” refers to the cells that are the source of DNA for all other cells in the body. Performing genetic engineering at the germline level in humans is pretty much the equivalent of genetically modifying our food to promote superior traits, except the ethical implications are far greater.

3 Stages of Genetic Engineering in Humans

Putting our ethics aside for a moment, here’s how we see that timeline progressing in 3 stages:
  1. Gene editing is first used to genetically engineer embryos so that inherited diseases, including cancer, are made extinct.
  2. Genetic engineering is then used to modify genetic traits that inhibit intelligence, starting with mental retardation, and then moves on to traits that advance intelligence.
  3. Genetic engineering is finally used to create “designer babies” that look more visually appealing, perhaps also removing the mythical fat gene.
How thrilled is the general population about this sort of genetic engineering in humans? This thrilled:

[Chart: survey results on public approval of germline genetic engineering in humans. Source: MIT]

So almost 50% of people think that it’s okay to go mucking around and editing our germline as a species. The Chinese have already started researching this area, though everyone was up in arms over it. That’s crazy to think about. Right now we are on the cusp of an era where we can essentially start to play God. We’re already creating synthetic organisms at a massive scale. We’re doing things like taking bacteria and genetically modifying them so that they literally “sweat” biofuels. We’ve created an army of robots driven by artificial intelligence that are genetically modifying organisms to save companies tens of millions of dollars a year. We’re pretty sure that Stage 1 will eventually happen because disease is bad, right? The future seems bright and the opportunities endless. Fist tap Big Don.

Monday, August 29, 2016

is zika the first front in the 21st century biowar?


FP |  A main element of the biological revolution will be its impact on security in the broadest sense of the term, as well as on the more specific realm of military activity. Both of these are part of the work being done by various laboratories around the globe, including here in the United States at Johns Hopkins Applied Physics Lab, where I serve as a senior fellow.

Some of the most promising advances made at JHU APL and elsewhere involve man-machine interfaces, with particular emphasis on brain-machine connections that would allow the use of disconnected limbs; more rapid disease identification in response to both natural and man-made epidemics; artificial intelligence, which offers the greatest near-term potential for both positive benefit and military application (i.e., autonomous attack drones); human performance enhancement, including significant reduction in sleep needs, increases in mental acuity, and improvements in exoskeleton and skin “armor”; and efficient genome editing using CRISPR-Cas, a technology that has become widely available to ever smaller laboratory settings, including individuals working out of their homes.

The most important question is how to appropriately pursue such research while remaining within the legal, ethical, moral, and policy boundaries that our society might one day like to set, though are still largely unformed. Scientists are like soldiers on patrol in unmarked terrain, one that is occasionally illuminated by a flash of lightning, revealing steeper and more dangerous ground ahead. The United States needs to continue its research efforts, but, equally important, it needs to develop a coherent and cohesive biological strategy to guide those efforts.

But national biological research efforts will also have international implications, so over time there will need to be international diplomacy to set norms of behavior for the use of these technologies. The diplomacy that went into developing the Law of the Sea, and is under consideration in the cyberworld, could serve as a useful model.

A major challenge for such diplomacy is that individual nations, transnational organizations, or even individuals will soon have access — if they don’t already — to biological tools that permit manipulation of living organisms. The rise of low-cost synthetic biology technologies, the falling cost of DNA sequencing, and the diffusion of knowledge through the internet create the conditions for a breakout biological event not dissimilar to the Spanish influenza of roughly a century ago. In that plague, by some estimates, nearly 40 percent of the world’s population was infected, with a 10 to 20 percent mortality rate. Extrapolated to our current global population, that would equate to more than 400 million dead.
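The extrapolation above is straightforward arithmetic; assuming a world population of roughly 7.4 billion (the approximate 2016 figure), it can be checked directly:

```python
population = 7.4e9             # approx. world population in 2016 (assumption)
infected = 0.40 * population   # ~40% infection rate, per the Spanish flu estimates
deaths_low = 0.10 * infected   # 10% mortality among the infected
deaths_high = 0.20 * infected  # 20% mortality among the infected
print(f"{deaths_low/1e6:.0f}-{deaths_high/1e6:.0f} million deaths")
```

The resulting range of roughly 300 to 590 million deaths brackets the article's "more than 400 million" figure.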

Tuesday, May 17, 2016

Church Lab "Secret" Artificial Genome Meeting


thescientist |  Harvard Medical School’s George Church and his collaborators invited some 130 scientists, lawyers, entrepreneurs, and government officials to Boston last week (May 10) to discuss the feasibility and implementation of a project to synthesize entire large genomes in vitro. According to a statement Church provided to STAT News, such an endeavor could represent “the next chapter in our understanding of the blueprint of life.”

While the subject is an exciting one—on a smaller scale, Craig Venter’s group has synthesized the 1-million-base-pair genome of Mycoplasma mycoides—critics immediately took issue with the fact that this meeting was not open to the press. “This idea is an enormous step for the human species, and it shouldn’t be discussed only behind closed doors,” Northwestern University’s Laurie Zoloth told STAT News. She and Stanford University bioengineer Drew Endy published an article in Cosmos documenting their disapproval of the private nature of the meeting.

Church told STAT News that the original intention was to make the meeting open, but in anticipation of an imminent, high-profile publication on this project, he and his collaborators had to respect the journal’s embargo. However, Endy tweeted a photo of what appeared to be a message from the meeting organizers stating that they chose not to invite media “because we want everyone to speak freely and candidly without concerns about being misquoted or misinterpreted.”

Friday, February 12, 2016

d-wave out there clocking gwap and stacking bandos...,

wikipedia |   Adiabatic quantum computation (AQC) relies on the adiabatic theorem to do calculations[1] and is closely related to, and may be regarded as a subclass of, quantum annealing.[2][3][4][5] First, a (potentially complicated) Hamiltonian is found whose ground state describes the solution to the problem of interest. Next, a system with a simple Hamiltonian is prepared and initialized to the ground state. Finally, the simple Hamiltonian is adiabatically evolved to the desired complicated Hamiltonian. By the adiabatic theorem, the system remains in the ground state, so at the end the state of the system describes the solution to the problem. Adiabatic quantum computing has been shown to be polynomially equivalent to conventional quantum computing in the circuit model.[6] The time complexity for an adiabatic algorithm is the time taken to complete the adiabatic evolution, which depends on the gap in the energy eigenvalues (spectral gap) of the Hamiltonian. Specifically, if the system is to be kept in the ground state, the energy gap between the ground state and the first excited state of H(t) provides an upper bound on the rate at which the Hamiltonian can be evolved at time t.[7] When the spectral gap is small, the Hamiltonian has to be evolved slowly. The runtime for the entire algorithm can be bounded by T = O(1/g_min²), where g_min is the minimum spectral gap for H(t).
AQC is a possible method to get around the problem of energy relaxation. Since the quantum system is in the ground state, interference with the outside world cannot make it move to a lower state. If the energy of the outside world (that is, the "temperature of the bath") is kept lower than the energy gap between the ground state and the next higher energy state, the system has a proportionally lower probability of going to a higher energy state. Thus the system can stay in a single system eigenstate as long as needed.
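The interpolation-and-gap picture above can be sketched numerically. This is a toy two-qubit example of my own construction, not D-Wave's actual Hamiltonians: it interpolates H(s) = (1-s)·H0 + s·H1 and tracks the gap between the two lowest energy levels.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])   # Pauli X
sz = np.array([[1, 0], [0, -1]])  # Pauli Z
I2 = np.eye(2)

# Simple transverse-field driver and a toy 2-qubit problem Hamiltonian
H0 = -(np.kron(sx, I2) + np.kron(I2, sx))
H1 = -np.kron(sz, sz) - 0.5 * np.kron(sz, I2)

gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)  # sorted ascending
    gaps.append(evals[1] - evals[0])

g_min = min(gaps)
print(f"minimum spectral gap g_min = {g_min:.3f}")
# per the bound above, the adiabatic runtime scales as T = O(1/g_min**2)
```

A smaller g_min along the interpolation path forces a slower evolution, which is exactly why hard problem instances (with tiny gaps) are expensive for an annealer.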
The D-Wave One is a device made by a Canadian company, D-Wave Systems, which describes it as doing quantum annealing.[13] In 2011, Lockheed-Martin purchased one for about US$10 million; in May 2013, Google purchased a D-Wave Two with 512 qubits.[14] Whether the D-Wave processors offer a speedup over a classical processor remains an open question. Tests performed by researchers at USC, ETH Zurich, and Google have so far found no evidence of a quantum advantage.[15][16]

unlike last night a genuinely interesting debate: caltech says yes while hebrew university says no...,


wikipedia |  A topological quantum computer is a theoretical quantum computer that employs two-dimensional quasiparticles called anyons, whose world lines cross over one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over using trapped quantum particles is that the former is much more stable. The smallest perturbations can cause a quantum particle to decohere and introduce errors in the computation, but such small perturbations do not change the braids' topological properties. This is like the effort required to cut a string and reattach the ends to form a different braid, as opposed to a ball (representing an ordinary quantum particle in four-dimensional spacetime) bumping into a wall. Alexei Kitaev proposed topological quantum computation in 1997. While the elements of a topological quantum computer originate in a purely mathematical realm, experiments in fractional quantum Hall systems indicate these elements may be created in the real world using semiconductors made of gallium arsenide at a temperature of near absolute zero and subjected to strong magnetic fields.
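The braid picture can be made concrete with a small numerical check. The generators of the braid group on three strands satisfy the defining relation σ₁σ₂σ₁ = σ₂σ₁σ₂; here is a minimal sketch using a standard 2×2 integer matrix representation (illustrative only — an actual topological computer would use the unitary matrices of its anyon model, not these):

```python
import numpy as np

# A classic 2x2 matrix representation of the braid group B3
s1 = np.array([[1, 1],
               [0, 1]])
s2 = np.array([[1,  0],
               [-1, 1]])

lhs = s1 @ s2 @ s1
rhs = s2 @ s1 @ s2
print(np.array_equal(lhs, rhs))  # → True: the braid relation holds
```

Because the gate implemented by a braid depends only on which relation-equivalent braid was performed, small physical perturbations that do not change the braid leave the computation unchanged.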

