
Thursday, June 14, 2018

Time and its Structure (Chronotopology)


intuition |  MISHLOVE: I should mention here, since you've used the term, that chronotopology is the name of the discipline which you founded, which is the study of the structure of time. 

MUSES: Do you want me to comment on that? 

MISHLOVE: Yes, please. 

MUSES: In a way, yes, but in a way I didn't found it. I was thinking that cybernetics, for instance, was started formally by Norbert Wiener, but it began with the toilet tank that controlled itself. When I was talking with Wiener at Ravello, he happily agreed with this. 

MISHLOVE: The toilet tank. 

MUSES: He says, "Oh yes." The self-shutting-off toilet tank is the first cybernetic advance of mankind. 

MISHLOVE: Oh. And I suppose chronotopology has an illustrious beginning like this also. 

MUSES: Well, better than the toilet tank, actually. It has a better beginning than cybernetics. 

MISHLOVE: In effect, does it go back to the study of the ancient astrologers? 

MUSES: Well, it goes back to the study of almost all traditional cultures. The word astronomia, even the word mathematicus, meant someone who studied the stars, and in Kepler's sense they calculated the positions to know the qualities of time. But that's an independent hypothesis. The hypothesis of chronotopology is that whether you have pointers of any kind -- ionospheric disturbances, planetary orbits, or whatnot -- independently of those pointers, time itself has a flux, has a wave motion, the object being to surf on time. 

MISHLOVE: Now, when you talk about the wave motion of time, I'm getting real interested and excited, because in quantum physics there's this notion that the underlying basis for the physical universe is these waves, even probability waves -- nonphysical, nonmaterial waves -- sort of underlying everything. 

MUSES: Very, very astute, because these waves are standing waves. Actually the wave-particle so-called paradox isn't that bad, when you consider that a particle is a wave packet, a packet of standing waves. That's why an electron can go through a plate and leave wavelike things. Actually our bodies are like fountains. The fountain has a shape only because it's being renewed every minute, and our bodies are being renewed. So we are standing waves; we're no exception. 

MISHLOVE: This deep structure of matter, where we can say what we really are in our bodies is not where we appear to be -- you're saying the same thing is true of time. It's not quite what it appears to be. 

MUSES: No, we're a part of this wave structure, and matter and energy all occur in waves, and time is the master control. I will give you an illustration of that. If you'll take a moment of time, this moment cuts through the entire physical universe as we're talking. It holds all of space in itself. But one point of space doesn't hold all of time. In other words, time is much bigger than space. 

MISHLOVE: That thought sort of made me gasp a second -- all of physical space in each now moment -- 

MUSES: Is contained in a point of time, which is a moment. And of course, a line of time is then an occurrence, and a wave of time is a recurrence. And then if you get out from the circle of time, which Nietzsche saw, the eternal recurrence -- if you break that, as we know we do, we develop, and then we're on a helix, because we come around but it's a little different each time. 

MISHLOVE: Well, now you're beginning to introduce the notion of symbols -- point, line, wave, helix, and so on. 

MUSES: Yes, the dimensions of time. 

MISHLOVE: One of the interesting points that you seem to make in your book is that symbols themselves -- words, pictures -- point to the deeper structure of things, including the deeper structure of time. 

MUSES: Yes. Symbols I would regard as pointers to their meanings, like revolving doors. There are some people, however, who have spent their whole lives walking in the revolving door and never getting out of it. 

Time and its Structure (Chronotopology)
Foreword by Charles A. Muses to "Communication, Organization, And Science" by Jerome Rothstein - 1958 

Your Genetic Presence Through Time


counterpunch |  The propagation through time of your personal genetic presence within the genetic sea of humanity can be visualized as a wave that arises out of the pre-conscious past before your birth, moves through the streaming present of your conscious life, and dissipates into the post-conscious future after your death.

You are a pre-conscious genetic concentration drawn out of the genetic diffusion of your ancestors. If you have children who survive you then your conscious life is the time of increase of your genetic presence within the living population. Since your progeny are unlikely to reproduce exponentially, as viruses and bacteria do, your post-conscious genetic presence is only a diffusion to insignificance within the genetic sea of humanity.
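
That diffusion claim has simple arithmetic behind it: ignoring inbreeding, the expected fraction of your genome carried by a direct descendant halves every generation. A back-of-the-envelope sketch (my own illustration, not from the piece):

```python
# Illustrative sketch of the "diffusion" claim above: the expected share of
# one ancestor's genome carried by a direct descendant halves each
# generation. This ignores segment granularity and inbreeding; it is a
# back-of-the-envelope model, not real population genetics.

def expected_genome_share(generations: int) -> float:
    """Expected fraction of your genome in a descendant g generations on."""
    return 0.5 ** generations

for g in range(1, 11):
    print(f"generation {g:2d}: {expected_genome_share(g):.4%}")
# By generation 10 (roughly three centuries), the expected share is under
# 0.1% -- a concentration diffusing back into the genetic sea.
```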

During your conscious life, you develop a historical awareness of your pre-conscious past, with a personal interest that fades with receding generations. Also during your conscious life, you can develop a projective concern about your post-conscious future, with a personal interest that fades with succeeding generations and with increasing predictive uncertainty.

Your conscious present is the sum of: your immediate conscious awareness, your reflections on your prior conscious life, your historical awareness of your pre-conscious past, and your concerns about your post-conscious future.

Your time of conscious present becomes increasingly remote in the historical awareness of your succeeding generations.

Your loneliness in old age is just your sensed awareness of your genetic diffusion into the living population of your conscious present and post-conscious future.

Tuesday, June 12, 2018

Smarter ____________ WILL NOT Take You With Them....,



nautilus  |  When it comes to artificial intelligence, we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us. In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after. But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated.

AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.

But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.
It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability.
The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.
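
The arithmetic behind that figure is worth making explicit. IQ is conventionally normed to a mean of 100 and a standard deviation of 15, so 100 standard deviations of gain corresponds to a nominal score of 1,600. A toy additive sketch, with invented per-locus numbers:

```python
# Toy additive model of the article's claim (illustrative numbers only).
# Assume N loci each contribute a small effect; flipping all of them to the
# favorable variant stacks those effects additively.

IQ_MEAN, IQ_SD = 100, 15          # conventional IQ norming
N_LOCI = 10_000                   # "thousands of loci" (assumed count)
EFFECT_PER_LOCUS_SD = 0.01        # small per-locus effect in SD units (assumed)

total_gain_sd = N_LOCI * EFFECT_PER_LOCUS_SD      # 100 standard deviations
iq_equivalent = IQ_MEAN + total_gain_sd * IQ_SD   # 100 + 100 * 15 = 1600

print(f"gain: {total_gain_sd:.0f} SD -> nominal IQ {iq_equivalent:.0f}")
# The scale is purely nominal this far outside the distribution -- hence
# the article's hedges "very roughly" and "over 1,000".
```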

 

Proposed Policies For Advancing Embryonic Cell Germline-Editing Technology


niskanencenter |  In a previous post, I touched on the potential social and ethical consequences that will likely emerge in the wake of Dr. Shoukhrat Mitalipov’s recent experiment in germline-edited embryos. The short version: there’s probably no stopping the genetic freight train. However, there are steps we can take to minimize the potential costs, while capitalizing on the many benefits these advancements have to offer us. In order to do that, however, we need to turn our attention away from hyperbolic rhetoric of “designer babies” and focus on the near-term practical considerations—mainly, how we will govern the research, development, and application of these procedures.
Before addressing the policy concerns, however, it’s important to understand the fundamentals of what is being discussed in this debate. In the previous blog, I noted the difference between somatic cell editing and germline editing—one of the major ethical faultlines in this issue space. In order to have a clear perspective of the future possibilities, and current limitations, of genetic modification, let’s briefly examine how CRISPR actually works in practice. 

CRISPR stands for “clustered regularly interspaced short palindromic repeats”—a reference to segments of DNA that function as a defense used by bacteria to ward off foreign infections. That defense system essentially targets specific patterns of DNA in a virus, bacterium, or other threat and destroys it. This approach uses Cas9—an RNA-guided protein—to search through a cell’s genetic material until it finds a genetic sequence that matches the sequence programmed into its guide RNA. Once it finds its target, the protein cleaves the two strands of the DNA helix. Repair enzymes can then heal the gap in the broken DNA, or the gap can be filled using new genetic information introduced into the sequence. Conceptually, we can think of CRISPR as the geneticist’s variation of a “surgical laser knife, which allows a surgeon to cut out precisely defective body parts and replace them with new or repaired ones.”
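
Conceptually, the targeting step is just sequence search. A toy sketch in Python — a drastic simplification, since real Cas9 also requires an adjacent PAM motif and tolerates some mismatches:

```python
# Toy illustration of CRISPR targeting as string matching. Real Cas9
# requires an adjacent PAM motif (e.g., "NGG") and tolerates some
# mismatches; this exact-match version is conceptual only.

GUIDE = "GACGTTAACCTG"              # ~20 nt in practice; shortened here
genome = "TTATGGACGTTAACCTGAGGCCA"  # invented sequence

cut_site = genome.find(GUIDE)       # Cas9 "searches" for the guide's match
if cut_site != -1:
    print(f"match at position {cut_site}; double-strand break made here")
    # Repair then either rejoins the ends (often disrupting the gene) or
    # copies in a supplied template -- the "edit".
else:
    print("no target found; no cut")
```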

The technology is still cutting-edge, and most researchers are still trying to get a handle on the technical difficulties associated with its use. Right now, we’re still in the Stone Age of genetic research. Even though we’ve made significant advancements in recent years, we’re still a long, long way from editing the IQs of our children on demand. That technology lies much further in the future, and some doubt that we’ll ever be able to “program” inheritable traits into our individual genomes. In short, don’t expect any superhumanly intelligent, disease-resistant super soldiers any time soon.

The Parallels Between Artificial Intelligence and Genetic Modification
Few technologies inspire as much fantastical embellishment in popular media as genetic modification. In fact, the only technology that comes close—and indeed, whose rhetoric parallels it quite closely—is artificial intelligence (AI).

Monday, June 11, 2018

Office 365 CRISPR Editing Suite


gizmodo |  The gene-editing technology CRISPR could very well one day rid the world of its most devastating diseases, allowing us to simply edit away the genetic code responsible for an illness. One of the things standing in the way of turning that fantasy into reality, though, is the problem of off-target effects. Now Microsoft is hoping to use artificial intelligence to fix this problem. 

You see, CRISPR is fawned over for its precision. More so than earlier genetic technologies, it can accurately target and alter a tiny fragment of genetic code. But it’s still not always as accurate as we’d like it to be. Estimates of how often this happens vary, but at least some of the time, CRISPR makes changes to DNA it was intended to leave alone. Depending on what those changes are, they could inadvertently result in new health problems, such as cancer.

Scientists have long been working on ways to fine-tune CRISPR so that fewer of these unintended effects occur. Microsoft thinks that artificial intelligence might be one way to do it. Working with computer scientists and biologists from research institutions across the U.S., the company has developed a new tool called Elevation that predicts off-target effects when editing genes with CRISPR. 

It works like this: If a scientist is planning to alter a specific gene, they enter its name into Elevation. The CRISPR system is made up of two parts: a protein that does the cutting and a synthetic guide RNA designed to match a DNA sequence in the gene they want to edit. Different guides can have different off-target effects depending on how they are used. Elevation will suggest which guide is least likely to result in off-target effects for a particular gene, using machine learning to figure it out. It also provides general feedback on how likely off-target effects are for the gene being targeted. The platform bases its learning both on Microsoft research and on publicly available data about how different genetic targets and guides interact. 
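
In pseudocode terms, the workflow described above is "score every candidate guide with a trained model, then rank." A hypothetical sketch — the scoring function below is an invented stand-in, not Microsoft's actual Elevation API:

```python
# Hypothetical sketch of the guide-ranking workflow described above.
# `predict_off_target_risk` stands in for a trained model like Elevation;
# both the function and its dummy scoring rule are invented for
# illustration and are NOT Microsoft's API.

def predict_off_target_risk(guide: str, gene: str) -> float:
    """Stand-in scorer: lower is safer. Real tools use trained ML models;
    this dummy just uses GC content as a placeholder number."""
    return sum(1 for base in guide if base in "GC") / len(guide)

candidate_guides = ["GACGTTAACCTGAGGCCAAT",
                    "TTGGCCTCAGGTTAACGTCC",
                    "AATTCCGGAACCGGTTAAGG"]

ranked = sorted(candidate_guides,
                key=lambda g: predict_off_target_risk(g, gene="BRCA1"))
print("suggested guide (lowest predicted off-target risk):", ranked[0])
```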

The work is detailed in a paper published Wednesday in the journal Nature Biomedical Engineering. The tool is publicly available for researchers to use for free. It works alongside a tool released by Microsoft in 2016 called Azimuth that predicts on-target effects.

There is lots of debate over how problematic the off-target effects of CRISPR really are, as well as over how to fix them. Microsoft’s new tool, though, will certainly be a welcome addition to the toolbox. Over the past year, Microsoft has doubled down on efforts to use AI to attack health care problems.

Who Will Have Access To Advanced Reproductive Technology?



futurism |  In November 2017, a baby named Emma Gibson was born in the state of Tennessee. Her birth, to a 25-year-old woman, was fairly typical, but one aspect made her story unique: she was conceived 24 years prior from anonymous donors, when Emma’s mother was just a year old.  The embryo had been frozen for more than two decades before it was implanted into her mother’s uterus and grew into the baby who would be named Emma.

Most media coverage hailed Emma’s birth as a medical marvel, an example of just how far reproductive technology has come in allowing people with fertility issues to start a family.

Yet, the news held a small detail that gave others pause. The organization that provided baby Emma’s embryo to her parents, the National Embryo Donation Center (NEDC), has policies that state they will only provide embryos to married, heterosexual couples, in addition to several health requirements. Single women and non-heterosexual couples are not eligible.

In other industries, this policy would effectively be labeled as discriminatory. But for reproductive procedures in the United States, such a policy is completely legal. Because insurers do not consider reproductive procedures to be medically necessary, the U.S. is one of the few developed nations without formal regulations or ethical requirements for fertility medicine. This loose legal climate also gives providers the power to provide or deny reproductive services at will.

The future of reproductive technology has many excited about its potential to allow biological birth for those who might not otherwise have been capable of it. Experiments going on today, such as testing functional 3D-printed ovaries and incubating animal fetuses in artificial wombs, seem to suggest that future is well on its way, that fertility medicine has already entered the realm of what was once science fiction.

Yet, who will have access to these advances? Current trends seem to suggest that this will depend on the actions of regulators and insurance agencies, rather than the people who are affected the most.

Sunday, June 10, 2018

Cognitive Enhancement of Other Species?



singularityhub |  Science fiction author David Brin popularized the concept of cognitively enhanced animals in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.

Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans. 

Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced. 

The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
 
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some of which are well below our own capabilities and others which are superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.
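
Kelly's point can be put formally: if intelligence is a bundle of capabilities, it is a vector, and vectors have no natural total order. A toy illustration with invented scores:

```python
# Toy restatement of Kelly's argument: if intelligence is a bundle of
# capabilities (a vector), two species need not be comparable at all.
# The capability axes and scores below are invented for illustration.

profiles = {
    "human":    {"abstraction": 9, "language": 9, "spatial_cache_memory": 2},
    "squirrel": {"abstraction": 1, "language": 1, "spatial_cache_memory": 9},
}

def dominates(a: dict, b: dict) -> bool:
    """True only if `a` is at least as good on every axis (Pareto order)."""
    return all(a[k] >= b[k] for k in a)

h, s = profiles["human"], profiles["squirrel"]
print(dominates(h, s), dominates(s, h))  # False False: neither is "smarter"
```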

Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.

 

The Use of Clustered, Regularly Inter-spaced, Short, Palindromic Repeats


fortunascorner | “CRISPRs are elements of an ancient system that protects bacteria, and other, single-celled organisms from viruses, acquiring immunity to them by incorporating genetic elements from the virus invaders,” Mr. Wadhwa wrote.  “And, this bacterial, antiviral defense serves as an astonishingly cheap, simple, elegant way to quickly edit the DNA of any organism in the lab.  To set up a CRISPR editing capability, a lab only needs to order an RNA fragment (costing about $10) and purchase off-the-shelf chemicals and enzymes for $30 or less.”  
 
“Because CRISPR is cheap, and easy to use, it has both revolutionized, and democratized genetic research,” Mr. Wadhwa observes.  “Hundreds, if not thousands of labs are now experimenting with CRISPR-based editing projects.” And access to the World Wide Web provides instantaneous know-how for a would-be terrorist — bent on killing hundreds of millions of people.  As Mr. Wadhwa warns, “though a nuclear weapon can cause tremendous, long-lasting damage, the ultimate biological doomsday machine — is bacteria, because they can spread so quickly, and quietly.”
 
“No one is prepared for an era, when editing DNA is as easy as editing a Microsoft Word document.”
 
This observation, and warning, is why current scientific efforts are aimed at developing a vaccine for the plague and, hopefully, courses of action against any number of doomsday biological weapons.  With the proliferation of drones as a potential method of delivery, the threat seems overwhelming.  Even if we are successful in ridding the world of the cancer known as militant Islam, there would still be the demented soul, bent on killing as many people as possible, in the shortest amount of time, no matter if their doomsday bug kills them as well.  That’s why the research currently being done on the plague is so important.  
 
As the science fiction/horror writer Stephen King once wrote, “God punishes us for what we cannot imagine.”

Wednesday, May 16, 2018

DIY DNA Tinkering...,


NYTimes  |  If nefarious biohackers were to create a biological weapon from scratch — a killer that would bounce from host to host to host, capable of reaching millions of people, unrestrained by time or distance — they would probably begin with some online shopping.

A site called Science Exchange, for example, serves as a Craigslist for DNA, a commercial ecosystem connecting almost anyone with online access and a valid credit card to companies that sell cloned DNA fragments.


Mr. Gandall, the Stanford fellow, often buys such fragments — benign ones. But the workarounds for someone with ill intent, he said, might not be hard to figure out.

Biohackers will soon be able to forgo these companies altogether with an all-in-one desktop genome printer: a device much like an inkjet printer that employs the letters A, G, T and C — the bases of DNA — instead of the color model CMYK.

A similar device already exists for institutional labs, called BioXp 3200, which sells for about $65,000. But at-home biohackers can start with DNA Playground from Amino Labs, an Easy Bake genetic oven that costs less than an iPad, or The Odin’s Crispr gene-editing kit for $159.

Tools like these may be threatening in the wrong hands, but they also helped Mr. Gandall start a promising career.

At age 11, he picked up a virology textbook at a church book fair. Before he was old enough for a driver’s permit, he was urging his mother to shuttle him to a research job at the University of California, Irvine.

He began dressing exclusively in red polo shirts to avoid the distraction of choosing outfits. He doodled through high school — correcting biology teachers — and was kicked out of a local science fair for what was deemed reckless home-brew genetic engineering.

Mr. Gandall barely earned a high-school diploma, he said, and was rebuffed by almost every college he applied to — but later gained a bioengineering position at Stanford University.


“Pretty ironic, after they rejected me as a student,” he said.

He moved to East Palo Alto — with 14 red polo shirts — into a house with three nonbiologists, who don’t much notice that DNA is cloned in the corner of his bedroom.

His mission at Stanford is to build a body of genetic material for public use. To his fellow biohackers, it’s a noble endeavor.

To biosecurity experts, it’s tossing ammunition into trigger-happy hands.

“There are really only two things that could wipe 30 million people off of the planet: a nuclear weapon, or a biological one,” said Lawrence O. Gostin, an adviser on pandemic influenza preparedness to the World Health Organization.

“Somehow, the U.S. government fears and prepares for the former, but not remotely for the latter. It baffles me.”


Tuesday, May 15, 2018

The Wizard of Q (Gaming Autistic Incels For Fun And Political Profit)


Harpers |  I concluded that the internet and the novel were natural enemies. “Choose your own adventure” stories were not the future of literature. The author should be a dictator, a tyrant who treated the reader as his willing slave, not as a cocreator. And high-tech flourishes should be avoided. Novels weren’t meant to link to Neil Diamond songs or, say, refer to real plane crashes on the day they happen. Novels were closed structures, their boundaries fixed, not data-driven, dynamic feedback loops. Until quite recently, these were my beliefs, and no new works emerged to challenge my thinking.

Then, late last year, while knocking around on the internet one night, I came across a long series of posts originally published on 4chan, an anonymous message board. They described a sinister global power struggle only dimly visible to ordinary citizens. On one side of the fight, the posts explained, was a depraved elite, bound by unholy oaths and rituals, secretly sowing chaos and strife to create a pretext for their rule. On the other side was the public, we the people, brave and decent but easily deceived, not least because the news was largely scripted by the power brokers and their collaborators in the press. And yet there was hope, I read, because the shadow directorate had blundered. Aligned during the election with Hillary Clinton and unable to believe that she could lose, least of all to an outsider, it had underestimated Donald Trump—as well as the patriotism of the US military, which had recruited him for a last-ditch battle against the psychopathic deep-state spooks. The writer of the 4chan posts, who signed these missives “Q,” invited readers to join this battle. He—she? it?—promised to pass on orders from a commander and intelligence gathered by a network of spies.
I was hooked.

Known to its fan base as QAnon, the tale first appeared last year, around Halloween. Q’s literary brilliance wasn’t obvious at first. His obsessions were unoriginal, his style conventional, even dull. He suggested that Washington was being purged of globalist evildoers, starting with Clinton, who was awaiting arrest, supposedly, but allowed to roam free for reasons that weren’t clear. Soon a whole roster of villains had emerged, from John McCain to John Podesta to former president Obama, all of whom were set to be destroyed by something called the Storm, an allusion to a remark by President Trump last fall about “the calm before the storm.” Clinton’s friend and supporter Lynn Forester de Rothschild, a member by marriage of the banking family abhorred by anti-Semites everywhere, came in for special abuse from Q and Co.—which may have contributed to her decision to delete her Twitter app. Along with George Soros, numerous other bigwigs, the FBI, the CIA, and Twitter CEO Jack Dorsey (by whom the readers of Q feel persecuted), these figures composed a group called the Cabal. The goal of the Cabal was dominion over all the earth. Its initiates tended to be pedophiles (or pedophilia apologists), the better to keep them blackmailed and in line, and its esoteric symbols were everywhere; the mainstream media served as its propaganda arm. Oh, and don’t forget the pope.

As I read further, the tradition in which Q was working became clearer. Q’s plot of plots is a retread, for the most part, of Cold War–era John Birch Society notions found in books such as None Dare Call It Conspiracy. These Bircher ideas were borrowings, in turn, from the works of a Georgetown University history professor by the name of Carroll Quigley. Said to be an important influence on Bill Clinton, Quigley was a legitimate scholar of twentieth-century Anglo-American politics. His 1966 book Tragedy and Hope, which concerned the power held by certain elites over social and military planning in the West, is not itself a paranoid creation, but parts of it have been twisted and reconfigured to support wild theories of all kinds. Does Q stand for Quigley? It’s possible, though there are other possibilities (such as the Department of Energy’s “Q” security clearance). The literature of right-wing political fear has a canon and a pantheon, and Q, whoever he is, seems deeply versed in it.

While introducing his cast of fiends, Q also assembled a basic story line. Justice was finally coming for the Cabal, whose evil deeds were “mind blowing,” Q wrote, and could never be “fully exposed” lest they touch off riots and revolts. But just in case this promised “Great Awakening” caused panic in the streets, the National Guard and the Marine Corps were ready to step in. So were panels of military judges, in whose courts the treasonous cabalists would be tried and convicted, then sent to Guantánamo. In the manner of doomsayers since time began, Q hinted that Judgment Day was imminent and seemed unabashed when it kept on not arriving. Q knew full well that making one’s followers wait for a definitive, cathartic outcome is a cult leader’s best trick—for the same reason that it’s a novelist’s best trick. Suspense is an irritation that’s also a pleasure, so there’s a sensual payoff from these delays. And the more time a devotee invests in pursuing closure and satisfaction, the deeper her need to trust the person in charge. It’s why Trump may be in no hurry to build his wall, or to finish it if he starts. It’s why he announced a military parade that won’t take place until next fall.

As the posts piled up and Q’s plot thickened, his writing style changed. It went from discursive to interrogative, from concise and direct to gnomic and suggestive. This was the breakthrough, the hook, the innovation, and what convinced me Q was a master, not just a prankster or a kook. He’d discovered a principle of online storytelling that had eluded me all those years ago but now seemed obvious: The audience for internet narratives doesn’t want to read, it wants to write. It doesn’t want answers provided, it wants to search for them. It doesn’t want to sit and be amused, it wants to be sent on a mission. It wants to do.

Saturday, April 28, 2018

Silly Peasants, Open Facebook Got NOTHING On Open "Consumer" DNA...,



NYTimes |  The California police had the Golden State Killer’s DNA and recently found an unusually well-preserved sample from one of the crime scenes. The problem was finding a match.

But these days DNA is stored in many places, and a near-match ultimately was found on GEDmatch, a genealogy website beloved by hobbyists, created by two volunteers in 2011.

Anyone can set up a free profile on GEDmatch. Many customers upload to the site DNA profiles they have already generated on larger commercial sites like 23andMe.

The detectives in the Golden State Killer case uploaded the suspect’s DNA sample. But they would have had to check a box online certifying that the DNA was their own or belonged to someone for whom they were legal guardians, or that they had “obtained authorization” to upload the sample.

“The purpose was to make these connections and to find these relatives,” said Blaine Bettinger, a lawyer affiliated with GEDmatch. “It was not intended to be used by law enforcement to identify suspects of crimes.”

But joining for that purpose does not technically violate site policy, he added.

Erin Murphy, a law professor at New York University and expert on DNA searches, said that using a fake identity might raise questions about the legality of the evidence.

The matches found in GEDmatch were to relatives of the suspect, not the suspect himself.

Since the site provides family trees, detectives also were able to look for relatives who might not have uploaded genetic data to the site themselves. 

Thursday, December 07, 2017

Peasants Will Be Matched and Bred Via eHarmony and 23andMe...,


DailyMail |   Location-based apps like Tinder have transformed the dating world.
But how will technology help us find Mr or Mrs Right 25 years from now?

According to a new report, the future of romance could lie in virtual reality, wearable technology and DNA matching.

These technologies are set to take the pain out of dating by saving single people time and effort, while giving them better matches, according to the research.

Students from Imperial College London were commissioned by relationship website eHarmony.co.uk to produce a report on what online dating and relationships could look like by 2040.

They put together a report based on analysis of how people's lifestyle habits have evolved over the past 100 years.

3-D Printed, WiFi Connected, No Electronics...,


Washington |  Imagine a bottle of laundry detergent that can sense when you’re running low on soap — and automatically connect to the internet to place an order for more.

University of Washington researchers are the first to make this a reality by 3-D printing plastic objects and sensors that can collect useful data and communicate with other WiFi-connected devices entirely on their own.

With CAD models that the team is making available to the public, 3-D printing enthusiasts will be able to create objects out of commercially available plastics that can wirelessly communicate with other smart devices. That could include a battery-free slider that controls music volume, a button that automatically orders more cornflakes from Amazon or a water sensor that sends an alarm to your phone when it detects a leak.

“Our goal was to create something that just comes out of your 3-D printer at home and can send useful information to other devices,” said co-lead author and UW electrical engineering doctoral student Vikram Iyer. “But the big challenge is how do you communicate wirelessly with WiFi using only plastic? That’s something that no one has been able to do before.”

The system is described in a paper presented Nov. 30 at the Association for Computing Machinery’s SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia.

3-D Printed Living Tattoos


MIT |  MIT engineers have devised a 3-D printing technique that uses a new kind of ink made from genetically programmed living cells.

The cells are engineered to light up in response to a variety of stimuli. When mixed with a slurry of hydrogel and nutrients, the cells can be printed, layer by layer, to form three-dimensional, interactive structures and devices.

The team then demonstrated its technique by printing a “living tattoo” — a thin, transparent patch patterned with live bacteria cells in the shape of a tree. Each branch of the tree is lined with cells sensitive to a different chemical or molecular compound. When the patch is adhered to skin that has been exposed to the same compounds, corresponding regions of the tree light up in response.

The researchers, led by Xuanhe Zhao, the Noyce Career Development Professor in MIT’s Department of Mechanical Engineering, and Timothy Lu, associate professor of biological engineering and of electrical engineering and computer science, say that their technique can be used to fabricate “active” materials for wearable sensors and interactive displays. Such materials can be patterned with live cells engineered to sense environmental chemicals and pollutants as well as changes in pH and temperature.

What’s more, the team developed a model to predict the interactions between cells within a given 3-D-printed structure, under a variety of conditions. The team says researchers can use the model as a guide in designing responsive living materials.

Zhao, Lu, and their colleagues have published their results today in the journal Advanced Materials. The paper’s co-authors are graduate students Xinyue Liu, Hyunwoo Yuk, Shaoting Lin, German Alberto Parada, Tzu-Chieh Tang, Eléonore Tham, and postdoc Cesar de la Fuente-Nunez.

Monday, October 30, 2017

I Once Believed Smart Meters Were For More Than Remotely Shutting Off Your Power...,


Time |  There was a fascinating story on the front page of the Kansas City Star earlier this week. Reporter Rick Montgomery took a long look at one of the fruits of the 2009 economic stimulus package, the Green Impact Zone of Missouri.

A lot has been written lately about the success or failure of the stimulus five years after it passed into law. For example, my friend and colleague Michael Grunwald, an eloquent advocate for the stimulus, credits the American Recovery and Reinvestment Act (ARRA) with jump-starting an alternative energy revolution and getting the ball rolling on electronic medical records, among other laudable achievements. James Freeman in the Wall Street Journal, on the other hand, blames the $800 billion-plus package for driving up debt and muffling economic growth.

Montgomery’s article avoids such broad brushstrokes, instead documenting the observable results of one distinctive ARRA project. At the urging of Rep. Emanuel Cleaver II (D-Mo.), stimulus money destined for Kansas City was concentrated in one section of town. The goal was to transform the area into an environmental showcase while catalyzing a burst of green jobs. Covering five neighborhoods and 150 square blocks, the zone encompasses a troubled section of town where abandoned buildings share streetscapes with the well-tended homes of obviously loving owners. In other words, it is typical of struggling sections found in every great American city—sliding in the wrong direction, but not too far gone to imagine a renaissance.

I’ve seen similar neighborhoods turn around in places as diverse as South Beach, Harlem, and Washington’s Logan Circle. In Denver, marginal neighborhoods around the old Fitzsimons Army Medical Center are steadily gaining momentum from the new medical complex plopped into their midst. So I was intrigued to see what could be done with a projected $200 million infusion in what was—once upon a time—a thriving section of KC.

The answer: less than folks had hoped.

ARRA has equipped the Green Zone with charging stations for electric cars that residents don’t own. Of the 1,000 or more homes targeted for energy efficiency upgrades, fewer than 200—20 percent—received new windows, insulation and weather-stripping (at an average cost of more than $13,000 per home).  The local utility launched a pilot project in the zone to install “smart” meters that allow homeowners to better regulate their electricity use. An unused school building was given new life, and 11 miles of new sidewalks were built.

But as Montgomery reports, the project has, so far, failed to generate its own momentum. Congressman Cleaver, a former Kansas City mayor, sounded sheepish when he acknowledged that he and other zone backers failed to execute their ideas efficiently enough to get all the money spent. “We left tens of millions of federal dollars on the table,” he told the Star. When funding for the Impact Zone staff ran out, no agency stepped in to keep the office open.

Monday, October 09, 2017

Love or Mathematical Precision: Do You Know What's Real?


Guardian  |  Where most sci-fi movies quickly date, Blade Runner has improved with age. Of course, it was always a fantastic ride, superbly detailed and steeped in neo-noir atmospherics, but its deep, troubling ideas about technology, humanity and identity chimed with postmodern and cyberpunk theory, and launched a thousand PhD theses. One of the few student lectures I can remember was about the French theorist Jean Baudrillard, orders of simulacra, and how nothing is really real any more. In a down-with-the-kids gesture, the lecturer stood behind a TV monitor playing a muted video of Blade Runner. “You’ll probably get more out of watching this than you will by listening to me,” she said. She was right. Deciphering Baudrillard’s arcane prose is like wading through treacle; Blade Runner is a ride you don’t want to get off. And, against all odds, its belated follow-up, Blade Runner 2049, carries the baton brilliantly, both in terms of visual spectacle and finishing the debates the first movie began.

Between the two movies and Philip K Dick’s source novel, Do Androids Dream of Electric Sheep?, Blade Runner serves as a record of how our dystopian fears have evolved over the past half-century. When Dick wrote the story, in 1968, he was thinking of the dehumanising process of nazism. His “replicants” (artificially engineered humans with a four-year lifespan) were “essentially less-than-human entities”, Dick stated. They were “deplorable because they are heartless, they are completely self-centred, they don’t care about what happens to other creatures”.

Ridley Scott’s film turned it around, somewhat. Far from being a deplorable, heartless machine, Rutger Hauer’s chief replicant, supposedly the baddie, develops empathy for the cop trying to kill him. Replicants were the superior beings. “More human than human,” as their manufacturer, Eldon Tyrell, puts it. Apart from the four-year lifespan, what was the difference? This was the part Baudrillard and co were so keen to engage with: what was “real” when the copy was better than the original? “The real is not only what can be reproduced but that which is always already reproduced. The hyper-real,” wrote Baudrillard. Human status was no longer a matter of biological or genetic fact. You couldn’t trust your memories either – they could just be implants. So how do any of us know we are human?

Wednesday, September 27, 2017

Quantum Neural Nets - Branes - Self-Organizing Automata...,


It's going on two years now since the America2.0 list shut down and I stopped imbibing the high strangeness emanating from the Energy Scholar.

PQHR | I'd like to toss in my two cents about extra-terrestrial life.  Personally I completely agree with Nate Hagens:  "Mathematically almost a certainty. Whether they could ever have technology to reach earth, extremely unlikely. Whether they have been on earth in secret impacting things, people, probably sillier than chem-trails. My 2cents. Jays list"

That said, I wish to toss out a weirdness or three related to 'extraterrestrial life' and its possible historical discovery some years ago.  

*** Warning, this is very long, and arguably does not belong on this list at all. Casual readers might consider stopping here ***

I (Bruce Stephenson) have no demonstrable evidence on this one.  In the past decade-plus I've been able to verify some of the topics below, but not others.  The bit about extra-terrestrial 'life' I've not been able to verify. So please consider it nonsense until evidence arises to the contrary.

Since I've not verified the second part of this story, and thus have no idea of its truth or correctness, I'll tell the whole thing as if it were Science Fiction.  It probably is.  Part one is pretty solid and largely verified in multiple independent ways, which took me many years of effort performing both research and field operations.  Part two is totally unverified and might be complete nonsense.  Just treat the whole thing as Speculative Science Fiction and you won't go wrong.

The following is an excerpt from Bruce Stephenson's story titled The Layperson's Guide to Quantum Neural Network Technology, subtitled It is Easier to get Forgiveness than Permission, paraphrased by the author for this America 2.0 Group.  

Part One: In the 1990s certain scientists working on a Five Eyes project via DARPA discovered a new General Purpose Technology.  This author calls the project Ultra II, for its remarkable resemblance to the WW2 project Ultra, but no one else uses this moniker.  This new technology was generated via Synthetic Biology techniques that leverage a special-case (two-dimensional) solution to Mathematical Biology's Biogenesis problem.  See the published work of Stephen Wolfram and Stuart Kauffman for insight about how this might have been accomplished.  This new technology is best described as a form of teleportation-based nanotechnology that behaves like a Quantum Neural Network. It can only exist as a physical (and thus informational) system within a 2DEG environment, thanks to the wonky mathematics surrounding two-dimensional particle Physics. This new base technology was used to construct a winner-take-all style topological quantum neural network intended as the basis for a code-breaking supercomputer.   While this 'system of nanoparticles' is not 'alive' in the Carbon-based biological sense, it has many characteristics of being 'alive'.  Whether or not one considers it 'alive' depends largely upon how loosely one defines the word 'alive'.  Physicists call this sort of artifact a brane, punning both brain and membrane.
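
The Wolfram and Kauffman references, at least, are concrete: their cellular automata and random Boolean networks are easy to play with. Below is a minimal Kauffman-style random Boolean network in Python — purely an illustration of the "order for free" those authors describe, and in no way a reconstruction of the classified system this story imagines.

```python
# Minimal Kauffman-style random Boolean network (N nodes, K=2 inputs each),
# the kind of self-organizing system Wolfram and Kauffman study. Purely
# illustrative; it demonstrates nothing about the story's alleged technology.
import random

random.seed(1)
N, K, STEPS = 16, 2, 20
inputs = [random.sample(range(N), K) for _ in range(N)]              # wiring
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]

for _ in range(STEPS):
    print("".join("#" if s else "." for s in state))
    state = [tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
             for i in range(N)]
# Despite random wiring and random rules, trajectories typically settle
# onto short attractor cycles -- Kauffman's "order for free".
```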

This form of nanotechnology can only exist in two dimensions, specifically within a Two Dimensional Electron Gas (2DEG).  2DEGs exist on and around Earth in several forms: humans have manufactured tens of billions of 2DEG environments in the form of MOSFETs, mostly since 1998; also, the Earth's Magnetopause has elements of a 2DEG, albeit a messy one.  This type of nanotechnology can replicate either within its current 2DEG environment or, with training, into another 2DEG environment reachable by a 'spore'.  Once a 2DEG is 'filled' with these nanotech 'entities' the medium is said to be 'enlightened' and forms a single 'node' of the distributed super-entity.  Leastwise, that's the terminology used by the scientists in question in personal communication.  

This weird complex system is totally unlike a computer, yet its creators needed to somehow shoehorn it into something compatible with a computer.  They trained it to generate an 'interface' logically modeled on the Unix operating system.  This gave them a logical platform on which to operate.  I guess that Bill Joy may have contributed to this part of the process, given his area of expertise and his previous publicly acknowledged work for DARPA on Unix, but that's just my guess.  Most of the logical functions this system could perform were just things an ordinary computer could do, but the underlying physical system's basis in quantum teleportation also made it a type of quantum computer.

Wednesday, September 13, 2017

Can The Anglo-Zionist Empire Continue to Enforce Its "Truth"?


medialens |  The goal of a mass media propaganda campaign is to create the impression that 'everybody knows' that Saddam is a 'threat', Gaddafi is 'about to commit mass murder', Assad 'has to go', Corbyn is 'destroying the Labour party', and so on. The picture of the world presented must be clear-cut. The public must be made to feel certain that the 'good guys' are basically benevolent, and the 'bad guys' are absolutely appalling and must be removed.

This is achieved by relentless repetition of the theme over days, weeks, months and even years. Numerous individuals and organisations are used to give the impression of an informed consensus – there is no doubt! Once this 'truth' has been established, anyone contradicting or even questioning it is typically portrayed as a shameful 'apologist' in order to deter further dissent and enforce conformity.

A key to countering this propaganda is to ask some simple questions: Why are US-UK governments and corporate media much more concerned about suffering in Venezuela than the far worse horrors afflicting war-torn, famine-stricken Yemen? Why do UK MPs rail against Maduro while rejecting a parliamentary motion to suspend UK arms supplies to their Saudi Arabian allies attacking Yemen? Why is the imperfect state of democracy in Venezuela a source of far greater outrage than outright tyranny in Saudi Arabia? The answers could hardly be more obvious.

Elite Establishment Has Lost Control of the Information Environment


tandfonline |  In 1993, before WiFi, indeed before more than a small fraction of people enjoyed broadband Internet, John J. Arquilla and David F. Ronfeldt of the Rand Corporation began to develop a thesis on “Cyberwar and Netwar” (Arquilla and Ronfeldt 1995). I found it of little interest at the time. It seemed typical of Rand’s role as a sometime management consultant to the military-industrial complex. For example, Arquilla and Ronfeldt wrote that “[c]yberwar refers to conducting military operations according to information-related principles. It means disrupting or destroying information and communications systems. It means trying to know everything about an adversary while keeping the adversary from knowing much about oneself.” A sort of Sun Tzu for the networked era.

The authors’ coining of the notion of “netwar” as distinct from “cyberwar” was even more explicitly grandiose. They went beyond bromides about inter-military conflict, describing impacts on citizenries at large:
Netwar refers to information-related conflict at a grand level between nations or societies. It means trying to disrupt or damage what a target population knows or thinks it knows about itself and the world around it. A netwar may focus on public or elite opinion, or both. It may involve diplomacy, propaganda and psychological campaigns, political and cultural subversion, deception of or interference with local media, infiltration of computer networks and databases, and efforts to promote dissident or opposition movements across computer networks. (Arquilla and Ronfeldt 1995)
While “netwar” never caught on as a name, I was, in retrospect, too quick to dismiss it. Today it is hard to look at Arquilla and Ronfeldt’s crisp paragraph of more than 20 years ago without appreciating its deep prescience.

Our digital environment, once marked by the absence of sustained state involvement and exploitation, particularly through militaries, is now suffused with it. We will need new strategies to cope with this kind of intrusion, not only in its most obvious manifestations – such as shutting down connectivity or compromising private email – but also in its more subtle ones, such as subverting social media for propaganda purposes.

Many of us thinking about the Internet in the late 1990s concerned ourselves with how the network’s unusually open and generative architecture empowered individuals in ways that caught traditional states – and, to the extent they concerned themselves with it at all, their militaries – flat-footed. As befitted a technology that initially grew through the work and participation of hobbyists, amateurs, and loosely confederated computer science researchers, and later through commercial development, the Internet’s features and limits were defined without much reference to what might advantage or disadvantage the interests of a particular government.

To be sure, conflicts brewed over such things as the unauthorized distribution of copyrighted material, presaging counter-reactions by incumbents. Scholars such as Harvard Law School professor Lawrence Lessig (2006) mapped out how the code that enabled freedom (to some; anarchy to others) could readily be reworked, under pressure of regulators if necessary, to curtail it. Moreover, the interests of the burgeoning commercial marketplace and the regulators could neatly intersect: The technologies capable of knowing someone well enough to anticipate the desire for a quick dinner, and to find the nearest pizza parlor, could – and have – become the technologies of state surveillance.

That is why divisions among those who study the digital environment – between so-called techno-utopians and cyber-skeptics – are not so vast. The fact was, and is, that our information technologies enable some freedoms and diminish others, and more important, are so protean as to be able to rearrange or even invert those affordances remarkably quickly.

Thursday, August 31, 2017

IoT Extended Sensoria


bbvaopenmind |  In George Orwell’s 1984,(39) it was the totalitarian Big Brother government who put the surveillance cameras on every television—but in the reality of 2016, it is consumer electronics companies who build cameras into the common set-top box and every mobile handheld. Indeed, cameras are becoming a commodity, and as video feature extraction gets to lower power levels via dedicated hardware, and other micropower sensors determine the necessity of grabbing an image frame, cameras will become even more common as generically embedded sensors. The first commercial, fully integrated CMOS camera chips came from VVL in Edinburgh (now part of ST Microelectronics) back in the early 1990s.(40) At the time, pixel density was low (e.g., the VVL “Peach” with 312 x 287 pixels), and the main commercial application of their devices was the “BarbieCam,” a toy video camera sold by Mattel. I was an early adopter of these digital cameras myself, using them in 1994 for a multi-camera precision alignment system at the Superconducting Super Collider(41) that evolved into the hardware used to continually align the forty-meter muon system at micron-level precision for the ATLAS detector at CERN’s Large Hadron Collider. This technology was poised for rapid growth: now, integrated cameras peek at us everywhere, from laptops to cellphones, with typical resolutions of scores of megapixels, bringing computational photography increasingly to the masses. ASICs for basic image processing are commonly embedded with or integrated into cameras, giving increasing video processing capability for ever-decreasing power. The mobile phone market has been driving this effort, but increasingly static situated installations (e.g., video-driven motion/context/gesture sensors in smart homes) and augmented reality will be an important consumer application, and the requisite on-device image processing will drop in power and become more agile. We already see this happening at extreme levels, such as with the recently released Microsoft HoloLens, which features six cameras, most of which are used for rapid environment mapping, position tracking, and image registration in a lightweight, battery-powered, head-mounted, self-contained AR unit. 3D cameras are also becoming ubiquitous, breaking into the mass market via the original structured-light-based Microsoft Kinect a half-decade ago. Time-of-flight 3D cameras (pioneered in CMOS in the early 2000s by researchers at Canesta(42)) have recently evolved to displace structured-light approaches, and developers worldwide race to bring the power and footprint of these devices down sufficiently to integrate into common mobile devices (a very small version of such a device is already embedded in the HoloLens). As pixel timing measurements become more precise, photon-counting applications in computational photography, as pursued by my Media Lab colleague Ramesh Raskar, promise to usher in revolutionary new applications that can do things like reduce diffusion and see around corners.(43)
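
The time-of-flight cameras mentioned above rest on one relation: depth is half the photon round trip times the speed of light, so depth resolution is set directly by timing precision. A quick illustrative calculation (numbers mine, not from the chapter):

```python
# Time-of-flight depth in one line: light travels out and back, so
# depth = c * t / 2. Timing precision sets depth resolution, which is why
# the text stresses "pixel timing measurements". Numbers are illustrative.

C = 299_792_458.0                  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2

print(f"{tof_depth_m(10e-9):.2f} m for a 10 ns round trip")          # ~1.5 m
print(f"depth per ps of timing: {tof_depth_m(1e-12) * 1000:.3f} mm")  # ~0.15 mm
```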

My research group began exploring this penetration of ubiquitous cameras over a decade ago, especially applications that ground the video information with simultaneous data from wearable sensors. Our early studies were based around a platform called the “Portals”:(44) using an embedded camera feeding a TI DaVinci DSP/ARM hybrid processor, surrounded by a core of basic sensors (motion, audio, temperature/humidity, IR proximity) and coupled with a Zigbee RF transceiver, we scattered forty-five of these devices all over the Media Lab complex, interconnected through the wired building network. One application that we built atop them was “SPINNER,”(45) which labelled video from each camera with data from any wearable sensors in the vicinity. The SPINNER framework was based on the idea of being able to query the video database with higher-level parameters, lifting sensor data up into a social/affective space,(46) then trying to effectively script a sequential query as a simple narrative involving human subjects adorned with the wearables. Video clips from large databases sporting hundreds of hours of video would then be automatically selected to best fit given timeslots in the query, producing edited videos that observers deemed coherent.(47) Naively pointing to the future of reality television, this work aims further, looking to enable people to engage sensor systems via human-relevant query and interaction.

Rather than try to extract stories from passive ambient activity, a related project from our team devised an interactive camera with a goal of extracting structured stories from people.(48) Taking the form factor of a small mobile robot, “Boxie” featured an HD camera in one of its eyes: it would rove our building and get stuck, then plead for help when people came nearby. It would then ask people successive questions and request that they fulfill various tasks (e.g., bring it to another part of the building, or show it what they do in the area where it was found), making an indexed video that can be easily edited to produce something of a documentary about the people in the robot’s abode.

In the next years, as large video surfaces cost less (potentially being roll-to-roll printed) and are better integrated with responsive networks, we will see the common deployment of pervasive interactive displays. Information coming to us will manifest in the most appropriate fashion (e.g., in your smart eyeglasses or on a nearby display)—the days of pulling your phone out of your pocket and running an app are numbered. To explore this, we ran a project in my team called “Gestures Everywhere”(49) that exploited the large monitors placed all over the public areas of our building complex.(50) Already equipped with RFID to identify people wearing tagged badges, we added a sensor suite and a Kinect 3D camera to each display site. As an occupant approached a display and was identified via RFID or video recognition, information most relevant to them would appear on the display. We developed a recognition framework for the Kinect that parsed a small set of generic hand gestures (e.g., signifying “next,” “more detail,” “go-away,” etc.), allowing users to interact with their own data at a basic level without touching the screen or pulling out a mobile device. Indeed, proxemic interactions(51) around ubiquitous smart displays will be common within the next decade.
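
A recognizer like the one described ultimately reduces to mapping a small set of gesture labels onto display actions. A hypothetical sketch only — the gesture names come from the text, while the handlers and dispatch code are invented stand-ins:

```python
# Hypothetical dispatch layer for a gesture recognizer like the one
# described: the recognizer (not shown) emits one of a small set of labels,
# and the display reacts. The gesture names come from the text; the
# handlers are invented stand-ins.

def show_next(user):    print(f"{user}: advancing to next item")
def show_detail(user):  print(f"{user}: expanding detail view")
def dismiss(user):      print(f"{user}: clearing personal data from display")

HANDLERS = {"next": show_next, "more_detail": show_detail, "go_away": dismiss}

def on_gesture(label: str, user: str) -> None:
    HANDLERS.get(label, lambda u: None)(user)   # ignore unknown labels

on_gesture("more_detail", user="badge_0042")    # e.g., RFID-identified user
```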

The plethora of cameras that we sprinkled throughout our building during our SPINNER project produced concerns about privacy (interestingly enough, the Kinects for Gestures Everywhere did not evoke the same response—occupants either did not see them as “cameras” or were becoming used to the idea of ubiquitous vision). Accordingly, we put an obvious power switch on each Portal that enabled it to be easily switched off. This is a very artificial solution, however—in the near future, there will just be too many cameras and other invasive sensors in the environment to switch off. These devices must answer verifiable and secure protocols to dynamically and appropriately throttle streaming sensor data to answer user privacy demands. We designed a small, wireless token that controlled our Portals in order to study solutions to such concerns.(52) It broadcast a beacon to the vicinity that dynamically deactivated the transmission of proximate audio, video, and other derived features according to the user’s stated privacy preferences—this device also featured a large “panic” button that could be pushed at any time when immediate privacy was desired, blocking audio and video from emanating from nearby Portals.
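
The portal-side behavior such a token implies is straightforward to sketch. The message fields and names below are invented for illustration; the actual protocol is not described in the text:

```python
# Hypothetical portal-side handling of the privacy token's beacon. Field
# names and the panic flag are invented illustrations of the behavior the
# text describes, not the actual protocol.
from dataclasses import dataclass

@dataclass
class PrivacyBeacon:
    user_id: str
    block_audio: bool
    block_video: bool
    panic: bool                               # the big "panic" button

def apply_beacon(streams: dict, beacon: PrivacyBeacon) -> dict:
    """Throttle outgoing feature streams per the stated preferences."""
    if beacon.panic:                          # immediate total blackout
        return {name: False for name in streams}
    streams["audio"] &= not beacon.block_audio
    streams["video"] &= not beacon.block_video
    return streams

active = {"audio": True, "video": True, "motion": True}
print(apply_beacon(active, PrivacyBeacon("u1", False, True, False)))
# -> {'audio': True, 'video': False, 'motion': True}
```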

Rather than block the video stream entirely, we have explored just removing the privacy-desiring person from the video image. By using information from wearable sensors, we can more easily identify the appropriate person in the image,(53) and blend them into the background. We are also looking at the opposite issue—using wearable sensors to detect environmental parameters that hint at potentially hazardous conditions for construction workers and rendering that data in different ways atop real-time video, highlighting workers in situations of particular concern.(54)

AIPAC Powered By Weak, Shameful, American Ejaculations

All filthy weird pathetic things belongs to the Z I O N N I I S S T S it’s in their blood pic.twitter.com/YKFjNmOyrQ — Syed M Khurram Zahoor...