politico | Elizabeth Warren has pushed back hard on questions about a Harvard Crimson piece in 1996 that described her as Native American, saying she had no idea the school where she taught law was billing her that way and that it never came up during her hiring a year earlier, a claim others have backed up.
But a 1997 Fordham Law Review piece described her as Harvard Law School's "first woman of color," based, according to the notes at the bottom of the story, on a "telephone interview with Michael Chmura, News Director, Harvard Law (Aug. 6, 1996)."
The mention was in the middle of a lengthy and heavily-annotated Fordham piece on diversity and affirmative action and women. The title of the piece, by Laura Padilla, was "Intersectionality and positionality: Situating women of color in the affirmative action dialogue."
"There are few women of color who hold important positions in the academy, Fortune 500 companies, or other prominent fields or industries," the piece says. "This is not inconsequential. Diversifying these arenas, in part by adding qualified women of color to their ranks, remains important for many reaons. For one, there are scant women of color as role models. In my three years at Stanford Law School, there were no professors who were women of color. Harvard Law School hired its first woman of color, Elizabeth Warren, in 1995."
Padilla, now at California Western School of Law, told POLITICO in an email that she doesn't remember the details of the conversation with Chmura, who is now at Babson College and didn't respond to a request for comment. It is unclear whether it was Padilla's language or Chmura's.
medialens | One of the essential functions of the corporate media is to
marginalise or silence acknowledgement of the history – and continuation
– of Western imperial aggression. The coverage of the recent sentencing
in Senegal of Hissène Habré, the former dictator of Chad, for crimes
against humanity, provides a useful case study.
The verdict could well have presented the opportunity for the media
to examine in detail the complicity of the US, UK, France and their
major allies in the Middle East and North Africa in the appalling
genocide Habré inflicted on Chad during his rule – from 1982 to 1990.
After all, Habré had seized power via a CIA-backed coup. As William Blum
commented in Rogue State (2002: 152):
'With US support, Habré went on to rule for eight years during which
his secret police reportedly killed tens of thousands, tortured as many
as 200,000 and disappeared an undetermined number.'
Indeed, while coverage of Chad has been largely missing from the
British corporate media, so too was the massive, secret war waged over
these eight years by the United States, France and Britain from bases in
Chad against Libyan leader Colonel Mu'ammar Gaddafi. (See Targeting Gaddafi: Secret Warfare and the Media, by Richard Lance Keeble, in Mirage in the Desert? Reporting the 'Arab Spring', edited by John Mair and Richard Lance Keeble, Abramis, Bury St Edmunds, 2011, pp 281-296.)
By 1990, with the crisis in the Persian Gulf developing, the French
government had tired of Habré's genocidal policies while George Bush
senior's administration decided not to frustrate France in exchange for
co-operation in its attack on Iraq. And so Habré was secretly toppled
and in his place Idriss Déby was installed as the new President of Chad.
Yet the secret Chad coups can only be understood as part of the
United States' global imperial strategy. For since 1945, the US has
intervened in more than 70 countries – in Africa, Eastern Europe, the Middle East, South America and Asia. Britain, too, has engaged militarily
across the globe in virtually every year since 1914. Most of these
conflicts are conducted far away from the gaze of the corporate media.
physorg | Individuals tend to
group others based on their perceived morality, often employing
stereotypes to describe individuals or groups of people believed to
have different morals or values. According to Fiske et al.,
stereotypes are well described using two dimensions: warmth and
competence. Warmth (or lack of it) refers to the perceived
positive/negative intent of another person, while competence refers to
the other person's capacity to achieve their intent. Using this
terminology, the ingroup, or the group that you belong to, is both warm
and competent, and thus trustworthy. Stereotypes with high perceived
competence and low perceived warmth, including stereotypically wealthy
individuals, are often not trusted because perceived intent is either
unknown or negative. Similarly, scientists are often not trusted because their perceived amorality leaves their intent unclear.
I believe that in order to earn more trust from the public, scientists must cultivate greater perceived warmth.
I propose two ways to achieve this goal. First, scientists need to make their intentions clear. Social psychologist Todd Pittinsky,
mentioned in the introduction, has some terrific ideas on how to
clarify intentions. One strategy is open access to data and methods,
which is readily achieved through open access publishing. Scientists
also need to treat misconduct by other scientists more seriously so that
people don't, for example, deem all vaccine science fraudulent due to one case of misconduct.
Finally, we need to treat science denial without disdain and
acknowledge uncertainty properly when describing scientific results.
Second, scientists need to move into the ingroup sphere by imitating
those already in the ingroup. Kahan et al. point out that an
individual's established ideology greatly influences how they process
new information. I would suggest scientists frame their findings in a
way that fits with the audience's ideology, thus promoting "warmth". For
example, a Pew report reveals that 37% of the public thinks GMOs are not safe, a view in which GMOs violate the individual foundations. Highlighting how certain crops can
be genetically engineered for health (e.g. rice that is genetically engineered to produce beta carotene)
shows how GMOs can be compatible with individual foundations. Behaving
like an ingroup can then move scientists into the ingroup sphere.
Battling misinformation is definitely an uphill climb, but it is a
climb scientists must endeavor to make. Climate change denial and the
anti-vaccination movement threaten the future of scientific progress,
and while the danger cannot be ignored, we should not belittle
non-scientific ideas. Scientists can build goodwill through increased
transparency and communicating the significance of their findings to the
public. By taking other worldviews into account, we can find common
ground and create open dialogue and perhaps find solutions to benefit
everyone.
theantimedia | As someone who wants to give the appearance of knowing the hardships of common people, Hillary loves to bring up the topic of income inequality. In April, while giving a victory speech for her win in the New York primary, Hillary brought up that very subject — and then attempted to boast about her support among average Americans.
“I know how important it is that we get the campaign’s resources from people just like you, who go in and chip in $5, $25. I am grateful to every one of you.”
And she said every word of it while wearing a Giorgio Armani tweed jacket that cost a hefty $12,495.
At least lie to me, Hillary — don’t pretend to be a champion against inequality while wearing an article of clothing that literally costs more than a new car. You lie about everything else, so rather than flaunting your riches while championing the poor (and letting your riches show through the cracks), you might as well put on a cheap suit and give this ‘populist’ deception the effort you give your other ones.
theintercept | Last night, the Associated Press — on a day when nobody voted — surprised everyone by abruptly declaring
the Democratic Party primary over and Hillary Clinton the victor. The
decree, issued the night before the California primary in which polls
show Clinton and Bernie Sanders in a very close race, was based on the
media organization’s survey of “superdelegates”: the Democratic Party’s
720 insiders, corporate donors, and officials whose votes for the
presidential nominee count the same as the actually elected delegates.
AP claims that superdelegates who had not previously announced their
intentions privately told AP reporters that they intend to vote for
Clinton, bringing her over the threshold. AP is concealing the identity
of the decisive superdelegates who said this.
Although the Sanders campaign rejected the validity
of AP’s declaration — on the ground that the superdelegates do not vote
until the convention and he intends to try to persuade them to vote for
him — most major media outlets followed the projection and declared Clinton the winner.
This is the perfect symbolic ending to the Democratic Party primary: The
nomination is consecrated by a media organization, on a day when nobody
voted, based on secret discussions with anonymous establishment
insiders and donors whose identities the media organization — incredibly
— conceals. The decisive edifice of superdelegates is itself
anti-democratic and inherently corrupt: designed to prevent actual
voters from making choices that the party establishment dislikes. But
for a party run by insiders and funded by corporate interests, it’s only
fitting that its nomination process ends with such an ignominious,
awkward, and undemocratic sputter.
theintercept | One of the greatest free speech threats in the West is the growing, multi-nation campaign literally to outlaw advocacy of boycotting Israel. People get arrested in Paris —
the site of the 2015 “free speech” (for Muslim critics) rally — for
wearing pro-boycott T-shirts. Pro-boycott students on U.S. campuses —
where the 1980s boycott of apartheid South Africa flourished — are routinely sanctioned for violating anti-discrimination policies. Canadian officials have threatened to criminally prosecute boycott advocates. British government bodies have legally barred certain types of boycott advocacy. Israel itself has outright criminalized
advocacy of such boycotts. Notably, all of this has been undertaken
with barely a peep from those who styled themselves free speech
crusaders when it came time to defend anti-Muslim cartoons.
But now, New York’s Democratic Gov. Andrew Cuomo (above, in the 2016
Celebrate Israel Parade) has significantly escalated this free speech
attack on U.S. soil, aimed at U.S. citizens. The prince of the New York
political dynasty yesterday issued an executive order directing all agencies
under his control to terminate any and all business with companies or
organizations that support a boycott of Israel. It ensures that citizens
who hold and express a particular view are punished through the denial
of benefits that other citizens enjoy: a classic free speech violation
(imagine if Cuomo issued an order stating that “anyone who expresses
conservative viewpoints shall have all state benefits immediately
terminated”).
Even more disturbing, Cuomo’s executive order requires
that one of his commissioners compile “a list of institutions and
companies” that — “either directly or through a parent or subsidiary” —
support a boycott. That government list is then posted publicly, and the burden falls on the listed institutions and companies to prove to the state that they do not, in fact,
support such a boycott. Donna Lieberman, executive director of the New
York Civil Liberties Union, told The Intercept: “Whenever the
government creates a blacklist based on political views it raises
serious First Amendment concerns and this is no exception.” Reason’s Robby Soave denounced it today as “brazenly autocratic.”
To read the relevant provisions of Cuomo’s order is to confront the
mentality of petty censoring tyranny, flavored with McCarthyite public
shaming, in its purest form.
theantimedia | In the true Orwellian fashion now typifying
2016, a bill to implement the U.S.’ very own de facto Ministry of Truth
has been quietly introduced in Congress — its lack of fanfare
appropriate given the bill’s equally subtle language. As with any
legislation attempting to dodge the public spotlight, however, the
Countering Foreign Propaganda and Disinformation Act of 2016 marks a
further curtailment of press freedom and another means of stifling avenues of accurate information.
Introduced by Congressmen Adam Kinzinger and Ted Lieu, H.R. 5181 seeks a “whole-government approach without the bureaucratic restrictions” to counter “foreign disinformation and manipulation,” which they believe threaten the world’s “security and stability.”
“As Russia continues to spew its disinformation and false
narratives, they undermine the United States and its interests in places
like Ukraine, while also breeding further instability in these
countries,” Kinzinger explained in a statement. “The
United States has a role in countering these destabilizing acts of
propaganda, which is why I’m proud to introduce [the aforementioned
bill]. This important legislation develops a comprehensive U.S. strategy
to counter disinformation campaigns through interagency cooperation and
on-the-ground partnerships with outside organizations that have
experience in countering foreign propaganda.”
Make no mistake — this legislation isn’t proposing some team of noble
fact-finders, chiseling away to free the truth from the façades of
various foreign governmental narratives for the betterment of American
and allied populations. If passed, this legislation will allow
cumbrously pro-‘American’ propaganda to infiltrate cable, online, and
mainstream news organizations wherever the government deems necessary.
“From Ukraine to the South China Sea, foreign disinformation
campaigns do more than spread anti-Western sentiments — they manipulate
public perception to change the facts on the ground, subvert democracy
and undermine U.S. interests,” Lieu explained. “In short, they make the world less safe.”
H.R. 5181 tasks the Secretary of State with coordinating the
Secretary of Defense, the Director of National Intelligence, and the
Broadcasting Board of Governors to “establish a Center for Information Analysis and Response,” which will pinpoint sources of disinformation, analyze data, and — in true dystopic manner — ‘develop and disseminate’ “fact-based narratives” to counter effrontery propaganda.
arXiv | Why life persists at the edge of chaos is a question at the very heart of evolution. Here we show that molecules taking part in biochemical processes from small molecules to proteins are critical quantum mechanically. Electronic Hamiltonians of biomolecules are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase. Using tools from Random Matrix Theory we confirm that the energy level statistics of these biomolecules show the universal transitional distribution of the metal-insulator critical point and the wave functions are multifractals in accordance with the theory of Anderson transitions. The findings point to the existence of a universal mechanism of charge transport in living matter. The revealed bio-conductor material is neither a metal nor an insulator but a new quantum critical material which can exist only in highly evolved systems and has unique material properties.
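The "energy level statistics" the abstract leans on have a standard, easy-to-compute diagnostic: the mean adjacent-gap ratio of a spectrum, which takes one universal value in the "metallic" (random-matrix) limit and another in the "insulating" (Poisson) limit, with critical spectra sitting in between. The sketch below is mine, not the authors' code, and uses a generic Gaussian orthogonal ensemble matrix rather than a biomolecular Hamiltonian:

```python
# A minimal sketch (not the paper's code) of the Random Matrix Theory
# diagnostic referred to above: the mean adjacent-gap ratio of a spectrum
# is ~0.53 for the "metallic" (GOE) limit and ~0.39 for the "insulating"
# (Poisson) limit; critical spectra fall in between.
import numpy as np

rng = np.random.default_rng(0)

def mean_gap_ratio(levels):
    """Mean of r = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) over adjacent spacings."""
    s = np.diff(np.sort(levels))
    s = s[s > 0]
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

n = 1000
a = rng.normal(size=(n, n))
goe = (a + a.T) / 2                       # Gaussian orthogonal ensemble matrix
print("GOE     :", mean_gap_ratio(np.linalg.eigvalsh(goe)))   # ~0.53
print("Poisson :", mean_gap_ratio(rng.uniform(0, n, size=n)))  # ~0.39
```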
physorg | Stuart Kauffman, from the University of Calgary, and several of his colleagues have recently published a paper on the arXiv server titled 'Quantum Criticality at the Origins of Life'. The idea of quantum criticality, and more generally quantum critical states, comes, perhaps not surprisingly, from solid state physics. It describes unusual electronic states that are balanced somewhere between conduction and insulation. More specifically, under certain conditions, current flow at the critical point becomes unpredictable. When it does flow, it tends to do so in avalanches that vary by several orders of magnitude in size.
In suggesting that biomolecules, or at least most of them, are quantum critical conductors, Kauffman and his group are claiming that their electronic properties are precisely tuned to the transition point between a metal and an insulator. An even stronger reading of this would have it that there is a universal mechanism of charge transport in living matter which can exist only in highly evolved systems. To back all this up, the group took a closer look at the electronic structure of a few of our standard issue proteins like myoglobin, profilin, and apolipoprotein E.
theatlantic | If his mushrooms could grow tailor-made weapons against any other types of fungi, would it be possible for them to do the same against any type of bacteria, too?
The rise of drug-resistant bacteria is sobering. Just last week, colistin-resistant E. coli––a “superbug” resistant to the antibiotic that’s considered the last resort for combatting particularly dangerous types of infections––landed in the U.S. Soon, public health officials anticipate, infections will be harder to stop; 10 million people could die of drug-resistant superbugs.
It may be a long shot, but it’s conceivable that Cotter's process offers a new kind of hope. While scientists have been working on the problem of antibiotic resistance for many years—some are looking toharness the human immune systemto better fight it; others are working on simplydetecting the superbugs faster—his vision is to beat superbugs with medicine that actually adapts to destroy them. It’s not pharmaceuticals he has in mind; he's not planning to mass produce many different types of secondary metabolites. Rather, he believes it’s his unique style of co-culturing itself––the process of culturing two different microbes together to produce a defense entirely specific to the attacker––that may be able to create custom antibiotics that, at least in theory, could be inherently less susceptible to resistance.
Hisgoal, in other words, is to grow mushrooms that are themselves medicine, because they could create whatever metabolites a sick person needs.
"The best situation I could describe is something everyone has gone through, like a strep throat culture,” Cotter says, imagining a scenario in which an infected patient walks into the doctor’s office, gets a throat swab, and then has the swab dropped into a specially designed module containing a fungus. That fungus would then sweat metabolites into a reservoir that would be naturally calibrated to combat the patient’s illness.
Cotter doesn’t know how the metabolites would be administered yet. A lollipop or throat spray for strep? Delivered topically for staph? His testing is still very much ongoing. Should he receive the NIH grant he's applying for––a grant backed by a $1.2 billion White House Initiative to stop resistant diseases––answers could arrive rapidly. Analytical labs would go up, animal testing would begin, streptococcus lollipopus before we know it.
wired | Artificial intelligence wasn’t supposed to work this way. Until a few
years ago, mainstream AI researchers assumed that to create
intelligence, we just had to imbue a machine with the right logic. Write
enough rules and eventually we’d create a system sophisticated enough
to understand the world. They largely ignored, even vilified, early
proponents of machine learning, who argued in favor of plying machines
with data until they reached their own conclusions. For years computers
weren’t powerful enough to really prove the merits of either approach,
so the argument became a philosophical one. “Most of these debates were
based on fixed beliefs about how the world had to be organized and how
the brain worked,” says Sebastian Thrun, the former Stanford AI
professor who created Google’s self-driving car. “Neural nets had no
symbols or rules, just numbers. That alienated a lot of people.”
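The rules-versus-data contrast Thrun describes can be made concrete with a toy example. Nothing below comes from the article; the task, data, and rule are invented purely for illustration, with a hand-written rule competing against a tiny model that learns its weights from labeled examples:

```python
# Toy illustration only (not from the article): a hand-written rule versus
# weights learned from data by gradient descent, in the spirit of the
# rules-versus-learning debate described above. Data and task are invented.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                  # two features per example
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)  # "true" labels

# Symbolic approach: a rule someone wrote down by hand.
rule_pred = (X[:, 0] > 0).astype(float)

# Learning approach: logistic regression -- no rules, just numbers adjusted to fit.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))         # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)          # gradient steps
    b -= 0.5 * np.mean(p - y)

learned_pred = (X @ w + b > 0).astype(float)
print("hand-written rule accuracy:", np.mean(rule_pred == y))
print("learned weights accuracy  :", np.mean(learned_pred == y))
```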
The implications of an unparsable machine language aren’t just
philosophical. For the past two decades, learning to code has been one
of the surest routes to reliable employment—a fact not lost on all those
parents enrolling their kids in after-school code academies. But a
world run by neurally networked deep-learning machines requires a
different workforce. Analysts have already started worrying about the
impact of AI on the job market, as machines render old skills
irrelevant. Programmers might soon get a taste of what that feels like
themselves.
Of course, humans still have to train these systems. But for now, at
least, that’s a rarefied skill. The job requires both a high-level grasp
of mathematics and an intuition for pedagogical give-and-take. “It’s
almost like an art form to get the best out of these systems,” says
Demis Hassabis, who leads Google’s DeepMind AI team. “There’s only a few
hundred people in the world that can do that really well.” But even
that tiny number has been enough to transform the tech industry in just a
couple of years.
These forces have led technologist Danny Hillis to declare the end of
the age of Enlightenment, our centuries-long faith in logic,
determinism, and control over nature. Hillis says we’re shifting to what
he calls the age of Entanglement. “As our technological and
institutional creations have become more complex, our relationship to
them has changed,” he wrote in the Journal of Design and Science.
“Instead of being masters of our creations, we have learned to bargain
with them, cajoling and guiding them in the general direction of our
goals. We have built our own jungle, and it has a life of its own.” The
rise of machine learning is the latest—and perhaps the last—step in this
journey.
HuffPo | More of NASA’s astrobiology strategy for the next decade can be found in its latest roadmap: Astrobiology Strategy 2015. Lindsay Hays of California Institute of Technology’s Jet Propulsion Laboratory is editor-in-chief.
Microbes are
given some attention in a section titled: “How Does Our Ignorance About
Microbial Life on Earth Hinder Our Understanding of the Limits to Life?”
Curiously, however, there’s not a word in the entire 256-page document
(including the glossary) about the existence of viruses — the biggest part of the biosphere — let alone their consortial and persistent nature, when the new thinking in science is “virus first” and that persistence may be just as crucial to life as replication.
Templeton last year also awarded $5.4M for origin of life investigations to the Foundation for Applied Molecular Evolution, with funds being administered by FAME synthetic biologist Steve Benner (who once quipped, “If you don’t have a theory of life, you can’t find aliens — unless they shoot you in the leg with a ray gun.”), and $5.6M to ELSI — the Japanese government’s earth science institute in Tokyo — for its ELSI Origins Network, headed by astrophysicist Piet Hut, also of the Institute for Advanced Study in Princeton.
Steve Benner is listed as a reviewer on NASA’s latest roadmap and is on the editorial board of Astrobiology Journal whose senior editors include NAI’s new chief Penny Boston as well as ISSOL (International Society for the Study of Origin of Life) president Dave Deamer.
Astrobiology Journal is put together in the Kennewick,
Washington home of Sherry Cady, a geologist who serves as editor in
chief, and her husband Lawrence P. Cady, a fiction writer who serves as
the journal’s managing editor and copy editor — according to LP Cady.
The magazine is one of 80 of Mary Ann Liebert Inc.’s “authoritative”
journals and has close ties to other NASA-funded scientists who serve as
reviewers.
If anything substantive is likely to happen as a result of (or in spite of) Templeton funding on origin of life, I would expect it to come from Steve Benner’s project, which includes people like George E. Fox, who collaborated early on with Carl Woese on Archaea, and Niles Lehman, a recipient of Harry Lonsdale origin-of-life research funds — plus Benner himself and eight others.
On the other
hand, I have serious reservations about the NASA award of $1.1M of
public funds to CTI. Whatever happened to the separation of church and
state?
thescientist | Little things mean a lot. To any
biologist, this time-worn maxim is old news. But it’s worth revisiting.
As several articles in this issue of The Scientist illustrate, how researchers define and examine the “little things” does mean a lot.
Consider this month’s cover story, “Noncoding RNAs Not So Noncoding,”
by TS correspondent Ruth Williams. Combing the human genome for open
reading frames (ORFs), sequences bracketed by start and stop codons,
yielded a protein-coding count somewhere in the neighborhood of 24,000.
That left a lot of the genome relegated to the category of junk—or,
later, to the tens of thousands of mostly mysterious long noncoding RNAs
(lncRNAs). But because they had only been looking for ORFs that were
300 nucleotides or longer (i.e., coding for proteins at least 100 amino
acids long), genome probers missed so-called short ORFs (sORFs), which
encode small peptides. “Their diminutive size may have caused these
peptides to be overlooked, their sORFs to be buried in statistical
noise, and their RNAs to be miscategorized, but it does not prevent them
from serving important, often essential functions, as the micropeptides
characterized to date demonstrate,” writes Williams.
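To see why a length cutoff hides sORFs, here is a minimal, hypothetical sketch of the kind of threshold-based scan Williams describes: a naive single-strand search from each ATG to the next in-frame stop codon, with the toy sequence invented purely for illustration.

```python
# A minimal sketch of a length-thresholded ORF scan: start codon ATG to the
# next in-frame stop, one strand, no splicing. Lowering min_len from the
# conventional 300 nt is what surfaces short ORFs (sORFs).
import re

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_len=300):
    """Return (start, end, length) of ORFs at least min_len nucleotides long."""
    orfs = []
    for m in re.finditer("ATG", seq):
        start = m.start()
        for i in range(start, len(seq) - 2, 3):
            if seq[i:i + 3] in STOPS:
                length = i + 3 - start
                if length >= min_len:
                    orfs.append((start, i + 3, length))
                break
    return orfs

# Hypothetical toy sequence: a 37-codon ORF embedded in flanking DNA.
toy = "CCGT" + "ATG" + "GCT" * 35 + "TAA" + "GGATCC"
print(find_orfs(toy, min_len=300))  # [] -- missed at the standard cutoff
print(find_orfs(toy, min_len=60))   # found once the threshold is lowered
```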
How little things work definitely informs another field of life science
research: synthetic biology. As the functions of genes and gene
networks are sussed out, bioengineers are using the information to
design small, synthetic gene circuits that enable them to better
understand natural networks. In “Synthetic Biology Comes into Its Own,”
Richard Muscat summarizes the strides made by synthetic biologists over
the last 15 years and offers an optimistic view of how such networks
may be put to use in the future. And to prove him right, just as we go
to press, a collaborative group led by one of syn bio’s founding
fathers, MIT’s James Collins, has devised a paper-based test for Zika virus exposure
that relies on a freeze-dried synthetic gene circuit that changes color
upon detection of RNAs in the viral genome. The results are ready in a
matter of hours, not the days or weeks current testing takes, and the
test can distinguish Zika from dengue virus. “What’s really exciting
here is you can leverage all this expertise that synthetic biologists
are gaining in constructing genetic networks and use it in a real-world
application that is important and can potentially transform how we do
diagnostics,” commented one researcher about the test.
thescientist | Much of the attention paid to the bacterial CRISPR/Cas9 system has focused on its use as a gene-editing tool. But there are other CRISPR/Cas systems. Researchers from MIT and the National Center for Biotechnology Information (NCBI) last year identified additional CRISPR proteins.
One of these proteins, C2c2, seemed to be a putative
RNA-cleaving—rather than a DNA-targeting—enzyme, the researchers
reported at the time. Now, the same group has established that C2c2
indeed cleaves single-stranded RNA (ssRNA), providing the first example
of a CRISPR/Cas system that exclusively targets RNA. The team’s latest results, published today (June 2) in Science, confirm the diversity of CRISPR systems and point to the possibility of precise in vivo RNA editing.
“This protein does what we expected, performing RNA-guided cleavage of
cognate RNA with high specificity, and can be programmed to cleave any
RNA at will,” study coauthor Eugene Koonin, of the NCBI and the National Library of Medicine, told The Scientist.
“I am very excited about the paper,” said Gene Yeo,
an RNA researcher at the University of California, San Diego, who was
not involved in the work. “The community was expecting to find native
RNA CRISPR systems, so it’s great that one of these has now been
characterized.”
thescientist | In 2002, a group of plant researchers
studying legumes at the Max Planck Institute for Plant Breeding Research
in Cologne, Germany, discovered that a 679-nucleotide RNA believed to
function in a noncoding capacity was in fact a protein-coding messenger
RNA (mRNA).1
It had been classified as a long (or large) noncoding RNA (lncRNA) by
virtue of being more than 200 nucleotides in length. The RNA,
transcribed from a gene called early nodulin 40 (ENOD40),
contained short open reading frames (ORFs)—putative protein-coding
sequences bookended by start and stop codons—but the ORFs were so short
that they had previously been overlooked. When the Cologne collaborators
examined the RNA more closely, however, they found that two of the ORFs
did indeed encode tiny peptides: one of 12 and one of 24 amino acids.
Sampling the legumes confirmed that these micropeptides were made in the
plant, where they interacted with a sucrose-synthesizing enzyme.
Five years later, another ORF-containing mRNA that had been posing as a lncRNA was discovered in Drosophila.2,3 After performing a screen of fly embryos to find lncRNAs, Yuji Kageyama,
then of the National Institute for Basic Biology in Okazaki, Japan,
suppressed each transcript’s expression. “Only one showed a clear
phenotype,” says Kageyama, now at Kobe University. Because embryos
missing this particular RNA lacked certain cuticle features, giving them
the appearance of smooth rice grains, the researchers named the RNA
“polished rice” (pri).
Turning his attention to how the RNA functioned, Kageyama thought he
should first rule out the possibility that it encoded proteins. But he
couldn’t. “We actually found it was a protein-coding gene,” he says. “It
was an accident—we are RNA people!” The pri gene turned out to
encode four tiny peptides—three of 11 amino acids and one of 32—that
Kageyama and colleagues showed are important for activating a key
developmental transcription factor.4
Since then, a handful of other lncRNAs have switched to the mRNA ranks
after being found to harbor micropeptide-encoding short ORFs
(sORFs)—those less than 300 nucleotides in length. And given the vast
number of documented lncRNAs—most of which have no known function—the
chance of finding others that contain micropeptide codes seems high.
The hunt for these tiny treasures is now on, but it’s a challenging
quest. After all, there are good reasons why these itty-bitty peptides
and their codes went unnoticed for so long.
NYTimes | “By
focusing on building the 3Gb of human DNA, HGP-write would push current
conceptual and technical limits by orders of magnitude and deliver
important scientific advances,” they write, referring to three
gigabases, the three billion letters in the human genome.
Scientists already can change DNA in organisms or add foreign genes, as is done to make medicines like insulin or genetically modified crops. New “genome editing” tools, like one called Crispr, are making it far easier to re-engineer an organism’s DNA blueprint.
But
George Church, a professor of genetics at Harvard Medical School and
one of the organizers of the new project, said that if the changes
desired are extensive, at some point it becomes easier to synthesize the
needed DNA from scratch.
“Editing
doesn’t scale very well,” he said. “When you have to make changes to
every gene in the genome it may be more efficient to do it in large
chunks.”
Besides
Dr. Church, the other organizers of the project are Jef Boeke, director
of the Institute for Systems Genetics at NYU Langone Medical Center;
Andrew Hessel, a futurist at the software company Autodesk; and Nancy J.
Kelley, who works on raising money for projects. The paper in Science
lists a total of 25 authors, many of them involved in DNA engineering.
Autodesk,
which has given $250,000 to the project, is interested in selling
software to help biologists design DNA sequences to make organisms
perform particular functions. Dr. Church is a founder of Gen9, a company that sells made-to-order strands of DNA.
Dr.
Boeke of N.Y.U. is leading an international project to synthesize the
complete genome of yeast, which has 12 million base pairs. It would be
the largest genome synthesized to date, though still much smaller than
the human genome.
WaPo | Consumers are already seeing our machine learning research
reflected in the sudden explosion of digital personal assistants like
Siri, Alexa and Google Now — technologies that are very good at
interpreting voice-based requests but aren't capable of much more than
that. These "narrow AI" have been designed with a specific purpose in
mind: To help people do the things regular people do, whether it's
looking up the weather or sending a text message.
Narrow,
specialized AI is also what companies like IBM have been pursuing. It
includes, for example, algorithms to help radiologists pick out tumors
much more accurately by "learning" all the cancer research we've ever
done and by "seeing" millions of sample X-rays and MRIs. These
robots act much more like glorified calculators — they can ingest way
more data than a single person could hope to do with his or her own
brain, but they still operate within the confines of a specific task
like cancer diagnosis. These robots are not going to be launching
nuclear missiles anytime soon. They wouldn't know how, or why. And the
more pervasive this type of AI becomes, the more we'll understand about
how best to build the next generation of robots.
So who is going to lose their job?
Partly
because we're better at designing these limited AI systems, some
experts predict that high-skilled workers will adapt to the technology
as a tool, while lower-skill jobs are the ones that will see the most
disruption. When the Obama administration studied the issue, it
found that as many as 80 percent of jobs currently paying less than $20
an hour might someday be replaced by AI.
"That's over a long
period of time, and it's not like you're going to lose 80 percent of
jobs and not reemploy those people," Jason Furman, a senior economic
advisor to President Obama, said in an interview. "But [even] if you
lose 80 percent of jobs and reemploy 90 percent or 95 percent of those
people, it's still a big jump up in the structural number not working.
So I think it poses a real distributional challenge."
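Furman's arithmetic is easy to make concrete. This small illustration is mine, not his, and simply plugs in the figures he cites for the under-$20-an-hour group:

```python
# Working through Furman's numbers: even with high re-employment, the share
# of that low-wage group left without work rises by several percentage points.
at_risk = 0.80                      # share of sub-$20/hour jobs the study flags
for reemployed in (0.90, 0.95):
    not_working = at_risk * (1 - reemployed)
    print(f"re-employ {reemployed:.0%} of displaced workers -> "
          f"+{not_working:.0%} of that group structurally not working")
# re-employ 90% -> +8%; re-employ 95% -> +4%
```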
Policymakers
will need to come up with inventive ways to meet this looming jobs
problem. But the same estimates also hint at a way out: Higher-earning
jobs stand to be less negatively affected by automation. Compared to the
low-wage jobs, roughly a third of those who earn between $20 and $40 an
hour are expected to fall out of work due to robots, according to
Furman. And only a sliver of high-paying jobs, about 5 percent, may
be subject to robot replacement.
Those numbers might look very different if researchers were truly on the brink of creating sentient AI that can really do all the
same things a human can. In this hypothetical scenario, even
high-skilled workers might have more reason to fear. But the fact that
so much of our AI research right now appears to favor narrow forms of
artificial intelligence at least suggests we could be doing a lot worse.
ecodevoevo | Our culture, like any culture, creates symbols to use as tokens as we go
about our lives. Tokens are reassuring or explanatory symbols, and we
naturally use them in the manipulations for various resources that
culture is often about. Nowadays, a central token is the gene.
Genes are proffered as the irrefutable ubiquitous cause of things, the
salvation, the explanation, in ways rather similar to the way God and
miracles are proffered by religion. Genes conveniently lead to
manipulation by technology, and technology sells in our industrial
culture. Genes are specific rather than vague, are enumerable, can be
seen as real core 'data' to explain the world. Genes are widely used as
ultimate blameworthy causes, responsible for disease which comes to be
defined as what happens when genes go 'wrong'. Being literally unseen,
like angels, genes can take on an aura of pervasive power and mystery.
The incantation by scientists is that if we can only be enabled to find
them we can even cure them (with CRISPR or some other promised
panacea), exorcising their evil. All of this invocation of fundamental
causal tokens is particulate enough to be marketable for grants and
research proposals, great for publishing in journals and for news media
to gawk at in wonder. Genes provide impressively mysterious tokens whose manipulation lets scientists promise something close to miracles. Genes
stand for life's Book of Truth, much as sacred texts have traditionally
done and, for many, still do.
Genes provide fundamental symbolic tokens in theories of life--its
essence, its evolution, of human behavior, of good and evil traits, of
atoms of causation from which everything follows. They lurk in the
background, responsible for all good and evil. So in our age in human
history, it is not surprising that reports of finding genes 'for' this
or that have unbelievable explanatory panache. It's not a trivial
aspect of this symbolic role that people (including scientists) have to
take others' word for what they claim as insights.
inference-review | The cell is a
complex dynamic system in which macromolecules such as DNA and the
various proteins interact within a free energy flux provided by
nutrients. Its phenotypes can be represented by quasi-stable attractors
embedded in a multi-dimensional state space whose dimensions are defined
by the activities of the cell’s constituent proteins.1
This is the basis for the dynamical model of the cell.
The current molecular genetic or machine model of the cell, on the
other hand, is predicated on the work of Gregor Mendel and Charles
Darwin. Mendel framed the laws of inheritance on the basis of his
experimental work on pea plants. The first law states that inheritance
is a discrete and not a blending process: crossing purple and white
flowered varieties produces some offspring with white and some with
purple flowers, but generally not intermediately colored offspring.2 Mendel concluded that whatever was inherited had a material or particulate nature; it could be segregated.3
According to the machine cell model, those particles are genes or
sequences of nucleobases in the genomic DNA. They constitute Mendel’s
units of inheritance. Gene sequences are transcribed, via messenger RNA,
to proteins, which are folded linear strings of amino acids called
peptides. The interactions between proteins are responsible for
phenotypic traits. This assumption relies on two general principles
affirmed by Francis Crick in 1958, namely the sequence hypothesis and
the central dogma.4
The sequence hypothesis asserts that the sequence of bases in the
genomic DNA determines the sequence of amino acids in the peptide and
the three-dimensional structure of the folded peptide. The central
dogma states that the sequence hypothesis represents a flow of
information from DNA to the proteins and rules out a flow in reverse.
In 1961, the American biologist Christian Anfinsen demonstrated that
when the enzyme ribonuclease was denatured, it lost its activity, but
regained it on re-naturing. Anfinsen concluded from the kinetics of
re-naturation that the amino acid sequence of the peptide determined how
the peptide folded.5
He did not cite Crick’s 1958 paper or the sequence hypothesis, although
he had apparently read the first and confirmed the second.
The central dogma and the sequence hypothesis proved to be wonderful
heuristic tools with which to conduct bench work in molecular biology.
The machine model recognizes cells to be highly regulated entities;
genes are responsible for that regulation through gene regulatory
networks (GRNs).6 Gene sequences provide all the information needed to build and regulate the cell.
Both a naturalist and an experimentalist, Darwin observed that
breeding populations exhibit natural variations. Limited resources mean a struggle for existence, and variants better suited to that struggle leave more offspring, so lineages become better and better adapted to their environments. This process is responsible for both small adaptive
improvements and dramatic changes. Darwin insisted evolution was, in
both cases, gradual, and predicted that intermediate forms between
species should be found both in the fossil record and in existing
populations. Today, these ideas are part of the modern evolutionary
synthesis, a term coined by Julian Huxley in 1942.7 Like the central dogma, it has been subject to controversy, despite its early designation as the set of principles under which all of biology is conducted.8
The modern synthesis, we now understand, does not explain
trans-generational epigenetic inheritance, consciousness, and niche
construction.9
It is possible that the concept of the gene and the claim that
evolution depends on genetic diversity may both need to be modified or
replaced.
This essay is a step towards describing biology as a science founded
on the laws of physics. It is a step in the right direction.
Stuart Kauffman's 1993 book, Origins of Order, is a technical treatise on his life's work in mathematical biology, greatly extending Alan Turing's early work in the field. The intended audience is other mathematical and theoretical biologists, and it's chock full of advanced mathematics. Of particular note, Origins of Order seems to be Kauffman's only published work in which he states his experimental results about the interconnection between complex systems and neural networks: Kauffman explains that a complex system tuned with particular parameters is a neural network.
I cannot overstate the importance of that last sentence. The implication is that one basis for intelligence, biological neural networks, can spontaneously self-generate given the correct starting parameters. Kauffman provides the mathematics to do this, discusses his experimental results, and points out that the parameters in question are an attractor state.
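For readers who want the flavor of that mathematics, the kind of complex system Kauffman analyzes is a random Boolean network. The sketch below is my own toy version, not code from the book; it shows one basic property the argument rests on, namely that the deterministic dynamics of such a network always settle onto a repeating cycle, an attractor.

```python
# A toy version (mine, not the book's) of a random Boolean network: N genes,
# each updated from K=2 inputs via a fixed random truth table. Iterating the
# deterministic dynamics from a random state always falls onto a cycle -- an
# attractor -- which is the sense in which phenotypes are described as
# attractors of such networks.
import random

random.seed(0)
N, K = 12, 2
inputs = [random.sample(range(N), K) for _ in range(N)]                    # wiring
rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]  # truth tables

def step(state):
    return tuple(
        rules[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(N)
    )

state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:      # finite state space, so a revisit is guaranteed
    seen[state] = t
    state = step(state)
    t += 1
print(f"fell onto a cycle of length {t - seen[state]} after {seen[state]} steps")
```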
nationalgeographic | According to your book, physics describes the actions or tendencies of every living thing—and inanimate ones as well. Does that mean we can unite all behavior under physics?
Absolutely. Our narrow definition of the discipline is something that’s happened in the past hundred years, thanks to the immense impact of Albert Einstein and atomic physics and relativity at the turn of the [20th] century.
But we need to go back farther. In Latin, nature—physics—means “everything that happens.”
One thing that came directly from Charles Darwin is that humans are part of nature, along with all the other animate beings. Therefore all the things that we make—our tools, our homes, our technologies—are natural as well. It’s all part of the same thing.
In your magazine and on your TV channel, we see many animals doing this—extending their reach with tools, with intelligence, with social organization. Everything is naturally interconnected.
Your new book is premised on a law of physics that you formulated in 1996. The Constructal Law says there’s a universal evolutionary tendency toward design in nature, because everything is composed of systems that change and evolve to flow more easily.
That’s correct. But I would specify and say that the tendency is toward evolving freely—changing on the go in order to provide greater and greater ease of movement. That’s physics, stage four—more precise, more specific expressions of the same idea.
Flow systems are everywhere. They describe the ways that animals move and migrate, the ways that river deltas form, the ways that people build fires. In each case, they evolve freely to reduce friction and to flow better—to improve themselves and minimize their mistakes or imperfections. Blood flow and water flow essentially evolve the same way.
quantamagazine | Popular hypotheses credit a primordial soup, a bolt of lightning and a
colossal stroke of luck. But if a provocative new theory is correct,
luck may have little to do with it. Instead, according to the physicist
proposing the idea, the origin and subsequent evolution of life follow
from the fundamental laws of nature and “should be as unsurprising as
rocks rolling downhill.”
From the standpoint of physics, there is one essential difference
between living things and inanimate clumps of carbon atoms: The former
tend to be much better at capturing energy from their environment and
dissipating that energy as heat. Jeremy England,
a 31-year-old assistant professor at the Massachusetts Institute of
Technology, has derived a mathematical formula that he believes explains
this capacity. The formula, based on established physics, indicates
that when a group of atoms is driven by an external source of energy
(like the sun or chemical fuel) and surrounded by a heat bath (like the
ocean or atmosphere), it will often gradually restructure itself in
order to dissipate increasingly more energy. This could mean that under
certain conditions, matter inexorably acquires the key physical
attribute associated with life.
“You start with a random clump of atoms, and if you shine light on it
for long enough, it should not be so surprising that you get a plant,”
England said.
England’s theory is meant to underlie, rather than replace, Darwin’s
theory of evolution by natural selection, which provides a powerful
description of life at the level of genes and populations. “I am
certainly not saying that Darwinian ideas are wrong,” he explained. “On
the contrary, I am just saying that from the perspective of the physics,
you might call Darwinian evolution a special case of a more general
phenomenon.”
His idea, detailed in a recent paper and further elaborated in a talk
he is delivering at universities around the world, has sparked
controversy among his colleagues, who see it as either tenuous or a
potential breakthrough, or both.
England has taken “a very brave and very important step,” said
Alexander Grosberg, a professor of physics at New York University who
has followed England’s work since its early stages. The “big hope” is
that he has identified the underlying physical principle driving the
origin and evolution of life, Grosberg said.
“Jeremy is just about the brightest young scientist I ever came
across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical
Physics at the National Institutes of Health who corresponded with
England about his theory after meeting him at a conference. “I was
struck by the originality of the ideas.”
aeon | No matter how hard
they try, brain scientists and cognitive psychologists will never find a
copy of Beethoven’s 5th Symphony in the brain – or copies of words,
pictures, grammatical rules or any other kinds of environmental stimuli.
The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.
Our
shoddy thinking about the brain has deep historical roots, but the
invention of computers in the 1940s got us especially confused. For more
than half a century now, psychologists, linguists, neuroscientists and
other experts on human behaviour have been asserting that the human
brain works like a computer.
To
see how vacuous this idea is, consider the brains of babies. Thanks to
evolution, human neonates, like the newborns of all other mammalian
species, enter the world prepared to interact with it effectively. A
baby’s vision is blurry, but it pays special attention to faces, and is
quickly able to identify its mother’s. It prefers the sound of voices to
non-speech sounds, and can distinguish one basic speech sound from
another. We are, without doubt, built to make social connections.
A
healthy newborn is also equipped with more than a dozen reflexes –
ready-made reactions to certain stimuli that are important for its
survival. It turns its head in the direction of something that brushes
its cheek and then sucks whatever enters its mouth. It holds its breath
when submerged in water. It grasps things placed in its hands so
strongly it can nearly support its own weight. Perhaps most important,
newborns come equipped with powerful learning mechanisms that allow them
to change rapidly so they can interact increasingly
effectively with their world, even if that world is unlike the one their
distant ancestors faced.
Senses,
reflexes and learning mechanisms – this is what we start with, and it
is quite a lot, when you think about it. If we lacked any of these
capabilities at birth, we would probably have trouble surviving.
But here is what we are not born with: information,
data, rules, software, knowledge, lexicons, representations,
algorithms, programs, models, memories, images, processors, subroutines,
encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
Computers, quite literally, process information
– numbers, letters, words, formulas, images. The information first has
to be encoded into a format computers can use, which means patterns of
ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my
computer, each byte contains 8 bits, and a certain pattern of those bits
stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog.
One single image – say, the photograph of my cat Henry on my desktop –
is represented by a very specific pattern of a million of these bytes
(‘one megabyte’), surrounded by some special characters that tell the
computer to expect an image, not a word.
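The author's point about bit patterns standing for letters is easy to see directly. This two-line snippet (mine, not the essay's) prints the actual ASCII/UTF-8 bit patterns for d, o, and g:

```python
# The bit patterns described above, made visible: the ASCII/UTF-8 bytes
# that stand for the letters d, o, g.
for byte in "dog".encode("utf-8"):
    print(format(byte, "08b"), chr(byte))
# 01100100 d
# 01101111 o
# 01100111 g
```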
Computers,
quite literally, move these patterns from place to place in different
physical storage areas etched into electronic components. Sometimes they
also copy the patterns, and sometimes they transform them in various
ways – say, when we are correcting errors in a manuscript or when we are
touching up a photograph. The rules computers follow for moving,
copying and operating on these arrays of data are also stored inside the
computer. Together, a set of rules is called a ‘program’ or an
‘algorithm’. A group of algorithms that work together to help us do
something (like buy stocks or find a date online) is called an
‘application’ – what most people now call an ‘app’.
Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Humans,
on the other hand, do not – never did, never will. Given this reality,
why do so many scientists talk about our mental life as if we were
computers?
NYTimes | Since
its release seven years ago, Minecraft has become a global sensation,
captivating a generation of children. There are over 100 million
registered players, and it’s now the third-best-selling video game in
history, after Tetris and Wii Sports. In 2014, Microsoft bought
Minecraft — and Mojang, the Swedish game studio behind it — for $2.5
billion.
There
have been blockbuster games before, of course. But as Jordan’s
experience suggests — and as parents peering over their children’s
shoulders sense — Minecraft is a different sort of phenomenon.
For one thing, it doesn’t really feel like a game.
It’s more like a destination, a technical tool, a cultural scene, or
all three put together: a place where kids engineer complex machines,
shoot videos of their escapades that they post on YouTube, make art and
set up servers, online versions of the game where they can hang out with
friends. It’s a world of trial and error and constant discovery,
stuffed with byzantine secrets, obscure text commands and hidden
recipes. And it runs completely counter to most modern computing trends.
Where companies like Apple and Microsoft and Google want our computers
to be easy to manipulate — designing point-and-click interfaces under
the assumption that it’s best to conceal from the average user how the
computer works — Minecraft encourages kids to get under the hood, break
things, fix them and turn mooshrooms into random-number generators. It
invites them to tinker.
In
this way, Minecraft culture is a throwback to the heady early days of
the digital age. In the late ’70s and ’80s, the arrival of personal
computers like the Commodore 64 gave rise to the first generation of
kids fluent in computation. They learned to program in Basic, to write
software that they swapped excitedly with their peers. It was a playful
renaissance that eerily parallels the embrace of Minecraft by today’s
youth. As Ian Bogost, a game designer and professor of media studies at
Georgia Tech, puts it, Minecraft may well be this generation’s personal
computer.
At
a time when even the president is urging kids to learn to code,
Minecraft has become a stealth gateway to the fundamentals, and the
pleasures, of computer science. Those kids of the ’70s and ’80s grew up
to become the architects of our modern digital world, with all its
allures and perils. What will the Minecraft generation become?
“Children,” the social
critic Walter Benjamin wrote in 1924, “are particularly fond of
haunting any site where things are being visibly worked on. They are
irresistibly drawn by the detritus generated by building, gardening,
housework, tailoring or carpentry.”
afr | Among the blockchain cognoscenti, everyone is talking about Ethereum.
A
rival blockchain and virtual currency to bitcoin, Ethereum allows for
the programming of "smart contracts", or computer code which
facilitates or enforces a set of rules. Ethereum was first described by
the programmer Vitalik Buterin in late 2013; the first full public
version of the platform was released in February.
Commercial
lawyers are watching the arrival of Ethereum closely given the potential
for smart contracts in the future to disintermediate their
highly lucrative role in drafting and exchanging paper contracts. Smart
contracts are currently being used to digitise business rules, but may
soon move to codify legal agreements.
The innovation has been
made possible because Ethereum provides developers with a more liberal
"scripting language" than bitcoin. This is allowing companies to create
their own private blockchains and build applications. Already, apps
for music distribution, sports betting and a new type of financial
auditing are being tested.
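To make "computer code which facilitates or enforces a set of rules" concrete, here is a sketch of the kind of escrow rule a smart contract might encode. It is not Ethereum or Solidity code, just a plain-Python toy; on Ethereum the equivalent logic would be written in a contract language and executed by the network itself rather than trusted to either party.

```python
# Not Ethereum code -- a plain-Python sketch of the kind of rule a smart
# contract encodes: funds locked until a condition both parties agreed to
# in advance is met, with no intermediary deciding the outcome.
from dataclasses import dataclass

@dataclass
class Escrow:
    buyer: str
    seller: str
    amount: int
    delivered: bool = False
    paid_out: bool = False

    def confirm_delivery(self, caller: str):
        # Only the buyer can confirm delivery.
        if caller != self.buyer:
            raise PermissionError("only the buyer may confirm delivery")
        self.delivered = True

    def release(self):
        # Funds move to the seller only once the agreed condition holds.
        if not self.delivered or self.paid_out:
            raise RuntimeError("conditions for payout not met")
        self.paid_out = True
        return f"{self.amount} released to {self.seller}"

deal = Escrow(buyer="alice", seller="bob", amount=100)
deal.confirm_delivery("alice")
print(deal.release())
```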
Some of the world's largest technology companies, from Microsoft to
IBM, are lining up to work with Ethereum, while the R3 CEV banking
consortium has also been trialling its technology as it tests
blockchain-style applications for the banking industry including trading
commercial paper. Banks are interested in blockchain because
distributed ledgers can remove intermediaries and speed up transactions,
thereby reducing costs. But if banks move business to blockchains in
the future, financial services lawyers will need to begin re-drafting
into digital form the banking contracts that underpin the capital
markets.
The global director of IBM Blockchain Labs, Nitin Gaur,
who was in Sydney last week, says he is a "huge fan" of Ethereum,
pointing to its "rich ecosystem of developers". He predicts law to be
among the industries disrupted by the technology.
theguardian | It’s 40 years since Richard Dawkins suggested, in the opening words of The Selfish Gene,
that, were an alien to visit Earth, the question it would pose to judge
our intellectual maturity was: “Have they discovered evolution yet?” We
had, of course, by the grace of Charles Darwin and a century of
evolutionary biologists who had been trying to figure out how natural
selection actually worked. In 1976, The Selfish Gene became the
first real blockbuster popular science book, a poetic mark in the sand
to the public and scientists alike: this idea had to enter our thinking,
our research and our culture.
The idea was this: genes strive for immortality, and individuals,
families, and species are merely vehicles in that quest. The behaviour
of all living things is in service of their genes; hence, metaphorically,
they are selfish. Before this, it had been proposed that natural
selection was honing the behaviour of living things to promote the
continuance through time of the individual creature, or family, or group
or species. But in fact, Dawkins said, it was the gene itself that was
trying to survive, and it just so happened that the best way for it to
survive was in concert with other genes in the impermanent husk of an
individual.
This gene-centric view of evolution also began to explain one of the
oddities of life on Earth – the behaviour of social insects. What is the
point of a drone bee, doomed to remain childless and in the service of a
totalitarian queen? Suddenly it made sense that, with the gene itself
steering evolution, the fact that the drone shared its DNA with the
queen meant that its servitude guarantees not the individual’s survival,
but the endurance of the genes they share. Or as the Anglo-Indian
biologist JBS Haldane put it: “Would I lay down my life to save my
brother? No, but I would to save two brothers or eight cousins.”
These ideas were espoused by only a handful of scientists in the
middle decades of the 20th century – notably Bob Trivers, Bill Hamilton,
John Maynard Smith and George Williams. In The Selfish Gene, Dawkins
did not merely recapitulate them; he made an impassioned argument for
the reality of natural selection. Previous attempts to explain the
mechanics of evolution had been academic and rooted in maths. Dawkins
walked us through it in prose. Many great popular science books followed
– Stephen Hawking’s A Brief History of Time, Steven Pinker’s The Blank Slate, and, currently, The Vital Question by Nick Lane.
For many of us, The Selfish Gene was our first proper taste
of evolution. I don’t remember it being a controversial subject in my
youth. In fact, I don’t remember it being taught at all. Evolution,
Darwin and natural selection were largely absent from my secondary
education in the late 1980s. The national curriculum, introduced in the
UK in 1988, included some evolution, but before 1988 its presence in
schools was far from universal. As an aside, in my opinion the subject
is taught bafflingly minimally and late in the curriculum even today;
evolution by natural selection is crucial to every aspect of the living
world. In the words of the Russian scientist Theodosius Dobzhansky:
“Nothing in biology makes sense except in the light of evolution.”
themonkeytrap | I currently teach a class called Reality 101 at the University of Minnesota. It is a 15-week exploration of ‘the human ecosystem’ – what drives us, what powers us, and what we are doing. Only when viewed through such an ecological lens can ‘better’ choices be made by individuals, who in turn impact societies. Our situation cannot be described in an hour, but this was my latest and best attempt. The talk is 60% new relative to prior talks – I start with brief summaries of energy, economy, behavior, and environment, followed by a listing of 25 of the current ‘flawed assumptions’ underpinning modern human culture. I close with a list of 20 new ways of thinking about one’s future – for consideration, and possibly to work towards – for a young person alive this century. It is my opinion that we need more pro-social, pro-future players on the gameboard, whatever their beliefs and priorities might be. Two books should be finished this year, and I will post a note here about progress.
He ...