Monday, April 17, 2017

Syria Sarin Gas Attack Staged


IBTimes |  Theodore Postol, a professor emeritus at the Massachusetts Institute of Technology (MIT), issued a series of three reports in response to the White House's finding that Syrian President Bashar Al-Assad perpetrated the attack on 4 April.

He concluded that the US government's report does not provide any "concrete" evidence that Assad was responsible, adding it was more likely that the attack was perpetrated by players on the ground.

Postol said: "I have reviewed the [White House's] document carefully, and I believe it can be shown, without doubt, that the document does not provide any evidence whatsoever that the US government has concrete knowledge that the government of Syria was the source of the chemical attack in Khan Sheikhoun, Syria at roughly 6am to 7am on 4 April, 2017.

"In fact, a main piece of evidence that is cited in the document point to an attack that was executed by individuals on the ground, not from an aircraft, on the morning of 4 April.

"This conclusion is based on an assumption made by the White House when it cited the source of the sarin release and the photographs of that source. My own assessment is that the source was very likely tampered with or staged, so no serious conclusion could be made from the photographs cited by the White House."

The image Postol refers to shows a crater with a shell inside it, which is said to have contained the sarin gas.

His analysis of the shell suggests that it could not have been dropped from an airplane, as the damage to the casing is inconsistent with an aerial explosion. Instead, Postol said it was more likely that an explosive charge was laid on top of the sarin-filled shell and then detonated.

Trump Has Received and Will Obey his Marching Orders


strategic culture |  Donald Trump has reversed his national-security policy 180 degrees and is now focusing it on conquering Russia instead of on reducing the threat from jihadists. The reason for this drastic change is to win the support of the U.S. aristocracy, who had overwhelmingly favored Hillary Clinton during the Presidential contest, and who (along with their ‘news’ media) have been trying to portray Trump as «Putin’s fool» or even as «Putin’s Manchurian candidate», and thus as an illegitimate President or even a traitor who is beholden to ‘America’s enemy’ (which to them is Russia) for his having won the U.S. Presidency — which they had tried to block from happening.

Actually, even Republican billionaires generally preferred Hillary Clinton over Donald Trump, and almost all of them hate Putin, who insists upon Russia’s independence, which the U.S. aristocracy calls by all sorts of bad names. Any American who so much as questions the characterization of Russia as an ‘enemy’ nation is considered ‘unAmerican’, as in the days of communism and Joseph R. McCarthy, as if communism, the U.S.S.R., and its Warsaw Pact (which mirrored America’s NATO military alliance) still existed today, which they obviously don’t. The U.S. Establishment’s portrayal of current international reality is so bizarre that it can be believed only by fools, but enough such fools exist to enable that Establishment to do horrific things, such as the 2003 invasion of Iraq and the 2011 invasion of Libya, just to name two examples, which got rid of two national leaders who were friendly toward Russia.

After Trump ditched his National Security Advisor Mike Flynn (whom Obama had fired for not being sufficiently anti-Russian, but whom Trump then hired) and replaced him with the rabidly anti-Russian H.R. McMaster (whom the aristocracy’s people were recommending to Trump), Trump expected relief from the aristocracy’s intensifying campaign to impeach him or otherwise replace him with his clearly pro-aristocratic Vice President, Mike Pence. But the overthrow-Trump campaign continued even after McMaster was installed in Flynn’s place. Then, perhaps because replacing Flynn with McMaster failed to satisfy the aristocracy, Trump additionally ousted Stephen Bannon and simultaneously bombed Syrian government forces, and now the campaign to overthrow Trump seems finally to have subsided, at least a bit, at least for now.

Sunday, April 16, 2017

Six Main Arcs in Human Storytelling Identified by Artificial Intelligence


theatlantic |  “My prettiest contribution to my culture,” the writer Kurt Vonnegut mused in his 1981 autobiography Palm Sunday, “was a master’s thesis in anthropology which was rejected by the University of Chicago a long time ago.”

By then, he said, the thesis had long since vanished. (“It was rejected because it was so simple and looked like too much fun,” Vonnegut explained.) But he continued to carry the idea with him for many years after that, and spoke publicly about it more than once. It was, essentially, this: “There is no reason why the simple shapes of stories can’t be fed into computers. They are beautiful shapes.”

That explanation comes from a lecture he gave, and which you can still watch on YouTube, that involves Vonnegut mapping the narrative arc of popular storylines along a simple graph. The X-axis represents the chronology of the story, from beginning to end, while the Y-axis represents the experience of the protagonist, on a spectrum of ill fortune to good fortune. “This is an exercise in relativity, really,” Vonnegut explains. “The shape of the curve is what matters.”

The most interesting shape to him, it turned out, was the one that reflected the tale of Cinderella, of all stories. Vonnegut visualizes its arc as a staircase-like climb in good fortune representing the arrival of Cinderella’s fairy godmother, leading all the way to a high point at the ball, followed by a sudden plummet back to ill fortune at the stroke of midnight. Before too long, though, the Cinderella graph is marked by a sharp leap back to good fortune, what with the whole business of (spoiler alert) the glass slipper fitting and the happily ever after.

This may not seem like anything special, Vonnegut says—his actual words are, “it certainly looks like trash”—until he notices another well known story that shares this shape. “Those steps at the beginning look like the creation myth of virtually every society on earth. And then I saw that the stroke of midnight looked exactly like the unique creation myth in the Old Testament.” Cinderella’s curfew was, if you look at it on Vonnegut’s chart, a mirror-image downfall to Adam and Eve’s ejection from the Garden of Eden. “And then I saw the rise to bliss at the end was identical with the expectation of redemption as expressed in primitive Christianity. The tales were identical.”
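
Purely as an illustration of Vonnegut's point that "the simple shapes of stories" can be fed into computers, here is a minimal sketch of how one might trace such a fortune curve. The word lists and window size are made up for the example; this is not the method used in the study the headline refers to.

```python
# Illustrative sketch only: one crude way to "feed the shape of a story into a
# computer", along the lines Vonnegut describes. The tiny lexicon below is invented.

POSITIVE = {"love", "happy", "joy", "win", "kiss", "ball", "gold"}
NEGATIVE = {"death", "cry", "lost", "fear", "midnight", "rags", "alone"}

def story_arc(text, windows=20):
    """Split a story into equal chunks (the X-axis: beginning to end) and score
    each chunk by counting positive vs. negative words (the Y-axis: ill fortune
    to good fortune)."""
    words = text.lower().split()
    size = max(1, len(words) // windows)
    arc = []
    for i in range(0, len(words), size):
        chunk = words[i:i + size]
        score = sum(w in POSITIVE for w in chunk) - sum(w in NEGATIVE for w in chunk)
        arc.append(score)
    return arc  # plot this list; per Vonnegut, "the shape of the curve is what matters"
```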

Artificial Intelligence Will Disclose Cetacean Souls



Scientists have struggled to understand dolphin vocalizations, but new computer tools to both track dolphins and decode their complex vocalizations are now emerging. Dr. Denise Herzing has been studying Atlantic spotted dolphins, Stenella frontalis, in the Bahamas for over three decades. Her video and acoustic database encompasses a myriad of complex vocalizations and dolphin behavior. Dr. Thad Starner works on mining this dataset and decoding dolphin sounds, and has created a wearable underwater computer, CHAT (Cetacean Hearing and Telemetry), to help establish a bridge for communication between humans and dolphins. Starner and Herzing will present this cutting-edge work and recent results, including perspectives on the challenges of studying this aquatic society, and decoding their communication signals using the latest technology.

qz |  The possibility of talking to animals has tickled popular imaginations for years, and with good reason. Who wouldn’t want to live in a Dr. Dolittle world where we could understand what our pets and animal neighbors are saying?

Animal cognition researchers have also been fascinated by the topic. Their work typically focuses on isolating animal communication to see if language is uniquely human, or if it could have evolved in other species as well. One of their top candidates is an animal known to communicate with particularly high intelligence: dolphins.

Dolphins—like many animals including monkeys, birds, cats, and dogs—clearly do relay messages to one another. They emit sounds (paywall) in three broad categories: clicks, whistles, and more complex chirps used for echolocation (paywall), a technique they use to track prey and other objects by interpreting ricocheting sound waves. Researchers believe these sounds can help dolphins communicate: Whistles can serve as unique identifiers, similar to names, and can alert the pod to sources of food or danger.

Communication is most certainly a part of what helps these animals live in social pods. But proving that dolphins use language—the way that you’re reading this article, or how you might talk to your friends about it later—is a whole different kettle of fish.

Physical Basis for Morphogenesis: On Growth and Form


nature |  Still in print, On Growth and Form was more than a decade in the planning. Thompson would regularly tell colleagues and students — he taught at what is now the University of Dundee, hence the local media interest — about his big idea before he wrote it all down. In part, he was reacting against one of the biggest ideas in scientific history. Thompson used his book to argue that Charles Darwin’s natural selection was not the only major influence on the origin and development of species and their unique forms: “In general no organic forms exist save such as are in conformity with physical and mathematical laws.”

Biological response to physical forces remains a live topic for research. In a research paper, for example, researchers report how physical stresses generated at defects in the structures of epithelial cell layers cause excess cells to be extruded.

In a separate online publication (K. Kawaguchi et al. Nature http://dx.doi.org/10.1038/nature22321; 2017), other scientists show that topological defects have a role in cell dynamics, as a result of the balance of forces. In high-density cultures of neural progenitor cells, the direction in which cells travel around defects affects whether cells become more densely packed (leading to pile-ups) or spread out (leading to a cellular fast-lane where travel speeds up).

A Technology Feature investigates in depth the innovative methods developed to detect and measure forces generated by cells and proteins. Such techniques help researchers to understand how force is translated into biological function.

Thompson’s influence also flourishes in other active areas of interdisciplinary research. A research paper offers a mathematical explanation for the colour changes that appear in the scales of ocellated lizards (Timon lepidus) during development (also featured on this week’s cover). It suggests that the patterns are generated by a system called a hexagonal cellular automaton, and that such a discrete system can emerge from the continuous reaction-diffusion framework developed by mathematician Alan Turing to explain the distinctive patterning on animals, such as spots and stripes. (Some of the research findings are explored in detail in the News and Views section.) To complete the link to Thompson, Turing cited On Growth and Form in his original work on reaction-diffusion theory in living systems.

Finally, we have also prepared an online collection of research and comment from Nature and the Nature research journals in support of the centenary, some of which we have made freely available to view for one month.

Saturday, April 15, 2017

2017 Website@Most Important Lab at Harvard and Arguably the World?!?!





Hacking and Reprogramming Cells Like Computers


wired |  Cells are basically tiny computers: They take in inputs and produce outputs accordingly. If you chug a Frappuccino, your blood sugar spikes, and your pancreatic cells get the message. Output: more insulin.

But cellular computing is more than just a convenient metaphor. In the last couple of decades, biologists have been working to hack the cells’ algorithm in an effort to control their processes. They’ve upended nature’s role as life’s software engineer, incrementally editing a cell’s algorithm—its DNA—over generations. In a paper published today in Nature Biotechnology, researchers programmed human cells to obey 109 different sets of logical instructions. With further development, this could lead to cells capable of responding to specific directions or environmental cues in order to fight disease or manufacture important chemicals.

Their cells execute these instructions by using proteins called DNA recombinases, which cut, reshuffle, or fuse segments of DNA. These proteins recognize and target specific positions on a DNA strand—and the researchers figured out how to trigger their activity. Depending on whether the recombinase gets triggered, the cell may or may not produce the protein encoded in the DNA segment.

A cell could be programmed, for example, with a so-called NOT logic gate. This is one of the simplest logic instructions: Do NOT do something whenever you receive the trigger. This study’s authors used this function to create cells that light up on command. Biologist Wilson Wong of Boston University, who led the research, refers to these engineered cells as “genetic circuits.”
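
As a rough, hypothetical illustration of the NOT-gate behavior described above, the sketch below models a recombinase-controlled DNA segment in Python. It is not the authors' actual construct or code; the class and state names are invented, and real circuits are built from DNA parts, not software objects.

```python
# Minimal sketch of the recombinase NOT-gate logic described above.
# Purely illustrative: real genetic circuits are DNA constructs, not Python objects.

class RecombinaseNotGate:
    """Models a DNA segment whose output gene is expressed by default.
    When the trigger induces the recombinase, the recombinase excises
    (or inverts) the segment and expression stops: output = NOT trigger."""

    def __init__(self):
        self.segment_intact = True  # promoter/gene still in an expressible orientation

    def apply_trigger(self, trigger_present: bool):
        if trigger_present:
            # Recombinase is induced and cuts or flips the segment.
            self.segment_intact = False

    def output(self) -> bool:
        # Reporter (e.g. a fluorescent protein) is made only if the segment is intact.
        return self.segment_intact


for trigger in (False, True):
    gate = RecombinaseNotGate()
    gate.apply_trigger(trigger)
    print(f"trigger={trigger} -> reporter expressed: {gate.output()}")
# Expected: trigger=False -> True, trigger=True -> False (a NOT gate)
```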

A Programming Language For Living Cells?



MIT |  MIT biological engineers have created a programming language that allows them to rapidly design complex, DNA-encoded circuits that give new functions to living cells.

Using this language, anyone can write a program for the function they want, such as detecting and responding to certain environmental conditions. They can then generate a DNA sequence that will achieve it.

“It is literally a programming language for bacteria,” says Christopher Voigt, an MIT professor of biological engineering. “You use a text-based language, just like you’re programming a computer. Then you take that text and you compile it and it turns it into a DNA sequence that you put into the cell, and the circuit runs inside the cell.”

Voigt and colleagues at Boston University and the National Institute of Standards and Technology have used this language, which they describe in the April 1 issue of Science, to build circuits that can detect up to three inputs and respond in different ways. Future applications for this kind of programming include designing bacterial cells that can produce a cancer drug when they detect a tumor, or creating yeast cells that can halt their own fermentation process if too many toxic byproducts build up.

The researchers plan to make the user design interface available on the Web.
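
The article does not include the language itself, but the workflow it describes, writing a text-based logic specification and compiling it into a DNA sequence, can be sketched loosely as follows. Everything here, from the one-line gate syntax to the placeholder "parts", is an assumption for illustration, not the actual MIT tool.

```python
# Rough sketch of the "text in, DNA out" workflow described above.
# The circuit syntax, gate names, and DNA parts below are all hypothetical.

PART_LIBRARY = {
    "NOT": "ttgacaGCTAGCtataat",   # placeholder sequences, not real genetic parts
    "NOR": "ttgacaAAGCTTtataat",
    "OUTPUT_GFP": "atgGGTAAAGGCGAA",
}

def compile_circuit(spec: str) -> str:
    """'Compile' a one-gate-per-line text spec, e.g.
         NOT sensor_A
         OUTPUT_GFP
       into a concatenated DNA sequence by looking up each gate's part."""
    dna = []
    for line in spec.strip().splitlines():
        gate = line.split()[0].upper()
        if gate not in PART_LIBRARY:
            raise ValueError(f"no DNA part for gate {gate!r}")
        dna.append(PART_LIBRARY[gate])
    return "".join(dna)

print(compile_circuit("NOT sensor_A\nOUTPUT_GFP"))
```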

Friday, April 14, 2017

Open Thread: More Human Than Human


Brain item -- AI processing problem...??
would require AI to have the listener's entire life history stored in its memory to determine proper context....??
Your brain fills gaps in your hearing without you realising
No, BD. Not an AI processing problem, just an illustration of the mechanical and necessarily error-prone nature of both language and auditory language processing. It's not a Voight-Kampff test, and context doesn't require a life history. In fact, with the benefit of big data and centralized cloud storage and processing of hundreds of thousands of utterances and their associated meanings, the probability of an AI making either the sensory or grammatical error is greatly reduced.

...Here's a no-nonsense AI item: Turns out AI is not sufficiently stupid to allow PC liberals to shove ridiculous egalitarian concepts down its throat. AI just looks at the *FACTS* and calls it like it sees it....
Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say
No, BD. Unfortunately, you are still trapped in the realm of language, and language constructs your reality. Your language reflects your tendencies - which are racist - and so what FRANK is reflecting back at you is not the truth, merely the truth about you.  Fist tap Big Don.

Thursday, April 13, 2017

The Dark Secret at the Heart of Artificial Intelligence



technologyreview |   No one really knows how the most advanced algorithms do what they do. That could be a problem.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
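
To make that inversion concrete, here is a tiny, hedged sketch of the learning-from-examples idea: rather than hand-coding the rule, the program adjusts its own parameters until its outputs match the desired ones. The toy perceptron below is, of course, nothing like the deep networks discussed later in the piece.

```python
# Minimal illustration of the inversion described above: instead of writing the rule,
# we give the program example data plus desired outputs and let it find the rule.
# Toy data and a one-weight-per-feature perceptron; nothing like a deep network.

examples = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]  # learn logical AND
weights, bias = [0.0, 0.0], 0.0

for _ in range(20):                       # a few passes over the example data
    for features, desired in examples:
        predicted = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
        error = desired - predicted
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

print(weights, bias)                      # the "algorithm" the program wrote for itself
print([1 if sum(w * x for w, x in zip(weights, f)) + bias > 0 else 0
       for f, _ in examples])             # reproduces the desired outputs: [1, 0, 0, 0]
```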

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

Is Artificial Intelligence a Threat to Christianity?


theatlantic |  While most theologians aren’t paying it much attention, some technologists are convinced that artificial intelligence is on an inevitable path toward autonomy. How far away this may be depends on whom you ask, but the trajectory raises some fundamental questions for Christianity—as well as religion broadly conceived, though for this article I’m going to stick to the faith tradition I know best. In fact, AI may be the greatest threat to Christian theology since Charles Darwin’s On the Origin of Species.

For decades, artificial intelligence has been advancing at breakneck speed. Today, computers can fly planes, interpret X-rays, and sift through forensic evidence; algorithms can paint masterpiece artworks and compose symphonies in the style of Bach. Google is developing “artificial moral reasoning” so that its driverless cars can make decisions about potential accidents.

“AI is already here, it’s real, it’s quickening,” says Kevin Kelly, a co-founder of Wired magazine and the author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. “I think the formula for the next 10,000 start-ups is to take something that already exists and add AI to it.”

Will Artificial Intelligence Redefine Human Intelligence?


theatlantic |  As machines advance and as programs learn to do things that were once only accomplished by people, what will it mean to be human?

Over time, artificial intelligence will likely prove that carving out any realm of behavior as unique to humans—like language, a classic example—is ultimately wrong. If Tinsel and Beau were still around today, they might be powered by a digital assistant, after all. In fact, it'd be a little weird if they weren't, wouldn't it? Consider the fact that Disney is exploring the use of interactive humanoid robots at its theme parks, according to a patent filing last week.

Technological history proves that what seems novel today can quickly become the norm, until one day you look back surprised at the memory of a job done by a human rather than a machine. By teaching machines what we know, we are training them to be like us. This is good for humanity in so many ways. But we may still occasionally long for the days before machines could imagine the future alongside us.

Wednesday, April 12, 2017

Why is the CIA WaPo Giving Space to Assange to Make His Case?


WaPo |  On his last night in office, President Dwight D. Eisenhower delivered a powerful farewell speech to the nation — words so important that he’d spent a year and a half preparing them. “Ike” famously warned the nation to “guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.” 

Much of Eisenhower’s speech could form part of the mission statement of WikiLeaks today. We publish truths regarding overreaches and abuses conducted in secret by the powerful.

Our most recent disclosures describe the CIA’s multibillion-dollar cyberwarfare program, in which the agency created dangerous cyberweapons, targeted private companies’ consumer products and then lost control of its cyber-arsenal. Our source(s) said they hoped to initiate a principled public debate about the “security, creation, use, proliferation and democratic control of cyberweapons.”

The truths we publish are inconvenient for those who seek to avoid one of the magnificent hallmarks of American life — public debate. Governments assert that WikiLeaks’ reporting harms security. Some claim that publishing facts about military and national security malfeasance is a greater problem than the malfeasance itself. Yet, as Eisenhower emphasized, “Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.” 

Quite simply, our motive is identical to that claimed by the New York Times and The Post — to publish newsworthy content. Consistent with the U.S. Constitution, we publish material that we can confirm to be true irrespective of whether sources came by that truth legally or have the right to release it to the media. And we strive to mitigate legitimate concerns, for example by using redaction to protect the identities of at-risk intelligence agents.





The Blockchain and Us


blockchain-documentary |  What is the Blockchain?

blockchain, NOUN /ˈblɒktʃeɪn/
A digital ledger in which transactions made in bitcoin or another cryptocurrency are recorded chronologically and publicly.
From en.oxforddictionaries.com/definition/blockchain

A mysterious white paper (Nakamoto, Satoshi, 2008, “Bitcoin: A Peer-to-Peer Electronic Cash System”) introduced the Bitcoin blockchain, a combination of existing technologies that ensures the integrity of data without a trusted party. It consists of a ledger that can’t be changed and a consensus algorithm—a way for groups to agree. Unlike existing databases in banks and other institutions, a network of users updates and supports the blockchain—a system somewhat similar to Wikipedia, which users around the globe maintain and double-check. The cryptocurrency Bitcoin is the first use case of the blockchain, but much more seems to be possible.
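
As a loose illustration of the "ledger that can't be changed" idea, here is a minimal hash-chained ledger sketch. It leaves out the consensus algorithm and everything else that makes a real blockchain work; it only shows why tampering with an old entry becomes detectable.

```python
import hashlib
import json

# Minimal hash-chained ledger: each block commits to the previous block's hash,
# so altering any earlier transaction breaks every hash that follows it.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, ["alice -> bob: 5"])
add_block(chain, ["bob -> carol: 2"])
print(verify(chain))                               # True
chain[0]["transactions"][0] = "alice -> bob: 500"  # tamper with history
print(verify(chain))                               # False: the chain no longer checks out
```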

The Next Generation of the Internet
The first 40 years of the Internet brought e-mail, social media, mobile applications, online shopping, Big Data, Open Data, cloud computing, and the Internet of Things. Information technology is at the heart of everything today—good and bad. Despite advances in privacy, security, and inclusion, one thing is still missing from the Internet: Trust. Enter the blockchain.

The Blockchain and Us: The Project
When the Wright brothers invented the airplane in 1903, it was hard to imagine there would be over 500,000 people traveling in the air at any point in time today. In 2008, Satoshi Nakamoto invented Bitcoin and the blockchain. For the first time in history, his invention made it possible to send money around the globe without banks, governments or any other intermediaries. Satoshi is a mystery character, and just like the Wright brothers, he solved an unsolvable problem. The concept of the blockchain isn’t very intuitive. But still, many people believe it is a game changer. Despite its mysterious beginnings, the blockchain might be the airplane of our time.

Economist and filmmaker Manuel Stagars portrays this exciting technology in interviews with software developers, cryptologists, researchers, entrepreneurs, consultants, VCs, authors, politicians, and futurists from the United States, Canada, Switzerland, the UK, and Australia.
How can the blockchain benefit the economies of nations? How will it change society? What does this mean for each of us? The Blockchain and Us is no explainer video of the technology. It gives a view on the topic far from hype, makes it accessible and starts a conversation. For a deep dive, see all full-length interviews from the film here.

The Content Of Sci-Hub And Its Usage


biorxiv |  Despite the growth of Open Access, illegally circumventing paywalls to access scholarly publications is becoming a more mainstream phenomenon. The web service Sci-Hub is amongst the biggest facilitators of this, offering free access to around 62 million publications. So far it is not well studied how and why its users are accessing publications through Sci-Hub. By utilizing the recently released corpus of Sci-Hub and comparing it to the data of ~28 million downloads done through the service, this study tries to address some of these questions. The comparative analysis shows that both the usage and the complete corpus are largely made up of recently published articles, with users disproportionately favoring newer articles and 35% of downloaded articles being published after 2013. These results hint that embargo periods before publications become Open Access are frequently circumvented using Guerilla Open Access approaches like Sci-Hub. On a journal level, the downloads show a bias towards some scholarly disciplines, especially Chemistry, suggesting increased barriers to access for these. Comparing the use and corpus on a publisher level, it becomes clear that only 11% of publishers are highly requested in comparison to the baseline frequency, while 45% of all publishers are significantly less accessed than expected. Despite this, the oligopoly of publishers is even more remarkable on the level of content consumption, with 80% of all downloads being published through only 9 publishers. All of this suggests that Sci-Hub is used by different populations and for a number of different reasons, and that there is still a lack of access to the published scientific record. A further analysis of these openly available data resources will undoubtedly be valuable for the investigation of academic publishing.
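
The comparison the abstract describes, matching download logs against the full corpus, boils down to grouping and counting. The sketch below shows roughly what such an analysis might look like; the file and column names are assumptions, not the paper's actual data layout or code.

```python
import pandas as pd

# Illustrative only: hypothetical file and column names, not the paper's actual schema.
downloads = pd.read_csv("scihub_downloads.csv")   # one row per downloaded article
corpus = pd.read_csv("scihub_corpus.csv")         # one row per article in the corpus

# Share of downloaded articles published after 2013 (the abstract reports ~35%).
recent_share = (downloads["year"] > 2013).mean()

# Publisher concentration in downloads vs. the corpus baseline.
dl_by_pub = downloads["publisher"].value_counts(normalize=True)
corpus_by_pub = corpus["publisher"].value_counts(normalize=True)
over_represented = (dl_by_pub / corpus_by_pub).sort_values(ascending=False)

print(f"share of downloads published after 2013: {recent_share:.1%}")
print(over_represented.head(10))   # publishers requested far above their baseline share
```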

ISP Data Pollution: Hiding the Needle in a Pile of Needles?


theatlantic |  The basic idea is simple. Internet providers want to know as much as possible about your browsing habits in order to sell a detailed profile of you to advertisers. If the data the provider gathers from your home network is full of confusing, random online activity, in addition to your actual web-browsing history, it’s harder to make any inferences about you based on your data output.

Steven Smith, a senior staff member at MIT’s Lincoln Laboratory, cooked up a data-pollution program for his own family last month, after the Senate passed the privacy bill that would later become law. He uploaded the code for the project, which is unaffiliated with his employer, to GitHub. For a week and a half, his program has been pumping fake web traffic out of his home network, in an effort to mask his family’s real web activity.

Smith’s algorithm begins by stringing together a few words from an open-source dictionary and googling them. It grabs the resulting links in a random order, and saves them in a database for later use. The program also follows the Google results, capturing the links that appear on those pages, and then follows those links, and so on. The table of URLs grows quickly, but it’s capped around 100,000, to keep the computer’s memory from overloading.

A program called PhantomJS, which mimics a person using a web browser, regularly downloads data from the URLs that have been captured—minus the images, to avoid downloading unsavory or infected files. Smith set his program to download a page about every five seconds. Over the course of a month, that’s enough data to max out the 50 gigabytes of data that Smith buys from his internet service provider.

Although it relies heavily on randomness, the program tries to emulate user behavior in certain ways. Smith programmed it to visit no more than 100 domains a day, and to occasionally visit a URL twice—simulating a user reload. The pace of browsing slows down at night, and speeds up again during the day. And as PhantomJS roams around the internet, it changes its camouflage by switching between different user agents, which are identifiers that announce what type of browser a visitor is using. By doing so, Smith hopes to create the illusion of multiple users browsing on his network using different devices and software. “I’m basically using common sense and intuition,” Smith said.
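
Smith's real project is on GitHub and uses PhantomJS; the sketch below is not his code, just a compressed illustration of the behavior the article describes: search random dictionary words, harvest the resulting links into a capped table, and fetch pages at a throttled, roughly human pace under rotating user agents. All names, paths, and parameters here are assumptions.

```python
import random
import re
import time
import requests

# Compressed illustration of the behavior described above. Smith's real project
# uses PhantomJS and is considerably more careful than this sketch.

WORDS = open("/usr/share/dict/words").read().split()  # assumes a system word list exists
MAX_URLS = 100_000                                     # cap the URL table, per the article

USER_AGENTS = [  # rotate identifiers to look like several different devices/browsers
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12)",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X)",
]

url_table = []

def seed_from_search():
    """Search a few random dictionary words and harvest links from the results page."""
    query = "+".join(random.sample(WORDS, 3))
    html = requests.get("https://www.google.com/search?q=" + query,
                        headers={"User-Agent": random.choice(USER_AGENTS)}).text
    links = re.findall(r'href="(https?://[^"]+)"', html)
    random.shuffle(links)
    url_table.extend(links[: MAX_URLS - len(url_table)])

def browse_forever():
    """Visit random saved URLs, roughly one page every five seconds, until killed."""
    while True:
        if not url_table:
            seed_from_search()
        url = random.choice(url_table)
        try:
            requests.get(url, headers={"User-Agent": random.choice(USER_AGENTS)}, timeout=10)
        except requests.RequestException:
            pass  # dead links are fine; the point is plausible-looking noise
        time.sleep(random.uniform(3, 8))

if __name__ == "__main__":
    browse_forever()
```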

Tuesday, April 11, 2017

Chasing Perpetual Motion in the Gig Economy


NYTimes |  The promises Silicon Valley makes about the gig economy can sound appealing. Its digital technology lets workers become entrepreneurs, we are told, freed from the drudgery of 9-to-5 jobs. Students, parents and others can make extra cash in their free time while pursuing their passions, maybe starting a thriving small business.

In reality, there is no utopia at companies like Uber, Lyft, Instacart and Handy, whose workers are often manipulated into working long hours for low wages while continually chasing the next ride or task. These companies have discovered they can harness advances in software and behavioral science in the service of old-fashioned worker exploitation, according to a growing body of evidence, because these workers lack the basic protections of American law.

A recent story in The Times by Noam Scheiber vividly described how Uber and other companies use tactics developed by the video game industry to keep drivers on the road when they would prefer to call it a day, raising company revenue while lowering drivers’ per-hour earnings. One Florida driver told The Times he earned less than $20,000 a year before expenses like gas and maintenance. In New York City, an Uber drivers group affiliated with the machinists union said that more than one-fifth of its members earn less than $30,000 before expenses.

Gig economy workers tend to be poorer and are more likely to be minorities than the population at large, a survey by the Pew Research Center found last year. Compared with the population as a whole, almost twice as many of them earned under $30,000 a year, and 40 percent were black or Hispanic, compared with 27 percent of all American adults. Most said the money they earned from online platforms was essential or important to their families.

Since workers for most gig economy companies are considered independent contractors, not employees, they do not qualify for basic protections like overtime pay and minimum wages. This helped Uber, which started in 2009, quickly grow to 700,000 active drivers in the United States, nearly three times the number of taxi drivers and chauffeurs in the country in 2014.

Student Debt Bubble Ruins Lives While Sucking Life Out of the Economy


nakedcapitalism |  The Financial Times has a generally good update on the state of the student debt bubble in the US. The article is interesting not just for what it says but also for what goes unsaid. I’ll recap its main points with additional commentary. Note that many of the underlying issues will be familiar to NC readers, but it is nevertheless useful to stay current.

Access to student debt keeps inflating the cost of education. This may seem obvious but it can’t be said often enough. Per the article:

While the headline consumer price index is 2.7 per cent, between 2016 and 2017 published tuition and fee prices rose by 9 per cent at four-year state institutions, and 13 per cent at posher private colleges.

It wasn’t all that long ago that the cost of a year at an Ivy League college was $50,000. Author Rana Foroohar was warned by high school counselors that the price tag for her daughter to attend one of them or a liberal arts college would be around $72,000 a year.

Spending increases are not going into improving education. As we’ve pointed out before, adjuncts are being squeezed into penury while the adminisphere bloat continues, as MBAs have swarmed in like locusts. Another waste of money is over-investment in plant. Again from the story:

A large chunk of the hike was due to schools hiring more administrators (who “brand build” and recruit wealthy donors) and building expensive facilities designed to lure wealthier, full-fee-paying students. This not only leads to excess borrowing on the part of universities — a number of them are caught up in dicey bond deals like the sort that sunk the city of Detroit — but higher tuition for students.

And there is a secondary effect. As education costs rise, students are becoming more mercenary in their choices, and not in a good way. This is another manifestation of what John Kay calls obliquity: in a complex system, trying to map a direct path will fail because it’s impossible to map the terrain well enough to identify one. Thus naive direct paths like “maximize shareholder value” do less well at achieving that objective than richer, more complicated goals.

The higher ed version of this dynamic is “I am going to school to get a well-paid job,” with the following results, per an FT reader:

BazHurl
After a career in equities, having graduated the Dreamy Spires with significant not silly debt, I had the pleasure of interviewing lots of the best and brightest graduates from European and US universities. Finance was attracting far more than its deserved share of the intellectual pie in the 90’s and Noughties in particular; so at times it was distressing to meet outrageously talented young men and women wanting to genuflect at the altar of the $, instead of building the Flux Capacitor. But the greater take-away was how mediocre and homogenous most of the grads were becoming. It seemed the longer they had studied and deferred entry into the Great Unwashed, the more difficult it was to get anything original or genuine from them. Piles and piles of CV’s of the same guys and gals: straight A’s since emerging into the world, polyglots, founders of every financial and charitable university society you could dream up … but could they honestly answer a simple question like “Fidelity or Blackrock – Who has robbed widows and orphans of more?”. Hardly. In short, few of them qualified as the sort of person you would willingly invite to sit next to you for fifteen hours a day, doing battle with pesky clients and triumphing over greedy competitors. All these once-promising 22 to 24 year old’s had somehow been hard-wired by the same robot and worse, all were entitled. Probably fair enough as they had excelled at everything that had been asked of them up until meeting my colleagues and I on the trading floors. Contrast this to the very different experience of meeting visiting sixth formers from a variety of secondary schools that used to tour the bank and with some gentle prodding, light up the Q&A sessions at tour’s end, fizzing with enthusiasm and desire. Now THESE kids I would hire ahead of the blue-chipped grads, most days. They were raw material that could be worked with and shaped into weapons. It was patently clear that University was no longer adding the expected value to these candidates and in fact was becoming quite the reverse. 

And for many grads, an investment in higher education now has a negative return on equity. A 2014 Economist article points out that the widely cited studies of whether college is worth the cost or not omit key factors that skew their results in favor of paying for higher education.

Navient: Student Loans Designed to Fail


NYTimes |  Ashley Hardin dreamed of being a professional photographer — glamorous shoots, perhaps some exotic travel. So in 2006, she enrolled in the Brooks Institute of Photography and borrowed more than $150,000 to pay for what the school described as a pathway into an industry clamoring for its graduates.

“Brooks was advertised as the most prestigious photography school on the West Coast,” Ms. Hardin said. “I wanted to learn from the best of the best.”

Ms. Hardin did not realize that she had taken out high-risk private loans in pursuit of a low-paying career. But her lender, SLM Corporation, better known as Sallie Mae, knew all of that, government lawyers say — and made the loans anyway.

In recent months, the student loan giant Navient, which was spun off from Sallie Mae in 2014 and retained nearly all of the company’s loan portfolio, has come under fire for aggressive and sloppy loan collection practices, which led to a set of government lawsuits filed in January. But those accusations have overshadowed broader claims, detailed in two state lawsuits filed by the attorneys general in Illinois and Washington, that Sallie Mae engaged in predatory lending, extending billions of dollars in private loans to students like Ms. Hardin that never should have been made in the first place.

“These loans were designed to fail,” said Shannon Smith, chief of the consumer protection division at the Washington State attorney general’s office.

New details unsealed last month in the state lawsuits against Navient shed light on how Sallie Mae used private subprime loans — some of which it expected to default at rates as high as 92 percent — as a tool to build its business relationships with colleges and universities across the country. From the outset, the lender knew that many borrowers would be unable to repay, government lawyers say, but it still made the loans, ensnaring students in debt traps that have dogged them for more than a decade.

While these risky loans were a bad deal for students, they were a boon for Sallie Mae. The private loans were — as Sallie Mae itself put it — a “baited hook” that the lender used to reel in more federally guaranteed loans, according to an internal strategy memo cited in the Illinois lawsuit.

The attorneys general in Illinois and Washington — backed by a coalition of those in 27 other states, who participated in a three-year investigation of student lending abuses — want those private loans forgiven.

Monday, April 10, 2017

Jeff Sessions Will Reinstate the War on Black Men Drugs


WaPo  |  Cook and Sessions have also fought the winds of change on Capitol Hill, where a bipartisan group of lawmakers recently tried but failed to pass the first significant bill on criminal justice reform in decades.

The legislation, which had 37 sponsors in the Senate, including Sen. Charles E. Grassley (R-Iowa) and Mike Lee (R-Utah), and 79 members of the House, would have reduced some of the long mandatory minimum sentences for gun and drug crimes. It also would have given judges more flexibility in drug sentencing and made retroactive the law that reduced the large disparity between sentencing for crack cocaine and powder cocaine.

The bill, introduced in 2015, had support from outside groups as diverse as the Koch brothers and the NAACP. House Speaker Paul D. Ryan (R-Wis.) supported it as well. The path to passage seemed clear.

But then people such as Sessions and Cook spoke up. The longtime Republican senator from Alabama became a leading opponent, citing the spike in crime in several cities.

“Violent crime and murders have increased across the country at almost alarming rates in some areas. Drug use and overdoses are occurring and dramatically increasing,” said Sessions, one of only five members of the Senate Judiciary Committee who voted against the legislation. “It is against this backdrop that we are considering a bill . . . to cut prison sentences for drug traffickers and even other violent criminals, including those currently in federal prison.”

Cook testified that it was the “wrong time to weaken the last tools available to federal prosecutors and law enforcement agents.”

After Republican lawmakers became nervous about passing legislation that might seem soft on crime, Senate Majority Leader Mitch McConnell (R-Ky.) declined to even bring the bill to the floor for a vote.

“Sessions was the main reason that bill didn’t pass,” said Inimai M. Chettiar, the director of the Justice Program at the Brennan Center for Justice. “He came in at the last minute and really torpedoed the bipartisan effort.”

Now that he is attorney general, Sessions has signaled a new direction. As his first step, Sessions told his prosecutors in a memo last month to begin using “every tool we have” — language that evoked the strategy from the drug war of loading up charges to lengthen sentences.

And he quickly appointed Cook to be a senior official on the attorney general’s task force on crime reduction and public safety, which was created following a Trump executive order to address what the president has called “American carnage.”

“If there was a flickering candle of hope that remained for sentencing reform, Cook’s appointment was a fire hose,” said Ring, president of FAMM. “There simply aren’t enough backhoes to build all the prisons it would take to realize Steve Cook’s vision for America.”

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?
