Monday, September 18, 2017

The Promise and Peril of Immersive Technologies


weforum |  The best place from which to draw inspiration for how immersive technologies may be regulated is the set of regulatory frameworks being put into effect for traditional digital technology today. In the European Union, the General Data Protection Regulation (GDPR) will come into force in 2018. Not only does the law necessitate unambiguous consent for data collection, it also compels companies to erase individual data on request, with the threat of a fine of up to 4% of their global annual turnover for breaches. Furthermore, enshrined in the regulation is the notion of ‘data portability’, which allows consumers to take their data across platforms – an incentive for an innovative start-up to compete with the biggest players. We may see similar regulatory norms for immersive technologies develop as well.

Providing users with sovereignty of personal data
Analysis shows that the major VR companies already use cookies to store data, while also collecting information on location, browser and device type, and IP address. Furthermore, communication with other users in VR environments is being stored, and aggregated data is shared with third parties and used to customize products for marketing purposes.

Concern over these methods of personal data collection has led to the introduction of temporary solutions that provide a buffer between individuals and companies. For example, the Electronic Frontier Foundation’s ‘Privacy Badger’ is a browser extension that automatically blocks hidden third-party trackers and allows users to customize and control the amount of data they share with online content providers. A similar solution that returns control of personal data should be developed for immersive technologies. At present, only blunt instruments are available to individuals uncomfortable with data collection but keen to explore AR/VR: using ‘offline modes’ or using separate profiles for new devices.
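
To make the idea concrete, here is a minimal, purely illustrative sketch of third-party request filtering, the general mechanism behind tools of this kind. It is not Privacy Badger’s actual algorithm (the real extension learns trackers heuristically from behaviour observed across sites rather than from a fixed list), and the domains and blocklist below are hypothetical:

```python
# Toy illustration of third-party tracker blocking. Not Privacy Badger's
# actual heuristics; the blocklist and domains are hypothetical examples.
from urllib.parse import urlparse

KNOWN_TRACKERS = {"tracker.example", "ads.example"}  # hypothetical blocklist

def should_block(page_url: str, request_url: str) -> bool:
    """Block a request that goes to a third-party domain on the tracker list."""
    page_host = urlparse(page_url).hostname or ""
    request_host = urlparse(request_url).hostname or ""
    is_third_party = request_host != page_host and not request_host.endswith("." + page_host)
    return is_third_party and request_host in KNOWN_TRACKERS

# A VR storefront page embedding a script from an ad-tracking domain:
print(should_block("https://vrstore.example/shop",
                   "https://tracker.example/pixel.js"))  # True
```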

Managing consumption
Short-term measures also exist to address overuse, in the form of stopping mechanisms. Pop-up warnings that appear once healthy usage limits are approached or exceeded are reportedly supported by 71% of young people in the UK. Services like unGlue allow parents to place filters on the types of content their children are exposed to, as well as time limits on usage across apps.
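
As a rough illustration of how such a stopping mechanism might work, the sketch below checks accumulated screen time against a daily limit and returns a warning as the limit is approached or exceeded. The two-hour limit and 90% warning threshold are assumptions chosen for the example, not figures taken from unGlue or any other product:

```python
# Minimal sketch of a usage "stopping mechanism". The limit and threshold
# are illustrative assumptions, not values from any real parental-control app.
from datetime import timedelta
from typing import Optional

DAILY_LIMIT = timedelta(hours=2)
WARNING_THRESHOLD = 0.9  # warn once 90% of the daily limit is used

def usage_warning(time_used_today: timedelta) -> Optional[str]:
    if time_used_today >= DAILY_LIMIT:
        return "Daily limit reached - time to take a break."
    if time_used_today >= DAILY_LIMIT * WARNING_THRESHOLD:
        return "You are approaching today's usage limit."
    return None

print(usage_warning(timedelta(hours=1, minutes=55)))  # approaching-limit warning
```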

All of these could be transferred to immersive technologies, and they are complementary fixes to actual regulation, such as South Korea’s Shutdown Law. This prevents children under the age of 16 from playing computer games between midnight and 6am. The policy is enforceable because it ties personal details – including date of birth – to a citizen’s resident registration number, which is required to create accounts for online services. These solutions are not infallible: one could easily imagine an enterprising child ‘borrowing’ an adult’s device after hours to work around the restrictions. Further study is certainly needed, but we believe that long-term solutions may lie in better design.

Rethinking success metrics for digital technology
As businesses develop applications using immersive technologies, they should transition from using metrics that measure just the amount of user engagement to metrics that also take into account user satisfaction, fulfilment and enhancement of well-being. Alternative metrics could include a net promoter score for software, which would indicate how strongly users – or perhaps even regulators – recommend the service to their friends based on their level of fulfilment or satisfaction with a service.
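
For reference, the standard net promoter score is simple arithmetic: respondents rate the service on a 0–10 scale, those scoring 9–10 count as promoters, those scoring 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch, with made-up ratings standing in for measured fulfilment or satisfaction:

```python
# Standard NPS arithmetic applied to hypothetical 0-10 fulfilment ratings.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

sample = [10, 9, 8, 7, 6, 10, 3, 9, 5, 9]  # made-up user ratings
print(net_promoter_score(sample))  # 20.0 -> more promoters than detractors
```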

The real challenge, however, is to find measures that align with business policy and user objectives. As Tristan Harris, Founder of Time Well Spent, argues: “We have to come face-to-face with the current misalignment so we can start to generate solutions.” There are instances where improvements to user experience go hand-in-hand with business opportunities. Subscription-based services are one such example: YouTube Red eliminates advertisements for paying users, as does Spotify Premium. These are examples where users can pay to enjoy advertising-free experiences, at no cost to the content developers, since they receive revenue in the form of paid subscriptions.

More work remains if immersive technologies are to enable happier, more fulfilling interactions with content and media. This will largely depend on designing technology that puts the user at the centre of its value proposition.

This is part of a series of articles related to the disruptive effects of several technologies (virtual/augmented reality, artificial intelligence and blockchain) on the creative economy.


Virtual Reality Health Risks...,


medium |  Two decades ago, our research group made international headlines when we published research showing that virtual reality systems could damage people’s health.

Our demonstration of side-effects was not unique — many research groups were showing that virtual reality could cause health problems. The reason our work was newsworthy was that we showed there were fundamental problems that needed to be tackled when designing virtual reality systems — and that these problems needed engineering solutions tailored for the human user.

In other words, it was not enough to keep producing ever faster computers and higher definition displays — a fundamental change in the way systems were designed was required.

So why do virtual reality systems need a new approach? The answer to this question lies in the very definition of how virtual reality differs from how we traditionally use a computer.

Natural human behaviour is based on responses elicited by information detected by a person’s sensory systems. For example, rays of light bouncing off a shiny red apple can indicate that there’s a good source of food hanging on a tree.

A person can then use the information to guide their hand movements and pick the apple from the tree. This use of ‘perception’ to guide ‘motor’ actions defines a feedback loop that underpins all of human behaviour. The goal of virtual reality systems is to mimic the information that humans normally use to guide their actions, so that humans can interact with computer generated objects in a natural way.

The problems come when the normal relationship between the perceptual information and the corresponding action is disrupted. One way of thinking about such disruption is that a mismatch between perception and action causes ‘surprise’. It turns out that surprise is really important for human learning and the human brain appears to be engineered to minimise surprise.

This means that the challenge for the designers of virtual reality is that they must create systems that minimise the surprise experienced by the user when using computer generated information to control their actions.

Of course, one of the advantages of virtual reality is that the computer can create new and wonderful worlds. For example, a completely novel fruit — perhaps an elppa — could be shown hanging from a virtual tree. The elppa might have a completely different texture and appearance to any other previously encountered fruit — but it’s important that the information used to specify the location and size of the elppa allows the virtual reality user to guide their hand to the virtual object in a normal way.

If there is a mismatch between the visual information and the hand movements then ‘surprise’ will result, and the human brain will need to adapt if future interactions between vision and action are to maintain their accuracy. The issue is that the process of adaptation may cause difficulties — and these difficulties might be particularly problematic for children as their brains are not fully developed. 

This issue affects all forms of information presented within a virtual world (so hearing and touch as well as vision), and all of the different motor systems (so postural control as well as arm movement systems). One good example of the problems that can arise can be seen through the way our eyes react to movement.

In 1993, we showed that virtual reality systems had a fundamental design flaw when they attempted to show three dimensional visual information. This is because the systems produce a mismatch between where the eyes need to focus and where the eyes need to point. In everyday life, if we change our focus from something close to something far away our eyes will need to change focus and alter where they are pointing.

The change in focus is necessary to prevent blur and the change in eye direction is necessary to stop double images. In reality, the changes in focus and direction are physically linked (a change in fixation distance causes a change both in the images and in where they fall at the back of the eyes).
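
The geometry behind this vergence-accommodation conflict is easy to sketch: the angle through which the eyes converge is set by the distance of the fixated object, while a headset’s optics typically hold focus at one fixed distance. The numbers below (a 63 mm inter-pupillary distance, a 2 m focal plane) are assumptions chosen for illustration, not the specifications of any particular system:

```python
# Illustrative sketch of the vergence-accommodation mismatch. The IPD and
# fixed focal distance are assumed example values, not real headset specs.
import math

IPD_M = 0.063           # assumed inter-pupillary distance (~63 mm)
DISPLAY_FOCUS_M = 2.0   # assumed fixed optical focal distance of the display

def vergence_angle_deg(distance_m: float) -> float:
    """Angle through which the eyes converge to fixate an object at distance_m."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

def focus_mismatch_dioptres(virtual_distance_m: float) -> float:
    """Gap between where the eyes must point (the virtual object) and where
    they must focus (the fixed display plane), in dioptres."""
    return abs(1 / virtual_distance_m - 1 / DISPLAY_FOCUS_M)

# A virtual object rendered 0.4 m away: the eyes converge by about 9 degrees...
print(round(vergence_angle_deg(0.4), 1))       # 9.0
# ...yet must keep focusing at 2 m, roughly a 2-dioptre conflict.
print(round(focus_mismatch_dioptres(0.4), 2))  # 2.0
```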

Sunday, September 17, 2017

Artificial Intelligence is Lesbian


thenewyorker |  “The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” Michal Kosinski, an organizational psychologist at the Stanford Graduate School of Business, told the Guardian earlier this week. The photo of Kosinski accompanying the interview showed the face of a man beleaguered. Several days earlier, Kosinski and a colleague, Yilun Wang, had reported the results of a study, to be published in the Journal of Personality and Social Psychology, suggesting that facial-recognition software could correctly identify an individual’s sexuality with uncanny accuracy. The researchers culled tens of thousands of photos from an online-dating site, then used an off-the-shelf computer model to extract users’ facial characteristics—both transient ones, like eye makeup and hair color, and more fixed ones, like jaw shape. Then they fed the data into their own model, which classified users by their apparent sexuality. When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time; for women, its accuracy dropped slightly, to seventy-one per cent. Human viewers fared substantially worse. They correctly picked the gay man sixty-one per cent of the time and the gay woman fifty-four per cent of the time. “Gaydar,” it appeared, was little better than a random guess.
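
The “eighty-one per cent” figure describes a forced-choice test: shown one photo from each group, the model picks the image its classifier scores higher. A toy sketch of that kind of evaluation is below; the score distributions are entirely made up, and this is an assumed illustration of the general method, not Kosinski and Wang’s model, code, or data:

```python
# Sketch of forced-choice pairwise accuracy. Scores are fabricated for
# illustration; this is not the study's actual model or data.
import random

def pairwise_accuracy(scores_a, scores_b, n_pairs=10000, seed=0):
    """Fraction of random (a, b) pairs in which group A's score is higher."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_pairs):
        wins += rng.choice(scores_a) > rng.choice(scores_b)
    return wins / n_pairs

rng = random.Random(1)
group_a = [rng.uniform(0.3, 1.0) for _ in range(1000)]  # hypothetical scores
group_b = [rng.uniform(0.0, 0.7) for _ in range(1000)]  # hypothetical scores
print(round(pairwise_accuracy(group_a, group_b), 2))    # ~0.84 with these made-up distributions
```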

The study immediately drew fire from two leading L.G.B.T.Q. groups, the Human Rights Campaign and GLAAD, for “wrongfully suggesting that artificial intelligence (AI) can be used to detect sexual orientation.” They offered a list of complaints, which the researchers rebutted point by point. Yes, the study was in fact peer-reviewed. No, contrary to criticism, the study did not assume that there was no difference between a person’s sexual orientation and his or her sexual identity; some people might indeed identify as straight but act on same-sex attraction. “We assumed that there was a correlation . . . in that people who said they were looking for partners of the same gender were homosexual,” Kosinski and Wang wrote. True, the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis. And that didn’t diminish the point they were making—that existing, easily obtainable technology could effectively out a sizable portion of society. To the extent that Kosinski and Wang had an agenda, it appeared to be on the side of their critics. As they wrote in the paper’s abstract, “Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”

Saturday, September 16, 2017

Kevin Shipp: The Deep State and the Shadow Government


activistpost |  “The shadow government controls the deep state and manipulates our elected government behind the scenes,” Shipp warned in a recent talk at a Geoengineeringwatch.org conference.

Shipp had a series of slides explaining how the deep state and shadow government functions as well as the horrific crimes they are committing against U.S. citizens.

Some of the revelations the former CIA anti-terrorism counter intelligence officer revealed included that “Google Earth was set up through the National Geospatial Intelligence Agency and InQtel.” Indeed he is correct: the CIA and the NGA owned Keyhole Inc., the company Google acquired, paying an undisclosed sum to turn its tech into what we now know as Google Earth. Another curious investor in Keyhole Inc. was none other than In-Q-Tel, the venture capital firm run by the CIA, according to a press release at the time.

“The top of the shadow government is the National Security Agency and the Central Intelligence Agency,” Shipp said.

Shipp expressed that the CIA was created through the Council on Foreign Relations with no congressional approval, and that historically the CFR is also tied into the mainstream media (MSM). He elaborated that the CIA was the “central node” of the shadow government and controlled all of the other 16 intelligence agencies despite the existence of the DNI. The agency also controls defense and intelligence contractors, can manipulate the president and political decisions, and has the power to start wars, torture, initiate coups, and commit false flag attacks, he said.

As Shipp stated, the CIA was created through executive order when then-President Harry Truman signed the National Security Act of 1947.

According to Shipp, the deep state is comprised of the military industrial complex, intelligence contractors, defense contractors, MIC lobbyists, Wall Street (offshore accounts), the Federal Reserve, the IMF/World Bank, the Treasury, foreign lobbyists, and central banks.

In the shocking, explosive presentation, Shipp went on to express that there are “over 10,000 secret sites in the U.S.” that formed after 9/11, along with 1,291 secret government agencies, 1,931 large private corporations, and over 4,800,000 Americans that he knows of who hold a security clearance, 854,000 of them with Top Secret clearance, explaining that they signed their lives away, bound by an agreement.

He also detailed how Congress is owned by the Military Industrial Complex through the Congressional Armed Services Committee (48 senior members of Congress), giving those members money in return for a vote on the spending bill for the military and intelligence budget.

He even touched on what he called the “secret intelligence industrial complex,” which he called the center of the shadow government including the CIA, NSA, NRO, and NGA.

Shipp further stated that around the “secret intelligence industrial complex” you have the big five conglomerate of intelligence contractors – Leidos Holdings, CSRA, CACI, SAIC, and Booz Allen Hamilton. He noted that the work they do is “top secret and unreported.”

Alfred W. McCoy: Pentagon Wonder Weapons For World Dominion


tomdispatch |  Ever since the Pentagon with its 17 miles of corridors was completed in 1943, that massive bureaucratic maze has presided over a creative fusion of science and industry that President Dwight Eisenhower would dub “the military-industrial complex” in his farewell address to the nation in 1961. “We can no longer risk emergency improvisation of national defense,” he told the American people. “We have been compelled to create a permanent armaments industry of vast proportions” sustained by a “technological revolution” that is “complex and costly.” As part of his own contribution to that complex, Eisenhower had overseen the creation of both the National Aeronautics and Space Administration, or NASA, and a “high-risk, high-gain” research unit called the Advanced Research Projects Agency, or ARPA, that later added the word “Defense” to its name and became DARPA.
 
For 70 years, this close alliance between the Pentagon and major defense contractors has produced an unbroken succession of “wonder weapons” that at least theoretically gave it a critical edge in all major military domains. Even when defeated or fought to a draw, as in Vietnam, Iraq, and Afghanistan, the Pentagon’s research matrix has demonstrated a recurring resilience that could turn disaster into further technological advance.

The Vietnam War, for example, was a thoroughgoing tactical failure, yet it would also prove a technological triumph for the military-industrial complex. Although most Americans remember only the Army’s soul-destroying ground combat in the villages of South Vietnam, the Air Force fought the biggest air war in military history there and, while it too failed dismally and destructively, it turned out to be a crucial testing ground for a revolution in robotic weaponry.

To stop truck convoys that the North Vietnamese were sending through southern Laos into South Vietnam, the Pentagon’s techno-wizards combined a network of sensors, computers, and aircraft in a coordinated electronic bombing campaign that, from 1968 to 1973, dropped more than a million tons of munitions — equal to the total tonnage for the whole Korean War — in that limited area. At a cost of $800 million a year, Operation Igloo White laced that narrow mountain corridor with 20,000 acoustic, seismic, and thermal sensors that sent signals to four EC-121 communications aircraft circling ceaselessly overhead.

At a U.S. air base just across the Mekong River in Thailand, Task Force Alpha deployed two powerful IBM 360/65 mainframe computers, equipped with history’s first visual display monitors, to translate all those sensor signals into “an illuminated line of light” and so launch jet fighters over the Ho Chi Minh Trail where computers discharged laser-guided bombs automatically. Bristling with antennae and filled with the latest computers, its massive concrete bunker seemed, at the time, a futuristic marvel to a visiting Pentagon official who spoke rapturously about “being swept up in the beauty and majesty of the Task Force Alpha temple.”

However, after more than 100,000 North Vietnamese troops with tanks, trucks, and artillery somehow moved through that sensor field undetected for a massive offensive in 1972, the Air Force had to admit that its $6 billion “electronic battlefield” was an unqualified failure. Yet that same bombing campaign would prove to be the first crude step toward a future electronic battlefield for unmanned robotic warfare.

In the pressure cooker of history’s largest air war, the Air Force also transformed an old weapon, the “Firebee” target drone, into a new technology that would rise to significance three decades later. By 1972, the Air Force could send an “SC/TV” drone, equipped with a camera in its nose, up to 2,400 miles across communist China or North Vietnam while controlling it via a low-resolution television image. The Air Force also made aviation history by test firing the first missile from one of those drones.

The air war in Vietnam was also an impetus for the development of the Pentagon’s global telecommunications satellite system, another important first. After the Initial Defense Satellite Communications System launched seven orbital satellites in 1966, ground terminals in Vietnam started transmitting high-resolution aerial surveillance photos to Washington — something NASA called a “revolutionary development.” Those images proved so useful that the Pentagon quickly launched an additional 21 satellites and soon had the first system that could communicate from anywhere on the globe. Today, according to an Air Force website, the third phase of that system provides secure command, control, and communications for “the Army’s ground mobile forces, the Air Force’s airborne terminals, Navy ships at sea, the White House Communications Agency, the State Department, and special users” like the CIA and NSA.

At great cost, the Vietnam War marked a watershed in Washington’s global information architecture. Turning defeat into innovation, the Air Force had developed the key components — satellite communications, remote sensing, computer-triggered bombing, and unmanned aircraft — that would merge 40 years later into a new system of robotic warfare.

Friday, September 15, 2017

The Snowflakes Almost Stroked Out on Camera....,


Machine Learning and Data Driven Medical Diagnostics


labiotech |  Sophia Artificial Intelligence (AI) is already used worldwide to analyze next-generation sequencing (NGS) data of patients and make a diagnosis, independently of the indication. “We support over 350 hospitals in 53 countries,” CEO Jurgi Camblong told me.

With the new funds, Sophia Genetics is planning on increasing the number of centers using the technology. According to Camblong, this step is also key for the performance of the diagnostics algorithm, since the more data is available to the platform, the better the results it can achieve. “By 2020, with the network, members and data we have, we will move into an era of real-time epidemiology,” assures Camblong.

Sophia’s growing network of hospitals is also the key to its ultimate goal: democratizing data-driven medicine. Until now, access to NGS equipment and analysis expertise was not affordable for all hospitals, especially those in underdeveloped regions of the world. Sophia Genetics is breaking this barrier by giving access to the network and its accumulated knowledge to small hospitals in Africa, Eastern Europe and Latin America without the resources to take on diagnostics themselves.

One of the areas where Sophia AI can have a bigger impact is cancer, which currently makes up about a third of the 8,000 new patient cases registered on the platform each month. With the resources the cash injection will bring, the company wants to take on the project of incorporating imaging data alongside genomic data to diagnose cancer and recommend the best treatment for each patient.  Fist tap Big Don.

Vikram Pandit Says 1.8 Million Bank Employees Gotta Go Gotta Go Gotta Go...,


bloomberg |  Vikram Pandit, who ran Citigroup Inc. during the financial crisis, said developments in technology could see some 30 percent of banking jobs disappearing in the next five years.

Artificial intelligence and robotics reduce the need for staff in roles such as back-office functions, Pandit, 60, said Wednesday in an interview with Bloomberg Television’s Haslinda Amin in Singapore. He’s now chief executive officer of Orogen Group, an investment firm that he co-founded last year.

“Everything that happens with artificial intelligence, robotics and natural language -- all of that is going to make processes easier,” said Pandit, who was Citigroup’s chief executive officer from 2007 to 2012. “It’s going to change the back office.”

Wall Street’s biggest firms are using technologies including machine learning and cloud computing to automate their operations, forcing many employees to adapt or find new positions. Bank of America Corp.’s Chief Operating Officer Tom Montag said in June the firm will keep cutting costs by finding more ways technology can replace people.

While Pandit’s forecast for job losses is in step with one made by Citigroup last year, his timeline is more aggressive. In a March 2016 report, the lender estimated a 30 percent reduction between 2015 and 2025, mainly due to automation in retail banking. That would see full-time jobs drop by 770,000 in the U.S. and by about 1 million in Europe, Citigroup said.

Thursday, September 14, 2017

Who Controls Antarctica and Keeps It Strictly Off-Limits to You?


wikipedia |  Seven sovereign states had made eight territorial claims to land in Antarctica south of the 60° S parallel before 1961. These claims have been recognized only between the countries making claims in the area. All claim areas are sectors, with the exception of Peter I Island. None of these claims have an indigenous population. The South Orkney Islands fall within the territory claimed by Argentina and the United Kingdom, and the South Shetland Islands fall within the areas claimed by Argentina, Chile, and the United Kingdom. The UK, France, Australia, New Zealand and Norway all recognize each other's claims.[30] None of these claims overlap. Prior to 1962, British Antarctic Territory was a dependency of the Falkland Islands and also included South Georgia and the South Sandwich Islands. The Antarctic areas became a separate overseas territory following the ratification of the Antarctic Treaty. South Georgia and the South Sandwich Islands remained a dependency of the Falkland Islands until 1985 when they too became a separate overseas territory.

The Antarctic Treaty and related agreements regulate international relations with respect to Antarctica, Earth's only continent without a native human population. The treaty has now been signed by 48 countries, including the United Kingdom, the United States, and the now-defunct Soviet Union. The treaty set aside Antarctica as a scientific preserve, established freedom of scientific investigation and banned military activity on that continent. This was the first arms control agreement established during the Cold War. The Soviet Union and the United States both filed reservations against the restriction on new claims,[35] and the United States and Russia assert their right to make claims in the future if they so choose. Brazil maintains the Comandante Ferraz (the Brazilian Antarctic Base) and has proposed a theory for delimiting territories using meridians, which would give it and other countries a claim. In general, territorial claims below the 60° S parallel have only been recognised among those countries making claims in the area. However, although claims are often indicated on maps of Antarctica, this does not signify de jure recognition.

All claim areas, except Peter I Island, are sectors, the borders of which are defined by degrees of longitude. In terms of latitude, the northern border of all sectors is the 60° S parallel which does not cut through any piece of land, continent or island, and is also the northern limit of the Antarctic Treaty. The southern border of all sectors collapses in one point, the South Pole. Only the Norwegian sector is an exception: the original claim of 1930 did not specify a northern or a southern limit, so that its territory is only defined by eastern and western limits.[note 2]
The Antarctic Treaty states that becoming a contracting party to the treaty:
  • is not a renunciation of any previous territorial claim.
  • does not affect the basis of claims made as a result of activities of the signatory nation within Antarctica.
  • does not affect the rights of a State under customary international law to recognise (or refuse to recognise) any other territorial claim.
What the treaty does affect are new claims:
  • No activities occurring after 1961 can be the basis of a territorial claim.
  • No new claim can be made.
  • No claim can be enlarged.

wikipedia |  Positioned asymmetrically around the South Pole and largely south of the Antarctic Circle, Antarctica is the southernmost continent and is surrounded by the Southern Ocean; alternatively, it may be considered to be surrounded by the southern Pacific, Atlantic, and Indian Oceans, or by the southern waters of the World Ocean. There are a number of rivers and lakes in Antarctica, the longest river being the Onyx. The largest lake, Vostok, is one of the largest sub-glacial lakes in the world. Antarctica covers more than 14 million km2 (5,400,000 sq mi),[1] making it the fifth-largest continent, about 1.3 times as large as Europe. 

About 98% of Antarctica is covered by the Antarctic ice sheet, a sheet of ice averaging at least 1.6 km (1.0 mi) thick. The continent has about 90% of the world's ice (and thereby about 70% of the world's fresh water). If all of this ice were melted, sea levels would rise about 60 m (200 ft).[43] In most of the interior of the continent, precipitation is very low, down to 20 mm (0.8 in) per year; in a few "blue ice" areas precipitation is lower than mass loss by sublimation and so the local mass balance is negative. In the dry valleys, the same effect occurs over a rock base, leading to a desiccated landscape.


People Get Their Beliefs Reinforced Just Enough To Keep Fooling Themselves


alt-market |  That maybe, just maybe, the conservative right is being tenderized in preparation for radicalization, just as much as the left has been radicalized. For the more extreme the social divide, the more likely chaos and crisis will erupt, and the globalists never let a good crisis go to waste. Zealots, regardless of their claimed moral authority, are almost always wrong in history. Conservatives cannot afford to be wrong in this era. We cannot afford zealotry.  We cannot afford biases and mistakes; the future of individual liberty depends on our ability to remain objective, vigilant and steadfast. Without self examination, we will lose everything.

Years ago in 2012, I published a thorough examination of disinformation tactics used by globalist institutions as well as government and political outfits to manipulate the public and undermine legitimate analysts working to expose particular truths of our social and economic conditions.

If you have not read this article, titled Disinformation: How It Works, I highly recommend you do so now. It will act as a solid foundation for what I am about to discuss in this article. Without a basic understanding of how lies are utilized, you will be in no position to grasp the complexities of disinformation trends being implemented today.

Much of what I am about to discuss will probably not become apparent to much of the mainstream, and to portions of the liberty movement, for many years to come. Sadly, the biggest lies are often the hardest to see until time and distance are achieved.

If you want to be able to predict geopolitical and economic trends with any accuracy, you must first accept a couple of hard realities. First and foremost, the majority of cultural shifts and fiscal developments within our system are a product of social engineering by an organized collective of power elites. Second, you must understand that this collective is driven by the ideology of globalism — the pursuit of total centralization of financial and political control into the hands of a select few deemed as "superior" concertmasters or "maestros."

As globalist insider, CFR member and mentor to Bill Clinton, Carroll Quigley, openly admitted in his book Tragedy And Hope:
"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent private meetings and conferences. The apex of the system was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world’s central banks which were themselves private corporations. Each central bank ... sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."
The philosophical basis for the globalist ideology is most clearly summarized in the principles of something called "Fabian Socialism," a system founded in 1884 which promotes the subversive and deliberate manipulation of the masses towards total centralization, collectivism and population control through eugenics. Fabian Socialists prefer to carry out their strategies over a span of decades, turning a population against itself slowly, rather than trying to force changes to a system immediately and outright.  Their symbol is a coat of arms depicting a wolf in sheep's clothing, or in some cases a turtle (slow and steady wins the race?) with the words "When I strike I strike hard."
Again, it is important to acknowledge that these people are NOT unified by loyalty to any one nation, culture, political party, mainstream religion or ethnic background.

Wednesday, September 13, 2017

Can The Anglo-Zionist Empire Continue to Enforce Its "Truth"?


medialens |  The goal of a mass media propaganda campaign is to create the impression that 'everybody knows' that Saddam is a 'threat', Gaddafi is 'about to commit mass murder', Assad 'has to go', Corbyn is 'destroying the Labour party', and so on. The picture of the world presented must be clear-cut. The public must be made to feel certain that the 'good guys' are basically benevolent, and the 'bad guys' are absolutely appalling and must be removed.

This is achieved by relentless repetition of the theme over days, weeks, months and even years. Numerous individuals and organisations are used to give the impression of an informed consensus – there is no doubt! Once this 'truth' has been established, anyone contradicting or even questioning it is typically portrayed as a shameful 'apologist' in order to deter further dissent and enforce conformity.

A key to countering this propaganda is to ask some simple questions: Why are US-UK governments and corporate media much more concerned about suffering in Venezuela than the far worse horrors afflicting war-torn, famine-stricken Yemen? Why do UK MPs rail against Maduro while rejecting a parliamentary motion to suspend UK arms supplies to their Saudi Arabian allies attacking Yemen? Why is the imperfect state of democracy in Venezuela a source of far greater outrage than outright tyranny in Saudi Arabia? The answers could hardly be more obvious.

Elite Establishment Has Lost Control of the Information Environment


tandfonline |  In 1993, before WiFi, indeed before more than a small fraction of people enjoyed broadband Internet, John J. Arquilla and David F. Ronfeldt of the Rand Corporation began to develop a thesis on “Cyberwar and Netwar” (Arquilla and Ronfeldt 1995). I found it of little interest at the time. It seemed typical of Rand’s role as a sometime management consultant to the military-industrial complex. For example, Arquilla and Ronfeldt wrote that “[c]yberwar refers to conducting military operations according to information-related principles. It means disrupting or destroying information and communications systems. It means trying to know everything about an adversary while keeping the adversary from knowing much about oneself.” A sort of Sun Tzu for the networked era.

The authors’ coining of the notion of “netwar” as distinct from “cyberwar” was even more explicitly grandiose. They went beyond bromides about inter-military conflict, describing impacts on citizenries at large:
Netwar refers to information-related conflict at a grand level between nations or societies. It means trying to disrupt or damage what a target population knows or thinks it knows about itself and the world around it. A netwar may focus on public or elite opinion, or both. It may involve diplomacy, propaganda and psychological campaigns, political and cultural subversion, deception of or interference with local media, infiltration of computer networks and databases, and efforts to promote dissident or opposition movements across computer networks. (Arquilla and Ronfeldt 1995)
While “netwar” never caught on as a name, I was, in retrospect, too quick to dismiss it. Today it is hard to look at Arquilla and Ronfeldt’s crisp paragraph of more than 20 years ago without appreciating its deep prescience.

Our digital environment, once marked by the absence of sustained state involvement and exploitation, particularly through militaries, is now suffused with it. We will need new strategies to cope with this kind of intrusion, not only in its most obvious manifestations – such as shutting down connectivity or compromising private email – but also in its more subtle ones, such as subverting social media for propaganda purposes.

Many of us thinking about the Internet in the late 1990s concerned ourselves with how the network’s unusually open and generative architecture empowered individuals in ways that caught traditional states – and, to the extent they concerned themselves with it at all, their militaries – flat-footed. As befitted a technology that initially grew through the work and participation of hobbyists, amateurs, and loosely confederated computer science researchers, and later through commercial development, the Internet’s features and limits were defined without much reference to what might advantage or disadvantage the interests of a particular government.

To be sure, conflicts brewed over such things as the unauthorized distribution of copyrighted material, presaging counter-reactions by incumbents. Scholars such as Harvard Law School professor Lawrence Lessig (2006) mapped out how the code that enabled freedom (to some; anarchy to others) could readily be reworked, under pressure of regulators if necessary, to curtail it. Moreover, the interests of the burgeoning commercial marketplace and the regulators could neatly intersect: The technologies capable of knowing someone well enough to anticipate the desire for a quick dinner, and to find the nearest pizza parlor, could – and have – become the technologies of state surveillance.

That is why divisions among those who study the digital environment – between so-called techno-utopians and cyber-skeptics – are not so vast. The fact was, and is, that our information technologies enable some freedoms and diminish others, and more important, are so protean as to be able to rearrange or even invert those affordances remarkably quickly.

Fascist Traitors In House and Senate Tryna Criminalize Anti-Israel Speech


WaPo  |  When government takes sides on a particular boycott and criminalizes those who engage in a boycott, it crosses a constitutional line.

Cardin and other supporters argue that the Israel Anti-Boycott Act targets only commercial activity. In fact, the bill threatens severe penalties against any business or individual who does not purchase goods from Israeli companies operating in the occupied Palestinian territories and who makes it clear — say by posting on Twitter or Facebook — that their reason for doing so is to support a U.N.- or E.U.-called boycott. That kind of penalty does not target commercial trade; it targets free speech and political beliefs. Indeed, the bill would prohibit even the act of giving information to a U.N. body about boycott activity directed at Israel.

The bill’s chilling effect would be dramatic — and that is no doubt its very purpose. But individuals, not the government, should have the right to decide whether to support boycotts against practices they oppose. Neither individuals nor businesses should have to fear million-dollar penalties, years in prison and felony convictions for expressing their opinions through collective action. As an organization, we take no sides on the Israeli-Palestinian conflict. But regardless of the politics, we have and always will take a strong stand when government threatens our freedoms of speech and association. The First Amendment demands no less. 

WaPo  |   The Israel Anti-Boycott Act would extend the 1977 law to international organizations, such as the United Nations or even the European Union, that might parallel the Arab League’s original “blacklist” of companies doing business with Israel, which was the heart of its boycott.

It couldn’t come at a better time. Already, the U.N. Human Rights Council passed a resolution last year requesting its high commissioner for human rights to create a database of companies that operate in or have business relationships in the West Bank beyond Israel’s 1949 Armistice Lines, which includes all of Jerusalem, Israel’s capital.

If the high commissioner implements this resolution, as he appears determined to do, it will create a new “blacklist” that could subject American individuals and companies to discrimination, yet again, for simply doing business with Israel.

Moreover, the European Union has instituted a mandatory labeling requirement for agricultural products made in the West Bank and has restricted its substantial research and development funding for Israeli universities and companies to only those with no contacts with territories east of the Armistice Line. None of the many U.N. member states that are serial human rights violators are accorded similar treatment. Not Iran. Not Syria. Not North Korea. Only Israel.

These kinds of actions do not create the right atmosphere to prompt resumption of peace talks between Israel and the Palestinians that the Trump administration is seeking to jump-start.

Tuesday, September 12, 2017

Be Ye Wise As Serpents, Gentle As Doves...,


gurdjiefflegacy |  In 1888 the 16-year-old Gurdjieff witnessed a strange incident: he saw a little boy, weeping and making strange movements, struggling with all his might to break out of a circle drawn around him by other boys. Gurdjieff released the boy by erasing part of the circle and the child ran from his tormentors. The boy, Gurdjieff learned, was a Yezidi. He had heard only that Yezidis were "a sect living in Transcaucasia, mainly in the regions near Mount Ararat. They are sometimes called devil-worshippers." Astonished by the incident, Gurdjieff made a point of telling us that he felt compelled to think seriously about the Yezidis.(1) Inquiring of the adults he knew, he received contradictory opinions representative of the usual, prejudiced view of the Yezidis. But Gurdjieff remained unsatisfied. 

This story is embedded in the narrative of Meetings with Remarkable Men, like one of the monuments in Turkestan which Gurdjieff said helps people find their way through regions in which there are no roads or footpaths. In chapter five Gurdjieff placed another such marker, an echo of the earlier story. There, he and Pogossian set off to find the Sarmoung Brotherhood, even if they must travel, as Gurdjieff says, "on the devil's back." Enroute, far from any city, Pogossian throws a stone at one barking dog in a pack, and he and Gurdjieff are immediately surrounded by fifteen Kurdish sheepdogs. Like Yezidis, the two men cannot leave the circle of dogs until they are released by the shepherds who own the dogs.(2)
 
Where does this incident happen? If we set out Gurdjieff's journey with Pogossian on a map and, following Gurdjieff's instructions, draw a line from Alexandropol through Van, we see it passes through the Lalish Valley, location of the tomb and shrine of Sheikh Adi, the principal saint of the Yezidi religion. Extending the line further, it reaches Mosul, the major town in the region and a center of Yezidism.(3) By setting such markers, is Gurdjieff advising that we too should "think seriously" about the Yezidis? 

Gurdjieff has said that the teaching he brought is completely self-supporting and independent of other lines, was completely unknown up to the present time, and its origins predate and are the source of ancient Egyptian religion and of Christianity. Why then, has he as much as asked us to look into Yezidism? Some, swayed in a superficial sense by the subtitle of Ouspensky's book, Fragments of an Unknown Teaching, went hunting for the "missing link" in Gurdjieff's supposedly incomplete teaching. They tried to find this or that source from which he put it together, little realizing that it was they who were fragmentary, not the teaching. 

The questions become instead: what ideas do we encounter in a study of the Yezidis—and do these tell us anything? As we acquaint ourselves with the Yezidis and their beliefs, we may see that Gurdjieff has led us to materials for a deeper understanding of the nature of an esoteric teaching, of the implications of a teaching transmitted "orally," and of the reasons for his unlikely choice of Beelzebub as the hero of the First Series.

Will We Vanish From the Record Like the Watchers Did?



andrewcollins | Is civilisation the legacy of a race of human angels known as Watchers and Nephilim? Andrew Collins, author of FROM THE ASHES OF ANGELS, previews his history of angels and fallen angels and traces their origin back to an extraordinarily advanced culture that entered the Near East shortly after the end of the last Ice Age.

Angels are something we associate with beautiful Pre-Raphaelite and renaissance paintings, carved statues accompanying gothic architecture and supernatural beings who intervene in our lives at times of trouble. For the last 2000 years this has been the stereotypical image fostered by the Christian Church. But what are angels? Where do they come from, and what have they meant to the development of organised religion?

Many people see the Pentateuch, the first five books of the Old Testament, as littered with accounts of angels appearing to righteous patriarchs and visionary prophets. Yet this is simply not so. There are the three angels who approach Abraham to announce the birth of a son named Isaac to his wife Sarah as he sits beneath a tree on the Plain of Mamre.

There are the two angels who visit Lot and his wife at Sodom prior to its destruction. There is the angel who wrestles all night with Jacob at a place named Penuel, or those which he sees moving up and down a ladder that stretches between heaven and earth. Yet other than these accounts, there are too few examples, and when angels do appear the narrative is often vague and unclear on what exactly is going on. For instance, in the case of both Abraham and Lot the angels in question are described simply as `men', who sit down to take food like any mortal person.

Influence of the Magi
It was not until post-exilic times - ie after the Jews returned from captivity in Babylon around 450 BC - that angels became an integral part of the Jewish religion. It was even later, around 200 BC, that they began appearing with frequency in Judaic religious literature. Works such as the Book of Daniel and the apocryphal Book of Tobit contain enigmatic accounts of angelic beings that have individual names, specific appearances and established hierarchies. These radiant figures were of non-Judaic origin. All the indications are that they were aliens, imports from a foreign kingdom, namely Persia.

The country we know today as Iran might not at first seem the most likely source for angels, but it is a fact that the exiled Jews were heavily exposed to its religious faiths after the Persian king Cyrus the Great took Babylon in 539 BC. These included not only Zoroastrianism, after the prophet Zoroaster or Zarathustra, but also the much older religion of the Magi, the elite priestly caste of Media in north-west Iran. They believed in a whole pantheon of supernatural beings called ahuras, or `shining ones', and daevas - ahuras who had fallen from grace because of their corruption of mankind.

Although eventually outlawed by Persia, the influence of the Magi ran deep within the beliefs, customs and rituals of Zoroastrianism. Moreover, there can be little doubt that Magianism, from which we get terms such as magus, magic and magician, helped to establish the belief among Jews not only of whole hierarchies of angels, but also of legions of fallen angels - a topic that gains its greatest inspiration from one work alone - the Book of Enoch.

The Book of Enoch
Compiled in stages somewhere between 165 BC and the start of the Christian era, this so-called pseudepigraphal (ie falsely attributed) work has as its main theme the story behind the fall of the angels. Yet not the fall of angels in general, but those which were originally known as ‘irin (‘ir in singular), `those who watch', or simply `watchers' as the word is rendered in English translation.

The Book of Enoch tells the story of how 200 rebel angels, or Watchers, decided to transgress the heavenly laws and `descend' on to the plains and take wives from among mortal kind. The site given for this event is the summit of Hermon, a mythical location generally associated with the snowy heights of Mount Hermon in the Ante-Lebanon range, north of modern-day Palestine (but see below for the most likely homeland of the Watchers).

The 200 rebels realise the implications of their transgressions, for they agree to swear an oath to the effect that their leader Shemyaza would take the blame if the whole ill-fated venture went terribly wrong.

After their descent to the lowlands, the Watchers indulge in earthly delights with their chosen `wives', and through these unions are born giant offspring named as Nephilim, or Nefilim, a Hebrew word meaning `those who have fallen', which is rendered in Greek translations as gigantes, or `giants'.

andrewcollins |   In both the book of Genesis (chapter six) and the book of Enoch, the rebel Watchers are said also to have come upon the Daughters of Men, i.e. mortal women, who gave birth to giant offspring called Nephilim. For this transgression against the laws of Heaven, the renegades were incarcerated and punished by those Watchers who had remained loyal to Heaven. The rebel Watchers' offspring, the Nephilim (a word meaning "those who fell"), were either killed outright, or were afterwards destroyed in the flood of Noah. Some, however, the book of Numbers tells us, survived and went on to become the ancestors of giant races, such as the Anakim and Rephaim.

I wrote that the story of the Watchers is in fact the memory of a priestly or shamanic elite, a group of highly intelligent human individuals, that entered the Upper Euphrates region from another part of the ancient world sometime around the end of the last Ice Age, c. 11,000-10,000 BC. On their arrival in what became known as the land or kingdom of Eden (a term actually used in the Old Testament), they assumed control of the gradually emerging agrarian communities, who were tutored in a semi-rural lifestyle centred around agriculture, metal working and the rearing of livestock. More disconcertingly, these people were made to venerate their superiors, i.e. the Watchers, as living gods, or immortals.

This same region of the Near East, now thought to be the biblical Garden of Eden, has long been held to be the cradle of civilization. Here a number of "firsts" occurred at the beginning of the Neolithic revolution, which began c. 10,000-9000 BC. It was in southeast Turkey, northern Syria and northern Iraq, for example, that the first domestication of wild grasses took place, the first fired pottery and baked statues were produced, the first copper and lead were smelted, the first stone buildings and standing stones were erected, the first beautification of the eyes took place among women, the first drilled beads in ultra-hard stone were produced, the first alcohol was brewed and distilled, etc., etc. In fact, many of the arts and sciences of Heaven that the Watchers are said to have revealed to mortal kind were all reported first in this region of the globe, known to archaeologists as Upper Mesopotamia, and to the people of the region as Kurdistan.

Sean Thomas acknowledges my help at the beginning of The Genesis Secret, which follows exactly the same themes as From the Ashes of Angels (and my later book Gods of Eden, published in 1998), including the fact that the Watchers and founders of Eden were bird men, i.e. shamans who wore cloaks of feathers, and that local angel-worshipping cults in Kurdistan, such as the Yezidi, Yaresan and Alevi, preserve some semblance of knowledge regarding the former existence of the Watchers or angels as the bringers of civilization. Their leader, they say, was Azazel, known also as Melek Taus (or Melek Tawas), the "Peacock Angel". Azazel is a name given in the book of Enoch for one of the two leaders of the rebel Watchers (the other being Shemyaza).

It is an honour for my work to be acknowledged in this manner by Sean Thomas, especially as The Genesis Secret has become a bestseller (as was From the Ashes of Angels in 1996). I won't spoil the plot, so will not reveal Sean's conclusions, or indeed the climax of the book, although I must warn you that it is extremely gory in places!


The Book of Enoch


wikipedia |  The Book of Enoch (also 1 Enoch;[1] Ge'ez: መጽሐፈ ሄኖክ mätṣḥäfä henok) is an ancient Jewish religious work, ascribed by tradition to Enoch, the great-grandfather of Noah, although modern scholars estimate the older sections (mainly in the Book of the Watchers) to date from about 300 BC, and the latest part (Book of Parables) probably to the first century BC.[2]

It is not part of the biblical canon as used by Jews, apart from Beta Israel. Most Christian denominations and traditions may accept the Books of Enoch as having some historical or theological interest, but they generally regard the Books of Enoch as non-canonical or non-inspired.[3] It is regarded as canonical by the Ethiopian Orthodox Tewahedo Church and Eritrean Orthodox Tewahedo Church, but not by any other Christian groups.

It is wholly extant only in the Ge'ez language, with Aramaic fragments from the Dead Sea Scrolls and a few Greek and Latin fragments. For this and other reasons, the traditional Ethiopian belief is that the original language of the work was Ge'ez, whereas non-Ethiopian scholars tend to assert that it was first written in either Aramaic or Hebrew; Ephraim Isaac suggests that the Book of Enoch, like the Book of Daniel, was composed partially in Aramaic and partially in Hebrew.[4]:6 No Hebrew version is known to have survived. It is asserted in the book itself that its author was Enoch, before the Biblical Flood.

Some of the authors of the New Testament were familiar with some of the content of the story.[5] A short section of 1 Enoch (1:9) is cited in the New Testament, Epistle of Jude, Jude 1:14–15, and is attributed there to "Enoch the Seventh from Adam" (1 En 60:8), although this section of 1 Enoch is a midrash on Deuteronomy 33. Several copies of the earlier sections of 1 Enoch were preserved among the Dead Sea Scrolls.

Younger Dryas Impact Hypothesis


wikipedia |  The Younger Dryas is a climatic event from c. 12,900 to c. 11,700 calendar years ago (BP). It is named after an indicator genus, the alpine-tundra wildflower Dryas octopetala, as its leaves are occasionally abundant in Late Glacial, often minerogenic-rich sediments, such as the lake sediments of Scandinavian lakes.

The Younger Dryas saw a sharp decline in temperature over most of the Northern Hemisphere, at the end of the Pleistocene epoch, immediately before the current, warmer Holocene. The Younger Dryas was the most recent and longest of several interruptions to the gradual warming of the Earth's climate since the severe Last Glacial Maximum, c. 27,000 to 24,000 calendar years BP. The change was relatively sudden, taking place in decades, and it resulted in a decline of 2 to 6 degrees Celsius and advances of glaciers and drier conditions, over much of the temperate northern hemisphere. It is thought to have been caused by a decline in the strength of the Atlantic meridional overturning circulation, which transports warm water from the Equator towards the North Pole, in turn thought to have been caused by an influx of fresh cold water from North America to the Atlantic.

The Younger Dryas was a period of climatic change, but the effects were complex and variable. In the Southern Hemisphere and some areas of the Northern Hemisphere, such as southeastern North America, there was a slight warming.[1]

The presence of a distinct cold period at the end of the Late Glacial interval has been known for a long time. Paleobotanical and lithostratigraphic studies of Swedish and Danish bog and lake sites, like in the Allerød clay pit in Denmark, first recognized and described the Younger Dryas.[2][3][4][5]

wikipedia |  The Younger Dryas impact hypothesis or Clovis comet hypothesis originally proposed that a large air burst or earth impact of one or more comets initiated the Younger Dryas cold period about 12,900 calibrated years BP (10,900 ¹⁴C years uncalibrated).[1][2][3] The hypothesis has been contested by research showing that most of the conclusions cannot be repeated by other scientists, and criticized because of misinterpretation of data and the lack of confirmatory evidence.[4][5][6][7]

The current impact hypothesis states that the air burst(s) or impact(s) of a swarm of carbonaceous chondrites or comet fragments set areas of the North American continent on fire, causing the extinction of most of the megafauna in North America and the demise of the North American Clovis culture after the last glacial period.[8] The Younger Dryas ice age lasted for about 1,200 years before the climate warmed again. This swarm is hypothesized to have exploded above or possibly on the Laurentide Ice Sheet in the region of the Great Lakes, though no impact crater has yet been identified and no physical model by which such a swarm could form or explode in the air has been proposed. Nevertheless, the proponents suggest that it would be physically possible for such an air burst to have been similar to, but orders of magnitude larger than, the Tunguska event of 1908. The hypothesis proposed that animal and human life in North America not directly killed by the blast or the resulting coast-to-coast wildfires would have likely starved on the burned surface of the continent.

 

Monday, September 11, 2017

The Moon: A Natural Satellite Cannot Be a Hollow Object


disinfo |  Between 1969 and 1977, Apollo mission seismographic equipment registered up to 3,000 “moonquakes” each year of operation. Most of the vibrations were quite small and were caused by meteorite strikes or falling booster rockets. But many other quakes were detected deep inside the Moon. This internal creaking is believed to be caused by the gravitational pull of our planet as most moonquakes occur when the Moon is closest to the Earth.

An event occurred in 1958 in the Moon’s Alphonsus crater, which shook the idea that all internal moonquake activity was simply settling rocks. In November of that year, Soviet astronomer Nikolay A. Kozyrev of the Crimean Astrophysical Observatory startled the scientific world by photographing the first recorded gaseous eruption on the Moon near the crater’s peak. Kozyrev attributed this to escaping fluorescent gases. He also detected a reddish glow characteristic of carbon compounds, which “seemed to move and disappeared after an hour.”

Some scientists refused to accept Kozyrev’s findings until astronomers at the Lowell Observatory also saw reddish glows on the crests of ridges in the Aristarchus region in 1963. Days later, colored lights on the Moon lasting more than an hour were reported at two separate observatories.

Something was going on inside the volcanically dead Moon. And whatever it is, it occurs the same way at the same time. As the Moon moves closer to the Earth, seismic signals from different stations on the lunar surface detect identical vibrations. It is difficult to accept this movement as a natural phenomenon. For example, a broken artificial hull plate could shift exactly the same way each time the Moon passed near the Earth.

There is evidence to indicate the Moon may be hollow. Studies of Moon rocks indicate that the Moon’s interior differs from the Earth’s mantle in ways suggesting a very small, or even nonexistent, core. As far back as 1962, NASA scientist Dr. Gordon MacDonald stated, “If the astronomical data are reduced, it is found that the data require that the interior of the Moon be less dense than the outer parts. Indeed, it would seem that the Moon is more like a hollow than a homogeneous sphere.”

Apollo 14 astronaut Dr. Edgar Mitchell, while scoffing at the possibility of a hollow moon, nevertheless admitted that since heavier materials were on the surface, it is quite possible that giant caverns exist within the Moon. MIT’s Dr. Sean C. Solomon wrote, “The Lunar Orbiter experiments vastly improved our knowledge of the Moon’s gravitational field … indicating the frightening possibility that the Moon might be hollow.”

Why frightening? The significance was stated by astronomer Carl Sagan way back in his 1966 work Intelligent Life in the Universe, “A natural satellite cannot be a hollow object.”

The most startling evidence that the Moon could be hollow came on November 20, 1969, when the Apollo 12 crew, after returning to their command ship, sent the lunar module (LM) ascent stage crashing back onto the Moon creating an artificial moonquake. The LM struck the surface about 40 miles from the Apollo 12 landing site where ultra-sensitive seismic equipment recorded something both unexpected and astounding—the Moon reverberated like a bell for more than an hour. The vibration wave took almost eight minutes to reach a peak, and then decreased in intensity. At a news conference that day, one of the co-directors of the seismic experiment, Maurice Ewing, told reporters that scientists were at a loss to explain the ringing. “As for the meaning of it, I’d rather not make an interpretation right now. But it is as though someone had struck a bell, say, in the belfry of a church a single blow and found that the reverberation from it continued for 30 minutes.”

It was later established that small vibrations had continued on the Moon for more than an hour. The phenomenon was repeated when the Apollo 13’s third stage was sent crashing onto the Moon by radio command, striking with the equivalent of 11 tons of TNT. According to NASA, this time the Moon “reacted like a gong.” Although seismic equipment was more than 108 miles from the crash site, recordings showed reverberations lasted for three hours and 20 minutes and traveled to a depth of 22 to 25 miles.

Subsequent studies of man-made crashes on the Moon yielded similar results. After one impact the Moon reverberated for four hours. This ringing, coupled with the density problem on the Moon, reinforces the idea of a hollow moon. Scientists hoped to record the impact of a meteor large enough to send shock waves to the Moon’s core and back and settle the issue. That opportunity came on May 13, 1972, when a large meteor struck the Moon with the equivalent force of 200 tons of TNT. The impact sent shock waves deep into the interior of the Moon, but scientists were baffled to find that none returned, confirming that there is something unusual about the Moon’s core, or lack thereof.

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...