Showing posts with label AI. Show all posts

Saturday, October 12, 2024

Why The Techbros Back Trump And Why Vance Is Their Man In The White House

thebulletin  |  Since the emergence of generative artificial intelligence, scholars have speculated about the technology’s implications for the character, if not nature, of war. The promise of AI on battlefields and in war rooms has beguiled scholars. They characterize AI as “game-changing,” “revolutionary,” and “perilous,” especially given the potential of great power war involving the United States and China or Russia. In the context of great power war, where adversaries have parity of military capabilities, scholars claim that AI is the sine qua non, absolutely required for victory. This assessment is predicated on the presumed implications of AI for the “sensor-to-shooter” timeline, which refers to the interval of time between acquiring and prosecuting a target. By adopting AI, or so the argument goes, militaries can reduce the sensor-to-shooter timeline and maintain lethal overmatch against peer adversaries.

Although understandable, this line of reasoning may be misleading for military modernization, readiness, and operations. While experts caution that militaries are confronting a “eureka” or “Oppenheimer” moment, harkening back to the development of the atomic bomb during World War II, this characterization distorts the merits and limits of AI for warfighting. It encourages policymakers and defense officials to follow what can be called a “primrose path of AI-enabled warfare,” which is codified in the US military’s “third offset” strategy. This vision of AI-enabled warfare is fueled by gross prognostications and over-determination of emerging capabilities enhanced with some form of AI, rather than rigorous empirical analysis of its implications across all (tactical, operational, and strategic) levels of war.

The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not nature, of war. Rather, they capitalize on their impressive business successes to influence a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities.

To the extent analysts do explore the implications of AI for warfighting, such as during the conflicts in Gaza, Libya, and Ukraine, they highlight limited—and debatable—examples of its use, embellish its impacts, conflate technology with organizational improvements provided by AI, and draw generalizations about future warfare. It is possible that AI-enabled technologies, such as lethal autonomous weapon systems or “killer robots,” will someday dramatically alter war. Yet the current debate over the implications of AI for warfighting discounts critical political, operational, and normative considerations that imply AI may not have the revolutionary impacts that its proponents claim, at least not now. As suggested by Israel’s and the United States’ use of AI-enabled decision-support systems in Gaza and Ukraine, there is a more reasonable alternative. In addition to enabling cognitive warfare, it is likely that AI will allow militaries to optimize workflows across warfighting functions, particularly intelligence and maneuver. This will enhance situational awareness; provide efficiencies, especially in terms of human resources; and shorten the course-of-action development timeline.

Militaries across the globe are at a strategic inflection point in terms of preparing for future conflict. But this is not for the reasons scholars typically assume. Our research suggests that three related considerations have combined to shape the hype surrounding military AI, informing the primrose path of AI-enabled warfare. First, that primrose path is paved by the emergence of a new military-industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is the cause and effect of a narrative suggesting a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, which is integral to human-machine teaming that facilitates AI-enabled warfare, trust is not guaranteed.

What AI is and isn’t. Automation, autonomy, and AI are often used interchangeably but erroneously. Automation refers to the routinization of tasks performed by machines, such as auto-order of depleted classes of military supplies, but with overall human oversight. Autonomy moderates the degree of human oversight of tasks performed by machines such that humans are on, in, or off the loop. When humans are on the loop, they exercise ultimate control of machines, as is the case for the current class of “conventional” drones such as the MQ-9 Reaper. When humans are in the loop, they pre-delegate certain decisions to machines, which scholars debate in terms of nuclear command and control. When humans are off the loop, they outsource control to machines leading to a new class of “killer robots” that can identify, track, and engage targets on their own. Thus, automation and autonomy are protocol-based functions that largely retain a degree of human oversight, which is often high given humans’ inherent skepticism of machines.
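For readers who want the distinctions made concrete, here is a minimal sketch in Python of the three oversight postures described above, following the article's own mapping of on/in/off the loop. The enum values and the approval rule are illustrative assumptions, not a standard taxonomy or any military system's actual logic.

```python
from enum import Enum, auto
from dataclasses import dataclass

class HumanOversight(Enum):
    """Degree of human oversight over a machine-performed task (per the article's framing)."""
    ON_THE_LOOP = auto()   # humans exercise ultimate control (e.g., remotely operated drones)
    IN_THE_LOOP = auto()   # humans pre-delegate certain decisions to the machine
    OFF_THE_LOOP = auto()  # humans outsource control entirely (fully autonomous systems)

@dataclass
class MachineTask:
    name: str
    oversight: HumanOversight

    def requires_human_approval(self) -> bool:
        # Illustrative assumption: only fully autonomous (off-the-loop) tasks
        # proceed without a human sign-off before irreversible action.
        return self.oversight is not HumanOversight.OFF_THE_LOOP

# Example usage
resupply = MachineTask("auto-order depleted supplies", HumanOversight.IN_THE_LOOP)
print(resupply.requires_human_approval())  # True
```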

Monday, April 01, 2024

Online Dissent Is PreCrime

nakedcapitalism  |  Philip K. Dick’s 1956 novella The Minority Report created “precrime,” the clairvoyant foreknowledge of criminal activity as forecast by mutant “precogs.” The book was a dystopian nightmare, but a 2015 Fox television series transforms the story into one in which a precog works with a cop and shows that data is actually effective at predicting future crime.

Canada just tried to enact a precrime law along the lines of the 2015 show, but it was panned about as much as the television series. Ottawa’s now-tabled online harms bill included a provision to impose house arrest on someone feared likely to commit a hate crime in the future. From The Globe and Mail:

The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says. Mr. Virani, who is Attorney-General as well as Justice Minister, said it is important that any peace bond be “calibrated carefully,” saying it would have to meet a high threshold to apply.

But he said the new power, which would require the attorney-general’s approval as well as a judge’s, could prove “very, very important” to restrain the behaviour of someone with a track record of hateful behaviour who may be targeting certain people or groups…

People found guilty of posting hate speech could have to pay victims up to $20,000 in compensation. But experts including internet law professor Michael Geist have said even a threat of a civil complaint – with a lower burden of proof than a court of law – and a fine could have a chilling effect on freedom of expression.

While the Canadian bill is shelved for now, it wouldn’t be surprising to see it resurface after some future hate crime. I wonder if this is where burgeoning “anti-hate” programs across the US are headed. The Canadian bill would have also allowed “people to file complaints to the Canadian Human Rights Commission over what they perceive as hate speech online – including, for example, off-colour jokes by comedians.”

There are now programs in multiple US states to do just that –  encourage people to snitch on anyone doing anything perceived as “hateful.”

The 2021 federal COVID-19 Hate Crimes Act began to dole out money to states to help them respond to hate incidents. Oregon now has its Bias Response Hotline to track “bias incidents.” In December of 2022, New York launched its Hate and Bias Prevention Unit. Maryland, too, has its system – its hate incidents examples include “offensive jokes” and “malicious complaints of smell or noise.”

Maryland also has its Emmett Till Alert System that sends out three levels of alerts for specific acts of hate. For now, they only go to black lawmakers, civil rights activists, the media and other approved outlets, but expansion to the general populace is under consideration.

California vs. Hate, a multilingual statewide hotline and website that encourages people to report all acts of “hate,” is coming up on its one-year anniversary, reportedly receiving a mere 823 calls from 79% of California’s 58 counties during its first nine months of operation. It looks like the program is rolling out even more social media graphics in a bid to get more reports:

Tuesday, September 19, 2023

Chuck Schumer's AI Conference

 CTH  |  According to a recent media report, Senator Chuck Schumer led an AI insight forum that included tech industry leaders: Google CEO Sundar Pichai, Tesla, X and SpaceX CEO Elon Musk, NVIDIA President Jensen Huang, Meta founder and CEO Mark Zuckerberg, technologist and Google alum Eric Schmidt, OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella.

Additionally, the forum included representatives from labor and civil rights advocacy groups: AFL-CIO President Liz Shuler, Leadership Conference on Civil and Human Rights President and CEO Maya Wiley, and AI accountability researcher Deb Raji.

Notably absent from the Sept 13th forum was anyone with real-world experience who is not a beneficiary of government spending. This is not accidental. Technocracy advances regardless of the impact on citizens. Technocrats advance their common interests, not the interests of the ordinary citizen.

That meeting comes after DHS established independent guidelines we previously discussed {GO DEEP}.

DHS’ AI task force is coordinating with the Cybersecurity and Infrastructure Security Agency on how the department can partner with critical infrastructure organizations “on safeguarding their uses of AI and strengthening their cybersecurity practices writ large to defend against evolving threats.”

Remember, in addition to these groups assembling, the Dept of Defense (DoD) will now conduct online monitoring operations, using enhanced AI to protect the U.S. internet from “disinformation” under the auspices of national security. {link}

So, the question becomes, what was Chuck Schumer’s primary reference for this forum?

(FED NEWS) […] Schumer said that tackling issues around AI-generated content that is fake or deceptive that can lead to widespread misinformation and disinformation was the most time-sensitive problem to solve due to the upcoming 2024 presidential election.

[…] The top Democrat in the Senate said there was much discussion during the meeting about the creation of a new AI agency and that there was also debate about how to use some of the existing federal agencies to regulate AI.

South Dakota Sen. Mike Rounds, Schumer’s Republican counterpart in leading the bipartisan AI forums, said: “We’ve got to have the ability to provide good information to regulators. And it doesn’t mean that every single agency has to have all of the top-end, high-quality of professionals but we need that group of professionals who can be shared across the different agencies when it comes to AI.”

Although there were no significant voluntary commitments made during the first AI insight forum, tech leaders who participated in the forum said there was much debate around how open and transparent AI developers and those using AI in the federal government will be required to be. (read more)

There isn’t anything that is going to stop the rapid deployment of AI in the tech space. However, the real concern for the larger American population, the group unrepresented in the forum, is the use of AI to identify, control, and impede the distribution of information that runs against the interests of the government and the public-private partnership the technocrats are assembling.

The words “disinformation” and “deep fakes” are as disingenuous as the term “Patriot Act.” The definitions of disinformation and deep fakes are where government regulators step in, using their portals into Big Tech to identify content on platforms that is deemed in violation.

It doesn’t take a deep political thinker to predict that memes and video segments against the interests of the state will be defined for removal.

Bill Gates: People Don’t Realize What’s Coming

medium  |  Gates is now talking about artificial intelligence, and how it’s the most important innovation of our time. Are you ready for what’s coming?

Bill Gates doesn’t think so.

In fact, he’s sounding the alarm on a future that many of us don’t realize is just around the corner. He thinks AI is going to shake things up in a big way:

“Soon, job demand for lots of skill sets will be substantially lower. I don’t think people have that in their mental model.”

“In the past, laborers went off and did other jobs, but now there will be a lot of angst about the fact that AI is targeting white-collar work.”

“The job disruption from AI will be massive, and we need to prepare for it.”

Think you’re safe from the job-killing effects of AI?

Think again.

BIG CHANGES are coming to the job market that people and governments aren’t prepared for.

I’m not here to scare you; I am here to jolt you out of your comfort zone.

The job market is in for some serious shaking and baking, and unfortunately, it seems like nobody’s got the right recipe to handle it.

Open Your Eyes and You Will See
“If you are depressed you are living in the past.
If you are anxious, you are living in the future.
If you are at peace you are living in the present.”
― Lao Tzu

Imagine waking up one day and realizing that the job you’ve held for years is no longer needed by the company.

Not because you screwed up, but simply because your company found a better alternative (AI) and it is no longer a job that only you can do.

You have been working at the same company for over a decade, and suddenly, you are told that your services are no longer needed.

Won’t you feel lost, confused, and worried about how you will support yourself and your family?

It’s a scary thought, but the truth is, it’s already happening in many industries.

We’ve already seen the merciless termination of thousands of employees at tech giants like Google, Microsoft, Amazon, and Meta, and that’s before AI even began flexing its muscles.

It’s only a matter of time before the job market starts feeling the full impact of this unstoppable force.

Sure, some workers may adapt, but where will the rest of the workforce fit when the need for labor itself decreases?

AI is inevitably going to reduce the demand for jobs, particularly those on the lower end of the skills spectrum.

Of course, companies will get the benefit of cost-cutting and spurring innovation.

But that’s likely to come at a cost — joblessness and economic inequality.

Our ever-changing world demands a moment of pause, a chance to contemplate what the future holds.

For it is in this stillness that we may gain a deep understanding of the challenges that lie ahead, and thus prepare ourselves with the necessary tools to navigate them successfully.

The industrial revolution was fueled by the invention of machines. It enabled companies to increase productivity and reduce costs.

The whole education system was designed to serve the needs of the industrial revolution.

It trained people to become cogs in a machine: perform repetitive tasks without questioning the status quo.

The focus was on efficiency and standardization, rather than creativity and individuality.

Companies relied on humans as a form of labor only because human labor was cheap (and reliable).

In the past, a single machine replaced the work of a hundred men, and all it needed was one operator.

The game we’ve been playing for years, well, it’s not the same anymore.

The future is here, and it’s not pretty.

In the coming age, one person will command an army of software agents.

They will build things at a breakneck speed, replacing tens or even hundreds of operators in the blink of an eye.

It’s a brave new world where the traditional constraints of human labor are no longer a limiting factor.
The repercussions of that will soon be felt in all sectors, and tech won’t be an exception.

The software industry, born from the industrial revolution, has undergone two productivity revolutions:
The creation of higher-level programming languages and the ascent of open source.

Sunday, September 03, 2023

DoD Fitna Scrutinize You To Protect You In Ways You Didn't Even Know You Need!

CTH  | The US Special Operations Command (USSOCOM) has contracted New York-based Accrete AI to deploy software that detects “real time” disinformation threats on social media.

The company’s Argus anomaly detection AI software analyzes social media data, accurately capturing “emerging narratives” and generating intelligence reports for military forces to speedily neutralize disinformation threats.

“Synthetic media, including AI-generated viral narratives, deep fakes, and other harmful social media-based applications of AI, pose a serious threat to US national security and civil society,” Accrete founder and CEO Prashant Bhuyan said.

“Social media is widely recognized as an unregulated environment where adversaries routinely exploit reasoning vulnerabilities and manipulate behavior through the intentional spread of disinformation.

“USSOCOM is at the tip of the spear in recognizing the critical need to identify and analytically predict social media narratives at an embryonic stage before those narratives evolve and gain traction. Accrete is proud to support USSOCOM’s mission.”

But wait… It gets worse!

[PRIVATE SECTOR VERSION] – The company also revealed that it will launch an enterprise version of Argus Social for disinformation detection later this year.

The AI software will provide protection for “urgent customer pain points” against AI-generated synthetic media, such as viral disinformation and deep fakes.

Providing this protection requires AI that can automatically “learn” what is most important to an enterprise and predict the likely social media narratives that will emerge before they influence behavior. (read more)

Now, take a deep breath…. Let me explain.

The goal is the “PRIVATE SECTOR VERSION.” USSOCOM is the funding mechanism for deployment, because the system itself is too costly for a private-sector launch. The Defense Dept budget is used to contract an artificial intelligence system, the Argus anomaly detection AI, to monitor social media under the auspices of national security.

Once the DoD-funded system is created, the “Argus detection protocol” – the name given to the AI monitoring and control system – will then be made available to the private sector. “Enterprise Argus” is then the commercial product, funded by the DoD, which the U.S.-based tech sector can deploy.

The DoD cannot independently contract for the launch of an operation against a U.S. internet network because of legal limits such as the Posse Comitatus Act, which restricts the use of federal military personnel to enforce domestic policies within the United States. However, the DoD can fund the creation of the system under the auspices of national defense, and then allow the private sector to launch it for the same intents and purposes. See how that works?

RESOURCES:

Using AI for Content Moderation

Facebook / META / Tech joining with DHS

Zoom will allow Content Scraping by AI 

AI going into The Cloud

U.S. Govt Going into The Cloud With AI

Pentagon activates 175 Million IP’s 👀**ahem**

Big Names to Attend Political AI Forum

Thursday, April 20, 2023

ChatGPT Got Its Wolfram Superpowers

stephenwolfram  |  Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language as well—to give it what we might think of as “computational superpowers”. It’s still very early days for all of this, but it’s already very impressive—and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be.

Back in January, I made the point that, as an LLM neural net, ChatGPT—for all its remarkable prowess in textually generating material “like” what it’s read from the web, etc.—can’t itself be expected to do actual nontrivial computations, or to systematically produce correct (rather than just “looks roughly right”) data, etc. But when it’s connected to the Wolfram plugin it can do these things. So here’s my (very simple) first example from January, but now done by ChatGPT with “Wolfram superpowers” installed:

How far is it from Tokyo to Chicago?

It’s a correct result (which in January it wasn’t)—found by actual computation. And here’s a bonus: immediate visualization:

Show the path

How did this work? Under the hood, ChatGPT is formulating a query for Wolfram|Alpha—then sending it to Wolfram|Alpha for computation, and then “deciding what to say” based on reading the results it got back. You can see this back and forth by clicking the “Used Wolfram” box (and by looking at this you can check that ChatGPT didn’t “make anything up”):

Used Wolfram

There are lots of nontrivial things going on here, on both the ChatGPT and Wolfram|Alpha sides. But the upshot is a good, correct result, knitted into a nice, flowing piece of text.
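For readers curious about the mechanics of that back-and-forth, here is a rough sketch of the loop Wolfram describes: the model formulates a query, the tool computes, and the result is woven back into the reply. The `chat_model` and `wolfram_alpha_query` functions below are hypothetical stand-ins, not the actual plugin API.

```python
def wolfram_alpha_query(query: str) -> str:
    """Hypothetical stand-in for a call to Wolfram|Alpha: send a query, get results back."""
    raise NotImplementedError("illustrative stub")

def chat_model(messages: list[dict]) -> dict:
    """Hypothetical stand-in for the LLM. Returns either a final answer
    {'content': ...} or a tool request {'tool': 'wolfram', 'query': ...}."""
    raise NotImplementedError("illustrative stub")

def answer_with_wolfram(user_question: str) -> str:
    """Sketch of the round trip: the model may ask the tool for a computation,
    then 'decides what to say' based on the result it gets back."""
    messages = [{"role": "user", "content": user_question}]
    while True:
        reply = chat_model(messages)
        if "tool" in reply:                      # model formulated a Wolfram query
            result = wolfram_alpha_query(reply["query"])
            messages.append({"role": "tool", "content": result})
        else:                                    # model weaves the result into prose
            return reply["content"]
```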

Let’s try another example, also from what I wrote in January:

What is the integral?

A fine result, worthy of our technology. And again, we can get a bonus:

Plot that

In January, I noted that ChatGPT ended up just “making up” plausible (but wrong) data when given this prompt:

Tell me about livestock populations

But now it calls the Wolfram plugin and gets a good, authoritative answer. And, as a bonus, we can also make a visualization:

Make a bar chart

Another example from back in January that now comes out correctly is:

What planetary moons are larger than Mercury?

If you actually try these examples, don’t be surprised if they work differently (sometimes better, sometimes worse) from what I’m showing here. Since ChatGPT uses randomness in generating its responses, different things can happen even when you ask it the exact same question (even in a fresh session). It feels “very human”. But different from the solid “right-answer-and-it-doesn’t-change-if-you-ask-it-again” experience that one gets in Wolfram|Alpha and Wolfram Language.
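The variability Wolfram mentions comes from sampling the model's next-token distribution rather than always taking the single most likely token. A minimal sketch of temperature sampling, using toy probabilities rather than real model output:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a token index from a softmax over logits.
    Temperature near 0 approaches greedy (deterministic) decoding;
    higher temperatures make repeated runs diverge more."""
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

toy_logits = np.array([2.0, 1.5, 0.3])  # toy scores for three candidate tokens
print([sample_next_token(toy_logits, temperature=0.8) for _ in range(5)])   # varies run to run
print([sample_next_token(toy_logits, temperature=0.01) for _ in range(5)])  # almost always token 0
```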

Here’s an example where we saw ChatGPT (rather impressively) “having a conversation” with the Wolfram plugin, after at first finding out that it got the “wrong Mercury”:

How big is Mercury?

One particularly significant thing here is that ChatGPT isn’t just using us to do a “dead-end” operation like show the content of a webpage. Rather, we’re acting much more like a true “brain implant” for ChatGPT—where it asks us things whenever it needs to, and we give responses that it can weave back into whatever it’s doing. It’s rather impressive to see in action. And—although there’s definitely much more polishing to be done—what’s already there goes a long way towards (among other things) giving ChatGPT the ability to deliver accurate, curated knowledge and data—as well as correct, nontrivial computations.

But there’s more too. We already saw examples where we were able to provide custom-created visualizations to ChatGPT. And with our computation capabilities we’re routinely able to make “truly original” content—computations that have simply never been done before. And there’s something else: while “pure ChatGPT” is restricted to things it “learned during its training”, by calling us it can get up-to-the-moment data.

 

ChatGPT-4 And The Future Of AI

wired  |  The stunning capabilities of ChatGPT, the chatbot from startup OpenAI, has triggered a surge of new interest and investment in artificial intelligence. But late last week, OpenAI’s CEO warned that the research strategy that birthed the bot is played out. It's unclear exactly where future advances will come from.

OpenAI has delivered a series of impressive advances in AI that works with language in recent years by taking existing machine-learning algorithms and scaling them up to previously unimagined size. GPT-4, the latest of those projects, was likely trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million.

But the company’s CEO, Sam Altman, says further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”

Altman’s declaration suggests an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT in November, Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with using the new breed of chatbot to help with work or personal tasks.

Meanwhile, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI, are throwing enormous resources into building ever larger algorithms in an effort to catch up with OpenAI’s technology. The initial version of ChatGPT was based on a slightly upgraded version of GPT-3, but users can now also access a version powered by the more capable GPT-4.

Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.

Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.


Wednesday, April 19, 2023

Musk Full Interview: An "Unfair Presentation Of Reality"

WaPo  | There are laws that govern how federal law enforcement can seek information from companies such as Twitter, including a mechanism for Twitter’s costs to be reimbursed. Twitter had traditionally provided public information on such requests (in the aggregate, not specifically) but hasn’t updated those metrics since Musk took over.

But notice that this is not how Carlson and Musk frame the conversation.

Once Musk gained control of Twitter, he began providing sympathetic writers with internal documents so they could craft narratives exposing the ways in which pre-Musk Twitter was complicit with the government and the left in nefarious ways. These were the “Twitter Files,” various presentations made on Twitter itself using cherry-picked and often misrepresented information.

One such presentation made an accusation similar to what Carlson was getting at: that the government paid Twitter millions of dollars to censor user information. That was how Musk presented that particular “Twitter File,” the seventh in the series, though this wasn’t true. The right-wing author of the thread focused on government interactions with social media companies in 2020 aimed at uprooting 2016-style misinformation efforts. His thread suggested through an aggregation of carefully presented documents that the government aimed to censor political speech. The author also pointedly noted that Twitter had received more than $3 million in federal funding, hinting that it was pay-to-play for censorship.

The insinuations were quickly debunked. The funding was, in reality, reimbursement to Twitter for compliance with the government’s subpoenaed data requests, as allowed under the law. The government’s effort — as part of the Trump administration, remember — did not obviously extend beyond curtailing foreign interference and other illegalities. But the narrative, boosted by Musk, took hold. And then was presented back to Musk by Carlson.

Notice that Musk doesn’t say that government actors were granted full, unlimited access to Twitter communications in the way that Carlson hints. His responses to Carlson comport fully with a scenario in which the government subpoenas Twitter for information and gets access to it in compliance with federal law. Or perhaps doesn’t! In Twitter’s most recent data on government requests, 3 in 10 were denied.

Maybe Musk didn’t understand that relationship between law enforcement and Twitter before buying the company, as he appears not to have understood other aspects of the company. Perhaps he was one of those rich people who assumed that because DMs were private they were secure — something he, a tech guy, should not have assumed, but who knows.

It’s certainly possible that there was illicit access from some government entity to Twitter’s data stores, perhaps in an ongoing fashion. But Carlson is suggesting (and Musk isn’t rejecting) an apparent symbiosis, in keeping with the misrepresented Twitter Files #7.

It is useful for Musk to have people think that he is creating a new Twitter that’s centered on free speech and protection of individual communications. That was his value proposition in buying it, after all. And it is apparently endlessly useful to Carlson to present a scenario to his viewers in which he and they are the last bastions of American patriotism, fending off government intrusions large and small and the robot-assisted machinations of the political left.

In each case, something is being sold to the audience. In Musk’s case, it’s a safe, bold, right-wing-empathetic Twitter. In Carlson’s, it’s the revelation of a dystopic America that must be tracked through vigilant observation each weekday at 8 p.m.

In neither case is the hype obviously a fair presentation of reality.

Google Says: Wretched Humans "Ready Or Not Here AI Comes"

CNBC  |  Google and Alphabet CEO Sundar Pichai said “every product of every company” will be impacted by the quick development of AI, warning that society needs to prepare for technologies like the ones it’s already launched.

In an interview with CBS’ “60 Minutes” aired on Sunday that struck a concerned tone, interviewer Scott Pelley tried several of Google’s artificial intelligence projects and said he was “speechless” and felt it was “unsettling,” referring to the human-like capabilities of products like Google’s chatbot Bard.

“We need to adapt as a society for it,” Pichai told Pelley, adding that jobs that would be disrupted by AI would include “knowledge workers,” including writers, accountants, architects and, ironically, even software engineers.

“This is going to impact every product across every company,” Pichai said. “For example, you could be a radiologist, if you think about five to 10 years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through, it may say, ‘these are the most serious cases you need to look at first.’”

Pelley viewed other areas with advanced AI products within Google, including DeepMind, where robots were playing soccer, which they learned themselves, as opposed to from humans. Another unit showed robots that recognized items on a countertop and fetched Pelley an apple he asked for.

When warning of AI’s consequences, Pichai said that the scale of the problem of disinformation and fake news and images will be “much bigger,” adding that “it could cause harm.”

Last month, CNBC reported that internally, Pichai told employees that the success of its newly launched Bard program now hinges on public testing, adding that “things will go wrong.”

Google launched its AI chatbot Bard as an experimental product to the public last month. It followed Microsoft’s January announcement that its search engine Bing would include OpenAI’s GPT technology, which garnered international attention after ChatGPT launched in 2022.

However, fears of the consequences of the rapid progress have also reached the public and critics in recent weeks. In March, Elon Musk, Steve Wozniak and dozens of academics called for an immediate pause in training “experiments” connected to large language models that were “more powerful than GPT-4,” OpenAI’s flagship LLM. More than 25,000 people have signed the letter since then.

“Competitive pressure among giants like Google and startups you’ve never heard of is propelling humanity into the future, ready or not,” Pelley commented in the segment.

Google has launched a document outlining “recommendations for regulating AI,” but Pichai said society must quickly adapt with regulation, laws to punish abuse and treaties among nations to make AI safe for the world as well as rules that “Align with human values including morality.”

 

Thursday, April 06, 2023

The Social Cost Of Using AI In Human Conversation

phys.org  |  People have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool, a group of Cornell researchers has found.

Postdoctoral researcher Jess Hohenstein is lead author of "Artificial Intelligence in Communication Impacts Language and Social Relationships," published in Scientific Reports.

Co-authors include Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), and Rene Kizilcec, assistant professor of information science (Cornell Bowers CIS).

Generative AI is poised to impact all aspects of society, communication and work. Every day brings new evidence of the technical capabilities of large language models (LLMs) like ChatGPT and GPT-4, but the social consequences of integrating these technologies into our daily lives are still poorly understood.

AI tools have potential to improve efficiency, but they may have negative social side effects. Hohenstein and colleagues examined how the use of AI in conversations impacts the way that people express themselves and view each other.

"Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension," Jung said. "We do not live and work in isolation, and the systems we use impact our interactions with others."

In addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.

"I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are," Hohenstein said. "This illustrates the persistent overall suspicion that people seem to have around AI."

For their first experiment, co-author Dominic DiFranzo, a former postdoctoral researcher in the Cornell Robots and Groups Lab and now an assistant professor at Lehigh University, developed a smart-reply platform the group called "Moshi" (Japanese for "hello"), patterned after the now-defunct Google "Allo" (French for "hello"), the first smart-reply platform, unveiled in 2016. Smart replies are generated from LLMs to predict plausible next responses in chat-based interactions.
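For a sense of what such a pipeline involves, here is a minimal sketch of LLM-generated smart replies. It is not Moshi's actual implementation; the `llm_complete` function, prompt format, and candidate count are illustrative assumptions.

```python
def llm_complete(prompt: str, n: int = 3) -> list[str]:
    """Hypothetical stand-in for a large language model call that
    returns n short candidate completions."""
    raise NotImplementedError("illustrative stub")

def suggest_smart_replies(conversation: list[str], n_candidates: int = 3) -> list[str]:
    """Predict plausible next responses from recent chat history, as smart-reply
    systems do; the user then picks one, edits it, or ignores the suggestions."""
    history = "\n".join(conversation[-6:])  # illustrative window of recent turns
    prompt = f"{history}\nSuggest {n_candidates} short replies the next speaker might send."
    return llm_complete(prompt, n=n_candidates)
```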

A total of 219 pairs of participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.

The researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).

But participants whom their partners suspected of responding with smart replies were evaluated more negatively than those who were thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.

Tuesday, April 04, 2023

Physics From Computation

00:00:00 Introduction 

00:02:58 Physics from computation 

00:11:30 Generalizing Turing machines  

00:17:34 Dark matter as Indicating "atoms of space"  

00:22:13 Energy as density of space itself  

00:30:30 Entanglement limit of all possible computations  

00:34:53 What persists across the universe are "concepts"  

00:40:09 How does ChatGPT work?  

00:41:41 Irreducible computation, ChatGPT, and AI  

00:49:20 Recovering general relativity from the ruliad (Wolfram Physics Project)  

00:58:38 Coming up: David Chalmers, Ben Goertzel, and more Wolfram

India Beware: ChatGPT Is A Missile Aimed Directly At Low-Cost Software Production

theguardian  | “And so for me,” he concluded, “a computer has always been a bicycle of the mind – something that takes us far beyond our inherent abilities. And I think we’re just at the early stages of this tool – very early stages – and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, [but] that’s nothing to what’s coming in the next 100 years.”

Well, that was 1990 and here we are, three decades later, with a mighty powerful bicycle. Quite how powerful it is becomes clear when one inspects how the technology (not just ChatGPT) tackles particular tasks that humans find difficult.

Writing computer programs, for instance.

Last week, Steve Yegge, a renowned software engineer who – like all uber-geeks – uses the ultra-programmable Emacs text editor, conducted an instructive experiment. He typed the following prompt into ChatGPT: “Write an interactive Emacs Lisp function that pops to a new buffer, prints out the first paragraph of A Tale of Two Cities, and changes all words with ‘i’ in them red. Just print the code without explanation.”

ChatGPT did its stuff and spat out the code. Yegge copied and pasted it into his Emacs session and published a screenshot of the result. “In one shot,” he writes, “ChatGPT has produced completely working code from a sloppy English description! With voice input wired up, I could have written this program by asking my computer to do it. And not only does it work correctly, the code that it wrote is actually pretty decent Emacs Lisp code. It’s not complicated, sure. But it’s good code.”

Ponder the significance of this for a moment, as tech investors such as Paul Kedrosky are already doing. He likens tools such as ChatGPT to “a missile aimed, however unintentionally, directly at software production itself. Sure, chat AIs can perform swimmingly at producing undergraduate essays, or spinning up marketing materials and blog posts (like we need more of either), but such technologies are terrific to the point of dark magic at producing, debugging, and accelerating software production quickly and almost costlessly.”

Since, ultimately, our networked world runs on software, suddenly having tools that can write it – and that could be available to anyone, not just geeks – marks an important moment. Programmers have always seemed like magicians: they can make an inanimate object do something useful. I once wrote that they must sometimes feel like Napoleon – who was able to order legions, at a stroke, to do his bidding. After all, computers – like troops – obey orders. But to become masters of their virtual universe, programmers had to possess arcane knowledge, and learn specialist languages to converse with their electronic servants. For most people, that was a pretty high threshold to cross. ChatGPT and its ilk have just lowered it.

Monday, April 03, 2023

Transformers: Robots In Disguise?

quantamagazine |  Recent investigations like the one Dyer worked on have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors, including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

“We don’t know how to tell in which sort of application is the capability of harm going to arise, either smoothly or unpredictably,” said Deep Ganguli, a computer scientist at the AI startup Anthropic.

The Emergence of Emergence

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

Language models have been around for decades. Until about five years ago, the most powerful were based on what’s called a recurrent neural network. These essentially take a string of text and predict what the next word will be. What makes a model “recurrent” is that it learns from its own output: Its predictions feed back into the network to improve future performance.

In 2017, researchers at Google Brain introduced a new kind of architecture called a transformer. While a recurrent network analyzes a sentence word by word, the transformer processes all the words at the same time. This means transformers can process big bodies of text in parallel.

Transformers enabled a rapid scaling up of the complexity of language models by increasing the number of parameters in the model, as well as other factors. The parameters can be thought of as connections between words, and models improve by adjusting these connections as they churn through text during training. The more parameters in a model, the more accurately it can make connections, and the closer it comes to passably mimicking human language. As expected, a 2020 analysis by OpenAI researchers found that models improve in accuracy and ability as they scale up.
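As a rough back-of-the-envelope check on how parameter counts scale with model width and depth, the common approximation of about 12·d² parameters per transformer layer (attention plus feed-forward, ignoring embeddings) roughly reproduces the widely cited size of GPT-3. The layer count and width below are GPT-3's published configuration; the 12·d² rule is an approximation, not an exact formula.

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count: ~4*d^2 for the attention projections (Q, K, V, output)
    plus ~8*d^2 for the feed-forward block (d -> 4d -> d), per layer."""
    per_layer = 4 * d_model**2 + 8 * d_model**2
    return n_layers * per_layer

# GPT-3's published configuration: 96 layers, model width 12,288
print(approx_transformer_params(96, 12288) / 1e9)  # ~174 billion, close to the cited 175B
```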

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

He wasn’t alone. A raft of researchers, detecting the first hints that LLMs could reach beyond the constraints of their training data, are striving for a better grasp of what emergence looks like and how it happens. The first step was to thoroughly document it.

Transformers: More Than Meets The Eye?

quantamagazine  |  Imagine going to your local hardware store and seeing a new kind of hammer on the shelf. You’ve heard about this hammer: It pounds faster and more accurately than others, and in the last few years it’s rendered many other hammers obsolete, at least for most uses. And there’s more! With a few tweaks — an attachment here, a twist there — the tool changes into a saw that can cut at least as fast and as accurately as any other option out there. In fact, some experts at the frontiers of tool development say this hammer might just herald the convergence of all tools into a single device.

A similar story is playing out among the tools of artificial intelligence. That versatile new hammer is a kind of artificial neural network — a network of nodes that “learn” how to do some task by training on existing data — called a transformer. It was originally designed to handle language, but has recently begun impacting other AI domains.

The transformer first appeared in 2017 in a paper that cryptically declared that “Attention Is All You Need.” In other approaches to AI, the system would first focus on local patches of input data and then build up to the whole. In a language model, for example, nearby words would first get grouped together. The transformer, by contrast, runs processes so that every element in the input data connects, or pays attention, to every other element. Researchers refer to this as “self-attention.” This means that as soon as it starts training, the transformer can see traces of the entire data set.
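A minimal NumPy sketch of the self-attention step described above, in which every position attends to every other position. The tiny random matrices stand in for learned weights; real models add multiple heads, masking, and much more machinery.

```python
import numpy as np

def self_attention(X: np.ndarray, Wq: np.ndarray, Wk: np.ndarray, Wv: np.ndarray) -> np.ndarray:
    """Single-head self-attention: every row (token) of X attends to every other row."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # all-pairs similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the whole sequence
    return weights @ V                                  # each output mixes every input

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                        # 5 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(tokens, Wq, Wk, Wv).shape)         # (5, 8)
```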

Before transformers came along, progress on AI language tasks largely lagged behind developments in other areas. “In this deep learning revolution that happened in the past 10 years or so, natural language processing was sort of a latecomer,” said the computer scientist Anna Rumshisky of the University of Massachusetts, Lowell. “So NLP was, in a sense, behind computer vision. Transformers changed that.”

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. This led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates consistent new text to an unsettling degree.

The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more.

“Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich.

Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence. “I think the transformer is so popular because it implies the potential to become universal,” said the computer scientist Atlas Wang of the University of Texas, Austin. “We have good reason to want to try transformers for the entire spectrum” of AI tasks.

Sunday, April 02, 2023

Unaccountable Algorithmic Tyranny

alt-market |  In this article I want to stress the issue of AI governance and how it might be made to appeal to the masses. In order to achieve the dystopian future the globalists want, they still have to convince a large percentage of the population to applaud it and embrace it.

The comfort of having a system that makes difficult decisions for us is an obvious factor, as mentioned above. But, AI governance is not just about removing choice, it’s also about removing the information we might need to be educated enough to make choices. We saw this recently with the covid pandemic restrictions and the collusion between governments, corporate media and social media. Algorithms were widely used by web media conglomerates from Facebook to YouTube to disrupt the flow of information that might run contrary to the official narrative.

In some cases the censorship targeted people merely asking pertinent questions or fielding alternative theories. In other cases, the censorship outright targeted provably factual data that was contrary to government policies. A multitude of government claims on covid origins, masking, lockdowns and vaccines have been proven false over the past few years, and yet millions of people still blindly believe the original narrative because they were bombarded with it nonstop by the algorithms. They were never exposed to the conflicting information, so they were never able to come to their own conclusions.

Luckily, unlike bots, human intelligence is filled with anomalies – People who act on intuition and skepticism in order to question preconceived or fabricated assertions. The lack of contrary information immediately causes suspicion for many, and this is what authoritarian governments often refuse to grasp.

The great promise globalists hold up in the name of AI is the idea of a purely objective state; a social and governmental system without biases and without emotional content. It’s the notion that society can be run by machine thinking in order to “save human beings from themselves” and their own frailties. It is a false promise, because there will never be such a thing as objective AI, nor any AI that understands the complexities of human psychological development.

Furthermore, the globalist dream of AI is driven not by adventure, but by fear. It’s about the fear of responsibility, the fear of merit, the fear of inferiority, the fear of struggle and the fear of freedom. The greatest accomplishments of mankind are admirable because they are achieved with emotional content, not in spite of it. It is that content that inspires us to delve into the unknown and overcome our fears. AI governance and an AI integrated society would be nothing more than a desperate action to deny the necessity of struggle and the will to overcome.

Globalists are more than happy to offer a way out of the struggle, and they will do it with AI as the face of their benevolence. All you will have to do is trade your freedoms and perhaps your soul in exchange for never having to face the sheer terror of your own quiet thoughts. Some people, sadly, believe this is a fair trade.

The elites will present AI as the great adjudicator, the pure and logical intercessor of the correct path; not just for nations and for populations at large but for each individual life. With the algorithm falsely accepted as infallible and purely unbiased, the elites can then rule the world through their faceless creation without any oversight – For they can then claim that it’s not them making decisions, it’s the AI.  How does one question or even punish an AI for being wrong, or causing disaster? And, if the AI happens to make all its decisions in favor of the globalist agenda, well, that will be treated as merely coincidental.

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...