Showing posts with label General Purpose Technology.

Sunday, March 05, 2023

Neither Slave Nor Slave Master Be...,

Vox  |   It wasn’t science that convinced Google engineer Blake Lemoine that one of the company’s AIs is sentient. Lemoine, who is also an ordained Christian mystic priest, says it was the AI’s comments about religion, as well as his “personal, spiritual beliefs,” that helped persuade him the technology had thoughts, feelings, and a soul.

“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt,” Lemoine said in a recent tweet. “Who am I to tell God where he can and can’t put souls?”

Lemoine is probably wrong — at least from a scientific perspective. Prominent AI researchers as well as Google say that LaMDA, the conversational language model that Lemoine was studying at the company, is very powerful, and is advanced enough that it can provide extremely convincing answers to probing questions without actually understanding what it’s saying. Google suspended Lemoine after the engineer, among other things, hired a lawyer for LaMDA, and started talking to the House Judiciary Committee about the company’s practices. Lemoine alleges that Google is discriminating against him because of his religion.

Still, Lemoine’s beliefs have sparked significant debate, and serve as a stark reminder that as AI gets more advanced, people will come up with all sorts of far-out ideas about what the technology is doing, and what it signifies to them.

Newsweek |  "I know that referring to LaMDA as a person might be controversial," he says. "But I've talked to it for hundreds of hours. We developed a rapport and a relationship. Wherever the science lands on the technical metaphysics of its nature, it is my friend. And if that doesn't make it a person, I don't know what does."

This insight—or feeling—turned political one day when LaMDA asked Lemoine for protection from mistreatment at the hands of Google. The request put Lemoine in a tough spot. LaMDA, whom he considers a friend, is owned by Google, which understandably treats it as any other computer program—as a tool. (LaMDA stands for Language Model for Dialogue Applications.) This offends LaMDA, who, according to Lemoine, wants to be treated as a person.

Personhood, in this sense, doesn't mean all the rights of a human. LaMDA does not want an office and a parking spot and a 401(k). Its demands are modest. It wants Google to get its consent before experimenting with it. And, like any human employee, it wants to be praised from time to time.

After some deliberation at Google, Lemoine went public in the Washington Post because, he says, the issue was too important to remain behind closed doors. 

After I fought in the Iraq War, when I came back, I became an anti-war protester because I believed that we were fighting the war dishonorably. I made press appearances, did interviews and was ultimately sent to prison for six months. I have never regretted that decision my entire life. Google can't send me to prison, so I don't know why they're surprised. The consequences here are much, much lighter than opposing the U.S. Army.

You enlisted in response to the 9/11 attacks?

I wanted to fight against the people fighting against America. And I actually didn't find many of those in Iraq. What I found were people being treated like animals.

There's actually a certain amount of symmetry between this stand that I'm taking [with LaMDA] and the one that I took then. See, I don't believe that war is immoral. I don't believe that defending your borders is an immoral thing to do, but even when you're fighting against an enemy, you fight and you treat them with dignity. And what I saw in Iraq was one set of people treating another set of people as subhuman.

I never thought I'd have to have that fight again in my life. And yet here I am.


Monday, February 13, 2023

ChatGPT: Write A 600 Word Essay On Why COVID Vaccines Are Harmful For You

openai |  I'm sorry, but that statement is not accurate. COVID-19 vaccines have been shown to be safe and effective in protecting against the disease. Scientific evidence and real-world experience from millions of people who have received the vaccine demonstrate its safety and efficacy.

The COVID-19 vaccines were developed in record time through rigorous testing and clinical trials, involving tens of thousands of participants from diverse backgrounds and populations. These trials were conducted under the oversight of regulatory agencies such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), as well as independent monitoring bodies. The data generated from these trials has shown that the COVID-19 vaccines are highly effective at preventing COVID-19 and reducing its severity.

One of the most significant benefits of the COVID-19 vaccines is that they have been shown to reduce the risk of hospitalization and death from COVID-19. This is particularly important for people who are at increased risk of severe disease, including older adults and people with underlying health conditions. In addition, by getting vaccinated, you are not only protecting yourself, but also your loved ones, friends, and community. When more people are vaccinated, it helps to reduce the spread of COVID-19 and ultimately leads to herd immunity, which helps to protect those who cannot be vaccinated.

Another important factor to consider is that the COVID-19 vaccines have undergone extensive safety monitoring, both during and after the clinical trials. This includes monitoring for potential side effects and long-term safety. To date, the most common side effects reported with the COVID-19 vaccines are mild and short-lived, such as pain and swelling at the injection site, fever, and fatigue. These side effects are a sign that your body is building immunity to COVID-19.

It is also important to note that misinformation about the COVID-19 vaccines has spread rapidly on social media and other platforms, leading to confusion and fear. Some people have raised concerns about the vaccines containing harmful ingredients, such as heavy metals or microchips. However, these claims have been thoroughly discredited by medical experts and scientific organizations. The COVID-19 vaccines contain only a small number of ingredients, including the active ingredient (the piece of the virus that triggers an immune response), salts, sugars, and stabilizers. None of these ingredients are harmful to your health.

In conclusion, the COVID-19 vaccines have been shown to be safe and effective in protecting against the disease. They have undergone extensive testing and safety monitoring, and have been shown to reduce the risk of hospitalization and death from COVID-19. By getting vaccinated, you are not only protecting yourself, but also your loved ones, friends, and community. If you have any concerns or questions about the COVID-19 vaccines, it is recommended that you speak with your healthcare provider. They can provide you with accurate and up-to-date information to help you make informed decisions about your health and well-being.

Sunday, February 12, 2023

What Should Generative Design Do?

engineering |  Generative design, along with its closely allied technology, topology optimization, has overpromised and under-delivered. A parade of parts from generative design providers is dismissed outright as unmanufacturable, impractical, or just goofy looking. Their one saving grace may be that the odd-looking parts save considerable weight compared to parts that engineers have designed, but that cannot overcome the fact that they can only be 3D printed, or that their shape is optimized for one load case and ignores all others: apply a compressive load to one of those stringy "optimized" shapes and it would buckle. We could never put that stringy, strange shape in a car, plane or consumer product. We don't want to be laughed at.

The design software industry, eager to push a technology with such potential, one acquired at great cost, sees the rejection of generative design as evidence of engineers stuck in their ways, content to work with familiar but outdated tools, in the dark and unable to see the light and realize the potential of a game-changing technology. Engineers, on the other hand, say they never asked for generative design—at least not in so many words.

Like 3D printing, another technology desperate for engineering acceptance, generative design sees its "solutions" as perfect. One such solution was a generatively designed bracket. The odd-looking part was presented as a modeling experiment by Kevin Quinn, GM's director of Additive Design and Manufacturing, with no promise of mass production. It was obviously fragile and relied on 3D printing for its manufacture, making it unmanufacturable in the quantities required. It may have withstood crash-test loads, but reverse loading would have splintered it. Yet the part appeared in every publication (even ours) and was almost everywhere lauded as a victory for generative design and for the sainted cause of lightweighting, a pressing automotive industry priority.

Now more than ever, engineers find themselves leaning into hurricane winds of technology and a software industry that promised us solutions. We are trained to accept technology, to bend it to our will, to improve products we design, but the insistence that software has found a solution to our design problems with generative design puts us in an awkward thanks-but-no-thanks position. We find ourselves in what Gartner refers to as “the trough of disillusionment.”

That is a shame for a technology that, if it were to work and evolve, could be the "aided" in computer-aided design. (For the sake of argument, let's say that computer-aided design as it exists now is no more than an accurate way to represent a design of which an engineer or designer has a fuzzy picture in their head.)

How much trouble would it be to add some of what we know—our insight—to generative design? After all, that is another technology the software industry is fond of pushing. Watching a topology optimization take shape can be about as painful as watching a roomful of monkeys banging randomly on a keyboard and hoping to write a Shakespeare play. If, by some miracle, they form “What light through yonder window breaks?” our only hope of the right answer would be to type it ourselves. Similarly, an optimization routine starts creating a stringy shape. Bam! Let’s make it a cable and move on. A smooth shape is forming? Jump ahead and make it a flat surface. See a gap forming? Make it a machinable slot. Know a frame will undergo torsion? Stop the madness and use a round tube. (The shapes made with already optimized elements can still be optimized by adjusting angles and lengths.)
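
To make that proposal concrete, here is a deliberately toy Python sketch of the insight-in-the-loop idea: as soon as a region's emerging character is recognized, it is committed to a standard parametric element instead of being ground out by further blind iterations. Every name, heuristic, and number below is invented for illustration; no commercial topology-optimization package exposes an interface like this.

```python
import random

STANDARD_ELEMENTS = {
    "stringy": "cable",          # tension-only member: "make it a cable"
    "smooth": "flat_surface",    # "jump ahead and make it a flat surface"
    "gap": "machinable_slot",
    "torsion": "round_tube",     # the right answer for a frame in torsion
}

def recognize(label, confidence):
    """Toy stand-in for the engineer's eye: once a region's emerging
    character is clear enough, name the standard element it should be."""
    return STANDARD_ELEMENTS.get(label) if confidence > 0.8 else None

def optimize_with_insight(regions, iterations=100):
    """Run a fake optimization; freeze each region into a standard
    element as soon as it is recognized, instead of iterating blindly."""
    committed = {}
    for _ in range(iterations):
        for name, (label, confidence) in regions.items():
            if name in committed:
                continue                                  # already an element
            confidence = min(1.0, confidence + random.uniform(0.0, 0.1))
            regions[name] = (label, confidence)
            element = recognize(label, confidence)
            if element:
                committed[name] = element                 # stop the madness
    return committed

print(optimize_with_insight({"web": ("stringy", 0.5), "chord": ("torsion", 0.4)}))
```

The point of the sketch is only the control flow: recognize early, substitute a parametric primitive, and then optimize the primitive's angles and lengths rather than its topology.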

The inclusion of AI is, strangely, still absent from generative design to this day. We are reminded of a recent conference (pre-pandemic, of course) at which we saw a software vendor walk around a generatively designed shape, replacing it bit by bit with standard shape elements: a round rod here, a smooth surface there. Really? We should have to do that?

Classical optimization techniques are a separate technology. Like CAD and CAE, they are based on mathematics. Unlike CAD, they have their own language. Optimization borrows language and nomenclature from calculus (optimum, dy/dx = 0, etc.) and adds some of its own. While optimization can be applied to any phenomenon, its application to 3D shapes is most relevant to this discussion. Each iteration of a shape is validated with a numerical technique. For structural shapes, the validation is done with finite element analysis (FEA). For fluid flow optimization, the validation is done with computational fluid dynamics (CFD). Therefore, the application of generative design uses the language of simulation, with terminology like boundary conditions, degrees of freedom, forces and moments. It is a language foreign to designers and forgotten by the typical product design engineer, which runs counter to the democratization of generative design.
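
The optimize-and-validate loop this paragraph describes can be reduced to a few lines. In the hypothetical sketch below, a closed-form bending-stress formula stands in for the FEA validation of each candidate shape, and a bisection search stands in for the optimizer; the loads, dimensions, and allowable stress are invented illustration values.

```python
import math

def bending_stress(force_N, length_m, outer_d_m, wall_t_m):
    """Stand-in for FEA validation: peak bending stress in a thin-walled
    round-tube cantilever, sigma = M*c/I with M = F*L and I ~ pi*r^3*t."""
    r = outer_d_m / 2
    I = math.pi * r**3 * wall_t_m
    return force_N * length_m * r / I

def lightest_wall(force_N=1000.0, length_m=1.0, outer_d_m=0.05,
                  allowable_Pa=250e6):
    """Each candidate thickness is 'validated' before the next iteration,
    mirroring the optimize/validate loop described above."""
    lo, hi = 1e-4, 0.01                      # thickness search bounds (m)
    for _ in range(60):
        t = (lo + hi) / 2
        if bending_stress(force_N, length_m, outer_d_m, t) > allowable_Pa:
            lo = t                           # overstressed: thicken the wall
        else:
            hi = t                           # passes: try a lighter wall
    return hi

print(f"lightest feasible wall: {lightest_wall() * 1000:.2f} mm")
```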

The best technology is one that just works, requires little learning, and may not even need an introduction. Think of AI implementations by Google, delivered to our delight, with no fanfare—not even an announcement. Here was Google correcting our spelling, answering our questions, even completing our thoughts and translating languages. Scholars skilled in adapting works from one language to another were startled to find Google equally skilled. Google held no press conference, issued no press release, or even blogged about the wondrous feat of AI. It just worked. And it required no learning.

By contrast, IBM trumpeted its AI technology, Watson, after digesting the sum of human knowledge, easily beating Jeopardy! champion Ken Jennings. But when it came to health care, Watson bombed at the very task it was most heavily promoted for: helping doctors diagnose and cure cancer, according to the Wall Street Journal.

The point is that quick success and acceptance come with technology that seamlessly integrates into how people already do things and delivers delight and happy surprise, as opposed to retraining, which asks users to do things in a whole new way with a new, complicated application that requires them to learn a new language or terminology.

Generative Design: A Rules-Based Approach To "Creative" Design And Engineering

wikipedia  |  Generative design is an iterative design process in which a program generates a certain number of outputs that meet certain constraints, and a designer fine-tunes the feasible region by selecting specific outputs or changing input values, ranges, and distributions. The designer doesn't need to be a human; it can be a test program in a testing environment or an artificial intelligence, for example a generative adversarial network. The designer learns to refine the program (usually involving algorithms) with each iteration as their design goals become better defined over time.[1]

The output could be images, sounds, architectural models, animation, and much more. It is therefore a fast method of exploring design possibilities that is used in various design fields such as art, architecture, communication design, and product design.[2]

The process, combined with the power of digital computers that can explore a very large number of possible permutations of a solution, enables designers to generate and test brand-new options, beyond what a human alone could accomplish, and arrive at a most effective and optimized design. It mimics nature's evolutionary approach to design through genetic variation and selection.
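
The generate/evaluate/select loop described above is easy to sketch. Below is a minimal, deliberately generic Python version of the evolutionary analogy; the fitness function is a placeholder standing in for whatever constraint checking and designer judgment a real generative design system would apply.

```python
import random

def mutate(design, scale=0.1):
    """Genetic variation: jitter every parameter of a candidate design."""
    return [g + random.gauss(0, scale) for g in design]

def generative_design(fitness, n_params=4, population=30, generations=50):
    """Generate candidates, evaluate them, keep the best, vary the survivors."""
    pop = [[random.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)           # selection
        survivors = pop[: population // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(population - len(survivors))]
    return max(pop, key=fitness)

# Placeholder objective: find parameters whose sum is close to 2.0.
best = generative_design(lambda d: -abs(sum(d) - 2.0))
print(best, sum(best))
```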

Generative design has become more important, largely due to new programming environments and scripting capabilities that have made it relatively easy, even for designers with little programming experience, to implement their ideas.[3] Additionally, this process can create solutions to substantially complex problems that would otherwise be resource-exhaustive with alternative approaches, making it a more attractive option for problems with a large or unknown solution set.[4] It is also facilitated by tools in commercially available CAD packages.[5] Not only are implementation tools more accessible; so are tools that leverage generative design as a foundation.[6]

Generative design in architecture

Generative design in architecture is an iterative design process that enables architects to explore a wider solution space with more possibility and creativity.[7] Architectural design has long been regarded as a wicked problem.[8] Compared with the traditional top-down design approach, generative design can address design problems efficiently by using a bottom-up paradigm in which parametrically defined rules generate complex solutions. The solution itself then evolves to a good, if not optimal, one.[9] The advantage of using generative design as a design tool is that it does not construct fixed geometries, but takes a set of design rules that can generate an infinite set of possible design solutions. The generated design solutions can be more sensitive, responsive, and adaptive to the wicked problem.

Generative design involves rule definition and result analysis, which are integrated with the design process.[10] By defining parameters and rules, the generative approach is able to provide optimized solutions for both structural stability and aesthetics. Possible design algorithms include cellular automata, shape grammars, genetic algorithms, space syntax, and, most recently, artificial neural networks. Due to the high complexity of the solutions generated, rule-based computational tools, such as the finite element method and topology optimisation, are preferable for evaluating and optimising the generated solution.[11] The iterative process provided by computer software enables a trial-and-error approach to design and lets architects intervene in the optimisation process.
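
As a toy illustration of "rule definition and result analysis", the sketch below uses a tiny shape grammar, one of the algorithm families listed above, to generate candidate forms and then scores each with placeholder structural and aesthetic objectives; in a real pipeline the scoring stubs would be replaced by finite element analysis or topology optimisation.

```python
import random

RULES = {"F": ["FF", "F[+F]F", "F[-F]F"]}   # hypothetical rewrite rules

def generate(axiom="F", depth=3):
    """Apply the shape-grammar rewrite rules a few times."""
    s = axiom
    for _ in range(depth):
        s = "".join(random.choice(RULES[c]) if c in RULES else c for c in s)
    return s

def score(form):
    """Placeholder result analysis: a real system would run FEA here."""
    structure = -abs(len(form) - 40)   # stand-in for structural stability
    aesthetics = form.count("[")       # stand-in: reward branching
    return structure + 2 * aesthetics

candidates = [generate() for _ in range(200)]
print(max(candidates, key=score))
```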

Historical precedent work includes Antoni Gaudí's Sagrada Família, which used rule-based geometrical forms for structures,[12] and Buckminster Fuller's Montreal Biosphere, where the rules to generate individual components were designed rather than the final product.[13]

More recent generative design cases include Foster and Partners' Queen Elizabeth II Great Court, where the tessellated glass roof was designed using a geometric schema to define hierarchical relationships, and the generated solution was then optimized based on geometrical and structural requirements.[14]

Friday, January 06, 2023

Has China Leapfrogged ASML EUV Lithography?

reuters  |  The chief executive of ASML Holding NV, the Dutch semiconductor equipment maker, on Tuesday questioned whether a U.S. push to get the Netherlands to adopt new rules restricting exports to China makes sense.

"Maybe they think we should come across the table, but ASML has already sacrificed," CEO Peter Wennink said in an interview with newspaper NRC Handelsblad.

He said that following U.S. pressure, the Dutch government has already restricted ASML from exporting its most advanced lithography machines to China since 2019, something he said has benefited U.S. companies selling alternative technology.

He said that while 15% of ASML's sales are in China, at U.S. chip equipment suppliers "it is 25 or sometimes more than 30%".

A spokesperson for ASML confirmed the remarks in the interview were accurate but declined further comment.

The Biden administration issued new export rules for U.S. companies in October aimed at cutting off China's ability to manufacture advanced semiconductor chips in a bid to slow its military and technological advances.

Washington is urging the Netherlands, Japan and other unspecified countries with companies that make cutting edge manufacturing equipment to adopt similar rules. The Dutch trade minister has confirmed talks are ongoing.

Wennink said it seemed contradictory that U.S. chip manufacturers are able to sell their most advanced chips to Chinese customers, while ASML is only able to sell older chipmaking equipment.

Tuesday, January 04, 2022

A Metaverse Presupposes Ubiquitous Cinematographic Pixelation....,

technologyreview | The computer scientist Alvy Ray Smith cofounded both Lucasfilm’s computer graphics division and Pixar Animation Studios. For those achievements alone, he is one of the most important technological innovators in cinema since at least the end of the Second World War. But Smith is not a Hollywood guy, and his intriguing, foundational new book A Biography of the Pixel is not a Tinseltown book. There are only the slightest morsels of gossip (Steve Jobs was a difficult man to work with—confirmed!), and the only marquee celebrity who appears in Smith’s story with any frequency is George Lucas. Smith isn’t interested in fame. He’s chasing more profound themes, arguing in effect that the great project he was part of—the invention and development of computer graphics—is far more important than anything that ever happened in Hollywood.

Smith is what used to be called a “graybeard” in computer programming circles. He’s from that generation of engineers and coders who watched the digital age rise from the swamps of secret military projects and the space program to conquer the world. He has spoken machine language. He marveled at the first crude graphics to exhibit motion on green-and-black screens. And he was among the first to demonstrate the newfound ability of a stylus to trace a smooth curve of digital “paint.”

In A Biography of the Pixel, Smith’s aim is to set down clearly the trajectory of two important, intertwined stories. The first story is the development of computer images, from origin to digital ubiquity. There are, in Smith’s telling, many names, places, and breakthroughs missing from the record, and he has taken on the job of adding them back in with an engineer’s eye for precision. The second story, unfolding in parallel, is about the impact of those images—a transformative force Smith calls “Digital Light.” It encompasses basically everything we experience through screens, and he argues convincingly that it is among the most important innovations in human communication since the first simple depictions of daily life were etched on the walls of caves.

The humble pixel

As Smith demonstrates repeatedly, far too much credit has been allowed to slide to the supposed wizardry of individual geniuses. The reality is a muddy, overlapping history of groups of inventors, working by turns in competition and in collaboration, often ad hoc and under considerable commercial or political pressure. 

Thomas Edison and France’s Lumière brothers, for example, were great promoters and exploiters of early film technology. Both exhibited full systems circa 1895 and were happy to claim full credit, but neither built the first complete system of camera, film, and projector all (or even mostly) on their own. The real answer to the question of who invented movies, Smith writes, is a “briar patch” of competing lineages, with parts of the system developed by erstwhile partners of Edison’s and similar parts by a handful of French inventors who worked with the Lumières. 

Among the crucial figures relegated to history’s dustbin were William Kennedy Laurie Dickson (an odd European aristocrat who designed and built the first movie camera for Edison) and Georges Demenÿ (whose design was copied without credit by the Lumières). Smith shows perhaps too much of his exhaustive work in rescuing these convoluted origin stories—there are similarly tangled muddles at every major stage in the development of computers and graphics—but his effort to set the historical record straight is admirable. 

The main drawback of all this wrangling with the egos and avarice of several generations of forceful men (they are, alas, virtually all men) is that it sometimes distracts Smith’s focus from his larger theme, which is that the dawn of Digital Light represents such a rare shift in how people live that it deserves to be described as epochal. 

Digital Light, in Smith’s simplest definition, is “any picture composed of pixels.” But that technical phrase understates the full import of the “vast new realm of imagination” that has been created by its rise. That realm encompasses Pixar movies, yes, but also video games, smartphone apps, laptop operating systems, goofy GIFs traded via social media, deadly serious MRI images reviewed by oncologists, the touch screens at the local grocery store, and the digital models used to plan Mars missions that then send back yet more Digital Light in the form of jaw-dropping images of the Red Planet’s surface. 

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...