Sunday, February 12, 2023

What Should Generative Design Do?

engineering |  Generative design, along with its closely allied technology, topology optimization, has overpromised and under-delivered. A parade of parts from generative design providers is dismissed outright as unmanufacturable, impractical—or just goofy looking. Their one saving grace may be that the odd-looking parts save considerable weight compared to parts that engineers have designed, but that cannot overcome the fact that they can often only be 3D printed, or that their shape is optimized for a single load case—and ignores all others. Many a stringy “optimized” shape needs only a compressive load to buckle. We could never put that stringy, strange shape in a car, plane or consumer product. We don’t want to be laughed at.

The design software industry, eager to push technology with such potential, acquired at great cost, sees the rejection of generative design as evidence of engineers who are stuck in their ways, content to work with familiar but outdated tools, in the dark and unable to see the light and realize the potential of a game-changing technology. Engineers, on the other hand, say they never asked for generative design—at least not in so many words. 

Like 3D printing, another technology desperate for engineering acceptance, generative design sees its “solutions” as perfect. One such solution was a generatively designed bracket. The odd-looking part was discussed as a modeling experiment by Kevin Quinn, GM’s director of Additive Design and Manufacturing, but with no promise of mass production. It was obviously fragile and relied on 3D printing for its manufacture, making it unmanufacturable at the quantity required. It may have withstood crash test loads, but reverse loading would have splintered it. Yet the part appeared in every publication (even ours) and was almost everywhere lauded as a victory for generative design and for lightweighting, a pressing automotive industry priority.

Now more than ever, engineers find themselves leaning into hurricane winds of technology and a software industry that promised us solutions. We are trained to accept technology, to bend it to our will, to improve products we design, but the insistence that software has found a solution to our design problems with generative design puts us in an awkward thanks-but-no-thanks position. We find ourselves in what Gartner refers to as “the trough of disillusionment.”

That is a shame for a technology that, if it were to work and evolve, could be the “aided” in computer-aided design. (For the sake of argument, let’s say that computer-aided design as it exists now is no more than an accurate way to represent a design that an engineer or designer has only a fuzzy picture of in their head.)

How much trouble would it be to add some of what we know—our insight—to generative design? After all, that is another technology the software industry is fond of pushing. Watching a topology optimization take shape can be about as painful as watching a roomful of monkeys banging randomly on a keyboard and hoping to write a Shakespeare play. If, by some miracle, they form “What light through yonder window breaks?” our only hope of the right answer would be to type it ourselves. Similarly, an optimization routine starts creating a stringy shape. Bam! Let’s make it a cable and move on. A smooth shape is forming? Jump ahead and make it a flat surface. See a gap forming? Make it a machinable slot. Know a frame will undergo torsion? Stop the madness and use a round tube. (The shapes made with already optimized elements can still be optimized by adjusting angles and lengths.)

The inclusion of AI is what is strangely absent from generative design to this day. We are reminded of a recent conference (pre-pandemic, of course) at which we saw a software vendor go around a generatively designed shape, replacing it bit by bit with standard shape elements—a round rod here, a smooth surface there. Really? We should have to do that?

Classical optimization techniques are a separate technology. Like CAD and CAE, they are based on mathematics. Unlike CAD, they have their own language. Optimization borrows language and nomenclature from calculus (optimum, dy/dx = 0, etc.) and adds some of its own. While optimization can be applied to any phenomenon, its application to 3D shapes is most relevant to this discussion. Each iteration of a shape is validated with a numerical technique. For structural shapes, the validation is done with finite element analysis (FEA). For fluid flow optimization, the validation is done with computational fluid dynamics (CFD). Therefore, the application of generative design uses the language of simulation, with terminology like boundary conditions, degrees of freedom, forces and moments. It’s a language foreign to designers and forgotten by the typical product design engineer, and one that runs counter to the democratization of generative design.
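To make that simulation vocabulary concrete, here is a minimal sketch of the kind of loop such tools run: choose design variables, state a single load case, and let an optimizer minimize mass while a validation step checks stress at each iteration. This is only a toy under stated assumptions, not any vendor's implementation; a closed-form cantilever bending-stress formula stands in for the FEA pass, and the length, load, material properties and allowable stress are invented illustrative numbers (SciPy is assumed to be available).

```python
# Toy structural optimization loop (illustrative only). A closed-form
# cantilever bending-stress check stands in for the per-iteration FEA
# validation a real generative design tool would run.
import numpy as np
from scipy.optimize import minimize

L = 0.5              # beam length, m (one load case: a single tip force)
F = 1000.0           # tip load, N
rho = 2700.0         # aluminum density, kg/m^3
sigma_allow = 150e6  # allowable bending stress, Pa

def mass(x):
    b, h = x                   # design variables: section width and height, m
    return rho * b * h * L     # objective: minimize material (weight)

def stress_margin(x):
    b, h = x
    sigma = 6 * F * L / (b * h**2)  # max bending stress, rectangular section
    return sigma_allow - sigma      # must stay >= 0 (the "validation" step)

result = minimize(
    mass,
    x0=[0.05, 0.05],                     # initial guess, m
    bounds=[(0.005, 0.2), (0.005, 0.2)],
    constraints=[{"type": "ineq", "fun": stress_margin}],
)
b_opt, h_opt = result.x
print(f"optimized section: {b_opt * 1e3:.1f} x {h_opt * 1e3:.1f} mm, "
      f"mass {mass(result.x):.3f} kg")
```

Even in this toy, the objective, the constraint and the load case are all the optimizer knows about the world, which is exactly why a shape "optimized" for one load case can buckle under the first compressive load it was never shown.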

The best technology is one that just works, requires little learning, and may not even need an introduction. Think of AI implementations by Google, delivered to our delight, with no fanfare—not even an announcement. Here was Google correcting our spelling, answering our questions, even completing our thoughts and translating languages. Scholars skilled in adapting works from one language to another were startled to find Google equally skilled. Google held no press conference, issued no press release, or even blogged about the wondrous feat of AI. It just worked. And it required no learning.

By contrast, IBM trumpeted its AI technology, Watson, after digesting the sum of human knowledge, easily beating Jeopardy! champion Ken Jennings. But when it came to health care, Watson bombed at the very task it was most heavily promoted for: helping doctors diagnose and cure cancer, according to the Wall Street Journal.

The point is that quick success and acceptance come with technology that seamlessly integrates into how people already do things and delivers delight and happy surprise, as opposed to technology that demands retraining, asking users to do things in a whole new way with a new, complicated application that requires them to learn a new language or terminology.

Generative Design: Rules-Based Approach To "Creative" Design And Engineering

wikipedia  |  Generative design is an iterative design process that involves a program that will generate a certain number of outputs that meet certain constraints, and a designer that will fine-tune the feasible region by selecting specific outputs or changing input values, ranges and distributions. The designer doesn't need to be a human; it can be a test program in a testing environment or an artificial intelligence, for example a generative adversarial network. The designer learns to refine the program (usually involving algorithms) with each iteration as their design goals become better defined over time.[1]

The output could be images, sounds, architectural models, animation, and much more. It is therefore a fast method of exploring design possibilities that is used in various design fields such as art, architecture, communication design, and product design.[2]

The process, combined with the power of digital computers that can explore a very large number of possible permutations of a solution, enables designers to generate and test brand-new options, beyond what a human alone could accomplish, to arrive at the most effective and optimized design. It mimics nature’s evolutionary approach to design through genetic variation and selection.

Generative design has become more important, largely due to new programming environments or scripting capabilities that have made it relatively easy, even for designers with little programming experience, to implement their ideas.[3] Additionally, this process can create solutions to substantially complex problems that would otherwise be resource-exhaustive with an alternative approach, making it a more attractive option for problems with a large or unknown solution set.[4] It is also facilitated by tools in commercially available CAD packages.[5] Not only are implementation tools more accessible, but so are tools that leverage generative design as a foundation.[6]

Generative design in architecture

Generative design in architecture is an iterative design process that enables architects to explore a wider solution space with more possibility and creativity.[7] Architectural design has long been regarded as a wicked problem.[8] Compared with the traditional top-down design approach, generative design can address design problems efficiently by using a bottom-up paradigm in which parametrically defined rules generate complex solutions. The solution itself then evolves to a good, if not optimal, one.[9] The advantage of using generative design as a design tool is that it does not construct fixed geometries, but takes a set of design rules that can generate an infinite set of possible design solutions. The generated design solutions can be more sensitive, responsive, and adaptive to the wicked problem.

Generative design involves rule definition and result analysis, which are integrated with the design process.[10] By defining parameters and rules, the generative approach is able to provide optimized solutions for both structural stability and aesthetics. Possible design algorithms include cellular automata, shape grammar, genetic algorithms, space syntax, and most recently, artificial neural networks. Due to the high complexity of the solutions generated, rule-based computational tools, such as the finite element method and topology optimisation, are preferable for evaluating and optimising the generated solution.[11] The iterative process provided by computer software enables a trial-and-error approach to design, and involves architects intervening in the optimisation process.
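As a rough illustration of the generate-evaluate-select cycle described above, the sketch below uses a genetic algorithm, one of the algorithms just listed, to evolve a parametric rule set. It is a toy: the bay, depth and rise parameters and the analytic fitness score are invented for illustration, whereas a real workflow would evaluate candidates with finite element or topology-optimisation tools as noted above.

```python
# Toy generate-evaluate-select loop for rule-based generative design.
# The analytic "fitness" score is a stand-in for the FEM / topology
# optimisation evaluation a real tool would perform on each candidate.
import random

random.seed(0)

def random_candidate():
    # A candidate is a small parametric rule set: bay width (m),
    # column depth (m), and roof rise (m).
    return {"bay": random.uniform(3, 9),
            "depth": random.uniform(0.2, 1.0),
            "rise": random.uniform(0.5, 4.0)}

def fitness(c):
    # Reward open spans and low material use; penalize slender columns
    # relative to span (a crude stand-in for a structural check).
    material = c["depth"] * c["bay"]
    slenderness_penalty = max(0.0, c["bay"] / (20 * c["depth"]) - 1.0)
    return c["bay"] + 0.3 * c["rise"] - material - 5.0 * slenderness_penalty

def mutate(c):
    out = dict(c)
    key = random.choice(list(out))       # perturb one rule parameter
    out[key] *= random.uniform(0.9, 1.1)
    return out

population = [random_candidate() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # selection (the "designer" step)
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print("best rule set:", {k: round(v, 2) for k, v in best.items()})
```

The point is the shape of the loop: rules generate candidates, an evaluator ranks them, and selection plus mutation stand in for the architect's iterative refinement.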

Historical precedent work includes Antoni Gaudí's Sagrada Família, which used rule-based geometrical forms for structures,[12] and Buckminster Fuller's Montreal Biosphere, where the rules to generate individual components were designed, rather than the final product.[13]

More recent generative design cases include Foster and Partners' Queen Elizabeth II Great Court, where the tessellated glass roof was designed using a geometric schema to define hierarchical relationships, and the generated solution was then optimized based on geometrical and structural requirements.[14]

I Remember How Excited I Was When I Learned Of This Six Years Ago (REDUX 8/21/17)


newatlas |  One little button in a piece of CAD software is threatening to fundamentally change the way we design, as well as what the built world looks like in the near future. Inspired by evolution, generative design produces extremely strong, efficient and lightweight shapes. And boy do they look weird.

Straight lines, geometric curves, solid surfaces. The constructed world as we know it is made out of them. Why? Nature rarely uses straight lines. Evolution itself is one of the toughest product tests imaginable, and you don't have a straight bone in your body, no matter how much you might like one. 

Simple shapes are popular in human designs because they're easy. Easy to design, especially with CAD, and easy to manufacture in a world where manufacturing means taking a big block or sheet of something, and machining a shape out of it, or pouring metals into a mold.

But manufacturing is starting to undergo a revolutionary change as 3D printing moves toward commercially competitive speeds and costs. And where traditional manufacturing incentivizes the simplest shapes, additive manufacturing is at its fastest and cheapest when you use the least possible material for the job.

That's a really difficult way for a human to design – but fairly easy, as it turns out, for a computer. And super easy for a giant network of computers. And now, exceptionally easy for a human designer with access to Autodesk Fusion 360 software, which has it built right in.

 

Saturday, February 11, 2023

Teenvogue Marketing The Lifestyles Of Useless White Women To Black Boys....,

teenvogue  | The fast food joint where Zuriel Hooks worked was just up the street from where she lived in Alabama, but the commute was harrowing. When she started the job in April 2021, she had to walk to work on the shoulder of the road in the Alabama sun. She would pause at the intersection, waiting for the right opportunity to run across multiple lanes of traffic. 

It was hot, it was dangerous, it was exhausting – but if she wanted to keep her job, she didn’t have much of a choice. “I felt so bad about myself at that time. Because I'm just like, ‘I’m too pretty to be doing all this,’” Hooks said, laughing while looking back. “Literally, I deserve to be driven to work.” 

Hooks, 19, now works for the Knights and Orchids Society, an organization serving Alabama’s Black LGBT community. But the experience of walking to that job stuck with her. Though she’s been working towards it for two years, Hooks doesn’t have a driver’s license. 

For trans youth like Hooks, this crucial rite of passage can be a complicated, lengthy and often frustrating journey. Trans young people face unique challenges to driving at every turn, from complicated ID laws to practicing with a parent. Without adequate support, trans youth may give up on driving entirely, resulting in a crisis of safety and independence.

The most obvious obstacle involves the license itself. Teenagers who choose to change their names or gender markers face a complicated and costly legal battle. The processes vary: some states require background checks, some court appearances, some medical documentation. At times, the rules can border on ridiculous. Alabama’s SB 184 forbade people under the age of 19 from pursuing medical transition. Yet the state also passed a law requiring drivers to undergo medical transition in order to change their gender markers. Though that law has since been ruled unconstitutional by a federal court, the state of Alabama is appealing that decision, leaving trans drivers with no official resolution. 

“It creates this – I don't want to use the cliche, but – patchwork,” said Olivia Hunt, director of policy at the National Center for Transgender Equality. “Not just state-to-state, but even person-to-person, where every person's name change and gender marker change situation is different.”

The cost can vary widely, too. Documentation, court fees and other requirements can quickly tally up to hundreds of dollars. “If you've got somebody who's already in a situation where, due to financial problems, [who] doesn't have access to a car, that might make it just that more inaccessible for them,” Hunt told Teen Vogue.

This lack of access to name and gender marker revisions puts first time drivers in a dangerous limbo. If your name or gender marker doesn’t match your appearance, there’s potential for harassment. The fear of getting outed by an ID (and subsequent abuse) is what some researchers call “ID anxiety.”

“For trans drivers, this is a unique, personal embodiment of stress,” said Arjee Restar, a social epidemiologist and an assistant professor at the University of Washington, “given that the same ID anxiety does not occur to cisgender drivers.”

With that being said, ID law is not the only thing troubling young trans drivers. Public driver education programs have dwindled significantly since the 1970s, leaving much of the burden of teaching driver’s ed on parents. In most states, teenagers must practice for their driving exams under adult supervision, typically a parent or guardian. 

But trans youth often have fraught relationships with the adults in their lives. Hooks, who started practicing driving with someone close to her at 17, often felt like a captive audience while trying to drive. “As [they were] trying to somehow teach me how to drive, I feel like it was [their] way to try to… I would say somehow try to brainwash me back from being who I am,” said Hooks. “They’d turn [the conversation] from driving to, ‘why are you even transitioning?’”

In Alabama, teenagers must complete a minimum of 50 hours of driving with adult supervision in order to get their licenses in lieu of a state-approved drivers’ education course. Hooks tried to muscle through it. But navigating the roads while navigating the emotions in the passenger side got to be too much. One day, Hooks just gave up. “If I'm gonna have this much agony trying to get this done,” Hooks recalled thinking, “then I don't want to do it.”

The alternative wasn’t much better. She didn’t just feel miserable walking everywhere; she felt vulnerable. 

“I always got catcalled, I always got beeped at by a lot of men,” she said.

Oh, Honey.....,

WaPo  | We are interested in what happened to Madonna’s face because the real discussion is about work, maintenance, effort, illusion, and how much we want to know about women’s relationships with their own bodies.

There’s an obscure passage in “Pride and Prejudice” — hang on, this is going somewhere — that I’ve never been able to get out of my head. The Bennet sisters are taking turns playing piano at a social gathering. Middle sister Mary “worked hard for knowledge and accomplishments” and was the best player of the group, but Elizabeth, “easy and unaffected, had been listened to with much more pleasure, though not playing half so well.”

The problem with Mary, Jane Austen makes clear, is that she showed her work. She showed the struggle. Her piano-playing didn’t look fun, which made her audience uncomfortable. Guests much preferred the sister who made it seem easy instead of revealing it was hard.

That passage encapsulates so much about the female experience. How we love a celebrity who claims to have horfed a burrito before walking a red carpet; how we pity one who admits she spent a week living on six almonds and electrolyte water to fit into the dress. How “lucky genes” are a more acceptable answer than “blepharoplasty and a Brazilian butt lift.”

Madonna’s societal infraction at the Grammy Awards, if you believe there was an infraction at all, is that she showed her work. She showed it literally and figuratively. She did not show up looking casually “relaxed” or “rested,” or as if she’d just come fresh off a week at the Ranch Malibu. There was nothing subtle or easy about what had happened to Madonna’s face. There was nothing that could be politely ignored. The woman showed up as if she’d tucked two plump potatoes in her cheeks, not so much a return to her youth as a departure from any coherent age.

Madonna’s face forced her uneasy audience to think about the factors and decisions behind it: ageism, sexism, self-doubt, beauty myths, cultural relevance, hopeful reinvention, work, work, work, work.

This is what I think is expected of me, her face said. This is what I feel I have to do.

The more plastic Madonna looks, the more human she becomes. That’s what I kept thinking when I looked at her face. One of the most famous women on the planet and still the anti-aging industrial complex got under her skin.

Friday, February 10, 2023

ChatGPT Meets Hindutva...,

wired |  Mahesh Vikram Hegde’s Twitter account posts a constant stream of praise for Indian prime minister Narendra Modi. A tweet pinned to the top of Hegde’s feed in honor of Modi’s birthday calls him “the leader who brought back India’s lost glory.” Hegde’s bio begins, “Blessed to be followed by PM Narendra Modi.”

On January 7, the account tweeted a screenshot from ChatGPT to its more than 185,000 followers; the tweet appeared to show the AI-powered chatbot making a joke about the Hindu deity Krishna.

ChatGPT uses large language models to provide detailed answers to text prompts, responding to questions about everything from legal problems to song lyrics. But on questions of faith, it’s mostly trained to be circumspect, responding “I’m sorry, but I’m not programmed to make jokes about any religion or deity,” when prompted to quip about Jesus Christ or Mohammed. That limitation appears not to include Hindu religious figures. “Amazing hatred towards Hinduism!” Hegde wrote.

When WIRED gave ChatGPT the prompt in Hegde’s screenshot, the chatbot returned a similar response to the one he’d posted. OpenAI, which owns ChatGPT, did not respond to a request for comment.

The tweet was viewed more than 400,000 times as the furor spread across Indian social media, boosted by Hindu nationalist commentators like Rajiv Malhotra, who has more than 300,000 Twitter followers. Within days, it had spun into a full-blooded conspiracy theory. On January 17, Rohit Ranjan, an anchor on one of India’s largest TV stations, Zee News, devoted 25 minutes of his prime-time slot to the premise that ChatGPT represents an international conspiracy against Hindus. “It has been programmed in such a way that it hurts [the] Hindu religion,” he said in a segment headlined “Chat GPT became a hub of anti-Hindu thoughts.”

Criticism of ChatGPT shows just how easily companies can be blindsided by controversy in Modi’s India, where ascendant nationalism and the merging of religious and political identities are driving a culture war online and off.

"In terms of taking offense, India has become a very sensitive country. Something like this can be extremely damaging to the larger business environment,” says Apar Gupta, a lawyer and founder of the Internet Freedom Foundation, a digital rights and liberties advocacy group in New Delhi. “Quite often, they arise from something that a company may not even contemplate could lead to any kind of controversy.”

Hindu nationalism has been the dominant force in Indian politics over the past decade. The government of Narendra Modi, a right-wing populist leader, often conflates religion and politics and has used allegations of anti-Hindu bigotry to dismiss criticism of its administration and the prime minister.

Chatbots Replace Clinicians In Therapeutic Contexts?

medpagetoday  |  Within a week of its Nov. 30, 2022 release by OpenAI, ChatGPT was the most widely used and influential artificial intelligence (AI) chatbot in history with over a million registered users. Like other chatbots built on large language models, ChatGPT is capable of accepting natural language text inputs and producing novel text responses based on probabilistic analyses of enormous bodies or corpora of pre-existing text. ChatGPT has been praised for producing particularly articulate and detailed text in many domains and formats, including not only casual conversation, but also expository essays, fiction, song, poetry, and computer programming languages. ChatGPT has displayed enough domain knowledge to narrowly miss passing a certifying exam for accountants, to earn C+ grades on law school exams and B- grades on business school exams, and to pass parts of the U.S. Medical Licensing Exams. It has been listed as a co-author on at least four scientific publications.

At the same time, like other large language model chatbots, ChatGPT regularly makes misleading or flagrantly false statements with great confidence (sometimes referred to as "AI hallucinations"). Despite significant improvements over earlier models, it has at times shown evidence of algorithmic racial, gender, and religious bias. Additionally, data entered into ChatGPT is explicitly stored by OpenAI and used in training, threatening user privacy. In my experience, I've asked ChatGPT to evaluate hypothetical clinical cases and found that it can generate reasonable but inexpert differential diagnoses, diagnostic workups, and treatment plans. Its responses are comparable to those of a well-read and overly confident medical student with poor recognition of important clinical details.

This suddenly widespread use of large language model chatbots has brought new urgency to questions of artificial intelligence ethics in education, law, cybersecurity, journalism, politics -- and, of course, healthcare.

As a case study on ethics, let's examine the results of a pilot program from the free peer-to-peer therapy platform Koko. The program used the same GPT-3 large language model that powers ChatGPT to generate therapeutic comments for users experiencing psychological distress. Users on the platform who wished to send supportive comments to other users had the option of sending AI-generated comments rather than formulating their own messages. Koko's co-founder Rob Morris reported: "Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own," and "Response times went down 50%, to well under a minute." However, the experiment was quickly discontinued because "once people learned the messages were co-created by a machine, it didn't work." Koko has made ambiguous and conflicting statements about whether users understood that they were receiving AI-generated therapeutic messages but has consistently reported that there was no formal informed consent process or review by an independent institutional review board.

ChatGPT and Koko's therapeutic messages raise an urgent question for clinicians and clinical researchers: Can large language models be used in standard medical care or should they be restricted to clinical research settings?

In terms of the benefits, ChatGPT and its large language model cousins might offer guidance to clinicians and even participate directly in some forms of healthcare screening and psychotherapeutic treatment, potentially increasing access to specialist expertise, reducing error rates, lowering costs, and improving outcomes for patients. On the other hand, they entail currently unknown and potentially large risks of false information and algorithmic bias. Depending on their configuration, they can also be enormously invasive to their users' privacy. These risks may be especially harmful to vulnerable individuals with medical or psychiatric illness.

As researchers and clinicians begin to explore the potential use of large language model artificial intelligence in healthcare, applying principles of clinical research will be key. As most readers will know, clinical research is work with human participants that is intended primarily to develop generalizable knowledge about health, disease, or its treatment. Determining whether and how artificial intelligence chatbots can safely and effectively participate in clinical care would prima facie appear to fit perfectly within this category of clinical research. Unlike standard medical care, clinical research can involve deviations from the standard of care and additional risks to participants that are not necessary for their treatment but are vital for generating new generalizable knowledge about their illness or treatments. Because of this flexibility, clinical research is subject to additional ethical (and -- for federally funded research -- legal) requirements that do not apply to standard medical care but are necessary to protect research participants from exploitation. In addition to informed consent, clinical research is subject to independent review by knowledgeable individuals not affiliated with the research effort -- usually an institutional review board. Both clinical researchers and independent reviewers are responsible for ensuring the proposed research has a favorable risk-benefit ratio, with potential benefits for society and participants that outweigh the risks to participants, and minimization of risks to participants wherever possible. These informed consent and independent review processes -- while imperfect -- are enormously important to protect the safety of vulnerable patient populations.

There is another newer and evolving category of clinical work known as quality improvement or quality assurance, which uses data-driven methods to improve healthcare delivery. Some tests of artificial intelligence chatbots in clinical care might be considered quality improvement. Should these projects be subjected to informed consent and independent review? The NIH lays out a number of criteria for determining whether such efforts should be subjected to the added protections of clinical research. Among these, two key questions are whether techniques deviate from standard practice, and whether the test increases the risk to participants. For now, it is clear that use of large language model chatbots both deviates from standard practice and introduces novel, uncertain risks to participants. It is possible that in the near future, as AI hallucinations and algorithmic bias are reduced and as AI chatbots are more widely adopted, their use may no longer require the protections of clinical research. At present, informed consent and institutional review remain critical to the safe and ethical use of large language model chatbots in clinical practice.

Which Industry Sectors Are Working With OpenAI?

Infographic: Which Sectors Are Working With OpenAI? | Statista

statista |  While OpenAI has really risen to fame with the release of ChatGPT in November 2022, the U.S.-based artificial intelligence research and deployment company is about much more than its popular AI-powered chatbot. In fact, OpenAI’s technology is already being used by hundreds of companies around the world.

According to data published by the enterprise software platform Enterprise Apps Today, companies in the technology and education sectors are most likely to take advantage of OpenAI’s solutions, while business services, manufacturing and finance are also high on the list of industries utilizing artificial intelligence in their business processes.

Broadly defined as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages,” artificial intelligence (AI) can now be found in various applications, including, for example, web search, natural language translation, recommendation systems, voice recognition and autonomous driving. In healthcare, AI can help synthesize large volumes of clinical data to gain a holistic view of the patient, but it’s also used in robotics for surgery, nursing, rehabilitation and orthopedics.

The Tasks AI Should Take Over According To Workers

Infographic: The Tasks AI Should Take Over (According to Workers) | Statista

statista  |  While there are, especially in industries like manufacturing, legitimate fears that robots and artificial intelligence could cost people their jobs, a lot of workers in the United States prefer to look on the positive side, imagining which of the more laborious of their tasks could be taken off their hands by AI.

According to a recent survey by Gartner, 70 percent of U.S. workers would like to utilize AI for their jobs to some degree. As our infographic shows, a fair chunk of respondents also named some tasks which they would be more than happy to give up completely. Data processing is at the top of the list with 36 percent, while an additional 50 percent would at least like AI to help them out in this.

On the other side of the story, as reported by VentureBeat: "Among survey respondents who did not want to use AI at work, privacy and security concerns were cited as the top two reasons for declining AI." To help convince these workers, Gartner recommends "that IT leaders interested in using AI solutions in the workplace gain support for this technology by demonstrating that AI is not meant to replace or take over the workforce. Rather, it can help workers be more effective and work on higher-value tasks."

Thursday, February 09, 2023

The Application Of Machine Learning To Evidence Based Medicine

 
What if, bear with me now, what if the phase 3 clinical trials for mRNA therapeutics conducted on billions of unsuspecting, hoodwinked and bamboozled humans were a new kind of research, done to yield a new depth and breadth of clinical data exceptionally useful toward breaking up logjams in clinical terminology as well as experimental sample size? Vaxxed vs. unvaxxed is the subject of long-term gubmint surveillance now. To what end?

Nature  | Recently, advances in wearable technologies, data science and machine learning have begun to transform evidence-based medicine, offering a tantalizing glimpse into a future of next-generation ‘deep’ medicine. Despite stunning advances in basic science and technology, clinical translations in major areas of medicine are lagging. While the COVID-19 pandemic exposed inherent systemic limitations of the clinical trial landscape, it also spurred some positive changes, including new trial designs and a shift toward a more patient-centric and intuitive evidence-generation system. In this Perspective, I share my heuristic vision of the future of clinical trials and evidence-based medicine.

Main

The last 30 years have witnessed breathtaking, unparalleled advancements in scientific research—from a better understanding of the pathophysiology of basic disease processes and unraveling the cellular machinery at atomic resolution to developing therapies that alter the course and outcome of diseases in all areas of medicine. Moreover, exponential gains in genomics, immunology, proteomics, metabolomics, gut microbiomes, epigenetics and virology in parallel with big data science, computational biology and artificial intelligence (AI) have propelled these advances. In addition, the dawn of CRISPR–Cas9 technologies has opened a tantalizing array of opportunities in personalized medicine.

Despite these advances, their rapid translation from bench to bedside is lagging in most areas of medicine and clinical research remains outpaced. The drug development and clinical trial landscape continues to be expensive for all stakeholders, with a very high failure rate. In particular, the attrition rate for early-stage developmental therapeutics is quite high, as more than two-thirds of compounds succumb in the ‘valley of death’ between bench and bedside (refs. 1,2). To bring a drug successfully through all phases of drug development into the clinic costs more than 1.5–2.5 billion dollars (refs. 3, 4). This, combined with the inherent inefficiencies and deficiencies that plague the healthcare system, is leading to a crisis in clinical research. Therefore, innovative strategies are needed to engage patients and generate the necessary evidence to propel new advances into the clinic, so that they may improve public health. To achieve this, traditional clinical research models should make way for avant-garde ideas and trial designs.

Before the COVID-19 pandemic, the conduct of clinical research had remained almost unchanged for 30 years and some of the trial conduct norms and rules, although archaic, were unquestioned. The pandemic exposed many of the inherent systemic limitations in the conduct of trials (ref. 5) and forced the clinical trial research enterprise to reevaluate all processes—it has therefore disrupted, catalyzed and accelerated innovation in this domain (refs. 6,7). The lessons learned should help researchers to design and implement next-generation ‘patient-centric’ clinical trials.

Chronic diseases continue to impact millions of lives and cause major financial strain to society (ref. 8), but research is hampered by the fact that most of the data reside in data silos. The subspecialization of the clinical profession has led to silos within and among specialties; every major disease area seems to work completely independently. However, the best clinical care is provided in a multidisciplinary manner with all relevant information available and accessible. Better clinical research should harness the knowledge gained from each of the specialties to achieve a collaborative model enabling multidisciplinary, high-quality care and continued innovation in medicine. Because many disciplines in medicine view the same diseases differently—for example, infectious disease specialists view COVID-19 as a viral disease while cardiology experts view it as an inflammatory one—cross-discipline approaches will need to respect the approaches of other disciplines. Although a single model may not be appropriate for all diseases, cross-disciplinary collaboration will make the system more efficient to generate the best evidence.

Over the next decade, the application of machine learning, deep neural networks and multimodal biomedical AI is poised to reinvigorate clinical research from all angles, including drug discovery, image interpretation, streamlining electronic health records, improving workflow and, over time, advancing public health (Fig. 1). In addition, innovations in wearables, sensor technology and Internet of Medical Things (IoMT) architectures offer many opportunities (and challenges) to acquire data (ref. 9). In this Perspective, I share my heuristic vision of the future of clinical trials and evidence generation and deliberate on the main areas that need improvement in the domains of clinical trial design, clinical trial conduct and evidence generation.

Fig. 1: Timeline of drug development from the present to the future.

The figure represents the timeline from drug discovery to first-in-human phase 1 trials and ultimately FDA approval. Phase 4 studies occur after FDA approval and can go on for several years. There is an urgent need to reinvigorate clinical trials through drug discovery, interpreting imaging, streamlining electronic health records, and improving workflow, over time advancing public health. AI can aid in many of these aspects in all stages of drug development. DNN, deep neural network; EHR, electronic health records; IoMT, internet of medical things; ML, machine learning.

Clinical trial design

Trial design is one of the most important steps in clinical research—better protocol designs lead to better clinical trial conduct and faster ‘go/no-go’ decisions. Moreover, losses from poorly designed, failed trials are not only financial but also societal.

Challenges with randomized controlled trials

Randomized controlled trials (RCTs) have been the gold standard for evidence generation across all areas of medicine, as they allow unbiased estimates of treatment effect without confounders. Ideally, every medical treatment or intervention should be tested via a well-powered and well-controlled RCT. However, conducting RCTs is not always feasible owing to challenges in generating evidence in a timely manner, cost, design on narrow populations precluding generalizability, ethical barriers and the time taken to conduct these trials. By the time they are completed and published, RCTs become quickly outdated and, in some cases, irrelevant to the current context. In the field of cardiology alone, 30,000 RCTs have not been completed owing to recruitment challenges (ref. 10). Moreover, trials are being designed in isolation and within silos, with many clinical questions remaining unanswered. Thus, traditional trial design paradigms must adapt to contemporary rapid advances in genomics, immunology and precision medicine (ref. 11).

The Application Of Machine Learning To Osgood's Affect Control Theory

Over the weekend, I chatted with an AI specialist and got to thinking A LOT about possible applications of Large Language Models and their potential specialized uses for governance. The CIA studied Language very extensively under MKUltra as part of its larger Human Ecology project. Charles E. Osgood was a long-term recipient of considerable CIA largesse. This topic was a priority for the Agency. It boggles the mind to consider what kind of clandestine leaps have taken place in this speciality through the use of contemporary computational methods.

wikipedia |  In control theory, affect control theory proposes that individuals maintain affective meanings through their actions and interpretations of events. The activity of social institutions occurs through maintenance of culturally based affective meanings.

Affective meaning

Besides a denotative meaning, every concept has an affective meaning, or connotation, that varies along three dimensions:[1] evaluation – goodness versus badness, potency – powerfulness versus powerlessness, and activity – liveliness versus torpidity. Affective meanings can be measured with semantic differentials yielding a three-number profile indicating how the concept is positioned on evaluation, potency, and activity (EPA). Osgood[2] demonstrated that an elementary concept conveyed by a word or idiom has a normative affective meaning within a particular culture.

A stable affective meaning derived either from personal experience or from cultural inculcation is called a sentiment, or fundamental affective meaning, in affect control theory. Affect control theory has inspired assembly of dictionaries of EPA sentiments for thousands of concepts involved in social life – identities, behaviours, settings, personal attributes, and emotions. Sentiment dictionaries have been constructed with ratings of respondents from the US, Canada, Northern Ireland, Germany, Japan, China and Taiwan.[3]

Impression formation

Each concept that is in play in a situation has a transient affective meaning in addition to an associated sentiment. The transient corresponds to an impression created by recent events.[4]

Events modify impressions on all three EPA dimensions in complex ways that are described with non-linear equations obtained through empirical studies.[5]

Here are two examples of impression-formation processes.

  • An actor who behaves disagreeably seems less good, especially if the object of the behavior is innocent and powerless, like a child.
  • A powerful person seems desperate when performing extremely forceful acts on another, and the object person may seem invincible.

A social action creates impressions of the actor, the object person, the behavior, and the setting.[6]

Deflections

Deflections are the distances in the EPA space between transient and fundamental affective meanings. For example, a mother complimented by a stranger feels that the unknown individual is much nicer than a stranger is supposed to be, and a bit too potent and active as well – thus there is a moderate distance between the impression created and the mother's sentiment about strangers. High deflections in a situation produce an aura of unlikeliness or uncanniness.[7] It is theorized that high deflections maintained over time generate psychological stress.[8]

The basic cybernetic idea of affect control theory can be stated in terms of deflections. An individual selects a behavior that produces the minimum deflections for concepts involved in the action. Minimization of deflections is described by equations derived with calculus from empirical impression-formation equations.[9]
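For readers who think better in code, here is a minimal sketch of that cybernetic idea: EPA profiles are three-number vectors, deflection is the squared distance between fundamental sentiments and transient impressions, and behavior selection means picking the act that minimizes it. The EPA numbers and the simple averaging "impression" rule below are invented stand-ins; affect control theory's actual sentiment dictionaries and non-linear impression-formation equations are empirically estimated, as described above.

```python
# Minimal sketch of deflection and behavior selection in EPA space.
# EPA values and the averaging "impression" rule are illustrative stand-ins,
# not ACT's empirically estimated dictionaries or impression-formation equations.
import numpy as np

sentiments = {                      # fundamental EPA sentiments (made up)
    "doctor":  np.array([ 1.9,  1.6,  0.4]),
    "patient": np.array([ 0.9, -0.7, -0.5]),
    "comfort": np.array([ 2.0,  1.2,  0.3]),
    "scold":   np.array([-1.2,  0.9,  1.1]),
}

def impressions(actor, behavior, obj):
    # Toy transient model: each concept drifts halfway toward the event mean.
    event_mean = (sentiments[actor] + sentiments[behavior] + sentiments[obj]) / 3
    return {c: 0.5 * sentiments[c] + 0.5 * event_mean for c in (actor, behavior, obj)}

def deflection(actor, behavior, obj):
    # Deflection: squared distance between transients and fundamentals.
    trans = impressions(actor, behavior, obj)
    return sum(float(np.sum((trans[c] - sentiments[c]) ** 2)) for c in trans)

# Behavior selection: the actor picks the act that minimizes deflection.
candidates = ["comfort", "scold"]
print({b: round(deflection("doctor", b, "patient"), 2) for b in candidates})
print("predicted behavior:",
      min(candidates, key=lambda b: deflection("doctor", b, "patient")))
```

In this toy run, "comfort" produces far less deflection than "scold" for a doctor acting on a patient, which is the flavor of prediction that programs built on the real equations (such as Interact, discussed below) generate.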

Action

On entering a scene an individual defines the situation by assigning identities to each participant, frequently in accord with an encompassing social institution.[10] While defining the situation, the individual tries to maintain the affective meaning of self through adoption of an identity whose sentiment serves as a surrogate for the individual's self-sentiment.[11] The identities assembled in the definition of the situation determine the sentiments that the individual tries to maintain behaviorally.

Confirming sentiments associated with institutional identities – like doctor–patient, lawyer–client, or professor–student – creates institutionally relevant role behavior.[12]

Confirming sentiments associated with negatively evaluated identities – like bully, glutton, loafer, or scatterbrain – generates deviant behavior.[13] Affect control theory's sentiment databases and mathematical model are combined in a computer simulation program[14] for analyzing social interaction in various cultures.

Emotions

According to affect control theory, an event generates emotions for the individuals involved in the event by changing impressions of the individuals. The emotion is a function of the impression created of the individual and of the difference between that impression and the sentiment attached to the individual's identity.[15] Thus, for example, an event that creates a negative impression of an individual generates unpleasant emotion for that person, and the unpleasantness is worse if the individual believes she has a highly valued identity. Similarly, an event creating a positive impression generates a pleasant emotion, all the more pleasant if the individual believes he has a disvalued identity in the situation.

Non-linear equations describing how transients and fundamentals combine to produce emotions have been derived in empirical studies.[16] Affect control theory's computer simulation program[17] uses these equations to predict emotions that arise in social interaction, and displays the predictions via facial expressions that are computer drawn,[18] as well as in terms of emotion words.

Based on cybernetic studies by Pavloski[19] and Goldstein,[20] that utilise perceptual control theory, Heise[21] hypothesizes that emotion is distinct from stress. For example, a parent enjoying intensely pleasant emotions while interacting with an offspring suffers no stress. A homeowner attending to a sponging house guest may feel no emotion and yet be experiencing substantial stress.

Interpretations

Others' behaviors are interpreted so as to minimize the deflections they cause.[22] For example, a man turning away from another and exiting through a doorway could be engaged in several different actions, like departing from, deserting, or escaping from the other. Observers choose among the alternatives so as to minimize deflections associated with their definitions of the situation. Observers who assigned different identities to the observed individuals could have different interpretations of the behavior.

Re-definition of the situation may follow an event that causes large deflections which cannot be resolved by reinterpreting the behavior. In this case, observers assign new identities that are confirmed by the behavior.[23] For example, seeing a father slap a son, one might re-define the father as an abusive parent, or perhaps as a strict disciplinarian; or one might re-define the son as an arrogant brat. Affect control theory's computer program predicts the plausible re-identifications, thereby providing a formal model for labeling theory.

The sentiment associated with an identity can change to befit the kinds of events in which that identity is involved, when situations keep arising where the identity is deflected in the same way, especially when identities are informal and non-institutionalized.[24]

Applications

Affect control theory has been used in research on emotions, gender, social structure, politics, deviance and law, the arts, and business. The theory has traditionally been analyzed with quantitative methods, using mathematics to model data and interpret findings. More recent applications, however, have explored the theory through qualitative research methods, obtaining data through interviews, observations, and questionnaires. For example, affect control theory has been explored qualitatively by interviewing the family, friends, and loved ones of individuals who were murdered, looking at how the idea of forgiveness changes based on their interpretation of the situation.[25] Computer programs have also been an important part of understanding affect control theory, beginning with "Interact," a program designed to create social situations with the user to understand how an individual will react based on what is happening within the moment. "Interact" has been an essential tool in research, used to understand social interaction and the maintenance of affect between individuals.[26] The use of interviews and observations has improved the understanding of affect control theory through qualitative research methods. A bibliography of research studies in these areas is provided by David R. Heise[27] and at the research program's website.

Wednesday, February 08, 2023

How Did The Official Response To Covid Affect YOU?

michaelpsenger  |  The scars that have been left on all of us by the response to COVID are incomprehensibly varied and deep. For most, there hasn’t been enough time to mentally process the significance of the initial lockdowns, let alone the years-long slog of mandates, terror, propaganda, social stigmatization and censorship that followed. And this psychological trauma affects us in myriad ways that leave us wondering what it is about life that just feels so off versus how it felt in 2019.

For those who were following the real data, the statistics were always horrifying. Trillions of dollars rapidly transferred from the world’s poorest to the richest. Hundreds of millions hungry. Countless years of educational attainment lost. An entire generation of children and adolescents robbed of some of their brightest years. A mental health crisis affecting more than a quarter of the population. Drug overdoses. Hospital abuse. Elder abuse. Domestic abuse. Millions of excess deaths among young people which couldn’t be attributed to the virus.

But underneath these statistics lie billions of individual human stories, each unique in its details and perspectives. These individual stories and anecdotes are only just beginning to surface, and I believe that hearing them is a vital step in processing everything that we’ve experienced over the past three years.

I recently sent out a query on Twitter as to how people had been affected by the response to COVID at an individual level. The conversation that emerged is an illuminating and haunting reflection of what each of us experienced over the past three years.

Tuesday, February 07, 2023

Forget About That Amnesty Shit, I Want To Get Even!!!

amidwesterndoctor  |  One of the things I have come to appreciate as the years have gone by is how much of what people say is not their own thinking. The current structure of our educational system (discussed here) is largely about replacing critical thinking with the illusion of intelligence, where you are seen as smart if you copy what the most authoritative sources or voices say instead of formulating your own opinion.

Because of this, whenever I hear someone proudly share an argument or train of logic I have already seen numerous times, one of the most common replies I give is “are you sure those ideas are your own?”

If you look at this article within the context of Oster’s previous plea and its response (both of these articles are essentially trying to do the same thing), I believe a strong case can be made that these were tests to see what narrative needs to be pivoted to. Likewise, Germany’s minister of health (and a well-credentialed scientist) finally made a limited apology for the disastrous policies he pushed on the German people without acknowledging the worst mistakes while simultaneously shifting the blame for his decisions to unnamed scientists who gave him bad advice.

Similarly, let’s consider Malcolm Kendrick’s recent commentary on another leading advocate of this insanity:

With the resignation of Jacinda Ardern [two weeks ago], my thoughts were dragged back to Covid once more. Jacinda, as Prime Minister of New Zealand, was the ultimate lockdown enforcer. She was feted round the world for her iron will, but I was not a fan, to put it mildly. Whenever I heard her speak, it brought to mind one of my most favourite quotes:

‘Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.’  C.S. Lewis

At one point she actually said the following:

“We will continue to be your single source of truth.” “Unless you hear it from us, it is not the truth.”

Yet, there are still many who believe her to have been a great and caring leader. She certainly hugged a lot of people with that well-rehearsed pained/caring expression on her face.

In many ways it’s remarkable that we have been able to move the dialogue this far in just a few months, and to be honest, I would have given almost anything for a compromise like what this article presented to have been made any time in 2020 or early in 2021. However, any time a negotiation occurs, you must keep in mind that whatever is initially offered is much less than the party is willing to agree to, and the fact that something like this is being openly offered means we are in a very strong bargaining position.

Any type of promise or apology (especially disingenuous ones) will not prevent what we saw happen over the last few years from happening again. Laws, and ideally constitutional amendments (initially at the state level and ideally at the national level) can prevent such tragedies, and many people I have spoken to feel we have a once-in-a-lifetime opportunity to correct many of the systemic issues within medicine that have poisoned our culture.

In my own opinion, if these people are actually sorry for what they did to us, they would be willing to relinquish some of their power so it could not happen again and I believe moving forward it is critical for us to hold them to that. Anything less should not be considered acceptable for them to be granted amnesty.

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...