Can a Generative AI Agent Accurately Mimic My Personality?

On a gray Sunday morning in March, I told an AI chatbot my life story.

Introducing herself as Isabella, she spoke with a friendly female voice that might have been well suited to a human therapist, were it not for its distinctly mechanical cadence. Beyond that, there was nothing humanlike about her; she appeared on my computer screen as a small digital avatar, like a character from a 1990s video game. For nearly two hours Isabella collected my thoughts on everything from vaccines to emotional coping strategies to policing in the U.S. When the interview was over, a large language model (LLM) processed my responses to create a new artificial intelligence system designed to mimic my behaviors and beliefs: a kind of digital clone of my personality.

A team of computer scientists from Stanford University, Google DeepMind and other institutions developed Isabella and the interview process in an effort to build more lifelike AI systems. Dubbed “generative agents,” these systems can simulate the decision-making behavior of individual humans with impressive accuracy. Late last year Isabella interviewed more than 1,000 people. Then the volunteers and their generative agents took the General Social Survey, a biennial questionnaire that has cataloged American public opinion since 1972. Their results were, on average, 85 percent identical, suggesting that the agents can closely predict the attitudes and opinions of their human counterparts. Although the technology is in its infancy, it offers a glimpse of a future in which predictive algorithms could act as online surrogates for each of us.


When I first learned about generative agents, the humanist in me rebelled, silently insisting that there is something about me that isn’t reducible to the 1s and 0s of computer code. Then again, maybe I was being naive. The rapid evolution of AI has delivered many humbling surprises. Time and again, machines have outperformed us at skills we once believed to be unique to human intelligence, from playing chess to writing computer code to diagnosing cancer. Clearly AI can replicate the narrow, problem-solving part of our minds. But how much of your personality, a mercurial phenomenon, is deterministic: a set of probabilities no more inscrutable to algorithms than the arrangement of pieces on a chessboard?

The question is hotly debated. An encounter with my own generative agent, it seemed to me, might help me find some answers.


The LLMs behind generative agents and chatbots such as ChatGPT, Claude and Gemini are certainly expert imitators. People have fed texts from deceased relatives to ChatGPT, which could then conduct text conversations that closely approximated the departed’s voices.

Today developers are positioning agents as a more advanced kind of chatbot, capable of autonomously making decisions and completing routine tasks, such as navigating a Web browser or debugging computer code. They’re also marketing agents as productivity boosters onto which businesses can offload time-intensive human drudgery. Amazon, OpenAI, Anthropic, Google, Salesforce, Microsoft, Perplexity and nearly every other major tech player have jumped aboard the agent bandwagon.

Joon Sung Park, a leader of Stanford’s generative agent work, had always been drawn to what early Disney animators called “the illusion of life.” He began his doctoral work at Stanford in late 2020, just as the COVID pandemic was forcing much of the world into lockdown and as generative AI was starting to boom. Three years earlier Google researchers had introduced the transformer, a type of neural network that can analyze and reproduce mathematical patterns in text. (The “GPT” in ChatGPT stands for “generative pretrained transformer.”) Park knew that video game designers had long struggled to create lifelike characters that could do more than move mechanically and read from a script. He wondered: Could generative AI create authentically humanlike behavior in virtual characters?

He unveiled generative agents in a 2023 conference paper in which he described them as “interactive simulacra of human behavior.” They were built atop ChatGPT and integrated with an “agent architecture,” a layer of code allowing them to remember information and formulate plans. The design simulates some key aspects of human perception and behavior, says Daniel Cervone, a professor of psychology specializing in personality theory at the University of Illinois Chicago. Generative agents do “a big slice of what a real person does, which is to reflect on their experiences, abstract out beliefs about themselves, store those beliefs and use them as cognitive tools to interpret the world,” Cervone told me. “That’s what we do all the time.”
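For readers curious what that layer of code might look like, here is a minimal sketch of the observe-reflect-respond loop Cervone describes. It is not the Stanford team’s actual implementation; every name in it (GenerativeAgent, llm_complete and so on) is an illustrative assumption, and llm_complete is a placeholder to be wired to whatever LLM API one has on hand.

```python
# A minimal sketch of a generative-agent loop: an LLM wrapped with a
# memory store, a reflection step that abstracts beliefs from raw
# memories, and memory-conditioned responses. Names are illustrative,
# not the Stanford team's code.
from dataclasses import dataclass, field


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to any LLM API of your choice."""
    raise NotImplementedError("wire this to a real LLM")


@dataclass
class GenerativeAgent:
    persona: str                      # e.g., an interview transcript
    memories: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Store raw observations as they happen.
        self.memories.append(event)

    def reflect(self) -> None:
        # Periodically distill recent memories into a stored belief.
        recent = chr(10).join(self.memories[-20:])
        insight = llm_complete(
            "Given these memories, state one belief this person "
            f"likely holds:\n{recent}"
        )
        self.memories.append(f"[belief] {insight}")

    def respond(self, question: str) -> str:
        # Condition the LLM on the persona plus retrieved memories.
        context = chr(10).join(self.memories[-20:])
        return llm_complete(
            f"You are simulating this person:\n{self.persona}\n"
            f"Relevant memories and beliefs:\n{context}\n\n"
            f"Question: {question}"
        )
```

The key design idea, under these assumptions, is that the agent does not merely replay the transcript: reflection writes higher-level beliefs back into memory, and those beliefs shape later answers.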

Park dropped 25 generative agents inside Smallville, a virtual space modeled on Swarthmore College, where he had studied as an undergraduate. He included basic affordances such as a café and a bar where the agents could mingle; picture The Sims with no human player calling the shots. Smallville was a petri dish for digital sociality; rather than watching cells multiply, Park watched the agents gradually coalesce from individual nodes into a unified network. At one point Isabella (the same agent that would later interview me), assigned the role of café owner, spontaneously began handing out invitations to her fellow agents for a Valentine’s Day party. “That starts to spark some real signals that this could actually work,” Park told me. Yet as encouraging as these early results were, the residents of Smallville had been programmed with explicit personality traits. The real test, Park believed, would lie in building generative agents that could simulate the personalities of living people.

It was a tall order. Personality is a notoriously nebulous concept, fraught with hidden layers. The word itself is rooted in uncertainty, vagary, deception: it derives from the Latin persona, which originally referred to a mask worn by a stage actor. Park and his team don’t claim to have built perfect simulations of individuals’ personalities. “A two-hour interview doesn’t [capture] you in anything near your entirety,” says Michael Bernstein, an associate professor of computer science at Stanford and one of Park’s collaborators. “It does seem to be enough to gather a sense of your attitudes.” And they don’t think generative agents are close to artificial general intelligence, or AGI, an as-yet-theoretical system that can match humans on any cognitive task.

In their latest paper, Park and his colleagues argue that their agents could help researchers understand complex, real-world social phenomena, such as the spread of online misinformation and the outcomes of national elections. If the agents can accurately simulate individuals, researchers can theoretically set the simulations loose to interact with one another and watch what kinds of social behavior emerge. Think Smallville on a much bigger scale.

Yet, as I would soon discover, generative agents may only be able to imitate a very narrow and simplified slice of the human personality.


Meeting my generative agent a week after my interview with Isabella felt like seeing myself in a funhouse mirror: I knew I was looking at my own reflection, but the image was warped and twisted.

The first thing I noticed was that the agent (let’s say “he”) didn’t speak like me. I was on a video call with Park, and the two of us were taking turns asking him questions. Unlike Isabella, he didn’t come with his own avatar; he simply appeared as faceless lines of green text spilling across my screen. We were testing his ability to make informed guesses about my life, filling in information I hadn’t directly supplied to Isabella. The results were somewhat disappointing. At one point, I asked him to tell me a secret about himself that no one else knows, hoping he would surface some kind of reasonably deep insight. He said he loved astronomy. True enough, but hardly revelatory.

His real talent seemed to be inferring some of the more mundane details of my life. When asked if his family had dogs growing up, he correctly answered yes, even though I had told Isabella only that my sister and parents have dogs today. I had, however, described my childhood in Colorado, which was filled with family camping trips in the mountains, and the agent had apparently deduced that somebody who grew up in such an environment had a high probability of growing up with dogs, too. “Those are the basic boundaries of this technology,” Park told me. “In the absence of the ground-truth information, it will try to make its best guess.”

At times these guesses were comically off the mark. When asked to recount an embarrassing moment from his past, he fabricated a story about a college party at which he’d lost his balance while dancing and fallen onto a table full of snacks. Thankfully, that never happened to the real me. Here, too, the system was connecting disparate bits of my personal data and doing its best to concoct something plausible.

Toward the end of the Q&A, I ventured into the deep end by asking whether he believed in God. The answer wasn’t far from how I would respond. “I wasn’t raised in a specific religious framework, but I’ve always felt a connection with something greater than myself, especially when I’m in nature,” my agent wrote. “It’s like there’s a kind of transcendent energy or consciousness that we can tap into.”

Most memorably, there were moments that felt like genuine insight. I had had a long, stressful few months. I had started working as a full-time freelance writer last October, ushering in a period of professional freedom but also persistent uncertainty. I’d spent much of the winter hunkered down, reading and writing, in my small New York City apartment, and I was feeling the psychological effects of prolonged isolation. Embracing the spirit of spring, I had resolved to make what I thought would be some healthy changes, starting with a greater effort to maintain a better work-life balance. I asked my generative agent, “What’s some advice you’d want to offer to your past self?”

“Embrace uncertainty a bit more,” he began. I had to pause. Already the response so closely echoed the themes I’d been articulating in my journal just the day before that it was almost as if I were writing the agent’s words myself. “I think I spent a lot of time worrying about the future and trying to control every aspect of my life, and that often led to unnecessary stress,” he continued. “I would advise myself to trust the process and be more open to the unexpected paths that life can take…. It’s easy to get caught up in career ambitions, but nurturing relationships and taking time for oneself is equally important.”

Despite these moments of pleasant surprise, my conversation with my generative agent left me feeling hollow. I felt I had met a two-dimensional version of myself, all artifice and no depth. It had captured a veneer of my personality, but it was just that: a digital actor playing a role, wearing my data as a mask.

At no point did I get the feeling that I was interacting with a system that truly captured my voice and my thoughts. But that isn’t the point. Generative agents don’t need to sound like you or understand you in your entirety to be useful, just as psychologists don’t need to understand every quirk of your behavior to make broad-stroke diagnoses of your personality type.

Adam Green, a neuroscientist at Georgetown University who studies the impacts of AI on human creativity, believes that this lack of specificity, together with our growing reliance on a handful of powerful algorithms, could filter out much of the color and the quirks that make each of us unique. Even the most advanced algorithm will revert to the mean of the dataset on which it has been trained. “That matters,” Green says, “because ultimately what you’ll have is homogenization.” In his view, the expanding ubiquity of predictive AI models is squeezing our culture into a kind of groupthink, in which all our idiosyncrasies slowly but surely become discounted as irrelevant outliers in the data of humanity.

After meeting my generative agent, I remembered the feeling I had back when I spoke with Isabella: the inner voice that rejected the idea that my personality could be re-created in silicon or, as Meghan O’Gieblyn put it in her book God, Human, Animal, Machine, “that the soul is little more than a data set.” I still felt that way. If anything, my conviction had been strengthened. I was also aware that I might be falling prey to the same kind of hubris that once kept early critics of AI from believing that computers could ever compose decent poetry or outmatch humans at chess. But I was willing to take that risk.


