This page was created programmatically; to read the article in its original location you can visit the link below:
https://news.harvard.edu/gazette/story/2025/10/what-will-ai-mean-for-humanity/
If you wish to remove this article from our website, please contact us.
What does the rise of artificial intelligence mean for humanity? That was the question at the core of "How is digital technology shaping the human soul?," a panel discussion last week that drew experts from computer science to comparative literature.
The Oct. 1 event was the first from the Public Culture Project, a new initiative based in the office of the dean of arts and humanities. Program Director Ian Marcus Corbin, a philosopher on the neurology faculty of Harvard Medical School, said the project's goal was putting "humanist and humanist thinking at the center of the big conversations of our age."
"Are we becoming tech people?" Corbin asked. The answers were varied.
"We as humanity are excellent at creating different tools that support our lives," said Nataliya Kos'myna, a research scientist with the MIT Media Lab. These tools are good at making "our lives longer, but not always making our lives the happiest, the most fulfilling," she continued, listing examples from the typewriter to the internet.
Generative AI, particularly ChatGPT, is the latest example of a tool that largely backfires in promoting human happiness, she suggested.
She shared details of a study of 54 students from across Greater Boston whose brain activity was monitored by electroencephalography after they were asked to write an essay.
One group of students was allowed to use ChatGPT, another was permitted access to the web and Google, while a third group was restricted to their own intelligence and imagination. The topics — such as "Is there true happiness?" — didn't require any prior or specialized knowledge.
The results were striking: The ChatGPT group demonstrated "much less brain activity." In addition, their essays were very similar, focusing primarily on career choices as the determinants of happiness.
The web group tended to write about giving, while the third group focused more on the question of true happiness.
Questions illuminated the gap. All the participants were asked whether they could quote a line from their own essays, one minute after turning them in.
"Eighty-three percent of the ChatGPT group couldn't quote anything," compared to 11 percent from the second and third groups. ChatGPT users "didn't feel much ownership" of their work. They "didn't remember, didn't feel it was theirs."
"Your brain needs struggle," Kos'myna said. "It doesn't bloom" when a task is too easy. In order to learn and engage, a task "needs to be just hard enough for you to work for this knowledge."
E. Glen Weyl, research lead with Microsoft Research Special Projects, had a more optimistic view of technology. "Just seeing the problems disempowers us," he said, urging scientists instead to "redesign systems."
He noted that much of the current focus on technology is on its commercial side. "Well, the only way they can make money is by selling advertising," he said, paraphrasing the prevailing wisdom before countering it. "I'm not sure that's the only way this can be structured."
Citing works such as Steven Pinker's new book, "When Everyone Knows That Everyone Knows," Weyl talked about the concept of community — and how social media is more focused on groups than on individuals.
"If we thought about engineering a feed about these notions, you might be made aware of things in your feed that come from different members of your community. You would have a sense that everyone is hearing that at the same time."
This would lead to a "theory of mind" of those other people, he explained, opening our sense of shared experiences, like that shared by attendees at a concert.
To illustrate how that might work for social media, he brought up Super Bowl ads. These, said Weyl, "are all about creating meaning." Rather than sell individual drinks or computers, for example, we are told "Coke is for sharing. Apple is for rebels."
"Creating a common understanding of something leads us to expect others to share the understanding of that thing," he said.
To reconfigure tech in this direction, he acknowledged, "requires taking our values seriously enough to let them shape" social media. It is, however, a promising option.
Moira Weigel, an assistant professor in comparative literature at Harvard, took the conversation back before going forward, pointing out that many of the questions discussed have captivated people since the 19th century.
Weigel, who is also a faculty affiliate at the Berkman Klein Center for Internet and Society, centered her comments around five questions, which are also at the core of her introductory class, "Literature and/as AI: Humanity, Technology, and Creativity."
"What is the purpose of work?" she asked, amending her question to add whether a "good" society should try to automate all work. "What does it mean to have, or find, your voice? Do our technologies extend our agency — or do they escape our control and control us? Can we have relationships with things that we or other human beings have created? What does it mean to say that some activity is merely technical, a craft or a skill, and when is it poesis" or art?
Looking at the impact of large language models in education, she said, "I think and hope LLMs are creating an interesting occasion to rethink what is instrumental. They scramble our perception of what education is essential." LLMs "allow us to ask how different we are from machines — and to claim the space to ask those questions."
Brandon Vaidyanathan, a professor of sociology at Catholic University of America, also saw possibility.
Vaidyanathan, the panel's first speaker, began by noting the difference between science and technology, citing the philosopher Martin Heidegger's concept of "enframing," in which tech views everything as "product."
Vaidyanathan noted that his experience suggests scientists take a different view.
"Underlying what we might call scientific intelligence there is a deeper, spiritual intelligence — why things matter," he said.
Instead of the "domination, extraction, and fragmentation" most see driving tech (and especially AI), he noted that scientists tend toward "the three principles of spiritual intelligence: reverence, receptivity, and reconnection." More than 80 percent of them "encounter a deep sense of respect for what they're studying," he said.
Describing a researcher studying the injection needle of the salmonella bacterium with a "deep sense of reverence," he noted, "You'd have thought this was the stupa of a Hindu temple.
"Tech and science can open us up to these kind of spiritual experiences," Vaidyanathan continued.
"Can we imagine the development of technology that could cultivate a sense of reverence rather than domination?" To do that, he concluded, might require that we "disconnect on a regular basis."