Throughout her career, Dr. Yejin Choi has taken on some of the hardest, and at times unpopular, problems in artificial intelligence (AI).
As one of the leading experts in natural language processing (NLP) and AI, Yejin has earned wide recognition for her pioneering research on topics such as AI's capacity for commonsense reasoning, including a MacArthur Fellowship.
She joins Stanford HAI as the Dieter Schwarz Foundation HAI Professor, Professor of Computer Science, and Stanford HAI Senior Fellow. She comes most recently from roles as senior research manager at the Allen Institute for Artificial Intelligence and associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she authored several foundational papers on AI and commonsense understanding.
In this Q&A, Yejin discusses her aspirations for her new role, the journey that led her to this career, and how growing up as a girl intrigued by science in South Korea motivated her to pursue unconventional paths.
Tell us about your role at HAI and your goals.
As a senior fellow, I am eager to focus on what I am truly passionate about: doing AI research with a strong emphasis on its impact on humanity. I plan to continue the interdisciplinary work that has characterized my career, drawing on fields such as cognitive neuroscience and philosophy to shape how AI is designed and evaluated. I am also inspired by the opportunity to give back to those disciplines, offering insights that can inform their own intellectual pursuits. That is my goal.
Working with moral philosophers like John Tasioulas at the University of Oxford on AI's moral decision-making sparked my interest in how large language models (LLMs) might make ethical choices. That led me to pluralistic alignment: the idea that a question may have several valid answers rather than a single "gold" answer. Today's AI models typically assume there is one definitive answer, but reality is far more complicated and shaped by factors such as cultural norms. This realization underscored how important it is to make AI genuinely safe for humans. We must ensure AI is not optimized toward a single outcome, and I am eager to invest heavily in this line of work at Stanford HAI.
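To make that concrete, here is a minimal, hypothetical Python sketch of what pluralistic evaluation might look like: sampled model answers are graded against a weighted set of acceptable answers rather than a single gold label. The question, answers, and weights below are invented for illustration and do not come from any actual benchmark or from Yejin's work.

```python
from collections import Counter

def pluralistic_score(model_answers, acceptable):
    """Average credit earned by sampled model answers, where each
    acceptable answer carries a weight instead of a right/wrong label."""
    if not model_answers:
        return 0.0
    return sum(acceptable.get(a, 0.0) for a in model_answers) / len(model_answers)

if __name__ == "__main__":
    # Hypothetical norms question with more than one defensible answer;
    # the weights are invented, e.g. the share of annotators endorsing each.
    acceptable = {"yes": 0.55, "it depends": 0.35, "no": 0.10}
    samples = ["yes", "it depends", "yes", "no"]
    print("sample distribution:", Counter(samples))
    print(f"pluralistic score: {pluralistic_score(samples, acceptable):.2f}")
```

The design point is simply that credit becomes a distribution over defensible answers rather than a binary judgment against one "gold" label.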
I am also keen to pursue more algorithmic work, particularly designing AI models that need less data and are more computationally efficient. Today's LLMs are extremely large, expensive, and limited to the few tech companies that can afford to build them. Another direction I am excited to explore at Stanford HAI is smaller language models, or SLMs.
How has your journey led you to where you are today?
My path has been anything but straightforward. When I decided to pursue a PhD in natural language processing and computer vision, AI was not yet a trendy field; in fact, many people advised me against it. But I am someone who seeks adventure, and I was intrigued precisely because the field was not yet widely embraced. Other areas were more established, but I wanted to place my bet on one that might grow in importance.
My taste for risky bets began when I was a young girl in South Korea. I became captivated by a competition to make wooden planes fly, and one of the event organizers questioned why I was there, believing girls shouldn't participate. I faced that kind of discouragement often. Throughout my life I have struggled with considerable self-doubt and anxiety because of the cultural norms I grew up with. But that experience gave me a deep appreciation of how cultural expectations can shape a person's life, a perspective that has strongly influenced my research interests.
I didn't go straight from college to graduate school; at the time, I didn't see it as a feasible option. Instead, I started my career after my undergraduate studies as a software developer at Microsoft in Seattle, around 2000. After a while, I decided I wanted a more adventurous path, and that led me to a PhD in AI.
Eventually, my focus shifted to commonsense and AI, a long-standing challenge that was often dismissed because progress in the area had stalled; it was seen as nearly insurmountable. I understood why some viewed it that way, but I saw it as essential to advancing AI, since so much of our understanding of the world lies beyond the text we can see. For AI models to be genuinely helpful, they must grasp the unspoken rules of how the world works. And past failures did not mean we would inevitably fail again, especially given the advances in data availability and computing power that had emerged since.
What achievements of yours stand out?
During my time at Stony Brook University, I collaborated with Jeff Hancock, the founding director of Stanford's Social Media Lab, to build an NLP model that analyzes linguistic patterns to determine whether a product review is genuine or fake. The work was especially timely, as product reviews were becoming increasingly influential. Interestingly, cues such as how people use pronouns, or whether they rely more on nouns than adverbs, can offer strong clues about whether a review is fabricated.
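As a rough illustration of the kind of linguistic cues described here, the following Python sketch computes two crude stylometric rates. The pronoun list, the "-ly" adverb heuristic, and the example review are assumptions for demonstration only; a real detector would rely on a proper part-of-speech tagger and a classifier trained over much richer features.

```python
# Toy sketch of stylometric cues for review-deception detection.
# The pronoun list and the "-ly" adverb heuristic are illustrative
# assumptions, not features from the actual published model.

FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
PUNCT = ".,!?;:\"'()"

def tokenize(text):
    """Lowercase, punctuation-stripped whitespace tokens."""
    tokens = (t.strip(PUNCT).lower() for t in text.split())
    return [t for t in tokens if t]

def stylometric_features(text):
    """Per-token rates of first-person pronouns and (crudely) adverbs.
    A trained classifier would consume many such rates as features."""
    tokens = tokenize(text)
    n = len(tokens) or 1
    return {
        "pronoun_rate": sum(t in FIRST_PERSON for t in tokens) / n,
        "adverb_rate": sum(t.endswith("ly") for t in tokens) / n,
    }

if __name__ == "__main__":
    review = "I absolutely loved my stay! My friends and I will definitely return."
    print(stylometric_features(review))
```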
Winning the MacArthur Fellowship for my work on using NLP to give AI commonsense reasoning was also quite unexpected, and it was a genuine affirmation of my decision to pursue commonsense and AI despite the skepticism of others.
Finally, I take pride in my research on bias in AI. I was involved in some of the earliest work examining racism and sexism in written text. This ties closely to my commonsense work, since both concern cultural norms and values and inferring the unstated assumptions about how the social world operates.