Apple has announced that it is suspending AI notification summaries for news and entertainment apps.
The decision follows a wave of criticism over the feature generating misleading news alerts.
A forthcoming iOS update will disable the summaries entirely, with a later release expected to reintroduce a refined version of the service.
Apple’s reversal occurs as hallucinations continue to derail generative – and now agentic – AI initiatives.
Consider Amazon’s recent challenges in rebranding Alexa as an AI agent, with the company citing hallucinations as an ongoing barrier.
Yet, despite all the discussion around hallucinations, many contact centers have pressed ahead with auto-summarizations.
After all, service providers pitched them as a straightforward first use case through which service teams could build confidence in GenAI.
Now, those auto-summarizations no longer appear so straightforward.
Auto-Summarization Adoption Is High in Contact Centers, But It Comes with Challenges
Eager to keep pace with the broader AI rollout, many contact centers have embraced case auto-summarization.
The past 18 months have seen a significant rise in contact centers adopting auto-summarizations.
Indeed, a recent CX Today report indicates that 38 percent have already deployed the capability.
Feeding these summaries into the CRM after each interaction has helped teams track customer case histories, reduce handling times, and cut costs.
Nevertheless, some contact centers have discovered that – while models perform admirably in POCs and pilots – scaling them to enterprise-level production presents significant challenges.
That’s according to Swapnil Jain, Co-Founder & CEO at Observe.AI.
In a LinkedIn post, Jain outlined the following criteria for enterprise contact centers to ensure the effectiveness of their auto-summarizations:
- Tuning models for precision
- Correcting transcription inaccuracies
- Enforcing length limitations
- Attaining high-accuracy entity extraction
- Meeting latency specifications
- Consistently maintaining voice style (first-person vs. third-person)
- Managing intricate scenarios like call transfers without issues
“These aren’t minor aspects,” commented Jain. “They’re crucial for creating authentic enterprise-grade AI solutions.”
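To make a few of those criteria concrete, consider a minimal post-generation guardrail, sketched below in Python. The word limit, voice rule, and required-entity list are assumptions made for illustration, not criteria taken from Observe.AI or any particular vendor.

```python
import re

# Hypothetical guardrail check for an auto-generated call summary.
# The word limit, voice rule, and entity list below are illustrative
# assumptions, not criteria from Observe.AI or any specific vendor.

MAX_WORDS = 120  # assumed length limitation
FIRST_PERSON = re.compile(r"\b(I|me|my|we|our)\b", re.IGNORECASE)

def validate_summary(summary: str, required_entities: list[str]) -> list[str]:
    """Return a list of rule violations; an empty list means the summary passes."""
    issues = []

    # Enforce the length limitation.
    if len(summary.split()) > MAX_WORDS:
        issues.append(f"summary exceeds {MAX_WORDS} words")

    # Keep the voice style consistent (third-person only, in this sketch).
    if FIRST_PERSON.search(summary):
        issues.append("first-person phrasing found; expected third-person")

    # Confirm key entities (order numbers, account IDs, etc.) survived summarization.
    for entity in required_entities:
        if entity.lower() not in summary.lower():
            issues.append(f"missing expected entity: {entity}")

    return issues

# Example: flag a summary that drifted into first person.
print(validate_summary(
    "I told the customer their refund for order 58213 will arrive Friday.",
    required_entities=["58213"],
))
```

Checks like these run after generation, so failing summaries can be routed back to the agent for a quick manual review rather than landing in the CRM unverified.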
Should Contact Centers Abandon Auto-Summarizations?
Before writing off auto-summarizations entirely, it’s worth remembering what the process looked like without them.
Contact center agents had to manually draft a summary, categorize the interaction with a disposition code, and submit it to the CRM.
In their haste, agents frequently overlooked important details and chose incorrect disposition codes.
As a result, the CRM filled up with inaccurate data, leaving contact centers struggling to trace customer case histories and to pinpoint why customers were calling in the first place.
While auto-summarizations may sometimes generate inaccuracies, they represent an advancement over previous methods.
However, contact centers must strive to ensure their implementations are as precise as possible.
This begins with a healthy level of skepticism. As one commentator on Jain’s post noted:
“A quality AI demonstration takes 10 hours of effort, but genuine production requires 10,000 hours of meticulous work.”
Additionally, contact centers should look for auto-summarization solutions that let them customize the back-end large language model (LLM) and the accompanying prompt.
From there, they should test those models and prompts extensively in a controlled environment, using evaluation tooling to score the outputs.
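As a rough illustration of what that controlled testing might look like, the sketch below pairs held-out transcripts with human-written reference summaries and scores each generated summary with a crude word-overlap metric. The summarize() stub, the 0.7 threshold, and the sample data are all hypothetical placeholders for whatever model, prompt, and scoring criteria a team actually uses.

```python
# A minimal offline evaluation harness: held-out transcripts paired with
# human-written reference summaries, scored with a crude word-overlap metric.
# The summarize() stub, the 0.7 threshold, and the sample data are all
# hypothetical placeholders for whatever model, prompt, and scoring a team uses.

def summarize(transcript: str, prompt: str) -> str:
    """Placeholder for the LLM/prompt combination under test."""
    # In practice, this would send the prompt plus transcript to the chosen model.
    return "Customer reported a duplicate charge on invoice 4471; agent issued a credit."

def overlap_score(candidate: str, reference: str) -> float:
    """Fraction of reference words that also appear in the candidate summary."""
    ref_words = set(reference.lower().split())
    cand_words = set(candidate.lower().split())
    return len(ref_words & cand_words) / max(len(ref_words), 1)

# Held-out test set: transcripts paired with human-written summaries.
test_cases = [
    {
        "transcript": "Caller reports invoice 4471 was charged twice ...",
        "reference": "Customer flagged a duplicate charge on invoice 4471; agent applied a credit.",
    },
]

PROMPT = "Summarize the call in the third person, in under 120 words."

for case in test_cases:
    candidate = summarize(case["transcript"], PROMPT)
    score = overlap_score(candidate, case["reference"])
    verdict = "PASS" if score >= 0.7 else "REVIEW"
    print(f"{verdict}  overlap={score:.2f}  ->  {candidate}")
```

Word overlap is deliberately simplistic here; teams typically layer on entity-level checks, latency measurement, and periodic human review of sampled summaries before moving to production.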
An example of a solution that facilitates all of this is the Five9 GenAI Studio. Discover more about that solution here.