Categories: Technology

Cracking the Code: How Microsoft’s Red Team Tackles Evolving AI Threats


This page was generated programmatically; to view the article in its original format, you can visit the link below:
https://www.crn.com.au/news/microsoft-red-teaming-leader-ai-threats-novel-but-solvable-614319
and if you wish to have this article removed from our website, please get in touch with us


Generative artificial intelligence systems pose both new and traditional threats to MSSPs, but various practices and tools are evolving to address customers’ issues.

Ram Shankar Siva Kumar, who leads Microsoft’s AI Red Team and co-authored a paper released this week detailing case studies, insights, and inquiries regarding the simulation of cyberattacks on AI systems, informed CRN during an interview that by 2025, clients will expect detailed information from MSSPs and other experts concerning protection in the AI era.

“We can no longer rely on broad principles,” Kumar indicated.

“Present me with tools. Provide me with frameworks. Ground them in concrete lessons so that if I’m an MSSP contracted to red team an AI system … I possess a tool, I have examples, I have foundational prompts, and I’m performing the task effectively.”

The document authored by Kumar and his team, entitled “Lessons from red teaming 100 generative AI products”, offers eight lessons and five case studies derived from simulated assaults involving copilots, plugins, models, applications, and features.

Microsoft is not new to imparting its AI safety knowledge to the broader community.

In 2021, it launched the Counterfit open-source automation tool intended for testing AI systems and algorithms.

Last year, Microsoft unveiled the Pyrit (pronounced “pirate,” short for Python Risk Identification Toolkit) open-source automation framework aimed at identifying risks in GenAI systems as yet another example of the vendor’s community-focused efforts to enhance AI security.

Among the insights Microsoft provides to experts in this latest paper is the importance of understanding what AI systems are capable of and where they are utilized.

Threat actors do not necessarily need to compute gradients to exploit AI systems, as prompt engineering holds the potential to inflict damage, according to the document.

AI red teams cannot depend on safety metrics for unique and future categories of AI threats.

Further, teams should seek automation to encompass a broader spectrum of risk.

Red teams ought to rely on subject matter experts for evaluating content risks and consider models’ risk nature across different languages, as stated in the document.

Responsible AI risks are subjective and challenging to assess. Moreover, securing AI systems is an ongoing endeavor, with system regulations potentially evolving over time.

The case studies presented by Kumar’s team within the document include:

Security professionals will find that AI security and red teaming bring forth new strategies and techniques; however, well-known methods and practices remain relevant even in the AI era, Kumar noted.

“If you are neglecting to patch or update the library of an obsolete video processing library in a multi-modal AI system, an adversary is not going to breach the system.”

They’re going to log in,” he remarked.

“We aimed to emphasize that traditional security vulnerabilities simply do not vanish.”

This article originally appeared at crn.com


This page was generated programmatically; to view the article in its original format, you can visit the link below:
https://www.crn.com.au/news/microsoft-red-teaming-leader-ai-threats-novel-but-solvable-614319
and if you wish to have this article removed from our website, please get in touch with us

fooshya

Share
Published by
fooshya

Recent Posts

Unlock Big Adventures: Save More with Group Travel Deals!

This page was generated automatically; to read the article at its original source, you can…

2 minutes ago

Tondo Teacher’s Shocking Tale: A Near-Robbery of P300,000 in Cash and Gadgets!

This page was generated programmatically, to view the article in its original site you can…

6 minutes ago

Woodstock’s Creative Lens: The Center for Photography Revolution

This page was generated programmatically; to view the article in its initial location, you can…

18 minutes ago

Navigating a Smaller World: Lessons on Travel Vaccines from the COVID-19 Pandemic

This webpage was generated programmatically. To access the article in its original context, you can…

21 minutes ago

“Southbound Growth: Canberra CBD’s Bold Expansion Toward the Lake”

This page was generated programmatically, to access the article in its original setting you can…

27 minutes ago

Unveiling Gen Z’s Must-Have Gadgets: The Tech They Can’t Live Without!

This webpage was generated programmatically; to view the article in its initial location, you may…

27 minutes ago