Reimagining Gaming: Is Nvidia’s Vision for AI the Future We Need?


This page was generated automatically. To access the article in its original form, you can follow the link below:
https://www.newsweek.com/entertainment/opinion-nvidia-showed-me-future-ai-gaming-im-not-convinced-2003932
If you wish to remove this article from our website, please reach out to us.


Artificial intelligence is everywhere these days, and it's making deeper inroads into gaming than ever. Sony and Microsoft have only recently begun integrating AI and machine learning into their consoles, with features such as the PS5 Pro's PSSR, but PC hardware maker Nvidia has been in this game far longer. The company transformed PC gaming with its machine learning-based DLSS, and during a recent press tour it revealed its ambitions for AI in the future of video games.

At a press event in Bangalore, Nvidia gave me a look at a range of generative AI demos for gaming and content creation. Many of them had already been shown at industry events such as Computex, but this was my first chance to try them hands-on. The demos, running on what Nvidia calls RTX AI PCs, used both existing and new large language models (LLMs) running locally on Nvidia's RTX graphics cards. Together, these features make up Nvidia ACE, a toolkit for creating and deploying digital characters in interactive media.

Each demo uses slightly different technology to support content creation and gaming in different ways. In one, I talked directly to an in-game NPC that guided me through the game's progression system; another searched documents and files on my system far faster than standard Windows search. One uses generative AI to produce imagery within a digital landscape, while another is essentially ChatGPT dressed up as an AI NPC.

It's a lot to take in, so I decided to focus on what matters most to me: gaming.

Mecha Break landing bay combat
Two mechs battle each other in a landing bay in Mecha Break. The game was used to demonstrate Nvidia's AI technology, but the final version won't include AI support.

Amazing Seasun Games

AI NPCs are the shiny new thing Nvidia wants to highlight, and we're starting to see their first practical applications in current and upcoming games. At the event, I spoke with an AI NPC in Amazing Seasun Games' multiplayer title Mecha BREAK. Simply put, you can talk to Martel, the NPC mechanic responsible for your mech suits, in natural language. Ask Martel about the objectives for the upcoming mission, and she answers promptly. Ask her which is the fastest mech in your inventory, and she pulls it up, ready for further customization, which you can also direct by voice. You do need to use lore-consistent terminology when talking to her, but it's an impressive demonstration.

Is this a feature I see myself using anytime soon? No, not once the novelty wears off after the first few interactions. The demos look fascinating, but gamers prioritize function. I'd much rather use the input methods I'm used to, like a mouse and keyboard, to customize my in-game character and experience than depend on an NPC I have to talk to. If I need help, I'm happy to search the internet, where countless guides and forums offer the information I'm after, free of whatever constraints developers build in.

Perfect World Games' Legends tech demo offered a similar experience. In it, you converse with Yun Ni, a member of a tribal community in the jungle. You can speak to Yun Ni naturally, and she can see and identify real-world objects through your computer's webcam. There's no game wrapped around the demo itself, but it's impressive how its creators have implemented GPT-4 within a controlled environment, with specific instructions defining the world Yun Ni inhabits. Ask her about the best cars from Tesla, and she may reply, "What's a car?" How would she know? She lives in a jungle. Hold up something she might recognize, like a knife, and she'll happily discuss its uses in the wilderness.

Well, she sounds excited, but I can't read any real feeling in her voice. She sounds like one of those automated text-to-speech TikTok narrations, so you can imagine how lifeless her tone is. The same goes for Martel in Mecha BREAK, and I keep asking myself why I should find any of this compelling.

There's a slight delay between your question and Yun Ni's response as the demo sends data to GPT-4 servers. You're only waiting a few seconds, but it's enough to remind you that all of this is exactly what it looks like: a gimmick. Microsoft also showed a demo of its Copilot AI helping players inside Minecraft earlier this year, though we have yet to see the feature land in the current version of the game.

There's an argument to be made that the new wave of AI NPCs, interacted with primarily by voice, could be a boon for accessibility. Games aren't yet interactive enough to be navigated purely through natural-language voice input, but Nvidia ACE is laying the groundwork for that future. One could also argue that while these features may not appeal to hardcore gamers, they could attract casual users on lower-end hardware.

On the subject of hardware, Nvidia technical product marketing manager John Gillooly tells us that all of these models run locally. We can corroborate this: every demo except Legends ran on systems that weren't connected to the internet, using consumer hardware such as the RTX 4080. The only cost? VRAM. We weren't given exact figures, but don't expect long, meaningful conversations with these NPCs on an RTX 4050.

Curiously, one demo notably absent from the event was Project G-Assist. Nvidia showed the AI assistant in prototype form at Computex earlier this year, but I didn't see it running here. For the unfamiliar, G-Assist is an AI-driven chatbot fed information about your system's specifications and the game you're currently playing. It works much like Microsoft's Copilot for gaming: you ask questions about the game and get contextual answers. G-Assist is also meant to help gamers optimize their systems with minimal effort. The closest equivalent I saw at the event was ChatRTX.

ChatRTX is a local tool that lets you choose from a range of publicly available models, including Llama 2, Mistral 7B, and CLIP, and configure them for your system. Essentially, you build a personal chatbot with access to your files, then chat with it to find documents, images, and key system information. It's a localized version of what Apple is promoting with Apple Intelligence, its big AI overhaul for the iPhone, except you're free to tailor it to your heart's content.

Impressive as these developments are individually, they lack coherence. With Microsoft likewise promising a raft of context-aware AI features in its upcoming PCs and OS releases, why should I care about Nvidia's claims? And if most of these features are locked to Nvidia GPUs, will developers really build them into their games and leave AMD and Intel users behind?

A host of questions hangs over the future of AI in gaming and general computing, and every company offers a slightly different answer. But even if hardware and software makers can align their goals and present a unified vision to the end user, will consumers even care? Ultimately, I want to use my PC for gaming, and I want my hardware to do one thing well: run my games smoothly. I play games to avoid social interaction in the real world, and I'm not about to start conversations with video game characters. At least not until they feel more human, which will probably take a long time.


