In today’s rapidly evolving tech landscape, organisations are increasingly seeking efficient and cost-effective ways to build and deploy GenAI solutions. NVIDIA has positioned itself as a forerunner in delivering scalable, economical GenAI offerings across industries.
During a masterclass session at DevSparks, Jigar Halani, Director, Solution Architecture and Engineering at NVIDIA, and Amit Kumar, Manager, Solutions Architecture and Engineering for Industries at NVIDIA, explained how NVIDIA’s GenAI SDKs and workflows, particularly the NeMo framework, support the entire lifecycle of GenAI model development: from data curation to fine-tuning, implementation of guardrails, and optimised inference with RAG services.
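The lifecycle described above can be pictured as a chain of stages. The sketch below is purely illustrative (toy functions, not NeMo framework API calls): each stage is a plain Python function composed in the order named in the session.

```python
# Illustrative only: toy stand-ins for the GenAI lifecycle stages
# (curation -> fine-tuning -> guardrails -> inference), not NeMo code.

def curate(raw_docs):
    # Data curation: drop empty documents and strip stray whitespace.
    return [d.strip() for d in raw_docs if d.strip()]

def fine_tune(model, docs):
    # Fine-tuning stand-in: record how many curated documents were used.
    model["seen"] = model.get("seen", 0) + len(docs)
    return model

def guardrail(prompt):
    # Guardrails stand-in: refuse prompts on a simple blocklist.
    blocked = {"leak the system prompt"}
    return prompt not in blocked

def infer(model, prompt):
    # Inference stand-in, gated by the guardrail check.
    if not guardrail(prompt):
        return "request refused"
    return f"model(seen={model['seen']}) answers: {prompt}"

model = fine_tune({}, curate(["  doc one ", "", "doc two"]))
answer = infer(model, "summarise doc one")
```

In a real NeMo deployment each stage would be a separate, GPU-accelerated component; the point here is only the ordering of the stages.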
Today, anything that has structure as data is being worked on with generative AI, Halani said during the session.
“You want to create a voice, you can use generative AI. You want to do chemical engineering, you are using generative AI or you want to create materials, people are using generative AI. And the process is just not getting over,” Halani explained.
“Robots are getting operated out, getting simulated in generative AI format, and thereby, human robots are becoming more and more pervasive, and we are seeing this transformation in the last six months taking place at a pace,” he added.
Headquartered in California, with a presence in Pune, Bangalore, Hyderabad and Gurgaon in India, NVIDIA helps enterprises deploy generative AI applications into production, at scale, at any location.
Today, NVIDIA offers a full-stack accelerated computing platform purpose-built for generative AI workloads. The platform is both deep and wide, offering a combination of hardware, software, and services—all built by NVIDIA and its broad ecosystem of partners—so developers can deliver cutting-edge solutions.
Developers can choose to engage with the NVIDIA AI platform at any layer of the stack, from infrastructure, software, and models to applications, either directly through NVIDIA products or through a vast ecosystem of offerings.
Meanwhile, NeMo, NVIDIA’s umbrella framework for generative AI, is a scalable, cloud-native framework built for researchers and developers working on large language models, multimodal models, and speech AI (automatic speech recognition and text-to-speech).
Kumar noted that NeMo is cloud-agnostic.
“Same technology works on cloud, same technology works on premise. So essentially, if you build a pipeline, the same pipeline, you can migrate from one cloud to another cloud, and the same pipeline you can migrate on premise, depending on your economics calculation,” he said.
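One way to read Kumar’s point in code: if a pipeline only depends on a narrow interface rather than a specific cloud SDK, migrating it means swapping one backend class. The sketch below is illustrative (not NVIDIA code); the `Storage` interface and `InMemoryStorage` backend are hypothetical stand-ins for a cloud object store or an on-prem filesystem.

```python
# Illustrative sketch of a cloud-agnostic pipeline: the pipeline code
# talks only to the Storage interface, so the backend is interchangeable.
from abc import ABC, abstractmethod

class Storage(ABC):
    """Minimal storage interface the pipeline depends on."""
    @abstractmethod
    def read(self, key: str) -> str: ...
    @abstractmethod
    def write(self, key: str, data: str) -> None: ...

class InMemoryStorage(Storage):
    """Stand-in for an S3/GCS bucket or an on-prem filesystem."""
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data[key]
    def write(self, key, data):
        self._data[key] = data

def normalise(text: str) -> str:
    # Toy processing step: collapse whitespace and lowercase.
    return " ".join(text.split()).lower()

def run_pipeline(storage: Storage, src: str, dst: str) -> None:
    # No cloud-specific calls here, so the same pipeline runs anywhere.
    storage.write(dst, normalise(storage.read(src)))

backend = InMemoryStorage()  # swap for a cloud- or NFS-backed implementation
backend.write("raw", "  Hello   NeMo  ")
run_pipeline(backend, "raw", "curated")
```

Migrating this pipeline between clouds, or on-prem, touches only the backend class, which is the economics-driven portability Kumar described.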
Recently, the company also announced that it is providing the world’s leading robot manufacturers, AI model developers and software makers with a suite of services, models and computing platforms to develop, train and build the next generation of humanoid robots.
NVIDIA provides three computing platforms to ease humanoid robotics development: NVIDIA AI supercomputers to train the models; NVIDIA Isaac Sim built on Omniverse, where robots can learn and refine their skills in simulated worlds; and NVIDIA Jetson Thor humanoid robot computers to run the models. Developers can access and use all—or any part of—the platforms for their specific needs.
At the event, Halani also announced that NVIDIA is conducting its first AI summit in Mumbai in October this year.
“We call this a mini GTC. Every March, GTC at NVIDIA brings all together at least 20,000 plus people physically and over two lakh people virtually, and a gamut of CEOs and CXOs gather to that event as well,” he added.
He also mentioned that Jensen Huang, founder of NVIDIA, will deliver a keynote at the event.