India needs multimodal AI models that consider diverse Indian cultures and sensitivities, moving away from Western-centric models, executive Mohit Sewak said on Wednesday.
Sewak, AI Researcher and Developer Relations, South Asia, NVIDIA, said this will require large-scale, multilingual datasets and synthetic data generation.
Indian LLMs should be multimodal with voice as an essential element, he said while addressing the Global AI Summit.
“India has around 23 official languages, but a Bharat that I am proud to represent, speaks around 10,500 unique dialects across 123 unique languages,” said Sewak.
“The largest model that I am aware of can deal with 100 languages,” he said.
Sewak said his Bharat speaks more comfortably than it can write, and emphasised the need for large multilingual models. “We are talking about tens of trillions of tokens of data across these languages if we want a real Indian LLM which can actually do the type of tasks that we expect it to do,” he said.
Underlining the importance of taking the Indian culture into consideration while building these models, Sewak said sensitivities in Indians may be different from those of the Western world, on which most LLMs are trained.
Mere translation from one language to another does not solve the problem, he said, adding that if India wants to create a foundation model for Bharat, it will not happen in a single step.
“We are now in an iterative cycle. We’re not talking about one foundation model for India, but a series of hierarchically more complex, more sophisticated foundation models for India,” Sewak noted.
He said it is important to ensure alignment of cultural diversity in India, and Indian sensitivities, with Indian LLMs.
Most LLMs are trained on Western culture and preferences, he said, adding that it is important to use Bharat-specific data to train Indian LLMs.
Relying on collected data is not enough, and building synthetic data is the need of the hour, he emphasised. “Research has proved that for LLMs and large foundation models, you can generate very good quality synthetic data that could be used to train them optimally,” he said.
Speaking about the threat of LLMs taking away employment, Sewak said scaling of LLMs is imminent and unstoppable. “Trying to stop the revolution will prove more dangerous to human jobs than finding out alternative opportunities that these trends lay before you,” he said.
One should use the demographic dividend to open up new streams where India can scale, he added.