Simplismart has secured $7 million in a Series A funding round led by Accel, with participation from Shastra VC, Titan Capital, and high-profile angels, including Akshay Kothari, co-founder of Notion.
The new capital will fuel R&D and growth for the company’s enterprise-focused MLOps orchestration platform.
“Building generative AI applications is a core need for enterprises today. However, the adoption of generative AI is far behind the rate of new developments. It’s because enterprises struggle with four bottlenecks: lack of standardised workflows, high costs leading to poor ROI, data privacy, and the need to control and customise the system to avoid downtime and limits from other services,” said Amritanshu Jain, Co-founder and CEO, Simplismart.
Founded by Amritanshu Jain and Devansh Ghatak in 2022, Simplismart is a cloud-based MLOps workflow orchestration platform that helps organisations fine-tune, deploy, and observe models at scale.
In the span of two years, and with less than $1 million in initial funding, Simplismart claims to have outperformed public benchmarks by developing the world’s fastest inference engine, which lets organisations run machine learning models faster while reducing costs.
“As GenAI undergoes its Cambrian explosion moment, developers are starting to realise that customising & deploying open-source models on their infrastructure carries significant merit; it unlocks control over performance, costs, customizability over proprietary data, flexibility in the backend stack, and high levels of privacy/security. Not only did Simplismart identify this opportunity early, but with a small team, they have already begun serving some of India’s fastest-growing AI-powered companies in production,” said Anand Daniel, Partner at Accel.
For instance, its software-based optimisations allow Meta’s AI model Llama 3.1 (8B) to run at over 440 tokens per second. Unlike many competitors that prioritise hardware enhancements or cloud solutions, Simplismart achieved this improvement through an MLOps platform designed specifically for on-premises enterprise environments and agnostic to the choice of model and cloud platform.