[Product Roadmap] With over 50,000 connected assets and 200M daily data points, how Zenatix is leveraging tech to bring energy efficiency


Way back in 2014, Dr Amarjeet Singh, a professor at IIIT Delhi, realised that energy as a space was a big opportunity.

More so when the sector is broadly divided into two parts: the supply side, which involves the generation and distribution of electricity (a largely regulated market), and the consumption side, which involves a wide variety of consumers such as residential, commercial, and industrial users, each with different consumption patterns and requirements.

“While the supply side is much more consolidated (with a small number of large players), it is highly regulated (though the trend is towards deregulation). Which is why we thought of focusing on the consumption side and developing a robust and scalable solution to be ready when the supply side gets deregulated,” says Amarjeet. 

He realised there was no real-time data in the sector, besides monthly electricity bills. When these bills exceeded budgeted costs, there was no way for consumers to know how to control these costs.

For several businesses, energy is among the top three expenses. Real-time data monitoring, with the loop closed through analytics-driven automated control and ticketing, is one way to help tackle these costs.

This thought process led to the birth of Zenatix. The energy-tech startup provides an end-to-end Internet of Things (IoT) stack that is developed and maintained in-house, with components either built from the ground up or extended from multiple open-source projects.

The early days 

“We started back in 2014, when platforms like AWS-IoT and Azure IoT did not exist. As a result, we had no option but to build everything ourselves, which, in hindsight, has been a boon. The easier route of using some other platform would have bound us with their development cycle as well as associated costs involved,” says Amarjeet. 

Zenatix’s end-to-end stack involves its software running on sensors and control devices (commonly termed firmware), which communicates with the software running on an edge gateway, which in turn communicates with the stack running on the cloud. The cloud stack is broadly divided into two components (a rough sketch of this flow follows the list):

  1. IoT stack – takes care of application data and its associated processes (e.g. how to display temperature data and/or run analytics on it)
  2. Device management stack – takes care of simple installation of the sensor/control nodes and the gateway, and their remote commissioning and maintenance
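
A rough, hypothetical sketch of this flow is below: a gateway batches readings coming from firmware, and the cloud side splits them between the IoT stack (application data) and the device management stack (node health). The field names, device IDs, and the heartbeat convention are illustrative assumptions, not Zenatix's actual protocol.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch: the field names and message shapes below are
# illustrative assumptions, not Zenatix's actual firmware/gateway protocol.

@dataclass
class SensorReading:
    device_id: str   # sensor / control node reporting the value
    metric: str      # e.g. "temperature_c", "energy_kwh", or "heartbeat"
    value: float
    ts: float        # unix timestamp

def gateway_batch(readings):
    """Edge gateway: bundle raw firmware readings into one cloud upload."""
    return {
        "gateway_id": "gw-001",
        "uploaded_at": time.time(),
        "readings": [asdict(r) for r in readings],
    }

def cloud_ingest(batch):
    """Cloud side: route application data vs. device-health data."""
    app_data, health_data = [], []
    for r in batch["readings"]:
        # The device management stack cares about node health/heartbeats;
        # the IoT stack cares about application metrics like temperature.
        (health_data if r["metric"] == "heartbeat" else app_data).append(r)
    return app_data, health_data

if __name__ == "__main__":
    now = time.time()
    batch = gateway_batch([
        SensorReading("ac-sensor-12", "temperature_c", 24.5, now),
        SensorReading("ac-sensor-12", "heartbeat", 1.0, now),
    ])
    app, health = cloud_ingest(batch)
    print(json.dumps({"app_data": app, "device_health": health}, indent=2))
```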

Today, the stack manages more than 50,000 connected assets and handles more than 200 million data points daily. It consists of multiple frameworks that allow business teams to deliver customised solutions (as per different customer requirements) on their own, without coming back to the technology team.

The first product 

It all started as a simple stack with a monolithic architecture. Over time, it has evolved into a service-oriented architecture wherein all services are dockerised and managed through a Kubernetes cluster, an open-source system for automating the deployment, scaling, and management of containerised applications.

“The first product was built by bringing in one of my former students as a full-time employee, and convincing two of my then-current students to do a full-time six-month internship in their final semester with us. This was indeed the best team we could have put together with the constrained resources. We had the first version of our product within four months of starting. Of course, that product has gone through a significant overhaul since then,” says Amarjeet. 

At IIIT Delhi, Amarjeet had been researching and building in the energy efficiency space for almost five years before starting Zenatix. As part of his research, he had played around with several open-source systems that were built as academic research projects catering to this specific segment. 

A headstart

“We got a headstart by extending one such open-source project from the University of California, Berkeley, which we found to be reasonably stable and feature-rich during our research days,” he adds. 

The first, skeletal version of the product was an extension of this open-source project. The team then developed suitable visualisations as well as basic analytics as the minimum viable product (MVP), to quickly launch for its initial customers.

After narrowing its focus to the consumption side, the team decided not to cater to the residential consumer segment, since energy prices are highly subsidised for that segment. 

After initially catering to different verticals across industrial and commercial clients with a product that was rich in dashboard visualisation and analytics, Zenatix’s team realised the following:

  1. Visualisations could be alluring, but their appeal tapered off; customers stopped caring about them after some time.
  2. Analytics by themselves do not result in savings. 

“Back then, we were creating a thermal profile of a building to optimise the start and stop times of air conditioning. The predicted optimal start and stop times were then notified to someone, since automated control did not exist in these buildings. Thus, the savings realisation was dependent on whether the person (who was notified) acted on the message or not, which most often did not happen,” reveals Amarjeet. 

The team then realised how imperative automated loop closure was; it could not rely on visualisation and analytics alone. Thus, automated control and an analytics-based ticketing system were brought into the product offering. 
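
What such loop closure could look like is sketched below under stated assumptions: the device ID, the schedule rule, and the control and ticketing stubs are invented for illustration and are not Zenatix's API. The analytics output drives a control command directly, and a ticket is raised only when the commanded action fails.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of analytics-driven loop closure; the controller and
# ticketing stubs below are illustrative assumptions, not Zenatix's actual API.

def send_command(device_id: str, state: str) -> bool:
    print(f"command {state} -> {device_id}")
    return True  # pretend the device acknowledged the command

def raise_ticket(device_id: str, message: str) -> None:
    print(f"ticket for {device_id}: {message}")

def predicted_ac_stop(store_close: datetime) -> datetime:
    """Analytics step: derive an AC stop time from a (stubbed) thermal profile."""
    # A real model would use the building's thermal inertia; as a placeholder,
    # switch off 30 minutes before closing and coast on residual cooling.
    return store_close - timedelta(minutes=30)

def close_the_loop(now: datetime, ac_is_on: bool) -> None:
    """Control step: act automatically instead of notifying a person."""
    stop_at = predicted_ac_stop(now.replace(hour=21, minute=0))  # closes at 21:00
    if ac_is_on and now >= stop_at:
        if not send_command("ac-unit-7", "OFF"):  # automated control
            # Ticketing step: escalate to a human only when automation fails.
            raise_ticket("ac-unit-7", "AC did not switch off on schedule")

if __name__ == "__main__":
    close_the_loop(datetime(2024, 6, 1, 20, 45), ac_is_on=True)
```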

“This closed loop control was easier to develop for smaller buildings (or retail chains) that rely more on distributed air conditioning (eg split ACs) than for larger buildings relying on centralised air conditioning. Furthermore, this small building segment had a real problem which our product was specifically addressing – their stores were geographically distributed, and standard operating procedures (SOPs) were not adhered to leading to significant leakages in energy consumption,” says Amarjeet.

Product-market fit 

This led to product-market fit: a closed-loop automated control solution serving the small and medium building segment. With his research background in the same space, and as Zenatix’s CTO, one of the core factors Amarjeet always kept in mind was that the architecture should account for long-term scalability and flexibility. 

“The open-source project we started off with had several of these necessary ingredients that we find very relevant over several years of our technical journey as well. A time series database to efficiently store and query large volume of sensor data; a suite of frameworks for allowing business teams to deliver customisations without any significant development from the technology team; and associating (thoughtful and suitable) tags with the collected data that will then aid in wide variety of analytics at a future stage,” says Amarjeet. 
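
As a rough illustration of the first and last of those ingredients, the snippet below writes a reading in an InfluxDB-style line protocol, where tags such as site and asset type travel with every point so that later analytics can slice the data freely. The measurement, tag, and field names are assumptions for illustration, not Zenatix's schema.

```python
import time

# Rough illustration only: the measurement, tag, and field names below are
# assumptions, not Zenatix's schema. The string follows InfluxDB-style line
# protocol: measurement,<tags> <fields> <timestamp_ns>.

def to_line_protocol(site: str, asset: str, metric: str, value: float) -> str:
    tags = f"site={site},asset_type={asset}"
    ts_ns = time.time_ns()
    return f"energy_meter,{tags} {metric}={value} {ts_ns}"

if __name__ == "__main__":
    # Tagging by site and asset type means a later query can aggregate, say,
    # "all split ACs across all stores" without reshaping the stored data.
    print(to_line_protocol("store-042", "split_ac", "power_kw", 1.8))
```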

This first version, evolved from the open-source project, was a monolithic architecture, which was necessary to ensure simplicity in product development and enhancement. Over time (within the first four years), the team realised that they had added considerable complexity to the overall stack, and that it now needed to make the journey from a monolith to a service-oriented architecture. 

“Another real crux of how we have architected our cloud stack from the beginning, and that has lived with us throughout our journey, is to keep in mind that we will have a small number of very large enterprise customers – each of whom will require custom features to be developed as per their business requirements,” explains Amarjeet.

Correspondingly, the key for the team has been to build suitable frameworks so that business teams can deliver these customisations without having to come back to the technology team for new development. 

The stack frameworks 

Today, the stack consists of the following frameworks that support greater customisation as well as templated workflows:

  1. Metrics framework – Allows the analytics/program management team to write custom analytical scripts using the APIs provided by the framework, which then runs these scripts at the appropriate times (a minimal sketch follows this list). 
  2. Issue framework – Internal teams write custom ticketing logic for creating suitable tickets from the health data collected by Zenatix’s own hardware (to be serviced by the field technician and partner ecosystem). They also create customer-specific issues by raising tickets when anomalies are found with customers’ assets.
  3. Reports framework – Any customised report can be easily added to the dropdown on the reports screen provided to customers.
  4. Custom Dashboard framework – Program management teams customise dashboards (widgets/blocks/pages within them) to provide custom UI/UX to different customers and different user personas within each customer.
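
The metrics framework mentioned in item 1 can be pictured as a small registry: analysts register custom scripts through a decorator, and the framework runs them over the collected data. Everything below (the decorator, the data shapes) is a hypothetical sketch, not Zenatix's framework API.

```python
# Hypothetical sketch of a metrics framework: analysts register custom
# analytical scripts and the framework decides when to run them. The
# decorator and data shapes are illustrative, not Zenatix's API.

from typing import Callable, Dict, List

REGISTRY: Dict[str, Callable[[List[float]], float]] = {}

def metric(name: str):
    """Decorator used by the analytics team to register a custom script."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@metric("daily_energy_kwh")
def daily_energy(readings: List[float]) -> float:
    # Custom logic lives with the business/analytics team, not the core stack.
    return sum(readings)

@metric("peak_demand_kw")
def peak_demand(readings: List[float]) -> float:
    return max(readings)

def run_all(readings_by_asset: Dict[str, List[float]]) -> None:
    """Framework side: execute every registered script over the data."""
    for asset, readings in readings_by_asset.items():
        for name, fn in REGISTRY.items():
            print(f"{asset} {name} = {fn(readings):.2f}")

if __name__ == "__main__":
    run_all({"store-042": [1.2, 1.8, 2.4, 1.1]})
```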

“As more customers demanded customisations in the product that we had built, it became increasingly clear that the stack needed more feature-rich frameworks that allow for such customisations to be done easily, without involving the tech team for significant new developments,” explains Amarjeet.

In the early stages, these customisations were done in the primary product offering itself, making the stack overly complex. This led to another realisation: each of these custom framework engines should be its own app, and the team needed to move to a service-oriented architecture. 

Some of these customisations also brought in newer, more relevant databases, such as Druid, to serve real-time update needs rather than querying the time series database. 

Building on multiple systems 

“As we moved to a service-oriented architecture, it also became amply clear that managing such a large number of dockerised services would be tricky with the lean team we have. We therefore took an early call to move all the dockerised services into a Kubernetes cluster. While it was tough early on as no one in the team was an expert in Kubernetes, as we stabilised the infrastructure, we felt it was the right design choice considering how it now automatically takes care of optimally using the resources as well as ensuring the uptime of the services,” explains Amarjeet. 
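
For a lean team, much of that day-to-day assurance can be scripted against the cluster itself. The sketch below uses the official Kubernetes Python client to flag deployments whose ready replicas fall short of what was requested; the namespace name is an assumed example, and this is not Zenatix's tooling.

```python
# Sketch only: requires the official client (`pip install kubernetes`) and a
# valid kubeconfig. The namespace is an assumed example, not Zenatix's setup.
from kubernetes import client, config

def report_unhealthy_deployments(namespace: str = "iot-services") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            # A dockerised service is not fully up; worth a look (or a ticket).
            print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")

if __name__ == "__main__":
    report_unhealthy_deployments()
```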

As for databases, the team realised early on that bringing in the right databases for the appropriate functionality of the stack is more important than sticking with one standard database. “Across our stack, the team uses different types of databases, from SQL to NoSQL to time series DB and real-time DB, to serve different requirements across the stack,” says Amarjeet. 

He adds that an API-first approach has been another key thought process, allowing the team easy integrations with third-party systems (for both ingress and egress) and supporting small custom requirements as standalone services.
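
A minimal sketch of that API-first idea is below, using FastAPI purely as a stand-in web framework (the article does not name Zenatix's stack, and the routes and schema here are invented): the same small, public contract serves both its own gateways and third-party ingress/egress integrations.

```python
# Sketch of an API-first ingestion endpoint using FastAPI as a stand-in
# framework (an assumption; the article does not name Zenatix's web stack).
# Run with: uvicorn ingest_api:app --reload   (assuming this file is ingest_api.py)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Reading(BaseModel):
    device_id: str
    metric: str
    value: float
    ts: float

@app.post("/v1/readings")
def ingest(reading: Reading):
    # Ingress: Zenatix gateways and third-party systems hit the same contract,
    # so integrations need no special-case code paths.
    return {"accepted": True, "device_id": reading.device_id}

@app.get("/v1/readings/{device_id}/latest")
def latest(device_id: str):
    # Egress: partners pull data out through the same public API surface.
    return {"device_id": device_id, "metric": "power_kw", "value": 1.8}
```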

“Wireless-first on the hardware has been key to ensuring that the systems we develop are very easy to install, especially in a brownfield or retrofit environment. We brought in WiFi very early in our stack, and it has now been more than three years since we moved away from WiFi to OpenThread – a low-power wireless mesh (allowing us to have sensors that run on batteries) open sourced by Google,” says Amarjeet. 

Future plans 

Having ensured customisations for large enterprise customers, Zenatix is now gearing up to serve the long tail of small customers through an ecosystem of partners. 

This involves a mindset change of supporting pre-defined templates wherever possible and not getting into complex no-code customisations. “We are currently looking at different parts of our stack with how they can be templated for this very large partner ecosystem,” says Amarjeet. 
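
One way to picture that templating, as a hypothetical sketch: partners choose from a fixed set of solution templates and supply only site-specific parameters, instead of requesting bespoke development. The template names and fields below are invented for illustration, not Zenatix's product configuration.

```python
from typing import Optional

# Hypothetical sketch of template-driven onboarding for partners; the
# template names and fields are illustrative, not Zenatix's product config.
TEMPLATES = {
    "retail_split_ac": {
        "widgets": ["energy_trend", "ac_runtime", "open_tickets"],
        "default_sop": {"ac_off_after_close_min": 30},
    },
    "cold_storage": {
        "widgets": ["temperature_band", "door_open_alerts"],
        "default_sop": {"max_temp_c": 8},
    },
}

def instantiate(template_name: str, site_id: str, overrides: Optional[dict] = None) -> dict:
    """Partners fill in site specifics; the template itself stays fixed."""
    cfg = dict(TEMPLATES[template_name])
    cfg["site_id"] = site_id
    cfg["sop"] = {**cfg.pop("default_sop"), **(overrides or {})}
    return cfg

if __name__ == "__main__":
    print(instantiate("retail_split_ac", "dubai-store-7", {"ac_off_after_close_min": 20}))
```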

“Furthermore, after proving scale and reliability in the Indian markets, we recently launched our solution in the Middle East markets. We also intend to grow into Europe and US markets by the end of this year – largely driven by the partner ecosystem. This further involves doubling down on template philosophy and developing some features that these geographies will require (and that were not so relevant in the Indian context),” says Amarjeet. 

“Broadly, we intend to keep moving forward on the “hardware agnostic software” philosophy and reach a point where the global partner ecosystem can bring in their own hardware (certified and locally available in their geography). Our software can then run on that diverse hardware ecosystem to put everything together in a seamless plug and play manner,” concludes Amarjeet. 


