The Joint Parliamentary Committee’s (JPC) recently tabled report on the Personal Data Protection Bill, 2019 (2019 Bill) recommended expanding the scope of the law to bring non-personal data (NPD) within its ambit, requiring disclosures on the fairness of algorithms, and more. Once introduced, the law would require startups to rethink their data handling practices and factor in significant compliance costs.
To help players from the startup ecosystem decode the impact of the proposed data protection law, Ikigai Law organised an interactive virtual discussion, ‘Unscramble: Impact of India’s Data Protection Law on Startups’, on February 24.
The discussion saw wide participation from the startup community with representatives from leading fintech, edtech, healthtech, ecommerce and AI services companies.
Led by Sreenidhi Srinivasan, principal associate at Ikigai Law, the discussion unpacked the impact of the 2019 Bill on startups. It touched upon the challenges of categorising datasets into personal data and NPD, how compliance with the law could impede startups from expanding, and the IPR implications of certain disclosures, among other issues.
Regulating NPD And Other Proprietary Data Is Like Asking Coke To Reveal Its ‘Secret’ Formula
The personal data protection law has been in the works for close to five years now. Along the way, the government caught on to the idea of utilising anonymised data/business information (NPD) to derive economic value. Currently, the law proposes to regulate both personal data and NPD. It permits the central government to direct companies to share NPD for policy-making purposes. Sreenidhi sought views on the expanded scope of the law, the need for regulating NPD, and its impact on startups that rely on data moats for their competitive edge.
Ashutosh Senger, lead counsel of Florence Capital (a lending platform), said that while the inclusion of NPD is a step towards balancing access to data, the competitive advantage that proprietary data assets provide cannot be ignored. Hinting at the need to balance the government’s interest in putting NPD to use for socio-economic purposes with the intellectual property (IP) rights of businesses, he asked, “how do we promote the interest of the state as well as the whole ecosystem or the environment of entrepreneurship and innovation?”
Manuj Garg, cofounder of MyUpchar (a healthcare platform), added that data is a ‘key currency’ for all startups. “Everything that you generate out of the work that you have done is your intellectual property. It’s what gives you an advantage in the market and allows you to do what you’re doing. To say that this data has to then be made public, essentially kills the business.”
There are other provisions under the proposed law that could hurt businesses’ IP rights, like requiring them to disclose the ‘fairness’ of algorithms. There is no threshold to define ‘fairness’ yet, and even if such clarification comes in the future, the technical know-how underlying algorithms is not public knowledge.
“It is like asking Coke to reveal their secret formula – taking away its incentive to operate,” added Garg.
Megha Nambiar, senior legal counsel of HyperVerge (an identity verification and fraud detection platform), agreed that including NPD adds “a lot of uncertainty into the mix because this is essentially private data that is being encroached upon. And there’s a mandatory element to data sharing which again becomes a problem.” She alluded to the lack of incentives to promote data sharing.
Nambiar posed some questions to the room, wondering whether such data sharing could instead be voluntary, and how the data would be priced, valued and shared.
Sruthi Srinivasan, legal counsel of Uni Cards, agreed with the earlier speakers and questioned the need to regulate derivative/anonymised data sets.
Startups Will Face Immense Challenges Given the Way the Proposed Law Is Structured
In the run-up to the law, businesses would have to revisit their data related policies and tweak their budgets to comply with the law. Sreenidhi sought views on the compliance challenges that companies anticipate.
Aditya Shamlal, legal head of Zeta (a fintech startup), said the law in its current form is ‘compliance heavy’. And that “the real teeth of this law will be shown in the regulations and codes of practice issued on things like privacy by design, the right to be forgotten, and more. Until those regulations come up, you are operating in a nebulous space, where you’re making educated guesses on the basis of European experiences like the GDPR.” He believed that newer data centric startups would find it difficult to enter the market and expand when the law comes into force.
Participants also discussed the interplay between sectoral regulators and the upcoming data regulator. Sruthi from Uni Cards noted that many regulated entities in the fintech space are already incorporating the right to withdraw consent (envisaged under the proposed data protection law) in their standard data practices. She wondered how withdrawing consent would play out in a regulated space where players need to retain data sets as data logs in their systems for audit purposes.
Sruthi said that regulators will have to deal with questions on the quantum of information that can be withdrawn and retained. “In the future, customers may approach regulated entities and ask for a complete deletion of their information from the system. However, as a lending platform, you are required by the RBI to retain that information to validate the fact that you onboarded this customer,” she remarked.
Vinita Varghese, head of legal, Urban Company (a hyperlocal expert services platform), agreed with Sruthi and added that, “When you talk about implementing the law in the way it is currently drafted, it’s an investment of cost and time for startups and smaller organisations, while for large organisations that haven’t begun this process, it’s an absolutely monstrous undertaking.”
She said the proposed law requires companies, regardless of their size, to create data inventories so they can bucket data into different categories. This is not a strictly legal exercise, as it would involve every vertical in the organisation. Varghese also said that enabling startups to comply with the law would require awareness and education at a cross-functional level.
Building on the issue, Garg discussed how an average cofounder with no legal expertise would struggle to unpack what personal data, sensitive personal data and critical personal data mean and include. Different categories of data come with different thresholds of consent (heightened obligations to obtain explicit consent in the case of sensitive data). He mentioned how, under the telemedicine guidelines, if a patient initiates a consultation, the app need not take explicit consent from the patient; but under the proposed data law, it is “unclear how explicit consent will be managed.” Garg also pointed out the ambiguity in obtaining hardware and software certification (introduced to maintain data integrity), and the need to revisit such certification when the software is updated.
Panduranga Acharya, general counsel at Girnarsoft.com (an IT solutions provider), said that “Compliance cannot be hard; it can be expensive.”
He called for the inclusion of thresholds to exclude small businesses from the more expensive compliance requirements. Acharya also identified the difficulties with different consent standards for different categories of data, saying that “categorisation into sensitive, critical and non-sensitive data would lead to a multiplicity of consent mechanisms. Deploying such consent mechanisms could be very difficult for any business.”
Enhanced Safeguards For Children’s Data Come At a Heavy Cost for Edtech Businesses
Sachin Ravi, cofounder of Qshala (an edtech platform), spoke about how both China and India are working on policies to regulate the edtech sector. He noted that defining children as anyone below 18 is concerning for several players in the sector. He said that “guidance that comes from the government for edtech on how the data can be played with” could be beneficial.
Shatakrutu Saha, legal advisor at KidsChaupal (an edtech platform), remarked that since the rationale for setting the age at 18 flows from provisions in the Indian Contract Act and the Majority Act, it is not likely to be changed. He did, however, note that the Juvenile Justice Act understands the age of majority differently.
Saha shared an interesting insight on how a 16-year-old Indian beat world chess champion Magnus Carlsen: “Imagine if this child is restricted by the consent of his guardian, who opposes this talented boy’s decision on which courses to take online; that’s a problematic scenario.” He also spoke about the difference in approach between India and other global frameworks like the EU, where the age threshold for a child can be set between 13 and 16.
The lack of clarity on age-gating mechanisms and parental consent provisions was also discussed. Ravi from Qshala said that parents could explore creating pseudonymous accounts for children rather than giving up personal information. However, questions linger over the modalities of using anonymised data to create content and deliver services to both adult users and children. Saha from KidsChaupal noted that age-gating could be risk-based, explaining that such measures are already in place for products and services targeting specific age groups and demographics, such as dating and online gaming apps, where age-gating protects minors from the risks associated with those services.
Saha added that implementing new features could trigger the definitions of ‘harm’ or ‘tracking’ under the proposed law. The blanket prohibition on tracking/profiling children’s data and the broad definition of harm could create friction between product design and legal teams in edtech startups.
The Proposed Law Needs To Be Interoperable With Global Frameworks To Drive India’s Data Economy
Sreenidhi asked the room if they felt that the proposed law promotes interoperability with global data frameworks and how compliance with the law could impact access to global markets.
Neelakshi Gupta, legal associate at Qure.ai (a healthtech platform), posed several questions on global compliance: “What constitutes a cross-border data transfer? When will the new Standard Contractual Clauses (SCCs) apply? If we sign the SCCs with our customers, can we say that we are in compliance with the Schrems II judgment?”
In terms of global compliance strategies, Nambiar from HyperVerge said that the move towards data localisation in several data protection frameworks globally is concerning, as her business involves vendors in different geographies. She said that “if we see data localisation becoming more of a global trend, then having local data centres in each of these geographies is going to be very expensive for us.”
Varghese from Urban Company agreed that data localisation provisions will create compliance challenges because different countries seem to have different thresholds for localisation. She suggested developing a gold standard for data localisation that is business-friendly and compatible across jurisdictions.
Next Steps On Engaging With Startups’ Concerns
The costs of compliance and the lack of clarity around certain provisions are key concerns for startups. The government’s goal to position India as a major player in the digital economy should be accompanied by an enabling data protection regime. It should allow startups to scale up and tap into global markets.
Currently, the IT Ministry is discussing the recommendations of the JPC and has acknowledged the compliance challenges that come with the law. This is a great opportunity for startups and small businesses to engage with policymakers, call for wider public consultations on the law, and work towards mutually beneficial solutions.
*The quotes have been edited and refined to fit the article and the context.
*This article has been co-written by Shrinidhi Rao and Kanupriya Grover, associates at Ikigai Law.