Chennai: The Office of the Principal Scientific Adviser (PSA) convened a high-level roundtable on techno-legal AI regulation to advance responsible and innovation-aligned governance frameworks ahead of the India AI Impact Summit 2026.
The Office of the PSA organised the roundtable on December 22, 2025, in collaboration with the iSPIRT Foundation and the Centre for Responsible AI at IIT Madras. Prof. Ajay Kumar Sood, Principal Scientific Adviser to the Government of India, chaired the session. The event also served as an official pre-summit engagement for the India AI Impact Summit 2026.
Senior policymakers, technologists, legal experts, and industry leaders joined the discussions. Participants included Dr. Preeti Banzal, Adviser and Scientist ‘G’ in the Office of the PSA; Kavita Bhatia, Scientist ‘G’ and Group Coordinator at the Ministry of Electronics and Information Technology; Hari Subramanian of the iSPIRT Foundation; and Prof. Balaraman Ravindran, Head of the Centre for Responsible AI at IIT Madras. Academics and startup leaders from across the AI ecosystem also took part.
To set the context, Dr. Banzal outlined India’s approach to techno-legal AI regulation. She emphasised practical implementation, sustained capacity building, and global cooperation. According to her, India must present clear and workable pathways that balance innovation with accountability.
AI governance roundtable focuses on privacy and accountability
In his keynote address, Prof. Sood said India stood ready to adopt a techno-legal approach to AI governance. He stressed the need to embed legal and regulatory principles directly into AI systems. Such integration, he said, would ensure accountability, transparency, data privacy, and cybersecurity by design. He urged participants to explore all viable models for building a strong governance framework.
Co-moderators Hari Subramanian and Prof. Ravindran then highlighted key technical and policy challenges, including data protection, leakage risks, differential privacy, accuracy, and system throughput. They flagged trade-offs between privacy safeguards and performance, and stressed equity, data sovereignty, and broader economic and strategic concerns.
Experts further discussed the need for strong consent and privacy mechanisms across AI training, inference, and deployment. The deliberations covered alignment with the DEPA (Data Empowerment and Protection Architecture) framework, compliance-by-design architectures, and regulatory responses to non-deterministic AI systems. Participants also examined AI-generated content and related copyright issues. They noted that model robustness must align with technical and socio-economic realities to keep solutions practical for users.
Concluding the session, Dr. Banzal said the insights would feed into the Safe and Trusted AI Chakra of the India AI Impact Summit 2026. She added that the Office of the PSA would publish an explanatory white paper on techno-legal AI governance based on the roundtable’s recommendations.