When Device Security Becomes State Control
Rudraksh Lakra / Feb 16, 2026

Rudraksh Lakra is a Research Fellow with the Applied Law and Technology Research team at Vidhi. This post was written in their personal capacity.
Recent reports of a draft proposal to overhaul smartphone security rules in India raise serious questions about the country’s approach to digital security, privacy, and regulation. The proposed standard, the Indian Telecom Security Assurance Requirements for Mobile User Equipment, is presented as a measure to strengthen device security and curb digital fraud. These are legitimate policy objectives. However, the methods chosen to pursue them appear misaligned with technical realities, constitutional principles, and established norms of sound regulatory practice.
Following the media coverage, the Ministry of Electronics and Information Technology (“MeitY”) denied that the government is seeking access to companies’ source code. That reassurance is difficult to square with Section 6.17 of the proposed standard, which requires source-code-level review as part of assessing a company’s security development lifecycle. By contrast, regulatory approaches in other jurisdictions, such as the European Union’s Ecodesign framework, emphasize technical documentation and conformity assessments rather than direct access to proprietary code. Similarly, internationally recognized information security standards, including ISO/IEC 27002 and ISO/IEC 15408 (the Common Criteria), rely on process assurance and independent evaluation without mandating source-code disclosure.
Moreover, even if the ministry’s statement on source code was made in good faith, the core concern remains. The draft marks a shift toward deeper, more continuous state involvement in the design, maintenance, and updating of smartphone software. A closer reading of the standard reveals the extent of this reach: it extends into the operating system, the application environment, hardware security features, and the internal processes through which software is built and patched. Vendors may be required to submit detailed documentation of their security architecture, undergo structured vulnerability testing, and provide technical inputs. The standard also introduces obligations around system logging, update governance, vulnerability analysis, and device-level security controls. Framed as safeguards, these measures shift regulatory scrutiny from the network edge into the core design and everyday functioning of consumer devices, expanding state reach into the private digital spaces of millions of users.
Another key concern is the lack of transparency. The public learned of the proposal through media leaks rather than an open consultation. There is no clear record of who was consulted or what evidence underpins the proposal. This is troubling for a policy that affects a vast number of users and a global technology ecosystem. Decisions of this scale should emerge from transparent debate, yet digital rules are increasingly appearing with minimal explanation and rushed timelines for scrutiny. This pattern is consistent with a broader trend in India, where meaningful public consultation is the exception rather than the norm. In a policy domain as complex and technical as this one, consultation is essential to draw on independent expertise and practical industry knowledge that can improve regulatory design.
Policies that embed privileged visibility into internal system operations can expand state power in the name of security without necessarily delivering proportionate safety gains. The proposed standard mandates detailed logging of security events, including login attempts, configuration changes, and app activity, all of which must be retained for at least 12 months. Even when content is not captured, metadata of this kind, collected at scale over long durations, can reveal an equally intimate, and sometimes more intimate, picture of a person’s life. Patterns of device use, associations, habits, and movements can be inferred from such records.
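To see why this matters in concrete terms, consider a minimal sketch of what long-retention metadata enables. Everything below is hypothetical, illustrative data, not drawn from the draft standard: even bare timestamps of unlock events, with no content at all, chart a person’s sleeping and waking hours once accumulated over months.

# Illustrative sketch only: event format and data are hypothetical, not
# taken from the draft standard. It shows how retained "content-free"
# metadata can expose a person's daily rhythms.
from collections import Counter
from datetime import datetime

# A year of hypothetical device-unlock timestamps (truncated for brevity).
events = [
    "2026-01-05 06:42",
    "2026-01-05 23:10",
    "2026-01-06 06:45",
    "2026-01-06 23:05",
    # ... thousands more entries across 12 months of retention ...
]

# Tallying unlocks by hour of day reveals wake and sleep times; gaps and
# deviations can indicate travel, illness, or changes in routine.
unlocks_per_hour = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts in events
)
for hour, count in sorted(unlocks_per_hour.items()):
    print(f"{hour:02d}:00  {'#' * count}")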
Large, centralized or standardized stores of sensitive system and logging data also become attractive targets for attackers. At the same time, additional oversight requirements for software can delay security patches, leaving devices exposed to known flaws. Together, these effects increase both surveillance risk and technical vulnerability. These measures therefore risk failing the constitutional privacy test set out in Puttaswamy I (2017) and Puttaswamy II (2018), which require that any infringement of privacy be necessary and proportionate.
Mandating extensive device-level logging, long-term data retention, and deep visibility into system operations may go beyond what is strictly required to achieve cybersecurity objectives. Many of these goals could be met through less intrusive alternatives, such as aggregated or anonymised incident reporting.
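As a rough illustration of that alternative (all field names and values below are hypothetical, and neither data model is taken from the draft standard), the same security signal can be conveyed as aggregate counts without retaining identifiable, timestamped trails for every device:

# Illustrative sketch only: a hypothetical contrast between retaining raw
# per-device security logs and reporting aggregated, de-identified counts.
from collections import Counter

# Intrusive model: identifiable per-device entries retained for 12 months.
raw_logs = [
    {"device_id": "A13F", "event": "failed_login",  "ts": "2026-02-01 09:14"},
    {"device_id": "A13F", "event": "failed_login",  "ts": "2026-02-01 09:15"},
    {"device_id": "9C02", "event": "config_change", "ts": "2026-02-01 11:02"},
]

# Less intrusive alternative: a periodic aggregate report that drops device
# identifiers and timestamps but still surfaces fraud and intrusion trends.
aggregate_report = Counter(entry["event"] for entry in raw_logs)
print(dict(aggregate_report))  # -> {'failed_login': 2, 'config_change': 1}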
These proposals form part of a broader sequence of recent digital security measures. The SIM-binding directive issued by the Department of Telecommunications in November 2025 sought to tie certain communication services to continuous SIM-based verification, reshaping how identity is enforced at the network layer. Around the same time, authorities sought to require smartphone manufacturers to pre-install a government application, the Sanchar Saathi app, framed as a tool for fraud prevention and cybersecurity. That move prompted swift public and industry backlash and was withdrawn within roughly a day.
The current smartphone security framework goes further, embedding compliance obligations into the operating system and hardware security architecture. Taken together, these steps illustrate a vertical expansion of regulatory intervention, moving from network identity to applications and now into the core design of personal devices.
In India, digital authoritarianism must be understood as part of a broader pattern of competitive authoritarianism. Formal democratic institutions continue to operate, yet the state is steadily expanding its structural influence over information flows, digital infrastructure, and private actors through regulatory tools. Digital security often serves as the entry point for embedding this influence into technical systems.
These measures are framed as neutral, technical, and protective, which helps manufacture public consent and dampen resistance, even as they expand state capacity. In practice, they increase the government’s ability to shape how technologies function and what kinds of data they generate. At the same time, this trajectory is not uncontested. The swift rollback of the Sanchar Saathi app shows that public criticism, media scrutiny, industry resistance, and legal challenges can still curb state overreach.
The risk of gradually normalizing intrusive controls should therefore be taken seriously. Technical mandates introduced for cybersecurity can later be repurposed for monitoring or indirect pressure. Logging frameworks can evolve into tools for profiling, while update governance mechanisms can become levers over how and when devices receive critical changes. Once these controls are built into the operating system and tied to market access, they are difficult to unwind. Clear legal limits, independent oversight, and sunset clauses are essential to prevent security policy from quietly transforming into a system of routine digital supervision.
In an earlier era, constitutions protected the privacy of our homes, papers, and effects. Today, the smartphone holds all three at once: our correspondence, records, movements, and associations compressed into a single, ever-present device. It is therefore essential that individuals retain meaningful control over how these devices function, subject only to restrictions that are proportionate. Regulations that narrow this control in the name of safety must be backed by strong evidence of necessity and effectiveness. Otherwise, they risk turning citizens into passive subjects of technical governance.
A more balanced security strategy would focus on outcomes rather than embedding state oversight into device architecture. Strengthening breach notification frameworks, improving cross-border cooperation against fraud networks, investing in user awareness, and promoting secure coding practices are proven approaches. These measures address harm directly while preserving privacy, innovation, and user autonomy.