India’s Global AI Pitch Masks A Troubling Reality At Home
Tavishi / Feb 13, 2026
Indian Prime Minister Narendra Modi addresses armed forces personnel during a Diwali celebration onboard the INS Vikrant on October 20. (Prime Minister's Office)
Next week, heads of state, global policymakers, and Big Tech executives will convene in New Delhi for the India AI Impact Summit. As the first AI summit hosted in the Global South, it positions India at the centre of deliberations on the future of artificial intelligence. The Indian government has framed its vision for AI governance around the theme of “Democratizing AI and Bridging the AI Divide.” However, this aspirational framing risks obscuring deeply troubling domestic realities, particularly at a moment when India has witnessed marked democratic backsliding and a consolidation of authoritarian power under Prime Minister Narendra Modi.
Religious minorities and other marginalized communities increasingly face heightened surveillance, targeted hate, systemic discrimination, and state repression. The unregulated deployment and weaponization of AI systems are only making things worse.
Deployment of AI systems for state surveillance
Recently, the Chief Minister of Maharashtra, the second-most populous state in India, announced the development of an AI tool to identify “suspected Bangladeshis” based on language and speech patterns. The announcement comes amid the inhumane and often wrongful deportations of Bengali-speaking Muslim citizens to Bangladesh on allegations of being “illegal immigrants.” With nearly 30 million Bengali-origin Muslim Indians increasingly susceptible to police brutality, such systems risk becoming templates for AI-enabled persecution in other states.
Law enforcement agencies are adopting a growing range of AI systems for mass surveillance and predictive policing across major Indian cities. Predictive policing algorithms risk automating and amplifying existing biases against Muslims, caste-oppressed Dalits, and indigenous Adivasi communities, who together make up a disproportionate share of the undertrial prisoners in India’s jails. Similarly, the indiscriminate deployment of facial recognition technology (FRT) for mass surveillance in public spaces violates citizens’ constitutional right to privacy and undermines their right to protest. The Delhi Police, for instance, has deployed the Automated Facial Recognition System (AFRS), originally procured to search for missing children, to surveil protesters opposing the discriminatory Citizenship Amendment Act.
In Lucknow, Uttar Pradesh, AI-enabled cameras under the Safe City Project generate real-time alerts to law enforcement when they detect “subtle signs of distress” or unusual hand gestures and movements. The system, deployed with the stated objective of preventing harassment of women and other vulnerable groups, has not only failed to meet that objective but has also raised concerns about intensified surveillance and targeting of dissenters, minorities, trans women, sex workers, and interfaith couples, who are often at the receiving end of police harassment. Notably, these FRT systems operate in a complete regulatory vacuum, with no judicial pre-authorization or independent oversight mechanisms in place.
Weaponization of AI for hate against minorities
Generative AI has accelerated the production and circulation of dehumanizing content targeting Muslim communities in India, with the ruling Bharatiya Janata Party (BJP) itself emerging as a key contributor. Just a week before the India AI Impact Summit, a state unit of the ruling BJP uploaded an AI-generated video on its official X account, depicting the Chief Minister of Assam, a northeastern state slated for elections, shooting at two visibly Muslim men.
Although the video was deleted after widespread criticism, it was far from an isolated instance of AI-generated imagery being used to demonize and vilify the Muslim minority community. Social media platforms are also awash in Islamophobic content depicting Muslim men as violent, deviant, and criminal, and fetishizing Muslim women through sexualized imagery.
Independent investigations have revealed the prevalence of bias in popular Generative AI models and have demonstrated the ease with which stereotypical and harmful imagery can be created. This is facilitated by the complete opacity surrounding these systems, which often lack adequate safety guardrails, especially for hate speech in the Global South.
The recently notified Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2026 for synthetically generated content do not impose any accountability obligations on platforms. Instead, the amendments have raised concerns about increased state censorship, particularly through the shortened three-hour timeline for removing unlawful content notified via court or executive order. Furthermore, the efficacy and technical feasibility of the mandated labelling and provenance requirements remain disputed. These amendments are therefore unlikely to effectively address the proliferation of hate speech generated by AI models.
AI for socio-economic development and welfare delivery
India’s vision to democratize AI hinges on promoting private entrepreneurship and innovation to expand AI adoption across social sectors such as health and education in order to advance socio-economic development. The India AI Governance Guidelines, released in November 2025, continue to prioritize “innovation over restraint” and the integration of AI with Digital Public Infrastructure (DPI), including the national digital identity system, Aadhaar.
However, experts warn that this technocratic push can lead to the datafication and commodification of the poorest citizens. In the absence of robust oversight mechanisms, citizens may be unable to access basic services without being subjected to algorithmic experimentation. And in the absence of public accountability mechanisms, errors or biases in opaque algorithms can lead to discrimination and exclusion.
Despite India’s global promotion of DPI as a success story, mandatory biometric authentication via Aadhaar for access to welfare services has excluded some of the most vulnerable populations from receiving their basic entitlements, including subsidized food rations.
Today, many Indian states are building massive family databases containing extensive personal information of citizens, and using opaque algorithms to determine eligibility for welfare, often resulting in tragic exclusions. Recently, facial recognition-based authentication was made mandatory for pregnant and lactating mothers to access take-home rations, sparking fears of exclusion.
These opaque algorithmic systems impose an unfair burden on citizens to prove their eligibility for public goods and risk jeopardizing their fundamental rights. The same pattern is visible in the controversial special intensive revision of electoral rolls being conducted by the Election Commission of India (ECI), whose impartiality is increasingly in question. An independent investigation found that the ECI rolled out opaque algorithmic systems without any prior written instructions or standard operating procedures on record. The algorithm flags voters with suspected logical discrepancies in their records; those flagged now risk disenfranchisement and must produce evidence to remain on the electoral rolls.
AI for good or harm?
We must look beyond the official rhetoric of “AI for social good” and critically examine the documented cases of AI harms against minorities and other marginalized communities. AI Summits should not become playgrounds for Big Tech lobbying and national posturing, while paying only lip service to human rights.
If India is serious about AI for social good, it must adopt a rights-respecting approach to AI design, deployment, and governance, one that puts the protection of minorities at the heart of the conversation. This means moving beyond self-regulation and conducting meaningful, transparent multi-stakeholder consultations to draft robust liability regimes, transparency obligations for AI systems, and enforceable remedies for communities harmed by AI abuse.
Regulation must prohibit AI-driven profiling based on religion or ethnicity, the use of predictive policing, and the deployment of biometric and facial recognition systems for mass surveillance. Any use of AI in public service delivery must follow consultation with local communities and be subject to human oversight, mandatory transparency disclosures, periodic risk assessments, and independent audits.
Local communities’ rights to demand explanations, seek human reassessment, obtain grievance redressal, and call for the recall of algorithmic systems must be recognized and protected. Without regulatory oversight and guardrails, AI in India will continue to function less as a public good and more as a tool of oppression.