A businessman from Ahmedabad was allegedly targeted in a sophisticated cyber fraud involving AI tools, Aadhaar manipulation and deepfake videos. According to police, fraudsters changed his Aadhaar-linked mobile number, tampered with biometric authentication and used forged digital verification to bypass security systems.
They then opened a bank account via e-KYC and secured a ₹25,000 loan in his name without triggering alerts. Four individuals, including a Common Service Centre (CSC) operator, have been arrested. The case highlights growing concerns over AI misuse and vulnerabilities in India’s digital identity ecosystem.
How The Fraud Unfolded
The incident came to light after the victim, a businessman from Ahmedabad, noticed that he had stopped receiving OTPs on his registered mobile number for nearly two days. Suspecting foul play, he approached the cybercrime authorities, who later discovered that his Aadhaar-linked mobile number had been changed without his knowledge. This enabled the fraudsters to intercept authentication messages and gain control over his digital identity.
Investigators found that the accused used a mix of unauthorised Aadhaar update access and artificial intelligence tools to carry out the fraud. Deepfake videos were allegedly generated to bypass facial recognition systems used in biometric verification processes. Once the identity checks were compromised, the fraudsters completed e-KYC formalities, opened a bank account in the victim’s name, and availed a personal loan of ₹25,000. Police officials confirmed that the fraud was executed in a highly coordinated manner, exploiting gaps in both technical systems and human oversight within verification channels.
Arrests And Alleged Role Of CSC Operator
Cybercrime officials have arrested four individuals in connection with the case, including a Common Service Centre operator who is suspected of playing a key role in facilitating unauthorised access to Aadhaar-related services. The other accused were allegedly involved in manipulating digital records, handling identity data and executing fraudulent transactions using the victim’s credentials.
According to investigators, the group operated as a coordinated network, using access to enrolment and verification systems to alter mobile numbers linked to Aadhaar profiles. This allowed them to redirect OTPs and override critical authentication steps used in banking and financial services. Authorities also suspect that AI-generated synthetic media was used to impersonate individuals during verification checks, significantly increasing the sophistication of the fraud. Police have seized digital devices from the accused and are continuing their investigation to determine whether additional victims or a wider network is involved.
Rising Threat Of AI-Driven Identity Fraud
The case reflects a growing trend in which artificial intelligence is being misused to strengthen cybercrime operations, particularly identity theft and financial fraud. As India continues to expand its digital public infrastructure, especially Aadhaar-linked services and e-KYC systems, criminals are increasingly exploiting vulnerabilities in verification processes that rely heavily on biometric and OTP-based authentication.
Experts warn that deepfake technology and AI-generated identity tools are making it easier for fraudsters to impersonate individuals and bypass traditional security mechanisms. While digital systems have improved access and efficiency, incidents like this expose critical gaps in monitoring, enforcement and real-time fraud detection. The case has reignited concerns over how securely personal data is stored, accessed, and verified across multiple platforms.
The Logical Indian’s Perspective
This incident is not just a cybercrime case; it is a wake-up call about the evolving intersection of technology, trust, and security. As India deepens its digital infrastructure, safeguarding identity systems must become as important as expanding them. The misuse of AI in this case shows how innovation, if unchecked, can be turned into a powerful tool for exploitation.
There is a need for stronger accountability in Aadhaar-linked services, tighter regulation of access points such as Common Service Centres, and advanced AI-based fraud detection systems within financial institutions. At the same time, citizens must be made more aware of early warning signs such as unexpected OTP failures or changes to registered credentials. As digital dependence grows, so does the responsibility to ensure it remains safe and inclusive. In an era of AI-driven crime, how can we collectively ensure digital trust without compromising accessibility?