Two videos of Bharatiya Janata Party (BJP) leader Manoj Tiwari, one in Haryanvi and the other in English, were circulated on WhatsApp groups on February 7, just a day before Delhi voted in its legislative assembly election. In both videos, which are near-identical, he can be heard appealing to voters, asking them to bring the BJP to power by 'pressing the lotus symbol.' He also takes several digs at the rival Aam Aadmi Party (AAP) and urges people not to vote for it.
Watch how BJP used #deepfake in Delhi elections where a voice mimicking Manoj Tiwari is used, making it look like he is speaking.— Gaurav Pandhi (@GauravPandhi) February 19, 2020
Vid 1: Deepfake in English
Vid 2: Deepfake in Haryanvi
Vid 3: Original Video
This is dangerous, should be illegal! pic.twitter.com/WEXb0zaXdl
On February 18, VICE Media reported that the videos had been doctored from an older clip of Tiwari speaking about the Citizenship (Amendment) Act, and called it the first known instance of the technology - rampant in the porn industry - being used in an Indian election campaign.
While media reports, including an article by the MIT Technology Review, flooded various outlets describing how this unsettling yet bewitching technology had found its place in Indian politics, they consistently referred to the AI-based video manipulation as 'deepfake'.
Sagar Vishnoi, an Artificial Intelligence strategist, whose company worked on the creation of the Delhi BJP leader's video has asserted that 'deepfake' is the wrong term to describe Manoj Tiwari's AI-based creative.
After VICE Media's report, when the BJP IT cell was pulled up for questioning by various news agencies and media houses, its spokesperson, Neelkant Bakshi, not only denied that the party had commissioned the altered videos but also claimed victimhood. A video report by MoJo Story quotes Bakshi saying, "BJP is the victim of this technology as someone used a Facebook video of Manoj Tiwari and sent us his video with changed content in Haryanvi dialect. It was shocking for us as it may have been used in bad taste especially the Aam Aadmi Party."
Altered Video & Its Story
The first time deepfake technology was used in India was when a few journalists' faces were morphed into pornographic videos, according to Vishnoi.
"Around 96-97% of this technology is used in and for the porn industry," he said. "This is when we use and employ the term 'deepfake' but the same term cannot be extended to works of entertainment and clean advertising."
It has only been around two years since such AI (artificial intelligence) based technology penetrated various Indian markets. The authorised companies that work with it are a handful, 'in single digits,' says the AI strategist.
Elaborating on the machine-learning technique that is being referred to as 'deepfake,' he remarked that the word has a very negative connotation and is misleading. "You can say AI-based but not 'deepfake' because the latter is the use of the same technology for producing ill-natured content. Although the technicalities are the same, the intent is different and that makes all the difference."
"For example," he elucidated, "the entertainment industry all over the world uses this AI-based technology for dubbing movies, changing dialogues and languages. This is a sanctioned business and cannot be called 'deepfake'. The Manoj Tiwari video cannot be dubbed as 'deepfake' because there was nothing problematic in the intent that went behind its creation."
Dark Web For Deepfake
The production of such content is undertaken by artists found on the dark web. "Finding ethical artists is really hard but is needed by the clients who need foolproof content," Vishnoi informed. The final creative manufactured after such collaboration can 'enchant a layperson'.
Adding to his previous point, Vishnoi said, "Being able to acquire talent with the right skillset and values that have kept them from getting involved in malicious work is a tough task."
How much would a morphed video cost? The price on such creatives varies widely: from free, to a few dollars, to millions. The technology is niche, and the higher you go on the quality scale, the costlier it gets.
"You may be able to create deepfakes on your phone through open-source software applications that give you downloadable videos of poor quality. These applications are very easily available and free. But to create a really believable and definite video, an ethical artist from the dark web is needed," he informed.
Deepfake/AI Tech In Politics
"When the news about Manoj Tiwari went viral, I received several messages from professors, technicians, and people aware of the technology, from all over the globe. A professor from Princeton and a lawyer from Harvard messaged me about how it was wrong of the Indian media to have termed it what they did," Vishnoi said. "It was merely very good lip sync."
But this type of technology is a slippery slope. "Was the outcome of Manoj Tiwari's video bad? No, it reached many prospective voters and was inherently the same message that his party had propagated. There was nothing fake apart from his voice."
The television anchors and digital journalists who loosely employed the word 'deepfake' should have spoken to experts, Vishnoi asserted. "They made it (Tiwari's video) sound negative. The implications of the spread of this technology do paint a scary picture. But these are predictions, and strict laws and rules can regulate it."
Nonetheless, Sagar Vishnoi believes that caution needed to be, and still needs to be, practised. "When Tiwari's video was made, it should have been released with a disclaimer that this wasn't Tiwari talking but a dubbed voice."
His firm had hired a dubbing artist to imitate Tiwari. The artist read the script in Haryanvi, and the recording was then overlaid onto the video, whose original audio was in Hindi.
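The audio-swap step described here (keeping the original video stream while substituting the dubbed track) is conventionally done with a tool like ffmpeg; the AI-driven lip synchronisation is a separate process not shown. A minimal sketch, with hypothetical file names, of how such an ffmpeg invocation could be assembled:

```python
# Sketch: build an ffmpeg command that replaces a video's original audio
# track with a dubbed recording, leaving the video stream untouched.
# This illustrates only the audio-overlay step described in the article,
# not the AI lip-sync; file names are hypothetical placeholders.
import shlex

def build_dub_command(video_in, dubbed_audio, video_out):
    """Return an ffmpeg argument list that swaps in the dubbed audio."""
    return [
        "ffmpeg",
        "-i", video_in,       # original video (Hindi audio)
        "-i", dubbed_audio,   # dubbing artist's Haryanvi recording
        "-map", "0:v",        # take the video stream from the first input
        "-map", "1:a",        # take the audio stream from the second input
        "-c:v", "copy",       # copy the video stream without re-encoding
        "-shortest",          # end at the shorter of the two streams
        video_out,
    ]

cmd = build_dub_command("speech_hindi.mp4", "dub_haryanvi.wav", "speech_dubbed.mp4")
print(shlex.join(cmd))
```

The `-map` flags are what make this a substitution rather than a mix: the original audio is dropped entirely in favour of the second input.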
The problem is that there is no law or regulation that oversees its use and abuse. "People need to be sensitized and educated about its use. Only in the last Parliament session were 'deepfakes' discussed, but more needs to be done," he added.
Since last December, Facebook and Twitter have banned posts that use such technology, 'which is great'.
"There are applications programmed to detect doctored videos but there is very little effort concentrated in that direction," Vishnoi said. "Soon, the counter-technology, or detectors per se, will be made available, but they need to be promoted and made ready as plugins so that users can identify while they watch."
From this point on, this technology is only going to spread further. The English and Haryanvi videos of Manoj Tiwari - a first for Indian politics - were circulated across 5,800 WhatsApp groups in the Delhi-NCR region, reaching approximately 15 million people. The report by VICE Media, which broke the story, quoted Tattle co-founder Tarunima Prabhakar as saying, "The problem with the 'positive' campaign is that it puts the genie out of the bottle."
Even though Manoj Tiwari's video was a harmless creative meant to woo the Haryanvi- and English-speaking population, there is every possibility that companies and political parties will find other ways to weaponise this technology.
India, already struggling with the epidemic of fake news, will have another devil to tackle if 'deepfakes' and AI-based video alterations become the new normal.
Tiwari's video was widely used to discourage the large Haryanvi-speaking migrant worker population in Delhi from voting for the rival political party (AAP).
BJP's Bakshi, switching from the 'victim narrative', had told VICE Media that the response to those videos had been encouraging for the party. Housewives in the WhatsApp groups found it endearing to watch a leader speak their language, he had recounted.
After the "viral" response to the Haryanvi video, the party went ahead with the second video of Tiwari speaking English targeted at "urban Delhi voters."
Nevertheless, it should not be forgotten that viewers of the videos were misled into believing that the leader had taken the trouble to learn their first language, and that India still lacks a system of legal scrutiny for the impending spread of deepfakes within its politics.