"Writing and speaking about the matters where they don't shed light, I'm always on my toes to bring out the untold, unheard stories from the background of Economy and Defense."
Last week, amid the novel coronavirus outbreak, Facebook experienced an intriguing technological glitch when its Artificial Intelligence (AI) bots took down a number of posts, whether or not they actually violated its community standards.
Many Indians woke up to a notification that their posts had been taken down by Facebook because they apparently didn't adhere to the company's community standards. But why were AI algorithms looking at posts in the first place?
It turns out that, due to the COVID-19 pandemic, tech companies such as Facebook had to send home the content moderators tasked with reviewing and regulating social media content. Since this intricate work is difficult to do from home, the company relied on artificial intelligence instead. Hence the random flagging of Facebook users' posts by AI systems that depend on machine learning and lack original thinking or nuance.
Facebook justified its decision in a statement: "the decision to rely more on automated tools, which learn to identify offensive material by analyzing digital clues for aspects common to previous takedowns, has limitations".
According to a report in The Wire, Facebook employed at least 15,000 content moderators as contract workers in 2018 (regularly working with companies such as Accenture and Cognizant), who focus on keeping the platform free of violence, child exploitation and spam. In its attempt to revamp content moderation and integrate artificial intelligence into it, Mark Zuckerberg has spoken of the 'future of content moderation'.
This will severely impact the lives of working-class moderators, who will lose their jobs to bots that may be ineffective and inefficient at regulating content on social media. Furthermore, the extensive deployment of automated tools and AI has other drawbacks, such as fostering a surveillance culture, amplifying government propaganda, and drifting away from human norms of decision-making.
Yuval Noah Harari, in his work '21 Lessons for the 21st Century', argues that any form of artificial intelligence (robots, for example) reflects and amplifies the qualities of its code.
"If the code is restrained and benign-- the robots will probably be a huge improvement over the average human soldier. Yet if the code is ruthless and cruel-- the results will be catastrophic". Therefore, the drawback with uncontrolled AI is not their own artificial intelligence, but the tendencies of its human masters.
These masters may code a tool for their own vested interests, which could prove fatal for the idea of humanity. An example of this can be observed in the Netflix series Leila, which depicts the use of AI-based technology in designing a Hindu religious state called Aryavarta. However, this is no longer pure fiction: AI has already invented a language that humans cannot understand.
The Guardian reports, "The bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers "realized their bots were chattering in a new language" they decided to pull the plug on the whole experiment, as if the bots were in some way out of control".
The vulnerability associated with the use of AI is also linked with how bots can behave in the absence of humans. The agents may drift away from comprehensible language and invent unique codes for themselves. This further calls into question the unflinching loyalty of artificial intelligence.
Facebook has published papers arguing that AI can possibly develop its own language: "it's possible that [language] can be compressed, not just to save characters, but compressed to a form that it could express a sophisticated thought".
The most dangerous aspect of uncontrolled AI will be how these inventions regulate the experiences of humans, rather than the other way around. The ideological influence of regimes and governments on AI might not only produce similar exclusion-based policies but also a dystopian regime centered on policing the routine lives of citizens.
In the end, algorithms are akin to an electronic brain that could teach itself to walk, talk and reproduce on its own. This became evident in 2017, when a Palestinian labourer was mistakenly arrested by Israeli forces who relied on Facebook's automatic translation. The labourer was arrested for writing 'Good Morning' in Arabic, which the bots wrongly translated as 'Ydbachhum' (Kill them all). These instances are just examples of what a surveillance regime allied with an imperfect AI may look like.
The amalgamation of military technology and AI seeks to develop autonomous lethal weapons, or killer robots, that can identify and kill a person without a human being in the chain of command.
Facial recognition technology and decision-making algorithms would only make such weapons more powerful and easier to deploy. However, the use of killer robots poses moral, technical, and strategic dilemmas. This is why the United Nations and world governments have pushed for a preemptive ban on the technology. Nonetheless, companies like Microsoft and Amazon have continued to develop this weapons technology, jeopardizing international security and heralding an advanced stage of warfare.
The regulation of automated tools and AI is vital to strengthen the fabric of democracy and humanity. The only mechanism to do so effectively is to develop a robust legal framework.
While the technology on its own can theoretically make our lives easier and simpler, it is not wise enough to make decisions without any human intervention. Furthermore, the humans who design the code and amplify such voices can give rise to digital dictatorships that erase the ideals of political equality, liberty, fraternity and justice.
Gradually, we are entering an era of role reversal-- the servant becomes the master. In such a context, it is imperative to strengthen legal and institutional structures to restrict automated decision-making.
Another crucial choice that we need to make is between negative and positive futures. Benjamin Kuipers, a professor of computer science at the University of Michigan, writes, "Advancing technology will provide vastly more resources; the key decision is whether those resources will be applied for the good of humanity as a whole or if they will be increasingly held by a small elite".
Therefore, it will be fruitful to find ways of productive cooperation between individuals and to build trust among them through such cooperative measures.
Finally, we cannot counter AI with more AI-based solutions; concrete human intervention is required to ensure accountability at each step. It is equally essential to zero in on the people, and their motivations, behind such tools and AI innovations.
This ground report is written by Manisha Chachra, a freelance reporter at The Logical Indian. She is pursuing a PhD in Political Studies at the Jawaharlal Nehru University and is also researching technology and politics at Govern.