A week before the August 2017 General Election, two videos emerged: one had been made to look like it came from CNN and the other from the BBC.
Both claimed that the incumbent President Uhuru Kenyatta was leading in the polls. Their source was never established, but they first circulated on WhatsApp and then went viral on social media.
CNN and the BBC dismissed the videos as fake while fact-checkers scrambled to establish their origin, and one can only imagine that their producers were looking on with glee.
In another incident, the world was treated to a fake video of former US President Barack Obama making bogus speeches. The tricksters used Artificial Intelligence (AI) tools to model how Mr Obama moves his mouth when he speaks, allowing them to put words into their synthetic Mr Obama’s mouth.
And in South Africa, a fake video circulated in which former President Jacob Zuma was seen struggling to say the words “in the beginning”. Somebody had taken the original video, muted the audio and dubbed in their own.
Numerous deepfakes have gone viral lately, giving millions of people around the world their first taste of this new technology.
What is it?
A deepfake is a convincing fake video, photo or audio clip made using AI software. Such content is so named because deep learning technology is used to create the fake, synthetic media.
Deepfakes emerged in 2017 when a Reddit user of the same name posted doctored porn clips on the site. The videos swapped the faces of celebrities – Gal Gadot, Taylor Swift, Scarlett Johansson and others – onto the bodies of adult film actors.
A report by AI firm Deeptrace Technologies indicated that there were 7,964 deepfake videos online at the beginning of 2019. Nine months later, the figure had jumped to 15,000 videos. A staggering 96 percent were pornographic, and 99 percent of those mapped the faces of female celebrities onto porn stars.
Deepfakes are also used for spoofs, satire and mischief.
How do you spot deepfakes?
Alphonce Shiundu, the Kenya Country Editor for Africa Check, a non-profit fact-checking organisation set up in 2012 to promote accuracy in public debate and the media in Africa, says detecting such videos calls for care and a keen eye.
“You need to notice the changes in font, tone of voice, the pictures, the clips becoming grainier, and once you realise that, you can detect a fake video,” Mr Shiundu says.
With AI technology, he adds, it has become difficult to detect a fake video by visual scrutiny alone, so emerging detection tools are helping to zero in on characteristics that are harder to notice.
“For example, here in Kenya, we know how top politicians speak. We know how they move their heads and how they gesture with their hands. That is stuff machines cannot learn, but if you get an actor or a comedian who can mimic those expressions, the language and the tone of their voice very well, and you superimpose the faces of politicians, then it will become much more difficult to detect a fake video,” says Mr Shiundu.
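To make the “grainier clips” tip concrete, the sketch below is a minimal, hypothetical heuristic and not Africa Check’s method or a real deepfake detector: it samples frames from a video with OpenCV and flags those whose sharpness, measured by the variance of the Laplacian, falls well below the clip’s average, which can be a crude hint that a region was re-rendered or heavily compressed. The file name clip.mp4 and the function names are illustrative assumptions; genuine detection systems rely on trained neural networks.

```python
# Hypothetical sketch: flag unusually blurry/grainy frames in a clip.
# Assumes OpenCV is installed (pip install opencv-python).
# This is a rough visual-consistency check, not a real deepfake detector.
import cv2

def frame_sharpness_scores(video_path, sample_every=10):
    """Return (frame_index, sharpness) pairs using variance of the Laplacian."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores.append((idx, cv2.Laplacian(gray, cv2.CV_64F).var()))
        idx += 1
    cap.release()
    return scores

def flag_suspect_frames(scores, drop_ratio=0.5):
    """Flag frames whose sharpness falls far below the clip's average."""
    if not scores:
        return []
    avg = sum(s for _, s in scores) / len(scores)
    return [i for i, s in scores if s < avg * drop_ratio]

if __name__ == "__main__":
    scores = frame_sharpness_scores("clip.mp4")  # hypothetical file name
    print("Frames to inspect manually:", flag_suspect_frames(scores))
```

Any frames the sketch flags would still need human scrutiny; the point is simply that low-level visual inconsistencies, like the graininess Mr Shiundu describes, can be measured rather than just eyeballed.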
Prof Bitange Ndemo, a professor of entrepreneurship at the University of Nairobi’s School of Business, wrote in an editorial published in the Business Daily in March 2021 that deepfakes can become fodder for local techies, especially in an election year. They might also give politicians a way to cover up their gaffes.
What is the danger of deepfakes?
Researchers point out that deepfakes undermine the notion of truth in news media, criminal trials and many other areas. They can also be used to blackmail individuals or for other malicious purposes. In politics, they can be deployed to smear a candidate.
Beyond spoof and satire, the flipside is that people are already using deepfakes to spread misinformation and to stir up strife.
“We can avoid this and do what every country is doing by having a proper law in place. The law will be necessary because it can be used to create institutions that can deal with the future demands of technology and help us avoid the mistakes that saw the continent miss out on the first and second industrial revolutions. Those mistakes continue to haunt the continent to date while it is struggling to transition from the Agrarian economy to industrialisation and hopefully catch up with the rest of the world in the information age,” said Prof Ndemo.
A 2020 report by the Brookings Institution’s Artificial Intelligence and Emerging Technologies (AIET) Initiative, which seeks to advance good governance of transformative new technologies, summed up the range of political and social dangers that deepfakes pose: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social division; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
What does the law say about deepfakes?
Currently, there is no law in the country that specifically addresses deepfakes. However, the Computer Misuse and Cybercrimes Act, 2018, signed into law by President Kenyatta, criminalised the abuse of persons on social media, removing the legal lacuna that had existed.
The law spelt out punishments for cybercriminals and provided for the timely and effective detection, prohibition, prevention, response, investigation and prosecution of computer and cybercrimes. This included search and seizure of stored computer data, record of and access to seized data, production orders for data, expedited preservation, partial disclosure, and real-time collection and interception of data. Offenders convicted under the law are liable to a fine of KSh 5 million or a two-year jail term, or both.
The law established the National Computer and Cybercrimes Coordination Committee and facilitated international cooperation in dealing with computer and cybercrime matters.
The law also deals with computer forgery, computer fraud, cyber harassment, publication of false information, cybersquatting, identity theft and impersonation, phishing, interception of electronic messages or money transfers, wilful misdirection of electronic messages and fraudulent use of electronic data, among other cybercrimes.
What are some of the solutions in place to address deepfakes?
In January 2020, Facebook and Twitter published new policies for dealing with deepfakes. The two firms said they would not allow users to “deceptively share synthetic or manipulated media that are likely to cause harm”. The policies are broad enough to cover highly manipulated deepfakes as well as media altered in more low-tech fashion, such as the doctored video of the then Speaker of the US House of Representatives, Nancy Pelosi, that spread on social media the previous year.
According to IT experts and researchers, technology firms are now working on detection systems that aim to flag fakes whenever they appear. In the short term, the most effective solution may come from major tech platforms such as Facebook, Google and Twitter voluntarily taking more rigorous action to limit the spread of harmful deepfakes.
Another strategy focuses on the attribution of media content. Experts say digital watermarks are not foolproof, but a blockchain-style online ledger could hold a tamper-proof record of videos, pictures and audio, so that their origins and any manipulations can always be checked.
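To illustrate the attribution idea, the following is a simplified sketch of my own, not any platform’s actual system: each ledger entry stores a file’s SHA-256 fingerprint, the publisher and the hash of the previous entry, so a later copy can be checked against the recorded fingerprint and the record itself is tamper-evident. The Ledger class and its method names are hypothetical; a real deployment would distribute the ledger across many parties rather than keep it in one process.

```python
# Minimal, hypothetical sketch of hash-based media attribution.
# Not a real blockchain; it only illustrates the tamper-evidence idea.
import hashlib
import json
import time

def file_fingerprint(path):
    """SHA-256 digest of a media file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

class Ledger:
    """Append-only record of media fingerprints, each entry chained to the last."""

    def __init__(self):
        self.entries = []

    def add(self, path, publisher):
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "file_sha256": file_fingerprint(path),
            "publisher": publisher,
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        # Hash the whole record so any later edit to it is detectable.
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify_copy(self, path):
        """True if this exact file was recorded at publication time."""
        digest = file_fingerprint(path)
        return any(e["file_sha256"] == digest for e in self.entries)
```

Because any edit to a video, even re-encoding a single frame, changes its digest, a doctored copy would fail verification; and since each entry’s hash folds in the previous one, an old record cannot be quietly rewritten without breaking every later entry.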