
Are deepfakes the next big threat for women rights defenders in the Middle East?

This statement was originally published on gc4hr.org on 14 October 2019.

The Gulf Centre for Human Rights (GCHR) expresses its concern over the potential exploitation of deepfake content to target human rights defenders in the Middle East. By blurring the line between real and fake, deepfake technology poses a pressing challenge when it is used maliciously to violate human rights. Using deep machine learning and artificial intelligence, it is now easy to produce fake content, such as video and text, that appears real. Deepfake videos can manipulate both the appearance and the words of anyone targeted. Examples of deepfake use range from entertainment, such as doctored film clips, to far more harmful content, such as gender-based fake pornography.

Dual-use technologies pose a challenge to democracy and human rights when used maliciously. GCHR echoes the Electronic Frontier Foundation’s call not to rush the regulation of deepfakes, as seen in the United States Congress with the Deepfake Accountability Act. In line with our stance on dual-use technology, our concern is not the technology itself but the human rights implications of its malicious use. Deepfake content can support the exercise of freedom of speech and expression through creative forms such as satire. It can also be used to fabricate incriminating or distressing content that targets human rights defenders, activists, and journalists.

Gender-based deepfake targeting is particularly problematic because the tools to create such content are highly accessible. Mobile applications can turn pictures of targeted women into ‘undressed’ versions to be circulated as pornographic content. Alarmingly, these applications do not work to ‘undress’ men. As women human rights defenders are highly vocal and active across the Middle East region, GCHR is concerned about their safety in the face of such malicious uses.

GCHR’s Women Human Rights Defenders (WHRDs) Programme Coordinator, Weaam Youssef, explains: “The region is known for its multi-layered patriarchal system. WHRDs are defying the challenges and resisting the restrictions imposed on them in this system. Whilst civic spaces are vehemently closed by governments, WHRDs are successful at reclaiming and reoccupying spaces of civic participation. Innovatively, WHRDs took the virtual space as an alternative to mobilise, solidify and sound their voices on a louder scale. However, the risks which come with online activism are increasing, and we are very concerned that such technology will only lead to further gender-based targeting against WHRDs, particularly through defamation and smearing campaigns, and systematic stigmatisation attacks.”

The world’s tech companies are joining forces to filter malicious from harmless deepfake content in anticipation of the 2020 U.S. elections, but deepfakes affect democracy and human rights in countries far beyond the U.S. This calls for collective efforts, advocacy and education on the impact of dual-use technology in different countries and contexts.

In this spirit, GCHR calls on civil society organisations and the international tech community to:

  • Be vigilant in sharing resources to detect and circumvent malicious deepfake content, especially in regional languages;
  • Hold accountable governments that exploit technology to harm human rights defenders;
  • Commit to protecting the use of technology in creative forms of expression that do not compromise human rights; and
  • Develop toolkits and helplines for gender-based deepfake targeting.
