Governance of Digital Technologies and Human Rights
From the use of online platforms to exacerbate ethnic conflicts to the use of facial recognition technology to persecute dissidents, the threats to human rights posed by digital technologies are varied and numerous. In this context, it is essential that digital governance models go beyond data protection and adopt a holistic approach to human rights protection.
Fast-paced technological advances, including in artificial intelligence (AI),[1] are increasingly disrupting and transforming our world. The combination of rapidly evolving computing technologies with the ready availability of large amounts of data has made digital technologies increasingly common.
Such technologies are increasingly present in many aspects of people’s daily lives: from “smart assistants” on our phones to algorithms designed to improve health diagnostics, digital technologies have become ubiquitous. The interest in digital technologies such as AI has continued to rise steadily in the past few years, notably as the COVID-19 pandemic exposed the need for more contactless interactions and online services. For example, digital technologies were increasingly used to facilitate the remote monitoring of patients’ health and the provision of distance learning during the pandemic.
However, digital technologies are not neutral and, depending on how they are designed, developed and deployed, they may have important consequences for the promotion, protection and respect of human rights.
Opportunities and challenges posed by digital technologies
On the one hand, digital technologies may contribute to the promotion and protection of human rights. For instance, digital technologies can be crucial for promoting access to education through the deployment of remote learning technologies, insofar as individuals have access to computers and the internet. Similarly, digital technologies can help enable access to information, which is nowadays largely dependent on digital means. For example, more than half of the UK’s adult population currently use the internet or social media for accessing the news.[2] Digital technologies such as AI can also support lifesaving disaster relief responses, by facilitating the quick assessment of damages and prediction of risks through the analysis of large amounts of complex data,[3] including satellite imagery.
On the other hand, digital technologies also pose significant societal challenges regarding the respect and protection of human rights. For example, online platforms can be used to exacerbate ethnic conflict and fuel hate speech. That was reportedly the case concerning Facebook and other social media platforms in enabling the spread of hateful and violent rhetoric against the Rohingya in Myanmar.[4] Facial recognition technologies[5] may be used in ways that facilitate disproportionate state surveillance, for instance, to track and monitor vulnerable groups including migrants.[6] Similarly, AI technologies may embed biases that perpetuate discriminatory narratives and practices against minorities and groups of individuals based on their race, gender or ethnic origins.[7] For example, researchers have identified racial bias in an algorithm widely used to predict patients’ needs in the US healthcare system[8] and gender bias in algorithms used for work recruitment.[9]
The international community acknowledges that digital technologies can have a profound impact on human rights. For instance, the UN General Assembly has recognised that “the same rights that people have offline must also be protected online”.[10] More recently, the UN Secretary-General has called on states “to place human rights at the centre of regulatory frameworks and legislation on the development and use of digital technologies”.[11] Several UN human rights mechanisms have been tackling this issue as well. For instance, the High Commissioner for Human Rights and several Special Rapporteurs have highlighted the need for addressing digital technologies’ impacts on human rights within their mandates.[12]
The need for a human rights-based approach
While a variety of stakeholders have started adopting measures to regulate digital technologies – for instance the EU proposal to regulate AI[13] and the UK bill aimed at regulating online harms[14] – much remains to be done. Legislative reforms are certainly an important part of the puzzle. However, due to the considerable impact that digital technologies have on human rights, it is essential that governance models place human rights considerations at their very centre. That includes not only state-centred legislative and policy frameworks, but also self-regulation by technology companies and standardisation such as by the International Organization for Standardization (ISO).
Governance models should adopt a holistic approach to human rights, going beyond the protection of data privacy on which current debates about the governance of digital technologies often concentrate. Digital technologies indeed have significant implications for an array of rights beyond data privacy, such as the rights to equality and non-discrimination, to freedom of expression, to freedom of assembly and association, and to an effective remedy in case of harms linked to the uses of digital technologies. These rights should thus be duly taken into consideration.
Human rights due diligence mechanisms, whether mandatory or voluntary, should be embedded into such governance frameworks. This would ensure that human rights impacts are identified and that measures are taken to prevent, mitigate and account for them. In particular, the UN Guiding Principles on Business and Human Rights (UNGPs) provide the elements of a due diligence process, which includes assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses and communicating how impacts are addressed.[15] These elements should be taken into account by states when adopting legislative and policy regulation of digital technologies. Similarly, technology companies should establish due diligence processes in which they engage meaningfully with a wide range of stakeholders, including technology end users, civil society and international organisations with human rights mandates.
A role for International Geneva
As a well-established centre for international diplomacy, International Geneva has an important role to play in fostering a human rights-based approach to the governance of digital technologies. To do so, it should bring together relevant stakeholders, including states, international organisations, civil society, academia and the private sector, in an open-ended coordinated effort to find solutions to the global challenges posed by digital technologies for the protection and respect of human rights.
Hence the Human Rights Council should be commended for having recently mandated two multistakeholder consultations to explore the relationship between human rights and technical standard-setting processes for digital technologies and the practical application of the UNGPs to the activities of technology companies.[16] These recent consultations, combined with existing initiatives such as the UN Human Rights B-Tech Project,[17] have the potential to provide useful further guidance to policymakers to support better alignment of regulatory efforts in the technology sector with the UNGPs.[18]
As digital technologies will continue to evolve and potentially impact human rights, it is crucial that such engagements with International Geneva and relevant stakeholders remain in place for many years to come.
________________________________
[1] While there is not a single internationally agreed definition, AI can be generally understood as “software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal” (European Union High-Level Expert Group on Artificial Intelligence, “A Definition of AI: Main Capabilities and Scientific Disciplines”, 2019, p. 6).
[2] Ofcom, News Consumption in the UK: 2021 Report, 2021.
[3] Big data can be defined as “high velocity, complex and variable data” (TechAmerica Foundation, Demystifying Big Data: A Practical Guide to Transforming the Business of Government, 2012).
[4] Human Rights Council, “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar”, 2018, paras 1342–54.
[5] Facial recognition can be generally understood as a biometric technology that identifies or verifies individuals by detecting and analysing their facial features in digital images or video footage.
[6] Ana Beduschi and Marie McAuliffe, “Artificial Intelligence, Migration and Mobility: Implications for Policy and Practice” in Marie McAuliffe and Anna Triandafyllidou (eds.), World Migration Report 2022, International Organization for Migration, 2021, pp. 280–99.
[7] UNGA, “Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance”, 2020, A/75/590.
[8] Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan, “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations”, Science, vol. 366, no. 6464, 2019, pp. 447–53.
[9] UNESCO, OECD and IDB, The Effects of AI on the Working Lives of Women, 2022.
[10] UNGA, “The Right to Privacy in the Digital Age”, A/RES/68/167, 21 January 2014. See also Human Rights Council, “The Promotion, Protection and Enjoyment of Human Rights on the Internet”, A/HRC/20/L.13, 29 June 2012, and A/HRC/32/L.20, 27 June 2016.
[11] UN Secretary-General, Roadmap for Digital Cooperation, 2020.
[12] Human Rights Council, “The Right to Privacy in the Digital Age”, Report of the United Nations High Commissioner for Human Rights, A/HRC/48/31, 13 September 2021; UNGA, “Report of the Special Rapporteur on Extreme Poverty and Human Rights”, A/74/48037, 11 October 2019; Human Rights Council, “Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression”, Irene Khan, A/HRC/47/25, 13 April 2021; Human Rights Council, “Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance”, A/HRC/48/76, 22 September 2021.
[13] European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”, COM(2021) 206 final, 21 April 2021.
[14] UK Department for Digital, Culture, Media and Sport, “Draft Online Safety Bill”, CP 405.
[15] UN OHCHR, Guiding Principles on Business and Human Rights, 2011.
[16] Human Rights Council, “New and Emerging Digital Technologies and Human Rights”, A/HRC/RES/47/23, 16 July 2021.
[17] UN OHCHR, “B-Tech Project”; UN OHCHR, “OHCHR Consultation and Call for Submission on the Practical Application of the United Nations Guiding Principles on Business and Human Rights to the Activities of Technology Companies”.
[18] Geneva Academy of International Humanitarian Law and Human Rights, “Placing Human Rights at the Centre of New Tech Regulations”.
TIMELINE: Major International Human Rights Treaties
The Universal Declaration of Human Rights was the first detailed expression of the basic rights and fundamental freedoms to which all human beings are entitled.
The Convention on the Prevention and Punishment of the Crime of Genocide was adopted by the UN in an effort to prevent atrocities, such as the Holocaust, from happening again. The Convention defines the crime of genocide.
The Convention relating to the Status of Refugees protects the rights of people who are forced to flee their home country for fear of persecution on specific grounds.
The Discrimination (Employment and Occupation) Convention (No. 111) of the International Labour Organization prohibits discrimination at work on many grounds, including race, sex, religion, political opinion and social origin.
The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) obliges states to take steps to prohibit racial discrimination and promote understanding among all races.
The International Covenant on Economic, Social and Cultural Rights (ICESCR) protects rights like the right to an adequate standard of living, education, work, healthcare, and social security. The ICESCR and the ICCPR (below) build on the Universal Declaration of Human Rights by creating binding obligations for state parties.
Human rights protected by the International Covenant on Civil and Political Rights (ICCPR) include the right to vote, the right to freedom of association, the right to a fair trial, the right to privacy, and the right to freedom of religion. The First Optional Protocol to the ICCPR creates a mechanism for individuals to make complaints about breaches of their rights. The Second Optional Protocol concerns abolition of the death penalty.
Under the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), states must take steps to eliminate discrimination against women and to ensure that women enjoy human rights to the same degree as men in a range of areas, including education, employment, healthcare and family life. The Optional Protocol establishes a mechanism for making complaints.
The Indigenous and Tribal Peoples Convention (No. 169) of the International Labour Organization aims to protect the rights of Indigenous and tribal peoples around the world. It is based on respect for the right of Indigenous peoples to maintain their own identities and to decide their own path for development in all areas including land rights, customary law, health and employment.
The International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families aims to ensure that migrant workers enjoy full protection of their human rights, regardless of their legal status.
The Convention on the Rights of Persons with Disabilities aims to promote, protect and ensure the full and equal enjoyment of all human rights by persons with disability. It includes the right to health, education, employment, accessibility, and non-discrimination. The Optional Protocol establishes an individual complaints mechanism.
The United Nations Declaration on the Rights of Indigenous Peoples establishes minimum standards for the enjoyment of individual and collective rights by Indigenous peoples. These include the right to effectively participate in decision-making on matters which affect them, and the right to pursue their own priorities for economic, social and cultural development.
The United Nations Declaration on Human Rights Education and Training asserts that everyone has the right to know, seek and receive information about all human rights and fundamental freedoms and should have access to human rights education and training.
Based on the information produced by the Australian Human Rights Commission