Building A Global Ethics Ecosystem – Challenges and Opportunities


Frontier technologies will converge; indeed, they already are converging. Cloud computing will be ubiquitous. Automation of repetitive manual tasks will be commonplace, and mobile technology will facilitate access to banking, transport, medical, and other services. 3D manufacturing and the Internet of Things (IoT) will reduce the labour intensity of production, monitoring, and maintenance. Blockchain will enable us to create globally connected communities working on global commons issues, and big data and advanced data analytics will predict epidemics. Artificial intelligence (AI) will be interwoven with many of these technologies.

The IBE defines artificial intelligence (AI) as the simulation of elements of human intelligence processes by machines and computer systems. It is characterized by three main features: learning, the ability to acquire relevant information and the rules for using it; reasoning, the ability to apply those rules to reach approximate or definite conclusions; and iteration, the ability to change the process on the basis of new information acquired.
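To make those three features concrete, here is a minimal sketch in Python using a toy perceptron; the functions, data, and parameter values are invented for illustration and do not correspond to any particular system discussed here.

```python
# Toy perceptron illustrating the IBE's three features of AI.
# All data and parameters below are hypothetical.

def predict(x, weights, bias):
    """Reasoning: apply the learned rule to reach a conclusion."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(examples, weights, bias, lr=0.1, epochs=20):
    """Learning: acquire a decision rule from labelled examples."""
    for _ in range(epochs):
        for x, label in examples:
            error = label - predict(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

examples = [([0.2, 0.9], 1), ([0.8, 0.1], 0)]   # hypothetical training data
w, b = train(examples, weights=[0.0, 0.0], bias=0.0)

# Iteration: when new information arrives, the rule is revised.
w, b = train([([0.3, 0.7], 1)], w, b)
print(predict([0.25, 0.8], w, b))
```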

The IBE also highlights risks that need to be considered in system design and implementation. These are:

  • Ethics risk: certain applications of AI systems might lead to ethical lapses
  • Workforce risk: automation of jobs can lead to a deskilled labour force
  • Technology risk: black-box algorithms can make it difficult to identify tampering and cyber-attacks
  • Legal risk: data privacy issues, including compliance with the GDPR
  • Algorithmic risk: biased algorithms can lead to discriminatory impact

COVID has opened up a wider pathway for governments to utilise surveillance at unprecedented levels, potentially infringing privacy through facial recognition, cameras, and scanning and tracking technologies. Governments have moved swiftly to propose the adoption of highly intrusive surveillance and mass data-gathering technologies, with little opposition or public resistance. Different automated decision-making systems are being proposed and implemented in different countries, ranging from authoritarian social control (China) to privacy-oriented, decentralized solutions (MIT's 'Safe Paths').

This is risky given the current state of development of many big data and AI/ML systems. The algorithms built to process these data are trained on data sets hand-picked by human beings, and those human beings bring their own unconscious biases. When training data is selected without sufficiently diverse inputs and sources, the resulting bias manifests in practice: self-driving cars that are worse at recognizing people of color[1], or police algorithms that cannot sufficiently distinguish between individual black people[2]. It also creates the temptation to form a single view of every individual and all of their activities. We cannot assume that, just because this is not being done today, it never will be. The ability to collect large-scale data from multiple sources, including social media, and to build a picture of an individual's preferences and thoughts could lead to greater use of pre-emptive policing powers outside the normal protections afforded by due process in the judicial system. We already see this happening in more censorious societies, and it has no place in a modern democracy.
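A small synthetic sketch can show how this happens mechanically. Below, a hypothetical "detector" is fitted to a training set in which one group supplies 95% of the examples; the group feature distributions, prototype model, and tolerance are all invented for illustration, not taken from the cited studies.

```python
# Synthetic demonstration: under-representation in training data
# surfaces as unequal detection rates at evaluation time.
import random

random.seed(0)

def sample(group, n):
    # Hypothetical: the two groups have different feature distributions.
    centre = 0.3 if group == "A" else 0.7
    return [random.gauss(centre, 0.1) for _ in range(n)]

# Training set is 95% group A, 5% group B.
train = sample("A", 950) + sample("B", 50)

# "Model": a prototype detector fitted to the skewed training set.
prototype = sum(train) / len(train)   # dominated by group A
tolerance = 0.2

def detected(x):
    return abs(x - prototype) < tolerance

# Balanced test data: group B is mostly missed by the detector.
for group in ("A", "B"):
    test = sample(group, 1000)
    rate = sum(detected(x) for x in test) / len(test)
    print(f"group {group}: detection rate {rate:.2f}")
```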

2020 has seen the emergence of a new wave of ethical AI, one focused on the tough questions of power, equity, and justice that underpin emerging technologies, and directed at bringing about actionable change. This latest wave has seen a Dutch court shut down an algorithmic fraud-detection system, students in the UK take to the streets to protest against algorithmically decided exam results, and US companies voluntarily restrict their sales of facial recognition technology.

It is important to build emerging-technology solutions that incorporate and address key ethical issues. We need to develop, and make better use of, ethical frameworks and criteria to ensure technology is built out in an inclusive, systemic way that addresses the issues it is supposed to solve. We also need to better incorporate diversity in innovation processes, from problem scoping to solution building. Artificial intelligence (AI) can be used to increase the effectiveness of existing discriminatory measures, such as racial profiling, behavioral prediction, and even the identification of someone's sexual orientation. AI enables companies and governments to keep constant tabs on what humans are doing in an automated and intelligent fashion. How do we ensure that the technology we create is a force for good? How do we protect the most vulnerable? What are the moral costs of restraint, and who will bear the costs of slower development?
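One way such frameworks become operational criteria is by comparing decision rates across groups. The sketch below computes the "four-fifths rule" ratio, a heuristic drawn from US employment-discrimination guidance; the decisions, group labels, and use of the 0.8 threshold here are illustrative, not a complete fairness audit.

```python
# Sketch of one common fairness check: the ratio of positive decision
# rates between a protected group and a reference group.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical decisions: group A approved 80%, group B approved 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

ratio = disparate_impact(decisions, protected="B", reference="A")
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("below the four-fifths heuristic; flag the system for review")
```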

Several groups globally are working on these issues, including AlgorithmWatch and its AI Ethics Guidelines Global Inventory; these provide plenty of useful ammunition for scrutinising the systems now being rolled out. For the public sector, Nesta developed a Code of Standards for Public Sector Algorithmic Decision Making (a code sketch of a possible register entry follows the list). Its principles include that:

  1. Every algorithm used by a public sector organization should be accompanied by a description of its function, objectives and intended impact, made available to those who use it.
  2. Public sector organizations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases.
  3. Algorithms should be categorized on an Algorithmic Risk Scale of 1-5, with 5 referring to those whose impact on an individual could be very high, and 1 being very minor.
  4. A list of all the inputs used by an algorithm to make a decision should be published.
  5. Citizens must be informed when their treatment has been informed wholly or in part by an algorithm.
  6. Every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions.
  7. When using third parties to create or run algorithms on their behalf, public sector organizations should only procure from organizations able to meet Principles 1-6.
  8. A named member of senior staff (or their job role) should be held formally responsible for any actions taken as a result of an algorithmic decision.
  9. Public sector organizations wishing to adopt algorithmic decision making in high risk areas should sign up to a dedicated insurance scheme that provides compensation to individuals negatively impacted by a mistaken decision made by an algorithm.
  10. Public sector organizations should commit to evaluating the impact of the algorithms they use in decision making, and publishing the results.
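As a hedged sketch of how several of these principles (1-4 and 8) might translate into a machine-readable register entry, the following Python dataclass captures one possible structure; all field names and the example system are hypothetical and are not part of Nesta's code.

```python
# Hypothetical register entry reflecting Nesta's Principles 1-4 and 8.
from dataclasses import dataclass

@dataclass
class AlgorithmRegisterEntry:
    name: str
    function_and_objectives: str      # Principle 1: function and impact
    training_data_description: str    # Principle 2: training data
    known_assumptions: list[str]      # Principle 2: creation assumptions
    bias_risk_assessment: str         # Principle 2: bias mitigation
    risk_scale: int                   # Principle 3: 1 (minor) .. 5 (very high)
    decision_inputs: list[str]        # Principle 4: published inputs
    responsible_officer: str          # Principle 8: named accountability

    def __post_init__(self):
        if not 1 <= self.risk_scale <= 5:
            raise ValueError("risk_scale must be on the 1-5 scale")

# Example entry for an invented system.
entry = AlgorithmRegisterEntry(
    name="housing-priority-scorer",
    function_and_objectives="Rank housing applications by assessed need",
    training_data_description="Five years of anonymised application outcomes",
    known_assumptions=["past allocations reflect genuine need"],
    bias_risk_assessment="Reviewed quarterly for proxy discrimination",
    risk_scale=4,
    decision_inputs=["household size", "income band", "current tenancy"],
    responsible_officer="Head of Housing Services",
)
```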

PRIVATE SECTOR

The Ethics Centre has defined principles for incorporating ethics by design: Ought before can; Net benefit; Non-instrumentalism; Fairness; Self-determination; Accessibility; Responsibility; and Purpose. The Blockchain Ethical Design Framework proposes identifying, at the outset, the outcomes and the ethical approach that will guide blockchain design choices. For example, in an aid-distribution blockchain, the ethical approach may be to ensure that all members of a community have equal access to aid. If the community has significant power disparities among its members, the guiding design philosophy would be to prioritize design choices that minimize disparities in aid distribution. Addressing these questions at the outset of the design process provides ethical intentionality, a guiding star to help navigate the inevitable design tradeoffs.
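To illustrate the aid-distribution example, here is a hedged sketch, written in ordinary Python rather than a smart-contract language, of an allocation rule embodying the "minimize disparities" design philosophy: every claimant receives an equal base share before larger claims are topped up. The function and data are invented for illustration.

```python
# Progressive-filling allocation: equal access first, large claims last.

def allocate_aid(claims, total_supply):
    """Give every claimant an equal share of what remains, repeatedly,
    so a powerful claimant cannot crowd out smaller community members."""
    allocation = {name: 0.0 for name in claims}
    remaining = dict(claims)
    supply = total_supply
    while supply > 1e-9 and remaining:
        share = supply / len(remaining)
        for name in list(remaining):
            granted = min(share, remaining[name])
            allocation[name] += granted
            supply -= granted
            remaining[name] -= granted
            if remaining[name] <= 1e-9:
                del remaining[name]
    return allocation

# Usage: one claimant asks for far more than the others.
print(allocate_aid({"family_a": 100, "family_b": 10, "family_c": 10}, 60))
# family_b and family_c are fully met; family_a absorbs the shortfall.
```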

[1] Benjamin Wilson, Judy Hoffman and Jamie Morgenstern, 'Predictive Inequity in Object Detection', https://arxiv.org/pdf/1902.11097.pdf (accessed 4th August)

[2] United States Department of Justice Civil Rights Division and United States Attorney's Office, Northern District of Illinois, 'Investigation of the Chicago Police Department', https://www.justice.gov/opa/file/925846/download (accessed 4th August)
