Cyber-ethics: Purpose & Social Responsibility


For the past 200 years at least, we have relied on sovereign governments to look after our welfare, our environment, and our safety and security. This has worked well in the physical world. However, we now live in a digital world in which data is becoming the dominant basis for ‘reality.’ This new world demands new tools, because traditional methods are not fit for purpose in a non-physical space like the Internet. It’s akin to using a blacksmith’s forge and skill to build a blockchain. Threats to digital systems, data manipulation, forgery or theft can originate anywhere around the globe, and responding to them demands tools built for a digital, non-territorial dimension.

We also need new digital approaches to cyber ethics. Ethics are among the foundational principles upon which we run our societies. In a digital world, ethical decisions can have exponential consequences. A single decision (or omission) can be hard-coded into a machine and, at digital speed and scale, affect billions of people. It is vitally important that we think about how we convert what have traditionally been individual decisions into frameworks and approaches that can be taken up in frontier technologies to ensure social responsibility.

Old viewpoints and approaches to ethics have difficulty moving into this much faster, much more connected, much more data-driven, individualised world. Connected and combined data can be both valuable and dangerous. NESTA points out that we need to make it easier to combine data of all kinds so as to discover where it is most valuable, by spotting patterns and improving prediction and analysis, while also putting in place stronger protections against abuse and cultivating a constant dialogue, so that we reap the greatest potential benefits with the fewest possible harms. Geoff Mulgan highlights autonomous weapons, facial recognition and the use of predictive algorithms in justice as examples of these harms.

Cyber also creates new issues for privacy and will require its redefinition. Our old view of privacy is compromised by: new technologies; ‘Whole of Government’ data sharing; social media; the digital economy; and hacking and fraud. Traditional concepts of privacy are outdated and assume that the government is able to protect your privacy. That is clearly not so in a digital world, where data can be easily moved across borders, stolen, or recorded without consent.

Moreover, we are willingly ceding our privacy to technologies that make life easier, such as mobile telephones and search engines that track our interests and movements. We want the better, algorithmically driven user experience, but algorithms are designed to keep us engaged. The more you use social media, the better for the platform: more eyeballs mean more money from advertisers. Social media is free, but you pay for it with your attention and your data. So if you like something, you’ll see more of it. Algorithms control what you see, and this feeds the fake news phenomenon, with fake news shared six times more than real news. Because algorithms know what you are interested in and show you more of it, a conspiracy theorist will see a conspiracy article and reshare it with everyone they know, because they think it’s true.
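To make the feedback loop concrete, here is a minimal toy sketch, not any real platform’s code: a feed ranker that scores posts purely by how often the user has liked that topic before, so content resembling past likes keeps rising to the top. The topic labels and data are hypothetical illustrations.

```python
# Toy engagement-driven feed ranking: "if you liked it, you'll see more of it".
from collections import Counter

def rank_feed(candidate_posts, liked_topics):
    """Order posts so those matching the user's past likes come first."""
    topic_weight = Counter(liked_topics)        # how often each topic was liked
    def predicted_engagement(post):
        return topic_weight[post["topic"]]      # more past likes -> higher score
    return sorted(candidate_posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": 1, "topic": "gardening"},
    {"id": 2, "topic": "conspiracy"},
    {"id": 3, "topic": "conspiracy"},
    {"id": 4, "topic": "sport"},
]

# A user who has repeatedly engaged with conspiracy content sees it ranked first,
# which is the self-reinforcing loop described above.
print(rank_feed(posts, liked_topics=["conspiracy", "conspiracy", "sport"]))
```

Nothing in this sketch checks whether the highly ranked content is true; it optimises only for predicted engagement, which is the core of the ethical problem.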

The collection, analysis and use of big data create heightened risks to the privacy and security of citizen data, and pose massive ethical dilemmas for governments, organisations and citizens.

Take health data, for example. From the perspective of the patient’s welfare, it is in their interest that every piece of health data from their entire medical history is available in one place. Currently, there is no single place where someone can look at your entire medical history and make accurate, informed judgments. However, when thinking about subsets of data, people will want to make choices about what to show their bank, doctor or insurer. A debate is needed between transparency and your right to exclude some of your data from others. You may want your health data separated from your financial or family data, but they are all intermingled: your healthcare provider also needs to know your parents’ family history, and probably has your credit card on file for co-payments. In reality, given the amount of individual data already out there, AI systems can connect these datasets through inference with high accuracy. The genie is already out of the bottle.

As a society, ethics give us the fundamental framework from which to discuss these dilemmas, in the context of the kind of ethical framework each country has. There is no global set of ethics that every country will adopt. Ethics reflect cultures: some countries are far more collectivist, placing the rights of the community over the rights of the individual, and some are more authoritarian, and that will play out in the way they view and debate ethical questions. In this regard, Nature reported on a study, ‘The global landscape of AI ethics guidelines’, which identified 84 documents with ethical principles or guidelines, most of them developed since 2016. They found: “a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented”.

They found that transparency was prevalent, covering efforts to increase explainability, interpretability or other acts of communication and disclosure, including data use, human–AI interaction, automated decisions and the purpose of data use or application of AI systems. Transparency can minimise harm, improve AI, foster trust and support the principles of democracy.

Justice was mainly expressed in terms of fairness and of the prevention, monitoring or mitigation of unwanted bias and discrimination. The Nature report went on to summarise how the preservation and promotion of justice could be pursued: “(1) technical solutions such as standards or explicit normative encoding; (2) transparency, notably by providing information and raising public awareness of existing rights and regulation; (3) testing, monitoring and auditing, the preferred solution of notably data protection offices; (4) developing or strengthening the rule of law and the right to appeal, recourse, redress or remedy; and (5) via systemic changes and processes such as governmental action and oversight, a more interdisciplinary or otherwise diverse workforce, as well as better inclusion of civil society or other relevant stakeholders in an interactive manner and increased attention to the distribution of benefits.”

We should not limit ethical frameworks solely to risk reduction, but also look at how to harness technology for ‘beneficence’, including the augmentation of human senses, the promotion of human well-being, environmental stewardship and the creation of economic prosperity.

Data management, data analytics and the outcomes of data use all require significant thought. What can work? What limits can be put in place? What is the most important goal? How do people even know what of their data is out there on the internet? One approach is for companies to report on the data they hold and who it is shared with. This provides a chain of custody showing who has the data and who does not, to which opt-in and opt-out models can then be applied. This can be overlaid with control structures, policies and government regulation to empower people to decide where their data goes, who has it, for how long, and what it will be used for, as sketched below.
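The following is a minimal sketch of that approach under simplified assumptions: a company keeps a custody record for each item of data it holds, logs every onward sharing event, and checks the owner’s opt-in/opt-out choice before sharing. The class and field names are hypothetical illustrations, not an established standard or any specific regulation.

```python
# Sketch of a consent-aware chain-of-custody record for personal data.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CustodyRecord:
    owner: str                      # the person the data is about
    category: str                   # e.g. "health", "finance"
    purpose: str                    # the declared use of the data
    retain_until: date              # how long it may be kept
    opted_in: bool = False          # owner's consent to onward sharing (opt-out by default)
    shared_with: list = field(default_factory=list)   # chain of custody

    def share(self, recipient: str) -> bool:
        """Record a sharing event only if the owner has opted in."""
        if not self.opted_in:
            return False            # without opt-in, sharing is blocked
        self.shared_with.append(recipient)
        return True

record = CustodyRecord(owner="Alice", category="health",
                       purpose="treatment", retain_until=date(2030, 1, 1))
print(record.share("insurer"))      # False: Alice has not opted in
record.opted_in = True
print(record.share("specialist"))   # True: consented, and now logged in the custody chain
print(record.shared_with)
```

The design point is simply that reporting (the custody list) and control (the opt-in flag, purpose and retention date) sit on the same record, so regulators and individuals can see and shape where the data goes.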

There will be no black and white. Ethics deals with shades of grey and “right versus right.” What will be needed is continued effort to reach at least national convergence on core ethical principles in relation to cyber, such as transparency, justice and fairness, non-maleficence, responsibility and privacy. However, ethics involves context and the interpretation of dilemmas. Any framework needs to consider the outcomes of data decisions, not just intent. It will be political and social, shaped by the norms and mores of individual nations. It is about choice: the choice of the individual, and the choice of the nation or society.

Many remain oblivious to the choices that need to be made, and that are being made on our behalf by companies and machines, on a daily basis.
