Ethics in AI

August 2, 2016

[Photo: Alicia Vikander / UPI]

 

By Alève Mine

 

It is about time I wrote this. The need was sparked by an article I read about ethics in artificial intelligence last December. It relayed a professor's suggestion that we should have AI learn our ethics. My throat jumped into my brain, screaming soundlessly and clinging to a branch there in horror. As I then reflected on the subject, it quickly showed its breadth. (I couldn't stop: there must be a reason why I'm a scifi author. Why should I stop? Because tasks that serve the production of existing projects have a higher priority.) What follows is an invitation to talk.

 

Definitions:

 

“AI”, in the framework of this text, refers to artificial intelligence, machines (be they purely mechanical, considering design as a type of programming: form and other characteristics creating function), and/or algorithms.

“Ethics”, in this framework, is viewed as any decision or embedded mechanism that leads to an action with an impact on the environment (not limited to the ecological meaning of the word), the self, the future ability of the self to impact on its environment, or the environment's future ability to impact on the self – with special attention to how these affect the speed of processes in the self and in the environment (see my blog post called “How to change the nature of a process”). Ethics can be implemented in limitations by design, and/or in rules of behaviour. Ethics in AI are in question at the following three levels – merely incidentally three, I believe (see the paragraph about scifi below):

 

1. Ethics that an AI should be endowed with.

2. Ethics that humans should have when dealing with AI: making an AI.

3. Ethics that humans should have when dealing with AI: dealing with the actions of an AI.

 

(By the way: what is the difference between the body and the self in this framework? And what is the self once we abstract away its ability to impact on the environment?)

 

Law*:

 

The above levels at which ethics is in question each entail an embedding of ethics in law. When an organisation argues that its system, and not its people, makes the decisions, the requirements on that system are obviously still defined by humans; the organisation, which may or may not have understood that, is trying to shield itself from claims. Here is what such incidents mean: the organisation succeeds against claimants who don't have the resources (energy, time, money, flexibility in priorities) to escalate the issue, and this kind of defence – which is, like it or not, part of our legal structures – is facilitated by transferring the performance of actions from a person to a system. In some cases the requirements may be statically defined with regard to their impact (see the above definition of ethics): no evolving AI. But in the case of an evolving AI, the question of the system's predictability arises on top of the human behaviour described above. I am unaware of how this may be dealt with from a legal point of view.

 

Ethics as a function of embodiment, goals and values:

 

Et=f(Em, G, V)

Embodiment affects the way in which, and the extent to which, the AI is able to impact on itself and its environment. The ethics impact of embodiment – including empirical extensions to the AI – comprises the impact of knowledge and culture (including tools and technology) and of the physical (incl. physiological and neurological) body on behaviour, and the impact thereof on the items listed in the definition above.
A framework of goals is required to optimize a system’s actions. I dare not use the word “purpose” for all its connotations.

Values include the results of beliefs, convictions and knowledge, but also de facto values that can probably be reverse-engineered from behaviours.

When we apply the values of one person or group to a system, those become the set of values of the system, technically reproducing the ethics of their creator. The question arises: how do we design ethics that don't represent a set of values? If we shouldn't try to do that, then for what reason, and how do we deal with clashes in the system's impact among people who don't share a value set? If we do want to do that, what are our options?

• No-value? How? Is it that we really cannot look at things without going through values, or is it that we don't want to contradict our values?

• All-values? Whose? Some values may be incompatible with each other. How do we prioritize among values: do those shared by most people have the highest priority? Does being shared by the largest number make a value more right to impose on others? How do we take the intensity of values into consideration? (A toy sketch of such weighted aggregation follows below.)
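
To make these all-values questions a bit more concrete, here is a minimal, purely illustrative sketch in Python. It ranks values by a naive share-times-intensity score and flags incompatible pairs; every name and number in it is hypothetical, and nothing in it decides whose values should win.

```python
# Purely illustrative sketch: aggregating "all-values" by prevalence and intensity.
# All names and numbers are hypothetical; this does not resolve whose values should
# win, it only makes the prioritization questions above concrete enough to discuss.
from dataclasses import dataclass, field

@dataclass
class Value:
    name: str
    holders: int              # how many people in the population share this value
    intensity: float          # 0.0 .. 1.0, how strongly it is held on average
    conflicts_with: set = field(default_factory=set)

def aggregate(values, population):
    """Rank values by a naive share-times-intensity score and flag incompatible pairs."""
    ranked = sorted(values,
                    key=lambda v: (v.holders / population) * v.intensity,
                    reverse=True)
    clashes = [(a.name, b.name)
               for i, a in enumerate(ranked)
               for b in ranked[i + 1:]
               if b.name in a.conflicts_with or a.name in b.conflicts_with]
    return ranked, clashes

values = [
    Value("privacy", holders=70, intensity=0.6, conflicts_with={"transparency"}),
    Value("transparency", holders=55, intensity=0.9, conflicts_with={"privacy"}),
]
ranked, clashes = aggregate(values, population=100)
# Ranking by holders alone puts privacy first; weighting by intensity puts
# transparency first - which is exactly the prioritization question above.
```

The only point of the toy is that the two weighting schemes disagree; it says nothing about whether either is legitimate to impose on those who don't share the winning value.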

 

Most people completely shut down when I mention the above ideas. It is due to what we may call a god problem: abstracting or exploring subjects beyond certain extents or limits is a taboo for many. Nevertheless, we now really should talk things through. People argue that the AI should – paraphrasing – “feel” akin to “what human life is like”: “what does an abstract position know of humans?” But is that not just a side effect, an artifact, of our embodiment? To end the argument, they crack a joke housed in a religion, perfectly staged, like a smoker extinguishing their cigarette in an ashtray while perhaps saying the words that are thereby truly bound to become the last: you find yourself weighing the usefulness, or the potential return on investment, of trying to continue the conversation.

As a pluralist or anti-formalist, what would Wittgenstein agree with in the above – and would that matter? If the philosophers or other people who will work on this have a given set of values and character traits or behavioural penchants, how does the resulting system know what the approach of others is? Interestingly, philosophical works do formulate what corresponds to a no-value ethics, but they seem to be ignored – or at least not openly discussed – and to my knowledge have not been implemented in any practical use: a “cool place of philosophy”, “not being a citizen of any community of ideas” (a presentation by M. Burley, Leeds).

As a side note, I remember reading that ethics students acted more “unethically” than average, and I haven't quite put my finger on what this phenomenon may mean with regard to the people working on implementing ethics in systems, so if you have any thoughts about that, please let us know.

 

Sentience, and thought vs emotion in conviction:

 

Conviction, among philosophers, seems to be seen as more “respectable” and closer to knowledge than feeling and emotion. But to me, feelings lead more unequivocally to conviction. Conversely, a thought can be a motivator (just as the logical proposition that the title of my scifi ‘The Premise’ refers to provides the protagonist with a sustainable drive to act). Still: are feeling and knowing (or thought) different from each other from a cosmological point of view, or are they mere variations within a category encompassing thought and emotion – maybe “thotion”?** If they are different, how do their paths of manifestation differ in their impact from an ethics perspective? Either way, emotion and thought both translate into processes (a constellation of activity in the body). If such constellation types (not specific thoughts or emotions, but their type of process) have different ethics impacts, then: can thought take place without emotion? Emotion without thought (conscious or subconscious)? Emotions are by definition experienced by “sentient” beings. Here the problem seems to be the connotation that sentience presupposes endowment with a “soul”. This is the realm of beliefs.

 

Belief:

 

The way you will argue will be radically different depending on whether or not you believe that a soul can be present in AI. If we see a machine display the signs we know from emotions in humans, or we witness the AI do or come up with something that impresses us in one way or another, we may tend to believe that the AI has a soul. Or we may just not bring ourselves to act in a way that appears to “hurt” the AI, because it looks too vividly like hurting a living being and our own emotions get stirred. How about an algorithm, then? It may not have an assigned body, but is it a soul? The ethics seen as appropriate for our dealing with AI will depend on our beliefs – but should it?

 

 

Conflict-creating potential of the choice of ethics applied to AI:

 

One person or group may build one AI, others may build other AIs (already a reality, most certainly). To the extent that these are equipped to impact the world one way or another, or have side effects that do (energy consumption etc.), this could lead to conflicts among humans – akin, perhaps, to conflicts about legal texts, except that the evolving system of an AI is much harder to get your head around (see also the next paragraph). Other types of conflict tend to arise when an issue can't be solved at this level. When confronted with clashes between how an AI used by one culture/group/person impacts another culture/group/person, we need a consensus about how to deal with it. In particular, how do we avoid AI becoming an instrument for one human coalition in competition with another? You may say: why make AI a special case? Indeed, human coalitions commonly use all they can against each other, don't they? Is AI a special case in this regard?

 

Opacity of evolving AI:

 

Speaking of getting your head around an AI's status at various points in time: does the auditability of the processes leading to actions in AI (or in humans, for that matter) change the applicable ethics? If you look into the decision processes, will it make you believe less or more in the subject being alive, and accordingly hold it less or more responsible for its actions? What level of detail in this insight might allow you to come to a decision in this regard? (Also see ** below.)

 

Why scifi should not necessarily be taken that seriously:

 

When talking about ethics in AI, people are quick to mention Asimov's three laws of robotics. Here it is important to note that:

 

1. They are part of a work of literature made for humans, who incidentally appear to retain items that come in threes particularly well.

2. The robots in question have a human-like embodiment.

3. To satisfy its audience, the work needs to contain conflicts.

 

Limits to making “better” systems:

 

Nowadays we seem to apply deep learning to every system we can, feeding it our own output. But do we really have to create these systems “in our image”? Why are we doing this? Because, with the currently available technologies, it's easier than sitting down and designing a better system? (Or do we think that if we make a system that is like us, we'll feel as if we have explored ourselves, understood ourselves, or even feel loved by that system in some mental image we'd have of our creation?) What, ultimately, beyond dilemmas and differences in values, limits our ability to identify and/or implement the “best” ethics, or to create an AI that does the same (as opposed to learning from us empirically)? Well, how could we figure out a better ethics than our empirical one? Intuitively, since thinking through all possible scenarios in a discrete manner may take, so to say, forever, how about looking at scenarios as a smooth topography and designing for the best conceivable decision within the limits of our resources? Don't ask me to even try to implement that: it is a mere hunch and I don't have the resources.
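
For discussion only – it remains the mere hunch described above – here is one possible reading of that “smooth topography” idea, sketched in Python: parameterize scenarios as points in a continuous space rather than a discrete list, and search that landscape for the least harmful decision affordable within a fixed budget. The scenario space, the harm function and the budget are all invented for illustration.

```python
# For discussion only: one way to read the "smooth topography" hunch above.
# Scenarios are treated as points in a continuous space instead of a discrete list,
# and a decision is searched for within a fixed resource budget. The scenario space,
# the harm function and the budget are all invented for illustration.
import numpy as np

def harm(decision, scenario):
    """Toy stand-in for evaluating a decision's impact in one scenario."""
    return float(np.sum((decision - scenario) ** 2))

def worst_case(decision, scenario_samples):
    """Score a decision against samples drawn from the scenario landscape."""
    return max(harm(decision, s) for s in scenario_samples)

rng = np.random.default_rng(0)
scenarios = rng.normal(size=(256, 4))        # points on the continuous "topography"

best, best_score = None, float("inf")
for _ in range(500):                         # "within the limits of our resources"
    candidate = rng.normal(size=4)           # crude random search; a gradient-based
    score = worst_case(candidate, scenarios) # method would exploit the smoothness
    if score < best_score:
        best, best_score = candidate, score
```

The hard part, of course, is the harm function itself; the sketch only shows what trading discrete enumeration for a continuous, budget-limited search could look like.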

 

To wrap up:

 

Reproducing human ethics is most definitely not what we should be aiming for. You may argue that an AI with its own, “better” ethics should still “add” some human ethics to that (see also “what human life is like” above). I'd argue that an AI that can figure out which aspects of our empirical ethics are okay to adopt, in any shape, may not need empirical human ethics as a reference. It can only make use of empirical human ethics if it already has the ethics, cognition and access to all relevant information that would allow it to interpret the historical impacts of our behaviours “properly”. The question arises: in what way is that latter ethics definable, and is it different in nature from the targeted ethics?

Is a better ethics possible, or identifiable at all? Either way, it is crucially important to provide AI with a better ethics than our empirical one. I hope the following will suffice to convince you of that: look around you. Does empirical human ethics look like something you'd like to order as a plug-in for your home robot? So if you are involved in a field where AI is a theme, please kindly and diligently look into this. Emphasis on kindly. And on diligently.

 

The subject is important not only with regard to autonomous weapons or to actions whose impact crosses borders, like on climate change. I recently reported a picture that ended up in front of my eyes on a renowned social network: someone with a torn leg, blood everywhere, probably dead – I didn't try to make sure. Not the only image that had a bad impact on me there. I got a reply saying the image was not against the network's policy.***

 

Any thoughts, agreements, disagreements, suggestions, ideas? Let’s do some constructive boxing.

 

*Philosophy applied to law is one of the themes of my action feature screenplay Endurance.

**The Premise, my scifi novella, condemns conviction altogether: “you think you understand until you discover that conviction was no more than a description of the shape of a construction of your mind.” Precisely an insight into decision processes that questions the validity of decisions overall.

***In The Premise, future humans are naked, but they only show what they want, and from that, others only see what they want – a superposition of two filters. It doesn't sound that hard to implement.
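
For what it's worth, a hypothetical sketch of that superposition could be as simple as intersecting what the sender chooses to reveal with what the receiver chooses to see; everything below is invented for illustration and claims nothing about the novella's actual mechanics.

```python
# Hypothetical sketch of the two-filter superposition described in the footnote:
# what is actually perceived is the intersection of what the sender chooses to
# reveal and what the receiver chooses to see. All names are invented.
def perceived(attributes: dict, reveal: set, accept: set) -> dict:
    visible = reveal & accept                      # superposition of the two filters
    return {k: v for k, v in attributes.items() if k in visible}

person = {"face": "...", "scar": "...", "mood": "calm"}
print(perceived(person, reveal={"face", "mood"}, accept={"face", "scar"}))
# -> {'face': '...'}   only what both filters allow gets through
```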

 

I’ll return to higher-priority tasks now.

 

 

 

 


Alève Mine

(photo by Magda Nowak and Leszek Andzel)

Science fiction author: “The Premise”. EPFL MSc. in microengineering: robotics, nanotech and more. The only multilingual female scifi/action actor and director with serious martial arts experience to whom the above applies. Founder and lead of the Zurich VR/AR meetup group. Speaker for scifi, and further for the arts in a tech framework. www.alevemine.com
