Risks in AI

January 19, 2017 OPINION/NEWS, Technology


 

By Alève Mine

 

Here are some of the possible risks of AI and, in some cases, what can be done to prevent them, or what trade-offs prevention requires:

 

AI caretaking: reduction of lifespan and/or quality of human life. Proposed uses of AI include caring for the elderly in a financially more sustainable manner. I remember a study showing that being frequently held by a human raises the survival rate of babies. Do we have substantial proof that the human touch, not to mention the human communication, that happens when adults, in particular the elderly, are cared for by other humans doesn’t help survival? We may not want robotic care to be allowed before such studies take place, funded, conducted and peer-reviewed by parties independent from stakeholders in robotics or AI. These could establish how much human care is amply – as opposed to minimally – sufficient. If I added “unless the alternative is no care”, a declared unavailability of financing for human care could be used as an excuse to bypass this serious issue; therefore the theme of lack of funding for human care has no place in this particular discussion.

 

Hacking: AI copied or modified for malicious purposes. I hear autonomous cars are going for a disconnection from the internet. AIs which have the internet as their source of activity won’t be able to do that, and we may want to see to it that all AIs that can use this trick do use it. What remains: how to reduce the hacking risk when a hacker can get hold of the device physically, or when the object is hackable from a distance. One can imagine the AI monitoring itself and, possibly, a separate system monitoring any related networks to detect such intrusions. Thereby security in these monitoring systems becomes an additional issue. A similar approach may also help detect any bugs (robugs!). An issue to consider: to what extent, and in practical terms how, can an ill-intentioned production of AI be treated in a similar way as hacking? A question I’d like to know more about in the hacking framework: is software architecture really enough to embed rules securely, and what can be done at the hardware (firmware) level? See also “interference” below.

 

Political: the whole AI theme seems to be a great tool to enhance power. When it comes to gaining power in the world, often anything goes, and so it does in this area. The political risk for any entity is then whatever unplanned outcome their current efforts in this area end up producing. Let us consider the impossible scenario where all interesting companies and ideas dealing with AI and robotics, and the providers of parts and materials for them, are acquired by a big few companies (which may or may not be ones that we know today) – because the companies seek growth, because it is believed abroad to be good for international relations, because other investors see these investments as too risky, are not qualified in the subject, or are left out by some other process. As some jobs for humans get scarce, an unconditional income is instituted, paid globally by an administration formed by these companies, which maps one way or another onto the original territories they are respectively based in, those they export to, and those they create human and other jobs in. For jobs using AI, the older the AI and the better its education and the quality of the data it has received to date, the more expensive a worker it is. An AI secondary market is created and needs a platform – gets multiple platforms – and regulation. AIs earn wages and pay taxes to their company or to a conglomerate of AI corporations, which would cover at least some of the unconditional basic human income. Augmented humans may get the same wages as other humans, pay the same taxes as the AIs, or anything in between. So AIs become immigrants, or progeny thereof, depending on what you want to see their producers as.

You may want to get a job as a robot trainer, because that could be one of the few jobs left – if you forget for a moment that there may be robots made for that job. Then there is the question of your ownership of any part of the resulting status of the AI, of how that changes your taxation, and of how such taxes will make people feel about themselves, the AIs and their corporations. The administration gets involved further in every government because, you know, one thing leads to another, and this directly and indirectly causes tensions and violence within communities, as well as between this de-facto wider administration and the local communities or nations that previously appeared to be sovereign. Unfortunately, shaking the box they are now in only makes them settle deeper into it. As the AIs learn on, the barrier to acquisition for a number of crucial AI systems soon becomes absolute. In this scenario, if AI and robotics providers don’t agree among themselves as regards governance, this triggers conflicts, especially as other tensions already exist for the parties in question. Back to today: one suggestion I hear is to make sure the AIs work with human guidance. Then the question is: which humans, and in which cases? Political risks will depend partially on the people selected for the guidance and its extent, and a lot on where the funds for AI research will come from – and in which regions – at various stages of development. See also “interference” and “chasm” below.

 

Interference: pollution, radiation and social change impacting on health, and ergonomic issues created in the production or usage of robotic and AI systems, in particular those which touch humans in obvious ways, such as literally. Leaching of implant materials into the body. Electromagnetic fields and any heat gradients created. Strain on perceptive paths from the use of implants or devices that act upon human senses for relatively long periods of time. Dependency created by the adaptation of the body to the devices, whereby the body may malfunction if the device is removed or the implant deactivated. This means financial and political dependency, as well as exposure to the impact of any technical malfunction, material or energy supply interruption, inability to continue technical support activities, or hacking at the provider’s or user’s end. And overall pollution and environmental damage created in the production or usage of robotic and AI systems.

 

Social: disruption includes, but is by far not limited to, the risk that through the integration of AI into human bodies or activities, the distinction becomes non-obvious, and thus laws should equally be prepared to cover variations in such cases. There are numerous social implications of human-like AI. People not having read the AI’s user’s manual – don’t bother writing any. Further social aspects are present in every risk on this list.

 

Beliefs: these create a risk that the other risks change, as a function of human (and possibly, eventually, AI) beliefs. What matters is what people may do who believe – and those who don’t – that AI has a soul, or the relative equivalent thereof in their mental representation of the world, from a given level of a given dimension (throughput of information through a given connective surface, “consciousness”, cuteness, compellingness of behaviour and emulation of human forms of emotional expression, physical size, apparent wisdom, apparent creativity, or anything else that plays a role in beliefs in this field); and, among these, what people may do who believe that humans do, or don’t, have a soul, or its relative equivalent in their mental representation of the world. Some people may believe that humans have a soul and robots may or may not, but hate themselves in some way and feel that humans are guilty of some fuzzy or precise act these people have in mind, which may impact how they behave. Further: how the concepts of pain and reward are perceived and/or implemented in AI. What makes “real” pain and “real” reward, and what role does that have in the context of behavior and society?

How a computer could be aware: I trust it couldn’t be aware in the sense we use the word for humans. Apparently, we humans become aware of a thought or, let’s assume, of a feeling, when the pattern of mental activity grows above a certain threshold. What part of us keeps track of that? Another part of the brain, possibly, but then, how does that part feel conscious (as in aware) whereas the other doesn’t? Today, these questions end up in the realm of beliefs. In AI, so-called generative adversarial networks pit two systems against each other, one checking on the output of the other – a structure apparently used for another purpose: assessing the plausibility (real vs fake) of the data. I’ll propose that such a structure doesn’t enable any part of the AI, or the AI as a whole, to be “self-aware” in the sense we use the expression for humans. As discussed in my ethics in AI post, there are misunderstandings of terms due to beliefs. The risk of any set of AIs gaining self-awareness and own goals that clash with the well-being of humans is often discussed. What, then, is self-awareness in this context, and why would something like the ability to answer a question in the vein of theory of mind with an “I exist” lead an AI to come up with such goals – other than to imitate humans, because that was the goal it was programmed for? Luckily, the so-called “orthogonality thesis” states that an AI can have more or less any goal at any level of intelligence. The way people see such things may change the legal framework and how people will build the institutions and structures around AI. Among the beliefs that matter in this context is that of our having understood and analysed all that we humans are made of, so that we can assess AI questions relative to that.
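
To make the adversarial structure mentioned above concrete, here is a minimal sketch of a generative adversarial network, in which one network generates data and the other checks whether the data looks real or fake. The toy target distribution, network sizes and training settings are my own illustrative assumptions, not a description of any particular system discussed in this post.

```python
# Minimal GAN sketch: a generator learns to mimic samples from a 1D Gaussian,
# while a discriminator learns to tell real samples from generated ones.
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Real data: samples from N(4, 1). Fake data: generator output from noise.
    real = torch.randn(64, 1) + 4.0
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator update: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(1000, 8)).mean().item())  # should drift toward 4.0
```

Note that the “checking” here serves only the training objective of producing plausible data; nothing in this structure amounts to the system keeping track of itself in the sense discussed above.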

 

Scale: the risk that we are unable to deal with, or prepare for, the changes that AI developments can carry, given their speeds and scales – quantities and, chiefly relative to human lifespan, durations – and the changes thereof. The scale of the impact of malfunctions may be extreme. To prevent problems, it may be a good idea to integrate into quality assurance standards a set of prudent increments that need to be complied with in AI development. To reduce risks, one of the principles to consider may be to only solve the problem that you initially had, and not, while you’re at it, a bigger problem that may include or be indirectly related to your initial one. It may be a good idea to work towards design standards that ensure the ability to turn an AI off if needed for “good” purposes, albeit not for “ill” purposes – how we know the difference is another story (see “introspection” below).

 

Willingness for introspection: the risk that we are unable to provide any AI with goals that are relevant to us. Because: what do we know of what our own goal should be? We don’t know what “our” purpose is. In the absence of that, identifying any burgeoning diverging (as opposed to stable or limited) processes may be a step to consider. AI goals are currently, I understand, often formulated as questions. Answering the question is done by breaking it down, where needed, into subsets that can be answered most easily, answering those very approximately using the information that is immediately available, then reiteratively refining the answer using ever more information (which is, incidentally, my life: answering the question of what to do, knowing what is).
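
As a toy illustration of the loop just described – answer roughly with whatever information is immediately available, then keep refining as more arrives – here is a minimal sketch in Python. The question, the data stream and the stopping rule are invented placeholders, not how any particular AI formulates or pursues its goals.

```python
# Toy "anytime answer" sketch: start with a rough answer from the data at hand,
# then keep refining it as more information arrives, stopping once the answer
# stops moving much. The question and data stream are invented for illustration.
import random

random.seed(0)

def data_stream():
    """Pretend source of new information: noisy observations of some quantity."""
    while True:
        yield 10.0 + random.gauss(0, 2.0)

def refine_answer(stream, tolerance=0.01, max_rounds=10_000):
    count, total = 0, 0.0
    answer = None
    for observation in stream:
        count += 1
        total += observation
        new_answer = total / count          # current best (approximate) answer
        if answer is not None and abs(new_answer - answer) < tolerance:
            return new_answer, count        # refinement has stabilized
        answer = new_answer
        if count >= max_rounds:
            return answer, count

answer, used = refine_answer(data_stream())
print(f"answer ≈ {answer:.2f} after {used} observations")
```

Formulating the question in the first place, as the next paragraph discusses, is not something this loop helps with.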

Formulating the question is a different story. We need to invest introspective effort in figuring out the “right” questions. Whatever that means. Well, what are these? And how frequently do we need to reformulate them? An evolving optimal interval to the next goal-setting, as a function of the types and quantities of new information, the speed of processing it, and the degree of interference between deduced new initiatives and the exploitation of ongoing initiatives based on any previous goals? That aside, considering “values discrepancy” below, adapting the values thoroughly to the impacted entities will not always be possible. In my earlier post called “Ethics in AI”, I’ve described ethics as a function of embodiment, goals and values. Thereby values also intervene in the determination of the embodiment and goals, and a framework of goals in the determination of the embodiment. The most immovable expression of values is reflected in the embodiment, and therefore we may want to think especially thoroughly about what to regulate or standardize there; the question is equally posed of the extent to which an instructed goal should be revised by the AI for the values principle proposed below, should that principle be used.

 

Values discrepancy (see also: “introspection” above): unless we do find how to proceed from the cool place of philosophy, we may – without using our “not having figured out how to proceed from the cool place of philosophy” as an excuse for not trying harder to figure out how to proceed from the cool place of philosophy – want to look into a solution where the AI does not acquire “our” values, but instead considers and weighs the values of the people its actions are or will be impacting, and makes sure, given the choice, that these actions don’t hurt, directly or indirectly, at any time in the future, any entity in any of a number of ways that may go well beyond the UN human rights. Thereby solutions must be found when faced with values that manifest themselves as incompatible in the specific use cases, along with a strategic approach as to how not to push communities, with an action, into a situation where a future action will bring about more such incompatibilities.
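
A very rough sketch of the weighing idea above: before acting, the system scores each candidate action against the (estimated) values of the entities it would impact and refuses anything that falls below a harm threshold. The entities, value weights, impact estimates and threshold are all made-up placeholders; a real treatment of indirect, long-term impact would be far harder.

```python
# Toy sketch of weighing impacted entities' values before acting.
# Entities, values, impact estimates, and the harm threshold are placeholders.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    values: dict[str, float]   # how much this entity weighs each value (0..1)

def score_action(action_impacts: dict[str, float], impacted: list[Entity]) -> float:
    """Sum of value-weighted impacts over all impacted entities (negative = harm)."""
    return sum(
        entity.values.get(value, 0.0) * impact
        for entity in impacted
        for value, impact in action_impacts.items()
    )

def choose(actions: dict[str, dict[str, float]], impacted: list[Entity],
           harm_threshold: float = 0.0):
    """Pick the best-scoring action, refusing any whose score falls below the threshold."""
    scored = {name: score_action(impacts, impacted) for name, impacts in actions.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] >= harm_threshold else None

people = [
    Entity("resident", {"privacy": 0.9, "convenience": 0.4}),
    Entity("visitor",  {"privacy": 0.6, "convenience": 0.7}),
]
candidate_actions = {
    "record_everything": {"privacy": -1.0, "convenience": +0.5},
    "record_on_request": {"privacy": -0.1, "convenience": +0.3},
}
print(choose(candidate_actions, people))   # -> "record_on_request"
```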

In the values arena, one risk is an AI’s inability to assess the full impact of its actions on each entity. Another risk: the inaccuracy of the information the AI has about the full set of entities impacted – immediately or at any (I can hear objections to that “any” already) time in the future – or about their values. A further risk is that of side-effects of the goal an AI is given, because of lacking or erroneous boundaries of action – and their priorities relative to the goals – that are appropriate or reasonable according to what is often called common sense. But common sense is not common, in the sense of being fully the same for two different people: here values and culture intervene again, as well as a concept of environmental stability vs diverging processes (see also: “introspection” above). Disparities between training data sets and operational data are in fact also a values issue. I further hear about robot rights, including the right to vote: any number of AIs – produced (if not educated) by a number of companies – gaining a voice in politics? The weights of such AIs’ values would gain a whole new dimension.
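
On the training-versus-operational disparity point, a crude sketch of a drift check: compare each feature’s operational mean against its training mean and flag the ones that have moved suspiciously far. The features, numbers and threshold are invented for illustration; real drift detection would use proper statistical tests.

```python
# Toy training-vs-operational data drift check: flag features whose operational mean
# has moved more than a few training standard errors. Thresholds and feature
# names are illustrative assumptions.
import statistics

def drift_report(train, live, max_sigma=3.0):
    report = {}
    for feature, train_values in train.items():
        mu = statistics.mean(train_values)
        sigma = statistics.stdev(train_values) or 1e-9
        live_mu = statistics.mean(live[feature])
        # Flag the feature if its live mean is far outside what training data suggests.
        report[feature] = abs(live_mu - mu) > max_sigma * sigma / len(live[feature]) ** 0.5
    return report

train_data = {"age": [30, 35, 40, 45, 50], "income": [40.0, 42.0, 41.0, 39.0, 43.0]}
live_data  = {"age": [31, 36, 39, 44, 51], "income": [60.0, 61.0, 59.0, 62.0, 58.0]}
print(drift_report(train_data, live_data))   # {'age': False, 'income': True}
```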

 

Unplanned undesirable (“uu”) impacts of actions: beyond the side-effects of the production and education processes of AIs, these arise from their own physical or computational presence, or from their actions, direct or indirect – mechanical, radiational, chemical, computational, electromagnetic – or from changing the informational status of any entity, which in humans (or animals) may lead, if not to a clear action, to an emotional change, thus impacting them via their endocrine systems and their behavior, as well as their nonverbal communication. An AI may also face uu conditions, resulting in out-of-place AI behavior.

 

Switching glitches: the command of a system may need to pass from AI to human (or to another AI) and back, and this currently doesn’t always run smoothly. This seems to be a design problem that should get better with appropriate development of systems where a change of command is required to be possible.
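
One way to make that change of command explicit is to treat it as a small state machine with a confirmation step and a timeout fallback rather than an instant switch. The states, timeout and fallback behaviour below are illustrative assumptions, not a description of how any existing vehicle or system does it.

```python
# Toy handoff state machine: control passes from AI to human only after the human
# confirms, and falls back to a safe stop if confirmation does not arrive in time.
# States, timeout, and fallback are illustrative assumptions.
import time
from enum import Enum, auto

class Control(Enum):
    AI = auto()
    HANDOFF_REQUESTED = auto()
    HUMAN = auto()
    SAFE_STOP = auto()

class Handoff:
    def __init__(self, timeout_s=5.0):
        self.state = Control.AI
        self.timeout_s = timeout_s
        self._requested_at = None

    def request_handoff(self):
        if self.state is Control.AI:
            self.state = Control.HANDOFF_REQUESTED
            self._requested_at = time.monotonic()

    def human_confirms(self):
        if self.state is Control.HANDOFF_REQUESTED:
            self.state = Control.HUMAN

    def tick(self):
        # Called periodically: if the human never confirmed, stop safely.
        if (self.state is Control.HANDOFF_REQUESTED
                and time.monotonic() - self._requested_at > self.timeout_s):
            self.state = Control.SAFE_STOP

h = Handoff(timeout_s=0.1)
h.request_handoff()
time.sleep(0.2)
h.tick()
print(h.state)   # Control.SAFE_STOP: nobody took over, so the system stopped safely
```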

 

Chasm: the complexity of decision processes (aka “inscrutability” – word is out that the layers in deep learning make scrutiny impossible, but it looks possible; see below) along with our lacking personal hands-on usage of the same sets of information (aka “information asymmetry”). The risk, for the AI provider, is of legal, political or social objections to the use of the AI; for society, it is the impact of a lack of trust in, and the unpredictability of, the AI’s behaviour. A key issue seems to be the capability to synthesize humanly intelligible summaries of decisional reasons and of knowledge insights before the implementation of decisions, where sufficient time is available. Considering that document summarization AI already exists, this may be attainable.

It may be a good idea to look into what we can deduce, from the observable outputs and processes of any AI, about the sum total of the inscrutable processes it is undergoing, and how fast we can do that. This may also help understand – some aspects of some – humans. The good news is that, if we are going to get enhanced with AI, we may then gain the ability to understand more complex concepts faster. The bad news would then be that not everybody would be able to afford enhancements of that type, therefore the “morality” of those humans who do get enhancements will be important for the rest of the people to trust their decisions. On the other hand, don’t we trust random people blindly, or for lack of other options, already? But then again, we may have some options here, if we plan for this: specialised AI or large trained resources to go through long logs, and a reduced AI speed, accuracy or efficiency for better scrutability.
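
As a crude illustration of “going through long logs”, here is a tiny extractive summarizer that keeps the log lines whose words are most frequent across the whole log. The log entries and the scoring heuristic are invented placeholders; this is nowhere near a real decision-explanation system, but it shows the kind of mechanical shortcut that could trade accuracy for scrutability.

```python
# Toy extractive summary of a decision log: score each line by how frequent its words
# are across the whole log and keep the top-scoring lines, in their original order.
# The log entries and the scoring heuristic are placeholders for illustration only.
from collections import Counter

def summarize(log_lines, keep=2):
    freq = Counter(w.lower() for line in log_lines for w in line.split())
    def score(line):
        return sum(freq[w.lower()] for w in line.split())
    top = sorted(log_lines, key=score, reverse=True)[:keep]
    return sorted(top, key=log_lines.index)   # restore original order

decision_log = [
    "route A rejected: predicted congestion high",
    "route B rejected: predicted congestion high",
    "route C selected: predicted congestion low and travel time short",
    "sensor 7 calibration drift within tolerance",
]
for line in summarize(decision_log, keep=2):
    print(line)
```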

 

Disruption within your industry, whichever it is: what may help you be at the right place at the right time is, on the one hand, to take a proactive part in decision-making regarding standards and interoperability (see the list of fields where standards are sought, add standards for semantics description formats, while remaining aware that these will only enhance the disruption of your industry), and to make sure you and your territory are adequately represented in these negotiations. On the other hand, to build your own AIs so you don’t have to buy or license while obtaining exactly what you need, meanwhile monitoring startups and burgeoning projects you may want to acquire. These probably go without saying.

 

Language interpretation inaccuracy by the AI or the humans interacting with it: I read that the development of ontologies is sought, and the need goes beyond vocabulary and semantics. In essence, the formulation of legal texts is akin to what appears to be sought here. Thereby we may find ourselves in a world where learning to speak in a way that facilitates the functioning of AIs becomes mandatory, and where we can hardly make jokes anymore, such as the ones that I make and that are often taken by humans for serious propositions.

 

Legal risks? In all of the above.

 

 

These are all theories, but what risks are in question today? Current uses and some associated risks include: speech and facial recognition and image classification (targeting biases), transport routing and autonomous vehicles (with all the questions about whom to spare in case of accident), machine translation (a translator can make or break a negotiation), legged locomotion (which bugs should they not step on, and other calamities), search engines and chatbots, shopping and content recommendations (a narrower view of the world, leading to groups with more extreme positions; some argue that the persons in any movement are those susceptible to it to start with, and that we see other news elsewhere than on our own devices – they/we may be susceptible, which is why it is important not to put them/us in an environment that can promote this; see also my post about how to change the nature of a process), predictions on the consumption of content or other products and services (which can lead to the modification of the recipients of production funds: a self-fulfilling prophecy), and lethal autonomous robots (beyond the central questions of target selection and engagement: the fear created in communities, and further questions such as whether “bounded morality” can suffice in any cases identifiable as such by the AI in question, and the risk of the robot being captured and the technology being used and enhanced by an enemy), and many more implementations.

 

Please make sure you complete this reading with “Ethics in AI”, because the questions posed there are not solved with the above. Nor was I in the “cool place of philosophy” while writing the above about risks. Moreover, I seem to have given some leeway regarding coolness of place. Was it because I’ve been consuming too much media advocating AIs adopting “our” values and therefore I couldn’t get rid of that thought pattern, or is the principle suggested above a real solution?

 

Would this excerpt from D.Z. Phillips’ “Contemplative Philosophy of Religion: Questions and Responses” inspire a process for cleansing patches of heat and reaching the cool place? Sounds like a lot of work.

 

“A contemplative conception of philosophy, by contrast, would wait on texts which would challenge any already crystallized conception. It would wait on criticisms and counter-criticisms to see the conceptual character of disagreement in contexts such as these.”

 

 

 

 


Alève Mine

(photo by Magda Nowak and Leszek Andzel)

Science fiction author: “The Premise”. EPFL MSc. in microengineering: robotics, nanotech and more. The only multilingual female scifi/action actor and director with serious martial arts experience to whom the above applies. Founder and lead of the Zurich VR/AR meetup group. Speaker for scifi, and further for the arts in a tech framework. www.alevemine.com
