Artificial Intelligence: Real, Surreal and Here to Stay

September 6, 2016 OPINION/NEWS

By

Cynthia M. Lardner

 

You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence. — BYTE, April 1985

 

Artificial intelligence (AI) is based on the premise that human intelligence “…can be so precisely described that a machine can be made to simulate it.” AI has been actively deployed by both the private and public sectors. This raises questions about the ethics of some AI applications and about the lack of national and international regulations protecting against abuses.


Historical Context

 

By the middle of the 1960s, AI research in the United States (U.S.) was already being heavily funded by the Department of Defense (DoD). By 1985, the AI market exceeded a billion dollars. In the 1990s and early 21st century, public-sector AI successes took place behind closed doors, while private-sector development vacillated. That has changed.

With nanotechnology advances, AI is all but self-replicating:

Just a few years ago, artificial intelligence was a field starved for funding, rife with skepticism, and distinguished not by its achievements but by its perennial disappointments. Now machines have the capability to learn, build things, answer questions, and yes, even harm people.

Today, the United States is the leader in the development of artificial intelligence applications. Most AI research has focused on developing technologies that benefit society. Areas of focus include creating a safer society, preventing accidents, enhancing accessibility, preliminary mental health diagnosis, reducing medical errors and reducing battlefield casualties. According to Eric Horvitz, “over a quarter of all attention and resources” at Microsoft Research alone are focused on artificial intelligence.


The AI Sound Byte

 

AI is a vast multi-disciplinary area encompassing the computer sciences, mathematics, traditional and artificial psychology, linguistics, data analytics, the neurosciences, anthropology, history, and even philosophy. The ontology, or ‘fund of knowledge’, required to implement AI is all-encompassing and malleable knowledge about the world. AI applies this knowledge bank to objects, properties, categories and relationships between objects; events, states and time; cause and effect; and specific knowledge about the end user. This fund of knowledge is then used to develop user-specific AI algorithms.
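
To make this ‘fund of knowledge’ idea concrete, here is a minimal sketch: facts stored as subject-relation-object triples with one simple inference rule. It is a hypothetical illustration of knowledge representation in general, not a depiction of any production system’s internals.

```python
# A toy 'fund of knowledge': facts as (subject, relation, object) triples,
# plus one inference rule (transitive category membership). Illustrative only.
from collections import defaultdict

class KnowledgeBase:
    def __init__(self):
        self.facts = defaultdict(set)  # (subject, relation) -> set of objects

    def add(self, subject, relation, obj):
        self.facts[(subject, relation)].add(obj)

    def query(self, subject, relation):
        return self.facts.get((subject, relation), set())

    def is_a(self, subject, category):
        """Follow 'is_a' links transitively to answer category questions."""
        direct = self.query(subject, "is_a")
        if category in direct:
            return True
        return any(self.is_a(parent, category) for parent in direct)

kb = KnowledgeBase()
kb.add("Watson", "is_a", "supercomputer")
kb.add("supercomputer", "is_a", "machine")
kb.add("Watson", "won", "Jeopardy!")

print(kb.is_a("Watson", "machine"))  # True, inferred via 'supercomputer'
print(kb.query("Watson", "won"))     # {'Jeopardy!'}
```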

Perhaps the best-known application of AI came when IBM’s supercomputer Watson won on Jeopardy! According to Simon Porter, Vice President of European Commercial Sales for IBM:

Watson’s approach to information has often been compared to that of a human, with the technology harnessing the same processes of observation, interpretation, evaluation and decision-making. In this respect, Watson is not only able to follow instructions, but to learn. Whichever field Watson is employed in, it learns the relevant context, language and thought process beforehand from human experts. By loading the relevant body of literature into Watson, it is able to develop a corpus of knowledge within a particular domain. Watson is then trained via a process called machine learning, where pairs of questions and answers give Watson the grounding to interpret the vast quantities of data at its disposal.
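
The training process Porter describes, grounding a model with question-and-answer pairs over a domain corpus, can be illustrated with a toy example. The sketch below uses scikit-learn and invented trivia pairs; it is a minimal stand-in for the idea, not Watson’s actual architecture.

```python
# Toy illustration of learning from question-answer pairs: a text model is
# fit on known pairs, then asked to interpret a new question. Not Watson.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented question-answer pairs "ground" the model, as the passage describes.
questions = [
    "What organ pumps blood through the body?",
    "Which organ filters waste from the blood?",
    "What organ controls breathing and gas exchange?",
    "Where is bile produced?",
]
answers = ["heart", "kidney", "lung", "liver"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(questions, answers)

# A new, unseen question is mapped onto the learned domain knowledge.
print(model.predict(["Which organ cleans the blood of waste?"]))  # likely 'kidney'
```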

The next best example is the text-to-speech program used by physicist and professor Stephen Hawking, who suffers from Amyotrophic Lateral Sclerosis. The updated program he relies upon was a collaborative effort between Intel and SwiftKey. The combined technology is preprogrammed to learn “how the professor thinks and suggests the words he might want to use next.” The program, in turn, suggests actions, learning from every user action. This is all based upon triggering specific neural networks.
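
The underlying idea, a predictor that learns word patterns from each user’s own writing, can be sketched very simply. The toy model below just counts word pairs and suggests the most frequent follower; real systems such as SwiftKey’s are far more sophisticated, and the training sentences here are invented.

```python
# A minimal, user-adaptive next-word predictor based on bigram counts.
# Illustrative only; not SwiftKey's or Intel's actual technology.
from collections import Counter, defaultdict

class NextWordPredictor:
    def __init__(self):
        self.bigrams = defaultdict(Counter)  # word -> Counter of next words

    def learn(self, text):
        """Update counts from every user action, as the article describes."""
        words = text.lower().split()
        for current, following in zip(words, words[1:]):
            self.bigrams[current][following] += 1

    def suggest(self, word, k=3):
        """Return up to k most likely next words for this user."""
        return [w for w, _ in self.bigrams[word.lower()].most_common(k)]

predictor = NextWordPredictor()
predictor.learn("the universe began with a big bang")
predictor.learn("the universe is expanding at an increasing rate")
print(predictor.suggest("universe"))  # ['began', 'is'] after this training
```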

These features distinguish AI from the generic predictive-text programs commonly preloaded on smartphones. That being said, user-specific AI programs are being used in smartphones and can even be covertly installed and monitored by drones.

AI is increasingly accessible to people with disabilities. “Artificial Intelligence (AI) is already making a huge impact in the lives of people with disabilities. From smart wheelchairs to unmanned vehicles all the way to improved assistive technologies, this particular group is eager to reap the benefits of AI,” stated accessibility consultant Eduardo Meza-Etienne.

Robotics using AI technology is being tested to give users better control of prosthetics. Research is also focused on using AI as a second set of working eyes for the visually impaired.

Mr. Meza-Etienne further explained, “Advocacy groups and associations supporting the rights of people with disabilities are already participating in the implementation of AI to improve their quality of life.”

For instance, Comodini Cachia, a Member of the European Parliament, stated that she is currently working on how aspects of robotics can serve to tackle everyday challenges for disabled individuals. Ms. Cachia concluded, “It is possibly the right time to see how technology and artificial intelligence brings more inclusiveness and more independence”.

The DoD has long been interested in AI, believing it to be the path to reducing operating and human costs. A quasi-civilian use of AI is the DoD’s Defense Advanced Research Projects Agency (DARPA) project involving the coordinated use of CCTV traffic cameras for surveillance purposes. Internationally, CCTV cameras number in the millions. AI is designed to automatically, if not instantaneously, process the video feeds, extracting not only license plate identification but also the identity of drivers via facial recognition software. DARPA can further distinguish between ‘normal’ and ‘abnormal’ behavior.
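
For a sense of what automated processing of a video feed involves at its simplest, the sketch below scans a recording for faces using OpenCV’s stock detector. It is purely illustrative: real surveillance pipelines add plate reading, face identification and behavior models, and the file name used here is hypothetical.

```python
# Toy video-feed scan: detect (not identify) faces frame by frame with
# OpenCV's bundled Haar cascade. "traffic_feed.mp4" is a hypothetical file.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("traffic_feed.mp4")  # hypothetical CCTV recording
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # A real pipeline would pass each crop to a recognition model here.
        print(f"frame {frame_index}: {len(faces)} face(s) detected")
    frame_index += 1
cap.release()
```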

The problem is that what constitutes normal or abnormal behavior is a value judgment contingent upon a host of uncontrollable variables. An example was given in a recent media analysis:

Imagine surveillance technologies with the capacity of a human brain. Imagine surveillance technologies capable of remembering your activity, analyzing it, correlating it to other facts and/or activities, and of predicting outcomes; and now imagine such technology used to spy on us.

This has to be weighed against the benefits. Consider the November 2015 Paris attacks: the day after, following intelligence shared by the U.S., Paris authorities were able to begin identifying suspects. That could only have occurred through the use of DARPA-developed technology. Another application is stemming the accidents and loss of life caused by drunk drivers.

Another DoD application is early mental health intervention, especially for combat-induced Post-Traumatic Stress Disorder. Research has established that individuals are more likely to be candid about their emotional well-being when ‘being analyzed’ by a computer.

The Pentagon’s website indicates that artificial-intelligence projects are being pursued to provide the U.S. military with “increasingly intelligent assistance.” The United States, followed by China, leads in developing AI for military purposes. The most pressing concern is the race to develop autonomous weapons.

Illustration: Lisa Larson-Walker

Concerns

 

Once deployed, AI is supposed to be closely monitored and controlled by a set of checks and balances, including passive and active human monitoring.
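
One common form such checks and balances take is a human-in-the-loop gate: the system logs every proposed action (passive monitoring) and escalates low-confidence decisions to a person (active control). The sketch below is a hypothetical illustration; the threshold and the model stub are invented.

```python
# Minimal human-in-the-loop oversight: log everything (passive monitoring),
# escalate uncertain decisions to a human (active control). Illustrative only.
import logging

logging.basicConfig(level=logging.INFO)
CONFIDENCE_FLOOR = 0.90  # below this, a human must review (invented threshold)

def decide(features):
    """Stand-in for an AI model returning (action, confidence)."""
    return "approve", 0.72  # hypothetical model output

def run_with_oversight(features):
    action, confidence = decide(features)
    logging.info("model proposed %r at confidence %.2f", action, confidence)
    if confidence < CONFIDENCE_FLOOR:
        return "escalated_to_human"  # active human control takes over
    return action

print(run_with_oversight({"example": True}))  # -> 'escalated_to_human'
```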

The greatest challenge is reaching the point where science can confirm that AI reasoning mimics general intelligence. Microsoft Research chief Eric Horvitz stated that he believed “…intelligent machines could achieve consciousness.”

“The question then becomes whether the two intelligences can co-exist. If our past and present history is any indication…the future doesn’t bode well,” rued physics professor Marcelo Gleiser.

Further concerns can be divided between civilian usage that is harmful or violates human rights, and the use of AI as an autonomous military weapon. The primary problem with civilian use was described by security expert Barnaby Jack:

 

[A] vulnerability of biotechnological systems, which raises concerns that BCI technologies may also potentially be vulnerable and expose an individual’s brain to hacking, manipulation and control by third parties. If the brain can control computer systems and computer systems are able to detect and distinguish brain patterns, then this ultimately means that the human brain can potentially be controlled by computer software.

 

That concern is not unfounded, as reflected in a recent Oxford University study:

 

Intelligent systems are able to “perceive” the surrounding environment and act to maximize their chances of success. For this reason, “extreme intelligences” are difficult to control and would probably act to boost their own intelligence and acquire maximal resources for almost all initial Artificial Intelligence motivations…

Artificial Intelligence (AI) seems to possess huge potential to deliberately work towards the extinction of the human race. Synthetic biology and nanotechnology, along with AI, could possibly answer many existing problems; however, used in the wrong way, they could be the worst tools against humanity.

 

There is cause for concern. The DoD’s DARPA program is now funding several AI projects which “…could potentially equip governments with the most powerful weapon possible: mind control” (emphasis added).

 

Last year, following upgrades to his own AI system, Professor Hawking gave an interview to the BBC discussing the pros and cons of artificial intelligence. Professor Hawking explained that success in AI development would be the “biggest event in human history” but that human beings should not underestimate the risks. “The development of full artificial intelligence could spell the end of the human race,” as it “would take off on its own, and re-design itself at an ever increasing rate.”

This same concern has been articulated by other industry experts. Consider, for instance, the foreboding words of Elon Musk:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

Physicist Louis Del Monte agreed, stating that, “The concern I’m raising is that the machines will view us as an unpredictable and dangerous species.”

British inventor Clive Sinclair has opined that “artificial intelligence will doom mankind,” as “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive. It’s just an inevitability.”

Mr. Horvitz disagreed, stating, “There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”


The Need for Regulation

 

There are two areas mandating national and international governance: use in civilian surveillance and use as a military armament. Mr. Del Monte opined:

Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.

A letter published on July 28, 2015 was signed by Louis Del Monte, Professor Hawking, Elon Musk, and more than 7,000 technology experts and 1,000 artificial intelligence researchers, calling upon the world’s militaries to stop pursuing ever-more-autonomous robotic weapons.

If any major military power pushes ahead with (artificial intelligence) weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: Autonomous weapons will become the Kalashnikovs of tomorrow [the Russian assault rifle in use around the world]. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.

One articulated concern is that autonomous weapons can be deployed to search out and destroy targets. The military maintains that there is some human control over autonomous drones, but the human element is de minimis, and what control does exist is rapidly diminishing.

“The time for society to discuss this issue is right now. It’s not tomorrow,” implored Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence.

The author with United Nations Secretary-General Ban Ki-moon at the inauguration of the new ICC complex on April 19, 2016

Cynthia M. Lardner

Cynthia M. Lardner is a journalist focusing on geopolitics. An ardent supporter of criminal justice, Ms. Lardner is a contributing editor for Tuck Magazine and E – The Magazine for Today’s Female Executive, and her blogs are read in over 37 countries. As a thought leader in the area of foreign policy, her philosophy is to collectively influence conscious global thinking. Ms. Lardner holds degrees in journalism, law, and counseling psychology.


Sources

Artificial Intelligence, Wikipedia, citing McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1. (“Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn’t, we have engaged for a long time in this odd form of self-reproduction.” (McCorduck 2004, p. 3)).

Nilsson, Nils (2010). The Quest for Artificial Intelligence: A History of Ideas and Achievements. New York: Cambridge University Press. ISBN 978-0-521-12293-1.

Clark, Jack, and Bass, Dina, “Here’s What Inspired Top Minds in Artificial Intelligence to Get Into the Field“, July 29, 2015, Bloomberg News.

“The US is at the forefront of technological innovation“, February 8, 2016, The Guardian. (“Artificial intelligence: Apple’s Siri for iPhone, IBM’s Watson, and Xbox One’s Kinect are all examples of some of the systems the US artificial intelligence industry has developed for knowledge, learning and natural language processing.”).

See also “MIT shows off power efficient chip designed for artificial intelligence“, February 9, 2016, Hindustan Times. (The new chip, which the researchers dubbed “Eyeriss,” can also help usher in the “Internet of things” – the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination.

“This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” explained Mike Polley, senior vice president at Samsung’s mobile processor innovations lab.).

Lardner, Richard, “5 Things to Know About Artificial Intelligence and Its Use“, July 28, 2015, Associated Press.

While this is not a paper on artificial psychology, having some understanding of its foundational underpinnings is essential.  See Crowder, James, and Friess, Shelli, “Artificial Psychology: The Psychology of AI”, International Multi-Conference on Informatics and Cybernetics, July 2014, Research Gate. (“With this fully autonomous, learning, reasoning, artificially intelligent system (an artificial brain), comes the need to possess constructs in its hardware and software that mimic processes and subsystems that exist within the human brain, including intuitive and emotional memory concepts.); and Friedenberg, Jay, Artificial Psychology: The Quest for What It Means to Be Human, Psychology Press, Oct 18, 2010.

“Cybersecurity and Artificial Intelligence: A Dangerous Mix,” February 24, 2015, Infosec Institute. (“[These algorithms are] designed to make high-stakes decisions in real time. The real innovation is that these algorithms emulate the human brain, amplifying its capabilities through the instantaneous collaboration of a network of intelligent systems that could be able to learn from their experience”).

Noë, Alva, “Artificial Intelligence, Really, Is Pseudo-Intelligence”, NPR News, November 21, 2014. (Alva Noë is a philosopher at the University of California at Berkeley where he writes and teaches about perception, consciousness and art). (“Artificial intelligence isn’t synthetic intelligence: It’s pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks”).

Porter, Simon, “A simple introduction to IBM Watson and what it can do for your business“, December 22, 2015.

Walters, Richard, “SwiftKey deal highlights gulf in artificial intelligence world“, February 9, 2016,  Technology. (“Symptomatic of the widening gulf was this week’s purchase by Microsoft of SwiftKey, a private UK company whose keyboard app uses AI to predict what word a smartphone user is likely to type next. The technology has been downloaded on to more than 300m handsets, but SwiftKey was not able to build an attractive enough business around the app to make it worth turning down Microsoft’s $250m offer”).

“Stephen Hawking warns artificial intelligence could end mankind“, December 2, 2014, BBC News.

Clark, Jack, and Bass, Dina, “Here’s What Inspired Top Minds in Artificial Intelligence to Get Into the Field“, July 29, 2015, Bloomberg News. (Microsoft Chief Research Scientist Christopher Bishop stated, “Recreating the cognitive capabilities of the brain in an artificial system is a tantalizing challenge, and a successful solution will represent one of the most profound inventions of all time”).

Williams, Lauren C., “New Drone Can Hack Into Your Smartphone To Steal Usernames And Passwords“, March 20, 2015, Think Progress. A new hacker-developed drone can lift your smartphone’s private data from your GPS location to mobile applications’ usernames and passwords — without you ever knowing. The drone’s power lies with a new software, Snoopy, which can turn a benign video-capturing drone into a nefarious data thief.

Snoopy intercepts Wi-Fi signals when mobile devices try to find a network connection.

As a part of its controversial surveillance programs, the U.S. National Security Agency already uses similar technology to tap into Wi-Fi connections and control mobile devices.

With the right tools, Wi-Fi hacks are relatively simple to pull off, and are becoming more common. Personal data can even be sapped from your home’s Wi-Fi router.

‘True inclusion helps disabled reach potential’ – Comodini Cachia“, January 28, 2016, Malta Today. (Ms. Cachia is the EPP Group as shadow rapporteur for the European Parliament in her new role in the Culture and Education (CULT) Committee, on the ‘Implementation of the UN Convention on the rights of persons with disabilities with special regard to the concluding observations of the UN Committee on the rights of persons with disabilities (CPRD)’).

See also “Deft robot fingers give new hope for prosthetics“, February 1, 2016, SwissInfo; VIDEO: ‘Kingsbury musician battles disability through the power of Mi.Mu gloves‘, January 27, 2016, Tamsworth Herald. (“Kris Halpin, a sound engineer and singer-songwriter…has lived with hemiplegic cerebral palsy all his life – a condition that causes muscle stiffness, difficulties with balance and walking, and muscle weakness. But despite Kris’ condition, he has mastered the art of creating high quality songs with the help of the £5,000 Mi.Mu gloves, designed by Harry Potter composer Imogen Heap, which detect hand and arm movements to create the sound of different instruments.”); and Hernandez, Daniela, “The revolution in technology that is helping blind people see“, December 30, 2015, Fusion. (“Recent advancements in artificial intelligence, along with the proliferation of sensors, mean a technological revolution is coming for people with vision loss. Universities and companies like IBM, Microsoft and Baidu are working on technologies ranging from smart glasses to better computer-vision software that could one day serve as digital eyes for the estimated 285 million visually impaired people worldwide”).

The state of artificial intelligence”, June 25, 2013. (In the mental health arena, DARPA has embarked upon the Detection and Computational Analysis of Psychological Signals program. The goal of the DCAPS program is to develop new analytical tools capable of evaluating the psychological status of war fighters in an attempt to improve psychological health awareness and encourage post-traumatic stress disorder sufferers to seek help earlier).

“The computer will see you now: A virtual shrink may sometimes be better than the real thing”, August 16, 2014, The Economist. (“Ellie [a computer] could change things for the better by confidentially informing soldiers with PTSD that she feels they could be a risk to themselves and others, and advising them about how to seek treatment”).

Pentagon Wants a ‘Real Roadmap’ to Artificial Intelligence”, January 6, 2015, Next Gov Newsletter.

(“In November, Undersecretary of Defense Frank Kendall quietly issued a memo to the Defense Science Board that could go on to play a role in history.

The memo calls for a new study that would “identify the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all war-fighting domains…Emphasis will be given to exploration of the bounds-both technological and social-that limit the use of autonomy across a wide range of military operations. The study will ask questions such as: What activities cannot today be performed autonomously? When is human intervention required? What limits the use of autonomy? How might we overcome those limits and expand the use of autonomy in the near term as well as over the next 2 decades?”).

Id. (Validity: ensure that the AI system maintains a normal behavior that does not contradict the requirements defined in the design phase…

Control: how to enable human control over an AI system after it begins to operate, for example to change requirements.

Reliability: The reliability of predictions made by AI systems.).

Gleiser, Marcelo, “Are We To Become Gods, The Destroyers Of Our World?”, May 6, 2015, NPR News. (Marcelo Gleiser is a theoretical physicist and cosmologist — and professor of natural philosophy, physics and astronomy at Dartmouth College. He is the co-founder of 13.7, a prolific author of papers and essays, and active promoter of science to the general public. His latest book is The Island of Knowledge: The Limits of Science and the Search for Meaning).

Roth, George, “Artificial Intelligence Can Nab Money Launderers“, February 9, 2016, Payments Source.

Xynou, Maria, “Hacking without borders: The future of artificial intelligence and surveillance“.

“The state of artificial intelligence”, June 25, 2013, http://fedscoop.com/the-state-of-artificial-intelligence.

Stephen Hawking warns artificial intelligence could end mankind“, December 2, 2014, BBC News. See also “Now Bill Gates Is ‘Concerned’ About Artificial Intelligence”, Newsy Tech, YouTube. Microsoft Co-Founder Bill Gates expressed his concern:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Elon Musk (@elonmusk), 7/27/15, 9:41 PM, If you’re against a military AI arms race, please sign this open letter: tinyurl.com/awletter. See also Elon Musk On AI: ‘We’re Summoning The Demon‘, October 26, 2014, Newsy Tech, YouTube.

Tucker, Patrick, “US Drone Pilots Are As Skeptical of Autonomy As Are Stephen Hawking and Elon Musk”, July 28, 2015. (The letter was signed by other well-respected experts in the field including:

  • Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence in Seattle;

  • Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, Australia, and at Australia’s Centre of Excellence for Information Communication Technologies; and

  • Bart Selman, computer science professor at Cornell University.)

Tucker, Patrick, Endnote No. xxxvi (“The United States military maintains a rigid public stance on robot weapons. It’s enshrined in a 2012 DOD policy directive that says that autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

But the military keeps working steadfastly at increasing the level of autonomy in drones, boats, and a variety of other weapons and vehicles. The Air Force Human Effectiveness Directorate is working on a software and hardware package called the Vigilant Spirit Control Station, which is designed to allow a single drone crew, composed primarily of a drone operator and a sensor operator, to control up to seven UAVs by allowing the UAVs to mostly steer themselves”).
