
Artificial intelligence: a detailed explainer, with a human point of view

By Milton Hoenig | December 7, 2018

Is artificial intelligence (AI) a threat to our way of life, or a blessing? AI seeks to replicate, and perhaps replace, what human intelligence does best: making complex decisions. Currently, human decision-making processes may include AI as support or backup. But AI could also be “let out of the box” to act on its own, making intricate, possibly life-affecting or conflict-provoking decisions.

Defining and quantifying human intelligence is the realm of philosophers, psychologists and neuroscientists. Humans can think, learn, reason, and understand. They also have enhanced abilities to recognize patterns, to plan actions, and to solve problems. What then is AI? AI describes something a computer or a machine does upon interacting with the environment to achieve certain goals by means that mimic human cognitive functions.

Google Dictionary’s definition is clear: “The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” The Merriam-Webster definition is more nuanced: “A branch of computer science dealing with the simulation of intelligent behavior in computers. The capability of a machine to imitate intelligent human behavior.”

A brief history. AI researchers developed the field of machine learning after realizing that it was more efficient to teach computers to learn than to feed them instructions for each and every task. They write sets of rules to be followed in calculations, called algorithms, that allow a computer to improve as it collects data, giving it the ability to learn without being explicitly programmed.
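To make that idea concrete, here is a minimal sketch, in Python, of a program that learns a simple rule from examples instead of being handed the rule. The data and the hidden rule (y = 2x + 1) are invented for illustration and are not drawn from the article.

```python
# A toy illustration of "learning from data": instead of hand-coding the rule
# y = 2x + 1, the program infers it from noisy examples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)               # training inputs
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 100)   # observed outputs, with a little noise

w, b = 0.0, 0.0   # model parameters, initially arbitrary
lr = 0.1          # learning rate

for _ in range(500):                      # repeatedly adjust the parameters
    pred = w * x + b                      # current predictions
    grad_w = 2 * np.mean((pred - y) * x)  # gradient of the mean squared error
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w                      # step downhill on the error surface
    b -= lr * grad_b

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # converges toward the hidden 2x + 1
```

The same principle, scaled up to millions of parameters and far larger datasets, underlies modern machine learning systems.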

In a subset of machine learning known as deep learning, developers build so-called neural networks of interconnected computers that mimic the way the brain’s neurons receive, process, and transmit information. A Cornell University psychology professor, Frank Rosenblatt, pioneered this approach, modeling a single neuron in 1957 in a device called the Perceptron, which, in recent years, has received fuller appreciation as researchers further develop neural network technology.
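As a rough illustration of Rosenblatt’s idea, the sketch below implements a single artificial neuron trained with his perceptron learning rule on the logical AND function. The toy data and learning rate are assumptions made for the example, not details from the article.

```python
# A minimal sketch of the perceptron learning rule on the logical AND problem.
import numpy as np

# Inputs and desired outputs for AND: output 1 only when both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # connection weights of the single "neuron"
b = 0.0           # bias (threshold)
lr = 0.1          # learning rate

for _ in range(20):                       # repeat over the training examples
    for xi, target in zip(X, y):
        out = 1 if xi @ w + b > 0 else 0  # the neuron "fires" or stays silent
        w += lr * (target - out) * xi     # nudge weights toward the correct answer
        b += lr * (target - out)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

Modern deep-learning networks chain many such units into layers, but the core idea of adjusting connection weights from examples is the same.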

One use for neural networks is in visual processing. After developers “teach” a neural network to recognize elements in a picture, for instance, the neural network can then scan photos independently, classifying the elements within. For the sake of improved efficiency, neural networks rely on statistical methods to take data samples rather than processing entire datasets to arrive at solutions. This sampling process then allows neural networks to make decisions based on what is most likely to be right.
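The sampling idea can be shown in a few lines: rather than scanning every example, a system estimates a quantity (here, an average error) from a small random batch. The numbers below are invented for illustration.

```python
# A toy sketch of sampling instead of processing an entire dataset.
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(loc=0.3, scale=0.1, size=1_000_000)  # per-example error over a huge dataset

full_mean = errors.mean()                            # exact, but touches every example
batch = rng.choice(errors, size=256, replace=False)  # a small random sample
batch_mean = batch.mean()                            # cheap estimate used to guide learning

print(f"full dataset: {full_mean:.4f}  mini-batch of 256: {batch_mean:.4f}")
```

In practice, deep neural networks are trained this way with stochastic gradient descent, in which each update to the network relies on just such a mini-batch, trading a little statistical noise for a large saving in computation.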

Will society benefit should machines become truly independent thinkers (something yet to be achieved)? The demands placed on machine learning have been accelerated by the advent of the extremely large datasets of so-called big data. Both the size of databases and the ability to process them have grown exponentially in recent years. This has helped spawn the interdisciplinary field known as data mining, which involves machine learning, statistics, and database systems. The data mining process allows machines, almost spontaneously, to develop new hypotheses from the interpretation of vast quantities of data.
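One common data-mining technique, clustering, gives a flavor of this hypothesis-finding behavior: given unlabeled measurements, an algorithm proposes groupings that no one specified in advance. The synthetic data and the scikit-learn library in the sketch below are assumptions of the example, not something described in the article.

```python
# A hedged illustration of unsupervised "hypothesis discovery": clustering
# finds structure in unlabeled data without being told what to look for.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two hidden groups of measurements, e.g. readings from two kinds of patients.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
data = np.vstack([group_a, group_b])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # the algorithm proposes two groupings near (0,0) and (3,3)
```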

Sensors and progress in AI. Human decision-making increasingly depends on sensors. At Iran’s Natanz enrichment plant, for instance, gamma ray detectors now monitor in real time the gaseous uranium flowing in the processing pipes to assure that the enrichment level of uranium 235 does not exceed 3.67 percent, the maximum level permitted in the Iran nuclear deal.

In medicine, machines can now offer a credible diagnosis based on sensor data of a patient’s symptoms, but a physician must ultimately weigh in. (Machines, however, are now sometimes more accurate than human doctors.) People already rely on sensors to make everyday decisions, albeit only simple ones, such as turning on air conditioning when the temperature rises or triggering a heart pacemaker.

But as machine learning and AI applications advance, they will depend on data gathered from increasingly advanced sensors. Such sensors and the systems that process their data are developing rapidly, creating beneficial opportunities in some areas, yes, but also confronting society with a challenge to privacy.

The evolution of autonomous vehicles. Autonomous suggests “going it alone,” acting independently. Early versions of two quintessential autonomous vehicles, the driverless car and the autonomous drone, are destined to become commonly used civilian and military products. By 2025, driverless cars are expected to be an $11 billion market.

AI-operated vehicles will need to drive with the same sensory and cognitive functions, including memory and logical thinking, as human drivers. They will need human-like decision-making, learning, and executive capabilities. This will happen as sensors such as improved cameras begin to complement advances in big data, neural networks, and deep learning. Even with a driver still behind the wheel, AI will play a role in driver assistance systems to control sensory functions dealing with vision and sound.

In a truly driverless car, sensors will feed environmental data into what is known as a perception action cycle, a repetitive loop that enables the autonomous vehicle to perform specific actions repeatedly, learning cycle by cycle. Sensors collect environmental data that is fed to a computer that uses AI algorithms to make decisions, consulting the cloud for a stored database of past driving decisions that augments real-time environmental input. Through repeating these loops, AI systems will make more accurate decisions, especially in the case of autonomous vehicles that are sharing data with other vehicles about operating experiences.
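A highly simplified sketch of such a perception-action cycle appears below. Every function in it is a hypothetical placeholder standing in for real sensors, cloud databases, and decision models; none of the names or thresholds come from an actual vehicle system.

```python
# A hypothetical, heavily simplified perception-action loop:
# sense the environment, consult past experience, decide, act, and log the
# outcome so that future cycles can learn from it.
import random

def read_sensors() -> dict:
    """Stand-in for cameras/radar/lidar: report distance to the obstacle ahead (meters)."""
    return {"obstacle_distance": random.uniform(0, 100)}

def query_history(observation: dict) -> float:
    """Stand-in for a cloud database of past driving decisions: a safe following distance."""
    return 30.0

def choose_action(observation: dict, safe_distance: float) -> str:
    """Stand-in for the decision model: brake if closer than the learned threshold."""
    return "brake" if observation["obstacle_distance"] < safe_distance else "cruise"

log = []  # in a real system, logged outcomes would refine future decisions

for step in range(5):                        # the repetitive loop, cycle by cycle
    obs = read_sensors()
    action = choose_action(obs, query_history(obs))
    log.append((obs, action))                # "learning cycle by cycle"
    print(step, round(obs["obstacle_distance"], 1), action)
```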

Driverless cars may be nice, but what about autonomous military drones? While driverless cars may ultimately increase comfort and safety, autonomous drones in the sky above spell military conflict. Currently, human pilots remain in control of the military drones flown by countries such as the United States, Israel, and China. In the United States, teams of CIA or Air Force pilots operate drones from distant ground stations. This stressful job involves guiding these unmanned aerial vehicles in reconnaissance and attack missions, often crossing borders to ferret out and eliminate terrorists and insurgents.

No autonomous drones are known to be operational. To build them, drone programmers must account for many alternative actions, developing AI systems programmed with algorithms that enable learning and even the charting of new courses of action.

The US military, for one, wants to develop these technologies. The US Air Force famously made strides in this area when it released a swarm of 103 micro-drones from F/A-18 fighter jets over China Lake, Calif., in 2016. The relatively inexpensive drones shared a single AI system for making decisions and adapting to each other, like swarms in nature. The US Army recently announced that it is developing the first drones that use AI to spot and target people and vehicles, deciding on potentially lethal action with almost no human involvement.

The moral, ethical and legal concerns of AI. With the US military seemingly racing to use AI technologies in lethal applications and the White House boasting about increasing funding for federal AI efforts, private-sector and nongovernmental organizations are pushing back. Tech workers are questioning whether they want to help develop these capabilities. Privacy advocates, among others, charge that the US government is glossing over the ethical considerations of AI technology and does not have a regulatory framework to keep citizens safe.

In April 2018, over 3,000 Google employees signed a petition protesting the company’s involvement in Project Maven, a US Department of Defense AI project to study imagery for improving battlefield drone strikes. The letter to Google CEO Sundar Pichai states, “Google should not be in the business of war,” according to published reports. A year earlier, the Defense Department had established Project Maven, formally known as the Algorithmic Warfare Cross-Functional Team, to apply “computer vision,” a discipline that autonomously extracts items of specific interest from still or moving imagery, to footage from its reconnaissance drones over combat zones.

Four months after the employee protest, Google announced it would not renew its Defense Department contract. According to the Washington Post, Project Maven is the first known program to weaponize advanced AI. Google and other Silicon Valley giants face the dilemma of reconciling their workers’ concerns about developing warfare technology with the desire to stay in the running for lucrative military contracts.

If countries subject their autonomous drones to the rules of armed conflict, they can use the drones to attack only lawful targets and cannot cause excessive collateral damage. Three important principles militaries consider are those of necessity, proportionality, and distinction. By the principle of proportionality, the anticipated damage to human life and property must not be excessive in relation to the military advantage gained. The principle of distinction requires that a military attack only combatants, distinguishing them from non-combatants, and prohibits indiscriminate attacks not directed at specific military objectives. The implied question is whether AI-driven autonomous drones are capable of reasoning in the human sense. Countries that operate autonomous drones in the future must come face to face with the question of whether having a human in the decision cycle is required under international law.

In May 2018, the White House hosted a closed meeting on “Artificial Intelligence for American Industry” with over 100 attendees from government, academic institutions, industrial research labs, and businesses. According to a summary report by the White House Office of Science and Technology Policy, the purpose of the meeting was to “discuss the promise of AI and the policies [the US government] will need to realize that promise for the American people and maintain [US] leadership in the age of artificial intelligence.”

Subsequently, the Electronic Privacy Information Center (EPIC), a Washington, DC-based public interest research center, faulted the White House summit for not being open to the public and took exception to the White House’s summary report for not discussing the critical issues of accountability, transparency, ethics, and fairness.

The White House also announced the creation of the Select Committee on Artificial Intelligence to address federal AI activities, including autonomous systems, biometric identification, computer vision, human-computer interactions, machine learning, natural language processing, and robotics. EPIC criticized the White House for failing to consider the risks of implementing AI technology and for apparently opening a channel for the Defense Department to develop and deliver AI-based autonomous weapons. EPIC wrote:  “Unless the channels of public input are formally broadened and deepened substantially, the Select Committee will fail to understand and mitigate the risks of AI deployment.”

Major nongovernmental organizations continue to raise concerns about AI. For example, a release from the Association for Computing Machinery states that “the ubiquity of algorithms in our everyday lives is an important reason to focus on addressing challenges associated with the design and technical aspects of algorithms and preventing bias from the onset.” Some, like IEEE-USA, are calling for regulation: “Effective AI public policies and government regulations are needed to promote safety, privacy, intellectual property rights, and cybersecurity, as well as to enable the public to understand the potential impact of AI on society.”

The dangers of AI. The future is open on the question: Will AI serve the human species or control it? The kind of learning a machine does depends on the type of algorithms loaded into it. Developments in areas such as facial recognition and language translation rely on computer neural networks and deep learning modeled after the human brain. In other situations, machines learn to induce new scientific hypotheses that sometimes lead, for example, to new drugs. A next goal of researchers, wrote computer scientist Pedro Domingos in a recent Scientific American article, is to combine the machine learning paradigms into one master algorithm. Could that make it easier for AI machines to take over, or will they always stay in their place to serve the needs of their human originators? At best, they could greatly augment human intelligence. Perhaps soon, each person will have a digital double, a virtual assistant. Of course, this digital doppelganger could do evil unless monitored by its human, Domingos points out.

Ali Nouri, the president of the Federation of American Scientists, believes artificial intelligence, machine learning, and automation all bring tremendous benefits, but also risks. The risks include erosion of personal privacy, increased social media disinformation, and the potential for an autonomous weapons arms race. Nouri notes: “It’s important for policy makers to work with industry, academia, and the public to address these risks in a manner that doesn’t jeopardize innovation, and currently there is too little of this discussion taking place.”

Let philosopher-historian Yuval Noah Harari have the last word. In a recent article in The Atlantic, he wrote: “AI frightens many people because they don’t trust it to remain obedient…But there is no particular reason to believe that AI will develop consciousness as it becomes more intelligent. We should instead fear AI because it will probably always obey its human masters, and never rebel. AI is a tool and a weapon unlike any other that human beings have developed; it will almost certainly allow the already powerful to consolidate their power further.”
