Artificial Intelligence (AI) is the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with the human mind, such as learning and problem-solving.
The ideal characteristic of AI is its ability to take actions that have the best chance of achieving a specific goal.
What is AI?
Most people associate Artificial Intelligence with robots, largely because big-budget movies and novels tell stories of human-like machines wreaking havoc on Earth. The truth is rather different.
AI is based on the principle that human intelligence can be defined in a way that a machine can mimic, executing tasks from the simplest to the most complex. The goals of AI include learning, reasoning, and perception.
AI is interdisciplinary in nature and draws on multiple approaches. Advances in machine learning and deep learning are affecting almost every sector of the tech industry.
How does AI work?
After breaking the Nazi encryption machine Enigma and helping the Allied forces win World War II, mathematician Alan Turing changed history again with a simple question: “Can machines think?”
In 1950, Turing published the paper “Computing Machinery and Intelligence,” in which he proposed what became known as the Turing test and established the fundamental goals and vision of AI.
AI is the branch of computer science that aims to answer Turing’s question in the affirmative: it endeavors to replicate human intelligence in machines.
This expansive goal has given rise to many debates and questions, and no single definition is universally accepted. Saying that the goal of AI is to build intelligent machines, for instance, does not define what artificial intelligence is or what makes a machine intelligent.
Stuart Russell and Peter Norvig, in their groundbreaking textbook Artificial Intelligence: A Modern Approach, frame the field in terms of intelligent agents embodied in machines. With this in mind, we can say that AI is the study of agents that receive percepts from the environment and perform actions.
The field of AI has historically been defined by Norvig and Russell along four approaches:
- Thinking Humanly
- Thinking Rationally
- Acting Humanly
- Acting Rationally
The first two ideas concern thought processes and reasoning, while the other two deal with behaviour. Norvig and Russell focus particularly on rational agents, noting that the skills needed to pass the Turing test also allow an agent to act rationally.
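The Russell–Norvig agent view above can be illustrated with a classic toy example: a reflex agent that maps each percept from its environment directly to an action. This is a minimal sketch, not anything from the book's code; the class and action names are hypothetical.

```python
# Minimal sketch of the percept -> action agent loop described by
# Russell and Norvig, using a toy two-square vacuum world.
# All names here are hypothetical illustrations.

class ReflexVacuumAgent:
    """A simple reflex agent: each percept maps directly to an action."""

    def act(self, percept):
        # A percept is a (location, status) pair from the environment.
        location, status = percept
        if status == "dirty":
            return "suck"
        # If the current square is clean, move to the other square.
        return "right" if location == "A" else "left"

agent = ReflexVacuumAgent()
print(agent.act(("A", "dirty")))  # suck
print(agent.act(("A", "clean")))  # right
print(agent.act(("B", "clean")))  # left
```

Even this trivial agent "receives percepts from the environment and performs actions"; more sophisticated agents differ mainly in how much internal state, learning, and lookahead sits between percept and action.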
Patrick Winston, Professor of AI and computer science at MIT, defines AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception, and action.”
AI systems can perform tasks that ordinarily require human intelligence. Many of these systems are powered by machine learning or deep learning, and some by very boring hand-written rules.
Is AI safe?
Keeping AI beneficial in the long term motivates research in many areas, from economics and law to technical topics such as control, security, validity, and verification. It may be a minor nuisance if your laptop crashes or gets hacked, but the stakes rise sharply when AI controls your car, airplane, pacemaker, automated trading system, or power grid. There is also a need to prevent a devastating arms race in lethal autonomous weapons.
Some question whether strong AI will ever be achieved; others assume that superintelligent AI is guaranteed to be beneficial. Research today helps us prepare for and prevent negative consequences, so that we can enjoy the benefits of AI while avoiding its pitfalls.
Most researchers believe that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, two risk scenarios are considered more likely:
- AI is programmed to do something devastating
Autonomous weapons are AI systems programmed to kill. In the wrong hands, they could easily cause mass casualties. An AI arms race could also result in mass casualties.
- AI is programmed to do something beneficial but develops destructive methods of achieving its goal.
If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to its goal.