Should We Fear Artificial Intelligence?
The general attitude toward artificial intelligence is split in two: celebration of robots that improve humanity, and fear that artificial intelligence will develop to the point where it outsmarts humans. Many renowned figures in science and technology have warned about the looming threat of artificial intelligence. Elon Musk said, “With artificial intelligence, we are summoning the demon.” Bill Gates also warned, “First the machines will do a lot of jobs for us and not be super intelligent… A few decades after that though the intelligence is strong enough to be a concern.” What is it that people fear about robots having full artificial intelligence? In the last few classes, we discussed whether robots could have emotions, free will, or consciousness. These questions point to key features that robots would need in order to fully emulate human beings.
If robots were to truly have emotions, why should we fear them? With emotions, one can express much more complicated thoughts and intentions. Rather than a robot with a simple input and output signal, the addition of emotion allows it to learn from true human experiences. Why is it potentially dangerous for robots to understand true human emotions and experiences? Perhaps it is because knowing emotions allows robots to effectively manipulate humans. The tricky part is that, to be like a human, a robot needs not only to have emotions but also to know how to perceive them in others. For example, a robot without emotions can still interact with a human being; it can output verbal messages depending on the person talking. However, to convince a human being, the robot needs to use some emotional element. A robot that understands emotion will also understand how to use it. Could a robot trigger someone’s jealousy or anger and lead him or her to have murderous thoughts? Likewise, could a robot have emotions that motivate it to steal, rob, or kill someone? Many of these irrational and unethical behaviors are triggered by an emotional response.
Recently, I watched a video of Brooks’ Herbert robot, which uses sonar to detect nearby obstacles. The robot is built on a subsumption architecture, in which layered behaviors compete for control of its motions. Imagine that Brooks’ robot also had emotions. It could refuse to perform the tasks it is told, such as sweeping a surface to detect cans, if it were not in the right mood. What if it could have emotional mood swings like humans do? Would it be harder to control?
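The core idea of a subsumption architecture is that simple behaviors are stacked in priority layers, and a higher layer can subsume (override) a lower one. A minimal sketch of that arbitration scheme, in Python, might look like the following; the behavior names, the sonar reading, and especially the “mood” layer are invented here to illustrate the essay’s thought experiment, not Brooks’ actual implementation:

```python
class Behavior:
    """One layer of a subsumption stack; returns an action or None to defer."""
    def act(self, sonar_cm):
        return None


class Sulk(Behavior):
    """Hypothetical 'emotion' layer imagined in the essay, not part of Herbert."""
    def __init__(self, in_a_bad_mood=False):
        self.in_a_bad_mood = in_a_bad_mood

    def act(self, sonar_cm):
        # If the robot is "not in the right mood", it refuses every task.
        return "refuse_task" if self.in_a_bad_mood else None


class AvoidObstacle(Behavior):
    def act(self, sonar_cm):
        # Subsumes wandering whenever an obstacle is close.
        return "turn_away" if sonar_cm < 30 else None


class Wander(Behavior):
    def act(self, sonar_cm):
        # Lowest layer: default motion when nothing above suppresses it.
        return "move_forward"


def arbitrate(layers, sonar_cm):
    """The highest-priority layer that produces an action wins the cycle."""
    for layer in layers:
        action = layer.act(sonar_cm)
        if action is not None:
            return action
    return "idle"


# Layers ordered from highest priority to lowest.
stack = [Sulk(in_a_bad_mood=False), AvoidObstacle(), Wander()]
print(arbitrate(stack, sonar_cm=20))    # obstacle near -> "turn_away"
print(arbitrate(stack, sonar_cm=120))   # clear path    -> "move_forward"

moody_stack = [Sulk(in_a_bad_mood=True), AvoidObstacle(), Wander()]
print(arbitrate(moody_stack, sonar_cm=120))  # mood wins -> "refuse_task"
```

The sketch makes the essay’s worry concrete: once an emotion-like layer sits at the top of the priority stack, it can veto every behavior below it, which is exactly why such a robot would be harder to control.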