May 31, 2018
AI: From controlling traffic signals to controlling the universe
Prof. Dr. Mohamed Elmasry
In teaching engineering students about microchip design, I would start with a simple exercise: design a microchip to control the traffic signals at an intersection.
Then I expanded the tasks this microchip had to perform. For example, it could be pre-programmed to give the main road a green light three times as often as the secondary road, and to set how many seconds the lights take to sequence from green to yellow to red.
Extra circuitry would also be required to ensure that the two directions could never show green at the same time; otherwise the signals would cause more accidents than having no traffic signals at all.
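To make the exercise concrete, here is a minimal sketch in C of such a fixed-program controller. It is my own illustration, not the original course assignment, and it is written as software rather than actual microchip circuitry: a small phase table stepped by a timer, with the 3:1 preference for the main road expressed as green time per cycle and an assertion standing in for the interlock circuitry.

```c
#include <assert.h>
#include <stdio.h>

/* A hypothetical fixed-program controller: one phase table, stepped by a timer. */
typedef enum { RED, YELLOW, GREEN } light_t;

typedef struct {
    light_t main_road;      /* lights facing the main road      */
    light_t secondary_road; /* lights facing the secondary road */
    int     seconds;        /* how long this phase lasts        */
} phase_t;

/* The main road gets three times the green time of the secondary road,
 * and every green is followed by a yellow before the cross light changes. */
static const phase_t cycle[] = {
    { GREEN,  RED,    30 },  /* main road green         */
    { YELLOW, RED,     4 },  /* main road clearing      */
    { RED,    GREEN,  10 },  /* secondary road green    */
    { RED,    YELLOW,  4 },  /* secondary road clearing */
};
static const int NUM_PHASES = sizeof(cycle) / sizeof(cycle[0]);

int main(void) {
    for (int c = 0; c < 2; c++) {              /* simulate two full cycles */
        for (int i = 0; i < NUM_PHASES; i++) {
            const phase_t *p = &cycle[i];

            /* Interlock: the two directions must never be green together. */
            assert(!(p->main_road == GREEN && p->secondary_road == GREEN));

            /* In hardware this would drive the lamps and wait p->seconds;
             * here we simply print the phase. */
            printf("main=%d secondary=%d for %d s\n",
                   p->main_road, p->secondary_road, p->seconds);
        }
    }
    return 0;
}
```

In the actual assignment this logic would be synthesized into gates and counters; the software form just makes the behaviour easy to inspect.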
Once students had completed this assignment, I increased the complexity yet again by requiring them to optimize their designs against criteria such as low cost, lower energy consumption, solar-powered operation, and so on.
And we reached the point where all the added complexity became something else: we gave it the sexy name "Artificial Intelligence," or AI for short. It evolved something like this:
The signal microchip reads data from road sensors that count vehicles in each direction, and receives data from other traffic controllers, allowing it to react (by changing its lights) to ensure smoother traffic flow and reduce environmental impact.
The same microchip responds to accidents in any direction, allows a central police station to take control, keeps a daily traffic log, and sends its data to a central unit for analysis.
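As a rough sketch of what that adaptive behaviour could look like (again my own illustration, with stubbed-in sensor and override functions rather than any real hardware interface), the control loop below splits each cycle's green time in proportion to the measured vehicle counts, yields to a central override, and writes one log entry per cycle:

```c
#include <stdio.h>

/* Hypothetical sensor and override interfaces; in a real controller these
 * would read loop detectors and a command link to the central police station. */
static int read_main_road_count(void)      { return 42; }  /* stubbed demo value */
static int read_secondary_road_count(void) { return 7;  }  /* stubbed demo value */
static int central_override_active(void)   { return 0;  }  /* stubbed: no override */

#define CYCLE_SECONDS 60   /* total green time to divide each cycle    */
#define MIN_GREEN     10   /* never starve either direction completely */

int main(void) {
    for (int cycle = 0; cycle < 3; cycle++) {   /* simulate three cycles */
        if (central_override_active()) {
            /* The central station has taken control; local logic stands down. */
            puts("override active: deferring to central commands");
            continue;
        }

        int main_count = read_main_road_count();
        int sec_count  = read_secondary_road_count();
        int total      = main_count + sec_count;

        /* Split the green time in proportion to measured demand, clamped so
         * neither direction ever drops below MIN_GREEN seconds. */
        int main_green = (total > 0)
            ? (CYCLE_SECONDS * main_count) / total
            : CYCLE_SECONDS / 2;
        if (main_green < MIN_GREEN)                 main_green = MIN_GREEN;
        if (main_green > CYCLE_SECONDS - MIN_GREEN) main_green = CYCLE_SECONDS - MIN_GREEN;
        int sec_green = CYCLE_SECONDS - main_green;

        /* One line of the daily traffic log, later uploaded for analysis. */
        printf("cycle %d: main=%d vehicles -> %d s green, secondary=%d vehicles -> %d s green\n",
               cycle, main_count, main_green, sec_count, sec_green);
    }
    return 0;
}
```

The clamping to MIN_GREEN reflects the reliability requirement above: however the counts swing, neither direction is ever starved of a green light.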
Through this exercise, students were challenged to design a "smart" microchip able to execute much more complex tasks accurately and reliably, without exceeding constraints on design time, manufacturing cost, backup battery power, and so on.
They learned an essential criterion of successful design: that it "meets a need."
In the case of the traffic controller, the alternative is an analogue system that endlessly toggles between green, yellow and red at the same fixed intervals, regardless of vehicle flow or other conditions. We still have these at many intersections, and they frustrate the hell out of drivers and pedestrians alike. And when they break down, humans have to do the job: how many times have you encountered a police officer directing traffic because the signal lights have gone dark?
Remember: first, an AI microchip must meet a need where the alternatives are not optimal. Second, the engineering and design teams developing such microchips must ensure that their product satisfies that specific need at the lowest cost. Thus, AI microchips cannot logically be more intelligent than the engineers who create them in the first place.
This leads to the design of microchips to control robots for tasks too dangerous or logistically difficult for humans to perform, such as deactivating or detonating an explosive device. Police forces and the military already use such AI robots. And a near-future use for miniaturized AI devices is a swallowable capsule that can identify, and even eradicate, cancer cells anywhere in the human body.
Both the above examples can be described as market-driven: a need is identified and AI microchip technology offers an optimal solution.
But we are already seeing examples of AI technology being applied to no-need markets; that is, to solutions that do not viably replace existing ones. The most prominent application, I believe, is self-driving cars, for which there is no real "need." Such developments are purely technology-driven.
Another example, which perhaps seems far-fetched at the moment: will it be possible to design AI microchips to control Earth's atmosphere and geology, doing a better job of mitigating climate change, floods, droughts, earthquakes, volcanoes and the like?
If so, is there a genuine need for such powerful technological intervention? If I could convince global investors that this is worthwhile, would I become rich and famous? We’ll have to wait and see.
Prof. Dr. David Parnas, a world-class expert in software engineering, wrote an important technical article, "The Real Risks of Artificial Intelligence," in the Inside Risks column of Communications of the ACM, Vol. 60, No. 10 (October 2017), pp. 27-31. He warned: "Do not be misled by demonstrations [of AI applications]: they are often misleading because the demonstrator avoids any situations where 'AI' fails. Computers can do many things better than people, but humans have evolved through a sequence of slight improvements that need not lead to an optimum design." But humans' "natural" solutions work.
In the meantime, AI gives us reason to pause and reflect on the important difference between needs and desires, and on how to recognize which is which.