The ongoing advances in artificial intelligence (AI) are astonishing, and they offer enormous potential — for both good and ill. The implications of AI for productivity, employment, and humanity itself were discussed on Monday in the panel “Automation, Artificial Intelligence, and the Economy” at the Blouin Creative Leadership Summit in New York City.
The three panelists each brought very different insights to the table. Daniel Fountenberry spoke on the benign end of the spectrum of AI’s implications. He is founder and CEO of Books that Grow, an educational firm whose books adapt in real time, analyzing user-generated data to suit each student’s individual progress. This demonstrates the use of AI to solve well-defined problems — more in the realm of adaptive algorithms than anything involving a self-aware artificial entity.
Dr. Thomas Dietterich, Director of Intelligent Systems at Oregon State University’s School of Electrical Engineering and Computer Science, and president of the Association for the Advancement of Artificial Intelligence, then delved into the technical details of AI advancement, including the capabilities of various sensors. In the middle of the spectrum of AI’s impact, he discussed the potential benefits AI could bring, particularly regarding his field of ecosystem management (such as controlling wildfires).
Regarding the economy, Dietterich agreed with the other panelists that automation and AI would displace many jobs (those based on routine tasks). But machines can aid people in many functions, so he predicted that in the future “employers may hire man-machine combinations.” In the following panel, Dr. Hugh Herr, one of the world’s leading experts on prosthetic technology, said that the definition of “human” will get blurred this century by the integration of synthetic materials to augment mankind’s capabilities. Both Dietterich and Herr called for the ethical issues to be worked out now, because it’s all too easy to imagine these technologies resulting in nightmare scenarios.
Voicing a more pessimistic perspective was Jaan Tallinn, best known as the co-founder of Skype, who is also the co-founder of the Centre for the Study of Existential Risk. Existential risk — the danger that AI could turn against and eliminate humanity — was once dismissed as the realm of science fiction, but the relentless advance of AI has made it a serious concern. Tallinn pointed out that the crucial time window lies between the point where machines are intelligent but still need human input and control to function, and the point where machines get so smart that they can create their own AI independent of any human influence. On an ominous note, he said that we don’t know how long that time window is, and it may be zero — meaning it may already be too late.
Further advances in AI are inevitable, but their direction can still be constrained if developers universally agree on and comply with safeguards. Blouin News previously reported on the open letter calling for a complete global ban on offensive autonomous weapons, signed by over a thousand people including Elon Musk, Stephen Hawking, Steve Wozniak, and numerous AI researchers.
But differing opinions on whether AI will ultimately bring prosperity or calamity aside, there is across-the-board consensus today that the world needs to pay far more attention to AI and plan for its inevitable disruptions.