As the pace of change in today’s technology steadily continues to increase, one of the contributing factors behind this acceleration is the rise of artificial intelligence (AI). “Smart” attributes and functionalities are being baked into a multitude of systems that affect our lives in ways that are sometimes visible and, at other times, invisible. To name one well-known example, Siri is an AI-enabled app serving as the voice recognition system on the iPhone. Two recent Subway Fold posts have also examined AI’s applications in law (1) and music (2).
However, notwithstanding all of the technological, social and commercial benefits produced by AI, a widespread wariness, if not fear, of its capacity to produce negative effects still persists. Will the future bring consequences resembling those in the Terminator or Matrix movie franchises, the “singularity” predicted by Ray Kurzweil in which machine intelligence will eventually surpass human intelligence, or perhaps other more benign and productive outcomes?
During the past two weeks, three articles have appeared whose authors express more upbeat outlooks on AI’s potential. They believe that smarter systems are not going to become the world’s new overlords (3) and, moreover, that there is a long way to go before computers will ever achieve human-level intelligence, much less consciousness. I highly recommend reading them all in their entirety for their rich content, insights and engaging prose.
I will sum up, annotate and comment upon some of the key points in these pieces, which have quite a bit in common in their optimism, analyses and forecasts.
First is a reassuring column by Dr. Gary Marcus, a university professor and corporate CEO, entitled Artificial Intelligence Isn’t a Threat—Yet, that appeared in the December 12, 2014 edition of The Wall Street Journal. While acknowledging the advances in machine intelligence, he still believes that computers today are nowhere near “anything that looks remotely like human intelligence”. However, computers do not necessarily need to be “superintelligent” to do significant harm such as wild swings in the equities markets resulting from programming errors.(4)
He is not calling for an end to further research and development in AI. Rather, he urges proceeding with caution, with safeguards carefully in place that focus upon the apps’ access to other networked systems, in areas such as, but not limited to, medicine and autos. Still, the design, implementation and regulation of such “oversight” have yet to be worked out.
Dr. Marcus believes that we might now be overly concerned about threats from AI that have yet to materialize, while still acknowledging its potential dangers. He poses questions about levels of transparency and about technologies that assess whether AI programs are functioning as intended. Essentially, a form of “infrastructure” should be in place to evaluate and “control the results” if needed.
Second is an article enumerating five key reasons why the AI apocalypse is not nearly at hand. It is aptly entitled Will Artificial Intelligence Destroy Humanity? Here are Reasons Not to Worry, by Timothy B. Lee, which was posted on Vox.com on December 19, 2014. The writer asserts that the fears and dangers of AI are far overstated, based on his research and interviews with AI experts. To sum up these factors:
- Actual “intelligence” is dependent on real-world experience, such that massive computing power alone will not produce comparable capabilities in machines. The example cited here is studying a foreign language well enough to pass as a native speaker. This involves both book learning and actually speaking with locals in order to pick up social elements and slang. A computer does not have these experiences, nor can it simulate them.
- Computers, by their very nature, must rely on humans for maintenance, materials, repairs and, ultimately, replacement. The current state of robotics development is unable to handle these responsibilities. Quite simply, machines need us and will continue to do so for a long time.
- Creating a computerized equivalent of a real human brain is very tough and remains beyond the reach of today’s circuitry and programming. Living neurons are quite different in their behaviors and responses from digital devices. The author cites the modeling of weather simulations as an area where progress has been relatively small despite the huge increases in available processing capacity. Moreover, simulating brain activity in an effort to generate a form of intelligence is far more difficult than modeling weather systems.(5)
- Relationships, more than intelligence, are needed to acquire power in the real world. Looking at recent US presidents, the author states that they reached their achievements by virtue of their networks, personalities and skills at offering rewards and penalties. Thus, machines assist in attaining great technological breakthroughs, but only governments and companies can assemble the capital and resources to implement great projects. Taking this logic further, machines could never take over the world because they utterly lack the capability to work with the large numbers of people needed to even attempt this. (Take that, SkyNet.)
- Intelligence will become less valuable as its supply increases, according to the laws of supply and demand. As the price of computing continues to fall, its technological capabilities continue to rise. As the author interprets these market forces, “super-intelligent computers” will become commoditized and, in turn, produce even more intelligent machines whose pricing is competitive. (6)
The third article, entitled Apocalypse or Golden Age: What Machine Intelligence Will Do to Us, by Patrick Ehlen, which was posted on VentureBeat.com on December 23, 2014, presents a likewise sanguine view of the future of AI. He drew upon a range of leaders, projects and studies to arrive at the similar conclusion that the end of the world as we know it is not at hand because of AI. This piece overlaps with the others on a number of key points. It provides the following additional information and ideas:
- Well-regarded university researchers and tech giants such as Google are pursuing extensive and costly AI research and development programs in conjunction with their ongoing work in such areas as robotics, machine learning, and modeling simple connectomes (see fn.5 below).
- Unintended bad consequences of well-intentioned research are almost inevitable. Nonetheless, experts believe that the rate of advancement in this field will continue to accelerate and may well have significant impacts upon the world during the next 20 years.
- On August 6, 2014, the Pew Research Internet Project published a comprehensive report that was directly on point, entitled AI, Robotics, and the Future of Jobs, by Aaron Smith and Janna Anderson. It was compiled from surveys of nearly 1,900 AI experts. To greatly oversimplify the results, while there was largely a consensus view on the progress in this field and the ever-increasing integration of AI into numerous areas, there was also a significant split of opinion as to the economic, employment and educational effects of AI in conjunction with robotics. (I highly recommend taking some time to read through this very enlightening report because of its wealth of insights and diversity of perspectives.)
- Today we are experiencing a “perfect storm” where AI’s progress is further being propelled by the forces of computing power and big data. As a result, we can expect “to create new services that will redefine our expectations”. (7)
- Certain sectors of our economy will realize greater benefits from the surge in AI than others.(8) This, too, is likely to cause displacements and realignments in employment in these areas.
- Changes to relevant social and public policies will be needed in order to successfully adapt to AI-driven effects upon the economy. (This is similar to Dr. Marcus’s view, above, that new forms of safeguards and infrastructure will become necessary.)
I believe that authors Marcus, Lee and Ehlen have all made persuasive cases that AI will continue to produce remarkable new goods, services and markets without any world-threatening consequences. Yet they all alert their readers to the unintended and unforeseeable economic and social impacts that likely await us further down the road. My own follow-up questions are as follows:
- Who should take the lead in coordinating the monitoring of these pending changes? Whom should they report to and what, if any, regulatory powers should they have?
- Will any resulting positive or negative changes attributable to AI be global if and when they manifest themselves, or will they be unevenly distributed among only certain nations, cities, marketplaces, populations and so on?
- Is a “negative” impact of AI only in the eye of the beholder? That is, what metrics and analytics exist, or need to be developed, in order to assess the magnitudes of positive or negative effects? Could such standards be truly objective in their determinations?
- Assuming that AI development and investment continues to race ahead, will this lead to a possible market/investment bubble or, alternatively, some form of AI Industrial Complex?
- So, is everyone looking forward to the July 2015 release of Terminator Genisys?
1. See Possible Futures for Artificial Intelligence in Law Practice posted on September 1, 2014.
2. See Spotify Enhances Playlist Recommendations Processing with “Deep Learning” Technology posted on August 14, 2014.
3. The popular “I, for one, welcome our new robot overlords” meme originated in Season 5, Episode 15 of The Simpsons, entitled Deep Space Homer. (My favorite scene in this ep is where the potato chips (D’oh!) are floating all around the spaceship.)
4. In Flash Boys (W.W. Norton & Company, 2014), renowned author Michael Lewis did an excellent job of reporting on high-speed trading and the ongoing efforts to reform it. Included is coverage of the “flash crash” in 2010 when errant program trading caused a temporary steep decline in the stock market.
5. For an absolutely fascinating deep and wide analysis of current and future projects to map out all of the billions of connections among the neurons in the human brain, I suggest reading Connectome: How the Brain’s Wiring Makes Us Who We Are (Houghton Mifflin, 2012), by Sebastian Seung. See also a most interesting column about the work of Dr. Seung and others by James Gorman in the November 10, 2014 edition of The New York Times entitled Learning How Little We Know About the Brain. (For the sake of all humanity, let’s hope these scientists don’t decide to use Homer J. Simpson, at fn.3 above, as a test subject for their work.)
6. This factor is also closely related to the effects of Moore’s Law, which states that the number of transistors that can be packed onto a chip approximately doubles every two years (a pace often restated as 18 months). This was originally conceived by Gordon E. Moore, a legendary computer scientist and one of the founders of Intel. This principle has held up for nearly fifty years since it was first published.
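To get a feel for the compounding this footnote describes, a doubling every two years works out to roughly a thousand-fold increase every twenty years. Here is a minimal back-of-the-envelope sketch in Python; the function name is my own, and the starting figure (the roughly 2,300 transistors commonly cited for Intel’s 1971 4004 chip) is used only as an illustrative data point, not something taken from the articles above:

```python
# Back-of-the-envelope Moore's Law projection: transistor counts
# assumed to double once every `doubling_period` years.

def projected_transistors(start_count, start_year, target_year, doubling_period=2):
    """Project a transistor count forward from a known starting point,
    assuming one doubling per `doubling_period` years."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Starting from ~2,300 transistors in 1971, forty years (20 doublings)
# projects to roughly 2.4 billion transistors by 2011.
print(f"{projected_transistors(2_300, 1971, 2011):,.0f}")
```

The projected figure lands in the same order of magnitude as actual high-end processors of that era, which is why the footnote can say the principle held up for decades.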
7. This technological convergence is fully and enthusiastically explored in an excellent article by Kevin Kelly entitled The Three Breakthroughs That Have Finally Unleashed AI on the World in the November 2014 issue of WIRED.
8. This seems like a perfect opportunity to invoke the often quoted maxim by master sci-fi and speculative fiction author William Gibson that “The future is already here – it’s just not very evenly distributed.”