Can the Human Brain One Day Be Fully Digitized and Uploaded?

"Human Brain Illustrated with Millions of Small Nerves", Image by Ars Electronica

“Human Brain Illustrated with Millions of Small Nerves”, Image by Ars Electronica

Can the human brain somehow be digitized? Can someone’s mind be bitmapped and uploaded to a computer? Even if this ever becomes possible, is it something anyone would actually want to have done?

Kenneth Hayworth, a Senior Scientist at the Howard Hughes Medical Institute’s Janelia Farm Research Campus, is currently working on this possibility. He is also the President and Co-Founder of the Brain Preservation Foundation. His work in this field is the subject of a most interesting profile by Jerry Adler in the May 2015 edition of Smithsonian Magazine entitled The Quest to Upload Your Mind Into the Digital Space.

I will sum up, annotate and ask a few questions about this piece. I also recommend clicking through and reading it for more of the details.

Hayworth’s plan is to digitize and upload his “memory, skills and personality” to a computer. In turn, this system could be programmed to “emulate” the operations of his brain, perhaps enabling him to live on indefinitely in this electronic form.

This kind of adds a whole new meaning to keeping someone in mind.

If Hayworth does achieve this goal of producing human-level or greater intelligence embedded in silicon, it will be considered one of the technological manifestations of The Singularity, an anticipated point in the next few decades when machine intelligence equals and then surpasses human intelligence. The prediction of this event was the subject of a fascinating book by the renowned inventor and computer scientist Ray Kurzweil entitled The Singularity is Near (Penguin Books, 2006). I suggest reading this if you are ever looking for a truly original and challenging science and technology book.

Carboncopies.org, founded by Dr. Randall Koene, is another organization working toward the similar goal of producing a “substrate-independent mind” (SIM).

In their best-case scenarios, Hayworth and Koene believe this will cost billions of dollars and take about 50 years to accomplish. Hayworth plans to devise a chemical or cryonic means of preserving a whole brain at death and then to scan its structure into a database in order to achieve the mind’s emulation. However, this remains based upon an as yet unproven hypothesis that all of the “subtleties of the human mind and memory” are held within the brain’s “anatomical structure”.
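To make the “anatomical structure” hypothesis a bit more concrete, here is a minimal sketch of my own devising (not anything from Hayworth’s actual pipeline) of how a scanned connectome might be recorded: neurons as nodes, synapses as weighted directed edges.

```python
# A toy illustration (not Hayworth's actual method) of storing a scanned
# connectome as a graph: neurons are nodes, synapses are weighted edges.
from collections import defaultdict

class Connectome:
    def __init__(self):
        # Maps each neuron ID to {downstream neuron ID: synaptic weight}
        self.synapses = defaultdict(dict)

    def add_synapse(self, pre, post, weight):
        """Record one synaptic connection recovered from a brain scan."""
        self.synapses[pre][post] = weight

    def downstream(self, neuron):
        """Return the neurons this one connects to, with their weights."""
        return dict(self.synapses[neuron])

brain = Connectome()
brain.add_synapse("n1", "n2", 0.7)
brain.add_synapse("n1", "n3", 0.2)
print(brain.downstream("n1"))  # {'n2': 0.7, 'n3': 0.2}

# A real human connectome would hold tens of billions of neurons and
# trillions of synapses -- far beyond an in-memory dict, which is exactly
# why the mapping initiative described below is such a long-term effort.
```

Even this cartoon version makes the hypothesis’s stakes plain: if the mind is not fully captured by such a wiring diagram, no amount of scanning fidelity will suffice.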

Furthermore, these projects will require significant leaps in technological development. Chief among these is the building of the connectome, a long-term initiative to fully map the billions of neurons and, in turn, their trillions of connecting synapses in the human brain. As previously discussed in the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World:

For an absolutely fascinating deep and wide analysis of current and future projects to map out all of the billions of connections among the neurons in the human brain, I suggest reading Connectome: How the Brain’s Wiring Makes Us Who We Are (Houghton Mifflin, 2012), by Sebastian Seung. See also a most interesting column about the work of Dr. Seung and others by James Gorman in the November 10, 2014 edition of The New York Times entitled Learning How Little We Know About the Brain. (For the sake of all humanity, let’s hope these scientists don’t decide to use Homer J. Simpson, at fn.3 above, as a test subject for their work.)

Furthermore, a program announced by the US government in 2013 is intended to build a comprehensive map of human brain activity, operating on the scale of the Human Genome Project. (For detailed coverage of this see Obama Seeking to Boost Study of Human Brain, by John Markoff, in the February 17, 2013 edition of The New York Times.)

Among “mainstream researchers”, opinion is split as to whether Hayworth’s objective is even possible. Moreover, will such machine brains experience comparable human emotions, needs and desires? Will they be truly sentient?

My own questions are as follows:

  • Is this story really about machine capabilities or the age-old human dream of becoming immortal?
  • What protocols and laws, if any, should be drafted and enacted to make certain that this area of development does not lead to any unintended or dangerous consequences? Are Isaac Asimov’s Three Laws of Robotics a logical place to begin studying these issues?
  • In addition to neuroscience and artificial intelligence, what other scientific fields and commercial marketplaces might these projects influence and benefit?
  • What entrepreneurial opportunities might exist now and in the future to facilitate and support these initiatives?
  • What would be the long-term economic and social consequences if this form of singularity is ever achieved?
  • Will the prospect of this achievement be so unsettling that it might result in some form of scientific and/or public backlash?

Finally, the notion of transferring an individual’s consciousness from one person to another has long been a popular plot device in science fiction. My own recommendation, one of the best sci-fi novels I have ever read to use this device, is Altered Carbon by Richard K. Morgan (Del Rey, 2003). It presents a truly, well, mind-bending plot and crackling prose about a future world where brains can be downloaded and implanted multiple times from a form of central server. (September 12, 2016 Update: Altered Carbon is being adapted for a new TV series. The details were reported today on Deadline.com in an article entitled ‘Altered Carbon’: Marlene Forte & Trieu Tran Join Cast of Netflix Series, by Denise Petski. I am definitely looking forward to seeing what the production, writing and acting crew do with this very rich source material.)

Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World

"Gritty Refraction", Image by Mark Oakley

“Gritty Refraction”, Image by Mark Oakley

As the rate of change in today’s technology continues to accelerate, one of the contributing factors behind this acceleration is the rise of artificial intelligence (AI). “Smart” attributes and functionalities are being baked into a multitude of systems that affect our lives in many visible and, at other times, invisible ways. To name one well-known example, Siri, the voice recognition system in the iPhone, is an AI-enabled app. Two recent Subway Fold posts have also examined AI’s applications in law (1) and music (2).

However, notwithstanding all of the technological, social and commercial benefits produced by AI, a widespread wariness, if not fear, of its capacity to produce negative effects still persists. Will the future bring consequences resembling those in the Terminator or Matrix movie franchises, the “singularity” predicted by Ray Kurzweil in which machine intelligence eventually surpasses human intelligence, or perhaps other more benign and productive outcomes?

During the past two weeks, three articles have appeared whose authors express more upbeat outlooks about AI’s potential. They believe that smarter systems are not going to become the world’s new overlords (3) and, moreover, that there is a long way to go before computers ever achieve human-level intelligence, much less consciousness. I highly recommend reading them all in their entirety for their rich content, insights and engaging prose.

I will sum up, annotate and comment upon some of the key points in these pieces, which have quite a bit in common in their optimism, analyses and forecasts.

First is a reassuring column by Dr. Gary Marcus, a university professor and corporate CEO, entitled Artificial Intelligence Isn’t a Threat—Yet, that appeared in the December 12, 2014 edition of The Wall Street Journal. While acknowledging the advances in machine intelligence, he still believes that computers today are nowhere near “anything that looks remotely like human intelligence”. However, computers do not necessarily need to be “superintelligent” to do significant harm, such as causing wild swings in the equities markets through programming errors.(4)

He is not calling for an end to further research and development in AI. Rather, he urges proceeding with caution, with safeguards carefully in place that focus on an app’s access to other networked systems in areas such as, but not limited to, medicine and autos. Still, the design, implementation and regulation of such “oversight” have yet to be worked out.

Dr. Marcus believes that we might now be overly concerned about immediate threats from AI, while still needing to acknowledge its longer-term risks. He poses questions about levels of transparency and about technologies that assess whether AI programs are functioning as intended. Essentially, a form of “infrastructure” should be in place to evaluate and “control the results” if needed.

Second is an article enumerating five key reasons why the AI apocalypse is not nearly at hand. It is aptly entitled Will Artificial Intelligence Destroy Humanity? Here are Reasons Not to Worry, by Timothy B. Lee, which was posted on Vox.com on December 19, 2014. The writer asserts that the fears and dangers of AI are far overstated, based on his research and interviews with AI experts. To sum up these factors:

  • Actual “intelligence” is dependent on real world experience, such that massive computing power alone will not produce comparable capabilities in machines. The example cited here is studying a foreign language well enough to pass as a native speaker. This involves both book learning and actually speaking with locals in order to pick up social elements and slang. A computer does not and never will have these experiences, nor can it simulate them.
  • Computers, by their very nature, must rely on humans for maintenance, materials, repairs and, ultimately, replacement. The current state of robotics development is unable to handle these responsibilities. Quite simply, machines need us and will continue to do so for a long time.
  • Creating a computerized equivalent of a real human brain is very tough and remains beyond the reach of today’s circuitry and programming. Living neurons are quite different in their behaviors and responses from digital devices. The author cites weather simulation as a field where progress has been relatively small despite the huge increases in available processing capacity. Moreover, simulating brain activity in an effort to generate a form of intelligence is far more difficult than modeling weather systems.(5) (A rough sense of the scale involved is sketched just after this list.)
  • Relationships, more than intelligence, are needed to acquire power in the real world. Looking at recent US presidents, the author states that they achieved their goals by virtue of their networks, personalities and skills at offering rewards and penalties. Thus, machines can assist in attaining great technological breakthroughs, but only governments and companies can assemble the capital and resources to implement great projects. Taking this logic further, machines could never take over the world because they utterly lack the capability to work with the large numbers of people needed to even attempt this. (Take that, SkyNet.)
  • Intelligence will become less valuable as its supply increases, according to the laws of supply and demand. As the price of computing continues to fall, its technological capabilities continue to rise. As the author interprets these market forces, “super-intelligent computers” will become commoditized and, in turn, produce even more intelligent machines at competitive prices. (6)
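To give a rough sense of the scale problem behind the brain-simulation point above, here is a back-of-envelope calculation. All of the figures are ballpark assumptions of my own choosing, drawn from commonly cited estimates rather than from Lee’s article:

```python
# Rough, illustrative estimate of the compute needed for a synapse-level
# brain simulation. All figures are ballpark assumptions, not measurements.
neurons = 86e9             # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4  # ~10,000 synapses each (order of magnitude)
updates_per_second = 100   # assume each synapse updates ~100 times/sec
ops_per_update = 10        # assume ~10 floating-point ops per update

total_flops = neurons * synapses_per_neuron * updates_per_second * ops_per_update
print(f"{total_flops:.1e} FLOPS")  # ~8.6e+17, i.e. nearly an exaFLOP, sustained

# Even this wildly simplified model ignores dendritic dynamics, brain
# chemistry and plasticity -- some of the reasons living neurons behave so
# differently from digital circuits.
```

However crude, the exercise shows why raw processing capacity alone has not closed the gap: the simplifications in the model, not the multiplication, are where the real difficulty hides.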

The third article, entitled Apocalypse or Golden Age: What Machine Intelligence Will Do to Us, by Patrick Ehlen, which was posted on VentureBeat.com on December 23, 2014, presents a likewise sanguine view of the future of AI. He drew his research from a range of leaders, projects and studies to arrive at the similar conclusion that the end of the world as we know it is not at hand because of AI. This piece overlaps with the others on a number of key points and provides the following additional information and ideas:

  • Well-regarded university researchers and tech giants such as Google are pursuing extensive and costly AI research and development programs in conjunction with their ongoing work in such areas as robotics, machine learning, and modeling simple connectomes (see fn.5 below).
  • Unintended bad consequences of well-intentioned research are almost always inevitable. Nonetheless, experts believe that the rate of advancement in this field will continue to accelerate and may well have significant impacts upon the world during the next 20 years.
  • On August 6, 2014, the Pew Internet Research Project published a comprehensive report that was directly on point entitled AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson. This was compiled from surveys of nearly 1,900 AI experts. To greatly oversimplify the results, while there was largely a consensus view on the progress in this field and the ever-increasing integration of AI into numerous areas, there was also a significant split of opinion as to the economic, employment and educational effects of AI in conjunction with robotics. (I highly recommend taking some time to read through this very enlightening report because of its wealth of insights and diversity of perspectives.)
  • Today we are experiencing a “perfect storm” where AI’s progress is further being propelled by the forces of computing power and big data. As a result, we can expect “to create new services that will redefine our expectations”. (7)
  • Certain sectors of our economy will realize greater benefits from the surge in AI than others.(8) This, too, is likely to cause displacements and realignments in employment in these areas.
  • Changes to relevant social and public policies will be needed in order to successfully adapt to AI-driven effects upon the economy. (This is similar to Dr. Marcus’s view, above, that new forms of safeguards and infrastructure will become necessary.)

I believe that authors Marcus, Lee and Ehlen have all made persuasive cases that AI will continue to produce remarkable new goods, services and markets without any world threatening consequences. Yet they all alert their readers about the unintended and unforeseeable economic and social impacts that likely await us further down the road. My own follow up questions are as follows:

  • Who should take the lead in coordinating the monitoring of these pending changes? Whom should they report to and what, if any, regulatory powers should they have?
  • Will any resulting positive or negative changes attributable to AI be global if and when they manifest themselves, or will they be unevenly distributed among only certain nations, cities, marketplaces, populations and so on?
  • Is a “negative” impact of AI only in the eye of the beholder? That is, what metrics and analytics exist or need to be developed in order to assess the magnitudes of plus or minus effects? Could such standards be truly objective in their determinations?
  • Assuming that AI development and investment continues to race ahead, will this lead to a possible market/investment bubble or, alternatively, some form of AI Industrial Complex?
  • So, is everyone looking forward to the July 2015 release of Terminator Genisys?

___________________________________

1.  See Possible Futures for Artificial Intelligence in Law Practice posted on September 1, 2014.

2.  See Spotify Enhances Playlist Recommendations Processing with “Deep Learning” Technology posted on August 14, 2014.

3.  The popular “I, for one, welcome our new robot overlords” meme originated in Season 5, Episode 15 of The Simpsons, entitled Deep Space Homer. (My favorite scene in this ep is where the – – D’oh! – – potato chips are floating all around the spaceship.)

4.  In Flash Boys (W.W. Norton & Company, 2014), renowned author Michael Lewis did an excellent job of reporting on high-speed trading and the ongoing efforts to reform it. Included is coverage of the “flash crash” in 2010, when errant program trading caused a temporary steep decline in the stock market.

5.  For an absolutely fascinating deep and wide analysis of current and future projects to map out all of the billions of connections among the neurons in the human brain, I suggest reading Connectome: How the Brain’s Wiring Makes Us Who We Are (Houghton Mifflin, 2012), by Sebastian Seung. See also a most interesting column about the work of Dr. Seung and others by James Gorman in the November 10, 2014 edition of The New York Times entitled Learning How Little We Know About the Brain. (For the sake of all humanity, let’s hope these scientists don’t decide to use Homer J. Simpson, at fn.3 above, as a test subject for their work.)

6.  This factor is also closely related to the effects of Moore’s Law, which states that the number of transistors that can be packed onto a chip roughly doubles every two years (a period later revised to 18 months). This was originally conceived by Gordon E. Moore, a legendary engineer and one of the founders of Intel. The principle has held up for nearly fifty years since it was first published.
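As a quick worked example of the compounding involved (a sketch of my own, using the two-year doubling period and the Intel 4004’s transistor count as an illustrative starting point):

```python
# Compounding under Moore's Law: transistor counts double roughly every
# two years. Starting from Intel's 4004 (~2,300 transistors, 1971).
transistors = 2300
for year in range(1971, 2021, 2):  # 25 two-year doublings over 50 years
    transistors *= 2
print(f"{transistors:,}")  # 77,175,193,600 -- about 77 billion
```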

7.  This technological convergence is fully and enthusiastically explored in an excellent article by Kevin Kelly entitled The Three Breakthroughs That Have Finally Unleashed AI on the World in the November 2014 issue of WIRED.

8.  This seems like a perfect opportunity to invoke the often quoted maxim by master sci-fi and speculative fiction author William Gibson that “The future is already here – it’s just not very evenly distributed.”