Applying Origami Folding Techniques to Strands of DNA to Produce Faster and Cheaper Computer Chips

"Origami", Image by David Wicks

“Origami”, Image by David Wicks

We all learned about the periodic table of elements in high school chemistry class. This involved becoming familiar with the names, symbols and atomic weights of all of its chemical occupants. Today, the only thing I still recall from that academic experience is the teacher telling us on the first day of class that we would soon learn to laugh at the following:

Two hydrogen atoms walk into a bar and the first one says to the other “I’ve lost my electron”. The other one answers “Are you sure?”. The first one says “I’m positive.”

I still find this hilarious but whatever I recall today about learning chemistry would likely get lost at the bottom of a thimble. I know, you are probably thinking “Sew what”.

Facing the Elements

Besides everyone’s all-time favorites like oxygen and hydrogen, which love to get mixed up with each other and with most of the other 116 elements, one element stands alone as the foundation upon which the modern information age was born and continues to thrive today: silicon, which has been used to create integrated circuits, much more commonly known as computer chips.

This has been the case since integrated circuits were first fabricated in the late 1950s, and silicon has remained the material of choice for nearly all of the chips running every imaginable one of our modern computing and communication devices. Through major advances in design, engineering and fabrication during the last five decades, chip manufacturers have been able to vastly shrink this circuitry and pack millions of components onto ever smaller squares of this remarkable material.

A fundamental principle that has guided the semiconductor industry, and held up under relentlessly rigorous testing during silicon’s enduring run, is Moore’s Law. In its simplest terms, it states that the number of transistors that can be packed onto a chip doubles nearly every two years. For many years there have been predictions that the end of Moore’s Law is approaching and that a new substrate will have to be found to replace silicon in order to continue making chips smaller, faster and cheaper. This has not yet come to pass and may not do so for years to come.
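
To make the arithmetic behind that doubling concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a strict two-year doubling period and uses the Intel 4004’s roughly 2,300 transistors (1971) as an illustrative starting point; it is a toy model of the trend, not a description of any actual fabrication roadmap.

    # Toy model of Moore's Law: transistor counts doubling every two years,
    # starting from the Intel 4004 (1971), which had roughly 2,300 transistors.

    def transistor_estimate(year, base_year=1971, base_count=2300, doubling_period=2):
        """Estimate the transistor count for a given year under strict doubling."""
        doublings = (year - base_year) / doubling_period
        return base_count * 2 ** doublings

    for year in (1971, 1991, 2011, 2016):
        print(f"{year}: ~{transistor_estimate(year):,.0f} transistors")

Running this prints counts that climb from thousands to billions over four and a half decades, which is the essence of why the industry keeps asking how much longer the trend can continue.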

Nonetheless, scientists and developers from a diversity of fields, in both industry and academia, have remained in pursuit of alternative computing materials. This includes elements and compounds intended to improve upon or replace silicon, as well as efforts to research and fabricate entirely new computing architectures. One involves exploiting the spin states of electrons in the rapidly growing field of quantum computing (this Wikipedia link provides a detailed and accessible survey of its fundamentals and operations), and another involves using, of all things, DNA as a medium.

The field of DNA computing has actually been around in scientific labs and journals for several decades, but it has not gained much real traction as a viable alternative ready to produce computing chips for the modern marketplace. Recently, though, a new advance was reported in a fascinating article posted on Phys.org on March 13, 2016, entitled DNA ‘origami’ Could Help Build Faster, Cheaper Computer Chips, provided by the American Chemical Society (no author is credited). I will summarize and annotate it in order to add some more context, and then pose several of my own molecular questions.

Know When to Fold ‘Em

A team of researchers reported that fabricating such chips is possible when DNA is folded and “formed into specific shapes” using a process much like origami, the Japanese art of folding paper into sculptures. They presented their findings at the 251st American Chemical Society Meeting & Exposition, held in San Diego, CA, from March 13 through 17, 2016. Their paper, entitled 3D DNA Origami Templated Nanoscale Device Fabrication, appears as number 305 on page 202 of the linked document. Their presentation on March 14, 2016, was captured in this 16-minute YouTube video, with Adam T. Woolley, Ph.D., of Brigham Young University as the presenter for the researchers.

According to Dr. Woolley, researchers want to use DNA’s “small size, base-pairing capabilities and ability to self-assemble” in order to produce “nanoscale electronics”. By comparison, silicon chips currently in production contain features 14 nanometers wide, which is still 10 times “the diameter of single-stranded DNA”. Thus, DNA could be used to build chips at a much smaller and more efficient scale.
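
As a quick sanity check on that comparison, here is a tiny Python sketch; the 14-nanometer figure comes from the article, while the roughly 1.4-nanometer DNA diameter is simply the arithmetic implication of the stated 10-to-1 ratio, not a separate measurement from the researchers.

    # Scale comparison from the article: 14 nm silicon features vs. single-stranded DNA,
    # which is described as roughly one-tenth that width.
    silicon_feature_nm = 14.0
    ratio = 10  # "10 times the diameter of single-stranded DNA"
    ssdna_diameter_nm = silicon_feature_nm / ratio
    print(f"Implied single-stranded DNA diameter: ~{ssdna_diameter_nm} nm")  # ~1.4 nm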

However, the problem with using DNA as a chip-building material is that it is not a good conductor of electrical current. To circumvent this, Dr. Woolley and his team are using “DNA as a scaffold” and then adding other materials to the assembly to create electronics. He is working on this with his colleagues at Brigham Young University, Robert C. Davis, Ph.D., and John N. Harb, Ph.D. They are drawing upon their prior work on “DNA origami and DNA nanofabrication”.

Know When to Hold ‘Em

To create this new configuration of origami-ed DNA, they begin with a single long strand of it, which is comparable to a “shoelace” insofar as it is “flexible and floppy”. They then mix this with shorter strands of DNA called “staples” which, in turn, “use base pairing” to gather and cross-link “specific segments of the long strand” and pull them into an intended shape.
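
To give a feel for the base-pairing rule that makes those staples stick where they are supposed to, here is a toy Python sketch. The sequences are made up for illustration and have nothing to do with the researchers’ actual designs; real DNA origami work relies on dedicated design software and far longer strands.

    # Toy illustration of Watson-Crick base pairing (A-T, G-C), the rule that lets a
    # short "staple" strand bind to one specific segment of the long scaffold strand.

    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def reverse_complement(sequence: str) -> str:
        """Return the strand that would pair with `sequence`, read in the opposite direction."""
        return "".join(COMPLEMENT[base] for base in reversed(sequence))

    scaffold_segment = "ATGGCTAACGT"               # made-up stretch of the long scaffold strand
    staple = reverse_complement(scaffold_segment)  # the staple sequence that would bind it

    print("scaffold segment:", scaffold_segment)
    print("matching staple: ", staple)             # ACGTTAGCCAT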

Dr. Woolley’s team is not satisfied with just replicating “two-dimensional circuits”; they are aiming for 3D circuitry, because it can hold many more electronic components. Kenneth Lee, an undergraduate who works with Dr. Woolley, has already built such a “3-D, tube-shaped DNA origami structure”. He has been further experimenting with adding more components, including “nano-sized gold particles”, and he plans to add still more nano-items to his creations with the objective of “forming a semiconductor”.

The entire team’s lead objective is to “place such tubes, and other DNA origami structures, at particular sites on the substrate”. They are also seeking to use gold nanoparticles to create circuits. The DNA is thus being used as “girders” for building integrated circuits.

Dr. Woolley also pointed to the advantageous cost differential between the two methods of fabrication. While traditional silicon chip fabrication facilities can cost more than $1 billion, exploiting DNA’s self-assembling capabilities “would likely entail much lower startup funding” and yield potentially “huge cost savings”.

My Questions

  • What is the optimal range and variety in design, processing power and software that can elevate DNA chips to their highest uses? Are there only very specific applications or can they be more broadly used in commercial computing, telecom, science, and other fields?
  • Can any of the advances currently being made and widely followed in the media using the CRISPR gene editing technology somehow be applied here to make more economical, extensible and/or specialized DNA chips?
  • Does DNA computing represent enough of a potential market to attract additional researchers, startups, venture capital and academic training to be considered a sustainable technology growth sector?
  • Because of the potentially lower startup and investment costs, does DNA chip development lend itself to smaller scale crowd-funded support such as Kickstarter campaigns? Might this field also benefit if it were treated more as an open source movement?

February 19, 2017 Update:  On February 15, 2017, the NOVA science show on PBS in the US aired an absolutely fascinating documentary entitled The Origami Revolution. (The link is to the full 53-minute broadcast.) It covered many of today’s revolutionary applications of origami in science, mathematics, design, architecture and biology. It was both highly informative and visually stunning. I highly recommend clicking through to learn how some very smart people are doing incredibly imaginative and practical work in modern applications of this ancient art.

Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World

"Gritty Refraction", Image by Mark Oakley

“Gritty Refraction”, Image by Mark Oakley

As the rate of change in today’s technology continues to accelerate, one of the contributing factors behind this acceleration is the rise of artificial intelligence (AI). “Smart” attributes and functionalities are being baked into a multitude of systems that affect our lives in ways that are sometimes visible and, at other times, nearly invisible. One well-known example of an AI-enabled app is Siri, the voice recognition system in the iPhone. Two recent Subway Fold posts have also examined AI’s applications in law (1) and music (2).

However, notwithstanding all of the technological, social and commercial benefits produced by AI, a widespread wariness, if not fear, of its capacity to produce negative effects still persists. Will the future produce consequences resembling those in the Terminator or Matrix movie franchises, the “singularity” predicted by Ray Kurzweil in which machine intelligence will eventually surpass human intelligence, or perhaps other more benign and productive outcomes?

During the past two weeks, three articles have appeared where their authors have expressed more upbeat outlooks about AI’s potential. They believe that smarter systems are not going to become the world’s new overlords (3) and, moreover, there is a long way to go before computers will ever achieve human-level intelligence or even consciousness. I highly recommend reading them all in their entirety for their rich content, insights and engaging prose.

I will sum up, annotate and comment upon some of the key points in these pieces, which have quite a bit in common in their optimism, analyses and forecasts.

First is a reassuring column by Dr. Gary Marcus, a university professor and corporate CEO, entitled Artificial Intelligence Isn’t a Threat—Yet, that appeared in the December 12, 2014 edition of The Wall Street Journal. While acknowledging the advances in machine intelligence, he still believes that computers today are nowhere near “anything that looks remotely like human intelligence”. However, computers do not necessarily need to be “superintelligent” to do significant harm, such as causing wild swings in the equities markets through programming errors.(4)

He is not calling for an end to further research and development in AI. Rather, he urges proceeding with caution, with safeguards carefully in place focusing upon these applications’ access to other networked systems in areas such as, but not limited to, medicine and autos. Still, the design, implementation and regulation of such “oversight” have yet to be worked out.

Dr. Marcus believes that we may currently be overly concerned about threats from AI that are not yet real, while still acknowledging its potential dangers. He poses questions about levels of transparency and about technologies that assess whether AI programs are functioning as intended. Essentially, a form of “infrastructure” should be in place to evaluate and “control the results” if needed.

Second is an article enumerating five key reasons why the AI apocalypse is nowhere near at hand. It is aptly entitled Will Artificial Intelligence Destroy Humanity? Here are Reasons Not to Worry, by Timothy B. Lee, which was posted on Vox.com on December 19, 2014. The writer asserts, based on his research and interviews with AI experts, that the fears and dangers of AI are far overstated. To sum up his factors:

  • Actual “intelligence” is dependent on real-world experience, so massive computing power alone will not produce comparable capabilities in machines. The example cited here is studying a foreign language well enough to pass as a native speaker. This involves both book learning and actually speaking with locals in order to pick up social elements and slang. A computer does not and never will have these experiences, nor can it simulate them.
  • Computers, by their very nature, must rely on humans for maintenance, materials, repairs and, ultimately, replacement. The current state of robotics development is unable to handle these responsibilities. Quite simply, machines need us and will continue to do so for a long time.
  • Creating a computerized equivalent of a real human brain is very tough and remains beyond the reach of today’s circuitry and programming. Living neurons are quite different in their behaviors and responses from digital devices. The author cites weather simulation as a field where progress has been relatively small despite huge increases in available processing capacity, and simulating brain activity in an effort to generate a form of intelligence is far more difficult than modeling weather systems.(5)
  • Relationships, more than intelligence, are needed to acquire power in the real world. Looking at recent US presidents, the author states that they accomplished what they did by virtue of their networks, personalities and skills at offering rewards and penalties. Thus, machines can assist in attaining great technological breakthroughs, but only governments and companies can assemble the capital and resources to implement great projects. Taking this logic further, machines could never take over the world because they utterly lack the capability to work with the large numbers of people needed to even attempt this. (Take that, SkyNet.)
  • Intelligence will become less valuable as its supply increases, according to the laws of supply and demand. As the price of computing continues to fall, its technological capabilities continue to rise. As the author interprets these market forces, “super-intelligent computers” will become commoditized and, in turn, be used to produce even more intelligent machines at competitive prices. (6)

The third article, which presents a likewise sanguine view of the future of AI, is entitled Apocalypse or Golden Age: What Machine Intelligence Will Do to Us, by Patrick Ehlen, and was posted on VentureBeat.com on December 23, 2014. He drew upon research from a range of leaders, projects and studies to arrive at a similar conclusion: the end of the world as we know it is not at hand because of AI. This piece overlaps with the others on a number of key points and provides the following additional information and ideas:

  • Well-regarded university researchers and tech giants such as Google are pursuing extensive and costly AI research and development programs in conjunction with their ongoing work in such areas as robotics, machine learning, and modeling simple connectomes (see fn.5 below).
  • Unintended bad consequences of well-intentioned research are almost inevitable. Nonetheless, experts believe that the rate of advancement in this field will continue to accelerate and may well have significant impacts upon the world during the next 20 years.
  • On August 6, 2014, the Pew Internet Research Project published a comprehensive report that was directly on point, entitled AI, Robotics, and the Future of Jobs, by Aaron Smith and Janna Anderson. It was compiled from surveys of nearly 1,900 AI experts. To greatly oversimplify the results, while there was largely a consensus view on the progress in this field and the ever-increasing integration of AI into numerous areas, there was also a significant split of opinion as to the economic, employment and educational effects of AI in conjunction with robotics. (I highly recommend taking some time to read through this very enlightening report because of its wealth of insights and diversity of perspectives.)
  • Today we are experiencing a “perfect storm” where AI’s progress is further being propelled by the forces of computing power and big data. As a result, we can expect “to create new services that will redefine our expectations”. (7)
  • Certain sectors of our economy will realize greater benefits from the surge in AI than others.(8) This, too, is likely to cause displacements and realignments in employment in these areas.
  • Changes to relevant social and public policies will be needed in order to successfully adapt to AI-driven effects upon the economy. (This is similar to Dr. Marcus’s view, above, that new forms of safeguards and infrastructure will become necessary.)

I believe that authors Marcus, Lee and Ehlen have all made persuasive cases that AI will continue to produce remarkable new goods, services and markets without any world-threatening consequences. Yet they all alert their readers to the unintended and unforeseeable economic and social impacts that likely await us further down the road. My own follow-up questions are as follows:

  • Who should take the lead in coordinating the monitoring of these pending changes? Whom should they report to and what, if any, regulatory powers should they have?
  • Will any resulting positive or negative changes attributable to AI be global if and when they manifest themselves, or will they be unevenly distributed among only certain nations, cities, marketplaces, populations and so on?
  • Is a “negative” impact of AI only in the eye of the beholder? That is, what metrics and analytics exist or need to be developed in order to assess the magnitudes of plus or minus effects? Could such standards be truly objective in their determinations?
  • Assuming that AI development and investment continues to race ahead, will this lead to a possible market/investment bubble or, alternatively, some form of AI Industrial Complex?
  • So, is everyone looking forward to the July 2015 release of Terminator Genisys?

___________________________________

1.  See Possible Futures for Artificial Intelligence in Law Practice posted on September 1, 2014.

2.  See Spotify Enhances Playlist Recommendations Processing with “Deep Learning” Technology posted on August 14, 2014.

3.  The popular “I, for one, welcome our new robot overlords” meme originated in Season 5, Episode 15 of The Simpsons, entitled Deep Space Homer. (My favorite scene in this ep is where the – – D’oh! – – potato chips are floating all around the spaceship.)

4.  In Flash Boys (W.W. Norton & Company, 2014), renowned author Michael Lewis did an excellent job of reporting on high-speed trading and the ongoing efforts to reform it. Included is coverage of the “flash crash” in 2010, when errant program trading caused a temporary steep decline in the stock market.

5.  For an absolutely fascinating deep and wide analysis of current and future projects to map out all of the billions of connections among the neurons in the human brain, I suggest reading Connectome: How the Brain’s Wiring Makes Us Who We Are (Houghton Mifflin, 2012), by Sebastian Seung. See also a most interesting column about the work of Dr. Seung and others by James Gorman in the November 10, 2014 edition of The New York Times entitled Learning How Little We Know About the Brain. (For the sake of all humanity, let’s hope these scientists don’t decide to use Homer J. Simpson, at fn.3 above, as a test subject for their work.)

6.  This factor is also closely related to the effects of Moore’s Law, which states that the number of transistors that can be packed onto a chip nearly doubles every two years (a figure later revised to 18 months). It was originally conceived by Gordon E. Moore, a legendary figure in the computing industry and one of the founders of Intel, and the principle has held up for nearly fifty years since it was first published.

7.  This technological convergence is fully and enthusiastically explored in an excellent article by Kevin Kelly entitled The Three Breakthroughs That Have Finally Unleashed AI on the World in the November 2014 issue of WIRED.

8.  This seems like a perfect opportunity to invoke the often quoted maxim by master sci-fi and speculative fiction author William Gibson that “The future is already here – it’s just not very evenly distributed.”