Applying Origami Folding Techniques to Strands of DNA to Produce Faster and Cheaper Computer Chips

"Origami", Image by David Wicks

“Origami”, Image by David Wicks

We all learned about the periodic table of elements in high school chemistry class. This involved becoming familiar with the names, symbols and atomic weights of all of the chemical occupants of this display. Today, the only thing I still recall from this academic experience is the teacher telling us on the first day of class that we would soon learn to laugh at the following:

Two hydrogen atoms walk into a bar and the first one says to the other “I’ve lost my electron”. The other one answers “Are you sure?”. The first one says “I’m positive.”

I still find this hilarious but whatever I recall today about learning chemistry would likely get lost at the bottom of a thimble. I know, you are probably thinking “Sew what”.

Facing the Elements

Besides everyone’s all-time favorites like oxygen and hydrogen, which love to get mixed up with each other and with most of the other 116 elements, one element stands alone as the foundation upon which the modern information age was born and continues to thrive today. Silicon has been used to create integrated circuits, much more commonly known as computer chips.

This has been the case since the first integrated circuits were fabricated in the late 1950s. Silicon has remained the material of choice for nearly all of the chips running our modern computing and communication devices. Through major advances in design, engineering and fabrication during the last five decades, chip manufacturers have been able to vastly shrink this circuitry and pack millions of components onto ever smaller squares of this remarkable material.

A fundamental principle that has guided the semiconductor industry, and held up under relentlessly rigorous testing during silicon’s enduring run, is Moore’s Law. In its simplest terms, it states that the number of transistors that can be placed on a chip doubles roughly every two years. For many years there have been numerous predictions that the end of Moore’s Law is approaching and that another substrate, other than silicon, will have to be found in order to continue making chips smaller, faster and cheaper. This has not yet come to pass and may not do so for years to come.
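Just to make that doubling concrete, here is a quick back-of-the-envelope sketch of my own (not from any of the sources cited here; the starting transistor count and years are only rough, illustrative assumptions):

```python
# A rough sketch of Moore's Law: transistor counts doubling roughly every
# two years. The starting figure (~2,300 transistors in 1971, about the era
# of the first microprocessors) is an illustrative assumption.

def transistors(start_count: float, start_year: int, year: int,
                doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under a simple doubling rule."""
    elapsed = year - start_year
    return start_count * 2 ** (elapsed / doubling_period_years)

if __name__ == "__main__":
    for y in (1971, 1981, 1991, 2001, 2011):
        print(y, f"{transistors(2_300, 1971, y):,.0f}")
```

Run forward four decades, that simple rule lands in the billions of transistors per chip, which shows why even a modest slowdown in the doubling period matters so much to the industry.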

Nonetheless, scientists and developers from a diversity of fields, industries and academia have remained in pursuit of alternative computing materials. This includes elements and compounds to improve or replace silicon’s extensible properties, and other efforts to research and fabricate entirely new computing architectures. One involves exploiting the spin states of electrons in a rapidly growing field called quantum computing (this Wikipedia link provides a detailed and accessible survey of its fundamentals and operations), and another involves using, of all things, DNA as a medium.

The field of DNA computing has actually been around in scientific labs and journals for several decades but has not gained much real traction as a viable alternative ready to produce computing chips for the modern marketplace. Recently though, a new advance was reported in a fascinating article posted on Phys.org on March 13, 2016, entitled DNA ‘origami’ Could Help Build Faster, Cheaper Computer Chips, provided by the American Chemical Society (no author is credited). I will summarize and annotate it in order to add some more context, and then pose several of my own molecular questions.

Know When to Fold ‘Em

A team of researchers reported that fabricating such chips is possible when DNA is folded and “formed into specific shapes” using a process much like origami, the Japanese art of folding paper into sculptures. They presented their findings at the 251st American Chemical Society Meeting & Exposition held in San Diego, CA during March 13 through 17, 2016. Their paper, entitled 3D DNA Origami Templated Nanoscale Device Fabrication, appears as number 305 on page 202 of the linked document. Their presentation on March 14, 2016, was captured in this 16-minute YouTube video, with Adam T. Woolley, Ph.D., of Brigham Young University as the presenter for the researchers.

According to Dr. Woolley, researchers want to use DNA’s “small size, base-pairing capabilities and ability to self-assemble” in order to produce “nanoscale electronics”. By comparison, silicon chips currently in production contain features 14 nanometers wide, which turn out to be 10 times “the diameter of single-stranded DNA”. Thus, DNA could be used to build chips on a much smaller and more efficient scale.
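To put those numbers in perspective, here is a bit of rough arithmetic of my own (the density figure is an idealized, hypothetical estimate, not something claimed by the researchers):

```python
# Back-of-the-envelope arithmetic for the size comparison above: 14 nm
# silicon features are said to be about 10x the diameter of single-stranded
# DNA. The density gain below is my own crude, idealized estimate (area
# scales with the square of the feature size), not a figure from the team.

silicon_feature_nm = 14.0
ratio = 10.0                                     # per the article
ssdna_diameter_nm = silicon_feature_nm / ratio   # ~1.4 nm implied

# If features shrank ~10x in each dimension, roughly 10^2 = 100x more of
# them could fit in the same chip area, ignoring all practical constraints.
area_density_gain = ratio ** 2

print(f"Implied ssDNA diameter: ~{ssdna_diameter_nm:.1f} nm")
print(f"Idealized density gain: ~{area_density_gain:.0f}x")
```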

However, the problem with using DNA as a chip-building material is that it is not a good conductor of electrical current. To circumvent this, Dr. Woolley and his team are using “DNA as a scaffold” and then adding other materials to the assembly to create electronics. He is working on this with his colleagues at Brigham Young University, Robert C. Davis, Ph.D. and John N. Harb, Ph.D. They are drawing upon their prior work on “DNA origami and DNA nanofabrication”.

Know When to Hold ‘Em

To create this new configuration of origami-ed DNA, they begin with a single long strand of it, which is comparable to a “shoelace” insofar as it is “flexible and floppy”. Then they mix this with shorter strands of DNA called “staples” which, in turn, “use base pairing” to gather and cross-link numerous “specific segments of the long strand” to build an intended shape.
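As a toy illustration of that scaffold-and-staples idea, here is a short sketch of my own (the sequences are invented, the model ignores real-world details such as antiparallel strand orientation, and it is emphatically not the researchers’ design software):

```python
# Toy model of DNA origami "staples": a staple strand whose two halves are
# complementary to two distant segments of the long scaffold strand
# effectively pins those segments together, folding the scaffold.
# Deliberately simplified: real pairing is antiparallel and sequence
# design is far more involved.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq: str) -> str:
    """Watson-Crick complement of a DNA sequence (orientation ignored)."""
    return "".join(COMPLEMENT[base] for base in seq)

def staple_binding_sites(scaffold: str, staple: str):
    """Return the scaffold positions each half of a staple would pair with."""
    half = len(staple) // 2
    sites = []
    for part in (staple[:half], staple[half:]):
        target = complement(part)      # scaffold segment this half binds
        pos = scaffold.find(target)
        sites.append(pos if pos >= 0 else None)
    return sites

# Invented example: one staple whose halves bind two far-apart segments,
# which would fold the scaffold back on itself at those two points.
scaffold = "ATGCGTACGTTAGCCGATCGGATATCCGTACGATCGTTAGC"
staple = complement("ATGCGT") + complement("CCGTAC")
print(staple_binding_sites(scaffold, staple))   # -> [0, 25]
```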

Dr. Woolley’s team is not satisfied with just replicating “two-dimensional circuits”; rather, they are pursuing 3D circuitry because it can hold many more electronic components. Kenneth Lee, an undergraduate who works with Dr. Woolley, has already built such a “3-D, tube-shaped DNA origami structure”. He has been further experimenting with adding more components, including “nano-sized gold particles”. He is planning to add still more nano-items to his creations with the objective of “forming a semiconductor”.

The entire team’s lead objective is to “place such tubes, and other DNA origami structures, at particular sites on the substrate”. As well, they are seeking to use gold nanoparticles to create circuits. The DNA is thus being used as “girders” to create integrated circuits.

Dr. Woolley also pointed to the advantageous cost differential between the two methods of fabrication. While traditional silicon chip fabrication facilities can cost more than $1 billion, exploiting DNA’s self-assembling capabilities “would likely entail much lower startup funding” and yield potentially “huge cost savings”.

My Questions

  • What is the optimal range and variety in design, processing power and software that can elevate DNA chips to their highest uses? Are there only very specific applications or can they be more broadly used in commercial computing, telecom, science, and other fields?
  • Can any of the advances currently being made and widely followed in the media using the CRISPR gene editing technology somehow be applied here to make more economical, extensible and/or specialized DNA chips?
  • Does DNA computing represent enough of a potential market to attract additional researchers, startups, venture capital and academic training to be considered a sustainable technology growth sector?
  • Because of the potentially lower startup and investment costs, does DNA chip development lend itself to smaller scale crowd-funded support such as Kickstarter campaigns? Might this field also benefit if it were treated more as an open source movement?

February 19, 2017 Update:  On February 15, 2017, on the NOVA science show on PBS in the US, there was an absolutely fascinating documentary shown entitled The Origami Revolution. (The link is to the full 53-minute broadcast.) It covered many of today’s revolutionary applications of origami in science, mathematics, design, architecture and biology. It was both highly informative and visually stunning. I highly recommend clicking through to learn about how some very smart people are doing incredibly imaginative and practical work in modern applications of this ancient art.

The BBC is Testing an Experimental Neural Interface for Television Remote Control

"Brain Power", Image by Allan Arifo

“Brain Power”, Image by Allan Arifo

Experimental research into using human brainwaves as an alternative form of input to control computers and prosthetic devices has been underway for a number of years. This technology is often referred to as neural interfaces or brain-computer interfaces. The results thus far have generally been promising. Here is a roundup of reports on ExtremeTech.com.

Another early phase neural interface project has been undertaken by the BBC to develop a system enabling a user to mentally select a program from an onscreen television guide. This was reported in a most interesting article entitled The BBC Wants You to Use Your Brain as a Remote Control by Abhimanyu Ghoshal, posted on TheNextWeb.com on June 18, 2015. While still using my keyboard for now, I will sum up, annotate and pose a few questions.

This endeavor, called the Mind Control TV Project, is a joint effort between the BBC’s digital unit and a user experience (“UX”) firm called This Place. In its current format, the input hardware is a headset that can read human brainwave signals. The article contains three pictures of the hardware and software (which is a customized version of the BBC’s iPlayer app normally used for viewing TV shows on the network).

To choose from among a number of options presented onscreen, the user is required to “concentrate” on one while wearing the headset. That is, to select a particular option, the user must concentrate upon it “for a few seconds”. A meter in the interface indicates the level of brain activity the user is generating and the “threshold” he or she must reach in order to initiate that choice.
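As a rough sketch of how that kind of sustained-threshold selection might look in software (the readings, threshold and dwell time below are invented; this is not the BBC’s or This Place’s actual implementation):

```python
# Minimal sketch of threshold-based selection: an option is chosen only
# after the measured "concentration" level stays above a threshold for a
# sustained number of samples. All values here are invented.

from typing import Iterable, Optional

def select_when_sustained(signal: Iterable[float],
                          threshold: float = 0.7,
                          required_samples: int = 3) -> Optional[int]:
    """Return the sample index at which the selection triggers, or None."""
    consecutive = 0
    for i, level in enumerate(signal):
        consecutive = consecutive + 1 if level >= threshold else 0
        if consecutive >= required_samples:
            return i
    return None

# Simulated brain-activity readings, one per second while the user focuses.
readings = [0.2, 0.4, 0.75, 0.8, 0.9, 0.5]
print(select_when_sustained(readings))   # triggers at index 4
```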

The BBC hopes that this research will, in the future, benefit people with physical and neural disabilities that restrict their movements.

My questions are as follows:

  • Could this system eventually be so miniaturized that it could be integrated into an ordinary pair of glasses, perhaps Google Glass or something else?
  • Notwithstanding the significant benefits mentioned in this article, what other types of apps and systems might also find advantages in adapting neural interfaces?
  • What entrepreneurial opportunities might be waiting out there as a result of this technology?
  • How might neural interfaces be integrated with the current wave of virtual and augmented reality systems (covered in these seven recent Subway Fold posts), which are soon to enter the consumer market?