Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything, and where would he put it all?¹ It often comes to mind whenever I experience the sensation of information overload caused by too much content from too many sources. Especially since the advent of the Web, almost everyone I know has felt similarly overwhelmed as the amount of information they are inundated with every day becomes increasingly difficult to parse, comprehend and retain.

The multitude of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos and print publications of all types, just to name a few, are relentlessly updated and uploaded globally, 24/7. Nonetheless, for each of us individually, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

A fascinating report on Phys.org on December 15, 2015, entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff, describes the latest research into how this happens. I will summarize and annotate it, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet is always recognizable no matter the font, as is an individual regardless of their clothing and grooming. We can also identify objects even when our view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment throughout our lives.

A recent discovery by a team of researchers at the Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision”⁴. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason why humans can quickly make sense of the very complex and robust world is because, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests’”. These measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked whether they could identify them again when shown a much smaller segment of the image. This led to the first of their two principal discoveries: the test subjects required only 0.15% of the original data to repeat their identifications.
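Random projection, the technique named in the paper’s title, compresses a high-dimensional image into a far smaller vector while approximately preserving the distances between images, which is what makes categorization from a tiny fraction of the data plausible. Here is a minimal sketch of the idea in Python; the dimensions and “images” are invented for illustration, with 15 out of 10,000 values echoing the 0.15% figure:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10_000, 15          # original pixels, projected dimensions (0.15%)

# Two hypothetical flattened images: b is a near-duplicate of a, c is unrelated.
a = rng.random(d)
b = a + rng.normal(scale=0.01, size=d)
c = rng.random(d)

# Random projection matrix; the 1/sqrt(k) scaling roughly preserves distances.
P = rng.normal(size=(k, d)) / np.sqrt(k)
pa, pb, pc = P @ a, P @ b, P @ c

# Even in 15 dimensions, the near-duplicate stays far closer than the
# unrelated image, so category structure survives the compression.
print(np.linalg.norm(pa - pb) < np.linalg.norm(pa - pc))  # True
```

This distance-preserving property is the content of the Johnson-Lindenstrauss lemma, which underlies random projection methods generally.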

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics) to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, both performed with nearly equal results. Moreover, these results proved that it can be predicted which data “will be the hardest to learn over time”.

In testing their theory, the team prepared three different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) Next, they drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds. Then they were shown 16 of the random sketches. Dr. Vempala was “surprised by how close the performance was” of the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support to it being a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augmented reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies

“Transparency and Colors”, Image by coniferconifer

All manner of software and hardware development projects strive to diligently take out every single bug that can be identified¹. However, a team of researchers currently working on a fascinating and potentially valuable project is doing everything possible to, at least figuratively, leave their bugs in.

This involves a team of Australian researchers who are working on modeling the vision of dragonflies. If they are successful, there could be some very helpful implications for applying their work to the advancement of bionic eyes and driverless cars.

When the design and operation of biological systems in nature are adapted to improve man-made technologies, as they are here, such developments are often referred to as biomimetic².

The very interesting story of this, well, visionary work was reported in an article in the October 6, 2015 edition of The Wall Street Journal entitled Scientists Tap Dragonfly Vision to Build a Better Bionic Eye by Rachel Pannett. I will summarize and annotate it, and pose some bug-free questions of my own. Let’s have a look and see what all of this organic and electronic buzz is really about.

Bionic Eyes

A research team from the University of Adelaide has recently developed a system modeled upon a dragonfly’s vision. It is built upon a foundation that also uses artificial intelligence (AI)³. Their findings appeared in an article entitled Properties of Neuronal Facilitation that Improve Target Tracking in Natural Pursuit Simulations, published in the June 6, 2015 edition of the Journal of the Royal Society Interface (access credentials required). The authors are Zahra M. Bagheri, Steven D. Wiederman, Benjamin S. Cazzolato, Steven Grainger, and David C. O’Carroll. The funding grant for their project was provided by the Australian Research Council.

While the vision of dragonflies “cannot distinguish details and shapes of objects” as well as humans can, it does possess a “wide field of vision and ability to detect fast movements”. Thus, they can readily track targets even within an insect swarm.

The researchers, including Dr. Steven Wiederman, the leader of the University of Adelaide team, believe their work could be helpful to the development work on bionic eyes. These devices consist of an artificial implant placed in a person’s retina that, in turn, is connected to a video camera. What a visually impaired person “sees” while wearing this system is converted into electrical signals that are communicated to the brain. Adding the software model of the dragonfly’s 360-degree field of vision will enable the people using it to more readily detect, among other things, “when someone unexpectedly veers into their path”.

Another member of the research team and a co-author of their research paper, Ph.D. candidate Zahra Bagheri, said that dragonflies are able to fly so quickly and so accurately “despite their visual acuity and a tiny brain around the size of a grain of rice”.⁴ In other areas of advanced robotics development, this type of “sight and dexterity” needed to avoid humans and objects has proven quite challenging to express in computer code.

One commercial company working on bionic eye systems is Second Sight Medical Products Inc., located in California. They have received US regulatory approval to sell their retinal prosthesis.

Driverless Cars

In the next stage of their work, the research team is currently studying “the motion-detecting neurons in insect optic lobes” in an effort to build a system that can predict and react to moving objects. They believe this might one day be integrated into driverless cars in order to avoid pedestrians and other cars⁵. Dr. Wiederman foresees the possible commercialization of their work within the next five to ten years.
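The article does not describe the team’s algorithm, but a classic textbook model of motion detection in insect optic lobes is the Hassenstein-Reichardt correlator: each unit multiplies one photoreceptor’s signal by the time-delayed signal of its neighbor, and mirror-symmetric units are subtracted to yield a direction-selective response. A minimal sketch, offered only as an illustration of that general principle (the function name and stimulus are invented):

```python
import numpy as np

def reichardt_output(frames):
    """Summed response of a 1-D Hassenstein-Reichardt correlator array.

    frames: 2-D array (time x photoreceptors). Each unit correlates a
    photoreceptor's delayed signal with its neighbour's current signal;
    subtracting the mirror-image unit makes the output direction-selective.
    Positive output indicates rightward motion, negative leftward.
    """
    prev, curr = frames[:-1], frames[1:]        # delayed vs. current frames
    right = prev[:, :-1] * curr[:, 1:]          # left sensor delayed x right sensor
    left = prev[:, 1:] * curr[:, :-1]           # right sensor delayed x left sensor
    return (right - left).sum()

# A bright bar drifting rightward across a strip of 10 photoreceptors.
frames = np.zeros((8, 10))
for t in range(8):
    frames[t, t + 1] = 1.0

print(reichardt_output(frames) > 0)  # True: rightward motion detected
```

Models of this kind are computationally cheap, which hints at why insect-inspired algorithms could be attractive for real-time collision avoidance.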

However, obstacles remain to getting this to market. Any integration into a test robot would require a “processor big enough to simulate a biological brain”. The research team believes it can be scaled down, since the “insect-based algorithms are much more efficient”.

Ms. Bagheri noted that “detecting and tracking small objects against complex backgrounds” is quite a technical challenge. As an example, she gave a baseball outfielder who has only seconds to spot, track and predict where a hit ball will land, all in the midst of a colorful stadium and enthusiastic fans⁶.

My Questions

  • As suggested in the article, might this vision model be applicable in sports to enhancing live broadcasts of games, helping teams review their game day videos afterwards by improving their overall play, and assisting individual players to analyze how they react during key plays?
  • Is the vision model applicable in other potential safety systems for mass transportation such as planes, trains, boats and bicycles?
  • Could this vision model be added to enhance the accuracy, resolution and interactivity of virtual reality and augmented reality systems? (These 11 Subway Fold posts appearing in the category of Virtual and Augmented Reality cover a range of interesting developments in this field.)

 


1.  See this Wikipedia page for a summary of the extraordinary career of Admiral Grace Hopper. Among her many technological accomplishments, she was a pioneer in developing modern computer programming. She was also the originator of the term computer “bug”.

2.  For an earlier example of this, see the August 18, 2014 Subway Fold post entitled IBM’s New TrueNorth Chip Mimics Brain Functions.

3.  The Subway Fold category of Smart Systems contains 10 posts on AI.

4.  Speaking of rice-sized technology, see also the April 14, 2015 Subway Fold post entitled Smart Dust: Specialized Computers Fabricated to Be Smaller Than a Single Grain of Rice.

5.  While the University of Adelaide research team is not working with Google, the company has nonetheless been a leader in the development of autonomous cars with its Self-Driving Car Project.

6.  New York’s beloved @Mets might also prove to be worthwhile subjects to model because of their stellar play in the 2015 playoffs. Let’s vanquish those dastardly LA Dodgers on Thursday night. GO METS!