Feet First: New Findings on the Relationship Between Walking and Creativity

“I Heart New York”, Image by Gary McCabe

New York is an incredibly vast and complex city in a multitude of ways and, despite its extensive mass transit system, it is also a great place to walk around. Many New Yorkers prefer to travel to their destinations on foot purely for the pleasure of it. I am proudly one among them.

Whether it is on the streets of NYC or anywhere else in the world, bipedal locomotion is a healthy, no-cost and deeply sensory experience as you take in all of the sights and sounds along your route. It also gives you the opportunity to think to yourself. Whether you are pondering the particulars of “When am I going to get the laundry done?” or contemplating “E=MC²”, plus a gazillion other possible thoughts and subjects in between, putting one foot in front of the other and setting off on your way will transport you to all kinds of intriguing places inside and outside of your head.

Researchers in US universities have recently found compelling evidence that walking can also be quite conducive to creativity. This was the subject of a most interesting article on Quartz.com posted on April 10, 2016, entitled Research Backs Up the Instinct That Walking Improves Creativity, by Olivia Goldhill. I highly recommend reading this in its entirety. I will summarize and add some additional context to this, and then pose some of my own pedestrian questions.

Walking the Walk

“Walk”, Image by Paul Evans

In an earlier article posted on the Stanford University News website on April 24, 2014, entitled Stanford Study Finds Walking Improves Creativity, by May Wong, researchers reported improvements in their test subjects’ performance on Guilford’s alternate uses (GAU) test of creative divergent thinking and the compound remote associates (CRA) test of convergent thinking, conducted during and immediately after walking. The report itself is called Give Your Ideas Some Legs: The Positive Effect of Walking on Creative Thinking, by Marily Oppezzo, Ph.D., and Daniel L. Schwartz, Ph.D. I also recommend reading both of these publications in their entirety (but please walk, don’t run, while doing so).

The effects seen upon the test subjects’ levels of creativity were nearly equivalent whether they were walking outside or else on a treadmill inside while facing a wall. It was the act of walking itself rather than the surroundings that was responsible.

Dr. Schwartz said that the “physiological changes” related to walking are “very complicated”. The reason why walking benefits “so many thinkers” is not readily apparent. However, he thinks “that the brain is focusing on doing a task it’s quite good at”. As a result, walking relaxes people and enables them to think freely.

While it is scientifically well established that exercise can improve an individual’s mood, it remains unclear whether exercise in its “more intense forms” has the same effect as walking. (For the full details on this, the article links to a report entitled The Exercise Effect, by Kirsten Weir, which was the cover story in the December 2011 edition of the Monitor on Psychology, Vol. 42, No. 11.)

Walking the Talk

“Coming and Going”, Image by David Robert Bliwas

Barbara Oakley is an engineering professor at Oakland University and the author of A Mind for Numbers: How to Excel at Math and Science (Even If You Flunked Algebra) (TarcherPerigee, 2014), about effective learning. Her text includes the beneficial effects of walking. In an interview, she took the position that it is incorrect to assume that people are only learning when they are “focused”. Rather, she believes that walking enables us to “subconsciously process and think in a different way”. This has helped her in her own work when she has become “stuck”. After she takes a 15-minute walk, she finds that her ideas begin to flow again.

Some therapists have also recently begun drawing on the benefits of walking outdoors by conducting sessions with their clients on foot. For example, Clay Cockrell, a therapist in New York, believes that this activity permits “more free form thinking”. He sees 35 to 40 clients each week using this approach and has found them grateful for the opportunity to do so.

Mr. Cockrell believes that New Yorkers mostly travel from destination to destination and, as he says, are “never just outside out and about”.

[I respectfully disagree on that last point as I stated in my opening.]

My Questions

  • In order to achieve the full benefits of increased creativity while walking, is it necessary to avoid other distractions, specifically our mobile phones, at the same time? That is, should we put away the smartphone?
  • Alternatively, does listening to music streams or podcast downloads on our phones have any effect upon our creativity while walking?
  • Does walking and talking with other people have a positive or negative effect upon creativity? Should walking be kept a solo activity when it is specifically done to spend time thinking about something?

New IBM Watson and Medtronic App Anticipates Low Blood Glucose Levels for People with Diabetes

“Glucose: Ball-and-Stick Model”, Image by Siyavula Education

Can a new app jointly developed by IBM with its Watson AI technology in partnership with the medical device maker Medtronic provide a new form of support for people with diabetes by safely avoiding low blood glucose (BG) levels (called hypoglycemia), in advance? If so, and assuming regulatory approval, this technology could potentially be a very significant boon to the care of this disease.

Basics of Managing Blood Glucose Levels

The daily management of diabetes involves a diverse mix of factors including, but not limited to, regulating insulin dosages, checking BG readings, measuring carbohydrate intakes at meals, gauging activity and exercise levels, and controlling stress levels. There is no perfect algorithm for this, as everyone with this medical condition is different and their bodies react in individual ways; each person must balance all of these factors while striving to maintain healthy short- and long-term control of BG levels.

Diabetes care today operates in a very data-driven environment. BG levels, expressed numerically, can be checked with a hand-held meter and test strips using a single drop of blood, or with a continuous glucose monitoring system (CGM). The latter consists of a thumb drive-size sensor attached with temporary adhesive to the skin and a needle attached to this unit inserted just below the skin. This system provides patients with frequent real-time readings of their BG levels, and whether they are trending up or down, so they can adjust their medication accordingly. That is, for A grams of carbs and B amounts of physical activity and other contributing factors, C amount of insulin can be calculated and dispensed.
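
The carbohydrate-and-correction arithmetic described above can be sketched in a few lines of code. This is purely an illustrative model, not medical advice; the function name, the parameters (carb ratio, correction factor, activity adjustment) and their sample values are all hypothetical, and real-world dosing is individualized and set by clinicians.

```python
def bolus_estimate(carbs_g, current_bg, target_bg,
                   carb_ratio=10.0, correction_factor=50.0,
                   activity_adjustment=1.0):
    """Illustrative insulin bolus estimate (NOT medical advice).

    carb_ratio: grams of carbohydrate covered by one unit of insulin.
    correction_factor: mg/dL of BG lowered by one unit of insulin.
    activity_adjustment: hypothetical multiplier (e.g. <1.0 after exercise).
    """
    meal_dose = carbs_g / carb_ratio
    # Only correct downward when BG is above target; never a negative dose.
    correction_dose = max(0.0, (current_bg - target_bg) / correction_factor)
    return round((meal_dose + correction_dose) * activity_adjustment, 1)

# A 60 g meal at a BG of 180 mg/dL against a 120 mg/dL target:
# 60/10 = 6.0 units for carbs, (180-120)/50 = 1.2 units correction.
print(bolus_estimate(60, 180, 120))  # 7.2
```

The point of the sketch is simply that the inputs (A, B and the rest) are individual constants, which is exactly why no single algorithm fits every patient.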

Insulin itself can be administered either manually by injection or by an insulin pump (also with a subcutaneously inserted needle). The latter of these consists of two devices: the pump itself, a small enclosed device (about the size of a pager) with an infusion needle placed under the patient’s skin, and a Bluetooth-enabled handheld device (that looks just like a smartphone) used to adjust the pump’s dosage and timing of insulin released. Some pump manufacturers are also bringing to market their latest generation of CGMs that integrate their data and command functions with their users’ smartphones.

(The links in the previous two paragraphs are to Wikipedia pages with detailed information and photos on CGMs and insulin pumps. See also this June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” for another glucose sensing and insulin dispensing system under development.)

The trickiest part of all of these systems is maintaining BG levels throughout each day that are within an acceptable range of values. High levels can result in a host of difficult symptoms. Hypoglycemic low levels can quickly become serious, manifesting as dizziness, confusion and other symptoms, and in extreme cases can ultimately lead to unconsciousness if not treated immediately.

New App for Predicting and Preventing Low Blood Glucose Levels

Taking this challenge to an entirely new level, at last week’s annual Consumer Electronics Show (CES) held in Las Vegas, IBM and Medtronic jointly announced their new app to predict hypoglycemic events in advance. The app is built upon Watson’s significant strengths in artificial intelligence (AI) and machine learning to sift through and intuit patterns in large volumes of data, in this case generated from Medtronic’s user base for their CGMs and insulin pumps. This story was covered in a most interesting article posted in The Washington Post on January 6, 2016 entitled IBM Extends Health Care Bet With Under Armour, Medtronic by Jing Cao and Michelle Fay Cortez. I will summarize and annotate this report and then pose some of my own questions.

The announcement and demo of this new app on January 6, 2016 at CES showed the process by which a patient’s data can be collected from their Medtronic devices and then combined with additional information from their wearable activity trackers and food intake. Next, all of this information is processed through Watson in order to “provide feedback” for the patient to “manage their diabetes”.

Present and Future Plans for The App and This Approach

Making the announcement were Virginia Rometty, Chairman, President and CEO of IBM, and Omar Ishrak, Chairman and CEO of Medtronic. The introduction of this technology is expected in the summer of 2016. It still needs to be submitted to the US government’s regulatory review process.

Ms. Rometty said that the capability to predict low BG events, in some cases up to three hours before they occur, is a “breakthrough”. She described Watson as “cognitive computing”, using algorithms to generate “prescriptive and predictive analysis”. The company is currently making a major strategic move into finding and facilitating applications and partners for Watson in the health care industry. (These eight Subway Fold posts cover various other systems and developments using Watson.)

Hooman Hakami, Executive VP and President of the Diabetes Group at Medtronic, described how his company is working to “anticipate” how the behavior of each person with diabetes affects their blood glucose levels. With this information they can then “make choices to improve their health”. Here is the page from the company’s website about their partnership with IBM to work together on treating diabetes.

In the future, both companies are aiming to “give patients real-time information” on how their individual data is influencing their BG levels and “provide coaching” to assist them in making adjustments to keep their readings in a “healthy range”. In one scenario, patients might receive a text message that “they have an 85% chance of developing low blood sugar within an hour”. This will also include a recommendation to watch their readings and eat something to raise their BG back up to a safer level.
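
A toy version of such an alert can be sketched with a naive linear extrapolation over recent CGM readings. This is nothing like Watson’s machine learning models; the function name, the 70 mg/dL threshold and the sample readings are all illustrative assumptions.

```python
def projects_low(readings, minutes_ahead=60, low_threshold=70):
    """Naive low-BG alert: linearly extrapolate the recent CGM trend.

    readings: list of (minute, mg_dl) tuples, oldest first.
    Returns (alert, projected_mg_dl) for `minutes_ahead` from now.
    """
    (t0, bg0), (t1, bg1) = readings[0], readings[-1]
    slope = (bg1 - bg0) / (t1 - t0)          # mg/dL per minute
    projected = bg1 + slope * minutes_ahead  # straight-line projection
    return projected < low_threshold, round(projected, 1)

# BG falling steadily from 140 mg/dL; projected well below 70 within an hour:
alert, projected = projects_low([(0, 140), (5, 132), (10, 124), (15, 116)])
print(alert, projected)  # True 20.0
```

A production system would of course fold in insulin on board, carbs, activity and learned patient-specific patterns, which is precisely where the machine learning comes in.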

My Questions

  • Will this make patients more or less diligent in their daily care? Is there potential for patients to possibly assume less responsibility for their care if they sense that the management of their diabetes is running on a form of remote control? Alternatively, might this result in too much information for patients to manage?
  • What would be the possible results if this app is ever engineered to work in conjunction with the artificial pancreas project being led by Ed Damiano and his group of developers in Boston?
  • If this app receives regulatory approval and gains wide acceptance among people with diabetes, what does this medical ecosystem look like in the future for patients, doctors, medical insurance providers, regulatory agencies, and medical system entrepreneurs? How might it positively or negatively affect the market for insulin pumps and CGMs?
  • Should IBM and Medtronic consider making their app available on an open-source basis to enable other individuals and groups of developers to improve it as well as develop additional new apps?
  • Whether and how will insurance policies, for both patients and manufacturers, deal with any potential liability that may arise if the app causes some unforeseen adverse effects? Will medical insurance even cover, encourage or discourage the use of such an app?
  • Will the data generated by the app ever be used in any unforeseen ways that could affect patients’ privacy? Would patients using the new app have to relinquish all rights and interests to their own BG data?
  • What other medical conditions might benefit from a similar type of real-time data, feedback and recommendation system?

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything, and then where would he put it all?¹ This often comes to mind whenever I have experienced the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the Web, almost everyone I know has had the same overwhelming experience whenever the amount of information they are inundated with every day seems increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

The latest research into how this happens was covered in a fascinating report on Phys.org on December 15, 2015 entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff. I will summarize and annotate this, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet, no matter the font or an individual regardless of their clothing and grooming, are always recognizable. We can also identify objects even if the view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment during our lives.

A recent discovery by a team of researchers at the Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision⁴”. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason why humans can quickly make sense of the very complex and robust world is because, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests’”. These measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked whether they could identify them once again when shown only a much smaller segment of the image. This led to one of their two principal discoveries: the test subjects required only 0.15% of the original data to repeat their identifications.
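
The idea of categorizing from a tiny random fraction of the data can be illustrated with a classic random projection, in the spirit of (though far simpler than) the techniques in the paper. Here a 100-dimensional toy “image” is projected onto just 10 random directions, and the near-duplicate image still lands closer than the different category; the vectors, dimensions and seed are all invented for the demonstration.

```python
import random

def random_projection(vec, k=10, seed=42):
    """Project vec onto k random Gaussian directions.

    The fixed seed means every vector of the same length is projected
    with the same random matrix, as random projection requires.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(k):
        row = [rng.gauss(0, 1) for _ in range(len(vec))]
        out.append(sum(r * x for r, x in zip(row, vec)) / k ** 0.5)
    return out

def dist(u, v):
    """Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Toy "images": b is a slight variant of a; c is a different category.
a = [1.0] * 100
b = [1.0] * 98 + [0.0, 0.0]
c = [0.0] * 100

pa, pb, pc = (random_projection(v) for v in (a, b, c))
# The near-duplicate stays closer than the different category,
# even though only 10 of the 100 dimensions were kept.
print(dist(pa, pb) < dist(pa, pc))
```

This distance-preserving property of random projections (the Johnson–Lindenstrauss effect) is what makes it plausible that recognition can survive such drastic data reduction.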

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics), to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, they each performed with nearly equal results. Moreover, these results proved that which data “will be the hardest to learn over time” can be predicted.

In testing their theory, the team prepared three different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) Next, they drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds. Next, they were shown 16 of the random sketches. Dr. Vempala was “surprised by how close the performance was” of the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support that it might be a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augmented reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator, see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

Artificial Fingerprints: Something Worth Touching Upon

“030420_1884_0077_x__s”, Image by TNS Sofres

Among the recent advancements in the replication of various human senses, particularly for prosthetics and robotics, scientists have just made another interesting achievement by creating, of all things, artificial fingerprints that can actually sense certain real-world stimuli. This development could have some potentially very productive (and conductive) applications.

Could someone please cue up Human Touch by Bruce Springsteen for this?

We looked at a similar development in artificial human vision just recently in the October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

This latest digital and all-digit story was reported in a fascinating story posted on Sciencemag.org on October 30, 2015 entitled New Artificial Fingerprints Feel Texture, Hear Sound by Hanae Armitage. I will summarize and annotate it, and then add some of my own non-artificial questions.

Design and Materials

An electronic material has been created at the University of Illinois, Urbana-Champaign that, while still under development in the lab, “mimics the swirling design” of fingerprints. It can detect pressure, temperature and sound. The researchers who devised this believe it could be helpful in artificial limbs and perhaps even in enhancing our own organic senses.

Dr. John Rogers, a member of the development team, sees this new material as an addition to the “sensor types that can be integrated with the skin”.

Scientists have been working for years on these materials called electronic skins (e-skins). Some of them can imitate the senses of human skin that can monitor pulse and temperature. (See also the October 18, 2015 Subway Fold post entitled Printable, Temporary Tattoo-like Medical Sensors are Under Development.) Dr. Hyunhyub Ko, a chemical engineer at Ulsan National Institute of Science and Technology in South Korea and another member of the artificial fingerprints development team noted that there are further scientific challenges “in replicating fingertips” with their ability to sense very small differences in textures.

Sensory Perceptions

In the team’s work, Dr. Ko and the others began with “a thin, flexible material” textured with features much like human fingerprints. Next, they used this to create a “microstructured ferroelectric skin“. This contains small embedded structures called “microdomes” (as shown in an illustration accompanying the AAAS.org article), that enable the following e-skin’s sensory perceptions*:

  • Pressure: When outside pressure moves two layers of this material together it generates a small electric current that is monitored through embedded electrodes. In effect, the greater the pressure the greater the current.
  • Temperature: The e-skin relaxes in warmer temperatures and stiffens in colder temperatures, likewise generating changes in the electrical current and thus enabling it to sense temperature changes.
  • Sound: While not originally expected, the e-skin was also found to be sensitive to sound. This emerged in testing by Dr. Ko and his team, who electronically measured the vibrations from pronouncing the letters in the word “skin” right near the e-skin. The results showed that this affected the microdomes and, in turn, caused the electric current to register changes.
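
Reading any one of these channels comes down to converting a measured current back into a physical quantity. A minimal sketch of the pressure channel, assuming a linear calibration between applied pressure and the generated current (the article does not give the actual microdome response curve, so the function name and every number here are hypothetical):

```python
def pressure_from_current(current_ua, baseline_ua=2.0, ua_per_kpa=5.0):
    """Estimate applied pressure (kPa) from a measured current (microamps),
    assuming a hypothetical linear sensor calibration.

    baseline_ua: current with no pressure applied.
    ua_per_kpa: additional current generated per kPa of pressure.
    """
    # Clamp at zero: currents below baseline read as "no pressure".
    return max(0.0, (current_ua - baseline_ua) / ua_per_kpa)

# 12 uA measured against a 2 uA baseline at 5 uA per kPa:
print(pressure_from_current(12.0))  # 2.0
```

The temperature and sound channels would follow the same pattern, each with its own calibration curve, which is why the article stresses that greater pressure simply means greater current.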

Dr. Ko said his next challenge is how to transmit all of these sensations to the human brain. This has been done elsewhere using optogenetics (the use of light to control neurons that have been genetically modified) in e-skins, but he plans to research other technologies for this. Specifically, given the increasing scientific interest and development in skin-mounted sensors (such as those described in the October 18, 2015 Subway Fold post linked above), engineering these will require a smart group of “ideas and materials”.

My Questions

  • Might e-skins have applications in virtual reality and augmented reality systems for medicine, engineering, manufacturing, design, robotics, architecture, and gaming? (These 11 Subway Fold posts cover a range of new developments and applications of these technologies.)
  • What other fields and marketplaces might also benefit from integrating e-skin technology? What entrepreneurial opportunities might emerge here?
  • Could e-skins work in conjunction with the system being developed in the June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” ?


For an absolutely magnificent literary exploration of the human senses, I recommend A Natural History of the Senses by Diane Ackerman (Vintage, 1991) in the highest possible terms. It is a gem in both its sparkling prose and engaging subject.

See this Wikipedia page for detailed information and online resources about the field known as haptic technology.

Semantic Scholar and BigDIVA: Two New Advanced Search Platforms Launched for Scientists and Historians

“The Chemistry of Inversion”, Image by Raymond Bryson

As powerful, essential and ubiquitous as Google and its search engine peers are across the world right now, needs often arise in many fields and marketplaces for platforms that can perform much deeper and wider digital excavating. So it is that two new highly specialized search platforms have just come online specifically engineered, in these cases, for scientists and historians. Each is structurally and functionally quite different from the other but nonetheless is aimed at very specific professional user bases with advanced researching needs.

These new systems provide uniquely enhanced levels of context, understanding and visualization with their results. We recently looked at a very similar development in the legal professions in an August 18, 2015 Subway Fold post entitled New Startup’s Legal Research App is Driven by Watson’s AI Technology.

Let’s have a look at both of these latest innovations and their implications. To introduce them, I will summarize and annotate two articles about their introductions, and then I will pose some additional questions of my own.

Semantic Scholar Searches for New Knowledge in Scientific Papers

First, the Allen Institute for Artificial Intelligence (A2I) has just launched its new system called Semantic Scholar, freely accessible on the web. This event was covered on NewScientist.com in a fascinating article entitled AI Tool Scours All the Science on the Web to Find New Knowledge on November 2, 2015 by Mark Harris.

Semantic Scholar is supported by artificial intelligence (AI)¹ technology. It is automated to “read, digest and categorise findings” from approximately two million scientific papers published annually. Its main objective is to assist researchers with generating new ideas and “to identify previously overlooked connections and information”. Because of the overwhelming volume of scientific papers published each year, which no individual scientist could possibly ever read, it offers an original architecture and a high-speed means to mine all of this content.

Oren Etzioni, the director of A2I, termed Semantic Scholar a “scientist’s apprentice”, to assist them in evaluating developments in their fields. For example, a medical researcher could query it about drug interactions in a certain patient cohort having diabetes. Users can also pose their inquiries in natural language format.

Semantic Scholar operates by executing the following functions:

  • crawling the web in search of “publicly available scientific papers”
  • scanning them into its database
  • identifying citations and references that, in turn, are assessed to determine those that are the most “influential or controversial”
  • extracting “key phrases” appearing in similar papers, and
  • indexing “the datasets and methods” used
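
Two of the steps above, assessing which citations are most influential and extracting key phrases, can be caricatured in a few lines. This is a toy frequency-based sketch, not A2I's actual algorithms; the function names and the sample data are invented for illustration.

```python
from collections import Counter

def influence_scores(citation_graph):
    """Count incoming citations per paper as a crude influence proxy.

    citation_graph: dict mapping a paper id to the ids it cites.
    """
    scores = Counter()
    for refs in citation_graph.values():
        scores.update(refs)
    return scores

def key_phrases(text, stopwords={"the", "of", "in", "a", "and"}, top=3):
    """Return the most frequent non-stopword terms as naive 'key phrases'."""
    words = [w for w in text.lower().split() if w not in stopwords]
    return [w for w, _ in Counter(words).most_common(top)]

# paper_c is cited twice, so it scores highest on this crude measure.
graph = {"paper_a": ["paper_c"], "paper_b": ["paper_c", "paper_d"]}
print(influence_scores(graph)["paper_c"])  # 2
print(key_phrases("glucose levels and glucose monitoring in diabetes"))
```

Real systems weight citations by context and venue and extract multi-word phrases statistically, but the pipeline shape (crawl, count, rank, index) is the same.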

A2I is not alone in their objectives. Other similar initiatives include:

Semantic Scholar will gradually be applied to other fields such as “biology, physics and the remaining hard sciences”.

BigDIVA Searches and Visualizes 1,500 Years of History

The second innovative search platform is called Big Data Infrastructure Visualization Application (BigDIVA). The details about its development, operation and goals were covered in a most interesting report posted online on NC State News on October 12, 2015 entitled Online Tool Aims to Help Researchers Sift Through 15 Centuries of Data by Matt Shipman.

This is a joint project by the digital humanities scholars at NC State University and Texas A&M University. Its objective is to assist researchers in, among other fields, literature, religion, art and world history. This is done by increasing the speed and accuracy of searching through “hundreds of thousands of archives and articles” covering 450 A.D. to the present. BigDIVA was formally rolled out at NC State on October 16, 2015.

BigDIVA presents users with an entirely new visual interface, enabling them to search and review “historical documents, images of art and artifacts, and any scholarship associated” with them. Search results, organized by categories of digital resources, are displayed in infographic format⁴. The linked NC State News article includes a photo of this dynamic-looking interface.

This system is still undergoing beta testing and further refinement by its development team. Expansion of its resources on additional historical periods is expected to be an ongoing process. Current plans are to make this system available on a subscription basis to libraries and universities.

My Questions

  • Might the IBM Watson, Semantic Scholar, DARPA and BigDIVA development teams benefit from sharing design and technical resources? Would scientists, doctors, scholars and others benefit from multi-disciplinary teams working together on future upgrades and perhaps even new platforms and interface standards?
  • What other professional, academic, scientific, commercial, entertainment and governmental fields would benefit from these highly specialized search platforms?
  • Would Google, Bing, Yahoo and other commercial search engines benefit from participating with the developers in these projects?
  • Would proprietary enterprise search vendors likewise benefit from similar joint ventures with the types of teams described above?
  • What entrepreneurial opportunities might arise for vendors, developers, designers and consultants who could provide fuller insight and support for developing customized search platforms?


1.  These 11 Subway Fold posts cover various AI applications and developments.

2.  These seven Subway Fold posts cover a range of IBM Watson applications and markets.

3.  A new history of DARPA, written by Annie Jacobsen, was recently published entitled The Pentagon’s Brain (Little Brown and Company, 2015).

4.  See this January 30, 2015 Subway Fold post entitled Timely Resources for Studying and Producing Infographics on this topic.

Printable, Temporary Tattoo-like Medical Sensors are Under Development

There is a new high-energy action and suspense drama on NBC this year called Blindspot. The first episode begins when a woman is left in a luggage bag in the middle of Times Square in New York, with tattoos completely covering her and absolutely no memory of who she is or how she got there. She is taken in by the FBI, which starts to analyze her tattoos to see if they can figure out who she was before her memory was intentionally destroyed. It turns out that the tattoos are puzzles that, once solved, start to lead a team of agents assigned to her to a series of dangerous criminal operations.

“Jane” as they call her, is quickly made a part of this FBI team because, without knowing why, she immediately exhibits professional level fighting and weapons skills. She is also highly motivated to find out her real identity and is starting to experience brief memory flashbacks. All sorts of subplots and machinations have begun to sprout up regarding her true identity and how she ended up in this dilemma.

So far, the show is doing well in the ratings. Imho, after four episodes it’s off to a compelling and creative start. I plan to keep watching it. (The only minor thing I don’t like about it is the way the production team is using the shaky cam so much it’s making me feel a bit seasick at times.)

The lead actress, Jaimie Alexander, who plays Jane, is actually wearing just temporary tattoos on the show. While these cryptic designs are the main device propelling the fictional plots forward in each episode, back in the non-fictional real world temporary tattoo-like devices are currently being tested by researchers as medical sensors to gather patients’ biological data. This news adds a whole new meaning to the notion of a medical application.

This advancement was reported in a most interesting article on Smithsonian.com, posted on October 8, 2015 entitled Tiny, Tattoo-Like Wearables Could Monitor Your Health, by Heather Hansman. I will summarize and annotate it in an effort to provide a, well, ink-ling about this story, and then pose some of my own questions.

Research and Development

This project, in a field called bio-integrated electronics, is being conducted at the University of Texas at Austin’s Cockrell School of Engineering. The research team is being led by Professor Nanshu Lu (who received her Ph.D. from Harvard). Her team’s experimental patch is currently being used to test heart rates and blood oxygen levels.

When Dr. Lu and her team were investigating the possibility of creating these “tattoo-like wearables”, their main concern was the manufacturing process, not the sensors themselves, because many suitable sensors were already available. Instead, they focused upon creating these devices to be both disposable and inexpensive. Prior attempts elsewhere had proven to be more “expensive and time-consuming”.

This led them to pursue the use of 3D printing. (These four Subway Fold posts cover other applications of this technology.) They devised a means to print out “patterns on a sheet of metal instead of forming the electronics in a mold”. They easily found the type of metal material for this purpose in a hardware store. Essentially, the patterns were cut into it rather than removed from it. Next, this electronic component was “transfer printed onto medical tape or tattoo adhesive”. Altogether, it is about the size of a credit card. (There is a picture of one at the top of the article on Smithsonian.com linked above.)

The entire printing process takes about 20 minutes and can be done without the use of a dedicated lab. Dr. Lu is working to get the cost of each patch down to around $1.

Current Objectives

The team’s further objective is to “integrate multiple sensors and antenna” into the patches in order to capture vital signs, such as heart rate and blood oxygen levels, and wirelessly transmit them to doctors’ and patients’ computing devices.

One of the remaining obstacles to mass-producing the patches is making them wireless using Bluetooth or near field communication (NFC) technology. At this point, chip producers have not committed to making such chips small enough. Nonetheless, Dr. Lu and her team are working on creating their own chip, which they expect will be about the size of a coin.
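To give a rough feel for what “capture vital signs and wirelessly transmit them” could look like in practice, here is a minimal sketch of packing one patch reading into a compact binary payload small enough for a single BLE-style characteristic write. The article does not specify any wire format, so this 12-byte layout, the field choices and the function names are entirely hypothetical illustrations:

```python
import struct

# Hypothetical wire format for one tattoo-like patch reading (illustrative only):
#   uint32  timestamp (seconds)
#   uint16  heart rate (beats per minute)
#   uint16  blood oxygen saturation (percent x 100, e.g. 9750 = 97.50%)
#   uint32  patch identifier
_FORMAT = "<IHHI"  # little-endian, 12 bytes total

def encode_reading(timestamp, heart_rate_bpm, spo2_percent, patch_id):
    """Pack one vital-sign reading into a compact 12-byte payload."""
    return struct.pack(_FORMAT, timestamp, heart_rate_bpm,
                       round(spo2_percent * 100), patch_id)

def decode_reading(payload):
    """Unpack a payload back into (timestamp, bpm, spo2, patch_id)."""
    ts, bpm, spo2_x100, patch_id = struct.unpack(_FORMAT, payload)
    return ts, bpm, spo2_x100 / 100.0, patch_id
```

Keeping each reading this small matters because the default BLE attribute payload is only about 20 bytes, which is one reason the team’s push for a coin-sized custom chip goes hand in hand with very lean data formats.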

My Questions

  • Could this sensor be adapted to measure blood glucose levels? (See a similar line of research and development covered in the June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch”.)
  • Could this sensor be adapted to improve upon the traditional patch test for allergies?
  • Could this sensor be adapted for usage in non-vital sign data for biofeedback therapies?
  • Would adding some artwork to these patches make them aesthetically more pleasing and thus perhaps more acceptable to patients?
  • Could this sensor be further developed to capture multiple types of medical data?
  • Are these sensors being secured in such a manner to protect the patients’ privacy and from any possible tampering?
  • Could the production team of Blindspot please take it easy already with the shaky cam?

Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies

"Transparency and Colors", Image by coniferconifer

“Transparency and Colors”, Image by coniferconifer

All manner of software and hardware development projects strive diligently to take out every single bug that can be identified¹. However, a team of researchers currently working on a fascinating and potentially valuable project is doing everything possible to, at least figuratively, leave their bugs in.

This involves a team of Australian researchers who are working on modeling the vision of dragonflies. If they are successful, there could be some very helpful implications for applying their work to the advancement of bionic eyes and driverless cars.

When the design and operation of biological systems found in nature are adapted to improve man-made technologies, as they are here, such developments are often referred to as biomimetic².

The very interesting story of this, well, visionary work was reported in an article in the October 6, 2015 edition of The Wall Street Journal entitled Scientists Tap Dragonfly Vision to Build a Better Bionic Eye, by Rachel Pannett. I will summarize and annotate it, and pose some bug-free questions of my own. Let’s have a look and see what all of this organic and electronic buzz is really about.

Bionic Eyes

A research team from the University of Adelaide has recently developed this system modeled upon a dragonfly’s vision. It is built upon a foundation that also uses artificial intelligence (AI)³. Their findings appeared in an article entitled Properties of Neuronal Facilitation that Improve Target Tracking in Natural Pursuit Simulations, published on June 6, 2015 in the Journal of the Royal Society Interface (access credentials required). The authors include Zahra M. Bagheri, Steven D. Wiederman, Benjamin S. Cazzolato, Steven Grainger, and David C. O’Carroll. The funding grant for their project was provided by the Australian Research Council.

While the vision of dragonflies “cannot distinguish details and shapes of objects” as well as humans can, it does possess a “wide field of vision and ability to detect fast movements”. Thus, they can readily track targets even within an insect swarm.

The researchers, including Dr. Steven Wiederman, the leader of the University of Adelaide team, believe their work could be helpful to the development work on bionic eyes. These devices consist of an artificial implant placed in a person’s retina that, in turn, is connected to a video camera. What a visually impaired person “sees” while wearing this system is converted into electrical signals that are communicated to the brain. Adding a software model of the dragonfly’s 360-degree field of vision would give the people using it the capability to more readily detect, among other things, “when someone unexpectedly veers into their path”.

Another member of the research team and one of the co-authors of their research paper, a Ph.D. candidate named Zahra Bagheri, said that dragonflies are able to fly so quickly and be so accurate “despite their visual acuity and a tiny brain around the size of a grain of rice”⁴. In other areas of advanced robotics development, this type of “sight and dexterity” needed to avoid humans and objects has proven quite challenging to express in computer code.
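The “neuronal facilitation” in the paper’s title refers to the way a dragonfly’s target-detecting neurons respond more strongly near where the target was just seen. As a toy sketch of that idea (not the authors’ actual model, whose details are in the paper linked above), the tracker below multiplicatively boosts responses close to its previous estimate, which lets a dim but consistently moving target win out over a brighter, unexpected distractor:

```python
import math

def facilitation_weight(pos, last_est, sigma=2.0, gain=2.0):
    """Multiplicative boost for responses near the previous target estimate."""
    if last_est is None:
        return 1.0
    dx, dy = pos[0] - last_est[0], pos[1] - last_est[1]
    return 1.0 + gain * math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def track(frames):
    """Pick the strongest facilitated response in each frame.

    frames: list of 2D lists of photoreceptor-like responses.
    Returns the sequence of estimated target positions (row, col).
    """
    estimates, last = [], None
    for frame in frames:
        best, best_score = None, -1.0
        for r, row in enumerate(frame):
            for c, val in enumerate(row):
                score = val * facilitation_weight((r, c), last)
                if score > best_score:
                    best, best_score = (r, c), score
        estimates.append(best)
        last = best
    return estimates

# Demo: a dim target crawls down the diagonal of an 8x8 "retina" while a
# brighter distractor flashes in a far corner from the second frame onward.
def make_frames():
    frames = []
    for t in range(6):
        frame = [[0.0] * 8 for _ in range(8)]
        frame[t][t] = 1.0      # true target
        if t > 0:
            frame[7][0] = 1.5  # brighter distractor
        frames.append(frame)
    return frames
```

In the demo, a raw brightest-pixel tracker would jump to the distractor from the second frame on; the facilitated tracker stays locked on the diagonal path, which is the intuition behind tracking one insect inside a swarm.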

One commercial company working on bionic eye systems is Second Sight Medical Products Inc., located in California. They have received US regulatory approval to sell their retinal prosthesis.

Driverless Cars

In the next stage of their work, the research team is currently studying “the motion-detecting neurons in insect optic lobes”, in an effort to build a system that can predict and react to moving objects. They believe this might one day be integrated into driverless cars in order to avoid pedestrians and other cars⁵. Dr. Wiederman foresees the possible commercialization of their work within the next five to ten years.

However, obstacles remain in getting this to market. Any integration into a test robot would require a “processor big enough to simulate a biological brain”. The research team believes that it can be scaled down, since the “insect-based algorithms are much more efficient”.

Ms. Bagheri noted that “detecting and tracking small objects against complex backgrounds” is quite a technical challenge. As an example, she cited a baseball outfielder who has only seconds to spot, track and predict where a hit ball will land in the field, amid a colorful stadium and enthusiastic fans⁶.

My Questions

  • As suggested in the article, might this vision model be applicable in sports to enhancing live broadcasts of games, helping teams review their game-day videos afterwards to improve their overall play, and assisting individual players in analyzing how they react during key plays?
  • Is the vision model applicable in other potential safety systems for mass transportation such as planes, trains, boats and bicycles?
  • Could this vision model be added to enhance the accuracy, resolution and interactivity of virtual reality and augmented reality systems? (These 11 Subway Fold posts appearing in the category of Virtual and Augmented Reality cover a range of interesting developments in this field.)


1. See this Wikipedia page for a summary of the extraordinary career of Admiral Grace Hopper. Among her many technological accomplishments, she was a pioneer in developing modern computer programming. She was also the originator of the term computer “bug”.

2. For an earlier example of this, see the August 18, 2014 Subway Fold post entitled IBM’s New TrueNorth Chip Mimics Brain Functions.

3. The Subway Fold category of Smart Systems contains 10 posts on AI.

4. Speaking of rice-sized technology, see also the April 14, 2015 Subway Fold post entitled Smart Dust: Specialized Computers Fabricated to Be Smaller Than a Single Grain of Rice.

5. While the University of Adelaide research team is not working with Google, that company has been a leader in the development of autonomous cars with its Self-Driving Car Project.

6. New York’s beloved @Mets might also prove to be worthwhile subjects to model because of their stellar play in the 2015 playoffs. Let’s vanquish those dastardly LA Dodgers on Thursday night. GO METS!