Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies

"Transparency and Colors", Image by coniferconifer

“Transparency and Colors”, Image by coniferconifer

All manner of software and hardware development projects diligently strive to take out every bug that can be identified¹. However, a team of researchers working on a fascinating and potentially valuable project is doing everything possible to, at least figuratively, leave their bugs in.

Specifically, a team of Australian researchers is working on modeling the vision of dragonflies. If they are successful, their work could prove very helpful to the advancement of bionic eyes and driverless cars.

When the design and operation of biological systems found in nature are adapted to improve man-made technologies, as they are being here, such developments are often referred to as biomimetic².

The very interesting story of this, well, visionary work was reported in an article in the October 6, 2015 edition of The Wall Street Journal entitled Scientists Tap Dragonfly Vision to Build a Better Bionic Eye by Rachel Pannett. I will summarize and annotate it, and pose some bug-free questions of my own. Let’s have a look and see what all of this organic and electronic buzz is really about.

Bionic Eyes

A research team from the University of Adelaide has recently developed a system modeled upon a dragonfly’s vision, built upon a foundation that also uses artificial intelligence (AI)³. Their findings appeared in an article entitled Properties of Neuronal Facilitation that Improve Target Tracking in Natural Pursuit Simulations, published in the June 6, 2015 edition of the Journal of the Royal Society Interface (access credentials required). The authors are Zahra M. Bagheri, Steven D. Wiederman, Benjamin S. Cazzolato, Steven Grainger, and David C. O’Carroll. The funding grant for their project was provided by the Australian Research Council.

While the vision of dragonflies “cannot distinguish details and shapes of objects” as well as humans can, it does provide a “wide field of vision and ability to detect fast movements”. Thus, dragonflies can readily track targets even within an insect swarm.

The researchers, including Dr. Steven Wiederman, the leader of the University of Adelaide team, believe their work could be helpful to the development work on bionic eyes. These devices consist of an artificial implant placed in a person’s retina that, in turn, is connected to a video camera. What a visually impaired person “sees” while wearing this system is converted into electrical signals that are communicated to the brain. Adding a software model of the dragonfly’s 360-degree field of vision would give people using such a system the capability to more readily detect, among other things, “when someone unexpectedly veers into their path”.
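
To make this a bit more concrete, here is a minimal sketch, entirely my own and not the Adelaide team’s actual model, of how a wide-field motion alert might work in software: compare successive panoramic frames and flag a frame in which enough pixels change quickly, the kind of cue that could warn a user that something has veered into their path. All of the function names and threshold values below are illustrative assumptions.

    # A minimal motion-alert sketch, NOT the Adelaide team's model: flag a
    # wide-field frame when enough pixels change between frames, a crude
    # stand-in for detecting something veering into the user's path.
    import numpy as np

    def motion_alert(prev_frame: np.ndarray, curr_frame: np.ndarray,
                     threshold: float = 30.0, min_pixels: int = 50) -> bool:
        """Return True if enough pixels changed to suggest a fast-moving object."""
        diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
        return int((diff > threshold).sum()) >= min_pixels

    # Usage with two synthetic panoramic frames (rows = elevation,
    # columns = azimuth across a 360-degree view):
    prev = np.zeros((90, 360))
    curr = prev.copy()
    curr[40:50, 300:320] = 255.0     # something bright enters from one side
    print(motion_alert(prev, curr))  # True: 200 pixels changed sharply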

Another member of the research team and a co-author of the research paper, a Ph.D. candidate named Zahra Bagheri, said that dragonflies are able to fly so quickly and so accurately “despite their [low] visual acuity and a tiny brain around the size of a grain of rice”⁴. In other areas of advanced robotics development, this type of “sight and dexterity” needed to avoid humans and objects has proven quite challenging to express in computer code.

One commercial company working on bionic eye systems is Second Sight Medical Products Inc., located in California. They have received US regulatory approval to sell their retinal prosthesis.

Driverless Cars

In the next stage of their work, the research team is studying “the motion-detecting neurons in insect optic lobes” in an effort to build a system that can predict and react to moving objects. They believe this might one day be integrated into driverless cars to help them avoid pedestrians and other cars⁵. Dr. Wiederman foresees the possible commercialization of their work within the next five to ten years.

However, obstacles remain to getting this to market. Any integration into a test robot would require a “processor big enough to simulate a biological brain”. The research team believes that it can be scaled down because the “insect-based algorithms are much more efficient”.

Ms. Bagheri noted that “detecting and tracking small objects against complex backgrounds” is quite a technical challenge. As an example, she cited a baseball outfielder who has only seconds to spot, track and predict where a hit ball will land, all in the midst of a colorful stadium and enthusiastic fans⁶.
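
Since “neuronal facilitation” is the heart of the team’s paper, a rough sketch may help make the concept concrete. The toy below is my own illustration, not the authors’ model: it boosts detector gain in a small neighborhood around the target’s last estimated position, so that responses along a moving target’s path are amplified against the background clutter. Every name and parameter value here is an invented assumption.

    # A toy illustration of the "neuronal facilitation" mechanism described
    # in the paper: detector gain is boosted in a small neighborhood around
    # the target's last estimated position, so responses along a moving
    # target's path are amplified. Names and parameter values are my own
    # illustrative assumptions, not the authors' actual model.
    import numpy as np

    rng = np.random.default_rng(0)
    WIDTH = 100                  # a 1-D field of view, for simplicity
    gain = np.ones(WIDTH)        # the facilitation map, initially uniform

    def track_step(target_pos: int) -> int:
        """One step: random clutter plus a small target, with gain applied."""
        global gain
        stimulus = rng.random(WIDTH) * 0.5     # cluttered background
        stimulus[target_pos] += 0.6            # small target signal
        estimate = int(np.argmax(stimulus * gain))
        gain *= 0.9                            # facilitation decays everywhere...
        lo, hi = max(0, estimate - 3), min(WIDTH, estimate + 4)
        gain[lo:hi] += 0.5                     # ...and builds near the estimate
        return estimate

    # A target drifting rightward across the clutter: the facilitated region
    # trails the target, and the position estimates follow it step by step.
    estimates = [track_step(pos) for pos in range(20, 40)]
    print(estimates[-5:])        # [35, 36, 37, 38, 39], matching the true path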

My Questions

  • As suggested in the article, might this vision model be applied in sports to enhance live broadcasts of games, to help teams review their game-day videos afterwards in order to improve their overall play, and to assist individual players in analyzing how they react during key plays?
  • Is the vision model applicable in other potential safety systems for mass transportation such as planes, trains, boats and bicycles?
  • Could this vision model be added to enhance the accuracy, resolution and interactivity of virtual reality and augmented reality systems? (These 11 Subway Fold posts appearing in the category of Virtual and Augmented Reality cover a range of interesting developments in this field.)

 


1. See this Wikipedia page for a summary of the extraordinary career of Admiral Grace Hopper. Among her many technological accomplishments, she was a pioneer in developing modern computer programming. She was also the originator of the term computer “bug”.

2. For an earlier example of this, see the August 18, 2014 Subway Fold post entitled IBM’s New TrueNorth Chip Mimics Brain Functions.

3. The Subway Fold category of Smart Systems contains 10 posts on AI.

4. Speaking of rice-sized technology, see also the April 14, 2015 Subway Fold post entitled Smart Dust: Specialized Computers Fabricated to Be Smaller Than a Single Grain of Rice.

5. While the University of Adelaide research team is not working with Google, the company has been a leader in the development of autonomous cars with its Self-Driving Car Project.

6. New York’s beloved @Mets might also prove to be worthwhile subjects to model because of their stellar play in the 2015 playoffs. Let’s vanquish those dastardly LA Dodgers on Thursday night. GO METS!

How Robots and Computer Algorithms are Challenging Jobs and the Economy

"p8nderInG exIstence", Image by JD Hancock

“p8nderInG exIstence”, Image by JD Hancock

A Silicon Valley entrepreneur named Martin Ford (@MFordFuture) has written a very timely new book entitled Rise of the Robots: Technology and the Threat of a Jobless Future (Basic Books, 2015), which is currently receiving much attention in the media. The depth and significance of the critical issues it raises are responsible for this wide-beam spotlight.*

On May 27, 2015 the author was interviewed on The Brian Lehrer Show on radio station WNYC in New York. The result is available as a truly captivating 30-minute podcast entitled When Will Robots Take Your Job? I highly recommend listening to it in its entirety. I will sum up, annotate and add some questions of my own to this.

The show’s host, Brian Lehrer, expertly guided Mr. Ford through the key complexities and subtleties of the thesis of his provocative new book. First, both now and increasingly in the future, robots and AI algorithms are taking on ever more difficult tasks and displacing human workers. The more a job involves repetitive and routine tasks, the more likely it is that machines will replace the human workers performing it. This will not occur in just one sector but, rather, “across the board” in all areas of the marketplace. For example, IBM’s Watson technology can be accessed using natural language which, in the future, might result in humans no longer being able to recognize its responses as coming from a machine.

Mr. Ford believes we are moving towards an economic model where productivity is increasing but jobs and income are decreasing. He asserts that solving this dilemma will be critical. Consequently, his second key point was the challenge of detaching work from income. He is proposing the establishment of some form of system where income is guaranteed. He believes this would still support Capitalism and would “produce plenty of income that could be taxed”. No nation is yet moving in this direction, but he thinks that Europe might be more amenable to it in the future.

He further believes that the US will be the most vulnerable to displacement of workers because it leads the world in the use of technology but “has no safety net” for those who will be displaced by this phenomenon. (For a more positive perspective on this, see the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Brian Lehrer asked his listeners to go to a specific page on the site of the regular podcast called Planet Money on National Public Radio. (“NPR” is the network of publicly supported radio stations that includes WNYC.) This page, entitled Will Your Job be Done by a Machine?, displays a searchable database of job titles and the corresponding chance that each will be replaced by automation. Some examples that were discussed included the following (a minimal code sketch of this kind of lookup appears after the list):

  • Real estate agents with an 86.4% chance
  • Financial services workers with a 23% chance
  • Software developers with a 12.8% chance
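
For illustration only, the snippet below mimics the kind of job-title lookup that page provides, seeded with just the three figures quoted above; the dictionary and function names are mine, not NPR’s.

    # A minimal sketch of the Planet Money page's job-title lookup, seeded
    # only with the three figures quoted in the list above.
    automation_risk = {
        "real estate agents": 0.864,
        "financial services workers": 0.23,
        "software developers": 0.128,
    }

    def chance_of_automation(job_title: str) -> str:
        """Look up a job title and report its estimated automation risk."""
        risk = automation_risk.get(job_title.lower())
        if risk is None:
            return f"No estimate available for '{job_title}'."
        return f"{job_title}: {risk:.1%} chance of being done by a machine."

    print(chance_of_automation("Software Developers"))
    # Software Developers: 12.8% chance of being done by a machine.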

Then the following six listeners called in to speak with Mr. Ford:

  • Caller 1 asked about finding a better way to get income to the population beyond the job market. This was squarely on point with Mr. Ford’s second key point about decoupling income and jobs. He was not advocating somehow trying to stop technological progress. However, he reiterated how machines are “becoming autonomous workers, no longer just tools”.
  • Caller 2 asked whether Mr. Ford had seen a YouTube video entitled Humans Need Not Apply. Mr. Ford had seen it and recommended it. The caller said that the most common reply he has heard to this video (which tracks very closely with many of Mr. Ford’s themes) was, wrongly in his opinion, that “people will do something else”. Mr. Ford replied that people must find other things that they can get paid to do. The caller also said that machines had made it much easier and more economical for him to compose and record his own music.
  • Caller 3 raised the topic of automation in the medical profession, asking specifically whether IBM’s Watson could one day soon replace doctors. Mr. Ford believes that Watson will have an increasing effect here, particularly in fields such as radiology. However, it will have a lesser impact in those specialties where doctors and patients need to interact more with each other. (See also these three recent Subway Fold posts on the applications of Watson to TED Talks, business apps and the legal profession.)
  • Caller 4 posited that only humans can conceive ideas and be original. He asked how computers can identify patterns for which they have not been programmed, citing the example of the accidental discovery of penicillin. Mr. Ford replied that machines will not replace scientists but they can replace service workers; therefore, he is “more worried about the average person”. Brian Lehrer then asked him about driverless cars and, perhaps, even driverless Uber cabs one day. Mr. Ford answered that expectations are high that this will eventually happen, and he is concerned that taxi drivers will lose their jobs. (See this September 15, 2014 Subway Fold post on Uber and the “sharing economy”.)  Which led to …
  • Caller 5, who is currently a taxi driver in New York. They discussed how many types of commercial drivers, in particular, are facing this possibility. Brian Lehrer followed up by asking whether this may somehow lead to the end of Capitalism. Mr. Ford replied that Capitalism “can continue to work” but that it must somehow “adapt to new laws and circumstances”.
  • Caller 6 inquired about one of the proposals raised in VR pioneer Jaron Lanier’s book entitled Who Owns the Future (Simon & Schuster, 2013), whereby people could perhaps be paid for the information they provide online, as a possible means to financially assist people in the future. Mr. Ford’s response was that while it was “an interesting idea”, it would be “difficult to implement”. As well, he believes that Google would resist it. He made a further distinction between his concept of guaranteed income and Lanier’s proposal insofar as he believes that “Capitalism can adapt” more readily to his own concept. (I also highly recommend Lanier’s book for its originality and deep insights.)

Brian Lehrer concluded by raising the prospect of self-aware machines. He noted that Bill Gates and Stephen Hawking had recently warned about this possibility. Mr. Ford responded that “we are too far from this now”. For him, today’s concern is on automation’s threat to jobs, many of which are becoming easier to reduce to a program.

To say the very least, to my own organic and non-programmatic way of thinking, this was an absolutely mind-boggling discussion. I greatly look forward to watching this topic continue to gather momentum and expanded media coverage.

My own questions include:

  • How should people at the beginning, middle and end of their careers be advised and educated to adapt to these rapid changes so that they can not only survive but thrive within them?
  • What role should employers, employees, educators and the government take, in any and all fields, to keep the workforce up-to-date in the competencies they will need to continue to be valuable contributors?
  • Are the challenges of automation most efficiently met at the global, national and/or local levels, with all interested constituencies working together? What forms should their cooperation take?

*  For two additional book reviews I recommend reading ‘Rise of the Robots’ and ‘Shadow Work’ by Barbara Ehrenreich in the May 11, 2015 edition of The New York Times, and Soon They’ll Be Driving It, Too by Sumit Paul-Choudhury in the May 15, 2015 edition of The Wall Street Journal (subscription required).

New Chips are Using Deep Learning to Enhance Mobile, Camera and Auto Image Processing Capabilities

"Smartphone Photography", Image by AvenueTheory

“Smartphone Photography”, Image by AvenueTheory

We interface with our devices’ screens for inputs and outputs nearly all day, every day. What many of these gadgets will soon be able to display and, moreover, understand about digital imagery is about to take a significant leap forward. This will be due to the pending arrival of new chips, embedded in their circuitry, that are enabled by artificial intelligence (AI) algorithms. Let’s have a look.

This story was reported in a most interesting article on TechnologyReview.com entitled Silicon Chips That See Are Going to Make Your Smartphone Brilliant by Tom Simonite on May 14, 2015. I will sum up, annotate and pose some questions about it.

The key technology behind these new chips is an AI methodology called deep learning. In these 10 recent Subway Fold posts, deep learning has been covered in a range of applications across various online and real-world marketplaces including, among others, entertainment, news, social media, law, medicine, finance and education. The emergence of these smarter new chips will likely bring further significant enhancements to all of these fields and many others by improving their systems’ ability to comprehend the content of images.

Two major computer chip companies, Synopsys and Qualcomm, and the Chinese search firm Baidu are developing systems based upon deep learning for mobile devices, autos and other screen-based hardware. These systems were discussed by the companies’ representatives at the Embedded Vision Summit held on Tuesday, May 12, 2015, in Santa Clara, California. The representatives included:

  • Pierre Paulin, the director of research and development at Synopsys, who presented a demo of a new chip core that “recognized speed limit signs” on the road for vehicles and enabled facial recognition for security apps. This chip uses less power than current chips on the market and, moreover, could add some “visual intelligence” to phone and car apps and security cameras. (Here is the link to the abstracts of the presentations from the Summit’s website, listed by speaker, including Mr. Paulin’s, entitled Low-power Embedded Vision: A Face Tracker Case Study.)
  • Ren Wu, Distinguished Scientist, Baidu Institute of Deep Learning, said that deep learning-based chips are important for computers used for research, and called for making such intelligence as ubiquitous as possible. (Here is the link to the abstracts of the presentations, listed by speaker including Mr. Wu’s, entitled Enabling Ubiquitous Visual Intelligence Through Deep Learning from the Summit’s website.)

Both Mr. Wu and Qualcomm’s speaker, Mr. Gehlhaar, said that adding more intelligence to a mobile device’s ability to recognize photos could address the privacy implications of some apps by lessening the quantity of personal data they upload to the web.
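
That privacy point lends itself to a simple sketch. The pattern below is my own schematic reading of it, with a trivial stand-in for the classifier rather than any real vision chip API: recognition happens on the device, and only the derived label, never the photo itself, is handed onward.

    # A schematic sketch of on-device recognition for privacy: only the
    # label leaves the device, not the raw pixels. The "classifier" is a
    # trivial stand-in, not a real deep learning chip or vendor API.
    import numpy as np

    def on_device_classifier(image: np.ndarray) -> str:
        """Hypothetical stand-in that labels an image by mean brightness."""
        return "daytime scene" if image.mean() > 128 else "nighttime scene"

    def share_with_app(image: np.ndarray) -> dict:
        """The app receives a derived label only; the pixels stay local."""
        return {"label": on_device_classifier(image)}

    photo = np.full((480, 640), 200, dtype=np.uint8)  # a bright test image
    print(share_with_app(photo))  # {'label': 'daytime scene'}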

My questions are as follows:

  • Should social networks employ these chips and, if so, how? For example, what if such visually intelligent capabilities were added to the recently rolled out live video apps Periscope and Meerkat on Twitter?
  • Will these chips be adapted to the forthcoming commercial augmented and virtual reality systems (as discussed in these five recent Subway Fold posts)? If so, what new capabilities might they add to these environments?
  • What additional privacy and security concerns will need to be addressed by manufacturers, consumers and regulators as these chips are introduced into their respective marketplaces?

New Analytics Process Uses Patent Data to Predict Advancements in Specific Technologies

"Crystal Ball", Image by Christian Schnettelker

“Crystal Ball”, Image by Christian Schnettelker

John Oliver did a brilliant and hilarious takedown of patent trolls on the April 19, 2015 edition of his show Last Week Tonight. He railed against the absurdity of such companies, which buy up patents yet produce little themselves other than lawsuits to enforce these patents. As he said, this is a form of “extortion” that impedes progress and ends up costing the defendants in these actions a great deal of money. If you did not see the show or have not seen the video yet, please have a look and a laugh.

Then compare and contrast that economic fear and needless cost, imposed by using patent data in such a negative manner, with a paper published last week about how US patent filings are now being used in an entirely opposite, innovative and productive manner. The contrast could not be more dramatic. As presented in the new paper, published online on April 15, 2015 in PLoS One and entitled Quantitative Determination of Technological Improvement from Patent Data, MIT researchers Christopher L. Benson and Christopher L. Magee have shown that mining recent filings in the US Patent and Trademark Office’s (USPTO) massive database with their new methodology can determine which technologies are genuinely advancing, and at what relative rate.

This very exciting news was reported and analyzed in an article posted on Phys.org on April 15, 2015 entitled New Method Uses Patent Data to Estimate a Technology’s Future Rate of Improvement. I will sum up, annotate and add a few questions to this. I highly recommend clicking through on both this article for the details of how this prediction tool was developed and the full-text of the PLoS One paper for the granular details of how it actually works.

(For cross-reference purposes, this advancement follows on and partially mixes together the topics of two previous Subway Fold posts, one on April 9, 2014 entitled Comprehensive Visualization of Future Paths of Technological Innovations and another on August 8, 2014 entitled New Visualization Service for US Patent and Trademark Data.)

Benson and Magee have devised an analytical means to sift through the USPTO database and precisely choose the latest patents that “best represent” 28 specific technological domains. These include, among others, “solar photovoltaics, 3-D printing, fuel-cell technology, and genome sequencing”. Applying their methodology, which is based upon the number of subsequent citations a patent receives in other new filings, they determined that some of the relevant patents displayed an increased likelihood of predicting “a technology’s improvement rate”. In effect, the higher the rate of subsequent citation of Patent X, the higher the rate of innovation. The equations in their predictive tool also include some other patent characteristics.
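
The core relationship can be reduced to a toy calculation. The sketch below is only a caricature of Benson and Magee’s method, which uses several additional patent characteristics; it simply ranks domains by their average forward citations per patent, using invented citation counts.

    # A toy caricature of the citation idea above: domains whose patents
    # are cited more often by later filings rank as faster-improving. The
    # real method uses additional patent characteristics; the citation
    # counts below are invented for illustration.
    def improvement_score(forward_citations: list[float]) -> float:
        """Average forward citations per patent as a crude improvement proxy."""
        return sum(forward_citations) / len(forward_citations)

    sample_domains = {            # citation counts are illustrative only
        "3-D printing": [14, 22, 9, 17],
        "wind turbines": [3, 5, 2, 4],
    }
    ranked = sorted(sample_domains,
                    key=lambda d: improvement_score(sample_domains[d]),
                    reverse=True)
    print(ranked)                 # ['3-D printing', 'wind turbines']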

Among the 28 technologies, those showing the highest rates of advancement were “optical¹ and wireless communications, 3-D printing, and MRI technology²”, while others with slower rates of advance included “batteries, wind turbines, and combustion engines”.

Benson believes that their prediction method could be useful to venture capitalists and startups³. Magee hopes that it may be applied as a form of “rating system” for investors searching out potential “breakthroughs”. Both developers also foresee the possibility that public and private laboratories could use it to investigate potential new areas for research. Furthermore, Magee believes that their approach can lower the level of uncertainty about the future of a particular technology to “a more manageable number”.

My questions are as follows:

  • Would the accuracy of the predictions from this new system be enhanced by applying its underlying equations to add in other data sources such as online news, social media mentions, and citations to other relevant industry news publications? (X-ref to the December 2, 2014 Subway Fold post entitled Startup is Visualizing and Interpreting Massive Quantities of Daily Online News Content.)
  • Could the underlying equations be applied to other fields such as law to predict the possible outcomes of cases based upon the densities and propensities of cases cited in similar matters and jurisdictions? What about possible applications in medical research or the financial markets?
  • Can levels of probability be quantified with this new system? For example, can it derive a 70% probability that driverless cars will continue to gather technological momentum and then commercially succeed in the marketplace? If so, how might such probabilities be used by the public, governments, researchers and investors?

 


1. Could references to patents for optical technologies also be considered, well, cites for sore eyes?

2. X-ref a September 3, 2014 Subway Fold post entitled Applying MRI Technology to Determine the Effects of Movies and Music on Our Brains.

3. There are currently 22 Subway Fold posts on a broad range of startups in many industries.