I Can See for Miles: Using Augmented Reality to Analyze Business Data Sets

matrix-1013612__340, Image from Pixabay

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, it still might – – with a bit of a stretch – – find a fitting new context for one of the latest dazzling new technologies in the opening stanza’s declaration that “there’s magic in my eye”. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article entitled Visualizing High Dimensional Data In Augmented Reality, by an augmented reality (AR) designer named Benjamin Resnick, posted on Medium.com on July 3, 2017, about his team’s work at IBM on a project called Immersive Insights. (Also embedded is a very cool video of a demo of this system.) They are applying AR’s rapidly advancing technology1 to display, interpret and leverage insights gained from business data. I highly recommend reading this in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system working several years from now, a user will start his or her workday by donning a pair of AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays their business’s big data sets2. The user will be able to “reach out and select that data” which, in turn, will generate additional details on a nearby monitor. Thus, the user can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short-term, the team is aiming Immersive Insights towards data scientists who are facile coders, enabling them to use AR’s capabilities to visualize time series, geographical and networked data. For the long-term, they are planning to expand the range of Immersive Insights’ applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Every consumer represents a data-point, expressed as a “list of purchased products” drawn from among 50,000 possible items.
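
To make that representation concrete, here is a minimal sketch of how such purchase lists are commonly encoded as a sparse customer-by-product matrix, assuming NumPy and SciPy are available (the customers and product IDs below are invented for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Invented purchase lists: customer ID -> product IDs, drawn from
# a catalog of roughly 50,000 possible items.
purchases = {
    0: [11, 4027, 49990],
    1: [312, 4027],
    2: [11, 88, 312, 7765],
}

n_products = 50_000
rows, cols = zip(*[(c, p) for c, items in purchases.items() for p in items])

# One row per customer, one column per product; a 1 marks a purchase.
X = csr_matrix((np.ones(len(rows)), (rows, cols)),
               shape=(len(purchases), n_products))
print(X.shape)  # (3, 50000)
```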

How can this sizable pool of data be better understood, and the deeper relationships within it extracted? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections among the information’s attributes. However, for those sets with many attributes, this methodology does not scale well.
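
A quick sketch shows why: the number of pairwise panels grows quadratically with the number of attributes. Assuming pandas and matplotlib, the traditional approach looks something like this (with invented columns):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

# Toy data with just 4 attributes -> a 4 x 4 grid of 16 panels;
# 50 attributes would already require 2,500 panels.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)),
                  columns=["price", "quantity", "frequency", "basket_size"])

scatter_matrix(df, figsize=(8, 8), diagonal="hist")
plt.show()
```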

Consequently, Resnick’s team has been using their own new approach (sketched in code after this list) to:

  • Reduce complex data to just three dimensions in order to sum up key relationships
  • Visualize the data by applying their Immersive Insights application, and
  • Iteratively label and color-code the data in conjunction with an “evolving understanding” of its inner workings
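
The article does not publish the team’s code, but the first two steps correspond to a standard dimensionality-reduction pipeline. Here is a minimal sketch, assuming scikit-learn and matplotlib, that reduces high-dimensional data to three principal components and renders a color-coded 3D scatter plot (on a flat screen; Immersive Insights renders the equivalent in AR):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Stand-in for a real customers-by-attributes matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))
labels = rng.integers(0, 3, size=500)  # hypothetical color-coding labels

# Step 1: reduce to the three dimensions that capture the most variance.
X3 = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Step 2: visualize, color-coded by the current labeling of the data.
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(X3[:, 0], X3[:, 1], X3[:, 2], c=labels, s=8)
plt.show()
```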

Their results have enabled them to “validate hypotheses more quickly” and develop a sense of the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.

The types of data sets being used here are likewise deployed in training machine learning systems3. As a result, the potential exists for these three technologies – – big data, AR and machine learning – – to become complementary and mutually supportive in identifying and understanding relationships within the data, as well as in deriving any “black box predictive models”4.

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided on the creation of the team’s demo in the video (linked above), and turning next to the results of the visualizations, their findings included:

  • A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” in the company’s “marketing, promotion and recommendation strategies”.
  • Among all food categories, produce was clearly the leader. Nearly all customers buy it.
  • When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance in the system’s three main components (described above). (A simple way such separability is typically tested is sketched after this list.)
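
For readers wondering what “not linearly separable” means in practice: a common test is to fit a linear classifier on the low-dimensional projection and see whether it does much better than chance. A minimal sketch, assuming scikit-learn and stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in 3D projection of customers, each tagged with a hypothetical
# "most common department" label.
rng = np.random.default_rng(2)
X3 = rng.normal(size=(500, 3))
departments = rng.integers(0, 5, size=500)

# If a linear model barely beats chance (1/5 = 0.20 here), the department
# categories are not linearly separable in this space.
scores = cross_val_score(LogisticRegression(max_iter=1000), X3, departments, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```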

Resnick concludes that the three cornerstone technologies of Immersive Insights – – big data, augmented reality and machine learning – – are individually and in complementary combinations “disruptive” and, as such, will affect the “future of business and society”.

Questions

  • Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?
  • Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?
  • Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?
  • Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

 


1.  See the Subway Fold category Virtual and Augmented Reality for other posts on emerging AR and VR applications.

2.  See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3.  See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4.  For a highly informative and insightful examination of this phenomenon, where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI, by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

Virtual Reality Universe-ity: The Immersive “Augmentarium” Lab at the U. of Maryland

"A Touch of Science", Image by Mars P.

“A Touch of Science”, Image by Mars P.

Go to classes. Sit through a series of 50-minute lectures. Drink coffee. Pay attention and take notes. Drink more coffee. Go to the library to study, do research and complete assignments. Rinse and repeat for the rest of the semester. Then take your final exams and hope that you passed everything. More or less, things have traditionally been this way in college since Hector was a pup.

Might students instead be interested in participating in the new and experimental learning laboratory called the Augmentarium at the University of Maryland, where immersing themselves in their studies takes on an entirely new meaning? This is a place where virtual reality (VR) is being tested and integrated into the learning process. (These 14 Subway Fold posts cover a range of VR and augmented reality [AR] developments and applications.)

Where do I sign up for this?¹

The story was covered in a fascinating report that appeared on December 8, 2015 on the website of the Chronicle of Higher Education entitled Virtual-Reality Lab Explores New Kinds of Immersive Learning, by Ellen Wexler. I highly recommend reading this in its entirety as well as clicking on the Augmentarium link to learn about some of these remarkable projects. I also suggest checking out the hashtag #Augmentarium on Twitter for the very latest news and developments. I will summarize and annotate this story, and pose some of my own questions right after I take off my own imaginary VR headset.

Developing VR Apps in the Augmentarium

In 2014, Brendan Iribe, the co-founder of the VR headset company Oculus², as well as a University of Maryland alumnus, donated $31 million to the University for its development of VR technology³. During the same year, with additional funding obtained from the National Science Foundation, the Augmentarium was built. Currently, researchers at the facility are working on applications of VR to “health care, public safety, and education”.

Professor Ramani Duraiswami, a PhD and co-founder of a startup called VisiSonics (developers of 3D audio and VR gaming systems), is involved with the Augmentarium. His work is in the area of audio, which he believes has a great effect upon how people perceive the world around them. He further thinks that an audio or video lecture presented via distance learning can be greatly enhanced by using VR to, in his words, make “the experience feel more immersive”. He believes this would make you feel as though you were in the very presence of the instructor4.

During a recent showcase there, Professor Duraiswami demo-ed 3D sound5 and a short VR science fiction production called Fixing Incus. (This link is meant to be played on a smartphone that is then embedded within a VR viewer/headset.) This implementation showed the audience what it was like to be immersed in a virtual environment where, when they moved their heads and line of sight, what they were viewing correspondingly and seamlessly changed.

Enhancing Virtual Immersions for Medicine and Education

Amitabh Varshney, the Director of the University’s Institute for Advanced Computer Studies, is now researching “how the brain processes information in immersive environments” and how it differs from how this is done on a computer screen.6 He believes that VR applications in the classroom will enable students to immerse themselves in their subjects, such as being able to “walk through buildings they design” and “explore” them beyond “just the equations” involved in creating these structures.

At the lab’s recent showcase, he provided the visitors with (non-VR) 3D glasses and presented “an immersive video of a surgical procedure”. He drew the audience’s attention to the doctors who were “crowding around” the operating table. He believes that the use of 3D headsets would provide medical students a better means to “move around” and get an improved sense of what the experience is actually like in the operating room. (The September 22, 2015 Subway Fold post entitled VR in the OR: New Virtual Reality System for Planning, Practicing and Assisting in Surgery is also on point and provides extended coverage on this topic.)

While today’s early iterations of VR headsets (either available now or arriving early in 2016) are “cumbersome”, researchers hope that they will evolve (in a manner similar to mobile phones which, in turn and as mentioned above, are presently a key element in VR viewers) and be applied in “hospitals, grocery stores and classrooms”. Director Varshney can see them possibly developing along an even faster timeline.

My Questions

  • Is the establishment and operation of the Augmentarium a model that other universities should consider as a means to train students in this field, attract donations, and incubate potential VR and AR startups?
  • What entrepreneurial opportunities might exist for consulting, engineering and tech firms to set up comparable development labs at other schools and in private industry?
  • What other types of academic courses would benefit from VR and AR support? Could students now use these technologies to create or support their academic projects? What sort of grading standards might be applied to them?
  • Do the rapidly expanding markets for VR and AR require that some group in academia and/or the government establish technical and perhaps even ethical standards for such labs and their projects?
  • How are relevant potential intellectual property and technology transfer issues going to be negotiated, arbitrated and litigated if needed?

 


1.  Btw, has anyone ever figured out how the very elusive and mysterious “To Be Announced (TBA)”, the professor who appears in nearly all course catalogs, ends up teaching so many subjects at so many schools at the same time? He or she must have an incredibly busy schedule.

2.  These nine Subway Fold posts cover, among other VR and AR related stories, the technology of Oculus.

3.  This donation was reported on September 11, 2014 in The Washington Post in an article entitled Brendan Iribe, Co-founder of Oculus VR, Makes Record $31 Million Donation to U-Md by Nick Anderson.

4.  See also the February 18, 2015 Subway Fold post entitled A Real Class Act: Massive Open Online Courses (MOOCs) are Changing the Learning Process.

5.  See also Designing Sound for Virtual Reality by Todd Baker posted on Medium.com on December 21, 2015, for a thorough overview of this aspect of VR, and the August 5, 2015 Subway Fold post entitled  Latest Census on Virtual Senses: A Seminar on Augmented Reality in New York covering, among other AR technologies, the development work and 3D sound wireless headphones of Hooke Audio.

6.  On a somewhat related topic, see the December 18, 2015 Subway Fold post entitled Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information.

Summary of the Media and Tech Preview 2016 Discussion Panel Held at Frankfurt Kurnit in NYC on December 2, 2015

"dtv svttest", Image by Karl Baron

“dtv svttest”, Image by Karl Baron

GPS everywhere notwithstanding, there are still maps on the walls in most buildings that have a red circle somewhere on them accompanied by the words “You are here”. This is to reassure and reorient visitors by giving them some navigational bearings. Thus you can locate where you are at the moment and then find your way forward.

I had the pleasure of attending an expert panel discussion last week, all of whose participants did an outstanding job of analogously mapping where media and technology stand at the end of 2015 and where their trends are heading going into the New Year. It was entitled Digital Breakfast: Media and Tech Preview 2016, and was held at the law firm of Frankfurt Kurnit Klein & Selz in midtown Manhattan. It was organized and presented by Gotham Media, a New York based firm engaged in “Digital Strategy, Marketing and Events” as per their website.

This hour-and-a-half presentation was a top-flight and highly enlightening event from start to finish. My gratitude and admiration go to everyone involved in making this happen. Bravo! to all of you.

The panelists’ enthusiasm and perspectives fully engaged and transported the entire audience. I believe that everyone there appreciated and learned much from all of them. The participants included:

The following is a summary based on my notes.

Part 1:  Assessments of Key Media Trends and Events in 2015

The event began on an unintentionally entertaining note when one of the speakers, Jesse Redniss, accidentally slipped out of his chair. Someone in the audience called out “Do you need a lawyer?”, and considering the location of the conference, the room erupted into laughter.¹

Once the ensuing hilarity subsided, Mr. Goldblatt began by asking the panel for their media highlights for 2015.

  • Ms. Bond said it was the rise of streaming TV, citing Netflix and Amazon, among other industry leaders. For her, this is a time of interesting competition as consumers have increasing control over what they view. She also believes that this is a “fascinating time” for projects and investments in this market sector. Nonetheless, she does not think that cable will disappear.
  • Mr. Kurnit said that Verizon’s purchase of AOL was one of the critical events of 2015, as Verizon “wants to be 360” and this type of move might portend the future of TV. The second key development was the emergence of self-driving cars, which he expects to see implemented within the next 5 to 15 years.
  • Mr. Redniss concurred on Verizon’s acquisition of AOL. He sees other activity such as the combination of Comcast and Universal as indicative of an ongoing “massive media play” versus Google and Facebook. He also mentioned the significance of Nielsen’s Total Audience Measure service.²
  • Mr. Sreenivasan stated that social media is challenging, as indicated by the recent appearance of “Facebook fatigue” affecting its massive user base. Nonetheless, he said “the empire strikes back” as evidenced in their strong financial performance and the recent launch of Chan Zuckerberg LLC to eventually distribute the couple’s $45B fortune to charity. He also sees the current market looking “like 2006 again” insofar as podcasts, email and blogs are making it easy to create and distribute content.

Part 2: Today’s Golden Age of TV

Mr. Goldblatt asked the panel for their POVs on what he termed the current “Golden Age of TV” because of the increasing diversity of new platforms, expanding number of content providers and the abundance of original programming. He started off by asking them for their market assessments.

  • Ms. Bond said that the definition of “television” is now “any video content on any screen”. As a ubiquitous example she cited content on mobile platforms. She also noted the proliferation of payment methods as driving this market.
  • Mr. Kurnit said that the industry would remain a bit of a “mess” for the next three or four years because of the tremendous volume of original programming, businesses that operate as content aggregators, and pricing differentials. Sometime thereafter, these markets will “rationalize”. Nonetheless, the quality of today’s content is “terrific”, pointing to examples such as the programs on AMC and HBO‘s Game of Thrones. He also said that an “unbundled model” of content offerings would enable consumers to watch anywhere.
  • Mr. Redniss believes that “mobile transforms TV” insofar as smartphones have become the “new remote control”, providing both access to content and “discoverability” of new offerings. He predicted that content would become “monetized across all screens”.
  • Mr. Sreenivasan mentioned the growing popularity of binge-watching as being an important phenomenon. He believes that the “zeitgeist changes daily” and that other changes are being “led by the audience”.

The panel moved to group discussion mode concerning:

  • Consumer Content Options: Ms. Bond asked how the audience will pay for either bundled or unbundled programming options. She believes that having this choice will provide consumers with “more control and options”. Mr. Redniss then asked how many apps or services consumers will be willing to pay for. He predicted that “everyone will have their own channel”. Mr. Kurnit added that he thought there are currently too many options and that “skinny bundles” of programming will be aggregated. Mr. Sreenivasan pointed towards the “Amazon model”, where much content is available there but also elsewhere, and then to Netflix’s offering of 30 original shows. He also wanted to know “Who will watch all of this good TV?”
  • New Content Creation and Aggregation: Mr. Goldblatt asked the panelists whether a media company can be both a content aggregator and a content creator. Mr. Kurnit said yes, and Mr. Redniss immediately followed by citing the long-tail effect (the statistical pattern in business analytics where a large share of the data points lies in the many small values away from the top or central parts of the distribution)³. Therefore, online content providers are not bound by the same rules as the TV networks. Still, he could foresee some of Amazon’s and Netflix’s original content ending up being broadcast on them. He also gave the example of Netflix’s House of Cards original programming as being indicative of the “changing market for more specific audiences”. Ultimately, he believes that meeting such audiences’ needs is part of “playing the long game” in this marketplace.
  • Binge-Watching: Mr. Kurnit followed up by predicting that binge-watching and the “binge-watching bucket” will go away. Mr. Redniss agreed with him and, moreover, talked about the “need for human interaction” to build up audiences. This now takes the form of “superfans” discussing each episode in online venues. For example, he pointed to the current massive marketing campaign built upon finding out the fate of Jon Snow on Game of Thrones.
  • Cord-Cutting: Mr. Sreenivasan believes that we will still have cable in the future. Ms. Bond said that service offerings like Apple TV will become more prevalent. Mr. Kurnit said he currently has 21 cable boxes. Mr. Redniss identified himself as more of a cord-shaver who, through the addition of Netflix and Hulu, has reduced his monthly cable bill.

Part 3: Virtual Reality (VR) and Augmented Reality (AR)

Moving on to two of the hottest media topics of the day, virtual reality and augmented reality, the panelists gave their views.

  • Mr. Sreenivasan expressed his optimism about the prospects of VR and AR, citing the pending market launches of the Oculus Rift headset and Facebook 360 immersive videos. The emergence of these technologies is creating a “new set of contexts”. He also spoke proudly of the Metropolitan Museum Media Lab using Oculus for an implementation called Diving Into Pollock (see the 10th project down on this page), which enables users to “walk into a Jackson Pollock painting”.
  • Mr. Kurnit raised the possibility of using Oculus to view Jurassic Park. In terms of movie production and immersion, he said “This changes everything”.
  • Mr. Redniss said that professional sports were a whole new growth area for VR and AR, where you will need “goggles, not a screen”. Mr. Kurnit followed up mentioning a startup that is placing 33 cameras at Major League Baseball stadiums in order to provide 360 degree video coverage of games. (Although he did not mention the company by name, my own Googling indicates that he was probably referring to the “FreeD” system developed by Replay Technologies.)
  • Ms. Bond posed the question “What does this do for storytelling?”4

(See also these 12 Subway Fold posts for extensive coverage of VR and AR technologies and applications.)

Part 4: Ad-Blocking Software

Mr. Goldblatt next asked the panel for their thoughts about the impacts and economics of ad-blocking software.

  • Mr. Redniss said that ad-blocking apps will affect how advertisers get their online audience’s attention. He thinks a workable alternative is to use technology to “stitch their ads into content” more effectively.
  • Mr. Sreenivasan believes that “ads must get better” in order to engage their audience rather than have viewers looking for means to avoid them. He noted another approach used with the show Fargo, where the network’s programming does not permit viewers to fast-forward past the ads.
  • Mr. Kurnit expects that ads will be blocked based on the popularity and extensibility of ad-blocking apps. Thus, he also believes that ads need to improve but he is not confident of the ad industry’s ability to do so. Furthermore, when advertisers are more highly motivated because of cost and audience size, they produce far more creative work for events like the NFL Super Bowl.

Someone from the audience asked the panel how ads will become integrated into VR and AR environments. Mr. Redniss said this will happen in cases where this technology can reproduce “real world experiences” for consumers. An example of this is the Cruise Ship Virtual Tours available on Carnival Cruise’s website.

(See also this August 13, 2015 Subway Fold post entitled New Report Finds Ad Blockers are Quickly Spreading and Costing $Billions in Lost Revenue.)

Part 5: Expectations for Media and Technology in 2016

  • Mr. Sreenivasan thinks that geolocation technology will continue to find new applications in “real-life experiences”. He gave as an example the use of web beacons by the Metropolitan Museum.
  • Ms. Bond foresees more “one-to-one” and “one-to-few” messaging capabilities, branded emojis, and a further examination of the “role of the marketer” in today’s media.
  • Mr. Kurnit believes that drones will continue their momentum into the mainstream. He sees the sky filling up with them as they are “productive tools” for a variety of commercial applications.
  • Mr. Redniss expressed another long-term prospect of “advertisers picking up broadband costs for consumers”. This might take the form of ads being streamed to smart phones during NFL games. In the shorter term, he can foresee Facebook becoming a significant simulcaster of professional sporting events.

 


1.  This immediately reminded me of a similar incident years ago when I was attending a presentation at the local bar association on the topic of litigating cases involving brain injuries. The first speaker was a neurologist who opened by telling the audience all about his brand new laptop and how it was the latest state-of-the-art model. Unfortunately, he could not get it to boot up no matter what he tried. Someone from the back of the audience then yelled out “Hey doc, it’s not brain surgery”. The place went into an uproar.

2.  See also these other four Subway Fold posts mentioning other services by Nielsen.

3.  For a fascinating and highly original book on this phenomenon, I very highly recommend reading The Long Tail: Why the Future of Business Is Selling Less of More (Hyperion, 2005), by Chris Anderson. It was also mentioned in the December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?.

4.  See also the November 4, 2014 Subway Fold post entitled Say, Did You Hear the Story About the Science and Benefits of Being an Effective Storyteller?

New Manual Transmission: Creating an Augmented Reality Car Owner’s Guide

"Engine_Cut-away_SEMA2010", Image by Automotive Rythms

“Engine_Cut-away_SEMA2010”, Image by Automotive Rythms

Like most training and support documentation, car owner’s manuals are usually only consulted when something goes wrong with a vehicle. Some people look through them after they purchase a new ride, but usually their pages are about as interesting to read as watching paint dry on the wall. Does anyone really care about spending much quality time immersed in the scintillating details of how the transmission works unless they really must?

Not surprisingly, the results of a Google search on “car owner’s manuals” were, well, manifold, as there exist numerous free sites online that contain deep and wide digital repositories of this highly detailed and exhaust-ively diagrammed support.

Now comes news that the prosaic car owner’s manual has been transformed into something entirely new with its transposition into an augmented reality (AR) application. This was the subject of a fascinating report on CNET.com on November 10, 2015 entitled Hyundai Unveils an Augmented-Reality Owner’s Manual by Andrew Krok. I will summarize and annotate it, and then pose a clutch of my own questions.

As well, the press release from the auto manufacturer entitled Hyundai Virtual Guide Introduces Augmented Reality to the Owner’s Manual was also released on November 10th. Both links contain photos of the app being used on a tablet. It can also be seen in operation in this brief video on YouTube. (Furthermore, these eleven Subway Fold posts have recently covered a range of the latest developments and applications concerning augmented reality in other fields.)

Adding an entirely new meaning to the term “mobile” app, it is officially called the Hyundai Virtual Guide and can be used on a smartphone or tablet. It will soon be available for downloading on both Google Play and Apple’s App Store. It compresses “hundreds of pages of information” into the app and, in conjunction with the owner’s mobile device’s camera, can recognize dozens of features and walk owners through several basic maintenance operations. The app includes 82 videos and 50 more informational guides; its equivalent, if traditionally formatted on paper, would run to hundreds of pages.

The augmented reality implementation in the app consists of six 3D images. When the user scans his or her mobile device over a component of the car such as the engine or dashboard, the screen image is enhanced with additional “relevant information”. Another example is pointing the mobile’s camera and then clicking on “Engine Oil”, which is then followed by instructions on how to use the dipstick to check the oil level.
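
Hyundai has not published how the app is built, but the behavior described maps onto a common AR pattern: a computer-vision model labels the component in the camera frame, and that label keys into a content catalog whose entries are drawn as the overlay. A loose, hypothetical sketch of that pattern (every name and entry below is invented, not a real Hyundai API):

```python
# Hypothetical sketch of the recognize-then-overlay pattern described above.

GUIDE_CATALOG = {
    "engine_oil_cap": {
        "title": "Engine Oil",
        "steps": ["Park on level ground and let the engine cool.",
                  "Pull the dipstick, wipe it, reinsert it, and read the level."],
        "video": "videos/engine_oil.mp4",
    },
    "dashboard": {
        "title": "Dashboard",
        "steps": ["Tap any lit warning icon for its meaning."],
        "video": "videos/dashboard.mp4",
    },
}

def recognize_component(camera_frame):
    """Stand-in for the app's real computer-vision model."""
    return "engine_oil_cap"

def overlay_for(camera_frame):
    label = recognize_component(camera_frame)
    return GUIDE_CATALOG.get(label)  # content to draw over the live camera view

print(overlay_for(camera_frame=None)["title"])  # Engine Oil
```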

To start off, the app will be available only for the 2015 Hyundai Sonata model. Other models will later be made compatible with the app.

Hyundai chose which systems of the car to include in the app by surveying buyers on “the most difficult features to figure out”. Because everyone today is so accustomed to accessing information on a screen, the company determined that this was among the best ways to inform buyers about their new Sonatas.

The company has previously created other virtual means to access some of their other manuals. These have included an iPad configured with the owner’s manual of their Equus sedan and another app that displays the manual inside the passenger compartment on the “infotainment system’s touchscreen”.

My questions are as follows:

  • While this AR app is intended for consumers, is Hyundai considering extending the scope of this development to engineer a more detailed and sophisticated app for car dealers and service stations for maintenance and repairs?
  • Could an AR app make the process of yearly state car inspections faster and more economical?
  • Are the heads-up displays that project some of the dashboard’s information into the lower part of the windshield for drivers to see in some production models another form of automotive AR that could somehow be used together with the new AR manual app?
  • Would other consumer products such as, among others, electronics and recreational equipment benefit from AR manuals?
  • Just as we now see traditional car owner’s manuals gathered, cataloged and accessed online as described above, will future automotive AR apps similarly be imported into dedicated online libraries?
  • What entrepreneurial opportunities might be forming in the design, production and implementation of AR manuals and other automotive AR apps?
  • Could other important personal items such as prescription drug packaging benefit from an AR app because so few people ever read the literature accompanying their medicines? In other words, would an AR app increase the probability that important information on dosages and potential adverse reactions will be read because of the more engaging and interactive format?

Artificial Fingerprints: Something Worth Touching Upon

"030420_1884_0077_x__s", Image by TNS Sofres

“030420_1884_0077_x__s”, Image by TNS Sofres

Among the recent advancements in the replication of various human senses, particularly for prosthetics and robotics, scientists have just made another interesting achievement in creating, of all things, artificial fingerprints. They can actually sense certain real-world stimuli. This development could have some potentially very productive – – and conductive – – applications.

Could someone please cue up Human Touch by Bruce Springsteen for this?

We looked at a similar development in artificial human vision just recently in the October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

This latest digital and all-digit story was reported in a fascinating article posted on Sciencemag.org on October 30, 2015 entitled New Artificial Fingerprints Feel Texture, Hear Sound by Hanae Armitage. I will summarize and annotate it, and then add some of my own non-artificial questions.

Design and Materials

An electronic material has been created at the University of Illinois, Urbana-Champaign, that, while still under development in the lab, “mimics the swirling design” of fingerprints. It can detect pressure, temperature and sound. The researchers who devised this believe it could be helpful in artificial limbs and perhaps even in enhancing our own organic senses.

Dr. John Rogers, a member of the development team, finds this new material to be an addition to the “sensor types that can be integrated with the skin”.

Scientists have been working for years on these materials, called electronic skins (e-skins). Some of them can imitate the senses of human skin by monitoring pulse and temperature. (See also the October 18, 2015 Subway Fold post entitled Printable, Temporary Tattoo-like Medical Sensors are Under Development.) Dr. Hyunhyub Ko, a chemical engineer at Ulsan National Institute of Science and Technology in South Korea and another member of the artificial fingerprints development team, noted that there are further scientific challenges “in replicating fingertips” with their ability to sense very small differences in textures.

Sensory Perceptions

In the team’s work, Dr. Ko and the others began with “a thin, flexible material” textured with features much like human fingerprints. Next, they used this to create a “microstructured ferroelectric skin“. This contains small embedded structures called “microdomes” (as shown in an illustration accompanying the AAAS.org article) that enable the following sensory perceptions in the e-skin*:

  • Pressure: When outside pressure moves two layers of this material together, it generates a small electric current that is monitored through embedded electrodes. In effect, the greater the pressure, the greater the current (see the sketch after this list).
  • Temperature: The e-skin relaxes in warmer temperatures and stiffens in colder temperatures, likewise generating changes in the electrical current and thus enabling it to sense temperature changes.
  • Sound: While not originally expected, the e-skin was also found to be sensitive to sound. This emerged in testing by Dr. Ko and his team. They electronically measured the vibrations from pronouncing the letters in the word “skin” right near the e-skin. The results showed that this affected the microdomes and, in turn, registered changes in the electric current.
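
To illustrate the pressure channel in the simplest possible terms: since the response is monotonic, a reading pipeline can invert a measured current back to a pressure estimate through a calibration table. This is a toy sketch with invented calibration numbers, assuming only NumPy; the team’s actual signal processing is not described in the article:

```python
import numpy as np

# Invented calibration points: measured current (nA) at known pressures (kPa).
# "The greater the pressure, the greater the current" -> monotonic mapping.
cal_pressure_kpa = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
cal_current_na = np.array([0.0, 1.2, 2.1, 3.8, 6.5])

def pressure_from_current(current_na: float) -> float:
    """Invert the monotonic current response by linear interpolation."""
    return float(np.interp(current_na, cal_current_na, cal_pressure_kpa))

print(pressure_from_current(3.0))  # ~15.3 kPa under this toy calibration
```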

Dr. Ko said his next challenge is how to transmit all of these sensations to the human brain. This has been done elsewhere in e-skins using optogenetics (the use of light to control neurons that have been genetically modified), but he plans to research other technologies for this. Specifically, given the increasing scientific interest and development in skin-mounted sensors (such as those described in the October 18, 2015 Subway Fold post linked above), engineering these will require a smart combination of “ideas and materials”.

My Questions

  • Might e-skins have applications in virtual reality and augmented reality systems for medicine, engineering, manufacturing, design, robotics, architecture, and gaming? (These 11 Subway Fold posts cover a range of new developments and applications of these technologies.)
  • What other fields and marketplaces might also benefit from integrating e-skin technology? What entrepreneurial opportunities might emerge here?
  • Could e-skins work in conjunction with the system being developed in the June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” ?

 


For an absolutely magnificent literary exploration of the human senses, I recommend A Natural History of the Senses by Diane Ackerman (Vintage, 1991) in the highest possible terms. It is a gem in both its sparkling prose and engaging subject.


See this Wikipedia page for detailed information and online resources about the field known as haptic technology.

VR in the OR: New Virtual Reality System for Planning, Practicing and Assisting in Surgery

"Neural Pathways in the Brain", Image by NICHD

“Neural Pathways in the Brain”, Image by NICHD

The work of a new startup and some pioneering doctors has recently given the term “operating system” an entirely new meaning: They are using virtual reality (VR) to experimentally plan, practice and assist in surgery.

Currently, VR technology is rapidly diversifying into new and exciting applications across a widening spectrum of fields and markets. This surgical one is particularly promising because of its potential to improve medical care. Indeed, this is far beyond VR’s more familiar domains of entertainment and media.

During the past year, a series of Subway Fold posts have closely covered related VR advancements and apps in movies, news reporting, and corporate advisory boards. (These are just three of ten posts here in the category Virtual and Augmented Reality.)

Virtual Surgical Imaging

The details of these new VR surgical systems were reported in a fascinating article posted on Smithsonian.com on September 15, 2015 entitled How Is Brain Surgery Like Flying? Put On a Headset to Find Out by Michelle Z. Donahue. I highly recommend reading it in its entirety. I will summarize, annotate, and pose a few non-surgical questions of my own.

Surgeons and developers are creating virtual environments by combining and enhancing today’s standard two-dimensional medical scans. The surgeons can then use the new VR system to study a particular patient’s internal biological systems. For example, prior to brain surgery, they can explore the virtual representation of the area to be operated upon before as well as after any incision has been made.

Recently, a fourth-year neurosurgery resident named Osamah Choudhry at NYU’s Langone Medical Center had experience doing this with a 3D virtualization of a patient’s glioma, a form of brain tumor. His VR headset is an HTC Vive used in conjunction with a game controller that enables him to move around and view the subject from different perspectives, and see the fine details of connecting nerve and blood vessels. Furthermore, he has been able to simulate a view of some of the pending surgical procedures.

SNAP Help for Surgeons

This new system that creates these fully immersive 3D surgical virtualizations is called the Surgical Navigation Advanced Platform (SNAP). It was created by a company in Ohio called Surgical Theater ( @SurgicalTheater ). It can be used with either the Oculus Rift or HTC Vive VR headsets (neither of which has been commercially released yet). Originally, SNAP was intended for planning surgery, which is how it is used in the US. Now it is being tested by a number of hospitals in Europe for actual use during surgery.

Surgeons using SNAP today need to step away from their operations and change gloves. Once engaged with this system, they can explore the “surgical target” and then “return to the patient with a clear understanding of next steps and obstacles”. For instance, SNAP can assist in accurately measuring and focusing upon which parts of a brain tumor to remove as well as which areas to avoid.

SNAP’s chance origin occurred when former Israeli fighter pilots Moty Avisar and Alon Geri were in Cleveland at work on a flight simulator for the US Air Force. While they were having a cup of coffee, some of their talk was overheard by Dr. Warren Selman, the chair of neurosurgery at Case Western Reserve University. He inquired whether they could adapt their system for surgeons to enable them to “go inside a tumor” in order to see how best to remove it while avoiding “blood vessels and nerves”. This eventually led Avisar and Geri to form Surgical Theater. At first, their system produced a 3D model that was viewed on a 2D screen. The VR headset was integrated later on.

System Applications

SNAP’s software merges a patient’s CT and MRI images to create its virtual environment. Thereafter, a neurosurgeon can, with the assistance of a handheld controller, use the VR headset to “stand next to or even inside the tumor or aneurysm“.  This helps them to plan the craniotomy, the actual surgery on the skull, and additional parts of the procedures. As well, the surgeon can examine the virtual construct of the patient’s vascular system.
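
The article does not detail SNAP’s rendering pipeline, but the core idea of fusing co-registered CT and MRI volumes into one navigable dataset can be loosely sketched. The following assumes NumPy, stand-in volumes, and scans already aligned to one voxel grid; a real pipeline would load actual patient scans and solve that registration step first:

```python
import numpy as np

# Stand-ins for co-registered CT and MRI volumes (real code would load
# the patient's scans, already aligned to the same voxel grid).
rng = np.random.default_rng(3)
ct = rng.normal(1000, 300, size=(128, 128, 64))   # CT-like intensities
mri = rng.normal(300, 100, size=(128, 128, 64))   # MRI-like intensities

def normalize(vol):
    """Scale a volume's intensities into [0, 1] for blending."""
    lo, hi = np.percentile(vol, [1, 99])
    return np.clip((vol - lo) / (hi - lo), 0.0, 1.0)

# Simple alpha blend: CT contributes bone detail, MRI soft tissue.
fused = 0.5 * normalize(ct) + 0.5 * normalize(mri)

# 'fused' is a 3D scalar field that a VR engine could render as the
# navigable environment described above.
print(fused.shape)  # (128, 128, 64)
```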

At NYU’s Langone Medical Center, the Chair of Neurosurgery, Dr. John Golfinos, believes that SNAP is a significant advancement in this field, as doctors previously had to engage in all manner of “mental gymnastics” when using 2D medical imaging to visualize a patient’s condition. Today, with a system like SNAP, simulations are much more accurate in presenting patients the way that surgeons see them.

Dr. Golfinos has applied SNAP to evaluating “tricky procedures” such as whether or not to use an endoscopic tool for an operation. SNAP was helpful in deciding to proceed in this manner and the outcome was successful.

UCLA’s medical school, the David Geffen School of Medicine, is using SNAP in “research studies to plan surgeries and a procedure’s effectiveness”. The school’s Neurosurgery Chair, Dr. Neil Martin, has been working with Surgical Theater to smooth over the disorientation some users experience with VR headsets.

Dr. Martin and Mr. Avisar believe that SNAP “could take collaboration on surgeries to an international level”. That is, surgeons could consult with each other on a global scale, placing them within a shared virtual space to cooperate on an operation.

Patient Education and Future Developments

Dr. Choudhry further believes that the Oculus Rift or Vive headsets can be used to answer the questions of patients who have done their own research, as well as to improve the doctor/patient relationship. He has seen patients quickly “lose interest” when he uses 2D CT and MRI scans to explain their conditions. However, he believes that 3D VR “is intuitive” because patients recognize what they are viewing.

He also believes that future developments might lead to the integration of augmented reality systems into surgery. These present, through a transparent viewer and headset, a combination of a virtual data overlay within the user’s line of sight upon the real operating room.  (Please see these eight recent Subway Fold posts about augmented reality.)

My own questions are as follows:

  • Are VR surgical systems used only after a decision to operate has been made or are surgeons also using them to assess whether or not to operate?
  • What other types of surgeries could benefit both doctors and patients by introducing the option of using VR surgical systems?
  • Can such systems be used for other non-invasive medical applications such as physical therapy for certain medical conditions and post-operative care?
  • Can such systems be adapted for non-medical applications such as training athletes? That is, can such imagery assist in further optimizing and personalizing training regimens?

May 11, 2017 Update: For a truly fascinating report on a new surgical system that has just been developed using advanced augmented reality technology, see Augmented Reality Goggles Give Surgeons X-ray Vision, by Matt Reynolds, posted on NewScientist.com on May 11, 2017.


 

Latest Census on Virtual Senses: A Seminar on Augmented Reality in New York

"3D Augmented Reality Sculpture 3", Image by Travis Morgan

“3D Augmented Reality Sculpture 3”, Image by Travis Morgan

I stepped out of the 90-degree heat and through the front door of Adorama, a camera, electronics and computer store on West 18th Street in Manhattan, just before 6:00 pm on July 28, 2015. I quickly felt like I had entered another world¹ as I took my seat for a seminar entitled the Future of Augmented Reality Panel with Erek Tinker. It was the perfect venue, literally and figuratively, to, well, focus on this very exciting emerging technology. (Recent trends and developments in this field have been covered in these six Subway Fold posts.)

An expert panel of four speakers involved in developing augmented reality (AR) discussed the latest trends and demo-ed an array of way cool products and services that wowed the audience. The moderator, Erek Tinker (the Director of Business Development at The Spry Group, an NYC software development firm), did an outstanding job of presenting the speakers and keeping the audience involved with opportunities for their own insightful questions.

This just-over-the-horizon exploration and displays of AR-enhanced experiences very effectively drew everyone into the current capabilities and future potential of this hybridization of the real and the virtual.

So, is AR really the next big thing or just another passing fad? All of the panel members made their captivating and compelling cases in this order:

  • Dusty Wright is a writer and musician who has recently joined FuelFX as the Director of Business Development in AR and VR. The company has recently worked on entertainment apps including, among others, their recent collaboration on a presentation of AR-enhanced images by the artist Ron English at the 2015 SXSW festival.²
  • Brian Hamilton, the head of Business Development for the eastern US at a company called DAQRI, spoke about and presented a video on the company’s recently rolled out “Smart Helmet“. This is a hardhat with a clear visor that displays, using proprietary software and hardware, AR imagery and data to the worker wearing it. He described this as “industrial POV augmented reality” and believes that AR will be a part of the “next industrial revolution”, enabling workers to move through their work with the data they need.
  • Miguel Sanchez is the Founder and Creative Director of Mass Ideation, a digital creative agency working with AR, among its other strategic and design projects. He sees a bright future in the continuing commercialization and application of AR, but also believes that the public needs to be better educated on the nature and capabilities of it. He described a project for a restaurant chain that wanted to shorten the time their customers waited for food by providing AR-enabled games and videos. He thinks that in the right environments, users can hold up their smartphones to objects and soon see all manner of enhanced visual features onscreen.
  • Anthony Mattana is the founder of Hooke Audio, which has developed an app and wireless headphones for recording and playing “immersive 3D audio”. The technology is built upon the concept of binaural audio, which captures sound just as it is actually heard. He showed this video of a band’s live performance contrasting the smartphone’s standard recording capabilities with the company’s technology. The difference in sound quality and depth was quite dramatic. This video and five others appear on Hooke’s home page. He said their first products will be shipped within a few months.

Mr. Tinker then turned to all of the panelists for their perspectives on the following:

  • Adoption Forecasts: When shown a slide of AR’s projected market growth for companies producing this hardware, everyone concurred on its predicted 10-year upward trajectory. Mr. Sanchez expects the biggest breakthroughs for AR to be in gaming systems.
  • Apple’s Potential Involvement: Mr. Wright noted that Apple has just recently acquired an AR and computer vision company called Metaio. He thus expects that Apple may create a version of AR similar to their highly popular Garage Band music recording system. Mr. Sanchez added that he expects Apple to connect AR to their Siri and Maps technologies. He further suggested that AR developers should look for apps that solve problems and that in the future users may not even recognize AR technology in operation.
  • AR versus VR: Mr. Mattana said that he finds “AR to be more compelling than VR” and that it is better because you can understand it, educate users about it, and it is “tethered to powerful computing” systems. He thinks the main challenge for AR is to make it “socially acceptable”, noting the much-publicized perceived awkwardness of Google Glass.

Turning to the audience for Q&A, the following topics were addressed:

  • Privacy: How could workers’ privacy be balanced and protected when an AR system like the Smart Helmet can monitor a worker’s entire shift? Mr. Hamilton replied that he has spoken with union representatives about this. He sees this as a “solvable concern”. Furthermore, workplace privacy with respect to AR must include considerations of corporate policy, supporting data security, training and worker protection.
  • Advertising: All of the panel members agreed that AR content must somehow be monetized. (This topic was covered in detail in the May 25, 2015 Subway Fold post entitled Advertisers Looking for New Opportunities in Virtual and Augmented Spaces.)
  • Education Apps: Mr. Wright believes that AR will be “a great leveler” in education in many school settings and for students with a wide range of instructional requirements, including those with special needs. Further, he anticipates that this technology will be applied to gamify education. Mr. Mattana mentioned that blind people have shown great interest in binaural audio.
  • News Sources and Online Resources: The panelists recommended the following:
  • Medical Application: Mr. Wright demo-ed, with the use of a tablet held up to a diagram, an application called “Sim Man 3D” created for Methodist Hospital in Houston. This displayed simulated anatomical functioning and sounds.
  • Neural Connections: Will AR one day directly interface with the human brain? While not likely anytime soon, the panel predicted possible integration with electroencephalograms (EEG) and neural interfaces within 10 years or so.
  • Media Integration: The panel speculated about how the media, particularly in news coverage, might add AR to virtually place readers more within the news being reported.

Throughout the seminar, all of the speakers emphasized that AR is still at its earliest stages and that many opportunities await in a diversity of marketplaces. Judging from their knowledge, enthusiasm, imaginations and commitments to this nascent technology, I left thinking they are quite likely to be right.


June 5, 2017 Update: For the latest developments in DAQRI’s AR technology and products, see this June 3, 2017 post entitled Why Daqri Has Spread Its Bets with Augmented Reality Technology, by Dean Takahashi, on VentureBeat.com.


1.  Not so ironically, when someone from the audience was asking a question, he invoked an episode from the classic sci-fi TV series The Outer Limits. Does anyone remember the truly extraordinary episode entitled Demon with a Glass Hand?

2.  See the March 26, 2015 Subway Fold Thread entitled Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals for extensive coverage on VR at both of these festivals.