Digital Smarts Everywhere: The Emergence of Ambient Intelligence

Image from Pixabay

The Troggs were a legendary rock and roll band who were part of the British Invasion in the mid-1960s. They have always been best known for their iconic rocker Wild Thing, which was also the only Top 10 hit ever to feature an ocarina solo. How cool is that! The band went on to have two other major hits, With a Girl Like You and Love is All Around.¹

The third of the band’s classic singles can be stretched a bit into a helpful metaphor for an emerging form of pervasive “all around”-ness, this time in a more technological context. A fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, posted on May 7, 2016, offers a compelling (but not too rocking) analysis of how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day.

All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter (and soon thereafter still smarter) technologies is comparably affecting people’s lives at many different levels.²

We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.

I will summarize and annotate it, add some additional context, and then pose some of my own Trogg-inspired questions.

Internet of Experiences

Digital this, that and everything is everywhere in today’s world. There is a surging confluence of connected personal and business devices, the Internet, and the Internet of Things (IoT)³. Woven closely together on a global scale, we have essentially built “a digital intelligence network that transcends all that has gone before”. In some cases, this quantum of advanced technologies gains the “ability to sense, predict and respond to our needs”, and is becoming part of everyone’s “natural behaviors”.

A fourth industrial revolution might even manifest itself in the form of machine intelligence whereby we will interact with the “always-on, interconnected world of things”. As a result, the Internet may become characterized more by experiences where users will converse with ambient intelligent systems everywhere. The supporting planks of this new paradigm include:

A prediction of what more fully realized ambient intelligence might look like, using travel as an example, appeared in an article entitled Gearing Up for Ambient Intelligence, by Lisa Morgan, on InformationWeek.com on March 14, 2016. Upon leaving his or her plane, the traveler will receive a welcoming message and a request to proceed to the curb to retrieve their luggage. Upon reaching curbside, a self-driving car⁶ will be waiting with information about the hotel booked for the stay.

Listening

Another article about ambient intelligence entitled Towards a World of Ambient Computing, by Simon Bisson, posted on ZDNet.com on February 14, 2014, is briefly quoted for the line “We will talk, and the world will answer”, to illustrate the point that current technology will be morphing into something in the future that would be nearly unrecognizable today. Grossman’s article proceeds to survey a series of commercial technologies recently brought to market as components of a fuller ambient intelligence that will “understand what we are asking” and provide responsive information.

Starting with Amazon’s Echo, this new device can, among other things:

  • Answer certain types of questions
  • Track shopping lists
  • Place orders on Amazon.com
  • Schedule a ride with Uber
  • Operate a thermostat
  • Provide transit schedules
  • Commence short workouts
  • Review recipes
  • Perform math
  • Request a plumber
  • Provide medical advice

Will it be long before we begin to see similar smart devices everywhere in homes and businesses?

Kevin Kelly, the founding Executive Editor of WIRED and a renowned futurist⁷, believes that in the near future digital intelligence will become available in the form of a utility⁸ and, as he puts it, “IQ as a service”. This is already being done by Google, Amazon, IBM and Microsoft, who are providing open access to sections of their AI coding.⁹ He believes that success for the next round of startups will go to those who enhance and transform something already in existence with the addition of AI. The best example of this is once again self-driving cars.
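
To make the “IQ as a service” notion a bit more concrete, here is a minimal sketch of what renting intelligence as a utility might look like from a developer’s point of view. The endpoint URL, request format and response field below are hypothetical placeholders of my own, not any vendor’s actual API.

```python
# A minimal sketch of the "IQ as a service" idea: the application rents its
# intelligence over the network instead of building it in-house. The endpoint
# and the "answer" field are hypothetical, not any vendor's real API.
import json
import urllib.request

def ask_cloud_ai(prompt: str) -> str:
    """Send a question to a (hypothetical) hosted AI service and return its answer."""
    request = urllib.request.Request(
        "https://api.example-ai-utility.com/v1/answers",   # hypothetical endpoint
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["answer"]                # hypothetical response field

if __name__ == "__main__":
    # The application itself stays simple; the "IQ" is rented, not built in-house.
    print(ask_cloud_ai("When does the next downtown train leave?"))
```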

As well, in a chapter on Ambient Computing from a report by Deloitte UK entitled Tech Trends 2015, it was noted that some companies were engineering ambient intelligence into their products as a means to remain competitive.

Recommending

A great deal of AI is founded upon the collection of big data from online searching, the use of apps and the IoT. This universe of information helps neural networks learn from repeated behaviors, including people’s responses and interests. It provides a basis for “deep learning-derived personalized information and services” that can, in turn, make “increasingly educated guesses with any given content”.
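
As a toy illustration of that learning-from-repetition loop (my own sketch, not anything from Mr. Grossman’s article), consider a tiny recommender that simply counts recurring behavioral signals and treats the most frequent ones as its “educated guesses”:

```python
# A toy sketch of personalization from repeated behaviors: the more often a
# signal recurs (a search, an app action, a sensor event), the higher it ranks
# when guessing what to surface next. Purely illustrative.
from collections import Counter

class AmbientRecommender:
    def __init__(self) -> None:
        self.interests = Counter()

    def observe(self, signal: str) -> None:
        """Record one observed behavior."""
        self.interests[signal] += 1

    def guess(self, n: int = 3) -> list:
        """Return the n interests seen most often so far."""
        return [topic for topic, _ in self.interests.most_common(n)]

user = AmbientRecommender()
for event in ["jazz", "running", "jazz", "recipes", "jazz", "running"]:
    user.observe(event)

print(user.guess(2))  # ['jazz', 'running']
```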

An alternative perspective, that “AI is simply the outsourcing of cognition by machines”, has been expressed by Jason Silva, a technologist, philosopher and video blogger on Shots of Awe. He believes that this process of intelligence is the “most powerful force in the universe”. Nonetheless, he sees this as an evolutionary process which should not be feared. (See also the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Bots are another contemporary manifestation of ambient intelligence. These are a form of software agent, driven by algorithms, that can independently perform a range of sophisticated tasks. Two examples include:

Speaking

Optimally, bots should also be able to listen and “speak” back in return much like a 2-way phone conversation. This would also add much-needed context, more natural interactions and “help to refine understanding” to these human/machine exchanges. Such conversations would “become an intelligent and ambient part” of daily life.

An example of this development path is evident in Google Now. This service combines voice search with predictive analytics to present users with information prior to searching. It is an attempt to create an “omniscient assistant” that can reply to any request for information “including those you haven’t thought of yet”.

Recently, the company created a Bluetooth-enabled prototype of a lapel pin based on this technology that operates with just a tap, much like the communicators on Star Trek. (For more details, see Google Made a Secret Prototype That Works Like the Star Trek Communicator, by Victor Luckerson, on Time.com, posted on November 22, 2015.)

The configurations and specs of the AI-powered devices supporting such pervasive and ambient intelligence, be they lapel pins, some form of augmented reality¹⁰ headsets or something else altogether, are not exactly clear yet. Their development and introduction will take time but remain inevitable.

Will ambient intelligence make our lives any better? It remains to be seen, but it is probably a viable means to handle some of our more ordinary daily tasks. It will likely “fade into the fabric of daily life” and be readily accessible everywhere.

Quite possibly then, the world will truly become a better place to live upon the arrival of ambient intelligence-enabled ocarina solos.

My Questions

  • Does the emergence of ambient intelligence, in fact, signal the arrival of a genuine fourth industrial revolution or is this all just a semantic tool to characterize a broader spectrum of smarter technologies?
  • How might this trend affect overall employment in terms of increasing or decreasing jobs on an industry by industry basis and/or the entire workforce? (See also this June 4, 2015 Subway Fold post entitled How Robots and Computer Algorithms Are Challenging Jobs and the Economy.)
  • How might this trend also affect non-commercial spheres such as public interest causes and political movements?
  • As ambient intelligence insinuates itself deeper into our online worlds, will this become a principal driver of new entrepreneurial opportunities for startups? Will ambient intelligence itself provide new tools for startups to launch and thrive?

 


1.   Thanks to Little Steven (@StevieVanZandt) for keeping the band’s music in occasional rotation on The Underground Garage (#UndergroundGarage). Also, for an appreciation of this radio show, see this August 14, 2014 Subway Fold post entitled The Spirit of Rock and Roll Lives on Little Steven’s Underground Garage.

2.  For a remarkably comprehensive report on the pervasiveness of this phenomenon, see the Pew Research Center report entitled U.S. Smartphone Use in 2015, by Aaron Smith, posted on April 1, 2015.

3.  These 10 Subway Fold posts touch upon the IoT.

4.  The Subway Fold category Big Data and Analytics contains 50 posts covering this topic in whole or in part.

5.  The Subway Fold category Telecommunications contains 12 posts covering this topic in whole or in part.

6.  These 5 Subway Fold posts contain references to self-driving cars.

7.   Mr. Kelly is also the author of a forthcoming book entitled The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, to be published on June 7, 2016 by Viking.

8.  This September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice, in part summarized an article by Steven Levy in the September 2014 issue of WIRED entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask. This covered a startup called Viv Labs whose objective was to transform AI into a form of utility. Fast forward to the Disrupt NY 2016 conference going on in New York last week. On May 9, 2016, the founder of Viv, Dag Kittlaus, gave his presentation about the Viv platform. This was reported in an article posted on TechCrunch.com entitled Siri-creator Shows Off First Public Demo of Viv, ‘the Intelligent Interface for Everything’, by Romain Dillet, on May 9, 2016. The video of this 28-minute presentation is embedded in this story.

9.  For the full details on this story see a recent article entitled The Race Is On to Control Artificial Intelligence, and Tech’s Future by John Markoff and Steve Lohr, published in the March 25, 2016 edition of The New York Times.

10.  These 10 Subway Fold posts cover some recent trends and developments in augmented reality.

Charge of the Light Brigade: Faster and More Efficient New Chips Using Photons Instead of Electrons

"PACE - PEACE" Image by Etienne Valois

“PACE – PEACE” Image by Etienne Valois

Alfred, Lord Tennyson wrote his immortal classic poem, The Charge of the Light Brigade, in 1854 to honor the dead heroes of a doomed cavalry charge at the Battle of Balaclava during the Crimean War. Moreover, it strikingly portrayed the horrors of war. In just six short verses, he created a monumental work that has endured for the 162 years since.

The poem came to mind last week after reading two recent articles on seemingly disparate topics. The first was posted on The New Yorker’s website on December 30, 2015, entitled In Silicon Valley Now, It’s Almost Always Winner Takes All by Om Malik. This is a highly insightful analysis of how and why tech giants such as Google in search, Facebook in social networking, and Uber in transportation have come to dominate their markets. In essence, competition is a fierce and relentless battle in the global digital economy. The second was an article on CNET.com posted on December 23, 2015 entitled Chip Promises Faster Computing with Light, Not Electrical Wires by Stephan Shankland. I highly recommend reading both of them in their entirety.

Taken together, the homonym “light”, in both historical poetry and in tech, seems to tie these two pieces together, insofar as contemporary competition in tech markets is often described in military terms and metaphors. Focusing here on the second story, about a tantalizing advance in chip design and fabrication: will it survive as it moves forward into the brutal and relentlessly “winner takes all” marketplace? I will summarize and annotate this story, and pose some of my own, hopefully en-light-ening questions.

Forward, the Light Brigade

A team of researchers, all of whom are university professors, including Vladimir Stojanovic from the University of California at Berkeley, who led the development, Krste Asanovic, also from Berkeley, Rajeev Ram from MIT, and Milos Popovic from the University of Colorado at Boulder, has created a new type of processing chip “that transmits data with light”. As well, its architecture significantly increases processing speed while reducing power consumption. A report on the team’s work was published in an article in the December 24, 2015 issue of Nature (subscription required) entitled Single-chip Microprocessor That Communicates Directly Using Light by Chen Sun, Mark T. Wade, Yunsup Lee, et al.

This approach of “using silicon as an optical medium” is, according to Wikipedia, called silicon photonics. IBM (see this link) and Intel (see this link) have likewise been involved in R&D in this field, but have yet to introduce anything ready for the market.

However, this team of university researchers believes their new approach might be introduced commercially within a year. While their efforts do not make chips run faster per se, the photonic elements “keep chips supplied with data”, which keeps them from losing time by idling. Thus, they can process data faster.
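
A rough back-of-the-envelope sketch may help show why simply feeding a chip faster raises its effective throughput even when the core itself is unchanged. The numbers below are purely illustrative assumptions of mine, not figures from the Nature paper or the CNET article:

```python
# Why a faster data link raises a chip's effective throughput even though the
# core itself is no faster: the slower stage is the bottleneck. All numbers
# are illustrative assumptions.

def effective_throughput(core_gbps: float, link_gbps: float) -> float:
    """Data processed per second when every byte must first arrive over the link."""
    return min(core_gbps, link_gbps)

core_speed = 100       # GB/s the core could process if it were never starved (assumed)
electrical_link = 20   # GB/s over a conventional electrical interconnect (assumed)
photonic_link = 100    # GB/s over an optical link (assumed)

print(effective_throughput(core_speed, electrical_link))  # 20 GB/s: the core idles most of the time
print(effective_throughput(core_speed, photonic_link))    # 100 GB/s: the core stays busy
```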

Currently (no pun intended), electrical signals traverse metal wiring across the world’s computing and communications devices and networks. For data traveling longer national and international distances, the electronic signals are converted into light and sent along high-speed fiber-optic cables. Nonetheless, this approach “isn’t cheap”.

Half a League Onward

What the university researchers’ team has done is create chips with “photonic components” built into them. If they succeed in scaling up and commercializing their creation, consumers will likely be the beneficiaries. These advantages will probably manifest themselves first in data centers, where they could speed up:

  • Google searches
  • Facebook image recognition
  • Other “performance-intensive features not economical today”
  • Performance more broadly, by removing processing bottlenecks and conserving battery life in smartphones and other personal computing platforms

Professor Stojanovic believes that one of their largest challenges is to make this technology affordable before it can be implemented in consumer-level computing and communications devices. He is sanguine that such economies of scale can be reached. He anticipates further applications of this technology to enable chips’ onboard processing and memory components to communicate directly with each other.

Additional integrations of silicon photonics might be seen in the lidar remote sensing systems of self-driving cars¹, as well as in brain imaging² and environmental sensors. It also holds the potential to alter the traditional methods by which computers are assembled. For example, the length of cables is limited by how far data can pass through them quickly and efficiently before needing amplification along the way. Optical links may permit data to be transferred significantly further along network cabling. The research team’s “prototype used 10-meter optical links”, but Professor Stojanovic believes this could eventually be lengthened to a kilometer. This could potentially result in meaningful savings in energy, hardware and processing efficiency.

Two startups that are also presently working in the silicon photonics space include:

My Questions:

  • Might another one of silicon photonics’ virtues be that it is partially fabricated from more sustainable materials, primarily silicon derived from sand rather than various metals?
  • Could silicon photonics chips and architectures be a solution to the very significant computing needs of the virtual reality (VR) and augmented reality (AR) systems that will be coming onto the market in 2016? This issue was raised in a most interesting article posted on Bloomberg.com on December 30, 2015 entitled Few Computers Are Powerful Enough to Support Virtual Reality by Ian King. (See also these 13 Subway Fold posts on a range of VR and AR developments.)
  • What other new markets, technologies and opportunities for entrepreneurs and researchers might emerge if the university research team’s chips achieve their intended goals and succeed in making it to market?

 


1.  See these six Subway Fold posts for references to autonomous cars.

2.  See these four Subway Fold posts concerning certain developments in brain imaging technology.

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

"Synapse", Image by Allan Ajifo

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything, and then where would he put it all?¹ This often comes to mind whenever I have experienced the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the Web, almost everyone I know has also had the same overwhelming experience whenever the amount of information they are inundated with every day seems increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

The latest research into how this happens is covered in a fascinating report posted on Phys.org on December 15, 2015, entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff. I will summarize and annotate it, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet, no matter the font, or an individual, regardless of their clothing and grooming, is always recognizable. We can also identify objects even if our view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment during our lives.

A recent discovery by a team of researchers at Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision⁴”. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that humans can quickly make sense of a very complex and robust world because, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests’”. These measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked them if they could identify them once again using a much smaller segment of the image. This led to one of their two principal discoveries: the test subjects required only 0.15% of the original data to repeat their identifications.
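
For readers who want to see the core idea in miniature, here is a small sketch of random projection (my own illustration with made-up dimensions, not the authors’ code or data): a high-dimensional image is compressed into a handful of random weighted sums, roughly 0.15% of the original values, yet images from the same category remain closer to each other than to a different category even after the compression.

```python
# A toy sketch of random projection with assumed dimensions (not the study's).
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 10_000   # assumed size of a flattened image
k = 15              # keep roughly 0.15% of the original values

# Random projection matrix: each of the k summary values is a random weighted
# sum of all the pixels; scaling by 1/sqrt(k) roughly preserves distances.
R = rng.normal(size=(k, n_pixels)) / np.sqrt(k)

def project(image_vector: np.ndarray) -> np.ndarray:
    """Compress a flattened image down to k numbers."""
    return R @ image_vector

# Two category "prototypes" and a noisy variant of the first one.
prototype_a = rng.random(n_pixels)
prototype_b = rng.random(n_pixels)
variant_a = prototype_a + 0.1 * rng.normal(size=n_pixels)

pa, pb, va = project(prototype_a), project(prototype_b), project(variant_a)
print(np.linalg.norm(va - pa) < np.linalg.norm(va - pb))  # usually True: the category survives compression
```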

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers, running a simple neural network (software capable of imitating very basic human learning characteristics), to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.
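
To give a rough sense of what such a digital counterpart might look like, the sketch below (my own toy setup using scikit-learn and synthetic stand-in images, not the researchers’ actual experiment) trains a very small neural network to categorize inputs after they have been randomly projected down to a few numbers:

```python
# A toy stand-in for the "simple neural network" step: synthetic images from
# two categories are randomly projected to a few values, and a small network
# then learns the categories from those compressed inputs. Assumes scikit-learn.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(1)

# Synthetic "images": two categories, each a noisy copy of its prototype.
n_pixels, n_per_class = 10_000, 200
prototypes = rng.random((2, n_pixels))
X = np.vstack([p + 0.1 * rng.normal(size=(n_per_class, n_pixels)) for p in prototypes])
y = np.repeat([0, 1], n_per_class)

# Compress every image to a small random summary before any learning happens.
X_small = GaussianRandomProjection(n_components=15, random_state=0).fit_transform(X)

# A very small network learns the two categories from the compressed inputs.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_small, y)
print(net.score(X_small, y))  # typically close to 1.0 on this easy toy data
```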

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, the two groups performed with nearly equal results. Moreover, these results showed that which data “will be the hardest to learn over time” can be predicted.

In testing their theory, the team prepared 3 different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) They then drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds. Next, they were shown 16 of the random sketches. Dr. Vempala was “surprised by how close the performance was” between the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results suggest that it is a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augmented reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator, see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

Summary of the Media and Tech Preview 2016 Discussion Panel Held at Frankfurt Kurnit in NYC on December 2, 2015

"dtv svttest", Image by Karl Baron

“dtv svttest”, Image by Karl Baron

GPS everywhere notwithstanding, there are still maps on the walls in most buildings that have a red circle somewhere on them accompanied by the words “You are here”. This is to reassure and reorient visitors by giving them some navigational bearings. Thus you can locate where you are at the moment and then find your way forward.

I had the pleasure of attending an expert panel discussion last week, all of whose participants did an outstanding job of analogously mapping where media and technology are at the end of 2015 and where their trends are heading going into the New Year. It was entitled Digital Breakfast: Media and Tech Preview 2016 and was held at the law firm of Frankfurt Kurnit Klein & Selz in midtown Manhattan. It was organized and presented by Gotham Media, a New York based firm engaged in “Digital Strategy, Marketing and Events” as per their website.

This hour-and-a-half presentation was a top-flight and highly enlightening event from start to finish. My gratitude and admiration go out to everyone involved in making it happen. Bravo! to all of you.

The panelists’ enthusiasm and perspectives fully engaged and transported the entire audience. I believe that everyone there appreciated and learned much from all of them. The participants included:

The following is a summary based on my notes.

Part 1:  Assessments of Key Media Trends and Events in 2015

The event began on an unintentionally entertaining note when one of the speakers, Jesse Redniss, accidentally slipped out of his chair. Someone in the audience called out “Do you need a lawyer?”, and considering the location of the conference, the room erupted into laughter.¹

Once the ensuing hilarity subsided, Mr. Goldblatt began by asking the panel for their media highlights for 2015.

  • Ms. Bond said it was the rise of streaming TV, citing Netflix and Amazon, among other industry leaders. For her, this is a time of interesting competition as consumers have increasing control over what they view. She also believes that this is a “fascinating time” for projects and investments in this market sector. Nonetheless, she does not think that cable will disappear.
  • Mr. Kurnit said that Verizon’s purchase of AOL was one of the critical events of 2015, as Verizon “wants to be 360” and this type of move might portend the future of TV. The second key development was the emergence of self-driving cars, which he expects to see implemented within the next 5 to 15 years.
  • Mr. Redniss concurred on Verizon’s acquisition of AOL. He sees other activity such as the combination of Comcast and Universal as indicative of an ongoing “massive media play” versus Google and Facebook. He also mentioned the significance of Nielsen’s Total Audience Measure service.²
  • Mr. Sreenivasan stated that social media is challenging, as indicated by the recent appearance of “Facebook fatigue” affecting its massive user base. Nonetheless, he said “the empire strikes back”, as evidenced by their strong financial performance and the recent launch of Chan Zuckerberg LLC to eventually distribute the couple’s $45B fortune to charity. He also sees the current market looking “like 2006 again” insofar as podcasts, email and blogs make it easy to create and distribute content.

Part 2: Today’s Golden Age of TV

Mr. Goldblatt asked the panel for their POVs on what he termed the current “Golden Age of TV” because of the increasing diversity of new platforms, expanding number of content providers and the abundance of original programming. He started off by asking them for their market assessments.

  • Ms. Bond said that the definition of “television” is now “any video content on any screen”. As a ubiquitous example she cited content on mobile platforms. She also noted the proliferation of payment methods as driving this market.
  • Mr. Kurnit said that the industry would remain a bit of a “mess” for the next three or four years because of the tremendous volume of original programming, businesses that operate as content aggregators, and pricing differentials. Sometime thereafter, these markets will “rationalize”. Nonetheless, the quality of today’s content is “terrific”, pointing to examples such as the programs on AMC and HBO’s Game of Thrones. He also said that an “unbundled model” of content offerings would enable consumers to watch anywhere.
  • Mr. Redniss believes that “mobile transforms TV” insofar as smartphones have become the “new remote control”, providing both access to content and “discoverability” of new offerings. He predicted that content would become “monetized across all screens”.
  • Mr. Sreenivasan mentioned the growing popularity of binge-watching as being an important phenomenon. He believes that the “zeitgeist changes daily” and that other changes are being “led by the audience”.

The panel moved to group discussion mode concerning:

  • Consumer Content Options: Ms. Bond asked how the audience will pay for either bundled or unbundled programming options. She believes that having this choice will provide consumers with “more control and options”. Mr. Redniss then asked how many apps or services consumers will be willing to pay for. He predicted that “everyone will have their own channel”. Mr. Kurnit added that he thought there are currently too many options and that “skinny bundles” of programming will be aggregated. Mr. Sreenivasan pointed towards the “Amazon model”, where much content is now available but is also available elsewhere, and then to Netflix’s offering of 30 original shows. He also wanted to know “Who will watch all of this good TV?”
  • New Content Creation and Aggregation: Mr. Goldblatt asked the panelists whether a media company can be both a content aggregator and a content creator. Mr. Kurnit said yes, and Mr. Redniss immediately followed by citing the long-tail effect (statistical distributions in business analytics where there are higher numbers of data points away from the initial top or central parts of the distribution)³. Therefore, online content providers are not bound by the same rules as the TV networks. Still, he could foresee some of Amazon’s and Netflix’s original content ending up being broadcast on them. He also gave the example of Netflix’s House of Cards original programming as being indicative of the “changing market for more specific audiences”. Ultimately, he believes that meeting such audiences’ needs is part of “playing the long game” in this marketplace.
  • Binge-Watching: Mr. Kurnit followed up by predicting that binge-watching and the “binge-watching bucket” will go away. Mr. Redniss agreed with him and, moreover, talked about the “need for human interaction” to build up audiences. This now takes the form of “superfans” discussing each episode in online venues. For example, he pointed to the current massive marketing campaign built upon finding out the fate of Jon Snow on Game of Thrones.
  • Cord-Cutting: Mr. Sreenivasan believes that we will still have cable in the future. Ms. Bond said that service offerings like Apple TV will become more prevalent. Mr. Kurnit said he currently has 21 cable boxes. Mr. Redniss identified himself as more of a cord-shaver who, through the addition of Netflix and Hulu, has reduced his monthly cable bill.

Part 3: Virtual Reality (VR) and Augmented Reality (AR)

Moving on to two of the hottest media topics of the day, virtual reality and augmented reality, the panelists gave their views.

  • Mr. Sreenivasan expressed his optimism about the prospects of VR and AR, citing the pending market launches of the Oculus Rift headset and Facebook 360 immersive videos. The emergence of these technologies is creating a “new set of contexts”. He also spoke proudly of the Metropolitan Museum Media Lab using Oculus for an implementation called Diving Into Pollock (see the 10th project down on this page), which enables users to “walk into a Jackson Pollock painting”.
  • Mr. Kurnit raised the possibility of using Oculus to view Jurassic Park. In terms of movie production and immersion, he said “This changes everything”.
  • Mr. Redniss said that professional sports were a whole new growth area for VR and AR, where you will need “goggles, not a screen”. Mr. Kurnit followed up by mentioning a startup that is placing 33 cameras at Major League Baseball stadiums in order to provide 360-degree video coverage of games. (Although he did not mention the company by name, my own Googling indicates that he was probably referring to the “FreeD” system developed by Replay Technologies.)
  • Ms. Bond posed the question “What does this do for storytelling?”4

(See also these 12 Subway Fold posts for extensive coverage of VR and AR technologies and applications.)

Part 4: Ad-Blocking Software

Mr. Goldblatt next asked the panel for their thoughts about the impacts and economics of ad-blocking software.

  • Mr. Redniss said that ad-blocking apps will affect how advertisers get their online audience’s attention. He thinks a workable alternative is to use technology to “stitch their ads into content” more effectively.
  • Mr. Sreenivasan believes that “ads must get better” in order to engage their audience, rather than have viewers looking for means to avoid them. He noted another alternative used on the show Fargo, where network programming does not permit viewers to fast-forward through the ads.
  • Mr. Kurnit expects that ads will be blocked based on the popularity and extensibility of ad-blocking apps. Thus, he also believes that ads need to improve but he is not confident of the ad industry’s ability to do so. Furthermore, when advertisers are more highly motivated because of cost and audience size, they produce far more creative work for events like the NFL Super Bowl.

Someone from the audience asked the panel how ads will become integrated into VR and AR environments. Mr. Redniss said this will happen in cases where this technology can reproduce “real world experiences” for consumers. An example of this is the Cruise Ship Virtual Tours available on Carnival Cruise’s website.

(See also this August 13, 2015 Subway Fold post entitled New Report Finds Ad Blockers are Quickly Spreading and Costing $Billions in Lost Revenue.)

Part 5: Expectations for Media and Technology in 2016

  • Mr. Sreenivasan thinks that geolocation technology will continue to find new applications in “real-life experiences”. He gave as an example the use of web beacons by the Metropolitan Museum.
  • Ms. Bond foresees more “one-to-one” and “one-to-few” messaging capabilities, branded emojis, and a further examination of the “role of the marketer” in today’s media.
  • Mr. Kurnit believes that drones will continue their momentum into the mainstream. He sees the sky filling up with them as they are “productive tools” for a variety of commercial applications.
  • Mr. Redniss expressed another long-term prospect of “advertisers picking up broadband costs for consumers”. This might take the form of ads being streamed to smartphones during NFL games. In the shorter term, he can foresee Facebook becoming a significant simulcaster of professional sporting events.

 


1.  This immediately reminded me of a similar incident years ago when I was attending a presentation at the local bar association on the topic of litigating cases involving brain injuries. The first speaker was a neurologist who opened by telling the audience all about his brand new laptop and how it was the latest state-of-the-art model. Unfortunately, he could not get it to boot up no matter what he tried. Someone from the back of the audience then yelled out “Hey doc, it’s not brain surgery”. The place went into an uproar.

2.  See also these other four Subway Fold posts mentioning other services by Nielsen.

3.  For a fascinating and highly original book on this phenomenon, I very highly recommend reading The Long Tail: Why the Future of Business Is Selling Less of More (Hyperion, 2006), by Chris Anderson. It was also mentioned in the December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?.

4.  See also the November 4, 2014 Subway Fold post entitled Say, Did You Hear the Story About the Science and Benefits of Being an Effective Storyteller?

Latest Census on Virtual Senses: A Seminar on Augmented Reality in New York

"3D Augmented Reality Sculpture 3", Image by Travis Morgan

“3D Augmented Reality Sculpture 3”, Image by Travis Morgan

I stepped out of the 90-degree heat and through the front door of Adorama, a camera, electronics and computer store on West 18th Street in Manhattan, just before 6:00 pm on July 28, 2015. I quickly felt like I had entered another world¹ as I took my seat for a seminar entitled the Future of Augmented Reality Panel with Erek Tinker. It was the perfect venue, literally and figuratively, to, well, focus on this very exciting emerging technology. (Recent trends and developments in this field have been covered in these six Subway Fold posts.)

An expert panel of four speakers involved in developing augmented reality (AR) discussed the latest trends and demo-ed an array of way cool products and services that wowed the audience. The moderator, Erek Tinker (the Director of Business Development at The Spry Group, an NYC software development firm), did an outstanding job of presenting the speakers and keeping the audience involved with opportunities for their own insightful questions.

This just-over-the-horizon exploration and display of AR-enhanced experiences very effectively drew everyone into the current capabilities and future potential of this hybridization of the real and the virtual.

So, is AR really the next big thing or just another passing fad? All of the panel members made their captivating and compelling cases in this order:

  • Dusty Wright is a writer and musician who has recently joined FuelFX as the Director of Business Development in AR and VR. The company has recently worked on entertainment apps including, among others, their recent collaboration on a presentation of AR-enhanced images by the artist Ron English at the 2015 SXSW festival.²
  • Brian Hamilton, the head of Business Development for the eastern US for the company DAQRI, spoke about and presented a video on the company’s recently rolled out “Smart Helmet”. This is a hardhat with a clear visor that displays, using proprietary software and hardware, AR imagery and data to the worker wearing it. He described this as “industrial POV augmented reality” and believes that AR will be a part of the “next industrial revolution”, enabling workers to move through their work with the data they need.
  • Miguel Sanchez is the Founder and Creative Director of Mass Ideation, a digital creative agency working with AR, among its other strategic and design projects. He sees a bright future in the continuing commercialization and application of AR, but also believes that the public needs to be better educated on its nature and capabilities. He described a project for a restaurant chain that wanted to shorten the time their customers waited for food by providing AR-enabled games and videos. He thinks that, in the right environments, users will be able to hold up their smartphones to objects and soon see all manner of enhanced visual features onscreen.
  • Anthony Mattana is the founder of Hooke Audio, which has developed an app and wireless headphones for recording and playing “immersive 3D audio”. The technology is built upon the concept of binaural audio, which captures sound just as it is heard. He showed this video of a band’s live performance contrasting the smartphone’s standard recording capabilities with the company’s technology. The difference in sound quality and depth was quite dramatic. This video and five others appear on Hooke’s home page. He said their first products will be shipped within a few months.

Mr. Tinker then turned to all of the panelists for their perspectives on the following:

  • Adoption Forecasts: When shown a slide of the projected market growth of companies producing AR hardware, everyone concurred with the predicted 10-year upward trajectory. Mr. Sanchez expects the biggest breakthroughs for AR to be in gaming systems.
  • Apple’s Potential Involvement: Mr. Wright noted that Apple has just recently acquired an AR and computer vision company called Metaio. He thus expects that Apple may create a version of AR similar to their highly popular GarageBand music recording system. Mr. Sanchez added that he expects Apple to connect AR to their Siri and Maps technologies. He further suggested that AR developers should look for apps that solve problems and that, in the future, users may not even recognize AR technology in operation.
  • AR versus VR: Mr. Mattana said that he finds “AR to be more compelling than VR” and that it is better because you can understand it, educate users about it, and it is “tethered to powerful computing” systems. He thinks the main challenge for AR is to make it “socially acceptable”, noting the much-publicized perceived awkwardness of Google Glass.

Turning to the audience for Q&A, the following topics were addressed:

  • Privacy: How could workers’ privacy be balanced and protected when an AR system like the Smart Helmet can monitor a worker’s entire shift? Mr. Hamilton replied that he has spoken with union representatives about this. He sees this as a “solvable concern”. Furthermore, workplace privacy with respect to AR must include considerations of corporate policy, supporting data security, training and worker protection.
  • Advertising: All of the panel members agreed that AR content must somehow be monetized. (This topic was covered in detail in the May 25, 2015 Subway Fold post entitled Advertisers Looking for New Opportunities in Virtual and Augmented Spaces.)
  • Education Apps: Mr. Wright believes that AR will be “a great leveler” in education in many school settings and for students with a wide range of instructional requirements, including those with special needs. Further, he anticipates that this technology will be applied to gamify education. Mr. Mattana mentioned that blind people have shown great interest in binaural audio.
  • New Sources and Online Resources: The panelists recommended the following
  • Medical Application: Mr. Wright demo-ed, with the use of a tablet held up to a diagram, an application called “Sim Man 3D” created for Methodist Hospital in Houston. This displayed simulated anatomical functioning and sounds.
  • Neural Connections: Will AR one day directly interface with the human brain? While not likely anytime soon, the panel predicted possible integration with electroencephalograms (EEG) and neural interfaces within 10 years or so.
  • Media Integration: The panel speculated about how the media, particularly in news coverage, might add AR to virtually place readers more within the news being reported.

Throughout the seminar, all of the speakers emphasized that AR is still at its earliest stages and that many opportunities await in a diversity of marketplaces. Judging from their knowledge, enthusiasm, imaginations and commitments to this nascent technology, I left thinking they are quite likely to be right.

 


1.  Not so ironically, when someone from the audience was asking a question, he invoked an episode from the classic sci-fi TV series The Outer Limits. Does anyone remember the truly extraordinary episode entitled Demon with a Glass Hand?

2.  See the March 26, 2015 Subway Fold post entitled Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals for extensive coverage of VR at both of these festivals.