AR + #s = $$$: New Processes and Strategies for Extracting Actionable Business Data from Augmented Reality Systems

“Ars Electronica – Light Tank”, image by Uwe Rieger (DE/NZ), Yinan Liu (NZ) (Arcsec Lab) @ St. Mary’s Cathedral

Perhaps taking two disparate assertions, one tacit and one spoken, completely out of their original contexts and re-mixing and re-applying them to a different set of circumstances can be a helpful means to introduce an emerging and potentially prosperous new trend.

First, someone I know has a relatively smart and empathetic dog who will tilt his head from side to side if you ask him (the dog) something that sounds like a question. His owner claims that this is his dog’s way of non-verbally communicating – – to paraphrase (or parabark, maybe) – – something to the effect of “You know, you’re right. I never really thought of it that way”. Second, in an article in the January 4, 2019 edition of The New York Times entitled The Week in Tech: Amazon’s Burning Problems, by David Streitfeld, there is an amusing quote from a writer for WIRED named Craig Mod describing his 2018 Amazon Kindle Oasis as being “about as interactive as a potato”.

So, let’s take some literary license (and, of course, the dog’s license, too), and conflate these two communications in order to paws here to examine the burgeoning commercial a-peel of the rich business data now being generated by augmented reality (AR) systems.

To begin, let’s look no further than the 2019 Consumer Electronics Show (CES) held last month in Las Vegas. New offerings of AR products and services were all the rage among the many other cutting-edge technologies and products being displayed, demo-ed and discussed.¹ As demonstrably shown at this massive industry confab, these quickly evolving AR systems, which assemble and present a data-infused overlay upon a user’s real-world line of sight, are finding a compelling array of versatile applications in a widening spectrum of industries. Like everything else in today’s hyper-connected world, AR also generates waves of data that can be captured, analyzed and leveraged for the benefit and potential profit of many commercial enterprises.

A close and compelling examination of this phenomenon was recently posted in an article entitled Unlocking the Value of Augmented Reality Data, by Joe Biron and Jonathan Lang, on the MIT Sloan Management Review site on December 20, 2018. I highly recommend a click-through and full read if you have an opportunity. I will try to summarize and annotate this piece and, well, augment it with some of my own questions.

[The Subway Fold category of Virtual and Augmented Reality has been tracking a sampling of developments in this area in commerce, academia and the arts for the past several years.]

Image from Pixabay.com

Uncensored Sensors

Prior to the emergence of the Internet of Things (IoT), it was humans who mostly performed the functions of certain specialized sensors in tasks such as detecting environmental changes and then transmitting their findings. Now, as AR systems are increasingly deployed, people are being equipped with phones and headsets, among other devices, embedded with these sensing capabilities. This “provides uncharted opportunities for organizations” to make use of the resulting AR data-enabled analyses to increase their “operational effectiveness” and distinguish the offerings of their goods and services to the consumer public.

AR’s market in 2019 is analogous to where the IoT market was in 2010, garnering significant buzz and demonstrating “early value for new capabilities”. This technology’s capacity to “visualize, instruct, and interact” can become transformative in data usage and analytics. (See Why Every Organization Needs an Augmented Reality Strategy, by Michael E. Porter and James Heppelmann, Harvard Business Review, November – December 2017.)

To take advantage of AR, businesses should currently be concentrating on the following questions:

  • How can they best optimize and apply AR-generated data?
  • How can they create improved “products and processes” based upon AR users’ feedback?

Image from Pixabay.com

AR Systems Generate Expanding Spheres of User Data

Looking again to the past for guidance today, the introductions of the iPhone and Android phones in 2007 and 2008 were tech industry turning points that produced “significant data about how customers engaged with their brand”. This time period further provided engineers with a deeper understanding of user requirements. This, in turn, inverted the value proposition such that “applications could sense and measure” consumer experiences as they occurred.

Empowered with comparable “sensing capabilities emerging through the IoT”, manufacturers promptly added connectivity, giving rise to smart, connected products (SCPs). These new devices now comprise much of the IoT. The resulting massive data collection infrastructure and the corresponding data economy have been “disrupting technology laggards ever since”.

Moreover, using “AR-as-a-sensor” for gathering vast quantities of data holds significant potential advantages. Many AR-enabled devices are already embedded with sensing capabilities including “cameras, GPS, Bluetooth, infrared and accelerometers”. More organically, they also unleash human “creativity, intuition and experience” that cannot be otherwise replicated by the current states of hardware and software.²

What can humans with AR-based devices provide to enhance their experiences? New types of data and “behavioral insights” can be harvested from both SCPs and unconnected products. For example, in the case of an unconnected product, a user with a device equipped to operate as a form of AR-as-a-sensor could examine how the product is used and what the accompanying user preferences for it are. For an SCP, the AR-equipped user could examine how usage affects performance and whether the product is adaptable to that particular user’s purposes.

For additional critical context, it is indeed “human interaction” that provides insights into how SCPs and unconnected devices are realistically operating, performing and adapting.
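
To make the idea of “AR-as-a-sensor” a bit more concrete, here is a minimal sketch in Python of what a single AR interaction event and a first-pass usage summary might look like. To be clear, the event fields, names and helper function below are my own illustrative assumptions, not anything specified in the Biron and Lang article:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Iterable, Tuple

@dataclass
class ARInteractionEvent:
    """One hypothetical 'AR-as-a-sensor' observation of a product in use."""
    device_id: str
    product_id: str
    timestamp: datetime
    interaction: str                       # e.g., "gaze", "tap", "rotate"
    sensor_readings: Dict[str, float] = field(default_factory=dict)

def summarize_usage(events: Iterable[ARInteractionEvent]) -> Dict[Tuple[str, str], int]:
    """Count how often each (product, interaction) pair occurs."""
    counts: Dict[Tuple[str, str], int] = {}
    for e in events:
        key = (e.product_id, e.interaction)
        counts[key] = counts.get(key, 0) + 1
    return counts

events = [
    ARInteractionEvent("headset-1", "espresso-maker", datetime.now(), "gaze",
                       {"gaze_seconds": 4.2}),
    ARInteractionEvent("headset-1", "espresso-maker", datetime.now(), "tap"),
]
print(summarize_usage(events))
# {('espresso-maker', 'gaze'): 1, ('espresso-maker', 'tap'): 1}
```

Even a trivial tally like this hints at the kind of behavioral signal an unconnected product could begin to emit once a human with an AR device is looking at it.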

“Rainbow Drops”, Image by Mrs. eNil

Evaluating Potential Business Benefits from AR-Derived Data

This new quantum of AR information further creates a form of feedback loop whereby questions concerning a product’s usage and customization can be addressed. This customer data has become central to “business strategy in the new digital economy”.

In order to more comprehensively understand and apply these AR data resources, a pyramid-shaped model called “DIKW” can be helpful. Its elements include:

  • Data
  • Information
  • Knowledge
  • Wisdom

These are deployed in information management operations to process unrefined AR data into “value-rich knowledge and insights”. By then porting the resulting insights into engineering systems, businesses can enhance their “product portfolio, design and features” in previously unseen ways.
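
As a rough illustration of how the lower DIKW layers might be traversed in code, here is a toy Python sketch that rolls raw AR usage readings (data) up into per-feature totals (information) and then into a short list of design-relevant findings (knowledge); the wisdom layer, deciding what to do about those findings, is left to the humans. The field names and threshold are illustrative assumptions only:

```python
# A toy walk up the lower DIKW layers: raw AR readings (data) become
# per-feature totals (information) and then design findings (knowledge).
raw_data = [
    {"product": "bike", "feature": "gear_shift", "duration_sec": 42},
    {"product": "bike", "feature": "gear_shift", "duration_sec": 55},
    {"product": "bike", "feature": "suspension", "duration_sec": 3},
]

def to_information(data):
    """Data -> information: aggregate raw readings per product feature."""
    totals = {}
    for row in data:
        feature = row["feature"]
        totals[feature] = totals.get(feature, 0) + row["duration_sec"]
    return totals

def to_knowledge(information, threshold_sec=30):
    """Information -> knowledge: flag the features that dominate usage."""
    return [f for f, total in information.items() if total >= threshold_sec]

info = to_information(raw_data)
print(to_knowledge(info))  # ['gear_shift'] -> a candidate for design focus
```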

AR data troves can also be merged with IoT-generated data from SCPs to support added context and insights. For unconnected devices or digital-only offerings, humans using AR to interact with them can themselves become sensors similarly providing new perspectives on a product’s:

  • Service usage
  • Quality
  • Optimization of the “user experience and value”

“DSC_445”, Image by Frank Cundiff

Preliminary Use Cases

The following are emerging categories and early examples of how companies are capturing and leveraging AR-generated data:

  • Expert Knowledge Transfer: Honeywell is gathering data from experienced employees and then transferring their enhanced collective knowledge to new hires. The company has implemented this by “digitizing knowledge” about their products that is only made visible through experience. This enables them to better understand their products in entirely new ways. Further details of this initiative are presented on the firm’s website in a feature, photos and a video entitled How Augmented Reality is Revolutionizing Job Training.
  • Voice of the Product: Bicycle manufacturer Cannondale is now shipping their high-end products with an AR phone app to assist owners and bike shop mechanics with details and repairs. This is intended to add a new dimension to bike ownership by joining its physical and digital components. The company can also use this app to collect anonymized data to derive their products’ “voice”. This will consequently provide them with highly informative data on which “features and procedures” are being used the most by cyclists, which can then be analyzed to improve their biking experiences. For additional information about their products and the accompanying AR app, see Cannondale Habit Ready to Shred with All-New Proportional Response Design, posted on Bikerumor.com on October 9, 2018. There is also a brief preview of the app on YouTube.
  • Personalized Services: AR is being promoted as “transformative” to online and offline commerce since it enables potential buyers to virtually try something out before they buy it. For instance, Amazon’s new Echo Look permits customers to do this with clothing purchases. (See Amazon’s Echo Look Fashion Camera is Now Available to Everyone in the US, by Chris Welch, posted on TheVerge.com on June 6, 2018.) The company also patented something called “Magic Mirror” in January 2018. When combined with Echo Look, this will point the way towards the next evolution of the functionality of the clothing store dressing room. (See Amazon’s Blended-Reality Mirror Shows You Wearing Virtual Clothes in Virtual Locales, by Alan Boyle, posted on GeekWire.com on January 2, 2018.) The data collected by Echo Look is “being analyzed to create user preference profiles” and, in turn, suggest purchases based upon them. It is reasonably conceivable that combining these two technologies to supplement such personalized clothing recommendations will produce additional AR-based data, elevating “personalized services and experiences” to a heretofore unattained level.³
  • Quality Control: For quite a while, DHL has been a corporate leader in integrating AR technology into its workers’ daily operations. In one instance, the company is using computer vision to perform bar code scanning. They are further using this system to gather and analyze quality assurance data. This enables them to assess how workers’ behavior “may affect order quality and process efficiency”. (See the in-depth report on the company’s website entitled Augmented Reality in Logistics, by Holger Glockner, Kai Jannek, Johannes Mahn and Björn Theis, posted in 2014.)

Image from Pixabay.com

Integrating Strategic Applications of AR-Derived Data

AR-derived data clearly confers a range of meaningful impacts upon business strategies. Besides the four positive examples above, other companies are likewise running comparable projects. However, some of these projects will likely remain constrained from wider exposure because of “technological or organizational” impediments.

With the emergence of AR-generated data resources, those firms that meaningfully integrate them with other established business data systems, such as customer relationship management (CRM) and “digital engagement”, will yield tangible new insights and commercial opportunities. Thus, in order to fully leverage these potential new possibilities, nimble business strategists should establish dedicated multi-departmental teams to pursue these future benefits.

My Questions

  • Because the datastreams from AR are visually based, could this be yet another fertile area to apply machine learning and other aspects of artificial intelligence?
  • What other existing data collection and analysis fields might also potentially benefit from the addition of AR-derived data streams? What about data-driven professional and amateur sports, certain specialties of medical practice such as surgery and radiology, and governmental agencies such as those responsible for the environment and real estate usage?
  • What entrepreneurial opportunities might exist for creating new AR analytical tools, platforms and hardware, as well as integration services with other streams of data to produce original new products and services?
  • What completely new types of career opportunities and job descriptions might be generated by the growth of the AR-as-a-sensor sector of the economy? Should universities consider adding AR data analytics to their curriculum?
  • What data privacy and security issues may emerge here and how might they be different from existing concerns and regulations? How would AR-generated data be treated under the GDPR? Should people be informed in advance, and their consent sought, when AR data is being gathered about them? If so, how?
  • How might AR-generated data affect any or all of the arts and other forms of creative expression?
  • Might some new technical terms of ARt be needed such as “ARformation”, “sensAR” and “stARtegic”?

 


1.  Much of the news and tech media provided extensive coverage of this event. Choosing just one report among many, the January 10, 2019 edition of The New York Times published an engaging roundup and analysis, with photos, of all of the news and announcements entitled CES 2019: It’s the Year of Virtual Assistants and 5G, by Brian X. Chen.

2.   For an alternative perspective on this question see the November 20, 2018 Subway Fold post entitled The Music of the Algorithms: Tune-ing Up Creativity with Artificial Intelligence.

3.  During Super Bowl 53, played (or, more accurately, snoozed through) on February 3, 2019, there was an ad for a new product called The Mirror. This is a networked full-size wall mirror where users can do their daily workouts directly in front of it and receive real-time feedback, performance readings, and communications with other users. From this ad and the company’s website, this device appears to be operating upon a concept similar to Amazon’s, whereby users receive individualized and immediate feedback.

I Can See for Miles: Using Augmented Reality to Analyze Business Data Sets

Image from Pixabay

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, it still might – – on a bit of a stretch – – find a fitting new context in describing one of the latest dazzling new technologies through the opening stanza’s declaration that “there’s magic in my eye”. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article by an augmented reality (AR) designer named Benjamin Resnick about his team’s work at IBM on a project called Immersive Insights, entitled Visualizing High Dimensional Data In Augmented Reality, posted on July 3, 2017 on Medium.com. (Also embedded is a very cool video of a demo of this system.) They are applying AR’s rapidly advancing technology¹ to display, interpret and leverage insights gained from business data. I highly recommend reading this in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system in several years, users will start their workday by donning their AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays one of their business’s big data sets². A user will be able to “reach out and select that data” which, in turn, will generate additional details on a nearby monitor. Thus, the user can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short-term, the team is aiming Immersive Insights towards data scientists who are facile coders, enabling them to use AR’s capabilities to visualize time series, geographical and networked data. For their long-term goals, they are planning to expand the range of Immersive Insight’s applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Every consumer represents a data point that can be expressed as a “list of purchased products” from among 50,000 possible items.

How can this sizable pool of data be better understood and the deeper relationships within it be extracted and applied? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections among the information’s attributes. However, for those sets with many attributes, this methodology does not scale well: a set of d attributes requires d(d-1)/2 pairwise plots, a number that quickly becomes unmanageable.

Consequently, Resnick’s team has been using their own new approach to:

  • Lower complex data to just three dimensions in order to sum up key relationships
  • Visualize the data by applying their Immersive Insights application, and
  • Iteratively “label and color-code the data” in conjunction with an “evolving understanding” of its inner workings

Their results have enabled them to “validate hypotheses more quickly” and to establish a sense of the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.
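
Since the article describes reducing high-dimensional purchase data to three dimensions, here is a brief hedged sketch of how that step might look in Python with scikit-learn. Resnick’s team does not specify its exact algorithm in the summary above, so the use of truncated SVD (a PCA-like method that handles sparse matrices) and the matrix sizes are my assumptions:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# Stand-in for an Instacart-style matrix: one row per customer, one
# column per possible product (~50,000 in the real data set), with
# mostly-empty rows since each customer buys only a few items.
purchases = sparse_random(1000, 50_000, density=0.001, format="csr",
                          random_state=0)

svd = TruncatedSVD(n_components=3, random_state=0)
coords_3d = svd.fit_transform(purchases)   # shape: (1000, 3)

# These three coordinates per customer are the kind of points an AR
# application such as Immersive Insights could render in space.
print(coords_3d.shape, svd.explained_variance_ratio_)
```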

The types of data sets being used here are likewise deployed in training machine learning systems³. As a result, the potential exists for these three technologies to become complementary and mutually supportive in identifying and understanding relationships within the data, as well as deriving any “black box predictive models”⁴.

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided on the creation of the team’s demo in the video (linked above), and turning next to the results of the visualizations, their findings included:

  • A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” in the company’s “marketing, promotion and recommendation strategies”.
  • Among all food categories, produce was clearly the leader. Nearly all customers buy it.
  • When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance in the system’s three main components (described above).

Resnick concludes that the three cornerstone technologies of Immersive Insights – – big data, augmented reality and machine learning – – are individually and in complementary combinations “disruptive” and, as such, will affect the “future of business and society”.

Questions

  • Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?
  • Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?
  • Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?
  • Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

 


1.  See the Subway Fold category Virtual and Augmented Reality for other posts on emerging AR and VR applications.

2.  See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3.  See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4.  For a highly informative and insightful examination of this phenomenon where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI, by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

Hacking Matter Really Matters: A New Programmable Material Has Been Developed

Image from Pixabay

The sales receipt from The Strand Bookstore in New York is dated April 5, 2003. It remains tucked into one of the most brain-bendingly different books I have ever bought and read called Hacking Matter: Levitating Chairs, Quantum Mirages, and the Infinite Weirdness of Programmable Atoms (Basic Books, 2003), by Wil McCarthy. It was a fascinating deep dive into what was then the nascent nanotechnology research on creating a form of “programmable atoms” called quantum dots. This technology has since found applications in the production of semiconductors.

Fast forward thirteen years to a recent article entitled Exoskin: A Programmable Hybrid Shape-Changing Material, by Evan Ackerman, posted on IEEE Spectrum on June 3, 2016. This is about an all-new and entirely different development, quite separate from quantum dots, but nonetheless a current variation on the concept that matter can be programmed for new applications. While we always think of programming as involving systems and software, this new story takes and literally stretches this long-established process into some entirely new directions.

I highly recommend reading this most interesting report in its entirety and viewing the two short video demos embedded within it. I will summarize and annotate it, and then pose several questions of my own on this, well, matter. I also think it fits in well with these 10 Subway Fold posts on other recent developments in material science including, among others, such way cool stuff as Q-Carbon, self-healing concrete and metamaterials.

Matter of Fact

The science of programmable matter is still in its formative stages. The Tangible Media Group at the MIT Media Lab is currently working on this challenge as one of its scores of imaginative projects. A student pursuing his Master’s Degree in this group is Basheer Tome. Among his current research projects, he is working on a type of programmable material he calls “Exoskin”, which he describes as “membrane-backed rigid material”. It is composed of “tessellated triangles of firm silicone mounted on top of a stack of flexible silicone bladders”. By inflating these bladders in specific ways, Exoskin can change its shape in reaction to the user’s touch. This activity can, in turn, be used to relay information and “change functionality”.

Although this might sound a bit abstract, the two accompanying videos make the Exoskin’s operations quite clear. For example, it can be applied to a steering wheel which, through “tactile feedback”, can inform the driver about direction-finding using GPS navigation and other relevant driving data. This is intended to reduce driver distraction and “simplify previously complex multitasking” behind the wheel.
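
To show the general idea in code form, here is a purely speculative Python sketch of how navigation cues might be mapped to inflation levels across a ring of bladder groups on such a steering wheel. None of this reflects Tome’s actual implementation; the group count, cue names and inflation levels are all invented for illustration:

```python
from typing import List

# An imagined ring of eight bladder groups around the steering wheel rim.
BLADDER_GROUPS = 8

def inflation_pattern(cue: str) -> List[float]:
    """Map a navigation cue to an inflation level (0.0-1.0) per group."""
    pattern = [0.0] * BLADDER_GROUPS
    if cue == "turn_left":
        pattern[0] = pattern[1] = 1.0    # raise the left side of the rim
    elif cue == "turn_right":
        pattern[-2] = pattern[-1] = 1.0  # raise the right side
    elif cue == "slow_down":
        pattern = [0.5] * BLADDER_GROUPS  # a gentle, uniform pulse
    return pattern

print(inflation_pattern("turn_left"))
# [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```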

By its very nature, the Exoskin makes use, in part, of haptics (using touch as a form of interface). One of the advantages of this approach is that it enables “fast reflexive motor responses to stimuli”. Moreover, the Exoskin’s inputs “are both highly tactily perceptible and visually interpretable”.

Fabrication Issues

A gap still exists between the current prototype and a commercially viable product in the future in terms of the user’s degree of “granular control” over the Exoskin. The number of “bladders” underneath the rigid top materials will play a key role in this. Under existing fabrication methods, multiple bladders in certain configurations are “not practical” at this time.

However, this restriction might be changing. Soon it may be possible to produce bladders for each “individual Exoskin element” rather than a single bladder for all of them. (Again, the videos present this.) This would involve a system of “reversible electrolysis” that alternately separates water into hydrogen and oxygen and then recombines them into water. Other options to solve this fabrication issue are also under consideration.

Mr. Tome hopes this line of research will disrupt the distinction between what is “rigid and soft” as well as “animate and inanimate”, inspiring Human-Computer Interaction researchers at MIT to create “more interfaces using physical materials”.

My Questions

  • In what other fields might this technology find viable applications? What about medicine, architecture, education and online gaming just to begin?
  • Might Exoskin present new opportunities to enhance users’ experience with current and future releases of virtual reality and augmented reality systems? (These 15 Subway Fold posts cover a sampling of trends and developments in VR and AR.)
  • How might such an Exoskin-embedded steering wheel possibly improve drivers’ and riders’ experiences with Uber and other ride-sharing services?
  • What entrepreneurial opportunities in design, engineering, programming and manufacturing might present themselves if Exoskin becomes commercialized?

Digital Smarts Everywhere: The Emergence of Ambient Intelligence

Image from Pixabay

The Troggs were a legendary rock and roll band who were part of the British Invasion in the late 1960’s. They have always been best known for their iconic rocker Wild Thing. This was also the only Top 10 hit that ever had an ocarina solo. How cool is that! The band went on to have two other major hits, With a Girl Like You and Love is All Around.¹

The third of the band’s classic singles can be stretched a bit into a helpful metaphor for an emerging form of pervasive “all around”-edness, this time in a more technological context. Upon reading a fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, on May 7, 2016, you will find a compelling (but not too rocking) analysis of how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day.

All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter – – and soon thereafter still smarter – – technologies is comparably affecting people’s lives at many different levels.² 

We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.

I will summarize and annotate it, add some additional context, and then pose some of my own Troggs-inspired questions.

Internet of Experiences

Digital this, that and everything is everywhere in today’s world. There is a surging confluence of connected personal and business devices, the Internet, and the Internet of Things (IoT)³. Woven closely together on a global scale, we have essentially built “a digital intelligence network that transcends all that has gone before”. In some cases, this quantum of advanced technologies gains the “ability to sense, predict and respond to our needs”, and is becoming part of everyone’s “natural behaviors”.

A fourth industrial revolution might even manifest itself in the form of machine intelligence whereby we will interact with the “always-on, interconnected world of things”. As a result, the Internet may become characterized more by experiences where users will converse with ambient intelligent systems everywhere. The supporting planks of this new paradigm include big data and analytics⁴ and pervasive telecommunications networks⁵.

A prediction of what more fully realized ambient intelligence might look like, using travel as an example, appeared in an article entitled Gearing Up for Ambient Intelligence, by Lisa Morgan, on InformationWeek.com on March 14, 2016. Upon leaving his or her plane, the traveler will receive a welcoming message and a request to proceed to the curb to retrieve their luggage. Upon reaching curbside, a self-driving car⁶ will be waiting with information about the hotel booked for the stay.

Listening

Another article about ambient intelligence entitled Towards a World of Ambient Computing, by Simon Bisson, posted on ZDNet.com on February 14, 2014, is briefly quoted for the line “We will talk, and the world will answer”, to illustrate the point that current technology will be morphing into something in the future that would be nearly unrecognizable today. Grossman’s article proceeds to survey a series of commercial technologies recently brought to market as components of a fuller ambient intelligence that will “understand what we are asking” and provide responsive information.

Starting with Amazon’s Echo, this new device can, among other things:

  • Answer certain types of questions
  • Track shopping lists
  • Place orders on Amazon.com
  • Schedule a ride with Uber
  • Operate a thermostat
  • Provide transit schedules
  • Commence short workouts
  • Review recipes
  • Perform math
  • Request a plumber
  • Provide medical advice

Will it be long before we begin to see similar smart devices everywhere in homes and businesses?

Kevin Kelly, the founding Executive Editor of WIRED and a renowned futurist⁷, believes that in the near future, digital intelligence will become available in the form of a utility⁸ and, as he puts it, “IQ as a service”. This is already being done by Google, Amazon, IBM and Microsoft, who are providing open access to sections of their AI coding⁹. He believes that success for the next round of startups will go to those who enhance and transform something already in existence with the addition of AI. The best example of this is once again self-driving cars.

As well, in a chapter on Ambient Computing from a report by Deloitte UK entitled Tech Trends 2015, it was noted that some companies were engineering ambient intelligence into their products as a means to remain competitive.

Recommending

A great deal of AI is founded upon the collection of big data from online searching, the use of apps and the IoT. This universe of information supports neural networks that learn from repeated behaviors, including people’s responses and interests. In turn, it provides a basis for “deep learning-derived personalized information and services” that can derive “increasingly educated guesses with any given content”.

An alternative perspective, that “AI is simply the outsourcing of cognition by machines”, has been expressed by Jason Silva, a technologist, philosopher and video blogger on Shots of Awe. He believes that this process involves the “most powerful force in the universe”, that is, intelligence. Nonetheless, he sees this as an evolutionary process which should not be feared. (See also the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Bots are another contemporary manifestation of ambient intelligence. These are a form of software agent, driven by algorithms, that can independently perform a range of sophisticated tasks. Grossman’s article cites two examples of these.

Speaking

Optimally, bots should also be able to listen and “speak” back in return much like a 2-way phone conversation. This would also add much-needed context, more natural interactions and “help to refine understanding” to these human/machine exchanges. Such conversations would “become an intelligent and ambient part” of daily life.

An example of this development path is evident in Google Now. This service combines voice search with predictive analytics to present users with information prior to searching. It is an attempt to create an “omniscient assistant” that can reply to any request for information “including those you haven’t thought of yet”.
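
As a toy illustration of presenting “information prior to searching”, here is a hedged Python sketch that derives proactive notification cards from context an assistant might already hold. The rules, field names and outputs are invented for illustration; Google Now’s actual models are far more sophisticated and are not public:

```python
from typing import Dict, List

def predictive_cards(context: Dict) -> List[str]:
    """Derive proactive 'cards' from already-known user context."""
    cards = []
    meeting_min = context.get("next_meeting_min")
    if meeting_min is not None and meeting_min < 60:
        cards.append(f"Leave soon: meeting in {meeting_min} minutes")
    if context.get("location") == "home" and 7 <= context.get("hour", -1) < 10:
        cards.append("Commute: 34 minutes via subway")  # invented estimate
    return cards

print(predictive_cards({"hour": 8, "location": "home", "next_meeting_min": 45}))
# ['Leave soon: meeting in 45 minutes', 'Commute: 34 minutes via subway']
```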

Recently, the company created a Bluetooth-enabled prototype of a lapel pin based on this technology that operates just by tapping it, much like the communicators on Star Trek. (For more details, see Google Made a Secret Prototype That Works Like the Star Trek Communicator, by Victor Luckerson, on Time.com, posted on November 22, 2015.)

The configurations and specs of the AI-powered devices supporting such pervasive and ambient intelligence, be they lapel pins, some form of augmented reality¹⁰ headsets or something else altogether, are not exactly clear yet. Their development and introduction will take time but remain inevitable.

Will ambient intelligence make our lives any better? It remains to be seen, but it is probably a viable means to handle some of our more ordinary daily tasks. It will likely “fade into the fabric of daily life” and be readily accessible everywhere.

Quite possibly then, the world will truly become a better place to live upon the arrival of ambient intelligence-enabled ocarina solos.

My Questions

  • Does the emergence of ambient intelligence, in fact, signal the arrival of a genuine fourth industrial revolution or is this all just a semantic tool to characterize a broader spectrum of smarter technologies?
  • How might this trend affect overall employment in terms of increasing or decreasing jobs on an industry by industry basis and/or the entire workforce? (See also this June 4, 2015 Subway Fold post entitled How Robots and Computer Algorithms Are Challenging Jobs and the Economy.)
  • How might this trend also affect non-commercial spheres such as public interest causes and political movements?
  • As ambient intelligence insinuates itself deeper into our online worlds, will this become a principal driver of new entrepreneurial opportunities for startups? Will ambient intelligence itself provide new tools for startups to launch and thrive?

 


1.   Thanks to Little Steven (@StevieVanZandt) for keeping the band’s music in occasional rotation on The Underground Garage (#UndergroundGarage). Also, for an appreciation of this radio show see this August 14, 2014 Subway Fold post entitled The Spirit of Rock and Roll Lives on Little Steven’s Underground Garage.

2.  For a remarkably comprehensive report on the pervasiveness of this phenomenon, see the Pew Research Center report entitled U.S. Smartphone Use in 2015, by Aaron Smith, posted on April 1, 2015.

3.  These 10 Subway Fold posts touch upon the IoT.

4.  The Subway Fold category Big Data and Analytics contains 50 posts covering this topic in whole or in part.

5.  The Subway Fold category Telecommunications contains 12 posts covering this topic in whole or in part.

6.  These 5 Subway Fold posts contain references to self-driving cars.

7.   Mr. Kelly is also the author of a forthcoming book entitled The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, to be published on June 7, 2016 by Viking.

8.  This September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice, in part summarized an article by Steven Levy in the September 2014 issue of WIRED entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask. This covered a startup called Viv Labs whose objective was to transform AI into a form of utility. Fast forward to the Disrupt NY 2016 conference going on in New York last week. On May 9, 2016, the founder of Viv, Dag Kittlaus, gave his presentation about the Viv platform. This was reported in an article posted on TechCrunch.com entitled Siri-creator Shows Off First Public Demo of Viv, ‘the Intelligent Interface for Everything’, by Romain Dillet, on May 9, 2016. The video of this 28-minute presentation is embedded in this story.

9.  For the full details on this story see a recent article entitled The Race Is On to Control Artificial Intelligence, and Tech’s Future by John Markoff and Steve Lohr, published in the March 25, 2016 edition of The New York Times.

10.  These 10 Subway Fold posts cover some recent trends and developments in augmented reality.

LinkNYC Rollout Brings Speedy Free WiFi and New Opportunities for Marketers to New York

Link.NYC WiFi Kiosk 5, Image by Alan Rothman

Back in the halcyon days of yore before the advent of smartphones and WiFi, there were payphones and phone booths all over the streets in New York. Most have disappeared, but a few scattered survivors have still managed to hang on. An article entitled And Then There Were Four: Phone Booths Saved on Upper West Side Sidewalks, by Corey Kilgannon, posted on NYTimes.com on February 10, 2016, recounts the stories of some of the last lonely public phones.

Taking their place comes a highly innovative new program called LinkNYC (also @LinkNYC and #LinkNYC). This initiative has just begun to roll out across all five boroughs with a network of what will become thousands of WiFi kiosks providing free and way fast web access and phone calling, plus a host of other online NYC support services. The kiosks occupy the same physical spaces as the previous payphones.

The first batch of them has started to appear along Third Avenue in Manhattan. I took the photos accompanying this post of one kiosk at the corner of 14th Street and Third Avenue. While standing there, I was able to connect to the web on my phone and try out some of the LinkNYC functions. My reaction: This is very cool beans!

LinkNYC also presents some potentially great new opportunities for marketers. The launch of the program and the companies getting into it on the ground floor were covered in a terrific new article on AdWeek.com on February 15, 2016 entitled What It Means for Consumers and Brands That New York Is Becoming a ‘Smart City’, by Janet Stilson. I recommend reading it in its entirety. I will summarize and annotate it to add some additional context, and pose some of my own ad-free questions.

LinkNYC Set to Proliferate Across NYC

Link.NYC WiFi Kiosk 2, Image by Alan Rothman

When completed, LinkNYC will give New York a highly advanced mobile network spanning the entire city. Moreover, it will help to transform it into a very well-wired “smart city“.¹ That is, an urban area comprehensively collecting, analyzing and optimizing vast quantities of data generated by a wide array of sensors and other technologies. It is a network and a host of network effects where a city learns about itself and leverages this knowledge for multiple benefits for its citizenry.²

Beyond mobile devices and advertising, smart cities can potentially facilitate many other services. The consulting firm Frost & Sullivan predicts that there will be 26 smart cities across the globe by 2020. Currently, everyone is looking to NYC to see how the implementation of LinkNYC works out.

According to Mike Gamaroff, the head of innovation in the New York office of Kinetic Active, a global media and marketing firm, LinkNYC is primarily a “utility” for New Yorkers as well as “an advertising network”. Its throughput rates are at gigabit speeds, thereby making it the fastest web access available when compared to large commercial ISPs’ average rates of merely 20 to 30 megabits – – roughly 30 to 50 times slower.

Nick Cardillicchio, a strategic account manager at Civiq Smartscapes, the designer and manufacturer of the LinkNYC kiosks, said that LinkNYC is the only place where consumers can access the Net at such speeds. For the AdWeek.com article, he took the writer, Janet Stilson, on a tour of the kiosks, including the one at Third Avenue and 14th Street, where one of the first ones is in place. (Coincidentally, this is the same kiosk I photographed for this post.)

There are a total of 16 kiosks currently operational for the initial testing. The WiFi web access is accessible within 150 feet of each kiosk and can range up to 400 feet. Perhaps those New Yorkers actually living within this range will soon no longer need their commercial ISPs.

Link.NYC WiFi Kiosk 4, Image by Alan Rothman

The initial advertisers appearing in rotation on the large digital screen include Poland Spring (see the photo at the right), MillerCoors, Pager and Citibank. Eventually “smaller tablet screens” will be added to enable users to make free domestic voice or video calls. As well, they will present maps, local activities and emergency information in and about NYC. Users will also be able to charge up their mobile devices.

However, it is still too soon to assess and quantify the actual impact on such providers. According to David Krupp, CEO, North America, for Kinetic, neither Poland Spring nor MillerCoors has yet produced an adequate amount of data to analyze their respective LinkNYC ad campaigns. (Kinetic is involved in supporting their marketing activities.)

Commercializing the Kiosks

The organization managing LinkNYC, the CityBridge consortium (consisting of Qualcomm, Intersection, and Civiq Smartscapes), is not yet indicating when the new network will progress into a more “commercial stage”. However, once the network is fully implemented within the next few years, the number of kiosks might end up being somewhere between 7,500 and 10,000. That would make it the largest such network in the world.

CityBridge is also in charge of all the network’s advertising sales. These revenues will be split with the city. Under the 12-year contract now in place, this arrangement is predicted to produce $500M for NYC, with positive cash flow anticipated within 5 years. Brad Gleeson, the chief commercial officer at Civiq, said this project depends upon the degree to which LinkNYC is “embraced by Madison Avenue” and the time needed for the network to reach “critical mass”.

Because of the breadth and complexity of this project, achieving this inflection point will be quite challenging according to David Etherington, the chief strategy officer at Intersection. He expressed his firm’s “dreams and aspirations” for LinkNYC, including providing advertisers with “greater strategic and creative flexibility”, offering such capabilities as:

  • Dayparting – dividing a day’s advertising into several segments dependent on a range of factors about the intended audience, and
  • Hypertargeting – delivering advertising to very highly defined segments of an audience (both are sketched in the code below)
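
For readers who want a feel for what these two capabilities mean in practice, here is a minimal hedged sketch in Python of a kiosk ad rotation that picks a creative by daypart and audience segment. The segment names, dayparts and creative IDs are invented for illustration and are not drawn from the actual LinkNYC system:

```python
from datetime import datetime

# Dayparting: carve the day into named segments (hours are assumptions).
DAYPARTS = [(6, 10, "morning"), (10, 16, "midday"),
            (16, 20, "evening"), (20, 24, "night"), (0, 6, "night")]

# Hypertargeting: pair each daypart with a narrowly defined segment.
SCHEDULE = {
    ("morning", "commuters"): "coffee_brand_spot",
    ("evening", "commuters"): "dinner_delivery_spot",
    ("midday", "tourists"):   "museum_pass_spot",
}

def pick_creative(now: datetime, segment: str, default: str = "house_ad") -> str:
    """Choose an ad creative for the current daypart and audience segment."""
    hour = now.hour
    daypart = next(name for lo, hi, name in DAYPARTS if lo <= hour < hi)
    return SCHEDULE.get((daypart, segment), default)

print(pick_creative(datetime(2016, 2, 15, 8, 30), "commuters"))
# -> 'coffee_brand_spot'
```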

Barry Frey, the president and CEO of the Digital Place-based Advertising Association, was also along for the tour of the new kiosks on Third Avenue. He was “impressed” by the capability it will offer advertisers to “co-locate their signs and fund services to the public”, such as free WiFi and long-distance calling.

As to the brand marketers:

  • MillerCoors is using information from Shazam at each kiosk location for the company’s “Sounds of the Street” ad campaign, which presents “lists of the most-Shazammed tunes in the area”. (For more about Shazam, see the December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?)
  • Poland Spring is now running a 5-week campaign featuring a digital ad (as seen in the third photo above). It relies upon “the brand’s popularity in New York”.

Capturing and Interpreting the Network’s Data

Link.NYC WiFi Kiosk 1, Image by Alan Rothman

Thus far, LinkNYC has been “a little vague” about its methods for capturing the network’s data, but has said that it will maintain the privacy of all consumers’ information. One source has indicated that LinkNYC will collect, among other points, “age, gender and behavioral data”. As well, the kiosks can track mobile devices within their variable 150-to-400-foot WiFi radius to ascertain the length of time a user stops by. Third-party data is also being added to “round out the information”.³
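
Here is one hedged guess, in Python, at how dwell time near a kiosk might be estimated from periodic WiFi sightings of an anonymized device identifier. The session-gap threshold and data layout are my assumptions; LinkNYC has not published its methodology:

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

SESSION_GAP_SEC = 120  # assumption: sightings >2 minutes apart start a new visit

def dwell_times(sightings: Iterable[Tuple[str, int]]) -> Dict[str, List[int]]:
    """sightings: (device_hash, unix_timestamp) pairs, assumed sorted by time."""
    last_seen: Dict[str, int] = {}
    start: Dict[str, int] = {}
    visits: Dict[str, List[int]] = defaultdict(list)
    for device, ts in sightings:
        if device in last_seen and ts - last_seen[device] > SESSION_GAP_SEC:
            # The previous visit ended; record its duration, start a new one.
            visits[device].append(last_seen[device] - start[device])
            start[device] = ts
        start.setdefault(device, ts)
        last_seen[device] = ts
    for device, ts in last_seen.items():  # close out any still-open visits
        visits[device].append(ts - start[device])
    return visits

print(dict(dwell_times([("abc", 0), ("abc", 60), ("abc", 90)])))
# {'abc': [90]} -> one visit lasting 90 seconds
```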

Some industry experts’ expectations of the value and applications of this data include:

  • Helma Larkin, the CEO of Posterscope, a New York-based firm specializing in “out-of-home communications (OOH)“, believes that LinkNYC is an entirely “new out-of-home medium”. This is because the data it will generate “will enhance the media itself”. The LinkNYC initiative presents an opportunity to build this network “from the ground up”. It will also create an opportunity to develop data about its own audience.
  • David Krupp of Kinetic thinks that the data generated will be quite meaningful insofar as it produces a “more hypertargeted connection to consumers”.

Other US and International Smart City Initiatives

Currently in the US, there is nothing else yet approaching the scale of LinkNYC. Nonetheless, Kansas City is now developing a “smaller advertiser-supported network of kiosks” with wireless support from Sprint. Other cities are also working on smart city projects. Civiq is now in discussions with about 20 of them.

Internationally, Rio de Janeiro is working on a smart city program in conjunction with the 2016 Olympics. This project is being supported by Renato Lucio de Castro, a consultant on smart city projects. (Here is a brief video of him describing this undertaking.)

A key challenge facing all smart city projects is finding officials in local governments who likewise have the enthusiasm for efforts like LinkNYC. Michael Lake, the CEO of Leading Cities, a firm that helps cities with smart city projects, believes that programs such as LinkNYC will “continue to catch on” because of the additional security benefits they provide and the revenues they can generate.

My Questions

  • Should domestic and international smart cities cooperate to share their resources, know-how and experience for each other’s mutual benefit? Might this in some small way help to promote urban growth and development on a more cooperative global scale?
  • Should LinkNYC also consider offering civic support services such as voter registration or transportation scheduling apps as well as charitable functions where pedestrians can donate to local causes?
  • Should LinkNYC add some augmented reality capabilities to enhance the data capabilities and displays of the kiosks? (See these 10 Subway Fold posts covering a range of news and trends on this technology.)

February 19, 2017 Update:  For the latest status report on LinkNYC nearly a year after this post was first uploaded, please see After Controversy, LinkNYC Finds Its Niche, by Gerald Schifman, on CrainsNewYork.com, dated February 15, 2017.


1.   While Googling “smart cities” might nearly cause the Earth to shift off its axis with its resulting 70 million hits, I suggest reading a very informative and timely feature from the December 11, 2015 edition of The Wall Street Journal entitled As World Crowds In, Cities Become Digital Laboratories, by Robert Lee Hotz.

2.   Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia (W. W. Norton & Company, 2013), by Anthony M. Townsend, is a deep and wide book-length exploration of how big data and analytics are being deployed in large urban areas by local governments and independent citizens. I very highly recommend reading this fascinating exploration of the nearly limitless possibilities for smart cities.

3.   See, for example, How Publishers Utilize Big Data for Audience Segmentation, by Arvid Tchivzhel, posted on Datasciencecentral.com on November 17, 2015.


These items just in from the Pop Culture Department: It would seem nearly impossible to film an entire movie thriller about a series of events centered around a public phone, but a movie called – – not so surprisingly – – Phone Booth managed to do this quite effectively in 2002. It starred Colin Farrell, Kiefer Sutherland and Forest Whitaker. Imho, it is still worth seeing.

Furthermore, speaking of Kiefer Sutherland, Fox announced on January 15, 2016 that it will be making 24: Legacy, a complete reboot of the 24 franchise, this time without him playing Jack Bauer. Rather, they have cast Corey Hawkins in the lead role. Hawkins can now be seen doing an excellent job playing Heath on season 6 of The Walking Dead. Watch out Grimes Gang, here comes Negan!!


Charge of the Light Brigade: Faster and More Efficient New Chips Using Photons Instead of Electrons

"PACE - PEACE" Image by Etienne Valois

“PACE – PEACE” Image by Etienne Valois

Alfred, Lord Tennyson wrote his immortal classic poem, The Charge of the Light Brigade, in 1854 to honor the dead heroes of a doomed cavalry charge at the Battle of Balaclava during the Crimean War. Moreover, it strikingly portrayed the horrors of war. In just six short verses, he created a monumental work that has endured for the 162 years since.

The poem came to mind last week after reading two recent articles on seemingly disparate topics. The first was posted on The New Yorker’s website on December 30, 2015 entitled In Silicon Valley Now, It’s Almost Always Winner Takes All by Om Malik. This is a highly insightful analysis of how and why tech giants such as Google in search, Facebook in social networking, and Uber in transportation, have come to dominate their markets. In essence, competition is a fierce and relentless battle in the global digital economy. The second was an article on CNET.com posted on December 23, 2015 entitled Chip Promises Faster Computing with Light, Not Electrical Wires by Stephan Shankland. I highly recommend reading both of them in their entirety.

Taken together, the homonym of “light”, both in historical poetry and in tech, seems to tie these two pieces together, insofar as contemporary competition in tech markets is often described in military terms and metaphors. Focusing here on that second story, about a tantalizing advance in chip design and fabrication: will this new technology survive as it moves forward into the brutal and relentlessly “winner takes all” marketplace? I will summarize and annotate this story, and pose some of my own, hopefully en-light-ening questions.

Forward, the Light Brigade

A team of researchers, all of whom are university professors – – Vladimir Stojanovic of the University of California, Berkeley, who led the development; Krste Asanovic, also from Berkeley; Rajeev Ram from MIT; and Milos Popovic from the University of Colorado at Boulder – – has created a new type of processing chip “that transmits data with light”. As well, its architecture significantly increases processing speed while reducing power consumption. A report on the team’s work was published in an article in the December 24, 2015 issue of Nature (subscription required) entitled Single-chip Microprocessor That Communicates Directly Using Light by Chen Sun, Mark T. Wade, Yunsup Lee, et al.

According to Wikipedia, this approach of “using silicon as an optical medium” is called silicon photonics. IBM (see this link) and Intel (see this link) have likewise been involved in R&D in this field, but have yet to introduce anything ready for the market.

However, this team of university researchers believes their new approach might be introduced commercially within a year. While their efforts do not make chips run faster per se, the photonic elements “keep chips supplied with data”, which keeps them from losing time idling. Thus, they can process data faster.

Currently (no pun intended), electrical signals traverse metal wiring in computing and communications devices and networks across the world. For data traveling greater national and international distances, the electronic signals are converted into light and sent along high-speed fiber-optic cables. Nonetheless, this approach “isn’t cheap”.

Half a League Onward

What the university researchers’ team has done is create chips with “photonic components” built into them. If they succeed in scaling-up and commercializing their creation, consumers will be likely the beneficiaries. These advantages will probably manifest themselves first when used in data centers that, in turn, could speed up:

  • Google searches
  • Facebook image recognition
  • Other “performance-intensive features not economical today”
  • The removal of processing bottlenecks, which would also conserve battery life in smartphones and other personal computing platforms

Professor Stojanovic believes that one of their largest challenges is to make this technology affordable before it can later be implemented in consumer-level computing and communications devices. He is sanguine that such economies of scale can be reached. He anticipates further applications of this technology to enable chips’ onboard processing and memory components to communicate directly with each other.

Additional integrations of silicon photonics might be seen in the lidar remote sensing systems for self-driving cars¹, as well as in brain imaging² and environmental sensors. It also holds the potential to alter the traditional methods by which computers are assembled. For example, the length of cables is limited by the extent to which data can pass through them quickly and efficiently before needing amplification along the way. Optical links may permit data to be transferred significantly further along network cabling. The research team’s “prototype used 10-meter optical links”, but Professor Stojanovic believes this could eventually be lengthened to a kilometer. This could potentially result in meaningful savings in energy, hardware and processing efficiency.

Two startups are also presently working in the silicon photonics space.

My Questions:

  • Might another one of silicon photonics’ virtues be that it is partially fabricated from more sustainable materials, primarily silicon derived from sand rather than various metals?
  • Could silicon photonics chips and architectures be a solution to the very significant computing needs of the virtual reality (VR) and augmented reality (AR) systems that will be coming onto the market in 2016? This issue was raised in a most interesting article posted on Bloomberg.com on December 30, 2015 entitled Few Computers Are Powerful Enough to Support Virtual Reality by Ian King. (See also these 13 Subway Fold posts on a range of VR and AR developments.)
  • What other new markets, technologies and opportunities for entrepreneurs and researchers might emerge if the university research team’s chips achieve their intended goals and succeed in making it to market?

May 17, 2017 Update:  For an update on one of the latest developments in photonics with potential applications in advanced computing and materials science, see Photonic Hypercrystals Are Now a Reality and Light Will Never Be the Same, by Dexter Johnson, posted on May 10, 2017, on IEEESpectrum.com.


1.  See these six Subway Fold posts for references to autonomous cars.

2.  See these four Subway Fold posts concerning certain developments in brain imaging technology.

Virtual Reality Universe-ity: The Immersive “Augmentarium” Lab at the U. of Maryland

"A Touch of Science", Image by Mars P.

“A Touch of Science”, Image by Mars P.

Go to classes. Sit through a series of 50 minute lectures. Drink coffee. Pay attention and take notes. Drink more coffee. Go to the library to study, do research and complete assignments. Rinse and repeat for the rest of the semester. Then take your final exams and hope that you passed everything. More or less, things have traditionally been this way in college since Hector was a pup.

Might students instead be interested in participating at the new and experimental learning laboratory called the Augmentarium at the University of Maryland, where immersing themselves in their studies takes on an entirely new meaning? This is a place where virtual reality (VR) is being tested and integrated into the learning process. (These 14 Subway Fold posts cover a range of VR and augmented reality [AR] developments and applications.)

Where do I sign up for this?¹

The story was covered in a fascinating report that appeared on December 8, 2015 on the website of the Chronicle of Higher Education entitled Virtual-Reality Lab Explores New Kinds of Immersive Learning, by Ellen Wexler. I highly recommend reading this in its entirety as well as clicking on the Augmentarium link to learn about some of these remarkable projects. I also suggest checking out the hashtag #Augmentarium on Twitter for the very latest news and developments. I will summarize and annotate this story, and pose some of my own questions right after I take off my own imaginary VR headset.

Developing VR Apps in the Augmentarium

In 2014, Brendan Iribe, the co-founder of the VR headset company Oculus², as well as a University of Maryland alumnus, donated $31 million to the University for its development of VR technology³. During the same year, with additional funding obtained from the National Science Foundation, the Augmentarium was built. Currently, researchers at the facility are working on applications of VR to “health care, public safety, and education”.

Professor Ramani Duraiswami, a PhD and co-founder of a startup called VisiSonics (developers of 3D audio and VR gaming systems), is involved with the Augmentarium. His work is in the area of audio, which he believes has a great effect upon how people perceive the world around them. He further thinks that an audio or video lecture presented via distance learning can be greatly enhanced by using VR to, in his words, make “the experience feel more immersive”. He feels this would make you feel as though you are in the very presence of the instructor⁴.

During a recent showcase there, Professor Duraiswami demo-ed 3D sound⁵ and a short VR science fiction production called Fixing Incus. (This link is meant to be played on a smartphone that is then embedded within a VR viewer/headset.) This implementation showed the audience what it was like to be immersed in a virtual environment where, when they moved their heads and line of sight, what they were viewing correspondingly and seamlessly changed.

Enhancing Virtual Immersions for Medicine and Education

Amitabh Varshney, the Director of the University’s Institute for Advanced Computer Studies, is now researching “how the brain processes information in immersive environments” and how this differs from viewing on a computer screen⁶. He believes that VR applications in the classroom will enable students to immerse themselves in their subjects, such as being able to “walk through buildings they design” and “explore” them beyond “just the equations” involved in creating these structures.

At the lab’s recent showcase, he provided the visitors with (non-VR) 3D glasses and presented “an immersive video of a surgical procedure”. He drew the audience’s attention to the doctors “crowding around” the operating table. He believes that the use of 3D headsets would provide medical students a better means to “move around” and get an improved sense of what this experience is actually like in the operating room. (The September 22, 2015 Subway Fold post entitled VR in the OR: New Virtual Reality System for Planning, Practicing and Assisting in Surgery is also on point and provides extended coverage on this topic.)

While today’s early iterations of VR headsets (either available now or due early in 2016) are “cumbersome”, researchers hope that they will evolve in a manner similar to mobile phones (which, as mentioned above, are presently a key element in VR viewers) and be applied in “hospitals, grocery stores and classrooms”. Director Varshney can see them possibly developing along an even faster timeline.

My Questions

  • Is the establishment and operation of the Augmentarium a model that other universities should consider as a means to train students in this field, attract donations, and incubate potential VR and AR startups?
  • What entrepreneurial opportunities might exist for consulting, engineering and tech firms to set up comparable development labs at other schools and in private industry?
  • What other types of academic courses would benefit from VR and AR support? Could students now use these technologies to create or support their academic projects? What sort of grading standards might be applied to them?
  • Do the rapidly expanding markets for VR and AR require that some group in academia and/or the government establish technical and perhaps even ethical standards for such labs and their projects?
  • How are relevant potential intellectual property and technology transfer issues going to be negotiated, arbitrated and litigated if needed?

 


1.  Btw, has anyone ever figured out how the very elusive and mysterious “To Be Announced (TBA)”, the professor who appears in nearly all course catalogs, ends up teaching so many subjects at so many schools at the same time? He or she must have an incredibly busy schedule.

2.  These nine Subway Fold posts cover, among other VR and AR related stories, the technology of Oculus.

3.  This donation was reported in an article on September 11, 2014 in The Washington Post entitled Brendan Iribe, Co-founder of Oculus VR, Makes Record $31 Million Donation to U-Md by Nick Anderson.

4.  See also the February 18, 2015 Subway Fold post entitled A Real Class Act: Massive Open Online Courses (MOOCs) are Changing the Learning Process.

5.  See also Designing Sound for Virtual Reality by Todd Baker, posted on Medium.com on December 21, 2015, for a thorough overview of this aspect of VR, and the August 5, 2015 Subway Fold post entitled Latest Census on Virtual Senses: A Seminar on Augmented Reality in New York covering, among other AR technologies, the development work and 3D sound wireless headphones of Hooke Audio.

6.  On a somewhat related topic, see the December 18, 2015 Subway Fold post entitled Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information.