I Can See for Miles: Using Augmented Reality to Analyze Business Data Sets

Image from Pixabay

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, it still might, on a bit of a stretch, find a fitting new context in describing one of the latest dazzling new technologies through the opening stanza’s declaration that “there’s magic in my eye”. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article entitled Visualizing High Dimensional Data In Augmented Reality, by an augmented reality (AR) designer named Benjamin Resnick, about his team’s work at IBM on a project called Immersive Insights, posted on July 3, 2017 on Medium.com. (Also embedded is a very cool video of a demo of this system.) They are applying AR’s rapidly advancing technology1 to display, interpret and leverage insights gained from business data. I highly recommend reading this in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system in several years, a user will start his or her workday by donning their AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays their business’s big data sets2. The user will be able to “reach out and select that data”, which, in turn, will generate additional details on a nearby monitor. Thus, the user can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short-term, the team is aiming Immersive Insights towards data scientists who are facile coders, enabling them to use AR’s capabilities to visualize time series, geographical and networked data. For their long-term goals, they are planning to expand the range of Immersive Insight’s applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Every consumer represents a data-point wherein they can be expressed as a “list of purchased products” from among 50,000 possible items.

How can this sizable pool of data be better understood and the deeper relationships within it be extracted and understood? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections in the information’s attributes. However, for those sets with many attributes, this methodology does not scale well.

Consequently, Resnick’s team has been using their own new approach to:

  • Reduce complex data to just three dimensions in order to sum up key relationships
  • Visualize the data by applying their Immersive Insights application, and
  • Iteratively “label and color-code the data” in conjunction with an “evolving understanding” of its inner workings
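The first of those steps, reducing high-dimensional data to three dimensions, can be sketched generically with principal component analysis (PCA). This is an illustrative sketch only: the article does not specify which reduction technique the IBM team uses, and the data here is random rather than the Instacart set.

```python
# Hypothetical sketch of dimensionality reduction via PCA; the actual
# Immersive Insights pipeline is not described in the article.
import numpy as np

def reduce_to_3d(X):
    """Project the rows of X onto their top three principal components."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                  # center each feature
    # SVD of the centered data yields the principal axes in Vt
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:3].T                     # 3-D coordinates per row

rng = np.random.default_rng(0)
X = rng.random((100, 50))                    # 100 "customers" x 50 "products"
coords = reduce_to_3d(X)
print(coords.shape)                          # (100, 3)
```

Each customer, originally a point in a 50-dimensional product space, becomes a point in three dimensions that an AR application could render as one of the "glowing, colored orbs" described above.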

Their results have enabled them to “validate hypotheses more quickly” and establish a sense about the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.

The types of data sets being used here are likewise deployed in training machine learning systems3. As a result, the potential exists for these three technologies to become complementary and mutually supportive in identifying and understanding relationships within the data as well as deriving any “black box predictive models”4.

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided on the creation of the team’s demo in the video (linked above), and turning next to the results of the visualizations, their findings included:

  • A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” in the company’s “marketing, promotion and recommendation strategies”.
  • Among all food categories, produce was clearly the leader. Nearly all customers buy it.
  • When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance in the system’s three main components (described above).

Resnick concludes that the three cornerstone technologies of Immersive Insights (big data, augmented reality and machine learning) are individually and in complementary combinations “disruptive” and, as such, will affect the “future of business and society”.

Questions

  • Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?
  • Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?
  • Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?
  • Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

 


1.  See the Subway Fold category Virtual and Augmented Reality for other posts on emerging AR and VR applications.

2.  See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3.  See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4.  For a highly informative and insightful examination of this phenomenon where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI,  by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

Digital Smarts Everywhere: The Emergence of Ambient Intelligence

Image from Pixabay

The Troggs were a legendary rock and roll band who were part of the British Invasion in the late 1960’s. They have always been best known for their iconic rocker Wild Thing. This was also the only Top 10 hit that ever had an ocarina solo. How cool is that! The band went on to have two other major hits, With a Girl Like You and Love is All Around.¹

The third of the band’s classic singles can be stretched a bit to serve as a helpful metaphor for an emerging form of pervasive “all around”-edness, this time in a more technological context. Upon reading a fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, on May 7, 2016, you will find a compelling (but not too rocking) analysis about how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day.

All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter, and soon thereafter still smarter, technologies is comparably affecting people’s lives at many different levels.²

We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.

I will summarize and annotate it, add some additional context, and then pose some of my own Trogg-inspired questions.

Internet of Experiences

Digital this, that and everything is everywhere in today’s world. There is a surging confluence of connected personal and business devices, the Internet, and the Internet of Things (IoT)³. Woven closely together on a global scale, we have essentially built “a digital intelligence network that transcends all that has gone before”. In some cases, this quantum of advanced technologies gains the “ability to sense, predict and respond to our needs”, and is becoming part of everyone’s “natural behaviors”.

A fourth industrial revolution might even manifest itself in the form of machine intelligence whereby we will interact with the “always-on, interconnected world of things”. As a result, the Internet may become characterized more by experiences where users will converse with ambient intelligent systems everywhere. The supporting planks of this new paradigm include:

A prediction of what more fully realized ambient intelligence might look like using travel as an example appeared in an article entitled Gearing Up for Ambient Intelligence, by Lisa Morgan, on InformationWeek.com on March 14, 2016. Upon leaving his or her plane, the traveler will receive a welcoming message and a request to proceed to the curb to retrieve their luggage. Upon reaching curbside, a self-driving car6 will be waiting with information about the hotel booked for the stay.

Listening

Another article about ambient intelligence entitled Towards a World of Ambient Computing, by Simon Bisson, posted on ZDNet.com on February 14, 2014, is briefly quoted for the line “We will talk, and the world will answer”, to illustrate the point that current technology will be morphing into something in the future that would be nearly unrecognizable today. Grossman’s article proceeds to survey a series of commercial technologies recently brought to market as components of a fuller ambient intelligence that will “understand what we are asking” and provide responsive information.

Starting with Amazon’s Echo, this new device can, among other things:

  • Answer certain types of questions
  • Track shopping lists
  • Place orders on Amazon.com
  • Schedule a ride with Uber
  • Operate a thermostat
  • Provide transit schedules
  • Commence short workouts
  • Review recipes
  • Perform math
  • Request a plumber
  • Provide medical advice

Will it be long before we begin to see similar smart devices everywhere in homes and businesses?

Kevin Kelly, the founding Executive Editor of WIRED and a renowned futurist7, believes that in the near future, digital intelligence will become available in the form of a utility8 and, as he puts it, “IQ as a service”. This is already being done by Google, Amazon, IBM and Microsoft, who are providing open access to sections of their AI coding.9 He believes that success for the next round of startups will go to those who enhance and transform something already in existence with the addition of AI. The best example of this is once again self-driving cars.

As well, in a chapter on Ambient Computing from a report by Deloitte UK entitled Tech Trends 2015, it was noted that some companies were engineering ambient intelligence into their products as a means to remain competitive.

Recommending

A great deal of AI is founded upon the collection of big data from online searching, the use of apps and the IoT. This universe of information helps neural networks learn from repeated behaviors, including people’s responses and interests. In turn, it provides a basis for “deep learning-derived personalized information and services” that can, in turn, derive “increasingly educated guesses with any given content”.

An alternative perspective, that “AI is simply the outsourcing of cognition by machines”, has been expressed by Jason Silva, a technologist, philosopher and video blogger on Shots of Awe. He believes that this process is the “most powerful force in the universe”, that is, of intelligence. Nonetheless, he sees this as an evolutionary process which should not be feared. (See also the December 27, 2014 Subway Fold post entitled  Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Bots are another contemporary manifestation of ambient intelligence. These are a form of software agent, driven by algorithms, that can independently perform a range of sophisticated tasks. Two examples include:

Speaking

Optimally, bots should also be able to listen and “speak” back in return much like a 2-way phone conversation. This would also add much-needed context, more natural interactions and “help to refine understanding” to these human/machine exchanges. Such conversations would “become an intelligent and ambient part” of daily life.

An example of this development path is evident in Google Now. This service combines voice search with predictive analytics to present users with information prior to searching. It is an attempt to create an “omniscient assistant” that can reply to any request for information “including those you haven’t thought of yet”.

Recently, the company created a Bluetooth-enabled prototype of a lapel pin based on this technology that operates just by tapping it, much like the communicators on Star Trek. (For more details, see Google Made a Secret Prototype That Works Like the Star Trek Communicator, by Victor Luckerson, on Time.com, posted on November 22, 2015.)

The configurations and specs of AI-powered devices, be it lapel pins, some form of augmented reality10 headsets or something else altogether, supporting such pervasive and ambient intelligence are not exactly clear yet. Their development and introduction will take time but remain inevitable.

Will ambient intelligence make our lives any better? It remains to be seen, but it is probably a viable means to handle some of our more ordinary daily tasks. It will likely “fade into the fabric of daily life” and be readily accessible everywhere.

Quite possibly then, the world will truly become a better place to live upon the arrival of ambient intelligence-enabled ocarina solos.

My Questions

  • Does the emergence of ambient intelligence, in fact, signal the arrival of a genuine fourth industrial revolution or is this all just a semantic tool to characterize a broader spectrum of smarter technologies?
  • How might this trend affect overall employment in terms of increasing or decreasing jobs on an industry by industry basis and/or the entire workforce? (See also this June 4, 2015 Subway Fold post entitled How Robots and Computer Algorithms Are Challenging Jobs and the Economy.)
  • How might this trend also affect non-commercial spheres such as public interest causes and political movements?
  • As ambient intelligence insinuates itself deeper into our online worlds, will this become a principal driver of new entrepreneurial opportunities for startups? Will ambient intelligence itself provide new tools for startups to launch and thrive?

 


1.   Thanks to Little Steven (@StevieVanZandt) for keeping the band’s music in occasional rotation on The Underground Garage  (#UndergroundGarage.) Also, for an appreciation of this radio show see this August 14, 2014 Subway Fold post entitled The Spirit of Rock and Roll Lives on Little Steven’s Underground Garage.

2.  For a remarkably comprehensive report on the pervasiveness of this phenomenon, see the Pew Research Center report entitled U.S. Smartphone Use in 2015, by Aaron Smith, posted on April 1, 2015.

3.  These 10 Subway Fold posts touch upon the IoT.

4.  The Subway Fold category Big Data and Analytics contains 50 posts covering this topic in whole or in part.

5.  The Subway Fold category Telecommunications contains 12 posts covering this topic in whole or in part.

6.  These 5 Subway Fold posts contain references to self-driving cars.

7.   Mr. Kelly is also the author of a forthcoming book entitled The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, to be published on June 7, 2016 by Viking.

8.  This September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice, in part summarized an article by Steven Levy in the September 2014 issue of WIRED entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask. This covered a startup called Viv Labs whose objective was to transform AI into a form of utility. Fast forward to the Disrupt NY 2016 conference going on in New York last week. On May 9, 2016, the founder of Viv, Dag Kittlaus, gave his presentation about the Viv platform. This was reported in an article posted on TechCrunch.com entitled Siri-creator Shows Off First Public Demo of Viv, ‘the Intelligent Interface for Everything’, by Romain Dillet, on May 9, 2016. The video of this 28-minute presentation is embedded in this story.

9.  For the full details on this story see a recent article entitled The Race Is On to Control Artificial Intelligence, and Tech’s Future by John Markoff and Steve Lohr, published in the March 25, 2016 edition of The New York Times.

10.  These 10 Subway Fold posts cover some recent trends and developments in augmented reality.

Data Analysis and Visualizations of All U.S. Presidential State of the Union Addresses

"President Obama's State of the Union Address 2013", Word cloud image by Kurtis Garbutt

“President Obama’s State of the Union Address 2013”, Word cloud image by Kurtis Garbutt

While data analytics and visualization tools have accumulated a significant historical record of accomplishments, now, in turn, this technology is being applied to actual significant historical accomplishments. Let’s have a look.

Every year in January, the President of the United States gives the State of the Union speech before both houses of the U.S. Congress. This is to address the condition of the nation, his legislative agenda and other national priorities. The requirement for this presentation appears in Article II of the U.S. Constitution.

This talk with the nation has been given every year (with only one exception) since 1790. The resulting total of 224 speeches presents a remarkable and dynamic record of U.S. history and policy. Researchers at Columbia University and the University of Paris have recently applied sophisticated data analytics and visualization tools to this trove of presidential addresses. Their findings were published in the August 10, 2015 edition of the Proceedings of the National Academy of Sciences in a truly fascinating paper entitled Lexical Shifts, Substantive Changes, and Continuity in State of the Union Discourse, 1790–2014, by Alix Rule, Jean-Philippe Cointet, and Peter S. Bearman.

A very informative and concise summary of this paper was also posted on Phys.org, also on August 10, 2015, in an article entitled Big Data Analysis of State of the Union Remarks Changes View of American History (no author is listed). I will summarize, annotate and pose a few questions of my own. I highly recommend clicking through and reading the full report and the summary article together for a fuller perspective on this achievement. (Similar types of textual and graphical analyses of US law were covered in the May 15, 2015 Subway Fold post entitled Recent Visualization Projects Involving US Law and The Supreme Court.)

The researchers developed custom algorithms for their research and applied them to the 1.8 million words used in all of the addresses from 1790 to 2014. By identifying the frequencies of “how often words appear jointly” and “mapping their relation to other clusters of words”, the team was able to highlight “dominant social and political” issues and their relative historical time frames. (See Figure 1 at the bottom of Page 2 of the full report for this lexigraphical mapping.)
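As a toy illustration of the joint-frequency counting that underlies this kind of mapping (this is not the researchers' actual code, and the mini-corpus here is invented), one can tally how often pairs of words appear together in the same speech:

```python
# Count co-occurring word pairs across a tiny invented "corpus" of
# speeches; real analyses operate on 1.8 million words, not a few lines.
from collections import Counter
from itertools import combinations

speeches = [
    "industry finance foreign policy",
    "industry infrastructure finance",
    "peace democracy unity",
]

pair_counts = Counter()
for text in speeches:
    words = sorted(set(text.split()))        # unique words per speech
    pair_counts.update(combinations(words, 2))

print(pair_counts[("finance", "industry")])  # 2
```

Pairs with high joint counts form the edges of the word-cluster network; at scale, clusters of densely connected words surface the "dominant social and political" issues the paper describes.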

One of the researchers’ key findings was that although the topics of “industry, finance, and foreign policy” were predominant and persist throughout all of the addresses, following World War II the recurring keywords focus further upon “nation building, the regulation of business and the financing of public infrastructure”. While it is well known that these emergent terms were all about modern topics, the researchers were thus able to pinpoint the exact time frames when they first appeared. (See Page 5 of the full report for the graphic charting these data trends.)

Foreign Policy Patterns

The year 1917 struck the researchers as a critical turning point because it represented a dramatic shift in the data containing words indicative of more modern times. This was the year that the US sent its troops into battle in Europe in WWI. It was then that new keywords in the State of the Union including “democracy,” “unity,” “peace” and “terror” started to appear and recur. Later, by the 1940’s, word clusters concerning the Navy appeared, possibly indicating emerging U.S. isolationism. However, they suddenly disappeared again as the U.S. became far more involved in world events.

Domestic Policy Patterns

Over time, the researchers identified changes in the terminology used when addressing domestic matters. These concerned the government’s size, economic regulation, and equal opportunity. Although the focus of the State of the Union speeches remained constant, new keywords appeared whereby “tax relief,” “incentives” and “welfare” have replaced “Treasury,” “amount” and “expenditures”.

An important issue facing this project was that during the more than two centuries being studied, keywords could substantially change in meaning over time. To address this, the researchers applied new network analysis methods developed by Jean-Philippe Cointet, a team member, co-author and physicist at the University of Paris. They were intended to identify changes whereby “some political topics morph into similar topics with common threads” as others fade away. (See Figure 3 at the bottom of Page 4 of the full paper for this enlightening graphic.*)

As a result, they were able to parse the relative meanings of words as they appear with each other and, on a more macro level, in the “context of evolving topics”. For example, it was discovered that the word “Constitution” was:

  • closely associated with the word “people” in early U.S. history
  • linked to “state” following the Civil War
  • linked to “law” during WWI and WWII, and
  • returned to “people” during the 1970’s

Thus, the meaning of “Constitution” must be assessed in its historical context.
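A minimal sketch of how such context shifts can be detected (again with an invented mini-corpus, not the actual addresses): for each era, find the word that most often co-occurs with “constitution”.

```python
# Toy illustration of era-by-era word association; the speeches and
# era labels below are invented for demonstration only.
from collections import Counter

era_speeches = {
    "early":   ["constitution people rights", "people constitution union"],
    "postwar": ["constitution state law", "state constitution power"],
}

top_association = {}
for era, texts in era_speeches.items():
    counts = Counter()
    for t in texts:
        words = set(t.split())
        if "constitution" in words:
            counts.update(words - {"constitution"})  # co-occurring words
    top_association[era] = counts.most_common(1)[0][0]

print(top_association)   # {'early': 'people', 'postwar': 'state'}
```

Even this crude tally shows the mechanism: the same keyword acquires a different strongest neighbor in different eras, which is the signal the researchers' far more sophisticated network methods formalize.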

My own questions are as follows:

  • Would this analytical approach yield new and original insights if other long-running historical records such as the Congressional Record were likewise subjected to the research team’s algorithms and analytics?
  • Could companies and other commercial businesses derive any benefits from having their historical records similarly analyzed? For example, might it yield new insights and recommendations for corporate governance and information governance policies and procedures?
  • Could this methodology be used as an electronic discovery tool for litigators as they parse corporate documents produced during a case?

 


*  This also resembles, in methodology and appearance, the graphic on Page 29 of the law review article entitled A Quantitative Analysis of the Writing Style of the U.S. Supreme Court, by Keith Carlson, Michael A. Livermore, and Daniel Rockmore, dated March 11, 2015, linked to and discussed in the May 15, 2015 Subway Fold post cited above.

How Robots and Computer Algorithms are Challenging Jobs and the Economy

"p8nderInG exIstence", Image by JD Hancock

“p8nderInG exIstence”, Image by JD Hancock

A Silicon Valley entrepreneur named Martin Ford (@MFordFuture) has written a very timely new book entitled Rise of the Robots: Technology and the Threat of a Jobless Future (Basic Books, 2015), which is currently receiving much attention in the media. The depth and significance of the critical issues it raises is responsible for this wide-beam spotlight.*

On May 27, 2015 the author was interviewed on The Brian Lehrer Show on radio station WNYC in New York. The result is available as a truly captivating 30-minute podcast entitled When Will Robots Take Your Job? I highly recommend listening to this in its entirety. I will sum up, annotate and add some questions of my own to this.

The show’s host, Brian Lehrer, expertly guided Mr. Ford through the key complexities and subtleties of the thesis of his provocative new book. First, for now and increasingly in the future, robots and AI algorithms are taking on increasingly difficult tasks and displacing human workers. The more repetitive and routine a job’s tasks are, the more likely it is that machines will replace the human workers performing them. This will not occur in just one sector but, rather, “across the board” in all areas of the marketplace. For example, IBM’s Watson technology can be accessed using natural language which, in the future, might result in humans no longer being able to recognize its responses as coming from a machine.

Mr. Ford believes we are moving towards an economic model where productivity is increasing but jobs and income are decreasing. He asserts that solving this dilemma will be critical. Consequently, his second key point was the challenge of detaching work from income. He is proposing the establishment of some form of system where income is guaranteed. He believes this would still support Capitalism and would “produce plenty of income that could be taxed”. No nation is yet moving in this direction, but he thinks that Europe might be more amenable to it in the future.

He further believes that the US will be most vulnerable to displacement of workers because it leads the world in the use of technology but “has no safety net” for those who will be put out by this phenomenon. (For a more positive perspective on this, see the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Brian Lehrer asked his listeners to go to a specific page on the site of the regular podcast called Planet Money on National Public Radio. (“NPR” is the network of publicly supported radio stations that includes WNYC). This page entitled Will Your Job be Done by a Machine? displays a searchable database of job titles and the corresponding chance that each will be replaced by automation. Some examples that were discussed included:

  • Real estate agents with an 86.4% chance
  • Financial services workers with a 23% chance
  • Software developers with a 12.8% chance
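The figures above can be mirrored in a trivial lookup structure to make the comparison concrete (the real Planet Money tool is a searchable web page, not an API, and these three numbers are the only ones quoted in the show):

```python
# Toy lookup of the automation-risk percentages quoted above.
automation_risk = {
    "real estate agent": 86.4,
    "financial services worker": 23.0,
    "software developer": 12.8,
}

def riskier(job_a, job_b):
    """Return whichever of two jobs has the higher estimated risk."""
    return max(job_a, job_b, key=automation_risk.__getitem__)

print(riskier("software developer", "real estate agent"))  # real estate agent
```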

Then the following six listeners called in to speak with Mr. Ford:

  • Caller 1 asked about finding a better way to get income to the population beyond the job market. This was squarely on point with Mr. Ford’s first main point about decoupling income and jobs. He was not advocating for somehow trying to stop technological progress. However, he reiterated how machines are “becoming autonomous workers, no longer just tools”.
  • Caller 2 asked whether Mr. Ford had seen a YouTube video entitled Humans Need Not Apply. Mr. Ford had seen it and recommended it. The caller said that the most common reply he has heard to this video (which tracks very closely with many of Mr. Ford’s themes) was, wrongly in his opinion, that “People will do something else”. Mr. Ford replied that people must find other things that they can get paid to do. The caller also said that machines had made it much easier and more economical for him to compose and record his own music.
  • Caller 3 raised the topic of automation in the medical profession. Specifically, whether IBM’s Watson could one day soon replace doctors. Mr. Ford believes that Watson will have an increasing effect here, particularly in fields such as radiology. However, it will have a lesser impact in those specialties where doctors and patients need to interact more with each other. (See also these three recent Subway Fold posts on the applications of Watson to TED Talks, business apps and the legal profession.)
  • Caller 4 posited that only humans can conceive ideas and be original. He asked how computers can identify patterns for which they have not been programmed. He cited the example of the accidental discovery of penicillin. Mr. Ford replied that machines will not replace scientists but they can replace service workers. Therefore, he is “more worried about the average person”. Brian Lehrer then asked him about driverless cars and, perhaps, even driverless Uber cabs one day. Mr. Ford answered that expectations are high that this will eventually happen. He is concerned that taxi drivers will lose jobs. (See this September 15, 2014 Subway Fold post on Uber and the “sharing economy”.)  Which led to …
  • Caller 5 who is currently a taxi driver in New York. They discussed how, in particular, many types of commercial drivers are facing this possibility. Brian Lehrer followed up by asking whether this may somehow lead to the end of Capitalism. Mr. Ford replied that Capitalism “can continue to work” but it must somehow “adapt to new laws and circumstances”.
  • Caller 6 asked about one of the proposals raised in VR pioneer Jaron Lanier’s book Who Owns the Future (Simon & Schuster, 2013), whereby people could perhaps be paid for the information they provide online. This might be a possible means to financially assist people in the future. Mr. Ford’s response was that while it was “an interesting idea” it would be “difficult to implement”. As well, he believes that Google would resist this. He made a further distinction between his concept of guaranteed income and Lanier’s proposal insofar as he believes that “Capitalism can adapt” more readily to his concept. (I also highly recommend Lanier’s book for its originality and deep insights.)

Brian Lehrer concluded by raising the prospect of self-aware machines. He noted that Bill Gates and Stephen Hawking had recently warned about this possibility. Mr. Ford responded that “we are too far from this now”. For him, today’s concern is on automation’s threat to jobs, many of which are becoming easier to reduce to a program.

To say the very least, to my own organic and non-programmatic way of thinking, this was an absolutely mind-boggling discussion. I greatly look forward to watching this topic continue to gather momentum and receive expanded media coverage.

My own questions include:

  • How should people at the beginning, middle and end of their careers be advised and educated to adapt to these rapid changes so that they can not only survive, but rather, thrive within them?
  • What role should employers, employees, educators and the government take, in any and all fields, to keep the workforce up-to-date in the competencies they will need to continue to be valuable contributors?
  • Are the challenges of automation most efficiently met on the global, national and/or local levels by all interested constituencies working together? What forms should their cooperation take?

*  For two additional book reviews I recommend reading ‘Rise of the Robots’ and ‘Shadow Work’ by Barbara Ehrenreich in the May 11, 2015 edition of The New York Times, and Soon They’ll Be Driving It, Too by Sumit Paul-Choudhury in the May 15, 2015 edition of The Wall Street Journal (subscription required).

Artificial Intelligence Apps for Business are Approaching a Tipping Point


“Algorithmic Contaminations”, Image by Derek Gavey

There have been many points during the long decades of the development of business applications using artificial intelligence (AI) when it appeared that the Rubicon was about to be crossed. That is, this technology often seemed to be right on the verge of going mainstream in global commerce. Yet it has yet to achieve a pervasive critical mass despite the vast resources and best intentions behind it.

Today, with the advent of big data and analytics and their many manifestations¹ spreading across a wide spectrum of industries, AI is now closer than ever to reaching such a tipping point. Consultant, researcher and writer Brad Power makes a timely and very persuasive case for this in a highly insightful and informative article entitled Artificial Intelligence Is Almost Ready for Business, posted on the Harvard Business Review site on March 19, 2015. I will summarize some of the key points, add some links and annotations, and pose a few questions.

Mr. Power sees AI being brought to this threshold by the convergence of rapidly increasing tech sophistication, “smarter analytics engines, and the surge in data”. Further adding to this mix is the incursion and growth of the Internet of Things (IoT), better means to analyze “unstructured” data, and the extensive categorization and tagging of data. Furthermore, there is the dynamic development and application of smarter algorithms to discern complex patterns in data and to generate increasingly accurate predictive models.

So, too, does machine learning² play a highly significant role in AI applications. It can be used to generate “thousands of models a week”. For example, a model premised upon machine learning can be used to select which ads should be placed on what websites within milliseconds in order to achieve the greatest effectiveness in reaching an intended audience. DataXu is one of the model-generating firms in this space.
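To make the ad-placement example above concrete, here is a minimal sketch of how such a model might score candidate websites and select a placement. This is purely illustrative and not DataXu’s actual system; the feature names, weights, and site data are all invented for the example, and a real system would train the weights from historical click data rather than hard-coding them.

```python
import math

# Hypothetical pre-trained logistic regression weights (invented for
# illustration): each feature estimates how well an ad fits a site.
WEIGHTS = {"audience_match": 2.0, "past_ctr": 3.5, "site_traffic": 0.5}
BIAS = -2.0

def click_probability(features):
    """Logistic model: estimated probability that the ad gets clicked."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def best_placement(candidates):
    """Place the ad on the site with the highest predicted click rate."""
    return max(candidates, key=lambda site: click_probability(candidates[site]))

# Toy feature values for three candidate sites (all made up).
sites = {
    "news-site": {"audience_match": 0.9, "past_ctr": 0.4, "site_traffic": 0.7},
    "blog":      {"audience_match": 0.3, "past_ctr": 0.2, "site_traffic": 0.2},
    "video-hub": {"audience_match": 0.6, "past_ctr": 0.5, "site_traffic": 0.9},
}
print(best_placement(sites))  # → news-site
```

Because scoring is just a dot product and a comparison, a deployed version of this idea can evaluate many candidate placements within the milliseconds the article describes; the expensive part is training and refreshing the thousands of models, not applying them.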

Tom Davenport, a professor at Babson College and an analytics expert³, was one of the experts interviewed by Power for this article. To paraphrase him, he believes that AI and machine learning would be useful adjuncts to human analysts (often referred to as “quants”⁴). Such living experts can far better understand what goes into and comes out of a model than a machine learning app alone. In turn, these people can persuade business managers to apply such “analytical insights” to actual business processes.

AI can also now produce greater competitive efficiencies by closing the time gap between analyzing vast troves of data at high speeds and decision-making on how to apply the results.

IBM, one of the leading integrators of AI, has recently invested $1B in the creation of their Watson Group, dedicated to exploring and leveraging commercial applications for Watson technology. (X-ref to the December 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice for a previous mention and links concerning Watson.) This AI technology is currently finding significant applications in:

  • Health Care: Due to Watson’s ability to process large, complex and dynamic quantities of text-based data, in turn, it can “generate and evaluate hypotheses”. With specialized training, these systems can then make recommendations about treating particular patients. A number of elite medical teaching institutions in the US are currently engaging with IBM to deploy Watson to “better understand patients’ diseases” and recommend treatments.
  • Finance: IBM is presently working with 45 companies on apps including “digital virtual agents” to work with their clients in a more “personalized way”; a “wealth advisor” for financial planning⁵; and on “risk and compliance management”. For example, USAA provides financial services to active members of the military services and to their veterans. Watson is being used to provide a range of financial support functions to soldiers as they move to civilian status.
  • Startups: The company has designated $100 million for introducing Watson into startups. An example is WayBlazer which, according to its home page, is “an intelligence search discovery system” to assist travelers throughout all aspects of their trips. This online service is designed to be an easy-to-use series of tools to provide personalized answers and support for all sorts of journeys. At the very bottom of their home page on the left-hand side are the words “Powered by IBM Watson”.

To get a sense of the trends and future of AI in business, Power spoke with the following venture capitalists who are knowledgeable about commercial AI systems:

  • Mark Gorenberg, Managing Director at Zetta Venture Partners which invests in big data and analytics startups, believes that AI is an “embedded technology”. It is akin to adding “a brain” – – in the form of cognitive computing – – to an application through the use of machine learning.
  • Promod Haque, senior managing partner at Norwest Venture Partners, believes that when systems can draw correlations and construct models on their own, labor is reduced and greater speed is achieved. As a result, a system such as Watson can be used to automate analytics.
  • Manoj Saxena, a venture capitalist (formerly with IBM), sees analytics migrating to the “cognitive cloud”, a virtual place where vast amounts of data from various sources will be processed in such a manner as to “deliver real-time analytics and learning”. In effect, this will promote smoother integration of data with analytics, something that still remains challenging. He is an investor in a startup called Cognitive Scale working in this space.

My own questions (not derived through machine learning), are as follows:

  • Just as Watson has begun to take root in the medical profession as described above, will it likewise begin to propagate across the legal profession? For a fascinating analysis as a starting point, I highly recommend 10 Predictions About How IBM’s Watson Will Impact the Legal Profession, by Paul Lippe and Daniel Katz, posted on the ABA Journal website on October 4, 2014. I wonder whether the installation of Watson in law offices will take on other manifestations that cannot even be foreseen until the systems are fully integrated and running. Might the Law of Unintended Consequences also come into play and produce some negative results?
  • What other professions, industries and services might also be receptive to the introduction of AI apps that have not even considered it yet?
  • Does the implementation of AI always produce reductions in jobs or is this just a misconception? Are there instances where it could increase the number of jobs in a business? What might be some of the new types of jobs that could result? How about AI Facilitator, AI Change Manager, AI Instructor, AI Project Manager, AI Fun Specialist, Chief AI Officer, or perhaps AI Intrapreneur?

______________________

1.  There are 27 Subway Fold posts in the category of Big Data and Analytics.

2.  See the Subway Fold posts on December 12, 2014 entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World and on December 10, 2014 entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market? for specific examples of machine learning.

3.  I had the great privilege of reading one of Mr. Davenport’s very insightful and enlightening books entitled Competing on Analytics: The New Science of Winning (Harvard Business Review Press, 2007), when it was first published. I learned a great deal from it and this book was responsible for my initial interest in the applications of analytics in commerce. Although big data and analytics have grown exponentially since its publication, I still highly recommend this book for its clarity, usefulness and enthusiasm for this field.

4.  For a terrific and highly engaging study of the work and influence of these analysts, I also recommend reading The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It (Crown Business, 2011), by Scott Patterson.

5.  There was a most interesting side-by-side comparison of human versus automated financial advisors entitled Robo-Advisors Vs. Financial Advisors: Which Is Better For Your Money? by Libby Kane, posted on BusinessInsider.com on July 21, 2014.