I Can See for Miles: Using Augmented Reality to Analyze Business Data Sets

Image from Pixabay

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, the opening stanza’s declaration that “there’s magic in my eye” might still, with a bit of a stretch, find a fitting new context in describing one of the latest dazzling new technologies. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article entitled Visualizing High Dimensional Data In Augmented Reality, by an augmented reality (AR) designer named Benjamin Resnick, posted on July 3, 2017 on Medium.com, about his team’s work at IBM on a project called Immersive Insights. (Also embedded in it is a very cool video of a demo of this system.) They are applying AR’s rapidly advancing technology¹ to display, interpret and leverage insights gained from business data. I highly recommend reading the article in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system in several years, users will start their workday by donning AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays their business’s big data sets². Users will be able to “reach out and select that data” which, in turn, will generate additional details on a nearby monitor. Thus, they can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short term, the team is aiming Immersive Insights towards data scientists who are facile coders, enabling them to use AR’s capabilities to visualize time series, geographical and networked data. For the long term, they are planning to expand the range of Immersive Insights’ applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Every consumer represents a data point that can be expressed as a “list of purchased products” drawn from among 50,000 possible items.

How can this sizable pool of data be better understood and the deeper relationships within it be extracted? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections among the information’s attributes. However, for data sets with many attributes, this methodology does not scale well.

Consequently, Resnick’s team has been using their own new approach to:

  • Reduce complex data to just three dimensions in order to sum up key relationships
  • Visualize the data by applying their Immersive Insights application, and
  • Iteratively label and color-code the data in conjunction with an “evolving understanding” of its inner workings

Their results have enabled them to “validate hypotheses more quickly” and develop a sense of the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.
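To make the first of those steps more concrete, here is a minimal sketch in Python of reducing a sparse customer-by-product purchase matrix to three dimensions. The data is synthetic and stands in for the Instacart files, and the scale and library choices are my own assumptions rather than anything specified by Resnick’s team.

```python
# Minimal sketch: reduce a sparse customer x product purchase matrix to three
# dimensions for AR-style visualization. Synthetic data stands in for the
# Instacart files; the sizes and basket lengths are illustrative assumptions.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
n_customers, n_products = 5_000, 50_000   # roughly the catalog size cited above

# Each customer is a "list of purchased products": a sparse binary row vector.
rows = np.repeat(np.arange(n_customers), 20)             # ~20 items per customer
cols = rng.integers(0, n_products, size=rows.size)
purchases = csr_matrix((np.ones(rows.size), (rows, cols)),
                       shape=(n_customers, n_products))

# Reduce to three components, the form an AR viewer could plot as glowing orbs.
svd = TruncatedSVD(n_components=3, random_state=0)
coords_3d = svd.fit_transform(purchases)                  # shape: (n_customers, 3)
print(coords_3d[:5])
print("variance explained:", svd.explained_variance_ratio_.sum())
```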

The types of data sets being used here are likewise deployed in training machine learning systems³. As a result, the potential exists for these three technologies to become complementary and mutually supportive in identifying and understanding relationships within the data, as well as in deriving any “black box” predictive models⁴.

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided on the creation of the team’s demo in the video (linked above), and turning next to the results of the visualizations, their findings included:

  • A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” in the company’s “marketing, promotion and recommendation strategies”.
  • Among all food categories, produce was clearly the leader. Nearly all customers buy it.
  • When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance in the system’s three main components (described above). (A brief sketch of such a separability check appears below.)
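As a rough illustration of what “not linearly separable” means here, the following sketch fits a linear classifier on 3-D coordinates like those produced above against hypothetical department labels; near-chance accuracy is the signature of categories that a flat boundary cannot separate. Both the coordinates and the labels below are stand-ins, not the team’s data.

```python
# Minimal sketch, assuming 3-D coordinates from a reduction like the one above
# and a hypothetical "most common department" label per customer: a linear
# classifier that scores near chance suggests the categories are not separable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
coords_3d = rng.normal(size=(5_000, 3))           # stand-in for the SVD output
departments = rng.integers(0, 21, size=5_000)     # stand-in department labels

clf = LogisticRegression(max_iter=1_000)
acc = cross_val_score(clf, coords_3d, departments, cv=5).mean()
print(f"mean linear-classifier accuracy: {acc:.2f}")   # near chance => not separable
```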

Resnick concludes that the three cornerstone technologies of Immersive Insights – – big data, augmented reality and machine learning – – are individually and in complementary combinations “disruptive” and, as such, will affect the “future of business and society”.

Questions

  • Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?
  • Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?
  • Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?
  • Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

 


1.  See the Subway Fold category Virtual and Augmented Reality for other posts on emerging AR and VR applications.

2.  See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3.  See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4.  For a highly informative and insightful examination of this phenomenon where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI,  by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

Digital Smarts Everywhere: The Emergence of Ambient Intelligence

Image from Pixabay

The Troggs were a legendary rock and roll band who were part of the British Invasion in the late 1960s. They have always been best known for their iconic rocker Wild Thing. This was also the only Top 10 hit that ever had an ocarina solo. How cool is that! The band went on to have two other major hits, With a Girl Like You and Love is All Around.¹

The third of the band’s classic singles can be stretched a bit to be used as a helpful metaphor to describe an emerging form of pervasive “all around”-edness, this time in a more technological context. Upon reading a fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, on May 7, 2016, you will find a compelling (but not too rocking) analysis of how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day.

All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter – – and soon thereafter still smarter – – technologies is comparably affecting people’s lives at many different levels.² 

We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.

I will summarize and annotate it, add some additional context, and then pose some of my own Troggs-inspired questions.

Internet of Experiences

Digital this, that and everything is everywhere in today’s world. There is a surging confluence of connected personal and business devices, the Internet, and the Internet of Things (IoT)³. Woven closely together on a global scale, we have essentially built “a digital intelligence network that transcends all that has gone before”. In some cases, this quantum of advanced technologies gains the “ability to sense, predict and respond to our needs”, and is becoming part of everyone’s “natural behaviors”.

A fourth industrial revolution might even manifest itself in the form of machine intelligence whereby we will interact with the “always-on, interconnected world of things”. As a result, the Internet may become characterized more by experiences where users will converse with ambient intelligent systems everywhere. The supporting planks of this new paradigm include, among others, big data⁴ and the ever-expanding telecommunications networks⁵ that link all of these devices together.

A prediction of what more fully realized ambient intelligence might look like, using travel as an example, appeared in an article entitled Gearing Up for Ambient Intelligence, by Lisa Morgan, on InformationWeek.com on March 14, 2016. Upon leaving the plane, travelers will receive a welcoming message and a request to proceed to the curb to retrieve their luggage. Upon reaching curbside, a self-driving car⁶ will be waiting with information about the hotel booked for the stay.

Listening

Another article about ambient intelligence entitled Towards a World of Ambient Computing, by Simon Bisson, posted on ZDNet.com on February 14, 2014, is briefly quoted for the line “We will talk, and the world will answer”, to illustrate the point that current technology will be morphing into something in the future that would be nearly unrecognizable today. Grossman’s article proceeds to survey a series of commercial technologies recently brought to market as components of a fuller ambient intelligence that will “understand what we are asking” and provide responsive information.

Starting with Amazon’s Echo, this new device can, among other things:

  • Answer certain types of questions
  • Track shopping lists
  • Place orders on Amazon.com
  • Schedule a ride with Uber
  • Operate a thermostat
  • Provide transit schedules
  • Commence short workouts
  • Review recipes
  • Perform math
  • Request a plumber
  • Provide medical advice

Will it be long before we begin to see similar smart devices everywhere in homes and businesses?

Kevin Kelly, the founding Executive Editor of WIRED and a renowned futurist⁷, believes that in the near future, digital intelligence will become available in the form of a utility⁸ and, as he puts it, “IQ as a service”. This is already being done by Google, Amazon, IBM and Microsoft, who are providing open access to sections of their AI coding.⁹ He believes that success for the next round of startups will go to those who enhance and transform something already in existence with the addition of AI. The best example of this is once again self-driving cars.

As well, in a chapter on Ambient Computing from a report by Deloitte UK entitled Tech Trends 2015, it was noted that some companies were engineering ambient intelligence into their products as a means to remain competitive.

Recommending

A great deal of AI is founded upon the collection of big data from online searching, the use of apps and the IoT. This universe of information helps neural networks learn from repeated behaviors, including people’s responses and interests. In turn, it provides a basis for “deep learning-derived personalized information and services” that can make “increasingly educated guesses with any given content”.

An alternative perspective, that “AI is simply the outsourcing of cognition by machines”, has been expressed by Jason Silva, a technologist, philosopher and video blogger on Shots of Awe. He believes that this process harnesses the “most powerful force in the universe”, that is, intelligence. Nonetheless, he sees this as an evolutionary process which should not be feared. (See also the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Bots are another contemporary manifestation of ambient intelligence. These are a form of software agent, driven by algorithms, that can independently perform a range of sophisticated tasks. Grossman’s article cites two examples of these.

Speaking

Optimally, bots should also be able to listen and “speak” back in return much like a 2-way phone conversation. This would also add much-needed context, more natural interactions and “help to refine understanding” to these human/machine exchanges. Such conversations would “become an intelligent and ambient part” of daily life.

An example of this development path is evident in Google Now. This service combines voice search with predictive analytics to present users with information prior to searching. It is an attempt to create an “omniscient assistant” that can reply to any request for information “including those you haven’t thought of yet”.

Recently, the company created a Bluetooth-enabled prototype of a lapel pin based on this technology that operates just by tapping it, much like the communicators on Star Trek. (For more details, see Google Made a Secret Prototype That Works Like the Star Trek Communicator, by Victor Luckerson, on Time.com, posted on November 22, 2015.)

The configurations and specs of the AI-powered devices supporting such pervasive and ambient intelligence, be they lapel pins, some form of augmented reality¹⁰ headsets or something else altogether, are not exactly clear yet. Their development and introduction will take time but remain inevitable.

Will ambient intelligence make our lives any better? It remains to be seen, but it is probably a viable means to handle some of our more ordinary daily tasks. It will likely “fade into the fabric of daily life” and be readily accessible everywhere.

Quite possibly then, the world will truly become a better place to live upon the arrival of ambient intelligence-enabled ocarina solos.

My Questions

  • Does the emergence of ambient intelligence, in fact, signal the arrival of a genuine fourth industrial revolution or is this all just a semantic tool to characterize a broader spectrum of smarter technologies?
  • How might this trend affect overall employment in terms of increasing or decreasing jobs on an industry by industry basis and/or the entire workforce? (See also this June 4, 2015 Subway Fold post entitled How Robots and Computer Algorithms Are Challenging Jobs and the Economy.)
  • How might this trend also affect non-commercial spheres such as public interest causes and political movements?
  • As ambient intelligence insinuates itself deeper into our online worlds, will this become a principal driver of new entrepreneurial opportunities for startups? Will ambient intelligence itself provide new tools for startups to launch and thrive?

 


1.   Thanks to Little Steven (@StevieVanZandt) for keeping the band’s music in occasional rotation on The Underground Garage  (#UndergroundGarage.) Also, for an appreciation of this radio show see this August 14, 2014 Subway Fold post entitled The Spirit of Rock and Roll Lives on Little Steven’s Underground Garage.

2.  For a remarkably comprehensive report on the pervasiveness of this phenomenon, see the Pew Research Center report entitled U.S. Smartphone Use in 2015, by Aaron Smith, posted on April 1, 2015.

3.  These 10 Subway Fold posts touch upon the IoT.

4.  The Subway Fold category Big Data and Analytics contains 50 posts covering this topic in whole or in part.

5.  The Subway Fold category Telecommunications contains 12 posts covering this topic in whole or in part.

6.  These 5 Subway Fold posts contain references to self-driving cars.

7.   Mr. Kelly is also the author of a forthcoming book entitled The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, to be published on June 7, 2016 by Viking.

8.  This September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice, in part summarized an article by Steven Levy in the September 2014 issue of WIRED entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask. This covered a startup called Viv Labs whose objective was to transform AI into a form of utility. Fast forward to the Disrupt NY 2016 conference going on in New York last week. On May 9, 2016, the founder of Viv, Dag Kittlaus, gave his presentation about the Viv platform. This was reported in an article posted on TechCrunch.com entitled Siri-creator Shows Off First Public Demo of Viv, ‘the Intelligent Interface for Everything’, by Romain Dillet, on May 9, 2016. The video of this 28-minute presentation is embedded in this story.

9.  For the full details on this story see a recent article entitled The Race Is On to Control Artificial Intelligence, and Tech’s Future by John Markoff and Steve Lohr, published in the March 25, 2016 edition of The New York Times.

10.  These 10 Subway Fold posts cover some recent trends and developments in augmented reality.

The Mediachain Project: Developing a Global Creative Rights Database Using Blockchain Technology

Image from Pixabay

When people are dating it is often said that they are looking for “Mr. Right” or “Ms. Right”. That is, finding someone who is just the right romantic match for them.

In the case of today’s rapid development, experimentation and implementation of blockchain technology, if a startup’s new technology takes hold, the blockchain might soon find a highly productive (but maybe not so romantic) match with Mr. or Ms. [literal] Right by serving as a form of global registry of creative works ownership.

These 5 Subway Fold posts have followed just a few of the voluminous developments in bitcoin and blockchain technologies. Among them, the August 21, 2015 post entitled Two Startups’ Note-Worthy Efforts to Adapt Blockchain Technology for the Music Industry has drawn the largest number of clicks. A new report on Coindesk.com on February 23, 2016 entitled Mediachain is Using Blockchain to Create a Global Rights Database, by Pete Rizzo, provides a most interesting and worthwhile follow-up on this topic. I recommend reading it in its entirety. I will summarize and annotate it to provide some additional context, and then pose several of my own questions.

Producing a New Protocol for Ownership, Protection and Monetization

Applications of blockchain technology for the potential management of the economic and distribution benefits of “creative professions”, including writers, musicians and others, that have been significantly affected by prolific online file copying still remain relatively unexplored. As a result, these creators do not yet have the means to “prove and protect ownership” of their work. Moreover, they do not have an adequate system to monetize their digital works. But the blockchain, by virtue of its structural and operational nature, can supply these creators with “provenance, identity and micropayments“. (See also the October 27, 2015 Subway Fold post entitled Summary of the Bitcoin Seminar Held at Kaye Scholer in New York on October 15, 2015 for some background on these three elements.)

Now on to the efforts of a startup called Mine ( @mine_labs ), co-founded by Jesse Walden and Denis Nazarov¹. They are preparing to launch a new metadata protocol called Mediachain that enables creators working in digital media to write data describing their work, along with a timestamp, directly onto the blockchain. (Yet another opportunity to go out on a sort of, well, date.) This system is based upon the InterPlanetary File System (IPFS). Mine believes that IPFS is a “more readable format” than others presently available.
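To make the idea of writing timestamped, content-addressed metadata more concrete, here is a minimal sketch in Python. The field names and hashing scheme are my own illustrative assumptions, not Mediachain’s actual specification, and real IPFS identifiers use a multihash encoding that is omitted here.

```python
# Illustrative sketch of a content-addressed metadata record of the kind a
# protocol like Mediachain might register; field names are assumptions only.
import hashlib, json, time

def content_id(data: bytes) -> str:
    # IPFS-style content addressing reduces, at its core, to hashing the bytes;
    # real IPFS CIDs add multihash encoding, which is skipped here.
    return hashlib.sha256(data).hexdigest()

def make_record(image_bytes: bytes, creator: str, title: str) -> dict:
    return {
        "content_id": content_id(image_bytes),   # ties the metadata to the work
        "creator": creator,
        "title": title,
        "timestamp": int(time.time()),            # when the claim was registered
    }

record = make_record(b"\x89PNG...example bytes", "Jane Artist", "Harbor at Dusk")
print(json.dumps(record, indent=2))
```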

Walden thinks that Mediachain’s “decentralized nature”, rather than a more centralized model, is critical to its objectives. Previously, a very “high-profile” and somewhat similar initiative to establish a global “database of musical rights and works”, called the Global Repertoire Database (GRD), had failed.

(Mine maintains this page of a dozen recent posts on Medium.com about their technology that provides some interesting perspectives and details about the Mediachain project.)

Mediachain’s Objectives

Walden and Nazarov have tried to innovate by means of changing how media businesses interact with the Internet, as opposed to trying to get them to work within its established standards. Thus, the Mediachain project has emerged with its focal point being the inclusion of descriptive data and attribution for image files by combining blockchain technology and machine learning². As well, it can accommodate reverse queries to identify the creators of images.
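As a rough illustration of that reverse-query idea, the sketch below recovers a registered attribution from nothing but the image bytes by recomputing a content hash and looking it up in an index. This is a hypothetical simplification; matching altered or re-encoded images, as described above, would additionally require the machine-learning component and is not shown.

```python
# Minimal sketch of a "reverse query": image bytes in, registered creator out.
# The in-memory dict stands in for whatever index sits over the chain.
import hashlib

registry = {}   # content hash -> metadata record

def register(image_bytes: bytes, metadata: dict) -> None:
    registry[hashlib.sha256(image_bytes).hexdigest()] = metadata

def reverse_query(image_bytes: bytes) -> dict | None:
    return registry.get(hashlib.sha256(image_bytes).hexdigest())

register(b"\x89PNG...example bytes", {"creator": "Jane Artist", "title": "Harbor at Dusk"})
print(reverse_query(b"\x89PNG...example bytes"))   # -> the registered attribution
print(reverse_query(b"unknown image"))             # -> None
```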

Nazarov views Mediachain “as a global rights database for images”. When used in conjunction with, among others, Instagram, he and Walden foresee a time when users of this technology can retrieve “historic information” about a file. By doing so, they intend to assist in “preserving identity”, given the present challenges of enforcing creator rights and “monetizing content”. In the future, they hope that Mediachain inspires the development of new platforms for music and movies that would permit ready access to “identifying information for creative works”. According to Walden, their objective is to “unbundle identity and distribution” and provide the means to build new and more modern platforms to distribute creative works.

Potential Applications for Public Institutions

Mine’s co-founders believe that there is further meaningful potential for Mediachain to be used by public organizations who provide “open data sets for images used in galleries, libraries and archives”. For example:

  • The Metropolitan Museum of Art (“The Met” as it is referred to on their website and by all of my fellow New York City residents), has a mandate to license the metadata about the contents of their collections. The museum might have a “metadata platform” of its own to host many such projects.
  • The New York Public Library has used their own historical images, that are available to the public to, among other things, create maps.³ Nazarov and Walden believe they could “bootstrap the effort” by promoting Mediachain’s expanded apps in “consumer-facing projects”.

Maintaining the Platform’s Security, Integrity and Extensibility

Prior to Mediachain’s pending launch, Walden and Nazarov are highly interested in protecting the platform’s legitimate users from “bad actors” who might wrongfully claim ownership of others’ rightfully owned works. As a result, to ensure the “trust of its users”, their strategy is to engage public institutions as a model upon which to base this. Specifically, Mine’s developers are adding key functionality to Mediachain that enables the annotation of images.

The new platform will also include a “reputation system” so that subsequent users will start to “trust the information on its platform”. In effect, their methodology empowers users “to vouch for a metadata’s correctness”. The co-founders also believe that the “Mediachain community” will increase or decrease trust in the long-term depending on how it operates as an “open access resource”. Nazarov pointed to the success of Wikipedia to characterize this.
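Since the article only sketches the reputation system at a high level, here is a minimal, purely illustrative example of how vouching for a record’s correctness could feed a trust score. The weighting and field names are assumptions of mine, not Mediachain’s design.

```python
# Illustrative sketch: users vouch for (or dispute) a record, and its trust
# score is a reputation-weighted tally of agreement.
from collections import defaultdict

votes = defaultdict(list)   # content_id -> list of (user_reputation, agrees)

def vouch(content_id: str, user_reputation: float, agrees: bool) -> None:
    votes[content_id].append((user_reputation, agrees))

def trust_score(content_id: str) -> float:
    tally = votes[content_id]
    if not tally:
        return 0.0
    total_weight = sum(rep for rep, _ in tally)
    agreement = sum(rep for rep, ok in tally if ok)
    return agreement / total_weight   # fraction of reputation-weighted agreement

vouch("abc123", user_reputation=0.9, agrees=True)
vouch("abc123", user_reputation=0.2, agrees=False)
print(round(trust_score("abc123"), 2))   # 0.82
```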

Following the launch of Mediachain, the startup’s team believes this technology could be integrated into other existing social media sites such as the blogging platform Tumblr. Here they think it would enable users to search images including those that may have been subsequently altered for various purposes. As a result, Tumblr would then be able to improve its monetization efforts through the application of better web usage analytics.

The same level of potential, by virtue of using Mediachain, may likewise be found waiting on still other established social media platforms. Nazarov and Walden mentioned Apple and Facebook as prospects for exploration. Nazarov said that, for instance, Coindesk.com could set its own terms for its usage and consumption on Facebook Instant Articles (a platform used by publishers to distribute their multimedia content on FB). Thereafter, Mediachain could possibly facilitate the emergence of entirely new innovative media services.

Nazarov and Walden temper their optimism because the underlying IPFS basis is so new and acceptance and adoption of it may take time. As well, they anticipate “subsequent issues” concerning the platform’s durability and the creation of “standards for metadata”. Overall though, they remain sanguine about Mediachain’s prospects and are presently seeking developers to embrace these challenges.

My Questions

  • How would new platforms and apps using Mediachain and IPFS be affected by the copyright and patent laws and procedures of the US and other nations?
  • How would applications built upon Mediachain affect or integrate with digital creative works distributed by means of a Creative Commons license?
  • What new entrepreneurial opportunities for startup services might arise if this technology eventually gains web-wide adoption and trust among creative communities?  For example, would lawyers and accountants, among many others, with clients in the arts need to develop and offer new forms of guidance and services to navigate a Mediachain-enabled marketplace?
  • How and by whom should standards for using Mediachain and other potential development path splits (also known as “forks“), be established and managed with a high level of transparency for all interested parties?
  • Does analogizing what Bitcoin is to the blockchain also hold equally true for what Mediachain is to the blockchain, or should alternative analogies and perspectives be developed to assist in the explanation, acceptance and usage of this new platform?

June 1, 2016 Update:  For an informative new report on Mediachain’s activities since this post was uploaded in March, I recommend clicking through and reading Mediachain Envisions a Blockchain-based Tool for Identifying Artists’ Work Across the Internet, by Jonathan Shieber, posted today on TechCrunch.com.


1.   This link from Mine’s website is to an article entitled Introducing Mediachain by Denis Nazarov, originally published on Medium.com on January 2, 2016. He mentions in his text an earlier startup called Diaspora that ultimately failed in its attempt at creating something akin to the Mediachain project. This December 4, 2014 Subway Fold post entitled Book Review of “More Awesome Than Money” concerned a book that expertly explored the fascinating and ultimately tragic inside story of Diaspora.

2.   Many of the more than two dozen Subway Fold posts in the category of Smart Systems cover some of the recent news, trends and applications in machine learning.

3.  For details, see the January 5, 2016 posting on the NY Public Library’s website entitled Free for All: NYPL Enhances Public Domain Collections for Sharing and Reuse, by Shana Kimball and Steven A. Schwarzman.

New IBM Watson and Medtronic App Anticipates Low Blood Glucose Levels for People with Diabetes

"Glucose: Ball-and-Stick Model", Image by Siyavula Education

“Glucose: Ball-and-Stick Model”, Image by Siyavula Education

Can a new app, jointly developed by IBM with its Watson AI technology in partnership with the medical device maker Medtronic, provide a new form of support for people with diabetes by safely avoiding low blood glucose (BG) levels (called hypoglycemia) in advance? If so, and assuming regulatory approval, this technology could potentially be a very significant boon to the care of this disease.

Basics of Managing Blood Glucose Levels

The daily management of diabetes involves a diverse mix of factors including, but not limited to, regulating insulin dosages, checking BG readings, measuring carbohydrate intake at meals, gauging activity and exercise levels, and controlling stress levels. There is no perfect algorithm to do this, as everyone with this medical condition is different and each person’s body reacts in its own way while trying to balance all of these factors and maintain healthy short- and long-term control of BG levels.

Diabetes care today operates in a very data-driven environment. BG levels, expressed numerically, can be checked on a hand-held meter with test strips using a single drop of blood, or with a continuous glucose monitoring system (CGM). The latter consists of a thumb drive-size sensor attached with temporary adhesive to the skin and a needle, attached to this unit, inserted just below the skin. This system provides patients with frequent real-time readings of their BG levels, and whether they are trending up or down, so they can adjust their medication accordingly. That is, for A grams of carbs and B amounts of physical activity and other contributing factors, C amount of insulin can be calculated and dispensed.
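As a purely illustrative sketch of that A/B/C arithmetic, the snippet below uses the widely taught carb-ratio and correction-factor form of a bolus estimate. The ratios are hypothetical example values; actual dosing is individualized and set with a clinician, and nothing here is medical advice.

```python
# Illustrative only: a simplified bolus estimate of the "A carbs, B activity,
# C insulin" idea above. All ratios are hypothetical example values.
def estimate_bolus(carbs_g: float,
                   current_bg: float,
                   target_bg: float = 110.0,        # mg/dL, example target
                   carb_ratio: float = 10.0,        # grams of carbs covered per unit
                   correction_factor: float = 40.0, # mg/dL drop per unit
                   activity_reduction: float = 0.0  # fraction trimmed for exercise
                   ) -> float:
    meal_bolus = carbs_g / carb_ratio
    correction_bolus = max(current_bg - target_bg, 0.0) / correction_factor
    total = (meal_bolus + correction_bolus) * (1.0 - activity_reduction)
    return round(total, 1)

# Example: 60 g of carbs, BG of 150 mg/dL, light exercise planned afterward.
print(estimate_bolus(carbs_g=60, current_bg=150, activity_reduction=0.2))  # -> 5.6 units
```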

Insulin itself can be administered either manually by injection or by an insulin pump (also with a subcutaneously inserted needle). The latter of these consists of two devices: the pump itself, a small enclosed device (about the size of a pager) with an infusion needle placed under the patient’s skin, and a Bluetooth-enabled handheld device (that looks just like a smartphone) used to adjust the pump’s dosage and timing of insulin released. Some pump manufacturers are also bringing to market their latest generation of CGMs that integrate their data and command functions with their users’ smartphones.

(The links in the previous two paragraphs are to Wikipedia pages with detailed descriptions and photos of CGMs and insulin pumps. See also this June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” for another glucose sensing and insulin dispensing system under development.)

The trickiest part of all of these systems is maintaining levels of BG throughout each day that are within an acceptable range of values. High levels can result in a host of difficult symptoms. Hypoglycemic low levels can quickly become serious, manifesting as dizziness, confusion and other symptoms, and can ultimately lead to unconsciousness in extreme cases if not treated immediately.

New App for Predicting and Preventing Low Blood Glucose Levels

Taking this challenge to an entirely new level, at last week’s annual Consumer Electronics Show (CES) held in Las Vegas, IBM and Medtronic jointly announced their new app to predict hypoglycemic events in advance. The app is built upon Watson’s significant strengths in artificial intelligence (AI) and machine learning to sift through and intuit patterns in large volumes of data, in this case generated from Medtronic’s user base for their CGMs and insulin pumps. This story was covered in a most interesting article posted in The Washington Post on January 6, 2016 entitled IBM Extends Health Care Bet With Under Armour, Medtronic by Jing Cao and Michelle Fay Cortez. I will summarize and annotate this report and then pose some of my own questions.

The announcement and demo of this new app on January 6, 2016 at CES showed the process by which a patient’s data can be collected from their Medtronic devices and then combined with additional information from their wearable activity trackers and food intake. Next, all of this information is processed through Watson in order to “provide feedback” for the patient to “manage their diabetes”.
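IBM and Medtronic have not published the details of their model, so the following is only a minimal sketch of the general idea: learn, from recent CGM readings, the probability that glucose will drop below the standard 70 mg/dL hypoglycemia threshold within the next hour. The data is synthetic and the features are deliberately simple; this is not the Watson/Medtronic method.

```python
# Minimal sketch, not the actual Watson/Medtronic model: predict from the last
# hour of CGM readings whether glucose will fall below 70 mg/dL within the next
# hour, using synthetic traces sampled every 5 minutes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_trace(n_readings=288):              # one day at 5-minute intervals
    steps = rng.normal(0, 4, size=n_readings)
    return np.clip(120 + np.cumsum(steps), 40, 300)

X, y = [], []
for _ in range(200):                              # 200 synthetic patient-days
    trace = synthetic_trace()
    for t in range(12, len(trace) - 12):
        window = trace[t - 12:t]                  # the past hour of readings
        X.append([window[-1], window.mean(), window[-1] - window[0]])
        y.append(int(trace[t:t + 12].min() < 70)) # a low within the next hour?

X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

risk = model.predict_proba(X_te[:1])[0, 1]
print(f"estimated chance of a low within the next hour: {risk:.0%}")
```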

Present and Future Plans for The App and This Approach

Making the announcement were Virginia Rometty, Chairman, President and CEO of IBM, and Omar Ishrak, Chairman and CEO of Medtronic. The introduction of this technology is expected in the summer of 2016. It still needs to be submitted to the US government’s regulatory review process.

Ms. Rometty said that the capability to predict low BG events, in some cases up to three hours before they occur, is a “breakthrough”. She described Watson as “cognitive computing”, using algorithms to generate “prescriptive and predictive analysis”. The company is currently making a major strategic move into finding and facilitating applications and partners for Watson in the health care industry. (These eight Subway Fold posts cover other systems and developments using Watson.)

Hooman Hakami, Executive VP and President of the Diabetes Group at Medtronic, described how his company is working to “anticipate” how the behavior of each person with diabetes affects their blood glucose levels. With this information, they can then “make choices to improve their health”. Here is the page from the company’s website about their partnership with IBM to work together on treating diabetes.

In the future, both companies are aiming to “give patients real-time information” on how their individual data is influencing their BG levels and “provide coaching” to assist them in making adjustments to keep their readings in a “healthy range”. In one scenario, patients might receive a text message that “they have an 85% chance of developing low blood sugar within an hour”. This will also include a recommendation to watch their readings and eat something to raise their BG back up to a safer level.

My Questions

  • Will this make patients more or less diligent in their daily care? Is there potential for patients to possibly assume less responsibility for their care if they sense that the management of their diabetes is running on a form of remote control? Alternatively, might this result in too much information for patients to manage?
  • What would be the possible results if this app is ever engineered to work in conjunction with the artificial pancreas project being led by Ed Damiano and his group of developers in Boston?
  • If this app receives regulatory approval and gains wide acceptance among people with diabetes, what does this medical ecosystem look like in the future for patients, doctors, medical insurance providers, regulatory agencies, and medical system entrepreneurs? How might it positively or negatively affect the market for insulin pumps and CGMs?
  • Should IBM and Medtronic consider making their app available on an open-source basis to enable other individuals and groups of developers to improve it as well as develop additional new apps?
  • Whether and how will insurance policies, for both patients and manufacturers, deal with any potential liability that may arise if the app causes some unforeseen adverse effects? Will medical insurance even cover, encourage or discourage the use of such an app?
  • Will the data generated by the app ever be used in any unforeseen ways that could affect patients’ privacy? Would patients using the new app have to relinquish all rights and interests to their own BG data?
  • What other medical conditions might benefit from a similar type of real-time data, feedback and recommendation system?

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

"Synapse", Image by Allan Ajifo

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything and then where would he put it all?¹ This often comes to mind whenever I have experienced the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the Web, almost everyone I know has also experienced the same overwhelming experience whenever the amount of information they are inundated with everyday seems increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

A recent advancement is covered in a fascinating report on Phys.org on December 15, 2015 entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff, which describes the latest research into how this happens. I will summarize and annotate this, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet, no matter the font, or an individual, regardless of their clothing and grooming, is always recognizable. We can also identify objects even if our view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment throughout our lives.

A recent discovery by a team of researchers at the Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to machine learning³, data analysis and computer vision⁴. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason why humans can quickly make sense of a very complex and robust world is because, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests'”. These measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked them whether they could identify them once again when shown only a much smaller segment of the image. This led to one of their two principal discoveries: the test subjects required only 0.15% of the original data to repeat their identifications.
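For readers who want a concrete picture of random projection, here is a minimal sketch, assuming a small 150-pixel abstract image like those described above. The choice of a binary image and of four projections is illustrative only, not the study’s protocol.

```python
# Minimal sketch of the random projection idea: summarize a 150-pixel image
# with a handful of random linear projections instead of the full pixels.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 150                      # the paper's small abstract images
k = 4                               # keep only a few random projections

image = rng.integers(0, 2, size=n_pixels).astype(float)    # a random binary sketch
projection_matrix = rng.normal(size=(k, n_pixels)) / np.sqrt(n_pixels)
compressed = projection_matrix @ image                      # k numbers instead of 150

print("original size:", n_pixels, "-> projected size:", k)
print("projected representation:", np.round(compressed, 3))
```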

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics), to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, the two groups performed with nearly equal results. Moreover, these results proved that which data “will be the hardest to learn over time” can be predicted.

In testing their theory, the team prepared 3 different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) They then drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds, after which they were shown 16 of the random sketches. Dr. Vempala was “surprised by how close the performance was” of the humans and the neural network.
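To give a feel for that comparison, here is a minimal sketch, not the study’s actual protocol, in which a small neural network categorizes synthetic 150-pixel “images” once from the full pixels and once from a low-dimensional random projection of them. The two prototype categories and the noise level are assumptions chosen purely for illustration.

```python
# Minimal sketch: compare a simple neural network's accuracy on full 150-pixel
# synthetic images versus a small random projection of the same images.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_samples, k = 150, 2000, 8

# Two synthetic categories: noisy variations around two prototype images.
prototypes = rng.integers(0, 2, size=(2, n_pixels)).astype(float)
labels = rng.integers(0, 2, size=n_samples)
images = prototypes[labels] + rng.normal(0, 0.4, size=(n_samples, n_pixels))

projection = rng.normal(size=(n_pixels, k)) / np.sqrt(n_pixels)
projected = images @ projection

for name, X in [("full pixels", images), ("random projection", projected)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    acc = net.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy {acc:.2f}")
```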

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support that it might be a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augmented reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator, see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

New Startup’s Legal Research App is Driven by Watson’s AI Technology

"Supreme Court, 60 Centre Street, Lower Manhattan", Image by Jeffrey Zeldman

[New York] “Supreme Court, 60 Centre Street, Lower Manhattan”, Image by Jeffrey Zeldman

May 9, 2016: An update on this post appears below.


Casey Stengel had a very long, productive and colorful career in professional baseball as a player for five teams and later as a manager for four teams. He was also consistently quotable (although not to the extraordinary extent of his Yankee teammate Yogi Berra). Among the many things Casey said was his frequent use of the imperative “You could look it up”¹.

Transposing this gem of wisdom from baseball to law practice², looking something up has recently taken on an entirely new meaning. According to a fascinating article posted on Wired.com on August 8, 2015 entitled Your Lawyer May Soon Ask for This AI-Powered App for Legal Help by Davey Alba, a startup called ROSS Intelligence has created a unique new system for legal research. I will summarize, annotate and pose a few questions of my own.

One of the founders of ROSS, Jimoh Ovbiagele (@findingjimoh), was influenced by his childhood and adolescent experiences to pursue studying either law or computer science. He chose the latter and eventually ended up working on an artificial intelligence (AI) project at the University of Toronto. It occurred to him then that machine learning (a branch of AI), would be a helpful means to assist lawyers with their daily research requirements.

Mr. Ovbiagele joined with a group of co-founders from diverse fields including “law to computers to neuroscience” in order to launch ROSS Intelligence. The legal research app they have created is built upon the AI capabilities of IBM’s Watson as well as voice recognition. Since June, it has been tested in “small-scale pilot programs inside law firms”.

AI, machine learning, and IBM’s Watson technology have been variously taken up in these nine Subway Fold posts. Among them, the September 1, 2014 post entitled Possible Futures for Artificial Intelligence in Law Practice covered the possible legal applications of IBM’s Watson (prior to the advent of ROSS), and the technology of a startup called Viv Labs.

Essentially, the new ROSS app enables users to ask legal research questions in natural language. (See also the July 31, 2015 Subway Fold post entitled Watson, is That You? Yes, and I’ve Just Demo-ed My Analytics Skills at IBM’s New York Office.) Similar in operation to Apple’s Siri, when a question is verbally posed to ROSS, it searches through its database of legal documents to provide an answer along with the source documents used to derive it. The reply is also assessed and assigned a “confidence rating”. The app further prompts the user to evaluate the response’s accuracy with an onscreen “thumbs up” or “thumbs down”. The latter will prompt ROSS to produce another result.

Andrew Arruda (@AndrewArruda), another co-founder of ROSS, described the development process as beginning with a “blank slate” version of Watson into which they uploaded “thousands of pages of legal documents”, and then training their system to make use of Watson’s “question-and-answer APIs”³. Next, they added machine learning capabilities they called “LegalRank” (a reference to Google’s PageRank algorithm), which, among other things, designates preferential results depending upon the supporting documents’ numbers of citations and the deciding courts’ jurisdiction.
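ROSS has not published how LegalRank actually works, so the following is only an illustrative sketch of the kind of scoring the description above suggests: blending a text-match score with citation counts and a jurisdiction match. All field names, weights and example passages are hypothetical.

```python
# Illustrative sketch only, not ROSS's actual "LegalRank": rank candidate
# passages by a weighted blend of search relevance, citation count, and
# whether the deciding court sits in the questioner's jurisdiction.
from dataclasses import dataclass

@dataclass
class Result:
    passage: str
    citations: int       # how often the supporting case has been cited
    jurisdiction: str    # deciding court's jurisdiction
    relevance: float     # text-match score from the underlying search, 0..1

def legal_rank(results: list[Result], user_jurisdiction: str) -> list[Result]:
    def score(r: Result) -> float:
        jurisdiction_bonus = 1.0 if r.jurisdiction == user_jurisdiction else 0.0
        return (0.6 * r.relevance
                + 0.3 * min(r.citations, 100) / 100
                + 0.1 * jurisdiction_bonus)
    return sorted(results, key=score, reverse=True)

candidates = [
    Result("Discharge standard under s.523...", citations=85, jurisdiction="2d Cir.", relevance=0.72),
    Result("Automatic stay exceptions...", citations=12, jurisdiction="9th Cir.", relevance=0.81),
]
for r in legal_rank(candidates, user_jurisdiction="2d Cir."):
    print(r.passage)
```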

ROSS is currently concentrating on bankruptcy and insolvency issues. Mr. Ovbiagele and Mr. Arruda are sanguine about the possibilities of adding other practice areas to its capabilities. Furthermore, they believe that this would meaningfully reduce the $9.6 billion annually spent on legal research, some of which is presently being outsourced to other countries.

In another recent and unprecedented development, the global law firm Dentons has formed its own incubator for legal technology startups called NextLaw Labs. According to this August 7, 2015 news release on Dentons’ website, the first company they have signed up for their portfolio is ROSS Intelligence.

Although it might be too early to exclaim “You could look it up” at this point, my own questions are as follows:

  • What pricing model(s) will ROSS use to determine the cost structure of their service?
  • Will ROSS consider making its app available to public interest attorneys and public defenders who might otherwise not have the resources to pay for access fees?
  • Will ROSS consider making their service available to the local, state and federal courts?
  • Should ROSS make their service available to law schools or might this somehow impair their traditional teaching of the fundamentals of legal research?
  • Will ROSS consider making their service available to non-lawyers in order to assist them in representing themselves on a pro se basis?
  • In addition to ROSS, what other entrepreneurial opportunities exist for other legal startups to deploy Watson technology?

Finally, for an excellent roundup of five recent articles and blog posts about the prospects of Watson for law practice, I highly recommend a click-through to read Five Solid Links to Get Smart on What Watson Means for Legal, by Frank Strong, posted on The Business of Law Blog on August 11, 2015.


May 9, 2016 Update:  The global law firm of Baker & Hostetler, headquartered in Cleveland, Ohio, has become the first US AmLaw 100 firm to announce that it has licensed ROSS Intelligence’s AI product for its bankruptcy practice. The full details on this were covered in an article posted on May 6, 2016 entitled AI Pioneer ROSS Intelligence Lands Its First Big Law Clients by Susan Beck, on Law.com.

Some follow up questions:

  • Will other large law firms, as well as medium and smaller firms, and in-house corporate departments soon be following this lead?
  • Will they instead wait and see whether this produces tangible results for attorneys and their clients?
  • If so, what would these results look like in terms of the quality of legal services rendered, legal business development, client satisfaction, and/or the incentives for other legal startups to move into the legal AI space?

1.  This was also the title of one of his many biographies, written by Maury Allen and published by Times Books in 1979.

2.  For the best of both worlds, see the legendary law review article entitled The Common Law Origins of the Infield Fly Rule, by William S. Stevens, 123 U. Penn. L. Rev. 1474 (1975).

3.  For more details about APIs, see the July 2, 2015 Subway Fold post entitled The Need for Specialized Application Programming Interfaces for Human Genomics R&D Initiatives.