New IBM Watson and Medtronic App Anticipates Low Blood Glucose Levels for People with Diabetes

“Glucose: Ball-and-Stick Model”, Image by Siyavula Education

Can a new app, jointly developed by IBM with its Watson AI technology in partnership with the medical device maker Medtronic, provide a new form of support for people with diabetes by helping them safely avoid low blood glucose (BG) levels (a condition called hypoglycemia) in advance? If so, and assuming regulatory approval, this technology could potentially be a very significant boon to the care of this disease.

Basics of Managing Blood Glucose Levels

The daily management of diabetes involves a diverse mix of factors including, but not limited to, regulating insulin dosages, checking BG readings, measuring carbohydrate intakes at meals, gauging activity and exercise levels, and controlling stress levels. There is no perfect algorithm for doing this, as each person with this medical condition is different and each body reacts in its own way; patients must balance all of these factors while striving to maintain healthy short- and long-term control of BG levels.

Diabetes care today operates in a very data-driven environment. BG levels, expressed numerically, can be checked with a hand-held meter and test strips using a single drop of blood, or with a continuous glucose monitoring system (CGM). The latter consists of a thumb drive-size sensor attached with temporary adhesive to the skin, with a needle from this unit inserted just below the skin. This system provides patients with frequent real-time readings of their BG levels, and whether they are trending up or down, so they can adjust their medication accordingly. That is, for A grams of carbs and B amounts of physical activity and other contributing factors, C amount of insulin can be calculated and dispensed.
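The A-B-C arithmetic described above can be sketched in a few lines of Python. The function name, the carb ratio, and the correction factor below are all hypothetical placeholders (real values are prescribed per patient); this shows only the general shape of the calculation and is in no way dosing guidance.

```python
def suggest_bolus(carbs_g, current_bg, target_bg=110,
                  carb_ratio=10.0, correction_factor=40.0):
    """Illustrative-only estimate: insulin units to cover a meal plus a
    correction toward a target BG (mg/dL). All parameters here are
    hypothetical and vary per patient; never use for actual dosing."""
    meal_units = carbs_g / carb_ratio                  # cover the carbs
    correction = max(0.0, (current_bg - target_bg) / correction_factor)
    return round(meal_units + correction, 1)

# e.g. 60 g of carbs with a current BG of 150 mg/dL
print(suggest_bolus(60, 150))  # 7.0
```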

Insulin itself can be administered either manually by injection or by an insulin pump (also with a subcutaneously inserted needle). The latter of these consists of two devices: the pump itself, a small enclosed device (about the size of a pager) with an infusion needle placed under the patient’s skin, and a Bluetooth-enabled handheld device (that looks just like a smartphone) used to adjust the pump’s dosage and timing of insulin released. Some pump manufacturers are also bringing to market their latest generation of CGMs that integrate their data and command functions with their users’ smartphones.

(The links in the previous two paragraphs are to Wikipedia pages with detailed descriptions and photos of CGMs and insulin pumps. See also this June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” for another glucose sensing and insulin dispensing system under development.)

The trickiest part of all of these systems is maintaining levels of BG throughout each day that are within an acceptable range of values. High levels can result in a host of difficult symptoms. Hypoglycemic low levels can quickly become serious, manifesting as dizziness, confusion and other symptoms, and in extreme cases can ultimately lead to unconsciousness if not treated immediately.

New App for Predicting and Preventing Low Blood Glucose Levels

Taking this challenge to an entirely new level, at last week’s annual Consumer Electronics Show (CES) held in Las Vegas, IBM and Medtronic jointly announced their new app to predict hypoglycemic events in advance. The app is built upon Watson’s significant strengths in artificial intelligence (AI) and machine learning to sift through and intuit patterns in large volumes of data, in this case generated from Medtronic’s user base for their CGMs and insulin pumps. This story was covered in a most interesting article posted in The Washington Post on January 6, 2016 entitled IBM Extends Health Care Bet With Under Armour, Medtronic by Jing Cao and Michelle Fay Cortez. I will summarize and annotate this report and then pose some of my own questions.

The announcement and demo of this new app on January 6, 2016 at CES showed the process by which a patient’s data can be collected from their Medtronic devices and then combined with additional information from their wearable activity trackers and food intake. Next, all of this information is processed through Watson in order to “provide feedback” for the patient to “manage their diabetes”.

Present and Future Plans for The App and This Approach

Making the announcement were Virginia Rometty, Chairman, President and CEO of IBM, and Omar Ishrak, Chairman and CEO of Medtronic. The introduction of this technology is expected in the summer of 2016. It still needs to be submitted to the US government’s regulatory review process.

Ms. Rometty said that the capability to predict low BG events, in some cases up to three hours before they occur, is a “breakthrough”. She described Watson as “cognitive computing”, using algorithms to generate “prescriptive and predictive analysis”. The company is currently making a major strategic move into finding and facilitating applications and partners for Watson in the health care industry. (These eight Subway Fold posts cover various other systems and developments using Watson.)

Hooman Hakami, Executive VP and President of the Diabetes Group at Medtronic, described how his company is working to “anticipate” how the behavior of each person with diabetes affects their blood glucose levels. With this information, they can then “make choices to improve their health”. Here is the page from the company’s website about their partnership with IBM to work together on treating diabetes.

In the future, both companies are aiming to “give patients real-time information” on how their individual data is influencing their BG levels and “provide coaching” to assist them in making adjustments to keep their readings in a “healthy range”. In one scenario, patients might receive a text message that “they have an 85% chance of developing low blood sugar within an hour”. This will also include a recommendation to watch their readings and eat something to raise their BG back up to a safer level.
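The text-message scenario described above could, hypothetically, rest on a simple threshold rule over the model's predicted probability. The function, the message wording, and the 80% threshold below are all invented for illustration; the actual IBM/Medtronic logic has not been published.

```python
def hypo_alert(probability, horizon_minutes, threshold=0.80):
    """Return an alert string when the predicted chance of hypoglycemia
    crosses a threshold, else None. Threshold and wording are invented."""
    if probability >= threshold:
        return (f"{probability:.0%} chance of low blood sugar within "
                f"{horizon_minutes} minutes; check your readings and "
                "consider eating something.")
    return None

print(hypo_alert(0.85, 60))
print(hypo_alert(0.40, 60))  # None
```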

My Questions

  • Will this make patients more or less diligent in their daily care? Is there potential for patients to possibly assume less responsibility for their care if they sense that the management of their diabetes is running on a form of remote control? Alternatively, might this result in too much information for patients to manage?
  • What would be the possible results if this app is ever engineered to work in conjunction with the artificial pancreas project being led by Ed Damiano and his group of developers in Boston?
  • If this app receives regulatory approval and gains wide acceptance among people with diabetes, what does this medical ecosystem look like in the future for patients, doctors, medical insurance providers, regulatory agencies, and medical system entrepreneurs? How might it positively or negatively affect the market for insulin pumps and CGMs?
  • Should IBM and Medtronic consider making their app available on an open-source basis to enable other individuals and groups of developers to improve it as well as develop additional new apps?
  • How will insurance policies, for both patients and manufacturers, deal with any potential liability that may arise if the app causes some unforeseen adverse effects? Will medical insurance even cover, encourage or discourage the use of such an app?
  • Will the data generated by the app ever be used in any unforeseen ways that could affect patients’ privacy? Would patients using the new app have to relinquish all rights and interests to their own BG data?
  • What other medical conditions might benefit from a similar type of real-time data, feedback and recommendation system?

Semantic Scholar and BigDIVA: Two New Advanced Search Platforms Launched for Scientists and Historians

“The Chemistry of Inversion”, Image by Raymond Bryson

As powerful, essential and ubiquitous as Google and its search engine peers are across the world right now, needs often arise in many fields and marketplaces for platforms that can perform much deeper and wider digital excavating. So it is that two new highly specialized search platforms have just come online specifically engineered, in these cases, for scientists and historians. Each is structurally and functionally quite different from the other but nonetheless is aimed at very specific professional user bases with advanced researching needs.

These new systems provide uniquely enhanced levels of context, understanding and visualization with their results. We recently looked at a very similar development in the legal professions in an August 18, 2015 Subway Fold post entitled New Startup’s Legal Research App is Driven by Watson’s AI Technology.

Let’s have a look at both of these latest innovations and their implications. To introduce them, I will summarize and annotate two articles about their introductions, and then I will pose some additional questions of my own.

Semantic Scholar Searches for New Knowledge in Scientific Papers

First, the Allen Institute for Artificial Intelligence (A2I) has just launched its new system called Semantic Scholar, freely accessible on the web. This event was covered on NewScientist.com in a fascinating article entitled AI Tool Scours All the Science on the Web to Find New Knowledge on November 2, 2015 by Mark Harris.

Semantic Scholar is supported by artificial intelligence (AI)¹ technology. It is automated to “read, digest and categorise findings” from approximately two million scientific papers published annually. Its main objective is to assist researchers with generating new ideas and “to identify previously overlooked connections and information”. Because of the overwhelming volume of scientific papers published each year, which no individual scientist could possibly ever read, it offers an original architecture and high-speed manner to mine all of this content.

Oren Etzioni, the director of A2I, termed Semantic Scholar a “scientist’s apprentice”, to assist them in evaluating developments in their fields. For example, a medical researcher could query it about drug interactions in a certain patient cohort having diabetes. Users can also pose their inquiries in natural language format.

Semantic Scholar operates by executing the following functions:

  • crawling the web in search of “publicly available scientific papers”
  • scanning them into its database
  • identifying citations and references that, in turn, are assessed to determine those that are the most “influential or controversial”
  • extracting “key phrases” appearing in similar papers, and
  • indexing “the datasets and methods” used
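A toy sketch of the citation-counting and key-phrase steps listed above might look like the following; the paper schema and the "cited at least twice" cutoff for influence are invented purely for illustration.

```python
from collections import Counter

def index_papers(papers):
    """Toy indexer: count citations to flag 'influential' papers and
    tally key phrases across the corpus. The schema is invented."""
    citation_counts = Counter()
    phrase_counts = Counter()
    for paper in papers:
        for cited in paper["references"]:
            citation_counts[cited] += 1
        phrase_counts.update(paper["key_phrases"])
    # arbitrary illustrative cutoff: cited at least twice
    influential = [p for p, n in citation_counts.items() if n >= 2]
    return influential, phrase_counts

papers = [
    {"references": ["Paper A"], "key_phrases": ["glucose", "insulin"]},
    {"references": ["Paper A", "Paper B"], "key_phrases": ["glucose"]},
]
influential, phrases = index_papers(papers)
print(influential)         # ['Paper A']
print(phrases["glucose"])  # 2
```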

A2I is not alone in their objectives. Other similar initiatives include:

Semantic Scholar will gradually be applied to other fields such as “biology, physics and the remaining hard sciences”.

BigDIVA Searches and Visualizes 1,500 Years of History

The second innovative search platform is called Big Data Infrastructure Visualization Application (BigDIVA). The details about its development, operation and goals were covered in a most interesting report posted online on NC State News on October 12, 2015 entitled Online Tool Aims to Help Researchers Sift Through 15 Centuries of Data by Matt Shipman.

This is a joint project by the digital humanities scholars at NC State University and Texas A&M University. Its objective is to assist researchers in, among other fields, literature, religion, art and world history. This is done by increasing the speed and accuracy of searching through “hundreds of thousands of archives and articles” covering 450 A.D. to the present. BigDIVA was formally rolled out at NC State on October 16, 2015.

BigDIVA presents users with an entirely new visual interface, enabling them to search and review “historical documents, images of art and artifacts, and any scholarship associated” with them. Search results, organized by categories of digital resources, are displayed in infographic format⁴. The linked NC State News article includes a photo of this dynamic looking interface.

This system is still undergoing beta testing and further refinement by its development team. Expansion of its resources on additional historical periods is expected to be an ongoing process. Current plans are to make this system available on a subscription basis to libraries and universities.

My Questions

  • Might the IBM Watson, Semantic Scholar, DARPA and BigDIVA development teams benefit from sharing design and technical resources? Would scientists, doctors, scholars and others benefit from multi-disciplinary teams working together on future upgrades and perhaps even new platforms and interface standards?
  • What other professional, academic, scientific, commercial, entertainment and governmental fields would benefit from these highly specialized search platforms?
  • Would Google, Bing, Yahoo and other commercial search engines benefit from participating with the developers in these projects?
  • Would proprietary enterprise search vendors likewise benefit from similar joint ventures with the types of teams described above?
  • What entrepreneurial opportunities might arise for vendors, developers, designers and consultants who could provide fuller insight and support for developing customized search platforms?

 


October 19, 2017 Update: For the latest progress and applications of the Semantic Scholar system, see the latest report in a new post on the Economist.com entitled A Better Way to Search Through Scientific Papers, dated October 19, 2017.


1.  These 11 Subway Fold posts cover various AI applications and developments.

2.  These seven Subway Fold posts cover a range of IBM Watson applications and markets.

3.  A new history of DARPA written by Annie Jacobsen was recently published entitled The Pentagon’s Brain (Little Brown and Company, 2015).

4.  See this January 30, 2015 Subway Fold post entitled Timely Resources for Studying and Producing Infographics on this topic.

New Startup’s Legal Research App is Driven by Watson’s AI Technology

[New York] “Supreme Court, 60 Centre Street, Lower Manhattan”, Image by Jeffrey Zeldman

May 9, 2016: An update on this post appears below.


Casey Stengel had a very long, productive and colorful career in professional baseball as a player for five teams and later as a manager for four teams. He was also consistently quotable (although not to the extraordinary extent of his Yankee teammate Yogi Berra). Among the many things Casey said was his frequent use of the imperative “You could look it up”¹.

Transposing this gem of wisdom from baseball to law practice², looking something up has recently taken on an entirely new meaning. According to a fascinating article posted on Wired.com on August 8, 2015 entitled Your Lawyer May Soon Ask for This AI-Powered App for Legal Help by Davey Alba, a startup called ROSS Intelligence has created a unique new system for legal research. I will summarize, annotate and pose a few questions of my own.

One of the founders of ROSS, Jimoh Ovbiagele (@findingjimoh), was influenced by his childhood and adolescent experiences to pursue studying either law or computer science. He chose the latter and eventually ended up working on an artificial intelligence (AI) project at the University of Toronto. It occurred to him then that machine learning (a branch of AI), would be a helpful means to assist lawyers with their daily research requirements.

Mr. Ovbiagele joined with a group of co-founders from diverse fields including “law to computers to neuroscience” in order to launch ROSS Intelligence. The legal research app they have created is built upon the AI capabilities of IBM’s Watson as well as voice recognition. Since June, it has been tested in “small-scale pilot programs inside law firms”.

AI, machine learning, and IBM’s Watson technology have been variously taken up in these nine Subway Fold posts. Among them, the September 1, 2014 post entitled Possible Futures for Artificial Intelligence in Law Practice covered the possible legal applications of IBM’s Watson (prior to the advent of ROSS), and the technology of a startup called Viv Labs.

Essentially, the new ROSS app enables users to ask legal research questions in natural language. (See also the July 31, 2015 Subway Fold post entitled Watson, is That You? Yes, and I’ve Just Demo-ed My Analytics Skills at IBM’s New York Office.) Similar in operation to Apple’s Siri, when a question is verbally posed to ROSS, it searches through its database of legal documents to provide an answer along with the source documents used to derive it. The reply is also assessed and assigned a “confidence rating”. The app further prompts the user to evaluate the response’s accuracy with an onscreen “thumbs up” or “thumbs down”. The latter will prompt ROSS to produce another result.
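A minimal sketch of the described answer-plus-confidence loop, assuming an invented candidate format: the top-scoring answer is shown first, and a "thumbs down" surfaces the next-best result. ROSS's actual scoring is not public; this only illustrates the interaction pattern.

```python
def rank_answers(candidates):
    """Order candidate answers by an (invented) confidence score,
    highest first, mimicking the described confidence rating."""
    return sorted(candidates, key=lambda c: c["confidence"], reverse=True)

results = rank_answers([
    {"answer": "Case X", "confidence": 0.72},
    {"answer": "Case Y", "confidence": 0.91},
])
best = results[0]      # shown to the user first
fallback = results[1]  # produced after a "thumbs down"
print(best["answer"])  # Case Y
```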

Andrew Arruda (@AndrewArruda), another co-founder of ROSS, described the development process as beginning with a “blank slate” version of Watson into which they uploaded “thousands of pages of legal documents”, and trained their system to make use of Watson’s “question-and-answer APIs”³. Next, they added machine learning capabilities they called “LegalRank” (a reference to Google’s PageRank algorithm), which, among other things, designates preferential results depending upon the supporting documents’ numbers of citations and the deciding courts’ jurisdiction.
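In the same spirit, a ranking that weights citation counts and boosts the searcher's jurisdiction, as LegalRank is described as doing, might be sketched as follows; the 1.5× boost, the document fields, and the case names are purely illustrative assumptions, not ROSS's actual formula.

```python
def legalrank_score(doc, preferred_jurisdiction):
    """Hypothetical score in the spirit of 'LegalRank': weight by
    citation count, boosted when the deciding court sits in the
    searcher's jurisdiction. The 1.5x boost is an invented weight."""
    score = float(doc["citations"])
    if doc["jurisdiction"] == preferred_jurisdiction:
        score *= 1.5
    return score

docs = [
    {"name": "In re Alpha", "citations": 40, "jurisdiction": "NY"},
    {"name": "In re Beta", "citations": 50, "jurisdiction": "CA"},
]
ranked = sorted(docs, key=lambda d: legalrank_score(d, "NY"), reverse=True)
print([d["name"] for d in ranked])  # ['In re Alpha', 'In re Beta']
```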

ROSS is currently concentrating on bankruptcy and insolvency issues. Mr. Ovbiagele and Mr. Arruda are sanguine about the possibilities of adding other practice areas to its capabilities. Furthermore, they believe that this would meaningfully reduce the $9.6 billion annually spent on legal research, some of which is presently being outsourced to other countries.

In another recent and unprecedented development, the global law firm Dentons has formed its own incubator for legal technology startups called NextLaw Labs. According to this August 7, 2015 news release on Denton’s website, the first company they have signed up for their portfolio is ROSS Intelligence.

Although it might be too early to exclaim “You could look it up” at this point, my own questions are as follows:

  • What pricing model(s) will ROSS use to determine the cost structure of their service?
  • Will ROSS consider making its app available to public interest attorneys and public defenders who might otherwise not have the resources to pay for access fees?
  • Will ROSS consider making their service available to the local, state and federal courts?
  • Should ROSS make their service available to law schools or might this somehow impair their traditional teaching of the fundamentals of legal research?
  • Will ROSS consider making their service available to non-lawyers in order to assist them in representing themselves on a pro se basis?
  • In addition to ROSS, what other entrepreneurial opportunities exist for other legal startups to deploy Watson technology?

Finally, for an excellent roundup of five recent articles and blog posts about the prospects of Watson for law practice, I highly recommend a click-through to read Five Solid Links to Get Smart on What Watson Means for Legal, by Frank Strong, posted on The Business of Law Blog on August 11, 2015.


May 9, 2016 Update:  The global law firm of Baker & Hostetler, headquartered in Cleveland, Ohio, has become the first US AmLaw 100 firm to announce that it has licensed ROSS Intelligence’s AI product for its bankruptcy practice. The full details on this were covered in an article posted on May 6, 2016 entitled AI Pioneer ROSS Intelligence Lands Its First Big Law Clients by Susan Beck, on Law.com.

Some follow up questions:

  • Will other large law firms, as well as medium and smaller firms, and in-house corporate departments soon be following this lead?
  • Will they instead wait and see whether this produces tangible results for attorneys and their clients?
  • If so, what would these results look like in terms of the quality of legal services rendered, legal business development, client satisfaction, and/or the incentives for other legal startups to move into the legal AI space?

1.  This was also the title of one of his many biographies, written by Maury Allen and published by Times Books in 1979.

2.  For the best of both worlds, see the legendary law review article entitled The Common Law Origins of the Infield Fly Rule, by William S. Stevens, 123 U. Penn. L. Rev. 1474 (1975).

3.  For more details about APIs, see the July 2, 2015 Subway Fold post entitled The Need for Specialized Application Programming Interfaces for Human Genomics R&D Initiatives.

Watson, is That You? Yes, and I’ve Just Demo-ed My Analytics Skills at IBM’s New York Office

My photo of the entrance to IBM’s office at 590 Madison Avenue in New York, taken on July 29, 2015.

I don’t know if my heart can take this much excitement. Yesterday morning, on July 29, 2015, I attended a very compelling presentation and demo of IBM’s Watson technology. (This AI-driven platform has been previously covered in these five Subway Fold posts.) Just the night before, I saw a demo of some ultra-cool new augmented reality systems.

These experiences combined to make me think of the evocative line from Supernaut by Black Sabbath with Ozzy belting out “I’ve seen the future and I’ve left it behind”. (Incidentally, this prehistoric metal classic also has, IMHO, one of the most infectious guitar riffs with near warp speed shredding ever recorded.)

Yesterday’s demo of Watson Analytics, one key component among several on the platform, was held at IBM’s office in the heart of midtown Manhattan at 590 Madison Avenue and 57th Street. The company very graciously put this on for free. All three IBM employees who spoke were outstanding in their mastery of the technology, enthusiasm for its capabilities, and informative Q&A interactions with the audience. Massive kudos to everyone involved at the company in making this happen. Thanks, too, to all of the attendees who asked such excellent questions.

Here is my summary of the event:

Part 1: What is Watson Analytics?

The first two speakers began with a fundamental truth about all organizations today: They have significant quantities of data that are driving all operations. However, a bottleneck often occurs when business users understand this but do not have the technical skills to fully leverage it while, correspondingly, IT workers do not always understand the business context of the data. As a result, business users have avenues they can explore but not the best or most timely means to do so.

This is where Watson can be introduced because it can make these business users self-sufficient with an accessible, extensible and easier to use analytics platform. It is, as one of the speakers said, “self-service analytics in the cloud”. Thus, Watson’s constituents can be seen as follows:

  • “What” is how to discover and define business problems.
  • “Why” is to understand the existence and nature of these problems.
  • “How” is to share this process in order to effect change.

However, Watson is specifically not intended to be a replacement for IT in any way.

Also, one of Watson’s key capabilities is enabling users to pursue their questions by using a natural language dialog. This involves querying Watson with questions posed in ordinary spoken terms.

Part 2: A Real World Demo Using Airline Customer Data

Taken directly from the world of commerce, the IBM speakers presented a demo of Watson Analytics’ capabilities by using a hypothetical situation in the airline industry. This involved a business analyst in the marketing department for an airline who was given a compilation of market data prepared by a third-party vendor. The business analyst was then tasked by his manager with researching and planning how to reduce customer churn.

Next, by enlisting Watson Analytics for this project, the two central issues became how the data could be:

  • Better understood, leveraged and applied to increase customers’ positive opinions while simultaneously decreasing defections to the airline’s competitors.
  • Comprehensively modeled in order to understand the elements of the customer base’s satisfaction, or lack thereof, with the airline’s services.

The speakers then put Watson Analytics through its paces up on large screens for the audience to observe and ask questions. The goal of this was to demonstrate how the business analyst could query Watson Analytics and, in turn, the system would provide alternative paths to explore the data in search of viable solutions.

Included among the variables that were dexterously tested and spun into enlightening interactive visualizations were:

  • Satisfaction levels by other peer airlines and the hypothetical Watson customer airline
  • Why customers are, and are not, satisfied with their travel experience
  • Airline “status” segments such as “platinum” level flyers who pay a premium for additional select services
  • Types of travel including for business and vacation
  • Other customer demographic points

The results of this exercise, as they appeared onscreen, showed how Watson could, with its unique architecture and tool set:

  • Generate “guided suggestions” using natural language dialogs
  • Identify and test all manner of connections among the population of data
  • Use predictive analytics to make business forecasts¹
  • Calculate a “data quality score” to assess the quality of the data upon which business decisions are based
  • Map out a wide variety of data dashboards and reports to view and continually test the data in an effort to “tell a story”
  • Integrate an extensible set of analytical and graphics tools to sift through large data sets from relevant Twitter streams²
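As one illustration of the "data quality score" idea in the list above, the toy function below scores a data set by its share of non-missing cells. The real Watson Analytics metric is proprietary, so this stand-in is only an assumption about the general concept.

```python
def data_quality_score(rows):
    """Toy stand-in for a 'data quality score': the percentage of
    non-missing cells across all rows. The real metric is proprietary."""
    total = sum(len(row) for row in rows)
    filled = sum(1 for row in rows
                 for value in row.values() if value not in (None, ""))
    return round(100 * filled / total)

rows = [
    {"airline": "X", "satisfaction": 4, "segment": "platinum"},
    {"airline": "X", "satisfaction": None, "segment": ""},
]
print(data_quality_score(rows))  # 67
```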

Part 3: The Development Roadmap

The third and final IBM speaker outlined the following paths for Watson Analytics that are currently in beta stage development:

  • User engagement developers are working on an updated visual engine, increased connectivity and capabilities for mobile devices, and social media commentary.
  • Collaboration developers are working on accommodating work groups and administrators, and dashboards that can be filtered and distributed.
  • Data connector developers are working on new data linkages, improving the quality and shape of connections, and increasing the degrees of confidence in predictions. For example, a connection to weather data is underway that would be very helpful to the airline (among other industries), in the above hypothetical.
  • New analytics developers are working on new functionality for business forecasting, time series analyses, optimization, and social media analytics.

Everyone in the audience, judging by the numerous informal conversations that quickly formed in the follow-up networking session, left with much to consider about the potential applications of this technology.


1.  Please see these six Subway Fold posts covering predictive analytics in other markets.

2.  Please see these ten Subway Fold posts for a variety of other applications of Twitter analytics.

 

How Robots and Computer Algorithms are Challenging Jobs and the Economy

"p8nderInG exIstence", Image by JD Hancock

“p8nderInG exIstence”, Image by JD Hancock

A Silicon Valley entrepreneur named Martin Ford (@MFordFuture) has written a very timely new book entitled Rise of the Robots: Technology and the Threat of a Jobless Future (Basic Books, 2015), which is currently receiving much attention in the media. The depth and significance of the critical issues it raises is responsible for this wide-beam spotlight.*

On May 27, 2015 the author was interviewed on The Brian Lehrer Show on radio station WNYC in New York. The result is available as a truly captivating 30-minute podcast entitled When Will Robots Take Your Job?  I highly recommend listening to this in its entirety. I will sum up, annotate and add some questions of my own to this.

The show’s host, Brian Lehrer, expertly guided Mr. Ford through the key complexities and subtleties of the thesis of his provocative new book. First, for now and increasingly in the future, robots and AI algorithms are taking on ever more difficult tasks, displacing human workers in the process. The more repetitive and routine the tasks a job involves, the more likely it is that machines will replace the human workers performing them. This will not occur in just one sector, but rather “across the board” in all areas of the marketplace. For example, IBM’s Watson technology can be accessed using natural language which, in the future, might result in humans no longer being able to recognize its responses as coming from a machine.

Mr. Ford believes we are moving towards an economic model where productivity is increasing but jobs and income are decreasing. He asserts that solving this dilemma will be critical. Consequently, his second key point was the challenge of detaching work from income. He is proposing the establishment of some form of system where income is guaranteed. He believes this would still support Capitalism and would “produce plenty of income that could be taxed”. No nation is yet moving in this direction, but he thinks that Europe might be more amenable to it in the future.

He further believes that the US will be most vulnerable to displacement of workers because it leads the world in the use of technology but “has no safety net” for those who will be put out by this phenomenon. (For a more positive perspective on this, see the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Brian Lehrer asked his listeners to go to a specific page on the site of the regular podcast called Planet Money on National Public Radio. (“NPR” is the network of publicly supported radio stations that includes WNYC). This page entitled Will Your Job be Done by a Machine? displays a searchable database of job titles and the corresponding chance that each will be replaced by automation. Some examples that were discussed included:

  • Real estate agents with an 86.4% chance
  • Financial services workers with a 23% chance
  • Software developers with a 12.8% chance

Then the following six listeners called in to speak with Mr. Ford:

  • Caller 1 asked about finding a better way to get income to the population beyond the job market. This was squarely on point with Mr. Ford’s first main point about decoupling income and jobs. He was not advocating for somehow trying to stop technological progress. However, he reiterated how machines are “becoming autonomous workers, no longer just tools”.
  • Caller 2 asked whether Mr. Ford had seen a YouTube video entitled Humans Need Not Apply. Mr. Ford had seen it and recommended it. The caller said that the most common reply he has heard to this video (which tracks very closely with many of Mr. Ford’s themes) was, wrongly in his opinion, that “People will do something else”. Mr. Ford replied that people must find other things that they can get paid to do. The caller also said that machines had made it much easier and more economical for him to compose and record his own music.
  • Caller 3 raised the topic of automation in the medical profession. Specifically, whether IBM’s Watson could one day soon replace doctors. Mr. Ford believes that Watson will have an increasing effect here, particularly in fields such as radiology. However, it will have a lesser impact in those specialties where doctors and patients need to interact more with each other. (See also these three recent Subway Fold posts on the applications of Watson to TED Talks, business apps and the legal profession.)
  • Caller 4 posited that only humans can conceive ideas and be original. He asked how computers can identify patterns for which they have not been programmed, citing the example of the accidental discovery of penicillin. Mr. Ford replied that machines will not replace scientists but they can replace service workers. Therefore, he is “more worried about the average person”. Brian Lehrer then asked him about driverless cars and, perhaps, even driverless Uber cabs one day. Mr. Ford answered that although expectations are high that this will eventually happen, he is concerned that taxi drivers will lose their jobs. (See this September 15, 2014 Subway Fold post on Uber and the “sharing economy”.) Which led to …
  • Caller 5, who is currently a taxi driver in New York. They discussed how, in particular, many types of drivers who drive for commerce are facing this possibility. Brian Lehrer followed up by asking whether this may somehow lead to the end of Capitalism. Mr. Ford replied that Capitalism “can continue to work” but it must somehow “adapt to new laws and circumstances”.
  • Caller 6 inquired about one of the proposals raised in VR pioneer Jaron Lanier’s book entitled Who Owns the Future (Simon & Schuster, 2013), whereby people could perhaps be paid for the information they provide online. This might be a possible means to financially assist people in the future. Mr. Ford’s response was that while it was “an interesting idea” it would be “difficult to implement”. As well, he believes that Google would resist it. He made a further distinction between his concept of guaranteed income and Lanier’s proposal insofar as he believes that “Capitalism can adapt” more readily to his concept. (I also highly recommend Lanier’s book for its originality and deep insights.)

Brian Lehrer concluded by raising the prospect of self-aware machines. He noted that Bill Gates and Stephen Hawking had recently warned about this possibility. Mr. Ford responded that “we are too far from this now”. For him, today’s concern is on automation’s threat to jobs, many of which are becoming easier to reduce to a program.

To say the very least, to my own organic and non-programmatic way of thinking, this was an absolutely mind-boggling discussion. I greatly look forward to watching this topic continue to gather momentum and receive expanded media coverage.

My own questions include:

  • How should people at the beginning, middle and end of their careers be advised and educated to adapt to these rapid changes so that they can not only survive, but rather, thrive within them?
  • What role should employers, employees, educators and the government take, in any and all fields, to keep the workforce up-to-date in the competencies they will need to continue to be valuable contributors?
  • Are the challenges of automation most efficiently met on the global, national and/or local levels by all interested constituencies working together? What forms should their cooperation take?

*  For two additional book reviews I recommend reading ‘Rise of the Robots’ and ‘Shadow Work’ by Barbara Ehrenreich in the May 11, 2015 edition of The New York Times, and Soon They’ll Be Driving It, Too by Sumit Paul-Choudhury in the May 15, 2015 edition of The Wall Street Journal (subscription required).

IBM’s Watson is Now Data Mining TED Talks to Extract New Forms of Knowledge

“sydneytocairns_385”, Image by Daniel Dimarco

Who really benefited from the California Gold Rush of 1849? Was it the miners, only some of whom were successful, or the merchants who sold them their equipment? Historians have differed as to the relative degree, but they largely believe it was the merchants.

Today, it seems we have somewhat of a modern analog to this in our very digital world: The gold rush of 2015 is populated by data miners and IBM is providing them with access to its innovative Watson technology in order for these contemporary prospectors to discover new forms of knowledge.

So then, what happens when Watson is deployed to sift through the thousands of incredibly original and inspiring videos of online TED Talks? Can the results be such that TED can really talk and, when processed by Watson, yield genuine knowledge with meaning and context?

Last week, the extraordinary results of this were on display at the four-day World of Watson exposition here in New York. A fascinating report on it entitled How IBM Watson Can Mine Knowledge from TED Talks by Jeffrey Coveyduc, Director, IBM Watson, and Emily McManus, Editor, TED.com was posted on the TED Blog on May 5, 2015. This was the same day that the newfangled Watson + TED system was introduced at the event. The story also includes a captivating video of a prior 2014 TED Talk by Dario Gil of IBM entitled Cognitive Systems and the Future of Expertise that came to play a critical role in launching this undertaking.

Let’s have a look and see what we can learn from the initial results. I will sum up and annotate this report, and then ask a few additional questions.

One of the key objectives of this new system is to enable users to query it in natural language. An example given in the article is “Will new innovations give me a longer life?”. Thus, users can ask questions about ideas expressed among the full database of TED talks and, for the results, view video excerpts where such ideas have been explored. Watson’s results are further accompanied by a “timeline” of related concepts contained in a particular video clip permitting users to “tunnel sideways” if they wish and explore other topics that are “contextually related”.
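IBM has not published the pipeline behind this natural-language querying, but purely as an illustrative stand-in, one could imagine scoring transcript segments by word overlap with the question and returning the best matches, analogous to surfacing relevant video excerpts (the sample segments below are invented):

```python
from collections import Counter

# Minimal illustrative sketch -- not Watson's actual method. Score
# transcript segments by shared-word count with a natural-language query
# and return the best-matching segments, analogous to surfacing relevant
# video excerpts from a talk database.
def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def top_segments(query, segments, k=2):
    q = Counter(tokenize(query))
    scored = []
    for seg in segments:
        s = Counter(tokenize(seg))
        overlap = sum((q & s).values())  # count of words shared with the query
        scored.append((overlap, seg))
    scored.sort(key=lambda t: -t[0])
    return [seg for overlap, seg in scored[:k] if overlap > 0]

segments = [
    "New innovations in medicine may extend the human lifespan.",
    "This talk covers the history of jazz music.",
    "Gene therapy innovations could give patients a longer life.",
]
print(top_segments("Will new innovations give me a longer life?", segments))
```

The query from the article ranks the two medically relevant segments above the unrelated one; a real system would of course go far beyond literal word overlap.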

The rest of the article is a dialog between the project’s leaders Jeffrey Coveyduc from IBM and TED.com editor Emily McManus that took place at World of Watson. They discussed how this new idea was transformed into a “prototype” of a fresh new means to extract “insights” from within “unstructured video”.

Ms. McManus began by recounting how she had attended Mr. Gil’s TED Talk about cognitive computing. Her admiration of his presentation led her to wonder whether Watson could be applied to TED Talks’ full content whereby users would be able to pose their own questions to it in natural language. She asked Mr. Gil if this might be possible.

Mr. Coveyduc said that Mr. Gil then approached him to discuss the proposed project. They agreed that it was not just the content per se, but rather, TED’s mission of spreading ideas that was so compelling. Because one of Watson’s key objectives is to “extract knowledge” that’s meaningful to the user, it thus appeared to be “a great match”.

Ms. McManus mentioned that TED Talks maintains an application programming interface (API) to assist developers in accessing their nearly 2,000 videos and transcripts. She agreed to provide access to TED’s voluminous content to IBM. The company assembled its multidisciplinary project team in about eight weeks.

They began with no preconceptions as to where their efforts would lead. Mr. Coveyduc said they “needed the freedom to be creative”. They drew from a wide range of Watson’s existing technical services. In early iterations of their work they found that “ideas began to group themselves”. In turn, this led them to “new insights” within TED’s vast content base.

Ms. McManus recently received a call from Mr. Gil asking her to stop by his office in New York. He demoed the new system, which had completely indexed the TED content. Moreover, he showed how it could display, according to her, “a universe of concepts extracted” from the content’s core. Next, using the all-important natural language capabilities to pose questions, they demonstrated how the results, in the form of numerous short clips, taken altogether compiled “a nuanced and complex answer to a big question”, as she described it.
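Watson’s actual concept extraction is proprietary and far more sophisticated, but a crude sketch of pulling a “universe of concepts” out of a transcript might simply rank terms by frequency after filtering out common words (the sample transcript and stop-word list below are invented for illustration):

```python
from collections import Counter

# Illustrative only: a crude stand-in for extracting the main concepts
# from a talk transcript, using term frequency with a stop-word filter.
STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "we", "it", "that"}

def top_concepts(transcript, n=3):
    words = [w.strip(".,?!").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w not in STOP and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]

talk = ("Cognitive systems learn from data. Cognitive systems augment "
        "human expertise, and expertise grows as the systems learn.")
print(top_concepts(talk))
```

Even this naive ranking surfaces “systems”, “cognitive” and “learn” as the dominant terms; production systems add entity recognition, disambiguation and cross-talk linking on top of this kind of signal.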

Mr. Coveyduc believes this new system simplifies how users can inspect and inquire about “diverse expertise and viewpoints” expressed in video. He cited other potential areas of exploration such as broadcast journalism and online courses (also known as MOOCs*). Furthermore, the larger concept underlying this project is that Watson can distill the major “ideas and concepts” of each TED Talk and thus give users the knowledge they are seeking.

Going beyond Watson + TED’s accomplishments, he believes that video search remains quite challenging but this project demonstrates it can indeed be done. As a result, he thinks that mining such deep and wide knowledge within massive video libraries may turn into “a shared source of creativity and innovation”.

My questions are as follows:

  • What if Watson was similarly applied to the vast troves of video classes used by professionals to maintain their ongoing license certifications in, among others, law, medicine and accounting? Would new forms of potentially applicable and actionable knowledge emerge that would benefit these professionals as well as the consumers of their services? Rather than restricting Watson to processing the video classes of each profession separately, what might be the results of instead processing them together in various combinations and permutations?
  • What if Watson was configured to process the video repositories of today’s popular MOOC providers such as Coursera or edX? The same goes for universities around the world that are putting their classes online. Their missions are more or less the same in enabling remote learning across the web in a multitude of subjects. The results could possibly hold new revelations about subjects that no one can presently discern.

Two other recent Subway Fold posts that can provide additional information, resources and questions that I suggest checking out include Artificial Intelligence Apps for Business are Approaching a Tipping Point posted on March 31, 2015, and Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World posted on December 27, 2014.


*  See the September 18, 2014 Subway Fold post entitled A Real Class Act: Massive Open Online Courses (MOOCs) are Changing the Learning Process for the full details and some supporting links.

Artificial Intelligence Apps for Business are Approaching a Tipping Point

“Algorithmic Contaminations”, Image by Derek Gavey

There have been many points during the long decades of the development of business applications using artificial intelligence (AI) when it appeared that the Rubicon was about to be crossed. That is, this technology often seemed to be right on the verge of going mainstream in global commerce. Yet it has still not achieved a pervasive critical mass despite the vast resources and best intentions behind it.

Today, with the advent of big data and analytics and their many manifestations¹ spreading across a wide spectrum of industries, AI is now closer than ever to reaching such a tipping point. Consultant, researcher and writer Brad Power makes a timely and very persuasive case for this in a highly insightful and informative article entitled Artificial Intelligence Is Almost Ready for Business, posted on the Harvard Business Review site on March 19, 2015. I will summarize some of the key points, add some links and annotations, and pose a few questions.

Mr. Power sees AI being brought to this threshold by the convergence of rapidly increasing tech sophistication, “smarter analytics engines, and the surge in data”. Further adding to this mix are the incursion and growth of the Internet of Things (IoT), better means of analyzing “unstructured” data, and the extensive categorization and tagging of data. Furthermore, there is the dynamic development and application of smarter algorithms to discern complex patterns in data and to generate increasingly accurate predictive models.

So, too, does machine learning² play a highly significant role in AI applications. It can be used to generate “thousands of models a week”. For example, a model premised upon machine learning can be used to select which ads should be placed on what websites within milliseconds in order to achieve the greatest effectiveness in reaching an intended audience. DataXu is one of the model-generating firms in this space.
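DataXu’s models are proprietary; purely as an illustration of the kind of millisecond-scale placement decision described above, a tiny logistic model could score candidate ad placements by predicted click probability (all feature names and weights below are made up):

```python
import math

# Hypothetical sketch -- not DataXu's system. A tiny logistic model
# scores each candidate placement by predicted click probability, and
# the highest-scoring placement wins the ad slot.
def predict_ctr(weights, features, bias=0.0):
    z = bias + sum(weights.get(f, 0.0) * v for f, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic function maps score to (0, 1)

# Made-up weights, standing in for a model learned offline by an ML pipeline.
weights = {"site_sports": 1.2, "site_news": 0.3, "ad_shoes": 0.8}

candidates = {
    "sports_page": {"site_sports": 1.0, "ad_shoes": 1.0},
    "news_page": {"site_news": 1.0, "ad_shoes": 1.0},
}
best = max(candidates, key=lambda c: predict_ctr(weights, candidates[c], bias=-2.0))
print(best)
```

The point of the sketch is the architecture, not the arithmetic: scoring a handful of dot products takes microseconds, which is what makes per-request ad selection within milliseconds feasible, while the “thousands of models a week” live in the offline pipeline that keeps refreshing the weights.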

Tom Davenport, a professor at Babson College and an analytics expert³, was one of the experts interviewed by Power for this article. To paraphrase part of his quote, he believes that AI and machine learning would be useful adjuncts to human analysts (often referred to as “quants”⁴). Such living experts can far better understand what goes into and comes out of a model than a machine learning app alone. In turn, these people can persuade business managers to apply such “analytical insights” to actual business processes.

AI can also now produce greater competitive efficiencies by closing the time gap between analyzing vast troves of data at high speeds and decision-making on how to apply the results.

IBM, one of the leading integrators of AI, has recently invested $1B in the creation of their Watson Group, dedicated to exploring and leveraging commercial applications for Watson technology. (X-ref to the December 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice for a previous mention and links concerning Watson.) This AI technology is currently finding significant applications in:

  • Health Care: Due to Watson’s ability to process large, complex and dynamic quantities of text-based data, it can, in turn, “generate and evaluate hypotheses”. With specialized training, these systems can then make recommendations about treating particular patients. A number of elite medical teaching institutions in the US are currently engaging with IBM to deploy Watson to “better understand patients’ diseases” and recommend treatments.
  • Finance: IBM is presently working with 45 companies on apps including “digital virtual agents” to work with their clients in a more “personalized way”; a “wealth advisor” for financial planning⁵; and “risk and compliance management”. For example, USAA provides financial services to active members of the military and to veterans. Watson is being used to provide a range of financial support functions to soldiers as they transition to civilian status.
  • Startups: The company has designated $100 million for introducing Watson into startups. An example is WayBlazer which, according to its home page, is “an intelligence search discovery system” to assist travelers throughout all aspects of their trips. This online service is designed to be an easy-to-use series of tools to provide personalized answers and support for all sorts of journeys. At the very bottom of their home page on the left-hand side are the words “Powered by IBM Watson”.

To get a sense of the trends and future of AI in business, Power spoke with the following venture capitalists who are knowledgeable about commercial AI systems:

  • Mark Gorenberg, Managing Director at Zetta Venture Partners, which invests in big data and analytics startups, believes that AI is an “embedded technology”. It is akin to adding “a brain” – – in the form of cognitive computing – – to an application through the use of machine learning.
  • Promod Haque, senior managing partner at Norwest Venture Partners, believes that when systems can draw correlations and construct models on their own, labor is reduced and greater speed is achieved. As a result, a system such as Watson can be used to automate analytics.
  • Manoj Saxena, a venture capitalist (formerly with IBM), sees analytics migrating to the “cognitive cloud”, a virtual place where vast amounts of data from various sources will be processed in such a manner as to “deliver real-time analytics and learning”. In effect, this will promote smoother integration of data with analytics, something that still remains challenging. He is an investor in a startup called Cognitive Scale working in this space.
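As a minimal sketch of the sort of correlation such a system might draw on its own, without an analyst in the loop (the two metric series below are invented for illustration), a Pearson correlation between two business metrics can be computed directly:

```python
import math

# Minimal sketch of one automated-analytics step: measure how strongly
# two business metrics move together via the Pearson correlation
# coefficient, which ranges from -1 (inverse) to +1 (direct).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example data: weekly ad spend versus weekly sales.
ad_spend = [10, 20, 30, 40, 50]
sales = [12, 24, 33, 41, 55]
print(round(pearson(ad_spend, sales), 3))
```

A system scanning thousands of metric pairs this way could flag strong relationships for modeling automatically, which is the labor and speed advantage Mr. Haque describes.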

My own questions (not derived through machine learning), are as follows:

  • Just as Watson has begun to take root in the medical profession as described above, will it likewise begin to propagate across the legal profession? For a fascinating analysis as a starting point, I highly recommend 10 Predictions About How IBM’s Watson Will Impact the Legal Profession, by Paul Lippe and Daniel Katz, posted on the ABA Journal website on October 4, 2014. I wonder whether the installation of Watson in law offices will take on other manifestations that cannot even be foreseen until the systems are fully integrated and running. Might the Law of Unintended Consequences also come into play and produce some negative results?
  • What other professions, industries and services might also be receptive to the introduction of AI apps that have not even considered it yet?
  • Does the implementation of AI always produce reductions in jobs or is this just a misconception? Are there instances where it could increase the number of jobs in a business? What might be some of the new types of jobs that could result? How about AI Facilitator, AI Change Manager, AI Instructor, AI Project Manager, AI Fun Specialist, Chief AI Officer,  or perhaps AI Intrapreneur?

______________________

1.  There are 27 Subway Fold posts in the category of Big Data and Analytics.

2.  See the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World and the December 10, 2014 post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market? for specific examples of machine learning.

3.  I had the great privilege of reading one of Mr. Davenport’s very insightful and enlightening books entitled Competing on Analytics: The New Science of Winning (Harvard Business Review Press, 2007), when it was first published. I learned a great deal from it and this book was responsible for my initial interest in the applications of analytics in commerce. Although big data and analytics have grown exponentially since its publication, I still highly recommend this book for its clarity, usefulness and enthusiasm for this field.

4.  For a terrific and highly engaging study of the work and influence of these analysts, I also recommend reading The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It (Crown Business, 2011), by Scott Patterson.

5.  There was a most interesting side-by-side comparison of human versus automated financial advisors entitled Robo-Advisors Vs. Financial Advisors: Which Is Better For Your Money? by Libby Kane, posted on BusinessInsider.com on July 21, 2014.