I Can See for Miles: Using Augmented Reality to Analyze Business Data Sets

Image from Pixabay

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, it still might, as a bit of a stretch, find a fitting new context in describing one of the latest dazzling new technologies through the opening stanza’s declaration that “there’s magic in my eye”. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article entitled Visualizing High Dimensional Data In Augmented Reality, by augmented reality (AR) designer Benjamin Resnick, posted on July 3, 2017 on Medium.com, about his team’s work at IBM on a project called Immersive Insights. (Also embedded is a very cool video of a demo of this system.) They are applying AR’s rapidly advancing technology1 to display, interpret and leverage insights gained from business data. I highly recommend reading the article in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system in several years, a user will start their workday by donning their AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays their business’s big data sets2. The user will be able to reach out and “select that data” which, in turn, will generate additional details on a nearby monitor. Thus, the user can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short term, the team is aiming Immersive Insights towards data scientists who are facile coders, enabling them to use AR’s capabilities to visualize time series, geographical and networked data. For their long-term goals, they are planning to expand the range of Immersive Insights’ applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Every consumer represents a data point that can be expressed as a “list of purchased products” drawn from among 50,000 possible items.

How can this sizable pool of data be better understood and the deeper relationships within it be extracted? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections among the information’s attributes. However, for sets with many attributes this methodology does not scale well, since the number of pairwise plots grows quadratically with the number of attributes.

Consequently, Resnick’s team has been using their own new approach to:

  • Reduce complex data to just three dimensions in order to sum up key relationships (a rough sketch of this step appears just after this list)
  • Visualize the data by applying their Immersive Insights application, and
  • Iteratively “label and color-code the data” in conjunction with an “evolving understanding” of its inner workings
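
As an illustration of that first reduction step, here is a minimal, hypothetical sketch in Python that projects a customer-by-product purchase matrix down to three dimensions with PCA. The matrix shape, the random data and the use of scikit-learn are my own assumptions; the article does not say which reduction technique the IBM team actually uses.

```python
# Hypothetical sketch: reduce a customer-by-product purchase matrix to 3D.
# Rows are customers, columns are products (1 = purchased); PCA stands in
# for whatever dimensionality reduction the Immersive Insights team applies.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
purchases = rng.integers(0, 2, size=(1000, 500))   # 1,000 customers x 500 products

pca = PCA(n_components=3)
coords_3d = pca.fit_transform(purchases)           # one (x, y, z) point per customer

print(coords_3d.shape)                  # (1000, 3)
print(pca.explained_variance_ratio_)    # variance captured by each of the three axes
```

Each resulting (x, y, z) point could then be rendered as one of the glowing orbs described above and labeled or color-coded as the team’s understanding of the data evolves.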

Their results have enabled them to “validate hypotheses more quickly” and establish a sense of the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.

The types of data sets being used here are likewise deployed in training machine learning systems3. As a result, the potential exists for these three technologies to become complementary and mutually supportive in identifying and understanding relationships within the data as well as in deriving any “black box predictive models”4.

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided on the creation of the team’s demo in the video (linked above), and turning next to the results of the visualizations, their findings included:

  • A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” in the company’s “marketing, promotion and recommendation strategies”.
  • Among all food categories, produce was clearly the leader. Nearly all customers buy it.
  • When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance along the system’s three main components (described above).

Resnick concludes that the three cornerstone technologies of Immersive Insights (big data, augmented reality and machine learning) are individually, and in complementary combinations, “disruptive” and, as such, will affect the “future of business and society”.

Questions

  • Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?
  • Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?
  • Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?
  • Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

 


1.  See the Subway Fold category Virtual and Augmented Reality for other posts on emerging AR and VR applications.

2.  See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3.  See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4.  For a highly informative and insightful examination of this phenomenon where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI,  by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

Semantic Scholar and BigDIVA: Two New Advanced Search Platforms Launched for Scientists and Historians

"The Chemistry of Inversin", Image by Raymond Bryson

“The Chemistry of Inversion”, Image by Raymond Bryson

As powerful, essential and ubiquitous as Google and its search engine peers are across the world right now, needs often arise in many fields and marketplaces for platforms that can perform much deeper and wider digital excavation. So it is that two new highly specialized search platforms, engineered specifically for scientists and historians, have just come online. Each is structurally and functionally quite different from the other, but both are aimed at very specific professional user bases with advanced research needs.

These new systems provide uniquely enhanced levels of context, understanding and visualization with their results. We recently looked at a very similar development in the legal professions in an August 18, 2015 Subway Fold post entitled New Startup’s Legal Research App is Driven by Watson’s AI Technology.

Let’s have a look at both of these latest innovations and their implications. To introduce them, I will summarize and annotate two articles about their introductions, and then I will pose some additional questions of my own.

Semantic Scholar Searches for New Knowledge in Scientific Papers

First, the Allen Institute for Artificial Intelligence (AI2) has just launched its new system called Semantic Scholar, freely accessible on the web. This event was covered on NewScientist.com in a fascinating article entitled AI Tool Scours All the Science on the Web to Find New Knowledge, by Mark Harris, posted on November 2, 2015.

Semantic Scholar is supported by artificial intelligence (AI)¹ technology. It is automated to “read, digest and categorise findings” from approximately two million scientific papers published annually. Its main objective is to assist researchers with generating new ideas and “to identify previously overlooked connections and information”. Because of the overwhelming volume of scientific papers published each year, which no individual scientist could possibly ever read, it offers an original architecture and a high-speed means of mining all of this content.

Oren Etzioni, the director of AI2, termed Semantic Scholar a “scientist’s apprentice” that assists researchers in evaluating developments in their fields. For example, a medical researcher could query it about drug interactions in a certain patient cohort having diabetes. Users can also pose their inquiries in natural language format.

Semantic Scholar operates by executing the following functions (a rough sketch of the citation-ranking step appears just after this list):

  • crawling the web in search of “publicly available scientific papers”
  • scanning them into its database
  • identifying citations and references that, in turn, are assessed to determine those that are the most “influential or controversial”
  • extracting “key phrases” appearing in similar papers, and
  • indexing “the datasets and methods” used
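
The article does not describe how Semantic Scholar actually scores “influential or controversial” citations, but as a rough, hypothetical illustration of that step, here is a minimal Python sketch that builds a toy citation graph and ranks papers by PageRank. The paper names and the use of the networkx library are my own assumptions, not details from AI2.

```python
# Hypothetical sketch: rank papers in a small citation graph by influence.
# PageRank is used only as a stand-in for whatever metric Semantic Scholar applies.
import networkx as nx

citations = [
    ("paper_A", "paper_B"),   # paper_A cites paper_B
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_D", "paper_C"),
]

graph = nx.DiGraph(citations)
influence = nx.pagerank(graph)   # higher score ~ more "influential" paper

for paper, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
```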

AI2 is not alone in its objectives. Other similar initiatives include IBM’s Watson2 and related research programs at DARPA3.

Semantic Scholar will gradually be applied to other fields such as “biology, physics and the remaining hard sciences”.

BigDIVA Searches and Visualizes 1,500 Years of History

The second innovative search platform is called the Big Data Infrastructure Visualization Application (BigDIVA). The details about its development, operation and goals were covered in a most interesting report posted online on NC State News on October 12, 2015 entitled Online Tool Aims to Help Researchers Sift Through 15 Centuries of Data, by Matt Shipman.

This is a joint project by digital humanities scholars at NC State University and Texas A&M University. Its objective is to assist researchers in, among other fields, literature, religion, art and world history. This is done by increasing the speed and accuracy of searching through “hundreds of thousands of archives and articles” covering 450 A.D. to the present. BigDIVA was formally rolled out at NC State on October 16, 2015.

BigDIVA presents users with an entirely new visual interface, enabling them to search and review “historical documents, images of art and artifacts, and any scholarship associated” with them. Search results, organized by categories of digital resources, are displayed in infographic format4. The linked NC State News article includes a photo of this dynamic looking interface.

This system is still undergoing beta testing and further refinement by its development team. Expansion of its resources on additional historical periods is expected to be an ongoing process. Current plans are to make this system available on a subscription basis to libraries and universities.

My Questions

  • Might the IBM Watson, Semantic Scholar, DARPA and BigDIVA development teams benefit from sharing design and technical resources? Would scientists, doctors, scholars and others benefit from multi-disciplinary teams working together on future upgrades and perhaps even new platforms and interface standards?
  • What other professional, academic, scientific, commercial, entertainment and governmental fields would benefit from these highly specialized search platforms?
  • Would Google, Bing, Yahoo and other commercial search engines benefit from participating with the developers in these projects?
  • Would proprietary enterprise search vendors likewise benefit from similar joint ventures with the types of teams described above?
  • What entrepreneurial opportunities might arise for vendors, developers, designers and consultants who could provide fuller insight and support for developing customized search platforms?

 


October 19, 2017 Update: For the latest progress and applications of the Semantic Scholar system, see the report in a new post on Economist.com entitled A Better Way to Search Through Scientific Papers, dated October 19, 2017.


1.  These 11 Subway Fold posts cover various AI applications and developments.

2.  These seven Subway Fold posts cover a range of IBM Watson applications and markets.

3.  A new history of DARPA by Annie Jacobsen, entitled The Pentagon’s Brain (Little Brown and Company, 2015), was recently published.

4.  See this January 30, 2015 Subway Fold post entitled Timely Resources for Studying and Producing Infographics on this topic.

VR in the OR: New Virtual Reality System for Planning, Practicing and Assisting in Surgery

"Neural Pathways in the Brain", Image by NICHD

“Neural Pathways in the Brain”, Image by NICHD

The work of a new startup and some pioneering doctors has recently given the term “operating system” an entirely new meaning: they are using virtual reality (VR) to experimentally plan, practice and assist in surgery.

Currently, VR technology is rapidly diversifying into new and exciting applications across a widening spectrum of fields and markets. This surgical one is particularly promising because of its potential to improve medical care. Indeed, this is far beyond VR’s more familiar domains of entertainment and media.

During the past year, a series of Subway Fold posts have covered closely related VR advancements and apps in movies, news reporting, and corporate advisory boards. (These are just three of ten posts here in the category Virtual and Augmented Reality.)

Virtual Surgical Imaging

The details of these new VR surgical systems were reported in a fascinating article posted on Smithsonian.com on September 15, 2015 entitled How Is Brain Surgery Like Flying? Put On a Headset to Find Out, by Michelle Z. Donahue. I highly recommend reading it in its entirety. I will summarize, annotate, and pose a few non-surgical questions of my own.

Surgeons and developers are creating virtual environments by combining and enhancing today’s standard two-dimensional medical scans. The surgeons can then use the new VR system to study a particular patient’s internal biological systems. For example, prior to brain surgery, they can explore the virtual representation of the area to be operated upon before as well as after any incision has been made.

For example, Osamah Choudhry, a fourth-year neurosurgery resident at NYU’s Langone Medical Center, recently did this with a 3D virtualization of a patient’s glioma, a form of brain tumor. His VR headset is an HTC Vive used in conjunction with a game controller that enables him to move around and view the subject from different perspectives, and to see the fine details of connecting nerves and blood vessels. Furthermore, he has been able to simulate a view of some of the pending surgical procedures.

SNAP Help for Surgeons

This new system that creates these fully immersive 3D surgical virtualizations is called the Surgical Navigation Advanced Platform (SNAP). It was created by a company in Ohio called Surgical Theater (@SurgicalTheater). It can be used with either the Oculus Rift or HTC Vive VR headsets (neither of which has been commercially released yet). Originally, SNAP was intended for planning surgery, which is how it is used in the US. Now it is being tested by a number of hospitals in Europe for actual use during surgery.

Surgeons using SNAP today need to step away from their operations and change gloves. Once engaged with the system, they can explore the “surgical target” and then “return to the patient with a clear understanding of next steps and obstacles”. For instance, SNAP can assist in accurately measuring and focusing upon which parts of a brain tumor to remove as well as which areas to avoid.

SNAP’s chance origin occurred when former Israeli fighter pilots Moty Avisar and Alon Geri were in Cleveland at work on a flight simulator for the US Air Force. While they were having a cup of coffee, some of their talk was overheard by Dr. Warren Selman, the chair of neurosurgery at Case Western Reserve University. He inquired whether they could adapt their system for surgeons to enable them to “go inside a tumor” in order to see how best to remove it while avoiding “blood vessels and nerves”. This eventually led Avisar and Geri to form Surgical Theater. At first, their system produced a 3D model that was viewed on a 2D screen. The VR headset was integrated later on.

System Applications

SNAP’s software merges a patient’s CT and MRI images to create its virtual environment. Thereafter, a neurosurgeon can, with the assistance of a handheld controller, use the VR headset to “stand next to or even inside the tumor or aneurysm”. This helps them to plan the craniotomy, the actual opening of the skull, and additional parts of the procedure. As well, the surgeon can examine the virtual construct of the patient’s vascular system.

At NYU’s Langone Medical Center, the Chair of Neurosurgery, Dr. John Golfinos, believes that SNAP is a significant advancement in this field, as doctors previously had to engage in all manner of “mental gymnastics” when using 2D medical imaging to visualize a patient’s condition. Today, with a system like SNAP, simulations are much more accurate in presenting patients the way that surgeons see them.

Dr. Golfinos has applied SNAP to evaluating “tricky procedures” such as whether or not to use an endoscopic tool for an operation. SNAP was helpful in deciding to proceed in this manner and the outcome was successful.

UCLA’s medical school, the David Geffen School of Medicine, is using SNAP in “research studies to plan surgeries and [assess] a procedure’s effectiveness”. The school’s Neurosurgery Chair, Dr. Neil Martin, has been working with Surgical Theater to smooth over the disorientation some users experience with VR headsets.

Dr. Martin and Mr. Avisar believe that SNAP “could take collaboration on surgeries to an international level”. That is, surgeons around the world could consult with each other by meeting within a shared virtual space to cooperate on an operation.

Patient Education and Future Developments

Dr. Choudhry further believes that the Oculus Rift or Vive headsets can be used to answer the questions of patients who have done their own research, as well as to improve the doctor/patient relationship. He has seen patients quickly “lose interest” when he uses 2D CT and MRI scans to explain their conditions. However, he believes that 3D VR “is intuitive” because patients recognize what they are viewing.

He also believes that future developments might lead to the integration of augmented reality systems into surgery. These would present, through a transparent headset viewer, a virtual data overlay superimposed upon the real operating room within the user’s line of sight. (Please see these eight recent Subway Fold posts about augmented reality.)

My own questions are as follows:

  • Are VR surgical systems used only after a decision to operate has been made or are surgeons also using them to assess whether or not to operate?
  • What other types of surgeries could benefit both doctors and patients by introducing the option of using VR surgical systems?
  • Can such systems be used for other non-invasive medical applications such as physical therapy for certain medical conditions and post-operative care?
  • Can such systems be adapted for non-medical applications such as training athletes? That is, can such imagery assist in further optimizing and personalizing training regimens?

May 11, 2017 Update: For a truly fascinating report on a new surgical system that has just been developed using advanced augmented reality technology, see Augmented Reality Goggles Give Surgeons X-ray Vision, by Matt Reynolds, posted on NewScientist.com on May 11, 2017.


 

New Visualization Maps Out the Concepts of the “Theory of Everything”

“DC891- Black Hole”, Image by Adriana Arias

January 6, 2017: An update on this post appears below.


While I was a student in the fourth grade at Public School 79, my teacher introduced the class to the concept of fractions. She demonstrated this using the classic example of cutting up a pie into different numbers of slices. She explained to the class about slicing it into halves, thirds, quarters and so on. During this introductory lesson, she kept emphasizing that the sum of all the parts always added up to the whole pie and could never be more than or less than the whole.

I thought I could deal with this fractions business back then. As far as I know, it still holds up pretty well today.

On an infinitely grander and brain-bendingly complex scale that is about more than just pieces of π, physicists have been working for decades on a Theory of Everything (ToE). The objective is to build a comprehensive framework that fully unites and explains the theoretical foundations of physics across the universe. The greatest minds in this field have approached this ultimate challenge with a variety of highly complex and advanced mathematics, theoretical constructs and proposals. Many individuals and multidisciplinary teams are still at work trying to achieve the ToE. If and when any of them succeeds in formulating and proving it, the result will be the type of breakthrough that could profoundly change our understanding of the world and the universe we inhabit.

Einstein was one of the early pioneers in this field. He invested a great deal of effort in this challenge but even a Promethean genius such as him never succeeded at it.  His General Theory of Relativity continues to be one of the cornerstones of the ToE endeavor. The entire September 2015 issue of Scientific American is devoted to the 100th anniversary of this monumental accomplishment. I highly recommend reading this issue in its entirety.

I also strongly urge you to check out a remarkable interactive visualization of the component theories and concepts of the ToE that appeared in an August 3, 2015 article on Quantamagazine.org entitled Theories of Everything, Mapped, by Natalie Wolchover. The author very concisely explains how the builder of the map, developer Emily Fuhrman, created it in order to teach people about the ToE. Furthermore, it shows that there remain areas with substantial “disunions, holes and inconsistencies” that comprise the “deep questions that must be answered” in order to achieve the ToE.

The full map is embedded at the top of the article, ready for visitors to click into it and immerse themselves in such topics as, among many others, grand unification, quantum gravity and dark matter. All along the way, there are numerous linked resources within it available for further extensive exploration. In my humble opinion, Ms. Fuhrman has done a brilliant job of creating this.

Having now spent a bit of time clicking all over this bounty of fascinating information, I was reminded of my favorite line from Bob Dylan’s My Back Pages that goes “Using ideas as my maps”. (The Byrds also had a hauntingly beautiful Top 40 hit covering this.)

In these 26 prior Subway Fold posts we have examined a wide range of the highly inventive and creative work that can be done with contemporary visualization tools. This ToE map is yet another inspiring example. Even if subjects like space-time and the cosmological constant are not familiar to you, this particularly engaging visualization expertly arranges and explains the basics of these theoretical worlds. It also speaks to the power of effective visualization in capturing the viewer’s imagination about a subject which, if otherwise only left as text, would not succeed in drawing most online viewers in so deeply.


 

January 6, 2017 Update:

Well, it looks like those Grand Unified Fielders have recently suffered another disappointing bump in the road (or perhaps in the universe), as they have been unable to find any genuine proton decay. Although this might sound like something your dentist has repeatedly warned you about, it is rather an anticipated physical phenomenon on the road to finding the Theory of Everything that has yet to be observed and measured. This puts that quest on hold for the time being unless and until either it is observed or physicists and theorists can work around its absence. The full details appear in a new article entitled Grand Unification Dream Kept at Bay, by Natalie Wolchover (the same author whose earlier article on this was summarized above), posted on QuantaMagazine.com on December 15, 2016.

Data Analysis and Visualizations of All U.S. Presidential State of the Union Addresses

"President Obama's State of the Union Address 2013", Word cloud image by Kurtis Garbutt

“President Obama’s State of the Union Address 2013”, Word cloud image by Kurtis Garbutt

While data analytics and visualization tools have accumulated a significant historical record of accomplishments, now, in turn, this technology is being applied to actual significant historical accomplishments. Let’s have a look.

Every year in January, the President of the United States gives the State of the Union speech before both houses of the U.S. Congress. This is to address the condition of the nation, his legislative agenda and other national priorities. The requirement for this presentation appears in Article II of the U.S. Constitution.

This talk with the nation has been given every year (with only one exception) since 1790. The resulting total of 224 speeches presents a remarkable and dynamic record of U.S. history and policy. Researchers at Columbia University and the University of Paris have recently applied sophisticated data analytics and visualization tools to this trove of presidential addresses. Their findings were published in the August 10, 2015 edition of the Proceedings of the National Academy of Sciences in a truly fascinating paper entitled Lexical Shifts, Substantive Changes, and Continuity in State of the Union Discourse, 1790–2014, by Alix Rule, Jean-Philippe Cointet, and Peter S. Bearman.

A very informative and concise summary of this paper was also posted on Phys.org, also on August 10, 2015, in an article entitled Big Data Analysis of State of the Union Remarks Changes View of American History (no author is listed). I will summarize, annotate and pose a few questions of my own. I highly recommend clicking through and reading the full report and the summary article together for a fuller perspective on this achievement. (Similar types of textual and graphical analyses of US law were covered in the May 15, 2015 Subway Fold post entitled Recent Visualization Projects Involving US Law and The Supreme Court.)

The researchers developed custom algorithms for their research. These were applied to the 1.8 million words used in all of the addresses from 1790 to 2014. By identifying “how often words appear jointly” and “mapping their relation to other clusters of words”, the team was able to highlight “dominant social and political” issues and their relative historical time frames. (See Figure 1 at the bottom of Page 2 of the full report for this lexicographical mapping.)
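
The paper’s own methods are considerably more sophisticated, but as a minimal, hypothetical illustration of counting how often words appear jointly, here is a short Python sketch that builds a simple co-occurrence tally over a few toy passages. The sample sentences, tokenization and passage-level window are my own assumptions, not the researchers’ actual procedure.

```python
# Hypothetical sketch: count how often word pairs appear jointly in the same passage.
# The researchers' real algorithms are far more elaborate; this only illustrates the idea.
from collections import Counter
from itertools import combinations

passages = [
    "the treasury reports expenditures to congress",
    "tax relief and incentives support welfare programs",
    "the navy secures peace and democracy abroad",
]

co_occurrence = Counter()
for passage in passages:
    words = sorted(set(passage.split()))      # unique words in this passage
    for pair in combinations(words, 2):
        co_occurrence[pair] += 1              # tally each jointly appearing pair

# Most frequently co-occurring word pairs across all passages
for pair, count in co_occurrence.most_common(5):
    print(pair, count)
```

Clusters of words that repeatedly co-occur in this way are the kind of raw material the team then maps onto broader social and political topics.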

One of the researchers’ key findings was that although the topics of “industry, finance, and foreign policy” were predominant and persist throughout all of the addresses, following World War II the recurring keywords focus further upon “nation building, the regulation of business and the financing of public infrastructure”. While it is well known that these emergent terms were all about modern topics, the researchers were thus able to pinpoint the exact time frames when they first appeared. (See Page 5 of the full report for the graphic charting these data trends.)

Foreign Policy Patterns

The year 1917 struck the researchers as a critical turning point because it represented a dramatic shift in the data containing words indicative of more modern times. This was the year that the US sent its troops into battle in Europe in WWI. It was then that new keywords in the State of the Union including “democracy,” “unity,” “peace” and “terror” started to appear and recur. Later, by the 1940’s, word clusters concerning the Navy appeared, possibly indicating emerging U.S. isolationism. However, they suddenly disappeared again as the U.S. became far more involved in world events.

Domestic Policy Patterns

Over time, the researchers identified changes in the terminology used when addressing domestic matters. These concerned the government’s size, economic regulation, and equal opportunity. Although the focus of the State of the Union speeches remained constant, new keywords appeared whereby “tax relief,” “incentives” and “welfare” have replaced “Treasury,” “amount” and “expenditures”.

An important issue facing this project was that during the more than two centuries being studied, keywords could substantially change in meaning over time. To address this, the researchers applied new network analysis methods developed by Jean-Philippe Cointet, a team member, co-author and physicist at the University of Paris. They were intended to identify changes whereby “some political topics morph into similar topics with common threads” as others fade away. (See Figure 3 at the bottom of Page 4 of the full paper for this enlightening graphic.*)

As a result, they were able to parse the relative meanings of words as they appear with each other and, on a more macro level, in the “context of evolving topics”. For example, it was discovered that the word “Constitution” was:

  • closely associated with the word “people” in early U.S. history
  • linked to “state” following the Civil War
  • linked to “law” during WWI and WWII, and
  • returned to “people” during the 1970’s

Thus, the meaning of “Constitution” must be assessed in its historical context.

My own questions are as follows:

  • Would this analytical approach yield new and original insights if other long-running historical records, such as the Congressional Record, were likewise subjected to the research team’s algorithms and analytics?
  • Could companies and other commercial businesses derive any benefits from having their historical records similarly analyzed? For example, might it yield new insights and recommendations for corporate governance and information governance policies and procedures?
  • Could this methodology be used as an electronic discovery tool for litigators as they parse corporate documents produced during a case?

 


*  This also resembles, in methodology and appearance, the graphic on Page 29 of the law review article entitled A Quantitative Analysis of the Writing Style of the U.S. Supreme Court, by Keith Carlson, Michael A. Livermore, and Daniel Rockmore, dated March 11, 2015, which is linked to and discussed in the May 15, 2015 Subway Fold post cited above.

Watson, is That You? Yes, and I’ve Just Demo-ed My Analytics Skills at IBM’s New York Office

My photo of the entrance to IBM’s office at 590 Madison Avenue in New York, taken on July 29, 2015.

I don’t know if my heart can take this much excitement. Yesterday morning, on July 29, 2015, I attended a very compelling presentation and demo of IBM’s Watson technology. (This AI-driven platform has been previously covered in these five Subway Fold posts.) Just the night before, I saw a demo of some ultra-cool new augmented reality systems.

These experiences combined to make me think of the evocative line from Supernaut by Black Sabbath with Ozzy belting out “I’ve seen the future and I’ve left it behind”. (Incidentally, this prehistoric metal classic also has, IMHO, one of the most infectious guitar riffs with near warp speed shredding ever recorded.)

Yesterday’s demo of Watson Analytics, one key component among several on the platform, was held at IBM’s office in the heart of midtown Manhattan at 590 Madison Avenue and 57th Street. The company very graciously put this on for free. All three IBM employees who spoke were outstanding in their mastery of the technology, enthusiasm for its capabilities, and informative Q&A interactions with the audience. Massive kudos to everyone involved at the company in making this happen. Thanks, too, to all of the attendees who asked such excellent questions.

Here is my summary of the event:

Part 1: What is Watson Analytics?

The first two speakers began with a fundamental truth about all organizations today: They have significant quantities of data that are driving all operations. However, a bottleneck often occurs when business users understand this but do not have the technical skills to fully leverage it while, correspondingly, IT workers do not always understand the business context of the data. As a result, business users have avenues they can explore but not the best or most timely means to do so.

This is where Watson can be introduced, because it can make these business users self-sufficient with an accessible, extensible and easier-to-use analytics platform. It is, as one of the speakers said, “self-service analytics in the cloud”. Thus, Watson’s constituents can be seen as follows:

  • “What” is how to discover and define business problems.
  • “Why” is to understand the existence and nature of these problems.
  • “How” is to share this process in order to affect change.

However, Watson is specifically not intended to be a replacement for IT in any way.

Also, one of Watson’s key capabilities is enabling users to pursue their questions by using a natural language dialog. This involves querying Watson with questions posed in ordinary spoken terms.

Part 2: A Real World Demo Using Airline Customer Data

Taken directly from the world of commerce, the IBM speakers presented a demo of Watson Analytics’ capabilities by using a hypothetical situation in the airline industry. This involved a business analyst in the marketing department of an airline who was given a compilation of market data prepared by a third-party vendor. The business analyst was then tasked by his manager with researching and planning how to reduce customer churn.

Next, by enlisting Watson Analytics for this project, the two central issues became how the data could be:

  • Better understood, leveraged and applied to increase customers’ positive opinions while simultaneously decreasing defections to the airline’s competitors.
  • Comprehensively modeled in order to understand the elements of the customer base’s satisfaction, or lack thereof, with the airline’s services.

The speakers then put Watson Analytics through its paces up on large screens for the audience to observe and ask questions. The goal of this was to demonstrate how the business analyst could query Watson Analytics and, in turn, the system would provide alternative paths to explore the data in search of viable solutions.

Included among the variables that were dexterously tested and spun into enlightening interactive visualizations were:

  • Satisfaction levels by other peer airlines and the hypothetical Watson customer airline
  • Why customers are, and are not, satisfied with their travel experience
  • Airline “status” segments such as “platinum” level flyers who pay a premium for additional select services
  • Types of travel including for business and vacation
  • Other customer demographic points

The results of this exercise, as they appeared onscreen, showed how Watson could, with its unique architecture and tool set (a rough illustration of the predictive piece appears just after this list):

  • Generate “guided suggestions” using natural language dialogs
  • Identify and test all manner of connections among the population of data
  • Use predictive analytics to make business forecasts¹
  • Calculate a “data quality score” to assess the quality of the data upon which business decisions are based
  • Map out a wide variety of data dashboards and reports to view and continually test the data in an effort to “tell a story”
  • Integrate an extensible set of analytical and graphics tools to sift through large data sets from relevant Twitter streams²
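
Watson Analytics’ internals are of course proprietary, so purely to illustrate what using predictive analytics for a business forecast might look like in this airline scenario, here is a minimal, hypothetical Python sketch that fits a churn model on made-up customer data. The feature names, the synthetic data and the use of scikit-learn are my own assumptions, not anything shown in the demo.

```python
# Hypothetical sketch: predict airline customer churn from a few made-up features.
# This is a generic illustration of predictive analytics, not Watson's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
satisfaction = rng.uniform(1, 10, n)        # survey score
is_platinum = rng.integers(0, 2, n)         # premium "status" flag
business_travel = rng.integers(0, 2, n)     # trip-type flag

# Made-up rule: unhappy, non-premium leisure travelers churn more often
churn_prob = 1 / (1 + np.exp(satisfaction - 5 + 2 * is_platinum + business_travel))
churned = rng.random(n) < churn_prob

X = np.column_stack([satisfaction, is_platinum, business_travel])
X_train, X_test, y_train, y_test = train_test_split(X, churned, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
print("Feature weights:", model.coef_)      # which factors drive the forecast
```

A business analyst could read the fitted weights as a rough ranking of which factors most strongly drive defections, which is the kind of guided insight the demo surfaced through visualizations instead of code.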

Part 3: The Development Roadmap

The third and final IBM speaker outlined the following paths for Watson Analytics that are currently in beta stage development:

  • User engagement developers are working on an updated visual engine, increased connectivity and capabilities for mobile devices, and social media commentary.
  • Collaboration developers are working on accommodating work groups and administrators, and dashboards that can be filtered and distributed.
  • Data connector developers are working on new data linkages, improving the quality and shape of connections, and increasing the degrees of confidence in predictions. For example, a connection to weather data is underway that would be very helpful to the airline (among other industries), in the above hypothetical.
  • New analytics developers are working on new functionality for business forecasting, time series analyses, optimization, and social media analytics.

Everyone in the audience, judging by the numerous informal conversations that quickly formed in the follow-up networking session, left with much to consider about the potential applications of this technology.


1.  Please see these six Subway Fold posts covering predictive analytics in other markets.

2.  Please see these ten Subway Fold posts for a variety of other applications of Twitter analytics.

 

Prints Charming: A New App Combines Music With 3D Printing

"Totem", Image by Brooke Novak

“Totem”, Image by Brooke Novak

What does a song actually look like in 3D? Everyone knows that music has always been evocative of all kinds of people, memories, emotions and sensations. In a Subway Fold post back on November 30, 2014, we first looked at Music Visualizations and Visualizations About Music. But can a representation of a tune now be taken further and transformed into a tangible object?

Yes, and it looks pretty darn cool. A fascinating article was posted on Wired.com on July 15, 2015, entitled What Songs Look Like as 3-D Printed Sculptures, by Liz Stinson, about a new Kickstarter campaign to raise funding for Reify, an NYC startup working on this. I will sum up, annotate and try to sculpt a few questions of my own.

Reify’s technology uses a song’s sound waves in conjunction with 3D printing¹ to shape a physical “totem”, or object, representing it. (The Wired article and the Reify website contain pictures of samples.) Then an augmented reality² app on a mobile device will provide an on-screen visual experience accompanying the song when the camera is pointed towards the totem. This page on their website contains a video of a demo of their system.

The firm is led by Allison Wood and Kei Gowda. Ms. Wood founded it in order to study “digital synesthesia”. (Synesthesia is a rare condition where people experience multiple senses in unusual combinations and can, for example, “hear” colors; it was previously covered in the Subway Fold post about music visualization linked to above.) She began to explore how to “translate music’s ephemeral nature” into a genuine object and came up with the concept of using a totem.

Designing each totem is an individualized process. It starts with analyzing a song’s “structure, rhythm, amplitude, and more” by playing it through the Echo Nest API.³ In turn, the results generated correspond to measurements including “height, weight and mass”. The tempo and genre of a song also have a direct influence on the shaping of the totem. As well, the musical artists themselves have significant input into the final form.
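
Reify’s actual mapping from audio analysis to geometry is not disclosed in the article, so here is a purely hypothetical Python sketch of the general idea of turning audio features (of the kind the Echo Nest API returned) into totem dimensions. The feature names, scaling rules and units are my own invented assumptions for illustration only.

```python
# Hypothetical sketch: map audio features of a song to rough totem dimensions.
# The feature names mimic the kind of values the Echo Nest API exposed;
# every scaling rule below is invented purely for illustration.

def totem_dimensions(features: dict) -> dict:
    """Translate audio analysis features into rough 3D-printable measurements (mm)."""
    height = 20 + features["duration_sec"] * 0.3       # longer songs -> taller totems
    radius = 5 + features["energy"] * 25               # more energetic -> wider base
    twist = features["tempo_bpm"] / 2                  # faster tempo -> more twist (degrees)
    segments = max(3, int(features["sections"]))       # one band per structural section
    return {"height_mm": round(height, 1), "radius_mm": round(radius, 1),
            "twist_deg": round(twist, 1), "segments": segments}

# Example: a 4-minute, moderately energetic track at 120 BPM with 8 sections
song = {"duration_sec": 240, "energy": 0.6, "tempo_bpm": 120, "sections": 8}
print(totem_dimensions(song))
```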

The mobile app comes into play when it is used to “read” the totem and interpret its form “like a stylus on a record player or a laser on a CD”. The result is that, while the music is playing, the augmented reality component of the app captures the totem and generates an animated on-screen visualization incorporating it. The process is vividly shown in the demo video linked above.

Reify’s work can also be likened to information design in the form of data visualization4. According to Ms. Wood, the process involves “translating data from one form into another”.

My questions are as follows:

  • Is Reify working with, or considering working with, Microsoft on its pending HoloLens augmented reality system and/or companies such as Oculus, Samsung and Google on their virtual reality platforms as covered in the posts linked to in Footnote 2 below?
  • How might Reify’s system be integrated into the marketing strategies of musicians? For example, perhaps printing up a number of totems for a band and then distributing them at concerts.
  • Would long-established musicians and performers possibly use Reify to create totems of some their classics? For instance, what might a totem and augmented reality visualization for Springsteen’s anthem, Born to Run, look like?

1.  See these two Subway Fold posts mentioning 3D printing.

2.  See these eight Subway Fold posts covering some of the latest developments in virtual and augmented reality.

3.  APIs in a medical and scientific context were covered in a July 2, 2015 Subway Fold post entitled The Need for Specialized Application Programming Interfaces for Human Genomics R&D Initiatives.

4.  This topic is covered extensively in dozens of Subway Fold posts in the Big Data and Analytics and Visualization categories.