I Can See for Miles: Using Augmented Reality to Analyze Business Data Sets

Image from Pixabay

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, it still might, on a bit of a stretch, find a fitting new context for one of the latest dazzling new technologies in the opening stanza’s declaration that “there’s magic in my eye”. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article entitled Visualizing High Dimensional Data In Augmented Reality, posted on July 3, 2017 on Medium.com by Benjamin Resnick, an augmented reality (AR) designer, about his team’s work at IBM on a project called Immersive Insights. (Also embedded there is a very cool video demo of this system.) They are applying AR’s rapidly advancing technology¹ to display, interpret and leverage insights gained from business data. I highly recommend reading the article in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system in several years, users will start their workday by donning AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays their business’s big data sets². Users will be able to “reach out [and] select that data”, which, in turn, will generate additional details on a nearby monitor. Thus, they can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short term, the team is aiming Immersive Insights at data scientists who are facile coders, enabling them to use AR’s capabilities to visualize time series, geographical and networked data. Over the long term, they plan to expand the range of Immersive Insights’ applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Each consumer is a data point that can be expressed as a “list of purchased products” drawn from among 50,000 possible items.

How can this sizable pool of data be better understood, and the deeper relationships within it extracted? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections among the information’s attributes. However, for those sets with many attributes, this methodology does not scale well.
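
To make the scaling problem concrete, here is a minimal sketch in Python of such a scatter-plot matrix. The attribute table is a made-up stand-in, not the actual Instacart data:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

# Made-up per-customer attributes standing in for the Instacart data
df = pd.DataFrame({
    "orders_per_month": [4, 9, 2, 7, 5],
    "avg_basket_size":  [12.0, 25.5, 6.0, 18.2, 10.4],
    "pct_produce":      [0.40, 0.15, 0.60, 0.33, 0.51],
    "pct_premium":      [0.10, 0.55, 0.05, 0.30, 0.22],
})

# With k attributes this draws k-squared panels; at dozens of
# attributes the grid becomes unreadable, hence the scaling problem.
scatter_matrix(df, figsize=(8, 8), diagonal="hist")
plt.show()
```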

Consequently, Resnick’s team has been using their own new approach to:

  • Reduce complex data to just three dimensions in order to sum up key relationships
  • Visualize the data by applying their Immersive Insights application, and
  • Iteratively label and color-code the data in conjunction with an “evolving understanding” of its inner workings (a minimal code sketch of the first step follows below)
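
The article does not specify which dimensionality-reduction technique the team uses, so purely as a sketch of the first step, here is what a reduction to three dimensions might look like with PCA in Python. The purchase matrix is randomly generated, standing in for the real Instacart data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the Instacart data: one row per customer, one column
# per product, 1.0 where the customer has purchased that product.
rng = np.random.default_rng(seed=42)
purchases = (rng.random((1000, 500)) > 0.95).astype(float)

# Step 1: reduce the high-dimensional purchase vectors to 3 dimensions.
pca = PCA(n_components=3)
coords_3d = pca.fit_transform(purchases)

print(coords_3d.shape)                      # (1000, 3) -> one orb per customer
print(pca.explained_variance_ratio_.sum())  # variance captured by 3 components

# Steps 2 and 3 (rendering the points in AR and iteratively re-coloring
# them) would happen in the Immersive Insights client itself.
```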

Their results have enabled them to “validate hypotheses more quickly” and to develop a sense of the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.

The types of data sets being used here are likewise deployed in training machine learning systems³. As a result, the potential exists for these three technologies to become complementary and mutually supportive in identifying and understanding relationships within the data, as well as in deriving any “black box predictive models”⁴.

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided on the creation of the team’s demo in the video (linked above), and turning to the results of the visualizations, their findings included:

  • A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” for the company’s “marketing, promotion and recommendation strategies”.
  • Among all food categories, produce was clearly the leader. Nearly all customers buy it.
  • When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance in the system’s three main components (described above). A brief sketch of how such a separability check might be run appears below.
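
Resnick’s phrase “not linearly separable” can be tested mechanically. As a hedged sketch, assuming the 3D coordinates from the earlier reduction step and hypothetical department labels, one could fit a linear classifier and compare its accuracy to chance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins: 3D embedding coordinates and each customer's
# most common department, encoded as an integer label.
rng = np.random.default_rng(seed=0)
coords_3d = rng.normal(size=(1000, 3))
departments = rng.integers(0, 8, size=1000)

# If a linear classifier scores near chance (1/8 here), the department
# labels are not linearly separable in the reduced space.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, coords_3d, departments, cv=5)
print(scores.mean())
```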

Resnick concludes that the three cornerstone technologies of Immersive Insights (big data, augmented reality and machine learning) are individually, and in complementary combinations, “disruptive” and, as such, will affect the “future of business and society”.

Questions

  • Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?
  • Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?
  • Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?
  • Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

 


1.  See the Subway Fold category Virtual and Augmented Reality for other posts on emerging AR and VR applications.

2.  See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3.  See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4.  For a highly informative and insightful examination of this phenomenon where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI, by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

New IBM Watson and Medtronic App Anticipates Low Blood Glucose Levels for People with Diabetes

"Glucose: Ball-and-Stick Model", Image by Siyavula Education

“Glucose: Ball-and-Stick Model”, Image by Siyavula Education

Can a new app, jointly developed by IBM with its Watson AI technology in partnership with the medical device maker Medtronic, provide a new form of support for people with diabetes by anticipating and safely avoiding low blood glucose (BG) levels (a condition called hypoglycemia)? If so, and assuming regulatory approval, this technology could potentially be a very significant boon to the care of this disease.

Basics of Managing Blood Glucose Levels

The daily management of diabetes involves a diverse mix of factors including, but not limited to, regulating insulin dosages, checking BG readings, measuring carbohydrate intake at meals, gauging activity and exercise levels, and controlling stress. There is no perfect algorithm for this: everyone with this medical condition is different, and each person’s body reacts in its own way as they try to balance all of these factors while striving to maintain healthy short- and long-term control of BG levels.

Diabetes care today operates in a very data-driven environment. BG levels, expressed numerically, can be checked on a hand-held meter with test strips using a single drop of blood, or with a continuous glucose monitoring system (CGM). The latter consists of a thumb-drive-size sensor attached with temporary adhesive to the skin and a needle, attached to this unit, inserted just below the skin. This system provides patients with frequent real-time readings of their BG levels, and whether they are trending up or down, so they can adjust their medication accordingly. That is, for A grams of carbs, B amounts of physical activity and other contributing factors, C amount of insulin can be calculated and dispensed.
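
That A-B-C relationship is commonly expressed with two patient-specific constants: a carb ratio and a correction factor. Purely as an illustrative sketch, and emphatically not medical advice, the arithmetic looks something like this (all constants here are made up):

```python
def estimate_bolus(carbs_g, current_bg, target_bg=110,
                   carb_ratio=10.0, correction_factor=40.0,
                   activity_reduction=0.0):
    """Illustrative insulin-bolus arithmetic only, NOT medical advice.

    carb_ratio:         grams of carbohydrate covered by 1 unit of insulin
    correction_factor:  mg/dL of BG lowered by 1 unit of insulin
    activity_reduction: fraction to trim the dose for planned exercise
    """
    meal_dose = carbs_g / carb_ratio
    correction_dose = max(current_bg - target_bg, 0) / correction_factor
    return (meal_dose + correction_dose) * (1.0 - activity_reduction)

# 60 g of carbs, BG of 190 mg/dL, a light workout planned:
print(round(estimate_bolus(60, 190, activity_reduction=0.2), 1))  # 6.4 units
```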

Insulin itself can be administered either manually by injection or by an insulin pump (also with a subcutaneously inserted needle). The latter of these consists of two devices: the pump itself, a small enclosed device (about the size of a pager) with an infusion needle placed under the patient’s skin, and a Bluetooth-enabled handheld device (that looks just like a smartphone) used to adjust the pump’s dosage and timing of insulin released. Some pump manufacturers are also bringing to market their latest generation of CGMs that integrate their data and command functions with their users’ smartphones.

(The links in the previous two paragraphs are to Wikipedia pages with detailed descriptions and photos of CGMs and insulin pumps. See also this June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” for another glucose sensing and insulin dispensing system under development.)

The trickiest part of all of these systems is maintaining BG levels throughout each day that stay within an acceptable range of values. High levels can result in a host of difficult symptoms. Hypoglycemic low levels can quickly become serious, manifesting as dizziness, confusion and other symptoms, and in extreme cases, if not treated immediately, can ultimately lead to unconsciousness.

New App for Predicting and Preventing Low Blood Glucose Levels

Taking this challenge to an entirely new level, at last week’s annual Consumer Electronics Show (CES) held in Las Vegas, IBM and Medtronic jointly announced their new app to predict hypoglycemic events in advance. The app is built upon Watson’s significant strengths in artificial intelligence (AI) and machine learning to sift through and intuit patterns in large volumes of data, in this case generated from Medtronic’s user base for their CGMs and insulin pumps. This story was covered in a most interesting article posted in The Washington Post on January 6, 2016 entitled IBM Extends Health Care Bet With Under Armour, Medtronic by Jing Cao and Michelle Fay Cortez. I will summarize and annotate this report and then pose some of my own questions.

The announcement and demo of this new app on January 6, 2016 at CES showed the process by which a patient’s data can be collected from their Medtronic devices and then combined with additional information from their wearable activity trackers and their logged food intake. Next, all of this information is processed through Watson in order to “provide feedback” for the patient to “manage their diabetes”.
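
The article does not describe the app’s data model, but the fusion step might look something like the following minimal sketch, in which every field name is a hypothetical stand-in for the streams mentioned above:

```python
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    # All fields are hypothetical stand-ins for the streams the article
    # mentions: pump/CGM data, activity-tracker data, and food intake.
    cgm_readings_mg_dl: list[float]   # recent continuous glucose values
    insulin_on_board: float           # units still active from prior doses
    steps_last_hour: int              # from the wearable tracker
    carbs_last_meal_g: float          # logged food intake

def to_feature_vector(s: PatientSnapshot) -> list[float]:
    """Flatten one snapshot into the kind of feature vector a
    Watson-style predictive model would consume."""
    recent = s.cgm_readings_mg_dl
    trend = recent[-1] - recent[0] if len(recent) > 1 else 0.0
    return [recent[-1], trend, s.insulin_on_board,
            float(s.steps_last_hour), s.carbs_last_meal_g]

snap = PatientSnapshot([140.0, 128.0, 117.0], 1.5, 3200, 45.0)
print(to_feature_vector(snap))  # -> [117.0, -23.0, 1.5, 3200.0, 45.0]
```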

Present and Future Plans for The App and This Approach

Making the announcement were Virginia Rometty, Chairman, President and CEO of IBM, and Omar Ishrak, Chairman and CEO of Medtronic. The introduction of this technology is expected in the summer of 2016, although it still needs to pass through the US government’s regulatory review process.

Ms. Rometty said that the capability to predict low BG events, in some cases up to three hours before they occur, is a “breakthrough”. She described Watson as “cognitive computing”, using algorithms to generate “prescriptive and predictive analysis”. The company is currently making a major strategic move into finding and facilitating applications and partners for Watson in the health care industry. (These eight Subway Fold posts cover various other systems and developments using Watson.)

Hooman Hakami, Executive VP and President of the Diabetes Group at Medtronic, described how his company is working to “anticipate” how the behavior of each person with diabetes affects their blood glucose levels. With this information, patients can then “make choices to improve their health”. Here is the page from the company’s website about their partnership with IBM to work together on treating diabetes.

In the future, both companies aim to “give patients real-time information” on how their individual data is influencing their BG levels and to “provide coaching” to assist them in keeping their readings in a “healthy range”. In one scenario, patients might receive a text message that “they have an 85% chance of developing low blood sugar within an hour”, along with a recommendation to watch their readings and eat something to raise their BG back to a safer level.
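
That “85% chance” scenario implies a simple thresholding step on the model’s predicted probability. A minimal sketch, with the threshold and message wording purely illustrative:

```python
def hypo_alert(prob_low_bg_next_hour, threshold=0.85):
    """Turn a model's predicted probability into a patient-facing alert.

    The 0.85 threshold mirrors the article's scenario; in a real system
    it would be clinically tuned and subject to regulatory review.
    """
    if prob_low_bg_next_hour >= threshold:
        return ("Chance of low blood sugar within the hour: "
                f"{prob_low_bg_next_hour:.0%}. Check your readings and "
                "consider eating something to raise your BG.")
    return None

print(hypo_alert(0.87))  # alert fires
print(hypo_alert(0.40))  # -> None
```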

My Questions

  • Will this make patients more or less diligent in their daily care? Is there potential for patients to assume less responsibility for their care if they sense that the management of their diabetes is running on a form of remote control? Alternatively, might this result in too much information for patients to manage?
  • What would be the possible results if this app is ever engineered to work in conjunction with the artificial pancreas project being led by Ed Damiano and his group of developers in Boston?
  • If this app receives regulatory approval and gains wide acceptance among people with diabetes, what does this medical ecosystem look like in the future for patients, doctors, medical insurance providers, regulatory agencies, and medical system entrepreneurs? How might it positively or negatively affect the market for insulin pumps and CGMs?
  • Should IBM and Medtronic consider making their app available on an open-source basis to enable other individuals and groups of developers to improve it, as well as to develop additional new apps?
  • How, if at all, will insurance policies for both patients and manufacturers deal with any potential liability that may arise if the app causes some unforeseen adverse effects? Will medical insurance even cover, encourage or discourage the use of such an app?
  • Will the data generated by the app ever be used in any unforeseen ways that could affect patients’ privacy? Would patients using the new app have to relinquish all rights and interests to their own BG data?
  • What other medical conditions might benefit from a similar type of real-time data, feedback and recommendation system?

Charge of the Light Brigade: Faster and More Efficient New Chips Using Photons Instead of Electrons

"PACE - PEACE" Image by Etienne Valois

“PACE – PEACE” Image by Etienne Valois

Alfred, Lord Tennyson wrote his immortal classic poem, The Charge of the Light Brigade, in 1854 to honor the dead heroes of a doomed cavalry charge at the Battle of Balaclava during the Crimean War. Moreover, it strikingly portrayed the horrors of war. In just six short verses, he created a monumental work that has endured for 162 years.

The poem came to mind last week after reading two recent articles on seemingly disparate topics. The first was posted on The New Yorker’s website on December 30, 2015, entitled In Silicon Valley Now, It’s Almost Always Winner Takes All by Om Malik. This is a highly insightful analysis of how and why tech giants such as Google in search, Facebook in social networking, and Uber in transportation have come to dominate their markets. In essence, competition is a fierce and relentless battle in the global digital economy. The second was an article on CNET.com posted on December 23, 2015, entitled Chip Promises Faster Computing with Light, Not Electrical Wires by Stephan Shankland. I highly recommend reading both of them in their entirety.

Taken together, the two pieces are tied by the homonym of “light”, in historical poetry and in tech alike, insofar as contemporary competition in tech markets is often described in military terms and metaphors. Focusing here on the second story, about a tantalizing advance in chip design and fabrication: will this technology survive as it moves forward into the brutal and relentlessly “winner takes all” marketplace? I will summarize and annotate this story, and pose some of my own, hopefully en-light-ening, questions.

Forward, the Light Brigade

A team of researchers, all of whom are university professors, has created a new type of processing chip “that transmits data with light”: Vladimir Stojanovic of the University of California at Berkeley, who led the development, Krste Asanovic, also from Berkeley, Rajeev Ram from MIT, and Milos Popovic from the University of Colorado at Boulder. As well, its architecture significantly increases processing speed while reducing power consumption. A report on the team’s work was published in the December 24, 2015 issue of Nature (subscription required), entitled Single-chip Microprocessor That Communicates Directly Using Light by Chen Sun, Mark T. Wade, Yunsup Lee, et al.

This approach of “using silicon as an optical medium” is, according to Wikipedia, called silicon photonics. IBM (see this link) and Intel (see this link) have likewise been involved in R&D in this field, but have yet to introduce anything ready for the market.

However, this team of university researchers believes their new approach might be introduced commercially within a year. While their efforts do not make chips run faster per se, the photonic elements “keep chips supplied with data”, which saves them from losing time idling. Thus, they can process data faster.
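
A toy model, with made-up numbers, helps illustrate the claim: a core only makes progress when it has data, so its effective per-batch time is gated by whichever is slower, computation or data delivery.

```python
def effective_time(compute_s, data_delivery_s):
    """Per-batch wall time when compute must wait for data: the core
    idles whenever delivery is slower than computation."""
    return max(compute_s, data_delivery_s)

# Made-up numbers for illustration: same compute speed, faster links.
compute = 1.0           # seconds of arithmetic per batch
electrical_link = 1.8   # slow delivery -> core idles 0.8 s per batch
photonic_link = 0.9     # fast delivery -> compute becomes the limit

print(effective_time(compute, electrical_link))  # 1.8 s per batch
print(effective_time(compute, photonic_link))    # 1.0 s per batch
```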

Currently (no pun intended), electrical signals traverse metal wiring in computing and communications devices and networks across the world. For data traveling greater national and international distances, the electrical signals are converted into light and sent along high-speed fiber-optic cables. Nonetheless, this approach “isn’t cheap”.

Half a League Onward

What the university researchers’ team has done is create chips with “photonic components” built into them. If they succeed in scaling up and commercializing their creation, consumers will likely be the beneficiaries. These advantages will probably manifest themselves first in data centers. Potential benefits include:

  • Faster Google searches
  • Faster Facebook image recognition
  • Other “performance-intensive features not economical today”
  • Removal of processing bottlenecks, and longer battery life, in smartphones and other personal computing platforms

Professor Stojanovic believes that one of their largest challenges is to make this technology affordable before it can later be implemented in consumer-level computing and communications devices. He is sanguine that such economies of scale can be reached. He anticipates further applications of this technology enabling chips’ onboard processing and memory components to communicate directly with each other.

Additional integrations of silicon photonics might be seen in the lidar remote sensing systems of self-driving cars¹, as well as in brain imaging² and environmental sensors. It also holds the potential to alter the traditional methods by which computers are assembled. For example, the length of cables is limited by the extent to which data can pass through them quickly and efficiently before needing amplification along the way. Optical links may permit data to be transferred significantly farther along network cabling. The research team’s “prototype used 10-meter optical links”, but Professor Stojanovic believes this could eventually be lengthened to a kilometer. This could potentially result in meaningful savings in energy, hardware and processing efficiency.

Two startups that are also presently working in the silicon photonics space include:

My Questions:

  • Might another one of silicon photonics’ virtues be that it is partially fabricated from more sustainable materials, primarily silicon derived from sand rather than various metals?
  • Could silicon photonics chips and architectures be a solution to the very significant computing needs of the virtual reality (VR) and augmented reality (AR) systems that will be coming onto the market in 2016? This issue was raised in a most interesting article posted on Bloomberg.com on December 30, 2015 entitled Few Computers Are Powerful Enough to Support Virtual Reality by Ian King. (See also these 13 Subway Fold posts on a range of VR and AR developments.)
  • What other new markets, technologies and opportunities for entrepreneurs and researchers might emerge if the university research team’s chips achieve their intended goals and succeed in making it to market?

May 17, 2017 Update: For an update on one of the latest developments in photonics with potential applications in advanced computing and materials science, see Photonic Hypercrystals Are Now a Reality and Light Will Never Be the Same, by Dexter Johnson, posted on May 10, 2017, on IEEESpectrum.com.


1.  See these six Subway Fold posts for references to autonomous cars.

2.  See these four Subway Fold posts concerning certain developments in brain imaging technology.

Digital.NYC Site Launches as a Comprehensive Resource for New York City Tech and Startups

Being a very proud native of New York City, I was thrilled to see an article on TechCrunch.com posted on October 1, 2014, whose title just about said it all: Digital.nyc Launches To Be The Hub For New York Tech by Jonathan Shieber. This announced the launch of a brand new site called Digital.NYC, a hub destination concerning nearly anything and everything about the thriving tech and startup markets here in The Big Apple. For anyone interested in startups, workspaces, incubators, jobs, investing, training, news and access to a gazillion other relevant resources, this is meant to be an essential must-click resource. The site is the product of a cooperative venture by the City of New York, IBM and the venture capital firm Gust. For additional reporting, see also IBM Starts Online Hub for NYC Tech Firms, posted the same day on usatoday.com by Mike Snider.

I highly recommend a click-through and thorough perusal of this site for the remarkable depth and richness of its offerings, its timeliness, and the sense of excitement and vitality that threads throughout all of its pages. While the term “platform” is often overused to describe a program or site, I believe that Digital.NYC truly lives up to this term of art.

What also really slew me about this site was its elegant design and ease of navigation, which belie its vastness. The site clearly evinces its designers’ and builders’ passion for the subject matter and the incredible hard work they put into getting it all just right. What a daunting task they must have faced in trying to meld all of these content categories together in a layout that is so highly functional, navigable and engaging. Bravo! to everyone involved in making this happen.

Indeed, for me it passes the Man from Mars Test: If you just landed on Earth and started out knowing little or nothing about the tech market in NYC, some time spent with this site would handily start you on your way to assessing its massive dimensions, operations and opportunities. Alternatively, very savvy and veteran entrepreneurs, investors, programmers, web designers, students, venture capitalists, urban planners and others will likewise find much to learn and use here.

I Googled around a bit to see whether other cities had similar hub sites. My initial research shows that there is nothing else per se like Digital.NYC currently online. Please post a comment here or send me an email if you do know of any others out there and I will post them. However, in my online travels I did find a site called Entrepreneurial Insights that has compiled, on a page entitled Startup Hubs, a series of recently posted in-depth reports on global startup hubs. These cities include Paris, Toronto, Boston, Mumbai, Rio de Janeiro, Bangkok, Istanbul, Singapore, Beijing, Tel Aviv, Barcelona, Berlin and New York.

Possible Futures for Artificial Intelligence in Law Practice

As the legal marketplace continues to see significant economic and productivity gains from many practice-specific technologies, is it possible that attorneys themselves could one day be supplanted by sophisticated systems driven by artificial intelligence (AI) such as IBM’s Watson?

Jeopardy championships aside for the moment, leading legal technology expert and blogger Ron Friedmann posted a fascinating report and analysis on August 24, 2014 on his Prism Legal Strategic Technology Blog entitled Meet Your New Lawyer, IBM Watson. He covers an invitation-only session for CIOs of large global law firms at this summer’s annual meeting of the International Legal Technology Association (ILTA), where a Watson senior manager made a presentation to this group. Ron, as he always does on his consistently excellent blog, offers his own deep and valuable insights on the practical and economic implications of the possible adaptation of Watson to the work done at large law firms. I highly recommend a click-through and full read of this post.

(X-ref also to an earlier post here, ILTA’s New Multi-dimensional Report on the Future of Legal Information Technology.)

As I was preparing to write this post a few days ago, lo and behold, my September 2014 subscription edition of WIRED arrived. It carries a highly relevant feature about a hush-hush AI startup, entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask, by Steven Levy. It covers the work of the founders of Viv Labs, who are developing the next generation of AI technology. Even in a crowded field where many others have competed, the article indicates that this new company may really be onto something very new. That is, AI as a form of utility that can:

  • Access and integrate vast numbers of big data sources
  • Continually teach itself to do new things and autonomously generate supporting code to accomplish them
  • Handle voice queries on mobile devices that involve compound and multi-level questions, steps and sources to resolve

Please check out the full text of this article for all of the details about how Viv’s technology works and its exciting prospective uses.

That said, would Viv’s utility architecture, as opposed to Watson’s larger-scale technology, be more conducive to today’s legal applications? Assuming for the moment that it’s technically feasible, how would the ability to operate by such voice-based AI input/output affect the operation and quality of results for, say, legal research services, document assembly applications, precedent libraries, enterprise search, wikis, extranets, and perhaps even Continuing Legal Education courses? What might be a tipping point towards a greater engagement of AI in the law across many types of practices and office settings? Might this result in in-house counsel bringing more work to their own staffs rather than going to outside counsel? Would public interest law offices be able to provide more economical services to clients who cannot normally afford to pay legal fees? Might this have further impacts upon the trend towards fixed-fee billing arrangements?

IBM’s New TrueNorth Chip Mimics Brain Functions

To borrow a title made famous by Monty Python to characterize a development announced in the August 7, 2014 edition of the New York Times: now for something completely different in, well, computing architecture. IBM has created a chip called TrueNorth that mimics some of the operations of the human brain. As covered in this report, entitled IBM Develops a New Chip That Functions Like a Brain, this chip uses far less power than other chips built on more traditional technologies and, it is hoped, may enable faster and more extensible processing and interpretation of certain classes of data. The article links to the IBM researchers’ paper in the August 8, 2014 issue of Science containing the technical details of their accomplishment. In addition to reading this fascinating article in full, I also suggest a click-through to an article on IBM Research’s own website entitled Introducing a Brain-inspired Computer.

This is one of those remarkable developments where the inspiration for a unique technological advancement has been derived from human biology. The field of biomimetics has likewise produced innovative systems, designs and materials in many diverse fields such as, among others, aeronautics, pharmaceuticals and robotics.

As reported in the NYTimes story, TrueNorth is being termed a “neuromorphic” chip because it imitates the functions of the brain’s neurons to better recognize patterns, such as changes in the intensity and color of light or particular physical movements made by a person. The May/June 2014 edition of MIT’s Technology Review, in its annual report on the Top 10 Breakthrough Technologies, carried a highly informative article entitled Neuromorphic Chips naming this as one of 2014’s breakthrough areas.

The report further states that the chip’s “neurons” all run in parallel and can compute 46 million operations per second. While not as fast as many of today’s other chips, by its very nature it is better able to handle certain types of operations than conventional faster chips. Moreover, scientists believe that the speed of these chips will continue to scale up.
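
For a rough feel of what massively parallel “neurons” mean at the lowest level, here is a toy leaky integrate-and-fire update in Python. This is a standard textbook abstraction, not IBM’s actual TrueNorth design:

```python
import numpy as np

def lif_step(potentials, inputs, leak=0.95, threshold=1.0):
    """One update of a population of leaky integrate-and-fire neurons.

    Every neuron is updated in the same vectorized step, which is the
    kind of massive parallelism neuromorphic hardware exploits.
    """
    potentials = potentials * leak + inputs  # leak, then integrate input
    spikes = potentials >= threshold         # neurons firing this step
    potentials[spikes] = 0.0                 # reset the fired neurons
    return potentials, spikes

rng = np.random.default_rng(seed=1)
v = np.zeros(1_000_000)                      # one million toy neurons
for _ in range(10):
    v, fired = lif_step(v, rng.random(1_000_000) * 0.2)
print(int(fired.sum()), "neurons fired on the last step")
```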

I am certain that, as with many other strikingly original advances such as this, applications will continue to emerge for these chips that no one has currently anticipated. I am greatly looking forward to seeing what they are and where they occur.