AR + #s = $$$: New Processes and Strategies for Extracting Actionable Business Data from Augmented Reality Systems

“Ars Electronica – Light Tank”, image by Uwe Rieger (Denz), Yinan Liu (NZ) (Arcsec Lab) @ St. Mary’s Cathedral

Perhaps taking two disparate assertions, one tacit and one spoken, completely out of their original contexts and re-mixing and re-applying them to a different set of circumstances can be a helpful means to introduce an emerging and potentially prosperous new trend.

First, someone I know has a relatively smart and empathetic dog who will tilt his head from side to side if you ask him (the dog) something that sounds like a question. His owner claims that this is his dog’s way of non-verbally communicating – – to paraphrase (or parabark, maybe) – – something to the effect of “You know, you’re right. I never really thought of it that way”. Second, in an article in the January 4, 2019 edition of The New York Times entitled The Week in Tech: Amazon’s Burning Problems, by David Streitfeld, there is an amusing quote from a writer for WIRED named Craig Mod describing his 2018 Amazon Kindle Oasis as being “about as interactive as a potato”.

So, let’s take some literary license (and, of course, the dog’s license, too), and conflate these two communications in order to paws here to examine the burgeoning commercial a-peel of the rich business data now being generated by augmented reality (AR) systems.

To begin, let’s look no further than the 2019 Consumer Electronics Show (CES) held last month in Las Vegas. New offerings of AR products and services were all the rage among the many other cutting-edge technologies and products being displayed, demoed and discussed.¹ As demonstrably shown at this massive industry confab, these quickly evolving AR systems, which assemble and present a data-infused overlay upon a user’s real-world line of sight, are finding a compelling array of versatile applications in a widening spectrum of industries. Like everything else in today’s hyper-connected world, AR also generates waves of data that can be captured, analyzed and leveraged for the benefit and potential profit of many commercial enterprises.

A close and compelling examination of this phenomenon was recently posted in an article entitled Unlocking the Value of Augmented Reality Data, by Joe Biron and Jonathan Lang, on the MIT Sloan Management Review site on December 20, 2018. I highly recommend a click-through and full read if you have an opportunity. I will try to summarize and annotate this piece and, well, augment it with some of my own questions.

[The Subway Fold category of Virtual and Augmented Reality has been tracking a sampling of developments in this area in commerce, academia and the arts for the past several years.]

Image from Pixabay.com

Uncensored Sensors

Prior to the emergence of the Internet of Things (IoT), it was humans who mostly performed the functions of certain specialized sensors in tasks such as detecting environmental changes and then transmitting their findings. Now, as AR systems are increasingly deployed, people are being equipped with phones, headsets and other devices embedded with these sensing capabilities. This “provides uncharted opportunities for organizations” to make use of the resulting AR data-enabled analyses to increase their “operational effectiveness” and distinguish the offerings of their goods and services to the consumer public.

AR’s market in 2019 is analogous to where the IoT market was in 2010, garnering significant buzz and demonstrating “early value for new capabilities”. This technology’s capacity to “visualize, instruct, and interact” can become transformative in data usage and analytics. (See Why Every Organization Needs an Augmented Reality Strategy, by Michael E. Porter and James E. Heppelmann, Harvard Business Review, November – December 2017.)

To take advantage of AR, businesses should now be concentrating on the following questions:

  • How best to plan to optimize and apply AR-generated data?
  • How to create improved “products and processes” based upon AR users’ feedback?

Image from Pixabay.com

AR Systems Generate Expanding Spheres of User Data

Looking again to the past for guidance today, the introductions of the iPhone and Android phones in 2007 and 2008 were tech industry turning points that produced “significant data about how customers engaged with their brand”. This period also provided engineers with a deeper understanding of user requirements. In turn, it inverted the value proposition such that “applications could sense and measure” consumer experiences as they occurred.

Empowered with comparable “sensing capabilities emerging through the IoT”, manufacturers promptly added connectivity, giving rise to smart, connected products (SCPs). These new devices now comprise much of the IoT. The resulting massive data collection infrastructure and the corresponding data economy have been “disrupting technology laggards ever since”.

Moreover, using “AR-as-a-sensor” to gather large quantities of data holds significant potential advantages. Many AR-enabled devices are already embedded with sensing capabilities including “cameras, GPS, Bluetooth, infrared and accelerometers”. More organically, they also unleash human “creativity, intuition and experience” that cannot otherwise be replicated by the current states of hardware and software.²

What can humans with AR-based devices provide to enhance their experiences? New types of data and “behavioral insights” can be harvested from both SCPs and unconnected products. For example, in the case of an unconnected product, a user with a device equipped to operate as a form of AR-as-a-sensor could examine how the product is used and what the accompanying user preferences for it are. For an SCP, the AR-equipped user could examine how usage affects performance and whether the product is adaptable to that particular user’s purposes.

For additional critical context, it is indeed “human interaction” that provides insights into how SCPs and unconnected devices are realistically operating, performing and adapting.
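
To make “AR-as-a-sensor” data slightly more concrete, here is a minimal sketch (in Python) of what a single captured AR usage event might look like before it is aggregated for analysis. The field names and example values are purely hypothetical illustrations of the kinds of signals described above, not any vendor’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class ARUsageEvent:
    """One hypothetical 'AR-as-a-sensor' observation of a user interacting with a product."""
    device_id: str                      # the AR phone or headset that captured the event
    product_id: str                     # the SCP or unconnected product being examined
    interaction: str                    # e.g. "viewed_overlay", "followed_repair_step"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    gps: Optional[tuple] = None         # (latitude, longitude) if the device reports it
    sensor_readings: Dict[str, float] = field(default_factory=dict)  # accelerometer, etc.

# Example: a user inspecting an unconnected product through an AR repair overlay.
event = ARUsageEvent(
    device_id="headset-042",
    product_id="bike-frame-7",
    interaction="followed_repair_step",
    sensor_readings={"accelerometer_g": 0.98},
)
print(event.interaction, event.timestamp.isoformat())
```

Aggregated over many users, records like this are what let even an unconnected product begin to “report” on its own usage through the person holding the device.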

“Rainbow Drops”, Image by Mrs. eNil

Evaluating Potential Business Benefits from AR-Derived Data

This new quantum of AR information further creates a form of feedback loop whereby questions concerning a product’s usage and customization can be assessed. This customer data has become central to “business strategy in the new digital economy”.

In order to more comprehensively understand and apply these AR data resources, a pyramid-shaped model called “DIKW” can be helpful. Its elements include:

  • Data
  • Information
  • Knowledge
  • Wisdom

These are deployed in information management operations to process unrefined AR data into “value-rich knowledge and insights”. By then porting the resulting insights into engineering systems, businesses can enhance their “product portfolio, design and features” in previously unseen ways.
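
As a rough, invented illustration of how the DIKW progression might be applied to raw AR readings, the short sketch below walks one metric (overlay dwell time) up the pyramid. The threshold and the recommendation wording are my own assumptions for the example, not anything specified in the article.

```python
# Raw AR "data": how long (in seconds) users dwelled on a product's AR overlay.
raw_dwell_times = [3.1, 45.0, 2.4, 38.7, 41.2, 1.9]

# Information: summarize the raw readings.
average_dwell = sum(raw_dwell_times) / len(raw_dwell_times)

# Knowledge: interpret the summary against an (assumed) benchmark.
ENGAGEMENT_BENCHMARK = 20.0  # hypothetical threshold, in seconds
is_engaging = average_dwell > ENGAGEMENT_BENCHMARK

# Wisdom: a decision a product team could act on.
if is_engaging:
    recommendation = "Feature the AR overlay prominently in the next release."
else:
    recommendation = "Rework the overlay; users are not spending time with it."

print(f"average dwell: {average_dwell:.1f}s -> {recommendation}")
```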

AR data troves can also be merged with IoT-generated data from SCPs to support added context and insights. For unconnected devices or digital-only offerings, humans using AR to interact with them can themselves become sensors similarly providing new perspectives on a product’s:

  • Service usage
  • Quality
  • Optimization of the “user experience and value”

“DSC_445”, Image by Frank Cundiff

Preliminary Use Cases

The following are emerging categories and early examples of how companies are capturing and leveraging AR-generated data:

  • Expert Knowledge Transfer: Honeywell is gathering data from experienced employees and enhancing their collective knowledge so that it can be transferred to new hires. The company has implemented this by “digitizing knowledge” about its products that is only made visible through experience. This enables them to better understand their products in entirely new ways. Further details of this initiative are presented on the firm’s website in a feature with photos and a video entitled How Augmented Reality is Revolutionizing Job Training.
  • Voice of the Product: Bicycle manufacturer Cannondale is now shipping their high-end products with an AR phone app to assist owners and bike shop mechanics with details and repairs. This is intended to add a new dimension to bike ownership by joining its physical and digital components. The company can also use this app to collect anonymized data to derive their products’ “voice”. This will consequently provide them with highly informative data on which “features and procedures” are being used the most by cyclists, which can then be analyzed to improve their biking experiences. For additional information about their products and the accompanying AR app, see Cannondale Habit Ready to Shred with All-New Proportional Response Design, posted on Bikerumor.com on October 9, 2018. There is also a brief preview of the app on YouTube.
  • Personalized Services: AR is being promoted as “transformative” to online and offline commerce since it enables potential buyers to virtually try something out before they buy it. For instance, Amazon’s new Echo Look permits customers to do this with clothing purchases. (See Amazon’s Echo Look Fashion Camera is Now Available to Everyone in the US, by Chris Welch, posted on TheVerge.com on June 6, 2018.) The company also patented something called “Magic Mirror” in January 2018. When combined with Echo Look, this will point the way towards the next evolution of the clothing store dressing room. (See Amazon’s Blended-Reality Mirror Shows You Wearing Virtual Clothes in Virtual Locales, by Alan Boyle, posted on GeekWire.com on January 2, 2018.) The data collected by Echo Look is “being analyzed to create user preference profiles” and, in turn, suggest purchases based upon them. It is reasonably conceivable that combining these two technologies to supplement such personalized clothing recommendations will produce additional AR-based data, elevating “personalized services and experiences” to a heretofore unattained level.³
  • Quality Control: For quite a while, DHL has been a corporate leader in integrating AR technology into its workers’ daily operations. In one instance, the company is using computer vision to perform bar code scanning. They are further using this system to gather and analyze quality assurance data. This enables them to assess how workers’ behavior “may affect order quality and process efficiency”. (See the in-depth report on the company’s website entitled Augmented Reality in Logistics, by Holger Glockner, Kai Jannek, Johannes Mahn and Björn Theis, posted in 2014.)

Image from Pixabay.com

Integrating Strategic Applications of AR-Derived Data

There is clearly a range of meaningful impacts upon business strategies to be conferred by AR-derived data. Besides the four positive examples above, other companies are likewise running comparable projects. However, some of them may remain constrained from wider exposure because of “technological or organizational” impediments.

With the emergence of AR-generated data resources, those firms that meaningfully integrate them with other established business data systems, such as customer relationship management (CRM) and “digital engagement”, will gain tangible new insights and commercial opportunities. Thus, in order to fully leverage these new possibilities, nimble business strategists should establish dedicated multi-departmental teams to pursue these future benefits.

My Questions

  • Because the datastreams from AR are visually based, could this be yet another fertile area to apply machine learning and other aspects of artificial intelligence?
  • What other existing data collection and analysis fields might also potentially benefit from the addition of AR-derived data streams? What about data-driven professional and amateur sports, certain specialties of medical practice such as surgery and radiology, and governmental agencies such as those responsible for the environment and real estate usage?
  • What entrepreneurial opportunities might exist for creating new AR analytical tools, platforms and hardware, as well as integration services with other streams of data to produce original new products and services?
  • What completely new types of career opportunities and job descriptions might be generated by the growth of the AR-as-a-sensor sector of the economy? Should universities consider adding AR data analytics to their curriculum?
  • What data privacy and security issues may emerge here and how might they be different from existing concerns and regulations? How would AR-generated data be treated under the GDPR? Should people be informed in advance, and their consent sought, if AR data is being gathered about them, and if so, how?
  • How might AR-generated data affect any or all of the arts and other forms of creative expression?
  • Might some new technical terms of ARt be needed such as “ARformation”, “sensAR” and “stARtegic”?

 


1.  Much of the news and tech media provided extensive coverage of this event. Choosing just one report among many, the January 10, 2019 edition of The New York Times published a roundup and analysis of all of the news and announcements in an engaging article with photos entitled CES 2019: It’s the Year of Virtual Assistants and 5G, by Brian X. Chen.

2.   For an alternative perspective on this question see the November 20, 2018 Subway Fold post entitled The Music of the Algorithms: Tune-ing Up Creativity with Artificial Intelligence.

3.  During Super Bowl 53, played (or, more accurately, snoozed through) on February 3, 2019, there was an ad for a new product called The Mirror. This is a networked full-size wall mirror where users can do their daily workouts directly in front of it and receive real-time feedback, performance readings, and communications with other users. From this ad and the company’s website, this device appears to operate upon a concept similar to Amazon’s, whereby users receive individualized and immediate feedback.

As a Matter of Fact: A New AI Tool for Real-Time Fact-Checking of News Using Voice Analysis

Image from Pixabay.com

When I first saw an article entitled Fact-Checking Live News In Just a Few Seconds, by Laine Higgins in the November 24-25, 2018 print edition of The Wall Street Journal (subscription required online), I thought the pagination might be in error. The upper left corner showed the page number to be “B4”. I think it would have been more accurate to have numbered the page “B4 and After” because of its coverage of a remarkable new program being developed called Voyc.

At a time of such heightened passions in domestic US and international news, with endless charges and counter-charges of “fake news” and assertions of “real news”, this technology can assess the audio of live news media broadcasts to determine the veracity of statements made within seconds of being spoken.

Someone please boot up John Lennon’s Gimme Some Truth as he passionately protested the need for truth in the world when it was first released on his classic Imagine album in 1971.¹ This still sounds as timely and relevant today as it did 47 years ago, particularly in light of this new article that, well, sings the praises of this new fact-checking app.

I suggest naming a new category for these programs and services to be called “fact-check tech”. I think it’s kinda catchy.

Let’s focus on the specifics of this remarkable report. I highly recommend reading it in its entirety if you have full WSJ.com access. Below I will summarize and annotate it, and then pose some of my own question-checked questions.

Downstreaming

Image from Pixabay.com

The current process of “fact-checking live news” has been slow. Quite often, by the time a human fact-checker has researched and affirmed a disputed claim, misleading information based upon it has already been distributed and “consumed”.

Voyc has the potential to expedite all this. It is being developed by Sparks Grove, the innovation and experience department of a consultancy called North Highland. Part of the development process involved interviewing a large number of “print and broadcast journalists” from the US, UK and Ireland about how to minimize and push back against misinformation as it affects the news.

The software is built upon artificial intelligence technology. It is capable of identifying a “questionable statement” in as little as two seconds. The system transcribes live audio and then runs it through a “database of fact” compiled from “verified government sources” and “accredited fact-checking organizations”. In the process, it can do the following (a rough sketch of this kind of pipeline appears after the list):

  • highlight statements that its vetting process flags as conflicting
  • send an alert to a news producer identifying the conflict, or
  • contact someone else who can make further inquiries about the assertion in question
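
Since the article does not describe Voyc’s internals, the following is only a minimal sketch of the general “fact-check tech” pipeline it outlines: transcribe live audio, compare claims against a database of verified facts, and alert a producer when something conflicts. The speech-to-text step is a stand-in and the matching logic is reduced to a naive keyword check purely for illustration.

```python
# A toy fact database; a real system would draw on verified sources and
# accredited fact-checking organizations.
FACT_DATABASE = {
    "unemployment rate": "3.9 percent",
    "federal minimum wage": "7.25 dollars",
}

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text engine; assumed, not Voyc's actual code."""
    return audio_chunk.decode("utf-8")  # pretend the 'audio' is already text

def check_statement(statement: str) -> list:
    """Return alerts for claims that conflict with the toy fact database."""
    alerts = []
    for topic, accepted_value in FACT_DATABASE.items():
        if topic in statement.lower() and accepted_value not in statement.lower():
            alerts.append(f"Possible conflict on '{topic}': accepted value is {accepted_value}")
    return alerts

live_audio = b"The unemployment rate is 10 percent, folks."
for alert in check_statement(transcribe(live_audio)):
    print("ALERT for producer:", alert)
```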

This system was conceived by Jack Stenson, Sparks Grove’s innovation lead. He said that, where the news media try to shorten the connection of “people to information”, Voyc is an effort to connect them “to what might be the most accurate truth”. Voyc’s designers were very careful to avoid dispositively labeling any misleading statements it finds as either “true” or “false”. Mr. Stenson does not want a result that “shuts down the conversation”, but rather, intends for the system to assist in stimulating “debates”.

There are other initiatives under way to develop similar technologies.

Voyc is distinguished from them insofar as it fact-checks news audio in nearly real-time whereas the others do their checks against existing published sources.

Image from Pixabay.com

Upstreaming

Mr. Stenson foresees applications of Voyc by news producers to motivate “presenters” to explore their story topics with follow-up analyses in the forms of:

  • one-on-one interviews
  • panel discussions
  • debates

This software is still in its prototype stage and there is no target date for its introduction into television production facilities. Its developers are working to improve its accuracy when recording and “transcribing idiosyncratic speech patterns”. These include dialects, as well as the “ums” and “ahs” ² that occur when people speak.

According to Lucas Graves, an associate professor at the University of Wisconsin-Madison’s School of Journalism and Mass Communication, because of the “nuanced nature” of the fact-checking Voyc is attempting, this process involves both identifying and contextualizing a statement in dispute. This is the critical factor in “verifying claims made on live news”. As broadcasters do not want to appear “partisan” or to make a contemporaneous challenge without all of the facts readily at hand, the real utility of a fact-checking system will be to challenge a claim in very close proximity to its being spoken and broadcast.

Looking back in time to dramatize the exciting potential of this forward-looking technology, let’s recall what Edith Ann (played by Lily Tomlin) on Rowan & Martin’s Laugh-In always said in concluding her appearances when she exclaimed “and that’s the truth”.

“Polygraph”, image by Rodger Bridges

My Questions

  • What additional features and functionalities should Voyc’s developers consider adding or modifying? What might future releases and upgrades look like?
  • Would Voyc be a viable add-on to currently popular voice-enabled assistants such as Amazon’s Alexa and Google’s Assistant?
  • What data and personal privacy and ethical considerations should Voyc’s designers and programmers take into consideration in their work?
  • What other market sectors might benefit from fact-check tech, such as applying it during expert testimony, educational training or government hearings?
  • Could Voyc be licensed to other developers on a commercial or open source basis?
  • Can and should Voyc be tweaked to be more industry-specific, knowledge domain-specific or culture-specific?

 


1.  This was also the opening theme song of the radio show called Idiot’s Delight, hosted for many years by Vin Scelsa who, for nearly five decades, on various commercial, satellite and public stations in New York was a leading figure in rock, progressive and freeform radio.

2.  These are known as speech disfluencies.

The Music of the Algorithms: Tune-ing Up Creativity with Artificial Intelligence

Image from Pexels.com

No, “The Algorithms” was not a stylishly alternative spelling for a rock and roll band once led by the former 45th Vice President of the United States that was originally called The Al Gore Rhythms.

That said, could anything possibly be more quintessentially human than all of the world’s many arts during the past several millennia? From the drawings done by Og the Caveman¹ on the walls of prehistoric caves right up through whatever musician’s album has dropped online today, the unique sparks of creative minds that are transformed into enduring works of music, literature, film and many other media are paramount among the diversity of things that truly set us apart from the rest of life on Earth.

Originality would seem to completely defy being reduced and formatted into an artificial intelligence (AI) algorithm that can produce new artistic works on its own. Surely something as compelling, imaginative and evocative as Springsteen’s [someone who really does have a rock and roll band with his name in it] epic Thunder Road could never have been generated by an app.

Well, “sit tight, take hold”, because the dawn of new music produced by AI might really be upon us. Whether you’ve got a “guitar and learned how to make it talk” or not, something new is emerging out there. Should all artists now take notice of this? Moreover, is this a threat to their livelihood or a new tool to be embraced by both musicians and their audiences alike? Will the traditional battle of the bands be transformed into the battle of AI’s? Before you “hide ‘neath the covers and study your pain” over this development, let’s have a look at what’s going on.

A fascinating and intriguing new report about this entitled A.I. Songwriting Has Arrived. Don’t Panic, by Dan Reilly, was posted on Fortune.com on October 25, 2018. I highly recommend clicking through and reading it if you have an opportunity. I will summarize and annotate it here, and then pose several of my own instrument-al questions.

Treble Clef

Image from Pexels.com

Previously, “music purists” disagreed about whether music tech innovations such as sampling and synthesizers were a form of cheating among recording artists. After all, these have been used in numerous hit tunes during the past several decades.

Now comes a new controversy over whether using artificial intelligence in songwriting will become a form of challenge to genuine creativity. Some current estimates indicate that during the next ten years somewhere between 20 and 30 percent of the Top 40 chart will be composed in part or in full by machine learning systems. The current types of AI-based musical applications include:

  • Cueing “an array of instrumentation” ranging from orchestral to hip-hop compositions, and
  • Systems managing alterations of “mood, tempo and genre”

Leonard Brody, co-founder of Creative Labs (a joint venture with the leading industry talent agency Creative Artists Agency), analogizes the current state of AI in music to that of self-driving cars wherein:

  • “Level 1” artists use machines for assistance
  • “Level 2” music is “crafted by a machine” but still performed by real musicians
  • “Level 3” music is both crafted and performed by machines

Drew Silverstein, the CEO of Amper Music, a software company in New York, has developed “AI-based music composition software”. This product enables musicians “to create and download ‘stems’”, the company’s terminology for “unique portions of a track” on a particular instrument, and then modify them. Silverstein believes that such “predictive tools” are part of an evolving process in original music.²

There are other participants in this nascent space who are applying algorithms in a variety of new ways to help songwriters and musicians.

Image from Pexels.com

Bass Clef

The applications of AI in music are not as entirely new as they might seem. For example, David Bowie helped in creating a program called Verbasizer for the Apple Mac. It was used on his 1995 album entitled Outside to create “randomized portions of his inputted text” to generate original lyrics “with new meanings and moods”. Bowie discussed his usage of the Verbasizer in a 1997 documentary about his own creative processes entitled Inspirations.

Among other musicians embracing these tools, Taryn Southern, previously a contestant on American Idol, used software from Amper Music, Watson Beat and other vendors for the eight songs on her debut album, the palindromically titled I Am AI, released in 2017. (Here is the YouTube video of the first track entitled Break Free.) She believes that using these tools for songwriting is not depriving anyone of work, but rather, just “making them work differently”.

Taking a different perspective on this is Will.i.am, the music producer, songwriter and member of the Black Eyed Peas. He is skeptical of using AI in music because of his concerns over how much this technological assistance “is helping creative songwriters”. He also expressed doubts concerning the following issues:

  • What is AI’s efficacy in the composition process?
  • How will the resulting music be distributed?
  • Who is the audience?
  • How profitable will it be?

He also believes that AI cannot reproduce the natural talents of some of the legendary songwriters and performers he cites, in addition to the complexities of the “recording processes” they applied to achieve their most famous recordings.

For musical talent and their representatives, the critical issue is money including, among other things, “production costs to copyright and royalties”. For instance, Taryn Southern credits herself and Amper with the songwriting for I Am AI. However, using this software enabled her to spend her funding on costs other than the traditional ones, including “human songwriters”, studio musicians, and the use of a recording studio.

To sum up, at this point in the development of music AIs, it is not anticipated that any truly iconic songs or albums will emerge from them. Rather, it is more likely that a musician “with the right chops and ingenuity” might still achieve something meaningful, and in less time, with the use of AI.

Indeed, depending on the individual circumstances of their usage, these emerging AI and machine learning music systems may well approach industry recognition of being, speaking of iconic albums – – forgive me, Bruce – – born to run.

Image from Pixabay.com

My Questions

  • Should musicians be ethically and/or legally obligated to notify purchasers of their recordings and concert tickets that an AI has been used to create their music?
  • Who owns the intellectual property rights of AI-assisted or wholly derived music? Is it the songwriter, music publisher, software vendor, the AI developers, or some other combination thereof? Do new forms of contracts or revisions to existing forms of entertainment contracts need to be created to meet such needs? Would the Creative Commons licenses be usable here? How, and to whom, would royalties be paid and at what percentage rates?
  • Can AI-derived music be freely sampled for incorporation into new musical creations by other artists? What about the rights and limitations of sampling multiple tracks of AI-derived music itself?
  • How would musicians, IP owners, music publishers and other parties be affected, and what are the implications for the music industry, if developers of musical AIs make their algorithms available on an open source basis?
  • What new entrepreneurial and artistic opportunities might arise for developing customized add-ons, plug-ins and extensions to music AIs? How might these impact IP and music industry employment issues?

1.  Og was one of the many fictional and metaphorical characters created by the New York City radio and Public Broadcasting TV legend Jean Shepherd. If he were still alive today, his work would have been perfect for podcasting. He is probably best known for several of his short stories becoming the basis for the holiday movie classic A Christmas Story. A review of a biography about him appears in the second half of the November 4, 2014 Subway Fold Post entitled Say, Did You Hear the Story About the Science and Benefits of Being an Effective Storyteller?

2.  I attended a very interesting presentation by Drew Silverstein, the CEO and founder of Amper Music, of his system on November 6, 2017, at a monthly MeetUp.com meeting of the NYC Bots and Artificial Intelligence group. The program included two speakers from other startups in this sector in an evening entitled Creating, Discovering, and Listening to Audio with Artificial Intelligence. For me, the high point of these demos was watching and listening as Silverstein deployed his system to create some original music live based upon suggestions from the audience.

I Can See for Miles: Using Augmented Reality to Analyze Business Data Sets

Image from Pixabay

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, it still might – – on a bit of a stretch – – find a fitting new context in describing one of the latest dazzling new technologies in the opening stanza’s declaration “there’s magic in my eye”. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article by an augmented reality (AR) designer named Benjamin Resnick about his team’s work at IBM on a project called Immersive Insights, entitled Visualizing High Dimensional Data In Augmented Reality, posted on July 3, 2017 on Medium.com. (Also embedded is a very cool video of a demo of this system.) They are applying AR’s rapidly advancing technology¹ to display, interpret and leverage insights gained from business data. I highly recommend reading this in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system in several years, a user will start his or her workday by donning AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays one of the business’s big data sets². The user will be able to “reach out and select that data” which, in turn, will generate additional details on a nearby monitor. Thus, the user can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short-term, the team is aiming Immersive Insights at data scientists who are facile coders, enabling them to use AR’s capabilities to visualize time series, geographical and networked data. For their long-term goals, they are planning to expand the range of Immersive Insights’ applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Each consumer represents a data point that can be expressed as a “list of purchased products” from among 50,000 possible items.

How can this sizable pool of data be better understood, and the deeper relationships within it extracted and interpreted? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections among the information’s attributes. However, for sets with many attributes, this methodology does not scale well.

Consequently, Resnick’s team has been using their own new approach to:

  • Reduce complex data to just three dimensions in order to sum up key relationships
  • Visualize the data by applying their Immersive Insights application, and
  • Iteratively “label and color-code the data” in conjunction with an “evolving understanding” of its inner workings

Their results have enabled them to “validate hypotheses more quickly” and develop a sense of the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.
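
The article does not say exactly which dimensionality-reduction technique the team applied, so the sketch below substitutes principal component analysis (PCA) from scikit-learn simply to show the three-step pattern described above: reduce a wide customer-by-product purchase matrix to three components, hand those coordinates to a 3D/AR rendering layer, and then iteratively label and color-code the points. The data here is randomly generated as a stand-in for the Instacart set.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for the Instacart data: rows are customers, columns are products,
# and a 1 means the customer purchased that product at least once.
rng = np.random.default_rng(0)
purchases = (rng.random((200, 50)) > 0.8).astype(float)  # 200 customers, 50 products

# Step 1: reduce the high-dimensional purchase vectors to three components.
pca = PCA(n_components=3)
coords_3d = pca.fit_transform(purchases)

# Step 2: these (x, y, z) coordinates are what an AR client such as
# Immersive Insights could render as the "glowing orbs" described earlier.
print("explained variance ratios:", pca.explained_variance_ratio_)

# Step 3: iteratively label and color-code, e.g. split customers on the first
# component (in the article, this axis separated premium buyers from budget buyers).
labels = np.where(coords_3d[:, 0] > 0, "premium-leaning", "budget-leaning")
print(dict(zip(*np.unique(labels, return_counts=True))))
```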

The types of data sets being used here are likewise deployed in training machine learning systems³. As a result, the potential exists for these three technologies to become complementary and mutually supportive in identifying and understanding relationships within the data, as well as in deriving any “black box predictive models”.⁴

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided in the video (linked above) on the creation of the team’s demo, and turning next to the results of the visualizations, their findings included:

  • A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” in the company’s “marketing, promotion and recommendation strategies”.
  • Among all food categories, produce was clearly the leader. Nearly all customers buy it.
  • When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance in the system’s three main components (described above).

Resnick concludes that the three cornerstone technologies of Immersive Insights – – big data, augmented reality and machine learning – – are individually and in complementary combinations “disruptive” and, as such, will affect the “future of business and society”.

Questions

  • Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?
  • Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?
  • Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?
  • Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

 


1.  See the Subway Fold category Virtual and Augmented Reality for other posts on emerging AR and VR applications.

2.  See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3.  See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4.  For a highly informative and insightful examination of this phenomenon where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI,  by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

Digital Smarts Everywhere: The Emergence of Ambient Intelligence

Image from Pixabay

The Troggs were a legendary rock and roll band who were part of the British Invasion in the late 1960’s. They have always been best known for their iconic rocker Wild Thing. This was also the only Top 10 hit that ever had an ocarina solo. How cool is that! The band went on to have two other major hits, With a Girl Like You and Love is All Around.¹

The third of the band’s classic singles can be stretched a bit to be used as a helpful metaphor to describe an emerging form of pervasive “all around”-edness, this time in a more technological context. Upon reading a fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, posted on May 7, 2016, you will find a compelling (but not too rocking) analysis of how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day.

All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter – – and soon thereafter still smarter – – technologies is comparably affecting people’s lives at many different levels.² 

We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.

I will summarize and annotate it, add some additional context, and then pose some of my own Troggs-inspired questions.

Internet of Experiences

Digital this, that and everything is everywhere in today’s world. There is a surging confluence of connected personal and business devices, the Internet, and the Internet of Things (IoT)³. Woven closely together on a global scale, we have essentially built “a digital intelligence network that transcends all that has gone before”. In some cases, this quantum of advanced technologies gains the “ability to sense, predict and respond to our needs”, and is becoming part of everyone’s “natural behaviors”.

A fourth industrial revolution might even manifest itself in the form of machine intelligence whereby we will interact with the “always-on, interconnected world of things”. As a result, the Internet may become characterized more by experiences where users will converse with ambient intelligent systems everywhere. The supporting planks of this new paradigm include, among others, big data and analytics⁴ and pervasive telecommunications networks⁵.

A prediction of what more fully realized ambient intelligence might look like using travel as an example appeared in an article entitled Gearing Up for Ambient Intelligence, by Lisa Morgan, on InformationWeek.com on March 14, 2016. Upon leaving his or her plane, the traveler will receive a welcoming message and a request to proceed to the curb to retrieve their luggage. Upon reaching curbside, a self-driving car⁶ will be waiting with information about the hotel booked for the stay.

Listening

Another article about ambient intelligence entitled Towards a World of Ambient Computing, by Simon Bisson, posted on ZDNet.com on February 14, 2014, is briefly quoted for the line “We will talk, and the world will answer”, to illustrate the point that current technology will be morphing into something in the future that would be nearly unrecognizable today. Grossman’s article proceeds to survey a series of commercial technologies recently brought to market as components of a fuller ambient intelligence that will “understand what we are asking” and provide responsive information.

Starting with Amazon’s Echo, this new device can, among other things:

  • Answer certain types of questions
  • Track shopping lists
  • Place orders on Amazon.com
  • Schedule a ride with Uber
  • Operate a thermostat
  • Provide transit schedules
  • Commence short workouts
  • Review recipes
  • Perform math
  • Request a plumber
  • Provide medical advice

Will it be long before we begin to see similar smart devices everywhere in homes and businesses?

Kevin Kelly, the founding Executive Editor of WIRED and a renowned futurist⁷, believes that in the near future, digital intelligence will become available in the form of a utility⁸ and, as he puts it, “IQ as a service”. This is already being done by Google, Amazon, IBM and Microsoft, who are providing open access to sections of their AI coding.⁹ He believes that success for the next round of startups will go to those who enhance and transform something already in existence with the addition of AI. The best example of this is once again self-driving cars.

As well, in a chapter on Ambient Computing from a report by Deloitte UK entitled Tech Trends 2015, it was noted that some companies were engineering ambient intelligence into their products as a means to remain competitive.

Recommending

A great deal of AI is founded upon the collection of big data from online searching, the use of apps and the IoT. This universe of information helps neural networks learn from repeated behaviors, including people’s responses and interests. In turn, it provides a basis for “deep learning-derived personalized information and services” that can make “increasingly educated guesses with any given content”.

An alternative perspective, that “AI is simply the outsourcing of cognition by machines”, has been expressed by Jason Silva, a technologist, philosopher and video blogger on Shots of Awe. He believes that this process is an extension of the “most powerful force in the universe”, that is, intelligence. Nonetheless, he sees this as an evolutionary process which should not be feared. (See also the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Bots are another contemporary manifestation of ambient intelligence. These are a form of software agent, driven by algorithms, that can independently perform a range of sophisticated tasks.

Speaking

Optimally, bots should also be able to listen and “speak” back in return much like a 2-way phone conversation. This would also add much-needed context, more natural interactions and “help to refine understanding” to these human/machine exchanges. Such conversations would “become an intelligent and ambient part” of daily life.

An example of this development path is evident in Google Now. This service combines voice search with predictive analytics to present users with information prior to searching. It is an attempt to create an “omniscient assistant” that can reply to any request for information “including those you haven’t thought of yet”.

Recently, the company created a Bluetooth-enabled prototype of a lapel pin based on this technology that operates just by tapping it, much like the communicators on Star Trek. (For more details, see Google Made a Secret Prototype That Works Like the Star Trek Communicator, by Victor Luckerson, on Time.com, posted on November 22, 2015.)

The configurations and specs of the AI-powered devices supporting such pervasive and ambient intelligence, be they lapel pins, some form of augmented reality¹⁰ headsets or something else altogether, are not exactly clear yet. Their development and introduction will take time but remain inevitable.

Will ambient intelligence make our lives any better? It remains to be seen, but it is probably a viable means to handle some of our more ordinary daily tasks. It will likely “fade into the fabric of daily life” and be readily accessible everywhere.

Quite possibly then, the world will truly become a better place to live upon the arrival of ambient intelligence-enabled ocarina solos.

My Questions

  • Does the emergence of ambient intelligence, in fact, signal the arrival of a genuine fourth industrial revolution or is this all just a semantic tool to characterize a broader spectrum of smarter technologies?
  • How might this trend affect overall employment in terms of increasing or decreasing jobs on an industry by industry basis and/or the entire workforce? (See also this June 4, 2015 Subway Fold post entitled How Robots and Computer Algorithms Are Challenging Jobs and the Economy.)
  • How might this trend also affect non-commercial spheres such as public interest causes and political movements?
  • As ambient intelligence insinuates itself deeper into our online worlds, will this become a principal driver of new entrepreneurial opportunities for startups? Will ambient intelligence itself provide new tools for startups to launch and thrive?

 


1.   Thanks to Little Steven (@StevieVanZandt) for keeping the band’s music in occasional rotation on The Underground Garage  (#UndergroundGarage.) Also, for an appreciation of this radio show see this August 14, 2014 Subway Fold post entitled The Spirit of Rock and Roll Lives on Little Steven’s Underground Garage.

2.  For a remarkably comprehensive report on the pervasiveness of this phenomenon, see the Pew Research Center report entitled U.S. Smartphone Use in 2015, by Aaron Smith, posted on April 1, 2015.

3.  These 10 Subway Fold posts touch upon the IoT.

4.  The Subway Fold category Big Data and Analytics contains 50 posts covering this topic in whole or in part.

5.  The Subway Fold category Telecommunications contains 12 posts covering this topic in whole or in part.

6.  These 5 Subway Fold posts contain references to self-driving cars.

7.   Mr. Kelly is also the author of a forthcoming book entitled The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, to be published on June 7, 2016 by Viking.

8.  This September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice, in part summarized an article by Steven Levy in the September 2014 issue of WIRED entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask. This covered a startup called Viv Labs whose objective was to transform AI into a form of utility. Fast forward to the Disrupt NY 2016 conference going on in New York last week. On May 9, 2016, the founder of Viv, Dag Kittlaus, gave his presentation about the Viv platform. This was reported in an article posted on TechCrunch.com entitled Siri-creator Shows Off First Public Demo of Viv, ‘the Intelligent Interface for Everything’, by Romain Dillet, on May 9, 2016. The video of this 28-minute presentation is embedded in this story.

9.  For the full details on this story see a recent article entitled The Race Is On to Control Artificial Intelligence, and Tech’s Future by John Markoff and Steve Lohr, published in the March 25, 2016 edition of The New York Times.

10.  These 10 Subway Fold posts cover some recent trends and developments in augmented reality.

The Mediachain Project: Developing a Global Creative Rights Database Using Blockchain Technology

Image from Pixabay

When people are dating it is often said that they are looking for “Mr. Right” or “Ms. Right”. That is, finding someone who is just the right romantic match for them.

In the case of today’s rapid development, experimentation and implementation of blockchain technology, if a startup’s new technology takes hold, it might soon make a highly productive (but maybe not so romantic) match with Mr. or Ms. [literal] Right by deploying the blockchain as a form of global registry of creative works ownership.

These 5 Subway Fold posts have followed just a few of the voluminous developments in bitcoin and blockchain technologies. Among them, the August 21, 2015 post entitled Two Startups’ Note-Worthy Efforts to Adapt Blockchain Technology for the Music Industry has drawn the largest number of clicks. A new report on Coindesk.com on February 23, 2016 entitled Mediachain is Using Blockchain to Create a Global Rights Database, by Pete Rizzo, provides a most interesting and worthwhile follow-on related to this topic. I recommend reading it in its entirety. I will summarize and annotate it to provide some additional context, and then pose several of my own questions.

Producing a New Protocol for Ownership, Protection and Monetization

Applications of blockchain technology for the potential management of the economic and distribution benefits of “creative professions”, including writers, musicians and others, that have been significantly affected by prolific online file copying still remain relatively unexplored. As a result, these creators do not yet have the means to “prove and protect ownership” of their work. Moreover, they do not have an adequate system to monetize their digital works. But the blockchain, by virtue of its structural and operational nature, can supply them with “provenance, identity and micropayments”. (See also the October 27, 2015 Subway Fold post entitled Summary of the Bitcoin Seminar Held at Kaye Scholer in New York on October 15, 2015 for some background on these three elements.)

Now on to the efforts of a startup called Mine ( @mine_labs ), co-founded by Jesse Walden and Denis Nazarov¹. They are preparing to launch a new metadata protocol called Mediachain that enables creators working in digital media to write data describing their work, along with a timestamp, directly onto the blockchain. (Yet another opportunity to go out on a sort of, well, date.) This system is based upon the InterPlanetary File System (IPFS). Mine believes that IPFS is a “more readable format” than others presently available.
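
The article does not publish Mediachain’s actual record format, so the following is only a rough sketch of the general idea being described: attach descriptive metadata and a timestamp to a creative work and derive a stable, content-addressed identifier for it, loosely in the spirit of IPFS. The field names, hashing choice and helper function are all assumptions made for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def register_work(creator: str, title: str, media_sha256: str) -> dict:
    """Build a timestamped metadata record and a content address for it (illustrative only)."""
    record = {
        "creator": creator,
        "title": title,
        "media_sha256": media_sha256,   # hash of the image/audio file itself
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical JSON so the same record always yields the same identifier,
    # loosely analogous to IPFS content addressing.
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    record_id = hashlib.sha256(canonical).hexdigest()
    return {"record_id": record_id, "record": record}

entry = register_work(
    creator="Jane Photographer",
    title="Rainbow Drops",
    media_sha256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
)
print(entry["record_id"][:16], "->", entry["record"]["title"])
# A real deployment would publish the record through an IPFS node and anchor the
# identifier on a blockchain; a simple index keyed by the media hash could also
# stand in for the kind of reverse query discussed later in this post.
```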

Walden thinks that Mediachain’s “decentralized nature”, rather than a more centralized model, is critical to its objectives. Previously, a very “high-profile” initiative to establish a similar global “database of musical rights and works”, called the Global Repertoire Database (GRD), had failed.

(Mine maintains this page of a dozen recent posts on Medium.com about their technology that provides some interesting perspectives and details about the Mediachain project.)

Mediachain’s Objectives

Walden and Nazarov have tried to innovate by means of changing how media businesses interact with the Internet, as opposed to trying to get them to work within its established standards. Thus, the Mediachain project has emerged with its focal point being the inclusion of descriptive data and attribution for image files by combining blockchain technology and machine learning². As well, it can accommodate reverse queries to identify the creators of images.

Nazarov views Mediachain “as a global rights database for images”. When used in conjunction with, among others, Instagram, he and Walden foresee a time when users of this technology can retrieve “historic information” about a file. By doing so, they intend to assist in “preserving identity”, given the present challenges of enforcing creator rights and “monetizing content”. In the future, they hope that Mediachain inspires the development of new platforms for music and movies that would permit ready access to “identifying information for creative works”. According to Walden, their objective is to “unbundle identity and distribution” and provide the means to build new and more modern platforms to distribute creative works.

Potential Applications for Public Institutions

Mine’s co-founders believe that there is further meaningful potential for Mediachain to be used by public organizations who provide “open data sets for images used in galleries, libraries and archives”. For example:

  • The Metropolitan Museum of Art (“The Met” as it is referred to on their website and by all of my fellow New York City residents), has a mandate to license the metadata about the contents of their collections. The museum might have a “metadata platform” of its own to host many such projects.
  • The New York Public Library has used its own historical images, which are available to the public, to, among other things, create maps.³ Nazarov and Walden believe they could “bootstrap the effort” by promoting Mediachain’s expanded apps in “consumer-facing projects”.

Maintaining the Platform Security, Integrity and Extensibility

Prior to Mediachain’s pending launch, Walden and Nazarov are highly interested in protecting the platform’s legitimate users from “bad actors” who might wrongfully claim ownership of others’ rightfully owned works. As a result, to ensure the “trust of its users”, their strategy is to engage public institutions as a model upon which to base this. Specifically, Mine’s developers are adding key functionality to Mediachain that enables the annotation of images.

The new platform will also include a “reputation system” so that subsequent users will start to “trust the information on its platform”. In effect, their methodology empowers users “to vouch for a metadata’s correctness”. The co-founders also believe that the “Mediachain community” will increase or decrease trust in the long-term depending on how it operates as an “open access resource”. Nazarov pointed to the success of Wikipedia to characterize this.

Following the launch of Mediachain, the startup’s team believes this technology could be integrated into other existing social media sites such as the blogging platform Tumblr. Here they think it would enable users to search images including those that may have been subsequently altered for various purposes. As a result, Tumblr would then be able to improve its monetization efforts through the application of better web usage analytics.

The same level of potential, by virtue of using Mediachain, may likewise be found waiting on still other established social media platforms. Nazarov and Walden mentioned seeing Apple and Facebook as prospects for exploration. Nazarov said that, for instance, Coindesk.com could set its own terms for its usage and consumption on Facebook Instant Articles (a platform used by publishers to distribute their multimedia content on FB). Thereafter, Mediachain could possibly facilitate the emergence of entirely new innovative media services.

Nazarov and Walden temper their optimism because the underlying IPFS basis is so new and acceptance and adoption of it may take time. As well, they anticipate “subsequent issues” concerning the platform’s durability and the creation of “standards for metadata”. Overall though, they remain sanguine about Mediachain’s prospects and are presently seeking developers to embrace these challenges.

My Questions

  • How would new platforms and apps using Mediachain and IPFS be affected by the copyright and patent laws and procedures of the US and other nations?
  • How would applications built upon Mediachain affect or integrate with digital creative works distributed by means of a Creative Commons license?
  • What new entrepreneurial opportunities for startup services might arise if this technology eventually gains web-wide adoption and trust among creative communities?  For example, would lawyers and accountants, among many others, with clients in the arts need to develop and offer new forms of guidance and services to navigate a Mediachain-enabled marketplace?
  • How and by whom should standards for using Mediachain and other potential development path splits (also known as “forks“), be established and managed with a high level of transparency for all interested parties?
  • Does analogizing what Bitcoin is to the blockchain also hold equally true for what Mediachain is to the blockchain, or should alternative analogies and perspectives be developed to assist in the explanation, acceptance and usage of this new platform?

June 1, 2016 Update:  For an informative new report on Mediachain’s activities since this post was uploaded in March, I recommend clicking through and reading Mediachain Envisions a Blockchain-based Tool for Identifying Artists’ Work Across the Internet, by Jonathan Shieber, posted today on TechCrunch.com.


1.   This link from Mine’s website is to an article entitled Introducing Mediachain by Denis Nazarov, originally published on Medium.com on January 2, 2016. He mentions in his text an earlier startup called Diaspora that ultimately failed in its attempt at creating something akin to the Mediachain project. This December 4, 2014 Subway Fold post entitled Book Review of “More Awesome Than Money” concerned a book that expertly explored the fascinating and ultimately tragic inside story of Diaspora.

2.   Many of the more than two dozen Subway Fold posts in the category of Smart Systems cover some of the recent news, trends and applications in machine learning.

3.  For details, see the January 5, 2016 posting on the NY Public Library’s website entitled Free for All: NYPL Enhances Public Domain Collections for Sharing and Reuse, by Shana Kimball and Steven A. Schwarzman.

New IBM Watson and Medtronic App Anticipates Low Blood Glucose Levels for People with Diabetes

"Glucose: Ball-and-Stick Model", Image by Siyavula Education

“Glucose: Ball-and-Stick Model”, Image by Siyavula Education

Can a new app, jointly developed by IBM with its Watson AI technology and the medical device maker Medtronic, provide a new form of support for people with diabetes by helping them safely avoid low blood glucose (BG) levels (a condition called hypoglycemia) in advance? If so, and assuming regulatory approval, this technology could be a very significant boon to the care of this disease.

Basics of Managing Blood Glucose Levels

The daily management of diabetes involves a diverse mix of factors including, but not limited to, regulating insulin dosages, checking BG readings, measuring carbohydrate intake at meals, gauging activity and exercise levels, and controlling stress. There is no perfect algorithm for doing this: everyone with this condition is different, and each person’s body reacts in its own way as they try to balance all of these factors while maintaining healthy short- and long-term control of their BG levels.

Diabetes care today operates in a very data-driven environment. BG levels, expressed numerically, can be checked either on a hand-held meter with test strips using a single drop of blood or with a continuous glucose monitoring system (CGM). The latter consists of a thumb-drive-sized sensor attached to the skin with temporary adhesive and a needle, connected to this unit, inserted just below the skin. This system gives patients frequent real-time readings of their BG levels, and whether they are trending up or down, so they can adjust their medication accordingly. That is, for A grams of carbs, B amount of physical activity and other contributing factors, C units of insulin can be calculated and dispensed.
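
To make that A/B/C arithmetic a bit more concrete, here is a simplified, hypothetical Python sketch of a carb-and-correction bolus estimate. The parameter names and values are illustrative only; real ratios are individualized by a clinician, and nothing here is medical advice.

```python
# Simplified illustration of the arithmetic behind a bolus estimate
# ("A grams of carbs and B amount of activity -> C units of insulin").
# All parameters are hypothetical examples, not clinical guidance.

def estimate_bolus(carbs_g, current_bg, target_bg=110,
                   carb_ratio=10.0, correction_factor=40.0,
                   activity_reduction=0.0):
    """Estimate an insulin bolus in units.

    carbs_g            -- grams of carbohydrate in the upcoming meal
    current_bg         -- current blood glucose reading (mg/dL)
    target_bg          -- desired blood glucose (mg/dL)
    carb_ratio         -- grams of carbs covered by 1 unit of insulin
    correction_factor  -- mg/dL that 1 unit of insulin lowers BG
    activity_reduction -- fraction to trim the dose for planned exercise
    """
    meal_dose = carbs_g / carb_ratio
    correction_dose = max(0.0, (current_bg - target_bg) / correction_factor)
    total = (meal_dose + correction_dose) * (1.0 - activity_reduction)
    return round(total, 1)

# Example: 60 g of carbs, BG of 180 mg/dL, light exercise planned.
print(estimate_bolus(60, 180, activity_reduction=0.2))  # -> 6.2
```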

Insulin itself can be administered either manually by injection or by an insulin pump (also with a subcutaneously inserted needle). The latter consists of two devices: the pump itself, a small enclosed unit (about the size of a pager) with an infusion needle placed under the patient’s skin, and a Bluetooth-enabled handheld controller (which looks much like a smartphone) used to adjust the dosage and timing of the insulin released. Some pump manufacturers are also bringing to market their latest generation of CGMs that integrate their data and command functions with users’ smartphones.

(The links in the previous two paragraphs are to Wikipedia pages with detailed descriptions and photos of CGMs and insulin pumps. See also this June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” for another glucose sensing and insulin dispensing system under development.)

The trickiest part of all of these systems is keeping BG levels within an acceptable range throughout each day. High levels can result in a host of difficult symptoms. Hypoglycemic low levels can quickly become serious, manifesting as dizziness, confusion and other symptoms, and in extreme cases can lead to unconsciousness if not treated immediately.

New App for Predicting and Preventing Low Blood Glucose Levels

Taking this challenge to an entirely new level, at last week’s annual Consumer Electronics Show (CES) in Las Vegas, IBM and Medtronic jointly announced their new app to predict hypoglycemic events in advance. The app is built upon Watson’s strengths in artificial intelligence (AI) and machine learning, which it uses to sift through and find patterns in large volumes of data, in this case generated by Medtronic’s user base of CGMs and insulin pumps. This story was covered in a most interesting article posted in The Washington Post on January 6, 2016 entitled IBM Extends Health Care Bet With Under Armour, Medtronic by Jing Cao and Michelle Fay Cortez. I will summarize and annotate this report and then pose some of my own questions.

The announcement and demo of the new app at CES on January 6, 2016 showed the process by which a patient’s data is collected from their Medtronic devices and then combined with additional information from their wearable activity trackers and food intake. All of this information is then processed through Watson in order to “provide feedback” that helps the patient “manage their diabetes”.
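
The article does not detail how these data streams are merged before Watson processes them, but here is a hypothetical Python sketch of what combining CGM readings, activity-tracker data and logged meals into a single feature record might look like. The PatientSnapshot structure and all of the field names are my own assumptions, not IBM’s or Medtronic’s.

```python
# Hypothetical sketch of combining the data streams the article describes
# (CGM readings, activity-tracker data, and logged meals) into one record
# that an analytics service could score. None of the field names come
# from IBM or Medtronic.

from dataclasses import dataclass
from statistics import mean

@dataclass
class PatientSnapshot:
    recent_bg_readings: list        # last few CGM readings, mg/dL
    insulin_on_board: float         # units still active from prior boluses
    steps_last_hour: int            # from a wearable activity tracker
    carbs_last_meal: int            # grams, from a food-logging app

def build_feature_record(snapshot: PatientSnapshot) -> dict:
    """Flatten raw streams into the features a predictive model might use."""
    readings = snapshot.recent_bg_readings
    trend = readings[-1] - readings[0] if len(readings) > 1 else 0
    return {
        "bg_current": readings[-1],
        "bg_mean": mean(readings),
        "bg_trend": trend,                     # falling BG -> negative value
        "insulin_on_board": snapshot.insulin_on_board,
        "steps_last_hour": snapshot.steps_last_hour,
        "carbs_last_meal": snapshot.carbs_last_meal,
    }

snapshot = PatientSnapshot([142, 128, 117, 104], 1.5, 3200, 30)
print(build_feature_record(snapshot))
```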

Present and Future Plans for The App and This Approach

Making the announcement were Virginia Rometty, Chairman, President and CEO of IBM, and Omar Ishrak, Chairman and CEO of Medtronic. The technology is expected to be introduced in the summer of 2016, though it must still be submitted to the US government’s regulatory review process.

Ms. Rometty said that the capability to predict low BG events, in some cases up to three hours before they occur, is a “breakthrough”. She described Watson as “cognitive computing”, using algorithms to generate “prescriptive and predictive analysis”. The company is currently making a major strategic move into finding and facilitating applications and partners for Watson in the health care industry. (Eight other Subway Fold posts cover various systems and developments built with Watson.)
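
Neither company has disclosed how Watson arrives at these predictions. As one generic illustration of “predictive analysis”, though, a classifier trained on historical records labeled with whether a low followed within three hours could output such a probability. The sketch below uses synthetic data and scikit-learn’s logistic regression purely as an assumed stand-in for the general approach; it is not IBM’s method.

```python
# One generic way a "chance of hypoglycemia in the next three hours"
# estimate could be produced (NOT IBM's or Medtronic's method): train a
# classifier on historical feature records labeled with whether a low-BG
# event followed within three hours. All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: columns are [current BG, BG trend, insulin on board].
X = np.column_stack([
    rng.normal(130, 40, 500),   # current BG (mg/dL)
    rng.normal(0, 15, 500),     # trend over the last 30 minutes (mg/dL)
    rng.uniform(0, 4, 500),     # insulin on board (units)
])
# Label a future low as more likely when BG is low, falling, and insulin is active.
risk_score = -0.03 * X[:, 0] - 0.05 * X[:, 1] + 0.8 * X[:, 2]
y = (risk_score + rng.normal(0, 1, 500) > -2.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a current snapshot: BG 95 mg/dL, falling 20 mg/dL per 30 min, 2.0 units active.
probability = model.predict_proba([[95, -20, 2.0]])[0, 1]
print(f"Estimated chance of a low-BG event within 3 hours: {probability:.0%}")
```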

Hooman Hakami, Executive VP and President of the Diabetes Group at Medtronic, described how his company is working to “anticipate” how the behavior of each person with diabetes affects their blood glucose levels. With this information, patients can then “make choices to improve their health”. Here is the page from the company’s website about its partnership with IBM on treating diabetes.

In the future, both companies aim to “give patients real-time information” about how their individual data is influencing their BG levels and to “provide coaching” that helps them make adjustments to keep their readings in a “healthy range”. In one scenario, a patient might receive a text message that “they have an 85% chance of developing low blood sugar within an hour”, along with a recommendation to watch their readings and eat something to bring their BG back up to a safer level.
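
As a final hypothetical sketch, turning a model’s probability into the kind of text alert described above might look something like the following; the 85% threshold, the message wording and the build_alert function are my own assumptions for illustration.

```python
# Hypothetical sketch of converting a predicted risk probability into the
# kind of patient-facing alert the article describes. The threshold and
# message wording are invented for illustration.

from typing import Optional

def build_alert(probability: float, horizon_minutes: int = 60,
                threshold: float = 0.85) -> Optional[str]:
    """Return an alert message when the predicted risk crosses the threshold."""
    if probability < threshold:
        return None
    return (
        f"Heads up: about a {probability:.0%} chance of low blood sugar "
        f"within the next {horizon_minutes} minutes. Check your readings "
        f"and consider eating a fast-acting carbohydrate."
    )

print(build_alert(0.87))  # prints the alert text
print(build_alert(0.40))  # prints None -- no alert is sent
```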

My Questions

  • Will this make patients more or less diligent in their daily care? Is there potential for patients to assume less responsibility for their care if they sense that the management of their diabetes is running on a form of remote control? Alternatively, might this result in too much information for patients to manage?
  • What would be the possible results if this app is ever engineered to work in conjunction with the artificial pancreas project being led by Ed Damiano and his group of developers in Boston?
  • If this app receives regulatory approval and gains wide acceptance among people with diabetes, what does this medical ecosystem look like in the future for patients, doctors, medical insurance providers, regulatory agencies, and medical system entrepreneurs? How might it positively or negatively affect the market for insulin pumps and CGMs?
  • Should IBM and Medtronic consider making their app available on an open-source basis to enable other individuals and groups of developers to improve it as well as develop additional new apps?
  • How will insurance policies for both patients and manufacturers deal with any potential liability that may arise if the app causes unforeseen adverse effects? Will medical insurance even cover, encourage or discourage the use of such an app?
  • Will the data generated by the app ever be used in any unforeseen ways that could affect patients’ privacy? Would patients using the new app have to relinquish all rights and interests to their own BG data?
  • What other medical conditions might benefit from a similar type of real-time data, feedback and recommendation system?