Digital Smarts Everywhere: The Emergence of Ambient Intelligence

Image from Pixabay

The Troggs were a legendary rock and roll band who were part of the British Invasion in the mid-1960s. They have always been best known for their iconic rocker Wild Thing, which was also the only Top 10 hit ever to feature an ocarina solo. How cool is that! The band went on to have two other major hits, With a Girl Like You and Love is All Around.¹

The third of the band’s classic singles can be stretched a bit into a helpful metaphor for an emerging form of pervasive “all around”-edness, this time in a more technological context. In a fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, posted on May 7, 2016, you will find a compelling (but not too rocking) analysis of how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day.

All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter, and soon thereafter still smarter, technologies is comparably affecting people’s lives at many different levels.²

We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.

I will summarize and annotate it, add some additional context, and then pose some of my own Trogg-inspired questions.

Internet of Experiences

Digital this, that and everything is everywhere in today’s world. There is a surging confluence of connected personal and business devices, the Internet, and the Internet of Things (IoT)³. Woven closely together on a global scale, we have essentially built “a digital intelligence network that transcends all that has gone before”. In some cases, this quantum of advanced technologies gains the “ability to sense, predict and respond to our needs”, and is becoming part of everyone’s “natural behaviors”.

A fourth industrial revolution might even manifest itself in the form of machine intelligence whereby we will interact with the “always-on, interconnected world of things”. As a result, the Internet may become characterized more by experiences where users will converse with ambient intelligent systems everywhere. The supporting planks of this new paradigm include:

A prediction of what more fully realized ambient intelligence might look like, using travel as an example, appeared in an article entitled Gearing Up for Ambient Intelligence, by Lisa Morgan, on InformationWeek.com on March 14, 2016. Upon leaving the plane, travelers will receive a welcoming message and a request to proceed to the curb to retrieve their luggage. Upon reaching curbside, a self-driving car⁶ will be waiting with information about the hotel booked for the stay.

Listening

Another article about ambient intelligence entitled Towards a World of Ambient Computing, by Simon Bisson, posted on ZDNet.com on February 14, 2014, is briefly quoted for the line “We will talk, and the world will answer”, to illustrate the point that current technology will be morphing into something in the future that would be nearly unrecognizable today. Grossman’s article proceeds to survey a series of commercial technologies recently brought to market as components of a fuller ambient intelligence that will “understand what we are asking” and provide responsive information.

Starting with Amazon’s Echo, this new device can, among other things:

  • Answer certain types of questions
  • Track shopping lists
  • Place orders on Amazon.com
  • Schedule a ride with Uber
  • Operate a thermostat
  • Provide transit schedules
  • Commence short workouts
  • Review recipes
  • Perform math
  • Request a plumber
  • Provide medical advice

Will it be long before we begin to see similar smart devices everywhere in homes and businesses?

Kevin Kelly, the founding Executive Editor of WIRED and a renowned futurist⁷, believes that in the near future, digital intelligence will become available in the form of a utility⁸ or, as he puts it, “IQ as a service”. This is already being done by Google, Amazon, IBM and Microsoft, who are providing open access to sections of their AI code.⁹ He believes that success for the next round of startups will go to those who enhance and transform something already in existence with the addition of AI. The best example of this is once again self-driving cars.

As well, a chapter on Ambient Computing in a report by Deloitte UK entitled Tech Trends 2015 noted that some companies were engineering ambient intelligence into their products as a means to remain competitive.

Recommending

A great deal of AI is founded upon the collection of big data from online searching, the use of apps and the IoT. This universe of information helps neural networks learn from repeated behaviors, including people’s responses and interests. In turn, it provides a basis for “deep learning-derived personalized information and services” that can make “increasingly educated guesses with any given content”.
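The mechanics behind such personalization can be sketched with a toy example. The names and interaction counts below are invented for illustration; real systems use far richer behavioral signals and deep neural networks rather than a plain cosine similarity, but the underlying principle, inferring affinity from repeated behaviors, is the same:

```python
from math import sqrt

# Toy interaction matrix: rows are users, columns are content items.
# Each value counts how often that user engaged with that item.
interactions = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 0, 0, 1],
    "carol": [0, 0, 5, 4],
}

def cosine(u, v):
    # Cosine similarity: 1.0 means identical taste profiles, 0.0 none shared.
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def most_similar_user(target, data):
    # Recommendations can then be drawn from the history of the
    # user whose past behavior most resembles the target's.
    others = {name: vec for name, vec in data.items() if name != target}
    return max(others, key=lambda name: cosine(data[target], others[name]))

print(most_similar_user("alice", interactions))  # bob
```

From here, a recommender would suggest to alice the items bob engaged with that she has not yet seen.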

An alternative perspective, that “AI is simply the outsourcing of cognition by machines”, has been expressed by Jason Silva, a technologist, philosopher and video blogger on Shots of Awe. He believes that this process harnesses the “most powerful force in the universe”, that is, intelligence. Nonetheless, he sees this as an evolutionary process which should not be feared. (See also the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Bots are another contemporary manifestation of ambient intelligence. These are a form of software agent, driven by algorithms, that can independently perform a range of sophisticated tasks. Two examples include:

Speaking

Optimally, bots should also be able to listen and “speak” back in return much like a 2-way phone conversation. This would also add much-needed context, more natural interactions and “help to refine understanding” to these human/machine exchanges. Such conversations would “become an intelligent and ambient part” of daily life.
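Stripped of speech recognition, even a primitive text bot follows the same request-and-response loop described above. The intents and canned replies below are purely hypothetical; a production assistant would replace this keyword matching with full natural language understanding, but the conversational skeleton looks like this:

```python
# Hypothetical intents and replies, invented for illustration only.
INTENTS = {
    "weather": ["weather", "rain", "forecast"],
    "transit": ["train", "bus", "subway"],
    "shopping": ["order", "buy", "cart"],
}

REPLIES = {
    "weather": "Partly cloudy today with a high of 72.",
    "transit": "The next uptown train arrives in 4 minutes.",
    "shopping": "I've added that to your cart.",
}

def classify_intent(utterance):
    # Naive keyword spotting: the first intent with a matching word wins.
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return None

def respond(utterance):
    intent = classify_intent(utterance)
    return REPLIES.get(intent, "Sorry, I didn't catch that.")

print(respond("Will it rain tomorrow?"))  # Partly cloudy today with a high of 72.
```

The “context” and “refined understanding” the article calls for amount to carrying state between turns of this loop rather than treating each utterance in isolation.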

An example of this development path is evident in Google Now. This service combines voice search with predictive analytics to present users with information prior to searching. It is an attempt to create an “omniscient assistant” that can reply to any request for information “including those you haven’t thought of yet”.

Recently, the company created a Bluetooth-enabled prototype of a lapel pin based on this technology that operates just by tapping it, much like the communicators on Star Trek. (For more details, see Google Made a Secret Prototype That Works Like the Star Trek Communicator, by Victor Luckerson, on Time.com, posted on November 22, 2015.)

The configurations and specs of the AI-powered devices supporting such pervasive and ambient intelligence, be they lapel pins, some form of augmented reality¹⁰ headsets or something else altogether, are not exactly clear yet. Their development and introduction will take time but remain inevitable.

Will ambient intelligence make our lives any better? It remains to be seen, but it is probably a viable means to handle some of our more ordinary daily tasks. It will likely “fade into the fabric of daily life” and be readily accessible everywhere.

Quite possibly then, the world will truly become a better place to live upon the arrival of ambient intelligence-enabled ocarina solos.

My Questions

  • Does the emergence of ambient intelligence, in fact, signal the arrival of a genuine fourth industrial revolution or is this all just a semantic tool to characterize a broader spectrum of smarter technologies?
  • How might this trend affect overall employment in terms of increasing or decreasing jobs on an industry by industry basis and/or the entire workforce? (See also this June 4, 2015 Subway Fold post entitled How Robots and Computer Algorithms Are Challenging Jobs and the Economy.)
  • How might this trend also affect non-commercial spheres such as public interest causes and political movements?
  • As ambient intelligence insinuates itself deeper into our online worlds, will this become a principal driver of new entrepreneurial opportunities for startups? Will ambient intelligence itself provide new tools for startups to launch and thrive?

 


1.   Thanks to Little Steven (@StevieVanZandt) for keeping the band’s music in occasional rotation on The Underground Garage (#UndergroundGarage). Also, for an appreciation of this radio show, see this August 14, 2014 Subway Fold post entitled The Spirit of Rock and Roll Lives on Little Steven’s Underground Garage.

2.  For a remarkably comprehensive report on the pervasiveness of this phenomenon, see the Pew Research Center report entitled U.S. Smartphone Use in 2015, by Aaron Smith, posted on April 1, 2015.

3.  These 10 Subway Fold posts touch upon the IoT.

4.  The Subway Fold category Big Data and Analytics contains 50 posts covering this topic in whole or in part.

5.  The Subway Fold category Telecommunications contains 12 posts covering this topic in whole or in part.

6.  These 5 Subway Fold posts contain references to self-driving cars.

7.   Mr. Kelly is also the author of a forthcoming book entitled The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, to be published on June 7, 2016 by Viking.

8.  This September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice, in part summarized an article by Steven Levy in the September 2014 issue of WIRED entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask. This covered a startup called Viv Labs whose objective was to transform AI into a form of utility. Fast forward to the Disrupt NY 2016 conference going on in New York last week. On May 9, 2016, the founder of Viv, Dag Kittlaus, gave his presentation about the Viv platform. This was reported in an article posted on TechCrunch.com entitled Siri-creator Shows Off First Public Demo of Viv, ‘the Intelligent Interface for Everything’, by Romain Dillet, on May 9, 2016. The video of this 28-minute presentation is embedded in this story.

9.  For the full details on this story see a recent article entitled The Race Is On to Control Artificial Intelligence, and Tech’s Future by John Markoff and Steve Lohr, published in the March 25, 2016 edition of The New York Times.

10.  These 10 Subway Fold posts cover some recent trends and developments in augmented reality.

LinkNYC Rollout Brings Speedy Free WiFi and New Opportunities for Marketers to New York

Link.NYC WiFi Kiosk 5, Image by Alan Rothman

Back in the halcyon days of yore, before the advent of smartphones and WiFi, there were payphones and phone booths all over the streets of New York. Most have disappeared, but a few scattered survivors have still managed to hang on. An article entitled And Then There Were Four: Phone Booths Saved on Upper West Side Sidewalks, by Corey Kilgannon, posted on NYTimes.com on February 10, 2016, recounts the stories of some of the last lonely public phones.

Taking their place comes a highly innovative new program called LinkNYC (also @LinkNYC and #LinkNYC). This initiative has just begun to roll out across all five boroughs with a network of what will become thousands of WiFi kiosks providing free and very fast web access and phone calling, plus a host of other online NYC support services. The kiosks occupy the same physical spaces as the previous payphones.

The first batch of them has started to appear along Third Avenue in Manhattan. I took the photos accompanying this post of one kiosk at the corner of 14th Street and Third Avenue. While standing there, I was able to connect to the web on my phone and try out some of the LinkNYC functions. My reaction: This is very cool beans!

LinkNYC also presents some potentially great new opportunities for marketers. The launch of the program and the companies getting into it on the ground floor were covered in a terrific new article on AdWeek.com on February 15, 2016 entitled What It Means for Consumers and Brands That New York Is Becoming a ‘Smart City’, by Janet Stilson. I recommend reading it in its entirety. I will summarize and annotate it to add some additional context, and pose some of my own ad-free questions.

LinkNYC Set to Proliferate Across NYC

Link.NYC WiFi Kiosk 2, Image by Alan Rothman

When completed, LinkNYC will give New York a highly advanced mobile network spanning the entire city. Moreover, it will help to transform it into a very well-wired “smart city“.¹ That is, an urban area comprehensively collecting, analyzing and optimizing vast quantities of data generated by a wide array of sensors and other technologies. Through the network and a host of network effects, a city learns about itself and leverages this knowledge for multiple benefits for its citizenry.²

Beyond mobile devices and advertising, smart cities can potentially facilitate many other services. The consulting firm Frost & Sullivan predicts that there will be 26 smart cities across the globe by 2025. Currently, everyone is looking to NYC to see how the implementation of LinkNYC works out.

According to Mike Gamaroff, the head of innovation in the New York office of Kinetic Active, a global media and marketing firm, LinkNYC is primarily a “utility” for New Yorkers as well as “an advertising network”. Its throughput rates are at gigabit speeds, thereby making it the fastest web access available when compared to large commercial ISPs’ average rates of merely 20 to 30 megabits.

Nick Cardillicchio, a strategic account manager at Civiq Smartscapes, the designer and manufacturer of the LinkNYC kiosks, said that LinkNYC is the only place where consumers can access the Net at such speeds. For the AdWeek.com article, he took the writer, Janet Stilson, on a tour of the kiosks, including the one at Third Avenue and 14th Street, where one of the first ones is in place. (Coincidentally, this is the same kiosk I photographed for this post.)

There are a total of 16 kiosks currently operational for the initial testing. The WiFi access is available within 150 feet of a kiosk and can range up to 400 feet. Perhaps those New Yorkers actually living within this range will soon no longer need their commercial ISPs.

Link.NYC WiFi Kiosk 4, Image by Alan Rothman

The initial advertisers appearing in rotation on the large digital screen include Poland Spring (see the photo at the right), MillerCoors, Pager and Citibank. Eventually “smaller tablet screens” will be added to enable users to make free domestic voice or video calls. As well, they will present maps, local activities and emergency information in and about NYC. Users will also be able to charge up their mobile devices.

However, it is still too soon to assess and quantify the actual impact on such providers. According to David Krupp, CEO, North America, for Kinetic, neither Poland Spring nor MillerCoors has produced an adequate amount of data to yet analyze their respective LinkNYC ad campaigns. (Kinetic is involved in supporting marketing activities.)

Commercializing the Kiosks

The organization managing LinkNYC, the CityBridge consortium (consisting of Qualcomm, Intersection, and Civiq Smartscapes), is not yet indicating when the new network will progress into a more “commercial stage”. However, once the network is fully implemented within the next few years, the number of kiosks might end up being somewhere between 7,500 and 10,000. That would make it the largest such network in the world.

CityBridge is also in charge of all the network’s advertising sales. These revenues will be split with the city. Under the 12-year contract now in place, this arrangement is predicted to produce $500M for NYC, with positive cash flow anticipated within 5 years. Brad Gleeson, the chief commercial officer at Civiq, said this project depends upon the degree to which LinkNYC is “embraced by Madison Avenue” and the time needed for the network to reach “critical mass”.

Because of the breadth and complexity of this project, achieving this inflection point will be quite challenging according to David Etherington, the chief strategy officer at Intersection. He expressed his firm’s “dreams and aspirations” for LinkNYC, including providing advertisers with “greater strategic and creative flexibility”, offering such capabilities as:

  • Dayparting – dividing a day’s advertising into several segments dependent on a range of factors about the intended audience, and
  • Hypertargeting – delivering advertising to very highly defined segments of an audience
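As a rough illustration of how those two capabilities might combine in software, here is a hypothetical kiosk ad scheduler; the dayparts, audience segments and campaign names are all invented for the example and do not describe LinkNYC’s actual system:

```python
from datetime import time

# Hypothetical dayparts: (name, start, end), checked in order.
DAYPARTS = [
    ("morning_commute", time(6, 0),  time(10, 0)),
    ("midday",          time(10, 0), time(16, 0)),
    ("evening_commute", time(16, 0), time(20, 0)),
]

# Hypothetical hypertargeted campaigns keyed by (daypart, segment).
CAMPAIGNS = {
    ("morning_commute", "commuters"): "coffee_brand_spot",
    ("evening_commute", "commuters"): "streaming_service_spot",
    ("midday", "tourists"): "museum_pass_spot",
}

def daypart_for(t):
    # Map a clock time onto a daypart; everything else is late night.
    for name, start, end in DAYPARTS:
        if start <= t < end:
            return name
    return "late_night"

def pick_creative(t, segment, default="public_service_notice"):
    # Dayparting picks the time slot; hypertargeting picks the segment.
    return CAMPAIGNS.get((daypart_for(t), segment), default)

print(pick_creative(time(8, 30), "commuters"))  # coffee_brand_spot
```

A real scheduler would layer in rotation, pricing and measurement, but the lookup shown here is the essence of both techniques.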

Barry Frey, the president and CEO of the Digital Place-based Advertising Association, was also along for the tour of the new kiosks on Third Avenue. He was “impressed” by the capability it will offer advertisers to “co-locate their signs and fund services to the public” for such services as free WiFi and long-distance calling.

As to the brand marketers:

  • MillerCoors is using information at each kiosk location from Shazam, for the company’s “Sounds of the Street” ad campaign which presents “lists of the most-Shazammed tunes in the area”. (For more about Shazam, see the December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?)
  • Poland Spring is now running a 5-week campaign featuring a digital ad (as seen in the third photo above). It relies upon “the brand’s popularity in New York”.

Capturing and Interpreting the Network’s Data

Link.NYC WiFi Kiosk 1, Image by Alan Rothman

Thus far, LinkNYC has been “a little vague” about its methods for capturing the network’s data, but has said that it will maintain the privacy of all consumers’ information. One source has indicated that LinkNYC will collect, among other points, “age, gender and behavioral data”. As well, the kiosks can track mobile devices within their variable WiFi radius of 150 to 400 feet to ascertain the length of time a user stops by. Third-party data is also being added to “round out the information”.³

Some industry experts’ expectations of the value and applications of this data include:

  • Helma Larkin, the CEO of Posterscope, a New York based firm specializing in “out-of-home communications (OOH)“, believes that LinkNYC is an entirely “new out-of-home medium”. This is because the data it will generate “will enhance the media itself”. The LinkNYC initiative presents an opportunity to build this network “from the ground up”. It will also create an opportunity to develop data about its own audience.
  • David Krupp of Kinetic thinks that the data that will be generated will be quite meaningful insofar as producing a “more hypertargeted connection to consumers”.

Other US and International Smart City Initiatives

Currently in the US, there is nothing else yet approaching the scale of LinkNYC. Nonetheless, Kansas City is now developing a smaller advertiser-supported network of kiosks with wireless support from Sprint. Other cities are also working on smart city projects. Civiq is now in discussions with about 20 of them.

Internationally, Rio de Janeiro is working on a smart city program in conjunction with the 2016 Olympics. This project is being supported by Renato Lucio de Castro, a consultant on smart city projects. (Here is a brief video of him describing this undertaking.)

A key challenge facing all smart city projects is finding officials in local governments who likewise have the enthusiasm for efforts like LinkNYC. Michael Lake, the CEO of Leading Cities, a firm that helps cities with smart city projects, believes that programs such as LinkNYC will “continue to catch on” because of the additional security benefits they provide and the revenues they can generate.

My Questions

  • Should domestic and international smart cities cooperate to share their resources, know-how and experience for each other’s mutual benefit? Might this in some small way help to promote urban growth and development on a more cooperative global scale?
  • Should LinkNYC also consider offering civic support services such as voter registration or transportation scheduling apps as well as charitable functions where pedestrians can donate to local causes?
  • Should LinkNYC add some augmented reality capabilities to enhance the data capabilities and displays of the kiosks? (See these 10 Subway Fold posts covering a range of news and trends on this technology.)

February 19, 2017 Update:  For the latest status report on LinkNYC nearly a year after this post was first uploaded, please see After Controversy, LinkNYC Finds Its Niche, by Gerald Schifman, on CrainsNewYork.com, dated February 15, 2017.


1.   While Googling “smart cities” might nearly cause the Earth to shift off its axis with its resulting 70 million hits, I suggest reading a very informative and timely feature from the December 11, 2015 edition of The Wall Street Journal entitled As World Crowds In, Cities Become Digital Laboratories, by Robert Lee Hotz.

2.   Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia (W. W. Norton & Company, 2013), by Anthony M. Townsend, is a deep and wide book-length exploration of how big data and analytics are being deployed in large urban areas by local governments and independent citizens. I very highly recommend reading this fascinating exploration of the nearly limitless possibilities for smart cities.

3.   See, for example, How Publishers Utilize Big Data for Audience Segmentation, by Arvid Tchivzhel, posted on Datasciencecentral.com on November 17, 2015.


These items just in from the Pop Culture Department: It would seem nearly impossible to film an entire movie thriller about a series of events centered around a public phone, but a movie called, not so surprisingly, Phone Booth managed to do this quite effectively in 2002. It starred Colin Farrell, Kiefer Sutherland and Forest Whitaker. Imho, it is still worth seeing.

Furthermore, speaking of Kiefer Sutherland, Fox announced on January 15, 2016 that it will be making 24: Legacy, a complete reboot of the 24 franchise, this time without him playing Jack Bauer. Rather, they have cast Corey Hawkins in the lead role. Hawkins can now be seen doing an excellent job playing Heath on season 6 of The Walking Dead. Watch out Grimes Gang, here comes Negan!!


Artificial Swarm Intelligence: There Will be An Answer, Let it Bee

“Honey Bee on Willow Catkin”, Image by Bob Peterson

In almost any field involving new trends and developments, anything attracting rapidly increasing media attention is often referred to in terms of “generating a lot of buzz”. Well, here’s a quite different sort of story that adds a whole new meaning to this notion.

A truly fascinating post appeared on TechRepublic.com on January 22, 2016 entitled How ‘Artificial Swarm Intelligence’ Uses People to Make Smarter Predictions Than Experts, by Hope Reese. It is about a development where technology and humanity intersect in a highly specialized manner to produce a new means to improve predictions by groups of people. I highly recommend reading it in its entirety. I will summarize and annotate it, and then pose a few of my own bug-free questions.

A New Prediction Platform

In a recent switching of roles, while artificial intelligence (AI) concerns itself with machines executing human tasks¹, a newly developed and highly accurate algorithm “harnesses the power” of crowds to generate predictions of “real world events”. This approach is called “artificial swarm intelligence“.

A new software platform called UNU is being developed by a startup called Unanimous AI. The firm’s CEO is Dr. Louis Rosenberg. UNU facilitates the gathering of people online in order to “make collective decisions”. This is being done, according to Dr. Rosenberg, “to amplify human intelligence”. Thus far, the platform has been “remarkably accurate” in its predictions of the Academy Awards, the Super Bowl² and elections.

UNU is predicated upon the concept of the wisdom of the crowd, which holds that larger groups of people make better decisions collectively than even the single smartest person within that group.³ Dr. Roman Yampolskiy, the Director of the Cybersecurity Lab at the University of Louisville, has also created a comparable algorithm known as “Wisdom of Artificial Crowds“. (The first time this phenomenon was covered on The Subway Fold, in the context of entertainment, was in the December 10, 2014 post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?)
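The statistical intuition behind the wisdom of the crowd is easy to demonstrate: aggregate many noisy but independent guesses and the individual errors tend to cancel. The numbers below are made up purely for the simulation, not drawn from any study:

```python
import random
import statistics

random.seed(42)
true_value = 800  # say, the number of jellybeans in a jar

# Each individual guess is noisy but unbiased around the truth.
guesses = [true_value + random.gauss(0, 150) for _ in range(100)]

# The crowd's collective estimate is simply the mean of all guesses.
crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - true_value)
individual_errors = [abs(g - true_value) for g in guesses]

# Count how many individuals the aggregate estimate beats.
beaten = sum(err > crowd_error for err in individual_errors)
print(f"crowd error: {crowd_error:.1f}, beats {beaten}/100 individuals")
```

The catch, as the article goes on to note, is that this only works while the guesses stay independent; UNU’s swarming approach deliberately trades some of that independence for real-time social feedback.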

The Birds and the Bees

Swarm intelligence learns from events and systems occurring in nature, such as the formation of swarms by bees and flocks by birds. These groups collectively make better choices than their single members. Dr. Rosenberg believes there is “a vast amount of intelligence in groups” that, in turn, generates “intelligence that amplifies their natural abilities”. He has transposed the rules of these natural systems onto the predictive abilities of humans in groups.

He cites honeybees as being “remarkable” decision-makers in their environment. On a yearly basis, they divide their colonies and “send out scout bees” by the hundreds for many miles around to check out locations for a new home. When these scouts return to the main hive, they perform a “waggle dance” to “convey information to the group”, which then decides on the intended location. For the entire colony, this is a “complex decision” composed of “conflicting variables”. Bee colonies choose the optimal location more than 80% of the time.

Facilitating Human Bee-hive-ior

However, humans display a much lower accuracy rate when making their own predictions. Most commonly, polling and voting are used. Dr. Rosenberg finds such methods “primitive” and often incorrect, as they tend to be “polarizing”. In effect, they make it difficult to assess the “best answer for the group”.

UNU is his firm’s attempt to help humans make the best decisions for an entire group. Users log onto it and respond to questions with a series of possible choices displayed. It was modeled upon such behavior occurring in nature among “bees, fish and birds”. This is distinguished from individuals just casting a single vote. Here are two videos of the system in action involving choosing the most competitive Republican presidential candidate and selecting the most beloved sidekick from Star Wars⁴. As groups of users make their selections on UNU and are influenced by the visible onscreen behavior of others, this movement is the online manifestation of the group’s swarming activity.
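That swarming dynamic can be caricatured in a few lines of code. This is emphatically not UNU’s algorithm, just a toy model, with invented numbers, in which agents are probabilistically pulled toward the currently most popular option, much as visible onscreen movement influences users:

```python
import random
from collections import Counter

random.seed(7)

# Each agent starts with a private preference; "B" has a slight edge.
agents = ["B"] * 40 + ["A"] * 35 + ["C"] * 25

def swarm_round(prefs, sway=0.3):
    """One round: each agent may drift toward the current leader."""
    tally = Counter(prefs)
    leader = max(tally, key=tally.get)
    out = []
    for p in prefs:
        # With probability `sway`, an agent is pulled toward the leader,
        # mimicking the visible group movement that influences individuals.
        out.append(leader if random.random() < sway else p)
    return out

for _ in range(10):
    agents = swarm_round(agents)

print(Counter(agents).most_common(1)[0][0])  # B
```

Even a small initial plurality snowballs into consensus under this feedback loop, which is both the power of swarming and, as critics of herding would note, its risk.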

Another instance of UNU’s effectiveness and accuracy involved 50 users trying to predict the winners of the Academy Awards. On an individual basis, they each averaged six out of 15 correct. The test swarm was able to get a significantly better nine out of the 15. Beyond movies, the implications may be further significant if applied in areas such as strategic business decision-making.

My Questions

  • Does UNU lend itself to being turned into a scalable mobile app for much larger groups of users on a multitude of predictions? If so, should users be able to develop their own questions and choices for the swarm to decide? Should all predictions posed be open to all users?
  • Might UNU find some sort of application in guiding the decision process of juries while they are resolving a series of factual issues?
  • Could UNU be used to supplement reviews for books, movies, music and other forms of entertainment? Perhaps some form of “UNU Score” or “UNU Rating”?

 


1.  One of the leading proponents and developers of AI for many decades was MIT Professor Marvin Minsky, who passed away on Sunday, January 24, 2016. Here is his obituary from the January 25, 2016 edition of The New York Times entitled Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88, by Glenn Rifkin.

2.  For an alternative report on whether the wisdom of the crowds appears to have little or no effect on the Super Bowl, one not involving UNU in any way, see an article in the January 28, 2016 edition of The New York Times entitled Super Bowl Challenges Wisdom of Crowds and Oddsmakers, by Victor Mather.

3.  For an outstanding and comprehensive treatment of this phenomenon, I highly recommend reading The Wisdom of Crowds, by James Surowiecki (Doubleday, 2004).

4.  I would really enjoy seeing a mash-up of these two demos to see how the group would swarm among the Star Wars sidekicks to select which one of these science fiction characters might have the best chance to win the 2016 election.

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

"Synapse", Image by Allan Ajifo

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything, and where would he put it all?¹ This often comes to mind whenever I have experienced the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the Web, almost everyone I know has had the same overwhelming experience whenever the amount of information they are inundated with every day seems increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

A recent advancement was covered in a fascinating report on Phys.org on December 15, 2015 entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff, describing the latest research into how this happens. I will summarize and annotate this, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet, no matter the font, or an individual, regardless of their clothing and grooming, is always recognizable. We can also identify objects even if our view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment throughout our lives.

A team of researchers at the Georgia Institute of Technology (Georgia Tech)² recently discovered that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision⁴”. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection, by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason humans can quickly make sense of a very complex and robust world is that, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests’”, which measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked whether they could identify them again using a much smaller segment of the image. This led to the first of their two principal discoveries: the test subjects required only 0.15% of the data to repeat their identifications.
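The core idea, random projection, is easy to sketch in code. The following is a minimal illustration and not the team’s actual algorithm or data: a 150-pixel “image” is compressed down to a few random linear combinations of its pixels, which approximately preserves the distances between images that categorization depends on.

```python
import random

def random_projection(vector, k, seed=0):
    """Compress a high-dimensional vector down to k random dimensions.

    Each output value is a random Gaussian mix of all the inputs.
    By the Johnson-Lindenstrauss lemma, distances between vectors
    are roughly preserved, which is why so little of the original
    data can still support reliable categorization.
    """
    rng = random.Random(seed)
    projected = []
    for _ in range(k):
        weights = [rng.gauss(0, 1) for _ in range(len(vector))]
        projected.append(sum(w * x for w, x in zip(weights, vector)))
    return projected

# A 150-pixel abstract "image" reduced to just 2 numbers (~1.3% of the data).
image = [1.0 if i % 7 == 0 else 0.0 for i in range(150)]
sketch = random_projection(image, k=2)
```

The same seed always produces the same projection, so two images compressed this way remain directly comparable.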

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics), to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, the two performed with nearly equal results. Moreover, these results proved that which “data will be the hardest to learn over time” can be predicted.
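To make the comparison concrete, here is a toy version of such a “simple neural network”: a single artificial neuron trained on two-number sketches of the kind a random projection produces. The data and learning rule below are invented for illustration; the paper’s network and inputs were of course different.

```python
import random

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """A single artificial neuron: one weight per input plus a bias,
    nudged toward the correct label after every mistake."""
    dims = len(samples[0])
    w = [0.0] * dims
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct, +1 or -1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Two invented "categories" of 2-number sketches, one clustered on the
# positive side and one on the negative side.
rng = random.Random(2)
samples = [[rng.uniform(0.5, 1.5), rng.uniform(0.5, 1.5)] for _ in range(20)]
samples += [[rng.uniform(-1.5, -0.5), rng.uniform(-1.5, -0.5)] for _ in range(20)]
labels = [1] * 20 + [0] * 20
w, b = train_perceptron(samples, labels)
accuracy = sum(predict(w, b, x) == y for x, y in zip(samples, labels)) / len(samples)
```

Even this bare-bones learner separates the two toy categories perfectly, which hints at why a modest network can keep pace with humans on such heavily compressed inputs.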

In testing their theory, the team prepared three different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) Next, they drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds, after which they were shown 16 of the random sketches. Dr. Vempala was “surprised by how close the performance was” of the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support that it might be a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augmented reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

Semantic Scholar and BigDIVA: Two New Advanced Search Platforms Launched for Scientists and Historians

“The Chemistry of Inversion”, Image by Raymond Bryson

As powerful, essential and ubiquitous as Google and its search engine peers are across the world right now, needs often arise in many fields and marketplaces for platforms that can perform much deeper and wider digital excavating. So it is that two new highly specialized search platforms have just come online, specifically engineered for scientists and historians. Each is structurally and functionally quite different from the other, but both are aimed at very specific professional user bases with advanced research needs.

These new systems provide uniquely enhanced levels of context, understanding and visualization with their results. We recently looked at a very similar development in the legal professions in an August 18, 2015 Subway Fold post entitled New Startup’s Legal Research App is Driven by Watson’s AI Technology.

Let’s have a look at both of these latest innovations and their implications. To introduce them, I will summarize and annotate two articles about their introductions, and then I will pose some additional questions of my own.

Semantic Scholar Searches for New Knowledge in Scientific Papers

First, the Allen Institute for Artificial Intelligence (AI2) has just launched its new system called Semantic Scholar, freely accessible on the web. This event was covered on NewScientist.com in a fascinating article entitled AI Tool Scours All the Science on the Web to Find New Knowledge on November 2, 2015 by Mark Harris.

Semantic Scholar is supported by artificial intelligence (AI)¹ technology. It is automated to “read, digest and categorise findings” from approximately two million scientific papers published annually. Its main objective is to assist researchers with generating new ideas and “to identify previously overlooked connections and information”. Because of the overwhelming volume of scientific papers published each year, which no individual scientist could possibly ever read, it offers an original architecture and a high-speed means to mine all of this content.

Oren Etzioni, the director of AI2, termed Semantic Scholar a “scientist’s apprentice” that assists researchers in evaluating developments in their fields. For example, a medical researcher could query it about drug interactions in a certain patient cohort having diabetes. Users can also pose their inquiries in natural language format.

Semantic Scholar operates by executing the following functions:

  • crawling the web in search of “publicly available scientific papers”
  • scanning them into its database
  • identifying citations and references that, in turn, are assessed to determine those that are the most “influential or controversial”
  • extracting “key phrases” appearing in similar papers, and
  • indexing “the datasets and methods” used
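As a rough sketch rather than Semantic Scholar’s actual implementation, the citation-assessment step above might look like this in miniature: papers cited most often by the rest of the corpus are ranked as the most influential.

```python
from collections import Counter

def rank_by_influence(papers):
    """Rank papers by how often the rest of the corpus cites them --
    a toy proxy for identifying the most 'influential' sources.

    `papers` maps each paper id to the list of ids it cites."""
    citations = Counter()
    for cited in papers.values():
        citations.update(cited)
    return sorted(papers, key=lambda p: citations[p], reverse=True)

# A hypothetical three-paper corpus: p1 and p2 both cite p3.
corpus = {
    "p1": ["p3"],
    "p2": ["p3", "p1"],
    "p3": [],
}
ranking = rank_by_influence(corpus)
```

A production system would of course weigh many more signals (venue, recency, context of each citation), but the counting backbone is the same.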

AI2 is not alone in these objectives; a number of similar initiatives are underway elsewhere.

Semantic Scholar will gradually be applied to other fields such as “biology, physics and the remaining hard sciences”.

BigDIVA Searches and Visualizes 1,500 Years of History

The second innovative search platform is called the Big Data Infrastructure Visualization Application (BigDIVA). The details about its development, operation and goals were covered in a most interesting report posted online on NC State News on October 12, 2015 entitled Online Tool Aims to Help Researchers Sift Through 15 Centuries of Data by Matt Shipman.

This is a joint project by the digital humanities scholars at NC State University and Texas A&M University. Its objective is to assist researchers in, among other fields, literature, religion, art and world history. This is done by increasing the speed and accuracy of searching through “hundreds of thousands of archives and articles” covering 450 A.D. to the present. BigDIVA was formally rolled out at NC State on October 16, 2015.

BigDIVA presents users with an entirely new visual interface, enabling them to search and review “historical documents, images of art and artifacts, and any scholarship associated” with them. Search results, organized by categories of digital resources, are displayed in infographic format⁴. The linked NC State News article includes a photo of this dynamic-looking interface.

This system is still undergoing beta testing and further refinement by its development team. Expansion of its resources on additional historical periods is expected to be an ongoing process. Current plans are to make this system available on a subscription basis to libraries and universities.

My Questions

  • Might the IBM Watson, Semantic Scholar, DARPA and BigDIVA development teams benefit from sharing design and technical resources? Would scientists, doctors, scholars and others benefit from multi-disciplinary teams working together on future upgrades and perhaps even new platforms and interface standards?
  • What other professional, academic, scientific, commercial, entertainment and governmental fields would benefit from these highly specialized search platforms?
  • Would Google, Bing, Yahoo and other commercial search engines benefit from participating with the developers in these projects?
  • Would proprietary enterprise search vendors likewise benefit from similar joint ventures with the types of teams described above?
  • What entrepreneurial opportunities might arise for vendors, developers, designers and consultants who could provide fuller insight and support for developing customized search platforms?

 


1.  These 11 Subway Fold posts cover various AI applications and developments.

2.  These seven Subway Fold posts cover a range of IBM Watson applications and markets.

3.  A new history of DARPA written by Annie Jacobsen was recently published entitled The Pentagon’s Brain (Little, Brown and Company, 2015).

4.  See this January 30, 2015 Subway Fold post entitled Timely Resources for Studying and Producing Infographics on this topic.

Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies

“Transparency and Colors”, Image by coniferconifer

All manner of software and hardware development projects strive diligently to take out every single bug that can be identified¹. However, a team of researchers currently working on a fascinating and potentially valuable project is doing everything possible to, at least figuratively, leave their bugs in.

This involves a team of Australian researchers who are working on modeling the vision of dragonflies. If they are successful, there could be some very helpful implications for applying their work to the advancement of bionic eyes and driverless cars.

When the design and operation of biological systems in nature are adapted to improve man-made technologies as they are being here, such developments are often referred to as being biomimetic².

The very interesting story of this, well, visionary work was reported in an article in the October 6, 2015 edition of The Wall Street Journal entitled Scientists Tap Dragonfly Vision to Build a Better Bionic Eye by Rachel Pannett. I will summarize and annotate it, and pose some bug-free questions of my own. Let’s have a look and see what all of this organic and electronic buzz is really about.

Bionic Eyes

A research team from the University of Adelaide has recently developed this system modeled upon a dragonfly’s vision. It is built upon a foundation that also uses artificial intelligence (AI)³. Their findings appeared in an article entitled Properties of Neuronal Facilitation that Improve Target Tracking in Natural Pursuit Simulations that was published in the June 6, 2015 edition of The Royal Society Interface (access credentials required). The authors include Zahra M. Bagheri, Steven D. Wiederman, Benjamin S. Cazzolato, Steven Grainger, and David C. O’Carroll. The funding grant for their project was provided by the Australian Research Council.

While the vision of dragonflies “cannot distinguish details and shapes of objects” as well as humans can, it does possess a “wide field of vision and ability to detect fast movements”. Thus, they can readily track targets even within an insect swarm.

The researchers, including Dr. Steven Wiederman, the leader of the University of Adelaide team, believe their work could be helpful to development work on bionic eyes. These devices consist of an artificial implant placed in a person’s retina that, in turn, is connected to a video camera. What a visually impaired person “sees” while wearing this system is converted into electrical signals that are communicated to the brain. Adding a software model of the dragonfly’s 360-degree field of vision will give the people using it the capability to more readily detect, among other things, “when someone unexpectedly veers into their path”.

Another member of the research team and one of the co-authors of their research paper, a Ph.D. candidate named Zahra Bagheri, said that dragonflies are able to fly so quickly and so accurately “despite their visual acuity and a tiny brain around the size of a grain of rice”⁴. In other areas of advanced robotics development, this type of “sight and dexterity” needed to avoid humans and objects has proven quite challenging to express in computer code.

One commercial company working on bionic eye systems is Second Sight Medical Products Inc., located in California. They have received US regulatory approval to sell their retinal prosthesis.

Driverless Cars

In the next stage of their work, the research team is currently studying “the motion-detecting neurons in insect optic lobes” in an effort to build a system that can predict and react to moving objects. They believe this might one day be integrated into driverless cars in order to avoid pedestrians and other cars⁵. Dr. Wiederman foresees the possible commercialization of their work within the next five to ten years.
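A very crude stand-in for that kind of motion prediction is simple linear extrapolation: assume roughly constant velocity between the last two observations and project the next position. The real insect-inspired models are far more sophisticated, but the goal of anticipating where a moving object will be is the same.

```python
def predict_position(track, steps_ahead=1):
    """Extrapolate a moving object's next (x, y) position from its
    last two observed points, assuming roughly constant velocity."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per time step
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)

# A pedestrian observed moving steadily right and slightly upward:
observed = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
next_pos = predict_position(observed)  # (3.0, 1.5)
```

A vehicle using even this naive predictor could begin braking for where the pedestrian will be, rather than where they were last seen.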

However, obstacles remain in getting this to market. Any integration into a test robot would require a “processor big enough to simulate a biological brain”. The research team believes that it can be scaled down since the “insect-based algorithms are much more efficient”.

Ms. Bagheri noted that “detecting and tracking small objects against complex backgrounds” is quite a technical challenge. As an example, she cited a baseball outfielder who has only seconds to spot, track and predict where a hit ball will land in the field, in the midst of a colorful stadium and enthusiastic fans⁶.

My Questions

  • As suggested in the article, might this vision model be applicable in sports to enhancing live broadcasts of games, helping teams review their game day videos afterwards to improve their overall play, and assisting individual players in analyzing how they react during key plays?
  • Is the vision model applicable in other potential safety systems for mass transportation such as planes, trains, boats and bicycles?
  • Could this vision model be added to enhance the accuracy, resolution and interactivity of virtual reality and augmented reality systems? (These 11 Subway Fold posts appearing in the category of Virtual and Augmented Reality cover a range of interesting developments in this field.)

 


1.  See this Wikipedia page for a summary of the extraordinary career of Admiral Grace Hopper. Among her many technological accomplishments, she was a pioneer in developing modern computer programming. She was also the originator of the term computer “bug”.

2.  For an earlier example of this, see the August 18, 2014 Subway Fold post entitled IBM’s New TrueNorth Chip Mimics Brain Functions.

3.  The Subway Fold category of Smart Systems contains 10 posts on AI.

4.  Speaking of rice-sized technology, see also the April 14, 2015 Subway Fold post entitled Smart Dust: Specialized Computers Fabricated to Be Smaller Than a Single Grain of Rice.

5.  While the University of Adelaide research team is not working with Google, the company has nonetheless been a leader in the development of autonomous cars with its Self-Driving Car Project.

6.  New York’s beloved @Mets might also prove to be worthwhile subjects to model because of their stellar play in the 2015 playoffs. Let’s vanquish those dastardly LA Dodgers on Thursday night. GO METS!

New Startup’s Legal Research App is Driven by Watson’s AI Technology

[New York] “Supreme Court, 60 Centre Street, Lower Manhattan”, Image by Jeffrey Zeldman

May 9, 2016: An update on this post appears below.


Casey Stengel had a very long, productive and colorful career in professional baseball as a player for five teams and later as a manager for four teams. He was also consistently quotable (although not to the extraordinary extent of his Yankee teammate Yogi Berra). Among the many things Casey said was his frequent use of the imperative “You could look it up”¹.

Transposing this gem of wisdom from baseball to law practice², looking something up has recently taken on an entirely new meaning. According to a fascinating article posted on Wired.com on August 8, 2015 entitled Your Lawyer May Soon Ask for This AI-Powered App for Legal Help by Davey Alba, a startup called ROSS Intelligence has created a unique new system for legal research. I will summarize and annotate it, and pose a few questions of my own.

One of the founders of ROSS, Jimoh Ovbiagele (@findingjimoh), was influenced by his childhood and adolescent experiences to pursue studying either law or computer science. He chose the latter and eventually ended up working on an artificial intelligence (AI) project at the University of Toronto. It occurred to him then that machine learning (a branch of AI), would be a helpful means to assist lawyers with their daily research requirements.

Mr. Ovbiagele joined with a group of co-founders from diverse fields including “law to computers to neuroscience” in order to launch ROSS Intelligence. The legal research app they have created is built upon the AI capabilities of IBM’s Watson as well as voice recognition. Since June, it has been tested in “small-scale pilot programs inside law firms”.

AI, machine learning, and IBM’s Watson technology have been variously taken up in these nine Subway Fold posts. Among them, the September 1, 2014 post entitled Possible Futures for Artificial Intelligence in Law Practice covered the possible legal applications of IBM’s Watson (prior to the advent of ROSS), and the technology of a startup called Viv Labs.

Essentially, the new ROSS app enables users to ask legal research questions in natural language. (See also the July 31, 2015 Subway Fold post entitled Watson, is That You? Yes, and I’ve Just Demo-ed My Analytics Skills at IBM’s New York Office.) Similar in operation to Apple’s Siri, when a question is verbally posed to ROSS, it searches through its database of legal documents to provide an answer along with the source documents used to derive it. The reply is also assessed and assigned a “confidence rating”. The app further prompts the user to evaluate the response’s accuracy with an onscreen “thumbs up” or “thumbs down”. The latter will prompt ROSS to produce another result.

Andrew Arruda (@AndrewArruda), another co-founder of ROSS, described the development process as beginning with a “blank slate” version of Watson into which they uploaded “thousands of pages of legal documents”, and trained their system to make use of Watson’s “question-and-answer APIs”³. Next, they added machine learning capabilities they called “LegalRank” (a reference to Google’s PageRank algorithm), which, among other things, designates preferential results depending upon the supporting documents’ numbers of citations and the deciding courts’ jurisdictions.
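As a purely hypothetical sketch of the kind of ranking the article describes, a score might combine a source’s citation count with a bonus when the deciding court sits in the researcher’s jurisdiction. The field names and weights below are invented for illustration, not ROSS’s actual scoring.

```python
def legalrank_score(doc, jurisdiction, citation_weight=1.0, court_bonus=5.0):
    """Hypothetical LegalRank-style score: heavily cited sources rank
    higher, with a bonus when the deciding court matches the user's
    jurisdiction. Weights and field names are illustrative only."""
    score = citation_weight * doc["citations"]
    if doc["court_jurisdiction"] == jurisdiction:
        score += court_bonus
    return score

# Invented search results for a New York researcher:
results = [
    {"name": "Case A", "citations": 12, "court_jurisdiction": "NY"},
    {"name": "Case B", "citations": 40, "court_jurisdiction": "CA"},
    {"name": "Case C", "citations": 38, "court_jurisdiction": "NY"},
]
ranked = sorted(results, key=lambda d: legalrank_score(d, "NY"), reverse=True)
```

Note how the in-jurisdiction bonus lets the slightly less-cited Case C edge out Case B, which is exactly the kind of trade-off such a ranking has to balance.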

ROSS is currently concentrating on bankruptcy and insolvency issues. Mr. Ovbiagele and Mr. Arruda are sanguine about the possibilities of adding other practice areas to its capabilities. Furthermore, they believe that this would meaningfully reduce the $9.6 billion annually spent on legal research, some of which is presently being outsourced to other countries.

In another recent and unprecedented development, the global law firm Dentons has formed its own incubator for legal technology startups called NextLaw Labs. According to this August 7, 2015 news release on Dentons’ website, the first company they have signed up for their portfolio is ROSS Intelligence.

Although it might be too early to exclaim “You could look it up” at this point, my own questions are as follows:

  • What pricing model(s) will ROSS use to determine the cost structure of their service?
  • Will ROSS consider making its app available to public interest attorneys and public defenders who might otherwise not have the resources to pay for access fees?
  • Will ROSS consider making their service available to the local, state and federal courts?
  • Should ROSS make their service available to law schools or might this somehow impair their traditional teaching of the fundamentals of legal research?
  • Will ROSS consider making their service available to non-lawyers in order to assist them in representing themselves on a pro se basis?
  • In addition to ROSS, what other entrepreneurial opportunities exist for other legal startups to deploy Watson technology?

Finally, for an excellent roundup of five recent articles and blog posts about the prospects of Watson for law practice, I highly recommend a click-through to read Five Solid Links to Get Smart on What Watson Means for Legal, by Frank Strong, posted on The Business of Law Blog on August 11, 2015.


May 9, 2016 Update:  The global law firm of Baker & Hostetler, headquartered in Cleveland, Ohio, has become the first US AmLaw 100 firm to announce that it has licensed ROSS Intelligence’s AI product for its bankruptcy practice. The full details were covered in an article posted on May 6, 2016 entitled AI Pioneer ROSS Intelligence Lands Its First Big Law Clients by Susan Beck, on Law.com.

Some follow up questions:

  • Will other large law firms, as well as medium and smaller firms, and in-house corporate departments soon be following this lead?
  • Will they instead wait and see whether this produces tangible results for attorneys and their clients?
  • If so, what would these results look like in terms of the quality of legal services rendered, legal business development, client satisfaction, and/or the incentives for other legal startups to move into the legal AI space?

1.  This was also the title of one of his many biographies, written by Maury Allen and published by Times Books in 1979.

2.  For the best of both worlds, see the legendary law review article entitled The Common Law Origins of the Infield Fly Rule, by William S. Stevens, 123 U. Pa. L. Rev. 1474 (1975).

3.  For more details about APIs, see the July 2, 2015 Subway Fold post entitled The Need for Specialized Application Programming Interfaces for Human Genomics R&D Initiatives.