Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies

"Transparency and Colors", Image by coniferconifer

“Transparency and Colors”, Image by coniferconifer

All manner of software and hardware development projects diligently strive to eliminate every single bug that can be identified¹. However, a team of researchers currently working on a fascinating and potentially valuable project is doing everything possible to, at least figuratively, leave their bugs in.

This involves a team of Australian researchers who are modeling the vision of dragonflies. If they are successful, their work could have some very helpful implications for the advancement of bionic eyes and driverless cars.

When the design and operation of biological systems in nature are adapted to improve man-made technologies, as is being done here, such developments are often referred to as biomimetic².

The very interesting story of this, well, visionary work was reported in an article in the October 6, 2015 edition of The Wall Street Journal entitled Scientists Tap Dragonfly Vision to Build a Better Bionic Eye by Rachel Pannett. I will summarize and annotate it, and pose some bug-free questions of my own. Let’s have a look and see what all of this organic and electronic buzz is really about.

Bionic Eyes

A research team from the University of Adelaide has recently developed a vision system modeled upon a dragonfly’s. It is built upon a foundation that also uses artificial intelligence (AI)³. Their findings appeared in an article entitled Properties of Neuronal Facilitation that Improve Target Tracking in Natural Pursuit Simulations, published in the June 6, 2015 edition of the Journal of the Royal Society Interface (access credentials required). The authors include Zahra M. Bagheri, Steven D. Wiederman, Benjamin S. Cazzolato, Steven Grainger, and David C. O’Carroll. The funding grant for their project was provided by the Australian Research Council.

While the vision of dragonflies “cannot distinguish details and shapes of objects” as well as humans can, it does possess a “wide field of vision and ability to detect fast movements”. Thus, they can readily track targets even within an insect swarm.

The researchers, including Dr. Steven Wiederman, the leader of the University of Adelaide team, believe their work could be helpful to the development of bionic eyes. These devices consist of an artificial implant placed in a person’s retina that, in turn, is connected to a video camera. What a visually impaired person “sees” while wearing this system is converted into electrical signals that are communicated to the brain. Adding a software model of the dragonfly’s 360-degree field of vision would give the people using it the capability to more readily detect, among other things, “when someone unexpectedly veers into their path”.
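The “neuronal facilitation” in the team’s paper boils down, roughly, to a simple idea: the detector’s sensitivity is briefly boosted around the spot where a target was last seen, which helps keep a small, fast-moving object locked in against a cluttered background. The Adelaide model itself is not reproduced here; the following is only a minimal, hypothetical sketch of that general mechanism, with an invented function name, parameters and test data.

```python
# A minimal, hypothetical sketch (not the Adelaide team's model) of "facilitation":
# locally boosting detector gain around a target's last known position so a small,
# fast-moving object stays easier to track against background clutter.
import numpy as np

def facilitation_track(frames, boost=3.0, radius=4, decay=0.7):
    """Track the brightest small target across a sequence of 2-D frames.

    frames : list of 2-D numpy arrays (detector responses per frame)
    boost  : extra gain applied near the previously detected position
    radius : size (in pixels) of the facilitated neighborhood
    decay  : how quickly the facilitation map fades between frames
    """
    h, w = frames[0].shape
    facilitation = np.ones((h, w))          # uniform gain at the start
    positions = []

    for frame in frames:
        response = frame * facilitation     # facilitated detector response
        y, x = np.unravel_index(np.argmax(response), response.shape)
        positions.append((int(y), int(x)))

        # Decay the old map, then boost a small window around the new detection
        facilitation = 1.0 + (facilitation - 1.0) * decay
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        facilitation[y0:y1, x0:x1] += boost

    return positions

# Example: a dim target drifting across a noisy 64x64 field
rng = np.random.default_rng(0)
frames = []
for t in range(20):
    f = rng.normal(0.0, 0.2, (64, 64))
    f[10 + t, 5 + 2 * t] += 1.0             # small moving target
    frames.append(f)

print(facilitation_track(frames)[:5])
```

Even in this toy form, the boosted neighborhood is what lets the tracker hold onto a weak target that would otherwise be lost in the noise, which is the behavior the researchers want to carry over into prosthetic vision.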

Another member of the research team and one of the co-authors of their research paper, a Ph.D. candidate named Zahra Bagheri, said that dragonflies are able to fly so quickly and be so accurate “despite their visual acuity and a tiny brain around the size of a grain of rice”⁴. In other areas of advanced robotics development, this type of “sight and dexterity” needed to avoid humans and objects has proven quite challenging to express in computer code.

One commercial company working on bionic eye systems is Second Sight Medical Products Inc., located in California. They have received US regulatory approval to sell their retinal prosthesis.

Driverless Cars

In the next stage of their work, the research team is currently studying “the motion-detecting neurons in insect optic lobes” in an effort to build a system that can predict and react to moving objects. They believe this might one day be integrated into driverless cars in order to avoid pedestrians and other cars⁵. Dr. Wiederman foresees the possible commercialization of their work within the next five to ten years.
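The team’s neuron-based predictor is not public, so here is only a toy, constant-velocity stand-in for the basic task described above: extrapolating where a moving object, such as a pedestrian, will be a fraction of a second ahead so a vehicle has time to react. All names and numbers are invented for illustration.

```python
# A toy constant-velocity predictor (not the insect-based algorithm itself):
# extrapolate an object's next position from its last two observations.
def predict_position(track, dt_ahead=0.5):
    """Extrapolate a future position from the last two (t, x, y) observations."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return x1 + vx * dt_ahead, y1 + vy * dt_ahead

# A pedestrian observed at two instants (seconds, metres)
track = [(0.0, 4.0, 12.0), (0.2, 4.3, 11.4)]
print(predict_position(track))   # where they are likely to be 0.5 s from now
```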

Obstacles remain, however, in getting this to market. Any integration into a test robot would require a “processor big enough to simulate a biological brain”. The research team believes that it can be scaled down since the “insect-based algorithms are much more efficient”.

Ms. Bagheri noted that “detecting and tracking small objects against complex backgrounds” is quite a technical challenge. As an example, she gave a baseball outfielder who has only seconds to spot, track and predict where a hit ball will fall in the field, all in the midst of a colorful stadium and enthusiastic fans⁶.

My Questions

  • As suggested in the article, might this vision model be applicable in sports to enhancing live broadcasts of games, helping teams review their game-day videos afterwards to improve their overall play, and assisting individual players in analyzing how they react during key plays?
  • Is the vision model applicable in other potential safety systems for mass transportation such as planes, trains, boats and bicycles?
  • Could this vision model be added to enhance the accuracy, resolution and interactivity of virtual reality and augmented reality systems? (These 11 Subway Fold posts appearing in the category of Virtual and Augmented Reality cover a range of interesting developments in this field.)

 


1.  See this Wikipedia page for a summary of the extraordinary career of Admiral Grace Hopper. Among her many technological accomplishments, she was a pioneer in developing modern computer programming. She also helped popularize the use of the term “bug” in computing.

2.  For an earlier example of this, see the August 18, 2014 Subway Fold post entitled IBM’s New TrueNorth Chip Mimics Brain Functions.

3.  The Subway Fold category of Smart Systems contains 10 posts on AI.

4.  Speaking of rice-sized technology, see also the April 14, 2015 Subway Fold post entitled Smart Dust: Specialized Computers Fabricated to Be Smaller Than a Single Grain of Rice.

5.  While the University of Adelaide research team is not working with Google, the company has nonetheless been a leader in the development of autonomous cars with its Self-Driving Car Project.

6.  New York’s beloved @Mets might also prove to be worthwhile subjects to model because of their stellar play in the 2015 playoffs. Let’s vanquish those dastardly LA Dodgers on Thursday night. GO METS!

New Startup’s Legal Research App is Driven by Watson’s AI Technology

"Supreme Court, 60 Centre Street, Lower Manhattan", Image by Jeffrey Zeldman

[New York] “Supreme Court, 60 Centre Street, Lower Manhattan”, Image by Jeffrey Zeldman

May 9, 2016: An update on this post appears below.


Casey Stengel had a very long, productive and colorful career in professional baseball as a player for five teams and later as a manager for four teams. He was also consistently quotable (although not to the extraordinary extent of his Yankee teammate Yogi Berra). Among the many things Casey said was his frequent use of the imperative “You could look it up”¹.

Transposing this gem of wisdom from baseball to law practice², looking something up has recently taken on an entirely new meaning. According to a fascinating article posted on Wired.com on August 8, 2015 entitled Your Lawyer May Soon Ask for This AI-Powered App for Legal Help by Davey Alba, a startup called ROSS Intelligence has created a unique new system for legal research. I will summarize, annotate and pose a few questions of my own.

One of the founders of ROSS, Jimoh Ovbiagele (@findingjimoh), was influenced by his childhood and adolescent experiences to pursue studying either law or computer science. He chose the latter and eventually ended up working on an artificial intelligence (AI) project at the University of Toronto. It occurred to him then that machine learning (a branch of AI) would be a helpful means to assist lawyers with their daily research requirements.

Mr. Ovbiagele joined with a group of co-founders from diverse fields including “law to computers to neuroscience” in order to launch ROSS Intelligence. The legal research app they have created is built upon the AI capabilities of IBM’s Watson as well as voice recognition. Since June, it has been tested in “small-scale pilot programs inside law firms”.

AI, machine learning, and IBM’s Watson technology have been variously taken up in these nine Subway Fold posts. Among them, the September 1, 2014 post entitled Possible Futures for Artificial Intelligence in Law Practice covered the possible legal applications of IBM’s Watson (prior to the advent of ROSS), and the technology of a startup called Viv Labs.

Essentially, the new ROSS app enables users to ask legal research questions in natural language. (See also the July 31, 2015 Subway Fold post entitled Watson, is That You? Yes, and I’ve Just Demo-ed My Analytics Skills at IBM’s New York Office.) Similar in operation to Apple’s Siri, when a question is verbally posed to ROSS, it searches through its database of legal documents to provide an answer along with the source documents used to derive it. The reply is also assessed and assigned a “confidence rating”. The app further prompts the user to evaluate the response’s accuracy with an onscreen “thumbs up” or “thumbs down”. The latter will prompt ROSS to produce another result.

Andrew Arruda (@AndrewArruda), another co-founder of ROSS, described the development process as beginning with a “blank slate” version of Watson into which they uploaded “thousands of pages of legal documents”, and then trained their system to make use of Watson’s “question-and-answer APIs”³. Next, they added machine learning capabilities they called “LegalRank” (a reference to Google’s PageRank algorithm), which, among other things, designates preferential results depending upon the supporting documents’ numbers of citations and the deciding courts’ jurisdiction.
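ROSS has not published how LegalRank actually works, so the following is purely a toy illustration of the kind of preferential ranking described above: candidate documents are scored by how often they are cited and whether the deciding court sits in the researcher’s jurisdiction, and a naive confidence figure is attached to each result. The fields, weights and confidence heuristic are all assumptions made for the sketch.

```python
# A toy, hypothetical illustration of citation- and jurisdiction-weighted ranking;
# ROSS's actual "LegalRank" is proprietary, and these fields, weights and the
# confidence heuristic are invented purely for illustration.
import math

def legal_rank(candidates, user_jurisdiction, w_cite=1.0, w_juris=2.0):
    """Rank candidate documents by relevance, citation count and court jurisdiction.

    candidates : list of dicts with keys
        'title', 'citations' (int), 'jurisdiction' (str), 'relevance' (0..1)
    Returns the candidates sorted best-first, each with a rough confidence score.
    """
    ranked = []
    for doc in candidates:
        score = (doc["relevance"]
                 + w_cite * math.log1p(doc["citations"])
                 + (w_juris if doc["jurisdiction"] == user_jurisdiction else 0.0))
        ranked.append({**doc, "score": score})
    ranked.sort(key=lambda d: d["score"], reverse=True)

    total = sum(d["score"] for d in ranked) or 1.0
    for d in ranked:
        d["confidence"] = round(d["score"] / total, 2)   # naive confidence rating
    return ranked

results = legal_rank(
    [
        {"title": "In re Alpha", "citations": 120, "jurisdiction": "S.D.N.Y.", "relevance": 0.8},
        {"title": "In re Beta", "citations": 15, "jurisdiction": "N.D. Cal.", "relevance": 0.9},
    ],
    user_jurisdiction="S.D.N.Y.",
)
print(results[0]["title"], results[0]["confidence"])
```

The “thumbs down” feedback described above would, in a real system, feed back into the weights; here it is left out to keep the sketch minimal.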

ROSS is currently concentrating on bankruptcy and insolvency issues. Mr. Ovbiagele and Mr. Arruda are sanguine about the possibilities of adding other practice areas to its capabilities. Furthermore, they believe that this would meaningfully reduce the $9.6 billion annually spent on legal research, some of which is presently being outsourced to other countries.

In another recent and unprecedented development, the global law firm Dentons has formed its own incubator for legal technology startups called NextLaw Labs. According to this August 7, 2015 news release on Dentons’ website, the first company they have signed up for their portfolio is ROSS Intelligence.

Although it might be too early to exclaim “You could look it up” at this point, my own questions are as follows:

  • What pricing model(s) will ROSS use to determine the cost structure of their service?
  • Will ROSS consider making its app available to public interest attorneys and public defenders who might otherwise not have the resources to pay for access fees?
  • Will ROSS consider making their service available to the local, state and federal courts?
  • Should ROSS make their service available to law schools or might this somehow impair their traditional teaching of the fundamentals of legal research?
  • Will ROSS consider making their service available to non-lawyers in order to assist them in representing themselves on a pro se basis?
  • In addition to ROSS, what other entrepreneurial opportunities exist for other legal startups to deploy Watson technology?

Finally, for an excellent roundup of five recent articles and blog posts about the prospects of Watson for law practice, I highly recommend a click-through to read Five Solid Links to Get Smart on What Watson Means for Legal, by Frank Strong, posted on The Business of Law Blog on August 11, 2015.


May 9, 2016 Update:  The global law firm of Baker & Hostetler, headquartered in Cleveland, Ohio, has become the first US AmLaw 100 firm to announce that it has licensed ROSS Intelligence’s AI product for its bankruptcy practice. The full details on this were covered in an article posted on May 6, 2016 entitled AI Pioneer ROSS Intelligence Lands Its First Big Law Clients by Susan Beck, on Law.com.

Some follow up questions:

  • Will other large law firms, as well as medium and smaller firms, and in-house corporate departments soon be following this lead?
  • Will they instead wait and see whether this produces tangible results for attorneys and their clients?
  • If so, what would these results look like in terms of the quality of legal services rendered, legal business development, client satisfaction, and/or the incentives for other legal startups to move into the legal AI space?

1.  This was also the title of one of his many biographies, written by Maury Allen and published by Times Books in 1979.

2.  For the best of both worlds, see the legendary law review article entitled The Common Law Origins of the Infield Fly Rule, by William S. Stevens, 123 U. Pa. L. Rev. 1474 (1975).

3.  For more details about APIs, see the July 2, 2015 Subway Fold post entitled The Need for Specialized Application Programming Interfaces for Human Genomics R&D Initiatives.

Watson, is That You? Yes, and I’ve Just Demo-ed My Analytics Skills at IBM’s New York Office


My photo of the entrance to IBM’s office at 590 Madison Avenue in New York, taken on July 29, 2015.

I don’t know if my heart can take this much excitement. Yesterday morning, on July 29, 2015, I attended a very compelling presentation and demo of IBM’s Watson technology. (This AI-driven platform has been previously covered in these five Subway Fold posts.) Just the night before, I saw a demo of some ultra-cool new augmented reality systems.

These experiences combined to make me think of the evocative line from Supernaut by Black Sabbath with Ozzy belting out “I’ve seen the future and I’ve left it behind”. (Incidentally, this prehistoric metal classic also has, IMHO, one of the most infectious guitar riffs with near warp speed shredding ever recorded.)

Yesterday’s demo of Watson Analytics, one key component among several on the platform, was held at IBM’s office in the heart of midtown Manhattan at 590 Madison Avenue and 57th Street. The company very graciously put this on for free. All three IBM employees who spoke were outstanding in their mastery of the technology, enthusiasm for its capabilities, and informative Q&A interactions with the audience. Massive kudos to everyone involved at the company in making this happen. Thanks, too, to all of the attendees who asked such excellent questions.

Here is my summary of the event:

Part 1: What is Watson Analytics?

The first two speakers began with a fundamental truth about all organizations today: They have significant quantities of data that are driving all operations. However, a bottleneck often occurs when business users understand this but do not have the technical skills to fully leverage it while, correspondingly, IT workers do not always understand the business context of the data. As a result, business users have avenues they can explore but not the best or most timely means to do so.

This is where Watson can be introduced, because it can make these business users self-sufficient with an accessible, extensible and easier-to-use analytics platform. It is, as one of the speakers said, “self-service analytics in the cloud”. Thus, Watson’s constituents can be seen as follows:

  • “What” is how to discover and define business problems.
  • “Why” is to understand the existence and nature of these problems.
  • “How” is to share this process in order to effect change.

However, Watson is specifically not intended to be a replacement for IT in any way.

Also, one of Watson’s key capabilities is enabling users to pursue their questions by using a natural language dialog. This involves querying Watson with questions posed in ordinary spoken terms.

Part 2: A Real World Demo Using Airline Customer Data

Taken directly from the world of commerce, the IBM speakers presented a demo of Watson Analytics’ capabilities by using a hypothetical situation in the airline industry. This involved a business analyst in the marketing department of an airline who was given a compilation of market data prepared by a third-party vendor. The business analyst was then tasked by his manager with researching and planning how to reduce customer churn.

Next, by enlisting Watson Analytics for this project, the two central issues became how the data could be:

  • Better understood, leveraged and applied to increase customers’ positive opinions while simultaneously decreasing defections to the airline’s competitors.
  • Comprehensively modeled in order to understand the elements of the customer base’s satisfaction, or lack thereof, with the airline’s services.

The speakers then put Watson Analytics through its paces up on large screens for the audience to observe and ask questions. The goal of this was to demonstrate how the business analyst could query Watson Analytics and, in turn, the system would provide alternative paths to explore the data in search of viable solutions.

Included among the variables that were dexterously tested and spun into enlightening interactive visualizations were:

  • Satisfaction levels at other peer airlines and at the hypothetical Watson customer airline
  • Why customers are, and are not, satisfied with their travel experience
  • Airline “status” segments such as “platinum” level flyers who pay a premium for additional select services
  • Types of travel including for business and vacation
  • Other customer demographic points

The results of this exercise, as they appeared onscreen, showed how Watson could, with its unique architecture and tool set (a minimal illustrative sketch follows the list below):

  • Generate “guided suggestions” using natural language dialogs
  • Identify and test all manner of connections among the population of data
  • Use predictive analytics to make business forecasts¹
  • Calculate a “data quality score” to assess the quality of the data upon which business decisions are based
  • Map out a wide variety of data dashboards and reports to view and continually test the data in an effort to “tell a story”
  • Integrate an extensible set of analytical and graphics tools to sift through large data sets from relevant Twitter streams²
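Watson Analytics is a packaged cloud product, so its internals are not shown here; the sketch below only illustrates the analyst’s underlying task that such a tool automates, namely fitting a simple churn model to invented customer attributes and computing a crude data quality score. The column names, weights and data are all hypothetical.

```python
# A minimal, hypothetical sketch of the analytical task Watson Analytics automates in
# the airline demo: score data quality and fit a churn model. All data is invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "satisfaction": rng.uniform(1, 5, n),                    # survey score, 1-5
    "status": rng.choice(["platinum", "standard"], n),
    "travel_type": rng.choice(["business", "vacation"], n),
})
# Hypothetical ground truth: unhappy standard-tier flyers are likelier to churn
p = 1 / (1 + np.exp(2.5 * (df["satisfaction"] - 3) + (df["status"] == "platinum") * 1.5))
df["churned"] = rng.random(n) < p

# Crude "data quality score": share of non-missing cells (real tools weigh far more)
quality_score = 100 * df.notna().to_numpy().mean()

X = pd.get_dummies(df[["satisfaction", "status", "travel_type"]], drop_first=True)
model = LogisticRegression(max_iter=1000).fit(X, df["churned"])

print(f"data quality score: {quality_score:.1f}")
print("churn drivers:", dict(zip(X.columns, model.coef_[0].round(2))))
```

The fitted coefficients stand in for the “guided suggestions” an analyst would see: a strongly negative satisfaction coefficient, for instance, points to dissatisfaction as the main churn driver.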

Part 3: The Development Roadmap

The third and final IBM speaker outlined the following paths for Watson Analytics that are currently in beta stage development:

  • User engagement developers are working on an updated visual engine, increased connectivity and capabilities for mobile devices, and social media commentary.
  • Collaboration developers are working on accommodating work groups and administrators, and dashboards that can be filtered and distributed.
  • Data connector developers are working on new data linkages, improving the quality and shape of connections, and increasing the degrees of confidence in predictions. For example, a connection to weather data is underway that would be very helpful to the airline (among other industries), in the above hypothetical.
  • New analytics developers are working on new functionality for business forecasting, time series analyses, optimization, and social media analytics.

Everyone in the audience, judging by the numerous informal conversations that quickly formed in the follow-up networking session, left with much to consider about the potential applications of this technology.


1.  Please see these six Subway Fold posts covering predictive analytics in other markets.

2.  Please see these ten Subway Fold posts for a variety of other applications of Twitter analytics.

 

How Robots and Computer Algorithms are Challenging Jobs and the Economy

"p8nderInG exIstence", Image by JD Hancock

“p8nderInG exIstence”, Image by JD Hancock

A Silicon Valley entrepreneur named Martin Ford (@MFordFuture) has written a very timely new book entitled Rise of the Robots: Technology and the Threat of a Jobless Future (Basic Books, 2015), which is currently receiving much attention in the media. The depth and significance of the critical issues it raises are responsible for this wide-beam spotlight.*

On May 27, 2015 the author was interviewed on The Brian Lehrer Show on radio station WNYC in New York. The result is available as a truly captivating 30-minute podcast entitled When Will Robots Take Your Job?  I highly recommend listening to this in its entirety. I will sum up, annotate and add some questions of my own to it.

The show’s host, Brian Lehrer, expertly guided Mr. Ford through the key complexities and subtleties of the thesis of his provocative new book. First, both now and increasingly in the future, robots and AI algorithms are taking on increasingly difficult tasks and displacing human workers. The more repetitive and routine a job’s tasks, the more likely it is that machines will replace the human workers performing them. This will not occur in just one sector, but rather “across the board” in all areas of the marketplace. For example, IBM’s Watson technology can be accessed using natural language which, in the future, might result in humans no longer being able to recognize its responses as coming from a machine.

Mr. Ford believes we are moving towards an economic model where productivity is increasing but jobs and income are decreasing. He asserts that solving this dilemma will be critical. Consequently, his second key point was the challenge of detaching work from income. He is proposing the establishment of some form of system where income is guaranteed. He believes this would still support Capitalism and would “produce plenty of income that could be taxed”. No nation is yet moving in this direction, but he thinks that Europe might be more amenable to it in the future.

He further believes that the US will be most vulnerable to the displacement of workers because it leads the world in the use of technology but “has no safety net” for those who will be displaced by this phenomenon. (For a more positive perspective on this, see the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Brian Lehrer asked his listeners to go to a specific page on the site of the regular podcast called Planet Money on National Public Radio. (“NPR” is the network of publicly supported radio stations that includes WNYC). This page entitled Will Your Job be Done by a Machine? displays a searchable database of job titles and the corresponding chance that each will be replaced by automation. Some examples that were discussed included:

  • Real estate agents with an 86.4% chance
  • Financial services workers with a 23% chance
  • Software developers with a 12.8% chance

Then the following six listeners called in to speak with Mr. Ford:

  • Caller 1 asked about finding a better way to get income to the population beyond the job market. This was squarely on point with Mr. Ford’s second key point about decoupling income and jobs. He was not advocating for somehow trying to stop technological progress. However, he reiterated how machines are “becoming autonomous workers, no longer just tools”.
  • Caller 2 asked whether Mr. Ford had seen a YouTube video entitled Humans Need Not Apply. Mr. Ford had seen it and recommended it. The caller said that the most common reply he has heard to this video (which tracks very closely with many of Mr. Ford’s themes) was, wrongly in his opinion, that “people will do something else”. Mr. Ford replied that people must find other things that they can get paid to do. The caller also said that machines had made it much easier and more economical for him to compose and record his own music.
  • Caller 3 raised the topic of automation in the medical profession. Specifically, whether IBM’s Watson could one day soon replace doctors. Mr. Ford believes that Watson will have an increasing effect here, particularly in fields such as radiology. However, it will have a lesser impact in those specialties where doctors and patients need to interact more with each other. (See also these three recent Subway Fold posts on the applications of Watson to TED Talks, business apps and the legal profession.)
  • Caller 4 posited that only humans can conceive ideas and be original. He asked how computers can identify patterns for which they have not been programmed. He cited the example of the accidental discovery of penicillin. Mr. Ford replied that machines will not replace scientists but they can replace service workers. Therefore, he is “more worried about the average person”. Brian Lehrer then asked him about driverless cars and, perhaps, even driverless Uber cabs one day. Mr. Ford answered that expectations are high that this will eventually happen, and he is concerned that taxi drivers will lose their jobs. (See this September 15, 2014 Subway Fold post on Uber and the “sharing economy”.)  Which led to …
  • Caller 5, who is currently a taxi driver in New York. They discussed how, in particular, many types of commercial drivers are facing this possibility. Brian Lehrer followed up by asking whether this may somehow lead to the end of Capitalism. Mr. Ford replied that Capitalism “can continue to work” but it must somehow “adapt to new laws and circumstances”.
  • Caller 6 inquired about one of the proposals raised in VR pioneer Jaron Lanier’s book Who Owns the Future (Simon & Schuster, 2013), whereby people could perhaps be paid for the information they provide online, as a possible means to financially assist people in the future. Mr. Ford’s response was that while it was “an interesting idea” it would be “difficult to implement”. As well, he believes that Google would resist it. He made a further distinction between his concept of guaranteed income and Lanier’s proposal insofar as he believes that “Capitalism can adapt” more readily to his own concept. (I also highly recommend Lanier’s book for its originality and deep insights.)

Brian Lehrer concluded by raising the prospect of self-aware machines. He noted that Bill Gates and Stephen Hawking had recently warned about this possibility. Mr. Ford responded that “we are too far from this now”. For him, today’s concern is on automation’s threat to jobs, many of which are becoming easier to reduce to a program.

To say the very least, to my own organic and non-programmatic way of thinking, this was an absolutely mind-boggling discussion. I greatly look forward to watching this topic continue to gather momentum and expanded media coverage.

My own questions include:

  • How should people at the beginning, middle and end of their careers be advised and educated to adapt to these rapid changes so that they can not only survive, but rather, thrive within them?
  • What role should employers, employees, educators and the government take, in any and all fields, to keep the workforce up-to-date in the competencies they will need to continue to be valuable contributors?
  • Are the challenges of automation most efficiently met on the global, national and/or local levels by all interested constituencies working together? What forms should their cooperation take?

*  For two additional book reviews I recommend reading ‘Rise of the Robots’ and ‘Shadow Work’ by Barbara Ehrenreich in the May 11, 2015 edition of The New York Times, and Soon They’ll Be Driving It, Too by Sumit Paul-Choudhury in the May 15, 2015 edition of The Wall Street Journal (subscription required).

New Chips are Using Deep Learning to Enhance Mobile, Camera and Auto Image Processing Capabilities

"Smartphone Photography", Image by AvenueTheory

“Smartphone Photography”, Image by AvenueTheory

We interface with our devices’ screens for inputs and outputs nearly all day and every day. What many of these gadgets will soon be able to display and, moreover, understand about digital imagery is about to take a significant leap forward. This will be due to the pending arrival of new chips embedded into their circuitry that are enabled by artificial intelligence (AI) algorithms. Let’s have a look.

This story was reported in a most interesting article on TechnologyReview.com entitled Silicon Chips That See Are Going to Make Your Smartphone Brilliant by Tom Simonite on May 14, 2015. I will sum up, annotate and pose some questions about it.

The key technology behind these new chips is an AI methodology called deep learning. In these 10 recent Subway Fold posts, deep learning has been covered in a range of applications in various online and real world marketplaces including, among others, entertainment, news, social media, law, medicine, finance and education. The emergence of these smarter new chips will likely bring additional significant enhancements to all of them and many others insofar as their abilities to better comprehend the nature of the content of images.

Two major computer chip companies, Synopsys and Qualcomm, and the Chinese search firm Baidu, are developing systems, based upon deep learning, for mobile devices, autos and other screen-based hardware. They were discussed by their representatives at the Embedded Vision Summit held on Tuesday, May 12, 2015, in Santa Clara, California. The companies’ representatives were:

  • Pierre Paul, the director of Research and Development at Synopsys, who presented a demo of a new chip core that “recognized speed limit signs” on the road for vehicles and enabled facial recognition for security apps. This chip uses less power than current chips on the market and, moreover, could add some “visual intelligence” to phone and car apps, and security cameras. (Here is the link to the abstracts of the presentations, listed by speaker, including Mr. Paul’s entitled Low-power Embedded Vision: A Face Tracker Case Study, from the Summit’s website.)
  • Ren Wu, Distinguished Scientist, Baidu Institute of Deep Learning, said that deep learning-based chips are important for computers used for research, and called for making such intelligence as ubiquitous as possible. (Here is the link to the abstracts of the presentations, listed by speaker including Mr. Wu’s, entitled Enabling Ubiquitous Visual Intelligence Through Deep Learning from the Summit’s website.)

Both Mr. Wu and Qualcomm’s representative, Jeff Gehlhaar, said that adding more intelligence to mobile devices’ ability to recognize photos could be used to address the privacy implications of some apps by lessening the quantity of personal data they upload to the web.
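The vision cores described above are proprietary hardware, but the kind of workload they accelerate can be sketched in a few lines: a small convolutional classifier that labels a camera crop entirely on the device, so only the resulting label, rather than the photo itself, ever needs to leave the phone. The architecture and class list below are invented for illustration; this is a minimal PyTorch sketch, not any vendor’s actual model.

```python
# A minimal, hypothetical sketch of on-device image classification of the kind the
# chips above accelerate; the architecture and class labels are invented.
import torch
import torch.nn as nn

class TinySignNet(nn.Module):
    """A deliberately small CNN: classify a 32x32 RGB crop into a few sign classes."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

classes = ["speed_limit", "stop", "yield", "none"]          # hypothetical labels
model = TinySignNet(num_classes=len(classes)).eval()

with torch.no_grad():                                       # on-device inference only
    crop = torch.rand(1, 3, 32, 32)                         # stand-in for a camera crop
    label = classes[model(crop).argmax(dim=1).item()]
print("detected:", label)                                   # only this label need leave the device
```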

My questions are as follows:

  • Should social networks employ these chips and, if so, how? For example, what if such visually intelligent capabilities were added to the recently rolled out live video apps Periscope and Meerkat on Twitter?
  • Will these chips be adapted to the forthcoming commercial augmented and virtual reality systems (as discussed in these five recent Subway Fold posts)? If so, what new capabilities might they add to these environments?
  • What additional privacy and security concerns will need to be addressed by manufacturers, consumers and regulators as these chips are introduced into their respective marketplaces?

IBM’s Watson is Now Data Mining TED Talks to Extract New Forms of Knowledge

"sydneytocairns_385", Image by Daniel Dimarco

“sydneytocairns_385”, Image by Daniel Dimarco

Who really benefited from the California Gold Rush of 1849? Was it the miners, only some of whom were successful, or the merchants who sold them their equipment? Historians have differed as to the relative degree, but they largely believe it was the merchants.

Today, it seems we have somewhat of a modern analog to this in our very digital world: The gold rush of 2015 is populated by data miners and IBM is providing them with access to its innovative Watson technology in order for these contemporary prospectors to discover new forms of knowledge.

So then, what happens when Watson is deployed to sift through the thousands of incredibly original and inspiring videos of online TED Talks? Can the results be such that TED can really talk and, when processed by Watson, yield genuine knowledge with meaning and context?

Last week, the extraordinary results of this were on display at the four-day World of Watson exposition here in New York. A fascinating report on it entitled How IBM Watson Can Mine Knowledge from TED Talks by Jeffrey Coveyduc, Director, IBM Watson, and Emily McManus, Editor, TED.com was posted on the TED Blog on May 5, 2015. This was the same day that the newfangled Watson + TED system was introduced at the event. The story also includes a captivating video of a prior 2014 TED Talk by Dario Gil of IBM entitled Cognitive Systems and the Future of Expertise that came to play a critical role in launching this undertaking.

Let’s have a look and see what we can learn from the initial results. I will sum up and annotate this report, and then ask a few additional questions.

One of the key objectives of this new system is to enable users to query it in natural language. An example given in the article is “Will new innovations give me a longer life?”. Thus, users can ask questions about ideas expressed among the full database of TED talks and, for the results, view video excerpts where such ideas have been explored. Watson’s results are further accompanied by a “timeline” of related concepts contained in a particular video clip permitting users to “tunnel sideways” if they wish and explore other topics that are “contextually related”.
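IBM has not published the Watson + TED pipeline, so the sketch below is only a stand-in that shows the general shape of the task: rank time-stamped transcript segments against a natural-language question and return the best clips, which is roughly the raw material from which the “timeline” of related concepts would be built. Plain TF-IDF similarity replaces Watson’s services here, and the transcript segments are invented.

```python
# A hypothetical stand-in for natural-language search over talk transcripts; Watson's
# actual services are proprietary. TF-IDF similarity and invented segments only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

segments = [
    {"talk": "Talk A", "start": 95,  "text": "gene therapies may extend healthy human lifespan"},
    {"talk": "Talk B", "start": 310, "text": "urban design and walkable cities improve wellbeing"},
    {"talk": "Talk C", "start": 42,  "text": "wearable sensors catch disease earlier and add years of life"},
]

def ask(question, segments, top_k=2):
    """Return the top_k transcript segments most similar to the question."""
    texts = [s["text"] for s in segments]
    vec = TfidfVectorizer().fit(texts + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(texts))[0]
    order = sims.argsort()[::-1][:top_k]
    return [{**segments[i], "score": round(float(sims[i]), 2)} for i in order]

for clip in ask("Will new innovations give me a longer life?", segments):
    print(f'{clip["talk"]} @ {clip["start"]}s  (score {clip["score"]})')
```

Each returned segment carries the start time of its clip, which is all a front end needs in order to jump the viewer straight to the relevant excerpt.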

The rest of the article is a dialog between the project’s leaders, Jeffrey Coveyduc from IBM and TED.com editor Emily McManus, that took place at World of Watson. They discussed how this new idea was transformed into a “prototype” of a fresh new means to extract “insights” from within “unstructured video”.

Ms. McManus began by recounting how she had attended Mr. Gil’s TED Talk about cognitive computing. Her admiration of his presentation led her to wonder whether Watson could be applied to TED Talks’ full content whereby users would be able to pose their own questions to it in natural language. She asked Mr. Gil if this might be possible.

Mr. Coveyduc said that Mr. Gil then approached him to discuss the proposed project. They agreed that it was not just the content per se, but rather TED’s mission of spreading ideas, that was so compelling. Because one of Watson’s key objectives is to “extract knowledge” that’s meaningful to the user, it thus appeared to be “a great match”.

Ms. McManus mentioned that TED Talks maintains an application programming interface (API) to assist developers in accessing their nearly 2,000 videos and transcripts. She agreed to provide access to TED’s voluminous content to IBM. The company assembled its multidisciplinary project team in about eight weeks.

They began with no preconceptions as to where their efforts would lead. Mr. Coveyduc said they “needed the freedom to be creative”. They drew from a wide range of Watson’s existing technical services. In early iterations of their work they found that “ideas began to group themselves”. In turn, this led them to “new insights” within TED’s vast content base.

Ms. McManus recently received a call from Mr. Gil asking her to stop by his office in New York. He demo-ed the new system, which had completely indexed the TED content. Moreover, he showed how it could display, in her words, “a universe of concepts extracted” from the content’s core. Next, using the all-important natural language capabilities to pose questions, they demonstrated how the results, in the form of numerous short clips, taken altogether compiled “a nuanced and complex answer to a big question”, as she described it.

Mr. Coveyduc believes this new system simplifies how users can inspect and inquire about “diverse expertise and viewpoints” expressed in video. He cited other potential areas of exploration such as broadcast journalism and online courses (also known as MOOCs*). Furthermore, the larger concept underlying this project is that Watson can distill the major “ideas and concepts” of each TED Talk and thus give users the knowledge they are seeking.

Going beyond Watson + TED’s accomplishments, he believes that video search remains quite challenging but this project demonstrates it can indeed be done. As a result, he thinks that mining such deep and wide knowledge within massive video libraries may turn into “a shared source of creativity and innovation”.

My questions are as follows:

  • What if Watson was similarly applied to the vast troves of video classes used by professionals to maintain their ongoing license certifications in, among others, law, medicine and accounting? Would new forms of potentially applicable and actionable knowledge emerge that would benefit these professionals as well as the consumers of their services? Rather than restricting Watson to processing the video classes of each profession separately, what might be the results of instead processing them together in various combinations and permutations?
  • What if Watson was configured to process the video repositories of today’s popular MOOC providers such as Coursera or edX? The same applies to universities around the world that are putting their classes online. Their missions are more or less the same in enabling remote learning across the web in a multitude of subjects. The results could possibly hold new revelations about subjects that no one can presently discern.

Two other recent Subway Fold posts that can provide additional information, resources and questions that I suggest checking out include Artificial Intelligence Apps for Business are Approaching a Tipping Point posted on March 31, 2015, and Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World posted on December 27, 2014.


*  See the September 18, 2014 Subway Fold post entitled A Real Class Act: Massive Open Online Courses (MOOCs) are Changing the Learning Process for the full details and some supporting links.

Artificial Intelligence Apps for Business are Approaching a Tipping Point


“Algorithmic Contaminations”, Image by Derek Gavey

There have been many points during the long decades of the development of business applications using artificial intelligence (AI) when it appeared that the Rubicon was about to be crossed. That is, this technology often seemed to be right on the verge of going mainstream in global commerce. Yet it has still to achieve a pervasive critical mass despite the vast resources and best intentions behind it.

Today, with the advent of big data and analytics and their many manifestations¹ spreading across a wide spectrum of industries, AI is now closer than ever to reaching such a tipping point. Consultant, researcher and writer Brad Power makes a timely and very persuasive case for this in a highly insightful and informative article entitled Artificial Intelligence Is Almost Ready for Business, posted on the Harvard Business Review site on March 19, 2015. I will summarize some of the key points, add some links and annotations, and pose a few questions.

Mr. Power sees AI being brought to this threshold by the convergence of rapidly increasing tech sophistication, “smarter analytics engines, and the surge in data”. Further adding to this mix are the incursion and growth of the Internet of Things (IoT), better means to analyze “unstructured” data, and the extensive categorization and tagging of data. Furthermore, there is the dynamic development and application of smarter algorithms to discern complex patterns in data and to generate increasingly accurate predictive models.

So, too, does machine learning² play a highly significant role in AI applications. It can be used to generate “thousands of models a week”. For example, a model premised upon machine learning can be used to select which ads should be placed on what websites within milliseconds in order to achieve the greatest effectiveness in reaching an intended audience. DataXu is one of the model-generating firms in this space.
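DataXu’s production models are proprietary, so the following is only a minimal sketch of the general pattern described above: a click-probability model is trained offline on historical impressions, and at request time a handful of candidate placements are scored so the best one can be chosen within milliseconds. The features, data and probabilities are invented for illustration.

```python
# A hypothetical sketch of machine-learned ad placement: train a click-probability
# model offline, then score candidate placements per request. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Offline: historical impressions -> features [site_id, hour_of_day, is_mobile], label = clicked
X_train = np.column_stack([
    rng.integers(0, 5, 5000),       # site_id
    rng.integers(0, 24, 5000),      # hour_of_day
    rng.integers(0, 2, 5000),       # is_mobile
])
y_train = rng.random(5000) < (0.02 + 0.01 * (X_train[:, 0] == 3))   # site 3 clicks a bit more
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def choose_placement(candidates):
    """Score candidate placements for one ad request and return the most promising one."""
    probs = model.predict_proba(np.array(candidates))[:, 1]
    return candidates[int(np.argmax(probs))], float(probs.max())

best, p = choose_placement([[0, 14, 1], [3, 14, 1], [4, 14, 0]])
print("serve on placement", best, "with predicted click probability", round(p, 4))
```

In production the “thousands of models a week” quoted above would come from retraining sketches like this one continuously as new impression data arrives; only the request-time scoring needs to run in milliseconds.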

Tom Davenport, a professor at Babson College and an analytics expert³, was one of the experts interviewed by Power for this article. To paraphrase part of his quote, he believes that AI and machine learning would be useful adjuncts to human analysts (often referred to as “quants”⁴). Such living experts can far better understand what goes into and comes out of a model than a machine learning app alone. In turn, these people can persuade business managers to apply such “analytical insights” to actual business processes.

AI can also now produce greater competitive efficiencies by closing the time gap between analyzing vast troves of data at high speeds and decision-making on how to apply the results.

IBM, one of the leading integrators of AI, has recently invested $1B in the creation of their Watson Group, dedicated to exploring and leveraging commercial applications for Watson technology. (X-ref to the September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice for a previous mention and links concerning Watson.) This AI technology is currently finding significant applications in:

  • Health Care: Due to Watson’s ability to process large, complex and dynamic quantities of text-based data, it can, in turn, “generate and evaluate hypotheses”. With specialized training, these systems can then make recommendations about treating particular patients. A number of elite medical teaching institutions in the US are currently engaging with IBM to deploy Watson to “better understand patients’ diseases” and recommend treatments.
  • Finance: IBM is presently working with 45 companies on apps including “digital virtual agents” to work with their clients in a more “personalized way”; a “wealth advisor” for financial planning⁵; and “risk and compliance management”. For example, USAA provides financial services to active members of the military and to veterans. Watson is being used to provide a range of financial support functions to soldiers as they move to civilian status.
  • Startups: The company has designated $100 million for introducing Watson into startups. An example is WayBlazer which, according to its home page, is “an intelligence search discovery system” to assist travelers throughout all aspects of their trips. This online service is designed to be an easy-to-use series of tools to provide personalized answers and support for all sorts of journeys. At the very bottom of their home page on the left-hand side are the words “Powered by IBM Watson”.

To get a sense of the trends and future of AI in business, Power spoke with the following venture capitalists who are knowledgeable about commercial AI systems:

  • Mark Gorenberg, Managing Director at Zetta Venture Partners which invests in big data and analytics startups, believes that AI is an “embedded technology”. It is akin to adding “a brain”  – – in the form of cognitive computing – – to an application through the use of machine learning.
  • Promod Haque, senior managing partner at Norwest Venture Partners, believes that when systems can draw correlations and construct models on their own, labor is reduced and better speed is achieved. As a result, a system such as Watson can be used to automate analytics.
  • Manoj Saxena, a venture capitalist (formerly with IBM), sees analytics migrating to the “cognitive cloud”, a virtual place where vast amounts of data from various sources will be processed in such a manner as to “deliver real-time analytics and learning”. In effect, this will promote smoother integration of data with analytics, something that still remains challenging. He is an investor in a startup called Cognitive Scale working in this space.

My own questions (not derived through machine learning), are as follows:

  • Just as Watson has begun to take root in the medical profession as described above, will it likewise begin to propagate across the legal profession? For a fascinating analysis as a starting point, I highly recommend 10 Predictions About How IBM’s Watson Will Impact the Legal Profession, by Paul Lippe and Daniel Katz, posted on the ABA Journal website on October 4, 2014. I wonder whether the installation of Watson in law offices will take on other manifestations that cannot even be foreseen until the systems are fully integrated and running? Might the Law of Unintended Consequences also come into play and produce some negative results?
  • What other professions, industries and services might also be receptive to the introduction of AI apps that have not even considered it yet?
  • Does the implementation of AI always produce reductions in jobs or is this just a misconception? Are there instances where it could increase the number of jobs in a business? What might be some of the new types of jobs that could result? How about AI Facilitator, AI Change Manager, AI Instructor, AI Project Manager, AI Fun Specialist, Chief AI Officer,  or perhaps AI Intrapreneur?

______________________

1.  There are 27 Subway Fold posts in the category of Big Data and Analytics.

2.  See the Subway Fold posts of December 27, 2014 entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World and December 10, 2014 entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market? for specific examples of machine learning.

3.  I had the great privilege of reading one of Mr. Davenport’s very insightful and enlightening books entitled Competing on Analytics: The New Science of Winning (Harvard Business Review Press, 2007), when it was first published. I learned a great deal from it and this book was responsible for my initial interest in the applications of analytics in commerce. Although big data and analytics have grown exponentially since its publication, I still highly recommend this book for its clarity, usefulness and enthusiasm for this field.

4.  For a terrific and highly engaging study of the work and influence of these analysts, I also recommend reading The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It (Crown Business, 2011), by Scott Patterson.

5.  There was a most interesting side-by-side comparison of human versus automated financial advisors entitled Robo-Advisors Vs. Financial Advisors: Which Is Better For Your Money? by Libby Kane, posted on BusinessInsider.com on July 21, 2014.

Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World

"Gritty Refraction", Image by Mark Oakley

“Gritty Refraction”, Image by Mark Oakley

As the velocity of the rate of change in today’s technology steadily continues to increase, one of the contributing factors behind this acceleration is the rise of artificial intelligence (AI). “Smart” attributes and functionalities are being baked into a multitude of systems that are affecting our lives in many visible and, at other times, transparent ways. One well-known example of an AI-enabled app is Siri, the voice recognition system in the iPhone. Two recent Subway Fold posts have also examined AI’s applications in law (1) and music (2).

However, notwithstanding all of the technological, social and commercial benefits produced by AI, a widespread reluctance, if not fear, of its capabilities to produce negative effects still persists. Will the future produce consequences resembling those in the Terminator or Matrix movie franchises, the “singularity” predicted by Ray Kurzweil where machine intelligence will eventually surpass human intelligence, or perhaps other more benign and productive outcomes?

During the past two weeks, three articles have appeared where their authors have expressed more upbeat outlooks about AI’s potential. They believe that smarter systems are not going to become the world’s new overlords (3) and, moreover, there is a long way to go before computers will ever achieve human-level intelligence or even consciousness. I highly recommend reading them all in their entirety for their rich content, insights and engaging prose.

I will sum up, annotate and comment upon some of the key points in these pieces, which have quite a bit in common in their optimism, analyses and forecasts.

First is a reassuring column by Dr. Gary Marcus, a university professor and corporate CEO, entitled Artificial Intelligence Isn’t a Threat—Yet, that appeared in the December 12, 2014 edition of The Wall Street Journal. While acknowledging the advances in machine intelligence, he still believes that computers today are nowhere near “anything that looks remotely like human intelligence”. However, computers do not necessarily need to be “superintelligent” to do significant harm, such as causing wild swings in the equities markets resulting from programming errors.(4)

He is not calling for an end to further research and development in AI. Rather, he urges proceeding with caution, with safeguards carefully in place focusing upon the apps’ access to other networked systems in areas such as, but not limited to, medicine and autos. Still, the design, implementation and regulation of such “oversight” have yet to be worked out.

Dr. Marcus believes that we might now be overly concerned about any real threats from AI while still acknowledging potential threats from it. He poses questions about levels of transparency and technologies that assess whether AI programs are functioning as intended. Essentially, a form of “infrastructure” should be in place to evaluate and “control the results” if needed.

Second, is an article enumerating five key reasons why the AI apocalypse is not nearly at hand right now. It is aptly entitled Will Artificial Intelligence Destroy Humanity? Here are Reasons Not to Worry, by Timothy B. Lee, which was posted on Vox.com on December 19, 2014. The writer asserts that the fears and dangers of AI are far overstated based on his research and interviews with some AI experts. To sum up these factors:

  • Actual “intelligence” is dependent on real world experience, such that massive computing power alone will not produce comparable capabilities in machines. The example cited here is studying a foreign language well enough to pass as a native speaker. This involves both book learning and actually speaking with locals in order to pick up social elements and slang. A computer does not and never will have these experiences, nor can it simulate them.
  • Computers, by their very nature, must rely on humans for maintenance, materials, repairs and, ultimately, replacement. The current state of robotics development is unable to handle these responsibilities. Quite simply, machines need us and will continue to do so for a long time.
  • Creating a computerized equivalent of a real human brain is very tough and remains beyond the reach of today’s circuitry and programming. Living neurons are indeed quite different in their behaviors and responses than digital devices. The author cites the modeling of weather simulations as one area where progress has been relatively small despite the huge increases in available processing capacity. Moreover, simulating brain activity in an effort to generate a form of intelligence is relatively far more difficult than modeling weather systems.(5)
  • Relationships, more than intelligence, are needed to acquire power in the real world. Looking at the achievements of recent US presidents, the author states that they gained their achievements by virtue of their networks, personalities and skills at offering rewards and penalties. Thus, machines assist in attaining great technological breakthroughs, but only governments and companies can assemble the capital and resources to implement great projects. Taking this logic further, machines could never take over the world because they utterly lack the capability to work with the large numbers of people needed to even attempt this. (Take that, SkyNet.)
  • Intelligence will become less valuable as its supply increases, according to the laws of supply and demand. As the price of computing continues to fall, its technological capabilities continue to rise. As the author interprets these market forces, the availability of “super-intelligent computers” will become commoditized and, in turn, produce even more intelligent machines where pricing is competitive. (6)

The third article, which presents a likewise sanguine view on the future of AI, is entitled Apocalypse or Golden Age: What Machine Intelligence Will Do to Us, by Patrick Ehlen, posted on VentureBeat.com on December 23, 2014. He drew his research from a range of leaders, projects and studies to arrive at similar conclusions that the end of the world as we know it is not at hand because of AI. This piece overlaps with the others on a number of key points. It provides the following additional information and ideas:

  • Well regarded university researchers and tech giants such as Google are pursuing extensive and costly AI research and development programs in conjunction with their ongoing work into such areas as robotics, machine learning, and modeling simple connectomes (see fn.5 below).
  • Unintended bad consequences of well-intentioned research are almost inevitable. Nonetheless, experts believe that the rate of advancement in this field will continue to accelerate and may well have significant impacts upon the world during the next 20 years.
  • On August 6, 2014, the Pew Internet Research Project published a comprehensive report that was directly on point entitled AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson. This was compiled based on surveys of nearly 1,900 AI experts. To greatly oversimplify the results, while there was largely a consensus view on the progress in this field and the ever-increasing integration of AI into numerous areas, there was also a significant split of opinion as to the economic, employment and educational effects of AI in conjunction with robotics. (I highly recommend taking some time to read through this very enlightening report because of its wealth of insights and diversity of perspectives.)
  • Today we are experiencing a “perfect storm” where AI’s progress is further being propelled by the forces of computing power and big data. As a result, we can expect “to create new services that will redefine our expectations”. (7)
  • Certain sectors of our economy will realize greater benefits from the surge in AI than others.(8) This, too, is likely to cause displacements and realignments in employment in these areas.
  • Changes to relevant social and public policies will be needed in order to successfully adapt to AI-driven effects upon the economy. (This is similar to Dr. Marcus’s view, above, that new forms of safeguards and infrastructure will become necessary.)

I believe that authors Marcus, Lee and Ehlen have all made persuasive cases that AI will continue to produce remarkable new goods, services and markets without any world threatening consequences. Yet they all alert their readers about the unintended and unforeseeable economic and social impacts that likely await us further down the road. My own follow up questions are as follows:

  • Who should take the lead in coordinating the monitoring of these pending changes? Whom should they report to and what, if any, regulatory powers should they have?
  • Will any resulting positive or negative changes attributable to AI be global if and when they manifest themselves, or will they be unevenly distributed among only certain nations, cities, marketplaces, populations and so on?
  • Is a “negative” impact of AI only in the eye of the beholder? That is, what metrics and analytics exist or need to be developed in order to assess the magnitudes of plus or minus effects? Could such standards be truly objective in their determinations?
  • Assuming that AI development and investment continues to race ahead, will this lead to a possible market/investment bubble or, alternatively, some form of AI Industrial Complex?
  • So, is everyone looking forward to the July 2015 release of Terminator Genisys?

___________________________________

1.  See Possible Futures for Artificial Intelligence in Law Practice posted on September 1, 2014.

2.  See Spotify Enhances Playlist Recommendations Processing with “Deep Learning” Technology posted  on August 14, 2014.

3.  The popular “I, for one, welcome our new robot overlords” meme originated in Season 5, episode 15 of The Simpsons, entitled Deep Space Homer. (My favorite scene in this ep is where the – – D’oh! – – potato chips are floating all around the spaceship.)

4.   In Flash Boys (W.W. Norton & Company, 2014), renowned author Michael Lewis did an excellent job of reporting on high-speed trading and the ongoing efforts  to reform it. Included is coverage of the “flash crash” in 2010 when errant program trading caused a temporary steep decline in the stock market.

5.  For an absolutely fascinating deep and wide analysis of current and future projects to map out all of the billions of connections among the neurons in the human brain, I suggest reading Connectome: How the Brain’s Wiring Makes Us Who We Are (Houghton Mifflin, 2012) by Sebastian Seung. See also a most interesting column about the work of Dr. Seung and others by James Gorman in the November 10, 2014 edition of The New York Times entitled Learning How Little We Know About the Brain. (For the sake of all humanity, let’s hope these scientists don’t decide to use Homer J. Simpson, at fn. 3 above, as a test subject for their work.)

6.  This factor is also closely related to the effects of Moore’s Law, which states that the number of transistors that can be packed onto a chip approximately doubles every two years (a figure often restated as every 18 months). It was originally conceived by Gordon E. Moore, a legendary computer scientist and one of the founders of Intel. This principle has held up for nearly fifty years since it was first published.
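To make the arithmetic behind that doubling concrete, here is a minimal back-of-the-envelope sketch in Python. The fixed two-year doubling period and the roughly 2,300-transistor Intel 4004 (1971) starting point are my own illustrative assumptions for this example, not figures drawn from the sources above.

```python
# Back-of-the-envelope illustration of Moore's Law: a transistor count
# that roughly doubles every two years. The ~2,300-transistor Intel 4004
# (1971) is used here only as a convenient, well-known starting point.

def projected_transistors(start_count: float, years_elapsed: float,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count assuming a fixed doubling period."""
    return start_count * 2 ** (years_elapsed / doubling_period_years)

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011):
        count = projected_transistors(2_300, year - 1971)
        print(f"{year}: ~{count:,.0f} transistors per chip")
```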

7.  This technological convergence is fully and enthusiastically explored in an excellent article by Kevin Kelly entitled The Three Breakthroughs That Have Finally Unleashed AI on the World in the November 2014 issue of WIRED.

8.  This seems like a perfect opportunity to invoke the often-quoted maxim by master sci-fi and speculative fiction author William Gibson that “The future is already here – it’s just not very evenly distributed.”

Updates on Recent Posts Re: Music’s Big Data, Deep Learning, VR Movies, Regular Movies’ Effects on Our Brains, Storytelling and, of Course, Zombies

This week has seen the publication of an exciting series of news stories and commentaries that provide a very timely opportunity to update six recent Subway Fold posts. The common thread running through the original posts and these new pieces is the highly inventive mixing, mutating and monetizing of pop culture and science. Please put on your virtual 3-D glasses and let’s see what’s out there.

The December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market? explored the apps, companies and trends that have become the key drivers in the current global music business. Adding to the big data strategies and implementations for three more major music companies and their rosters of artists was a very informative report in the December 15, 2014 edition of The Wall Street Journal by Hannah Karp entitled Music Business Plays to Big Data’s Beat. (The full text requires a subscription to WSJonline.com, but the story also appeared in full on Nasdaq.com, clickable here.) As described in detail in this report, Universal Music, Warner Music, and Sony Music have all created sophisticated systems to parse numerous data sources and apply customized analytics for planning and executing marketing campaigns.

Next, for an alternative and somewhat retro approach, a veteran music retailer named Sal Nunziato wrote a piece on the Op-Ed page of The New York Times on the very same day entitled Elegy for the ‘Suits’. He blamed the Internet more than the music labels for the current state of music, where “anyone with a computer, a kazoo and an untuned guitar” can release their music online regardless of its quality. Thus, the ‘suits’ he nostalgically misses were the music company execs who exerted more control over the quantity and quality of music available to the public.

Likewise covering the tuning up of another major force in today’s online music streaming industry was an August 14, 2014 Subway Fold post entitled Spotify Enhances Playlist Recommendations Processing with “Deep Learning” Technology. This summarized a report about how deep learning technology was being successfully applied to improve the accuracy and responsiveness of Spotify’s recommendation engine. Presenting an even stronger case that you-ain’t-seen-nothing-yet in this field was an engaging analysis of some still largely unseen developments in deep learning posted on December 15, 2014, on Gigaom.com entitled What We Read About Deep Learning is Just the Tip of the Iceberg by Derrick Harris. These include experimental systems being tested by the likes of Google, Facebook and Microsoft. As well, there was a series of intriguing presentations and demos at the recent Neural Information Processing Systems conference held in Montreal. As detailed here with a wealth of supporting links, many of these advanced systems and methods are expected to gain more press and publicity in 2015.
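For readers curious about what machine-learned music recommendations can look like in practice, here is a purely illustrative Python sketch, not Spotify’s actual system: it assumes each track has already been reduced to an embedding vector by some trained neural network (the vectors below are just random stand-ins), and then recommends the nearest neighbors of a track the listener already likes.

```python
# Purely illustrative sketch of embedding-based music recommendation.
# The embeddings below are random stand-ins; a real deep learning system
# would derive them from audio content and/or listening histories.

import numpy as np

rng = np.random.default_rng(42)
track_embeddings = {name: rng.normal(size=32)
                    for name in ("track_a", "track_b", "track_c", "track_d")}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Standard cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(liked_track: str, top_n: int = 2) -> list[str]:
    """Rank all other tracks by embedding similarity to the liked track."""
    liked = track_embeddings[liked_track]
    scored = sorted(((cosine_similarity(liked, emb), name)
                     for name, emb in track_embeddings.items()
                     if name != liked_track), reverse=True)
    return [name for _, name in scored[:top_n]]

print(recommend("track_a"))
```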

Returning to the here and now at the end of 2014, the current release of the movie adaptation of the novel Wild by Cheryl Strayed (Knopf, 2011) has been further formatted into a 3-minute supplemental virtual reality movie, as reported in the December 15, 2014 edition of The New York Times by Michael Cieply in an article entitled Virtual Reality ‘Wild’ Trek. This fits right in with the developments covered in the December 10, 2014 Subway Fold post entitled A Full Slate of Virtual Reality Movies and Experiences Scheduled at the 2015 Sundance Film Festival, as this short film is also scheduled to be presented at the 2015 Sundance festival. Using Oculus and Samsung VR technology, it is an immersive meeting with the lead character, played by actress Reese Witherspoon, while she is hiking in the wilderness. She is quoted as being very pleased with the final results of this VR production.

The next set of analyses and enhancements to our cinematic experience, continuing right along with the September 3, 2014 Subway Fold post entitled Applying MRI Technology to Determine the Effects of Movies and Music on Our Brains, concerns a newly published book that explains the science of how movies affect our brains entitled Flicker: Your Brain on Movies (Oxford University Press, 2014), by Dr. Jeffrey Zacks. The author was interviewed during a fascinating segment of the December 18, 2014 broadcast of The Brian Lehrer Show on WNYC radio. Among other things, he spoke about why audiences cry during movies (even when the films are not very good), sometimes root for the villain, and duck out of the way when an object on the screen seems to be coming right at them, such as the giant boulder rolling after Indiana Jones at the start of Raiders of the Lost Ark. Much of this is intentionally done by the filmmakers to manipulate audiences into heightened emotional responses to key events as they unfold on the big screen.

Of course, all movie making involves the art and science of storytelling skills as discussed in the November 4, 2014 Subway Fold post entitled Say, Did You Hear the Story About the Science and Benefits of Being an Effective Storyteller?. In a very practical and insightful article in the December 12, 2014 edition of The New York Times by Alina Tugend entitled Storytelling Your Way to a Better Job or a Stronger Start-Up, there are some helpful applications for today’s marketplace. As concisely stated in this piece, “You need to have a good story.” It describes in detail how there are now consultants, charging meaningful fees, who use new approaches and techniques to help people improve their skills and become more persuasive storytellers. Among others interviewed for this story was Dr. Paul J. Zak, who wrote the recent article on The Harvard Business Review Blog that was the basis for the November 4th Subway Fold post. It concludes with five helpful pointers to spin a compelling yarn for your listeners.

Finally, judging by the extraordinary number of tweets about the ongoing adventures of Sheriff Rick and the Grimes Gang, the best story told on TV during the 2014 season was – – in a fictional world where brains take on an entirely different significance – – The Walking Dead on AMC. This was covered on Nielsen.com on December 15, 2014 in a post entitled Tops of 2014: Social TV. TWD averaged twice as many tweets as its next competitor in the ongoing series category. This follows up directly on the July 31, 2014 Subway Fold post entitled New Analytical Twitter Traffic Report on US TV Shows During the 2013 – 2014 Season. Having read scores of TWD tweets about the mid-season finale myself: everyone will miss you, Beth.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As a major fan of TWD, I would like to take the opportunity to add my own brief review of the tragic events in Episode 5.8:

I think that, in the end, Beth was a form of avatar for the entire show. She traveled many miles, from lying on her bed in Season 2 completely unable to function, to becoming, by Season 5, a realist concerning her own and the group’s survival. Rather than resigning herself to being held as a captive ward in the hospital, she was determined to get out no matter what and was so proud of helping Noah to escape.

She awakened and arose to be a survivor and a committed member of the Grimes Gang, just as everyone else has done during the past five years. That is, Beth’s journey reflects the entire group’s journey. She, and the Grimes Gang, have up to this point survived all of the threats they faced and endured all of the horrors they have seen. They will all survive, but this death will have more serious repercussions than perhaps any other death up until this point. Maggie, Daryl, Rick, Carol and Carl, the core of the GG, will not soon recover from this.

What I still do not understand is why, given that she was finally free in the hospital’s hallway, she jeopardized her life by going after the lead officer with a pair of scissors. It seemed somewhat at odds with Beth’s character as someone who had survived until now on her own determination and close bond with the group. She had nothing to gain by such a reckless act in the middle of a very volatile situation. Was it a sacrifice to save Noah? Did she realize that the cop was holding a gun at that point? Was she just overtaken by the motivation that desperate times sometimes call for desperate measures?

Consider, too, that she was Hershel’s daughter and her character reflected what she had learned from him: 1. Both learned to see things differently and adapted when the circumstances changed. 2. Both faced sacrifices and danger with great dignity. (Recall Hershel’s acknowledging grin towards Rick right before the Governor murdered him, the elder of the survivors, and then Beth’s defiant grin when she saw that Noah had escaped.) 3. Both were resilient, insofar as Hershel adapted to the loss of his leg and Beth recovered from her father’s murder. 4. Both sought to comfort others, as Hershel stayed with the flu patients and Beth finally drew Daryl out about his terrible family life. Recall also the three very effective times during her history on the show when Beth’s singing gave great comfort to the others. Indeed, she was a saintly figure, but as this story arc wore on, her demise seemed to be foretold.

TWD remains, for me, an absolutely brilliant show in terms of its characters, narrative and presentation.


Possible Futures for Artificial Intelligence in Law Practice

As the legal marketplace continues to see significant economic and productivity gains from many practice-specific technologies, is it possible that attorneys themselves could one day be supplanted by sophisticated systems driven by artificial intelligence (AI) such as IBM’s Watson?

Jeopardy championships aside for the moment, leading legal technology expert and blogger Ron Friedmann posted a fascinating report and analysis on August 24, 2014 on his Prism Legal Strategic Technology Blog entitled Meet Your New Lawyer, IBM Watson. He covers an invitation-only session for CIOs of large global law firms at this summer’s annual meeting of the International Legal Technology Association (ILTA), where a Watson senior manager made a presentation to the group. Ron, as he always does on his consistently excellent blog, offers his own deep and valuable insights on the practical and economic implications regarding the possible adaptation of Watson to the work done at large law firms. I highly recommend clicking through for a full read of this post.

(X-ref also to an earlier post here, ILTA’s New Multi-dimensional Report on the Future of Legal Information Technology.)

As I was preparing to write this post a few days ago, lo and behold, my September 2014 subscription edition of WIRED arrived. It carries a highly relevant feature about a hush-hush AI startup, entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask, by Steven Levy. This is about the work of the founders of Viv Labs, who are developing the next generation of AI technology. Even in a crowded field where many others have competed, the article indicates that this new company may really be onto something very new. That is, AI as a form of utility that can:

  • Access and integrate vast numbers of big data sources
  • Continually teach itself to do new things and autonomously generate supporting code to accomplish them
  • Handle voice queries on mobile devices that involve compound and multi-level questions, steps and sources to resolve (a toy sketch of this idea follows this list)
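To give a feel for what resolving a “compound and multi-level” request might involve, here is a deliberately tiny toy sketch in Python. It is not Viv’s technology; the two in-memory dictionaries simply stand in for hypothetical calendar and weather services, and the whole example exists only to show a question being broken into chained steps.

```python
# Toy illustration of answering a compound question by chaining steps
# across multiple "sources". All data and services here are made up.

CALENDAR = {"Tuesday": "Chicago"}   # stand-in for a calendar service
WEATHER = {"Chicago": "rainy"}      # stand-in for a weather service

def answer_compound_query() -> str:
    """Answer: 'What will the weather be like where my Tuesday meeting is?'"""
    city = CALENDAR["Tuesday"]      # step 1: find the meeting's location
    forecast = WEATHER[city]        # step 2: look up the forecast for that city
    return f"Your Tuesday meeting is in {city}, where the forecast is {forecast}."

print(answer_compound_query())
```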

Please check out the full text of this article for all of the details about how Viv’s technology works and its exciting prospective uses.

That said, would Viv’s utility architecture, as opposed to Watson’s larger-scale technology, be more conducive to today’s legal applications? Assuming for the moment that it’s technically feasible, how would the ability to operate via such voice-based AI input/output affect the operation and quality of results for, say, legal research services, document assembly applications, precedent libraries, enterprise search, wikis, extranets, and perhaps even Continuing Legal Education courses? What might be a tipping point towards a greater engagement of AI in the law across many types of practices and office settings? Might this result in in-house counsel bringing more work to their own staffs rather than going to outside counsel? Would public interest law offices be able to provide more economical services to clients who cannot normally afford to pay legal fees? Might this have further impacts upon the trends towards fixed fee-based billing arrangements?