New Visualization Maps Out the Concepts of the “Theory of Everything”

“DC891- Black Hole”, Image by Adriana Arias

January 6, 2017: An update on this post appears below.


While I was a student in the fourth grade at Public School 79, my teacher introduced the class to the concept of fractions. She demonstrated this using the classic example of cutting up a pie into different numbers of slices. She explained to the class how to slice it into halves, thirds, quarters and so on. During this introductory lesson, she kept emphasizing that the sum of all the parts always added up to the whole pie and that they could never be equal to more than or less than the whole.

I thought I could deal with this fractions business back then. As far as I know, it still holds up pretty well today.

On an infinitely grander and brain-bendingly complex scale that is more than just pieces of π, physicists have been working for decades on a Theory of Everything (ToE). The objective is to build a comprehensive framework that fully unites and explains the theoretical foundations of physics across the universe. The greatest minds in this field have approached this ultimate challenge with a variety of highly complex and advanced mathematics, theoretical constructs and proposals. Many individuals and multidisciplinary teams are still at work trying to achieve the ToE. If and when any of them succeeds in formulating and proving it, the result will be the type of breakthrough that could profoundly change our understanding of our world and the universe we inhabit.

Einstein was one of the early pioneers in this field. He invested a great deal of effort in this challenge, but even a Promethean genius such as he never succeeded at it. His General Theory of Relativity continues to be one of the cornerstones of the ToE endeavor. The entire September 2015 issue of Scientific American is devoted to the 100th anniversary of this monumental accomplishment. I highly recommend reading this issue in its entirety.

I also strongly urge you to check out a remarkable interactive visualization of the component theories and concepts of the ToE that appeared in an August 3, 2015 article on QuantaMagazine.org entitled Theories of Everything, Mapped, by Natalie Wolchover. The author very concisely explains how the builder of the map, developer Emily Fuhrman, created it in order to teach people about the ToE. Furthermore, it shows that there are areas with substantial “disunions, holes and inconsistencies” remaining that comprise the “deep questions that must be answered” in order to achieve the ToE.

The full map is embedded at the top of the article, ready for visitors to click into it and immerse themselves in such topics as, among many others, grand unification, quantum gravity and dark matter. All along the way, there are numerous linked resources within it available for further extensive explorations. In my humble opinion, Ms. Fuhrman has done a brilliant job of creating this.

Having now spent a bit of time clicking all over this bounty of fascinating information, I was reminded of my favorite line from Bob Dylan’s My Back Pages that goes “Using ideas as my maps”. (The Byrds also had a hauntingly beautiful Top 40 hit covering this.)

In these 26 prior Subway Fold posts we have examined a wide range of the highly inventive and creative work that can be done with contemporary visualization tools. This ToE map is yet another inspiring example. Even if subjects like space-time and the cosmological constant are not familiar to you, this particularly engaging visualization expertly arranges and explains the basics of these theoretical worlds. It also speaks to the power of effective visualization in capturing the viewer’s imagination about a subject which, if otherwise only left as text, would not succeed in drawing most online viewers in so deeply.


 

January 6, 2017 Update:

Well, it looks like those Grand Unified Fielders have recently suffered another disappointing bump in the road (or perhaps in the universe), as they have been unable to find any genuine proton decay. Although this might sound like something your dentist has repeatedly warned you about, it is rather an anticipated physical phenomenon on the road to finding the Theory of Everything that has yet to be observed and measured. This puts that quest on hold for the time being unless and until either it is observed or physicists and theorists can work around its absence. The full details appear in a new article entitled Grand Unification Dream Kept at Bay, by Natalie Wolchover (the same author whose earlier article on this was summarized above), on QuantaMagazine.org, posted on December 15, 2016.

Data Analysis and Visualizations of All U.S. Presidential State of the Union Addresses

"President Obama's State of the Union Address 2013", Word cloud image by Kurtis Garbutt

While data analytics and visualization tools have accumulated a significant historical record of accomplishments, now, in turn, this technology is being applied to actual significant historical accomplishments. Let’s have a look.

Every year in January, the President of the United States gives the State of the Union speech before both houses of the U.S. Congress. This is to address the condition of the nation, his legislative agenda and other national priorities. The requirement for this presentation appears in Article II of the U.S. Constitution.

This talk with the nation has been given every year (with only one exception) since 1790. The resulting total of 224 speeches presents a remarkable and dynamic record of U.S. history and policy. Researchers at Columbia University and the University of Paris have recently applied sophisticated data analytics and visualization tools to this trove of presidential addresses. Their findings were published in the August 10, 2015 edition of the Proceedings of the National Academy of Sciences in a truly fascinating paper entitled Lexical Shifts, Substantive Changes, and Continuity in State of the Union Discourse, 1790–2014, by Alix Rule, Jean-Philippe Cointet, and Peter S. Bearman.

A very informative and concise summary of this paper was also posted in an article on Phys.org, also on August 10, 2015, entitled Big Data Analysis of State of the Union Remarks Changes View of American History (no author is listed). I will summarize, annotate and post a few questions of my own. I highly recommend clicking through and reading the full report and the summary article together for a fuller perspective on this achievement. (Similar types of textual and graphical analyses of US law were covered in the May 15, 2015 Subway Fold post entitled Recent Visualization Projects Involving US Law and The Supreme Court.)

The researchers developed custom algorithms for their research. They were applied to the 1.8 million words used in all of the addresses from 1790 to 2014. By identifying the frequencies of “how often words appear jointly” and “mapping their relation to other clusters of words”, the team was able to highlight “dominant social and political” issues and their relative historical time frames. (See Figure 1 at the bottom of Page 2 of the full report for this lexicographical mapping.)
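As a rough illustration of this kind of joint-frequency counting (a simplified sketch in Python, not the researchers' actual algorithms; the toy "speeches" below are invented):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count how often each pair of distinct words appears jointly
    in the same document (a crude proxy for topical association)."""
    pair_counts = Counter()
    for doc in documents:
        # Sorting the deduplicated words gives each pair a canonical order.
        words = sorted(set(doc.lower().split()))
        pair_counts.update(combinations(words, 2))
    return pair_counts

# Toy stand-ins: in the real study, each document would be one address.
speeches = [
    "the constitution protects the people",
    "the people amend the constitution",
    "commerce and the treasury fund expenditures",
]
counts = cooccurrence_counts(speeches)
print(counts[("constitution", "people")])  # 2: the pair co-occurs in two speeches
```

The real analysis then maps these pairwise frequencies into clusters of related words; this sketch only shows the counting step that underlies it.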

One of the researchers’ key findings was that although the topics of “industry, finance, and foreign policy” were predominant and persist throughout all of the addresses, following World War II the recurring keywords focus further upon “nation building, the regulation of business and the financing of public infrastructure”. While it is well known that these emergent terms were all about modern topics, the researchers were thus able to pinpoint the exact time frames when they first appeared. (See Page 5 of the full report for the graphic charting these data trends.)

Foreign Policy Patterns

The year 1917 struck the researchers as a critical turning point because it represented a dramatic shift in the data containing words indicative of more modern times. This was the year that the US sent its troops into battle in Europe in WWI. It was then that new keywords in the State of the Union including “democracy,” “unity,” “peace” and “terror” started to appear and recur. Later, by the 1940’s, word clusters concerning the Navy appeared, possibly indicating emerging U.S. isolationism. However, they suddenly disappeared again as the U.S. became far more involved in world events.

Domestic Policy Patterns

Over time, the researchers identified changes in the terminology used when addressing domestic matters. These concerned the government’s size, economic regulation, and equal opportunity. Although the focus of the State of the Union speeches remained constant, new keywords appeared whereby “tax relief,” “incentives” and “welfare” have replaced “Treasury,” “amount” and “expenditures”.

An important issue facing this project was that during the more than two centuries being studied, keywords could substantially change in meaning over time. To address this, the researchers applied new network analysis methods developed by Jean-Philippe Cointet, a team member, co-author and physicist at the University of Paris. They were intended to identify changes whereby “some political topics morph into similar topics with common threads” as others fade away. (See Figure 3 at the bottom of Page 4 of the full paper for this enlightening graphic.*)

As a result, they were able to parse the relative meanings of words as they appear with each other and, on a more macro level, in the “context of evolving topics”. For example, it was discovered that the word “Constitution” was:

  • closely associated with the word “people” in early U.S. history
  • linked to “state” following the Civil War
  • linked to “law” during WWI and WWII, and
  • returned to “people” during the 1970’s

Thus, the meaning of “Constitution” must be assessed in its historical context.
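As a toy sketch of how such a shift in association might be detected (an invented mini-corpus and a deliberately crude method, not the paper's network analysis), one can track the word most often sharing a speech with a keyword, era by era:

```python
from collections import Counter, defaultdict

# Hypothetical (era, text) pairs standing in for dated speeches.
speeches = [
    ("early", "the constitution belongs to the people"),
    ("early", "people wrote the constitution"),
    ("wartime", "the constitution is the supreme law"),
    ("wartime", "law flows from the constitution"),
]

STOPWORDS = {"the", "is", "to", "from", "a"}

def top_partner(speeches, keyword):
    """For each era, find the word most often appearing alongside `keyword`."""
    partners = defaultdict(Counter)
    for era, text in speeches:
        words = set(text.lower().split()) - STOPWORDS
        if keyword in words:
            partners[era].update(words - {keyword})
    return {era: c.most_common(1)[0][0] for era, c in partners.items()}

print(top_partner(speeches, "constitution"))
# {'early': 'people', 'wartime': 'law'}
```

Even this crude version reproduces the pattern described above in miniature: the dominant partner of “constitution” flips from “people” to “law” between the two eras.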

My own questions are as follows:

  • Would this analytical approach yield new and original insights if other long-running historical records, such as the Congressional Record, were likewise subjected to the research team’s algorithms and analytics?
  • Could companies and other commercial businesses derive any benefits from having their historical records similarly analyzed? For example, might it yield new insights and recommendations for corporate governance and information governance policies and procedures?
  • Could this methodology be used as an electronic discovery tool for litigators as they parse corporate documents produced during a case?

 


*  This also resembles, in methodology and appearance, the graphic on Page 29 of the law review article entitled A Quantitative Analysis of the Writing Style of the U.S. Supreme Court, by Keith Carlson, Michael A. Livermore, and Daniel Rockmore, dated March 11, 2015, linked to and discussed in the May 15, 2015 Subway Fold post cited above.

Companies Are Forming Digital Advisory Panels To Help Keep Pace With Trending Technologies

"Empty Boardroom", Image by reynermedia

As a result of the lightning-fast rates of change in social media, big data and analytics, and online commerce¹, some large corporations have recently created digital advisory panels (also called “boards”, “councils” and “groups”), to assist executives in keeping pace with implementing some of the latest technologies. These panels are being patterned as less formal and scaled-down counterparts of traditional boards of directors.

This story was covered in a fascinating and very instructive article in the June 10, 2015 edition of The Wall Street Journal entitled “Companies Set Up Advisory Boards to Improve Digital Savvy” (subscription required, however, the article is fully available here on nasdaq.com). I will sum up, annotate and add a few questions of my own.

These digital advisory panels are often composed of “six outside experts under 50 years old”. In regularly scheduled meetings, their objective is to assist corporate managers in reaching diverse demographics and using new tools such as virtual reality² for marketing purposes. The executives whom the panels serve are appreciative of their “honest feedback”, access to entrepreneurs, and perspectives on these digital matters.

George L. Davis at the executive recruiting firm Egon Zehnder reports that approximately 50 companies in the Fortune 500 have already set up digital advisory panels. These include, among others, Target Corp. (details below) and American Express. However, not all such panels have remained in operation.

Here are the experiences of three major corporations with their digital advisory panels:

1. General Electric

GE’s digital advisory panel has met every quarter since its inception in 2011. Its members are drawn from a diversity of fields such as gaming and data visualization³. The youngest member of their 2014 panel was Christina Xu. She is a co-founder of a consulting company called PL Data. She found her experience with GE to be “an interesting window” into a corporate environment.

Ms. Xu played a key role in creating something new that has already drawn eight million downloads. It’s called the GE Sound Pack, a collection of factory sounds recorded at GE’s own industrial facilities, intended for use by musicians⁴. In effect, with projects like this the company is using the web in new ways to enhance its online presence and reputation.

GE’s panel also participated in the company’s remembrance of the 45th anniversary of the first moon landing. Back then, the company made the silicon rubber for the Apollo 11 astronauts’ boots. To commemorate in 2014, the panel convinced GE to create and market a limited edition line of “Moon Boot” sneakers online. They sold out in seven minutes. (For more details but, unfortunately, no more chances to get a pair of these way cool sneakers, see an article with photos of them entitled GE Modernizes Moon Boots and Sells Them as Sneakers, by Belinda Lanks, posted on Bloomberg.com on July 16, 2014.)

2.  Target Corporation

On Target’s digital advisory council, Ajay Agarwal, who is the Managing Director of Bain Capital Ventures in Palo Alto, California, is one of its four members. He was told by the company that there were “no sacred cows”. Among the council’s recommendations were to increase Target’s staff of data scientists faster than originally planned, and to deploy new forms of in-store and online product displays.

Another council member, Sam Yagan, the CEO of Match.com, viewed a “showcase” Target store and was concerned that it looked just like other locations. He had instead expected advanced and personalized features such as “smart” shopping carts linked to shoppers’ mobile phones that would serve to make shopping more individualized. Casey Carl, the chief strategy and innovation officer at Target, agreed with his assessment.

3.  Medtronic PLC

This medical device manufacturer’s products include insulin pumps for people with diabetes⁵. They have been working with their digital advisory board, founded in 2011, to establish a “rapport” on social media with this community. One of the board’s members, Kay Madati, who was previously an executive at Facebook, recommended a more streamlined approach using a Facebook page. The goal was to build patient loyalty. Today, this FB page (clickable here) has more than 230,000 followers. Another initiative was launched to expand Medtronic’s public perception beyond being a medical device manufacturer.

This digital advisory board was suspended following the company’s acquisition and re-incorporation in Ireland. Nonetheless, an executive expects the advisory board to be revived within six months.

My questions are as follows:

  • Would it be advisable for a member of a digital advisory panel to also sit on another company’s panel, provided that the other company is not a competitor? Would the individual and both corporations benefit from the possible cross-pollination of ideas from different markets?
  • What guidelines should be established for choosing members of such panels in terms of their qualifications and then vetting them for any possible business or legal conflicts?
  • What forms of ethical rules and guidelines should be imposed upon panel members, and who should draft, approve, and then implement them?
  • What other industries, marketplaces, government agencies, schools and public movements might likewise benefit from their own digital advisory panels? Would established tech companies and/or startups likewise find benefits from them?
  • Might finding and recruiting members for a digital advisory panel be a new market segment for executive search firms?
  • What new entrepreneurial opportunities might emerge when and if digital advisory panels continue to grow in acceptance and popularity?

 


1.   All of which are covered in dozens of Subway Fold posts in their respective categories here, here and here.

2.  There are six recent Subway Fold posts in the category of Virtual and Augmented Reality.

3.  There are 21 recent Subway Fold posts in the category of Visualization.

4.   When I first read this, it made me think of Factory by Bruce Springsteen on his brilliant Darkness on the Edge of Town album.

5.   X-ref to the October 3, 2014 Subway Fold post entitled New Startups, Hacks and Conferences Focused Upon Health Data and Analytics concerning Project Night Scout involving a group of engineers working independently to provide additional mobile technology integration and support for people using insulin pumps.

GDELT 2.0 Launches Bringing Real-Time News Translation in 65 Languages


Image by Library and Archives Canada

I only speak two languages: English and New York. Some visitors to NYC, especially those for the first time, often feel like they are hearing some otherworldly dialect of English being spoken here.

I am always amazed and a bit envious when people are genuinely fluent in more than one language. I have friends and colleagues who can converse, write and even claim to think in multiple languages. Two of them immediately come to mind: one can speak five languages and the other six. How do they do it?

Thus, seeing an article posted on Gigaom.com entitled A Massive Database Now Translates News in 65 Languages in Real Time by Derrick Harris on Feb. 19, 2015 immediately got my attention. I will sum up, annotate and add some comments to this remarkable story.

The Global Database of Events, Language and Tone (GDELT) is an ongoing project that has amassed a database of 250 million “socioeconomic and geopolitical events” and supporting metadata from 1979 to the present. GDELT was conceived and built by Kalev Leetaru, and he continues to run it. The database resides in Google’s cloud service and provides free access and coding tools to query and analyze this massive quantity of data.

Just one representative example of GDELT’s many projects is an interactive map (available on GDELT’s home page) of conflicts and protests around the world. Support for this project is provided by the US Institute of Peace, an independent and nonpartisan American government institution.

Here is a deep and wide listing from GDELT’s blog that links directly to more than 300 of their other fascinating projects. Paging through and following even a sampling of these links will very likely help to spark your own imagination and creativity as to what can be done with this data and these tools.

On February 19, 2015 GDELT 2.0 was launched. In addition to a whole roster of new analytical tools, its most extraordinary new capability is real-time translation of news reports across 65 languages. The feeds of these reports are from non-Western and non-English sources. In effect, it is reporting from a different set of perspectives. The extensive details and parameters of this system are described in a February 19, 2015 blog post by Mr. Leetaru on GDELT’s website entitled GDELT Translingual: Translating the Planet.

Here is an accompanying blog post on the same day announcing and detailing many of the new tools and features entitled GDELT 2.0: Our Global World in Realtime. Among these is a capability called “Realtime Measurement of 2,300 Emotions and Themes”, composed of “24 emotional measurement packages that together assess more than 2,300 emotions and themes from every article in realtime”. This falls within the science of content analysis, which attempts to ascertain the deeper meanings and perspectives within a whole range of multimedia types and large data sets.
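To give a feel for what working with event-level tone data can look like, here is a minimal Python sketch. The three-column layout below is purely illustrative: real GDELT 2.0 rows carry 60-plus tab-delimited fields documented in the project's codebook, and the dates, actors and tone values here are invented.

```python
import csv
import io

# A toy, *simplified* stand-in for GDELT-style tab-delimited event rows:
# date, actor name, and an "average tone" score for the coverage.
sample = (
    "20150219\tGERMANY\t-2.5\n"
    "20150219\tFRANCE\t1.8\n"
    "20150220\tBRAZIL\t-4.1\n"
)

def negative_tone_events(tsv_text, threshold=0.0):
    """Return (date, actor) pairs for events whose average tone
    falls below the threshold (i.e., more negative coverage)."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [(row[0], row[1]) for row in reader if float(row[2]) < threshold]

print(negative_tone_events(sample))
# [('20150219', 'GERMANY'), ('20150220', 'BRAZIL')]
```

The same filter-by-field pattern scales up: the actual data sets are downloadable from the site, and the only real change is pointing the reader at those files and at the correct column positions from the codebook.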

I highly recommend checking out the Gigaom.com story. But I believe that is only the start if GDELT interests you. I further suggest clicking through and fully exploring their site to get a fuller sense of this project’s far-reaching vision and capabilities. Next, for the truly ambitious, the data sets and toolkits are all available for downloading right on the site. I say let the brainstorming for more new projects begin!

Back on December 2, 2014 in a Subway Fold post entitled Startup is Visualizing and Interpreting Massive Quantities of Daily Online News Content, we took a look at an exciting new startup called Quid that is doing similar-sounding deep mining and analysis of news. Taken together, GDELT and Quid represent a very fertile field for new endeavors as the sophistication of machine intelligence to parse, and the capacity to gather and store, these vast troves of data continue to advance. For both profit and non-profit organizations, I expect that the potential benefits of deep global news analysis, interpretation, translation, visualization and metrics will continue to draw increasing numbers of interested and ambitious media companies, entrepreneurs, academics and government agencies.

 

 

Visualization, Interpretation and Inspiration from Mapping Twitter Networks


Image by Marc Smith

[This post was originally uploaded on September 26, 2014. It has been updated below with new information on February 5, 2015.]

Have you ever wondered what a visual map of your Twitter network might look like? The realization of such Twitter topography was covered in a terrific post on September 24, 2014 on socialmediatoday.com entitled How to Create a Visual Map of Your Twitter Network by Mary Ellen Egan.

To briefly sum up, at the recent Social Shake-Up Conference in Atlanta sponsored by SocialMediaToday, the Social Research Foundation created and presented such a map. They generated it from 513 Twitter users who participated for four days in the hashtag #socialshakeup. The platform used is called NodeXL. The resulting graphic, as shown in the article, is extraordinary. Please pay particular attention to how the “influencers” in this network are identified and to their characteristics. I strongly urge you to click through to read this article and see this display.

For an additional deep dive and comprehensive study on Twitter network mapping mechanics, analyses and policy implications accompanied by numerous examples of how Twitter networks form, grow, transform and behave, I also very highly recommend a report posted on February 20, 2014, entitled Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters by Marc A. Smith, Lee Rainie, Ben Shneiderman and Itai Himelboim for the Pew Foundation Internet Project.

I believe this article and report will quite likely spark your imagination. I think it is safe to assume that many users would be intrigued by this capability and, moreover, would devise new and innovative ways to leverage the data to better understand, grow and plot strategy to enhance their Twitter networks. Some questions I propose for such an analysis while inspecting a Twitter map include:

  • Am I reaching my target audience? Is this map reliable as a sole indicator or should others be used?
  • Who are the key influencers in my network? Once identified, can it be determined why they are influencers?
  • Does my growth strategy depend on promoting retweets, growing the population of followers, getting mentioned in relevant publications and websites, or other possible approaches?
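The influencer question above can be roughed out even without a full platform like NodeXL. As a minimal, hypothetical sketch in Python (the accounts and edges are invented), one can rank accounts by in-degree, i.e., how many inbound retweets or mentions each receives:

```python
from collections import Counter

# Hypothetical edge list: (source, target) means `source` retweeted or
# mentioned `target`. In practice these edges would come from the Twitter
# API or an export such as a NodeXL edge worksheet.
edges = [
    ("ann", "dana"), ("bob", "dana"), ("carl", "dana"),
    ("dana", "ann"), ("bob", "carl"),
]

# In-degree (inbound links) is the simplest influence proxy. Mapping tools
# also offer betweenness and eigenvector centrality, which can surface
# "bridge" accounts that raw in-degree misses.
in_degree = Counter(target for _, target in edges)
print(in_degree.most_common(1))  # [('dana', 3)]: the most-mentioned account
```

A count like this answers "who are the key influencers?" only in the narrowest sense; the richer maps discussed in the article and the Pew report layer clustering and visual position on top of such centrality scores.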

What I would really like to see emerge is a 3-dimensional form of visual map that fully integrates multiple maps of an individual’s, group’s or company’s online presence to simultaneously include their Twitter, Facebook, LinkedIn¹, Instagram and other social networks. Maybe a platform like the Hyve-3D visualization system² could be used to enable a more broadly extensible and scalable 3D view. Perhaps this multi-dimensional virtual construct could produce entirely new planning and insights for optimizing one’s presence, marketing and influence in social media.

If so, would new trends and influencers not previously seen then be identified? Could tools be developed in this system whereby users would test the strengths and weaknesses of certain cross-platform links and relationships? Would certain industries such as news networks³ be able to spot events and trends much sooner? Are there any potentially new opportunities here for entrepreneurs?

February 5, 2015 Update:

A very instructive and illuminating example of the power of mapping a specialized Twitter network has just been posted by Ryan Whelan, a law and doctoral student at Northwestern University. It is composed of US law school professors who are now actively Tweeting away. He posted his methodology, an interactive graphic of this network, and one supporting graph plus four data tables on his blog in a February 3, 2015 post entitled The Law Prof Twitter Network 2.0. I highly recommend clicking through and reading this in its entirety. Try clicking on the graphic to activate a set of tools to explore and query this network map. As well, the tables illustrate the relative sensitivities of the data and their impact on the graphic when particular members of the network or the origins and groupings of the followers are examined.

I think you will find it inspiring to consider the situations in which such a network map might be helpful to you in work, school, special interest groups, and many other potential applications. Mr. Whelan presents plenty of information to get you started off in the right direction.

I also found the look and feel of the network map to be very similar to the network mapping tool that was previously available on LinkedIn and discussed in the August 14, 2014 Subway Fold post entitled 2014 LinkedIn Usage Trends and Additional Data Questions.

My questions are as follows:

  • What effects, if any, is this network and its structure having upon improving the legal education system? That is, are these professors, by being active on Twitter under their own handles and as followers of each other within this network, benefiting their own work and/or law students’ classroom and learning experiences?
  • Are the characteristics of this network of legal academics any different from, let’s say, a Twitter network of medical school professors or high school teachers?
  • Would more of a meta-study of networks within the legal profession produce results that would be helpful to lawyers and their clients? For example, what would Twitter maps of corporate lawyers, litigators and public interest attorneys show that might be helpful and to whom?

___________________________
1.  See the April 10, 2014 Subway Fold post entitled Visualization Tool for LinkedIn Personal Networks.

2.  See the August 28, 2014 Subway Fold post entitled Hyve-3D: A New 3D Immersive and Collaborative Design System.

3.   See also a most interesting article published in the September 23, 2014 edition of The New York Times entitled Tool Called Dataminr Hunts for News in the Din of Twitter by Leslie Kaufman about such a system that is scanning and interpolating possible news emerging from the Twitter-sphere.

Timely Resources for Studying and Producing Infographics

Image by Nicho Design

[This post was originally uploaded on October 21, 2014. It has been updated below with new information on January 30, 2015.]

Infographics seem to be appearing with steadily increasing frequency in many online and print publications. Collectively they are an expressive informational phenomenon where art and data science intersect to produce often strikingly original and informative results. In two previous Subway Fold posts concerning new visual perspectives and covering user data about LinkedIn, I highlighted two examples that struck me as being particularly effective in transforming complex data sets into clear and convincing visual displays.

Recently, I have come across the following resources about infographics that I believe are worth exploring:

  • A new book entitled Infographics Designers’ Sketchbooks by authors Steven Heller and Rick Landers is being published today, October 14, 2014, by Princeton Architectural Press. An advance review, including quotes from the authors, was posted on October 7, 2014 entitled A Behind-the-Scenes Look at How Infographics Are Made on Wired.com by Liz Stinson. To quickly recap this article, the book compiles a multitude of resources, sketches, how-to’s, best-practices guidelines, and insights from more than 200 designers of infographics. Based upon the writer’s description, there is much value and motivation to be had within these pages to learn and put to good use the aesthetic and explanatory powers of infographics.
  • DailyInfographic.com provides thousands of exceptional examples of infographics, true to its name updated daily, that are valuable both for the information they present and, moreover, for the inspiration they provide to consider designing and preparing your own for your online and print efforts. This page on Wikipedia provides an excellent exploration of the evolution and effectiveness of infographics.
  • Edward Tufte is considered to be one of the foremost experts in the visual presentation of data and information and I highly recommend checking out his link rich biography and bibliography page on Wikipedia and more of his work and other offerings on his own site edwardtufte.com.
  • October 15, 2014 UPDATE: Yesterday, soon after I added this post, I read about the publication of another compilation of the year’s best in this field in the US entitled The Best American Infographics 2014 by Gareth Cook (Houghton Mifflin Harcourt). This appeared in a post on ScientificAmerican.com entitled SA Recognized for Great Infographics by Jen Christiansen. This collection includes two outstanding infographics that have recently appeared in Scientific American about the locations of wild bees and the increasing levels of caffeine in various drinks, both of which are reproduced on this page. (One location where I would not like to, well, bee, is where these two topics intersect to produce over-caffeinated wild bees. Run!)

Please post any comments here to share examples of infographics that have impressed you or impacted your understanding of particular concepts and information.

January 30, 2015 Update:

Concisely getting to the heart of succeeding with this web-ubiquitous form of visual display of information is a very practical new column by Sarah Quinn entitled What Makes a Great Infographic?, posted on January 28, 2015, on SocialMediaToday.com. I highly recommend clicking through and reading it in full for all of its valuable details. I believe it is a timely addition to anyone’s infographic toolkit.

I will briefly sum up, annotate and add some comments to Ms. Quinn’s five elements for getting an infographic to potential greatness. (The mnemonic I have come up with to help commit these points to memory, using their first letters, is: Try to make your effort a GooD ACT):

1.  A Targeted Audience:  Research your audience well so that your infographic becomes a must-share for them. As part of this, focus upon what problem they may have that you can solve for them and use the infographic to provide solutions to it. Further, establish a persona to define the ideal audience you intend to reach and then address them. (Personas are often the cornerstones of marketing and content strategy campaigns.)

2.  A Compelling Theme: Your infographic depicts “your story” and must strongly relate to your brand’s identity. The representative sample used in this article, entitled “Food Safety at the Grill”, does an effective job of guiding and educating the reader while simultaneously representing the infographic author’s brand.

3.  Actionable Data: This should be thoroughly researched, with the numbers threaded throughout the graphical display. In effect, the data should support the solution and/or brand you are presenting.

4.  Awesome Graphics: Quite simply, it must be aesthetically pleasing while presenting the message. Indeed, the graphics’ quality will form an effective narrative. If you are outsourcing this, Ms. Quinn provides seven helpful guidelines for instructing the graphics contractor.

5.  Powerful Copy:  This is just as important as the display and should include “powerful headlines” in presenting your message. As with the targeted audience in 1. above, so too should the text be compelling enough that readers will be motivated to share the infographic with others.

Animator Getting Closer to Creating a Convincing Virtual Human Face

“DSC_19335”, Image by Philippe Put

[This post was originally uploaded on October 21, 2014. It has been updated below with new information on January 11, 2015.]

The one achievement that still eludes movie and gaming special effects artists and programmers is the creation of a human face so convincing that it could fool viewers into believing it is a real person. Vast and untold amounts of time, money and other resources have been expended in this quest and these artists and programmers have gotten close over the years. However, the human eye is so precise and discriminating that audiences can always accurately detect a virtual visage. In, well, effect, the imagery looks almost, but not quite “real”.

That is, maybe until now. According to a fascinating article posted on studio360.org on October 14, 2014, entitled Have We Finally Conquered the Uncanny Valley? by Eric Molinsky, an animator named Chris Jones may have just achieved this, or else come extremely close to it. He has been uploading his recent animation efforts to his blog, and two of them, a human face and a human hand, are also embedded in the article. These are videos, not static images. I found the results to be extraordinary. I highly recommend clicking through to have a look at Jones’s efforts.

Let’s assume for a moment that Jones fully succeeds in his work and such virtual humans start to populate movies, TV shows, videos and games. Let’s further assume that the tools for doing this become widely accessible to computer-generated imagery (CGI) artists. Then what? Here are my questions:

  • How, if at all, will the careers of today’s real-life working actors be impacted?
  • Will commercial audiences accept such virtual actors or will this be perceived as just being too creepy?
  • Will living actors or the estates of deceased actors be able to license their likenesses to be used in new video and film creative works?
  • Assuming that such licensing becomes a reality (even though the graphics remain unreal), what terms will be negotiable for making an actor appear younger or older? What if the actor/licensor objects to the final manner in which his or her image is used in the story?
  • Will new forms of agents and agencies be needed to handle the negotiations and contracts? Will future talent agents, both literally and figuratively, become software agents?
  • Will new virtual and branded “stars” emerge in terms of the quality, usefulness and public acceptance of the imagery? That is, stars in terms of the virtual creations themselves and stars in terms of the CGI artists who emerge as the best in this specialty?
  • Will this development permit new forms of storytelling and gaming that are not possible with the current state of CGI?

I suppose then that this story also gives a whole new meaning to user inter-face development.

January 11, 2015 Update:

Breaching the “uncanny valley” described in the original post above (the term for the significant difficulty of creating a fully convincing computer animation of a human face) still remains quite elusive. According to a most interesting column in yesterday’s (January 10, 2015) edition of The Wall Street Journal entitled Why Digital-Movie Effects Still Can’t Do a Human Face by Alison Gopnik, this holy grail of CGI still presents considerable roadblocks. I highly recommend clicking through and reading this piece in full. I will try to sum up, annotate and comment on it as a counterpoint to the post above.

Using a very clever analogy, the author compares the still-unrealized feat of creating a convincing CGI human to the Turing Test, originally posited by the brilliant computer scientist Alan Turing (also the subject of the very well-received and possibly Oscar-contending current bio-pic The Imitation Game). This is a test for the achievement of actual machine “intelligence” whereby such a system cannot be detected in its interactions with an actual human being. That is, the human believes that he or she is communicating with another human when, in fact, the other party is a computer. Computer science is in deep pursuit of passing the Turing Test, but it has thus far not been accomplished.

As between passing the Turing Test and crossing the Uncanny Valley, Gopnik writes that the latter is “much, much harder for a computer to pass”.

While CGI is woven into so much of today’s visual media, the human face remains stalled in the Uncanny Valley for the time being. This is largely because of the incredible sensitivity of human vision and the wide range of subtleties in our facial expressions that we use to communicate our emotions to each other.

Gopnik further describes the effort on the fascinating ongoing project called Baby X by Mark Sagar, a designer who worked on the faces in Avatar and who is now a professor at the University of Auckland, as being “one of the most convincingly complete computer-generated faces”. I highly recommend checking out this simulation and comparing it to that of Chris Jones, described and linked to above. Which do you think is more realistic and lifelike in its appearance and movements? Does it considerably improve the realness of Sagar’s simulation that his project makes additional use of the latest relevant neurological research, when compared to the efforts of a highly skilled CGI artist alone?