Advertisers Looking for New Opportunities in Virtual and Augmented Spaces

"P1030522.JPG", Image by Xebede

“P1030522.JPG”, Image by Xebede

Are virtual reality (VR) and augmented reality (AR) technologies about to start putting up “Place Your Ad Here” signs in their spaces?

Today’s advertising firms and their clients are constantly searching for new venues and the latest technologies with which to compete in ever more specialized global marketplaces. With so many current and emerging alternatives, investing their resources to reach their optimal audiences and targeted demographics requires highly nimble planning and careful anticipation of risks. Effective strategies for both of these factors were recently explored in-depth in the March 22, 2015 Subway Fold post entitled What’s Succeeding Now in Multi-Level Digital Strategies for Companies.

Just such a new venue to add to the media buying mix might soon become virtual worlds. With VR in the early stages of going more mainstream in the news media (see the May 5, 2015 Subway Fold post entitled The New York Times Introduces Virtual Reality Tech into Their Reporting Operations), and even more so in film (see the March 26, 2015 Subway Fold post entitled Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals), it seems likely that VR will turn out to be the next frontier for advertising.

This new marketplace will also include augmented reality, involving the projection of virtual/holographic images within a field of view of the real world. Microsoft recently introduced a very sleek-looking headset for this called the HoloLens, which will be part of their release of Windows 10 expected sometime later this year.

A fascinating report on three new startups in this nascent field entitled Augmented Advertising, by Rachel Metz, appeared in the May/June 2015 edition of MIT Technology Review. (Online, the same article is entitled “Virtual Reality Advertisements Get in Your Face”.) I will sum up, annotate and pose a few additional questions to this. As well, I highly recommend clicking through on the links below to these new companies to fully explore all of the resources on their truly innovative and imaginative sites.

As VR and AR headsets enter the consumer marketplace later in 2015, manufactured by companies including Oculus, Sony, Microsoft (see the above links), Magic Leap and Samsung, consumers will soon be able to experience video games and movies formatted for these new platforms.

The first company in the article working in this space is called Mediaspike. They develop apps and tools for mobile VR. The demo that the writer Metz viewed with a VR headset placed her in a blimp flying over a city containing billboards for an amusement ride based on the successful movie franchise that began with Despicable Me. The company is developing product placement implementations within these environments using billboards, videos and other methods.

One of the billboards was showing a trailer for the next movie in this series called Minions. While Metz became a bit queasy during this experience (a still common concern for VR users), she nonetheless found it “a heck of a lot more interesting” than the current types of ads seen on websites and mobile devices.

The second new firm is called Airvertise. They are developing “virtual 3-D models that are integrated with real world locations”. It uses geographic data to create constructs where, as a virtual visitor, you can readily walk around in them. Their first platform will be smartphone apps followed by augmented reality viewers. At the SXSW Festival last March (please see the link again in the third paragraph above to the post about VR at SXSW), the company demoed an iPad app that, using the tablet’s motion sensors, produced and displayed a virtual drone “hovering above the air about 20 feet away” with a banner attached to it. As the user/viewer walks closer, the drone’s relative size and spatial orientation correspondingly adjust.

The third startup is called Blippar. Their AR-based app permits commercial content to be viewed on smartphones. Examples include seeing football players when the phone is held up to a can of Pepsi, and shades of nail polish from the cosmetics company Maybelline. The company is currently strategizing about how to create ads in this manner that will appropriately engage consumers but not put them off in any way.

My questions are as follows:

  • Will VR and AR advertising agencies and sponsors open up this field to user-generated ads and commercial content, an approach which has already been successful in a number of ad campaigns for food and cars? Perhaps by open-sourcing their development platforms, crowdsourcing the ads, and providing assistance with such efforts this new advertising space can gain some additional attention and traction.
  • What exactly is it about the VR and AR experience that will provide the most leverage to advertising agencies and their clients? Is it only limited to the novelty of it, which might well wear off after a while, or is there something unique about these technologies that will inform and entertain consumers about goods and services in ways neither previously conceived of nor achieved? Is a critical must-have app or viral ad campaign going to be needed for this to reach a tipping point?
  • Might countering technologies also appear to block VR and AR advertising? For example, Ad Block Plus is a very popular browser add-on that enables users to filter out today’s banner ads and pop-ups online. How might advertisers respond to such avoidance?
  • Just as advertisers have major presences on the leading social media services such as, among others, Facebook (which now owns Oculus), Twitter and Instagram, do VR and AR similarly lend themselves to being populated by advertisers on such a web-wide scale?

A Legal Thriller Meets Quantum Physics: A Book Review of “Superposition”

"Gyroscope", Image by Frank Boston

“Gyroscope”, Image by Frank Boston

Quantum physics is not a subject for the scientifically faint of heart. Those nutty particles and waves are behaving in some very weird ways down there at the nanoscale level. They probably think no one is looking so they can party all they want. However, plenty of students, professors, engineers, scientists and cryptographers have variously been watching them for many years. Even the old comedy troupe called The Firesign Theatre named one of their albums, albeit unintentionally, with the spot-on quantum descriptive title How Can You Be in Two Places at Once When You’re Not Anywhere at All.

Undeterred by the many brain-bending challenges of this highly specialized field of physics, those involved in it in any manner are keenly aware of the fundamental principle that merely observing any of this activity can change the quantum state of things. This puts a whole different, well, spin on those pesky electrons.

Taking these principles and integrating them into a wildly imaginative plot, writer David Walton, himself an engineer, has just spun this volatile mixture into a delightfully over-the-top new novel called Superposition (Pyr, 2015). He has craftily thrown sci-fi, mystery and trial drama ingredients into his literary blender and poured out a very cool smoothie of a story. Moreover, he has given his narrative and characters enough original twists and turns to make the producers of Lost envious.

The story gets off to a furious start, set sometime in the near future. A college professor named Jacob Kelly receives a visit at home one night from one of his colleagues named Brian Vanderhall. It is anything but a “Hi, how ya doing?” stopover: Brian is brandishing a gun and claiming to be pursued by an intelligent alien who has crossed over from the quantum universe. Brian then demonstrates several seemingly impossible physical actions, starting with a gyroscope, that offer proof of his seemingly delusional claims. Then he points the gun at Jacob’s wife, and that is when things get really bizarre in both the normal and quantum realms. Jacob allegedly kills Brian while protecting his wife and family, but a second and identical Brian is also soon found murdered at a nearby abandoned physics research lab at the university where Jacob and Brian work.

But this is all just getting started, because a second and identical Jacob likewise appears. One is arrested and put on trial for murdering Brian while the second one is free and, along with his surviving daughter (his wife and two other children were later found inexplicably dead), trying to solve this infuriating and thickening skein of puzzles.

Could Brian have been telling the truth about his breakthrough into the quantum universe, where he learned to control seemingly impossible phenomena and drew the murderous attention of a highly intelligent alien who inhabits that “place”? Are the two Jacobs and Brians the same “people” or are they different beings? Could Jacob’s family be dead in one universe only to have possibly survived in another and, if so, can he bring them back across the breach? Do the events in one universe somehow “entangle” (in quantum physics speak) with and affect events in the other?

While Einstein expressed his doubts about entanglement, famously calling it “spooky action at a distance” (see Footnote 4 in the link immediately above), Walton’s novel provides an abundance of spooky action way up close and personal. The novel is split along two timelines, one for the trial and the other as, well, the “other” Jacob, his daughter and his brother-in-law race off to track down the alien, the other members of their family, and Brian’s real murderer. Both tracks are highly compelling as the narrative pivots between the nearly hallucinatory events of the sci-fi mystery and the more firmly grounded murder trial. Walton very cleverly transports the reader back and forth between these extremely familiar and unfamiliar environments while carefully opening and resolving tricky plot points on both sides of this dimensional divide.

Besides deftly mashing up what would otherwise seem to be the unmashable, the author’s equally shrewd accomplishment is the valuable tutorial on quantum physics woven into his text. Speaking through his characters, he explains these often difficult to grasp concepts with enough clarity to draw his readers into the underlying science and, in turn, his plot lines built upon it. College physics professors could both enlighten and entertain their students by assigning this novel as reading. It would very likely put them in a rather, well, super position going into the final exam.

Sci-fi, mystery and legal thriller fans will find the book satisfying and worthwhile because its audacity and creativity succeed on all levels. Furthermore, it raises some very intriguing issues of first impression about evidence, procedure and professional ethics during the course of the trial. Jacob’s lawyer becomes fully aware of the reality-defying events surrounding his client and does his best to defend him in court. However, there is no standard defense strategy for a case where several of the victims and the accused may have been divided in two and one of the other suspects is from an alternative world.

How can a scientific foundation be presented to the court for phenomena that no one knows exists, much less can be tested in any way? How can an expert be qualified to testify under such bizarre circumstances? Does the attorney/client privilege attach to only the Jacob in jail who is being tried or also to the second but identical Jacob who is trying to help the defense attorney? Do both of Jacob’s iterations receive exactly the same constitutional rights and protections? How can the prosecution rebut any evidence that these splits and dual events have even occurred? The author has done his research well in constructing a story where these issues arise.

A famous thought experiment in the study of quantum physics is called Schrödinger’s Cat. It is used to explain what “superposition” actually means in quantum physics, and there is a reference to it in some dialog in the book. It is used to illustrate and interpret different possible quantum states, in this case whether the cat, under certain experimental circumstances involving a radioactive source and a flask of poison inside a box, is simultaneously alive and dead or else either alive or dead. Walton predicates the fates of Jacob and his family upon this concept and pulls it off with great finesse and drama. Any further explanation here would only give paws to letting that cat out of the box inside of this very impressive novel.


Superposition is not the first sci-fi novel to use quantum physics as part of a plot involving a murder mystery. This was done previously in a novel entitled Spin State (Spectra, 2004) by Chris Moriarty. That is a vastly different type of story set far in the future. I highly recommend it to any and all dedicated sci-fi fans. As well, it was just the first installment in an outstanding trilogy that later included Spin Control (Spectra, 2007) and Ghost Spin (Spectra, 2013).

Thanks, Dave, for 33 Years of Great Entertainment

IMAG0063

Tonight, May 20, 2015, will be David Letterman’s final appearance as the host of The Late Show. There is a terrific appreciation of his work and his retirement from the show on Page 1 of today’s edition of The New York Times entitled David Letterman, Prickly Innovator, Counts Down to His Exit, by John Koblin. I highly recommend a click-through to read this in full. As well, I hope that you will be watching tonight’s final show.

I have been a major fan of Letterman going back to his very early days on TV. He always managed to make me laugh out loud no matter how tired I might have been while watching him late at night.

I had an appointment this morning in midtown Manhattan and my route took me right past The Ed Sullivan Theater on Broadway between 53rd and 54th Streets, the home of The Late Show. Above and below are four photos I took on this historic day in TV entertainment. In the pictures, you can see some of the media that was starting to gather outside. (I am about to upload this post at 12:35 EST.)

Thanks, Dave, for all of that great entertainment you have brought into everyone’s homes each night for the past 33 years. The very best wishes on your retirement and whatever the future may hold for you.

IMAG0064

IMAG0062

IMAG0061

Recent Visualization Projects Involving US Law and The Supreme Court

"Copyright Sign in 3D", Image by Muses Touch

“Copyright Sign in 3D”, Image by Muses Touch

There have been many efforts over the past few decades to use visualization methods and technologies to create graphical representations of the law. These have been undertaken by innovative lawyers in a diversity of settings including public and private practice, and in legal academia.

I wrote an article about this topic years ago entitled “Graphics and Visualization: Drawing on All Your Resources”, in the August 25, 1992* edition of the New York Law Journal. (No link is currently available.) Not to paint with too broad a brush here, but things have changed dramatically since then in terms of how and why to create compelling legal visualizations.

Two very interesting projects have recently gotten significant notice online for their ingenuity and the deeper levels of understanding they have facilitated.

First are the legal visualizations of Harry Surden. He is a professor at the University of Colorado School of Law. He teaches, researches and writes about intellectual property law, legal informatics, legal automation and information privacy.

I had the opportunity to hear the professor speak at the Reinvent Law NYC program held in New York in February 2014. This was a memorable one-day event with about 40 speakers who captivated the audience with their presentations about the multitude of ways that technology is dramatically changing the contemporary marketplace for legal services.

On Professor Surden’s blog, he has recently posted the following three data visualization projects he built himself:

  • US Code Explorer 1 consisting of a nested tree structure for Title 35 of the US Code covering patents. Clicking on each level, starting with Part I and continuing through Part V, will, in turn, open up the Chapters, Sections and Subsections. This is an immediately accessible interactive means to unfold Title 35’s structure.
  • US Code Explorer 2 Force Directed Graph presents a different form of visualization for Title 17 of the US Code covering Copyrights. It operates as a series of clickable hub-and-spoke formations of the Code’s text whereby clicking on any of the hubs will lead you to the many different sections of Title 17.
  • US Constitution Explorer is also presented in a nested tree structure of the Constitution. Clicking on any of the Articles will open the Sections and then the actual text.
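The nested tree idea behind these explorers can be sketched in a few lines of code. The statute fragment below is invented for illustration (it is not Professor Surden’s actual data or implementation); the point is simply how a recursive walk turns a hierarchy of Parts, Chapters and Sections into an expandable outline:

```python
# Toy hierarchy loosely patterned on Title 35; the names and sections
# here are illustrative placeholders, not the actual statutory text.
title_35 = {
    "Title 35 - Patents": {
        "Part I - USPTO": {
            "Chapter 1": ["Sec. 1", "Sec. 2"],
        },
        "Part II - Patentability": {
            "Chapter 10": ["Sec. 101", "Sec. 102", "Sec. 103"],
        },
    }
}

def render(node, depth=0):
    """Recursively flatten the tree into indented lines, one per level."""
    lines = []
    if isinstance(node, dict):
        for name, child in node.items():
            lines.append("  " * depth + name)
            lines.extend(render(child, depth + 1))
    else:  # a list of leaf sections
        for leaf in node:
            lines.append("  " * depth + leaf)
    return lines

if __name__ == "__main__":
    print("\n".join(render(title_35)))
```

A real explorer would render each level as a clickable node in an interactive web graphic rather than as indented text, but the same traversal of the statute’s hierarchy drives either display.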

Professor Surden’s visualizations are instantly and intuitively navigable as soon as you view them. As a result, you will immediately be drawn into exploring them. For legal professionals and the public alike, he impressively presents these displays in a clear manner that belies the complexities of the underlying laws. I highly recommend clicking through to check out and navigate all of these imaginative visualizations. Furthermore, I hope his work inspires others to experiment with additional forms of visualization of the other federal, state and local codes, laws and regulations.

For a related visualization of the networks of law professors on Twitter, please see the February 5, 2015 Subway Fold post entitled Visualization, Interpretation and Inspiration from Mapping Twitter Networks.

The second new study containing numerous graphics and charts is entitled A Quantitative Analysis of the Writing Style of the U.S. Supreme Court, by Keith Carlson, Michael A. Livermore, and Daniel Rockmore, dated March 11, 2015. It will be published later in the Washington University Law Review 93:6 (2016). The story was reported in the May 4, 2015 edition of The New York Times in an article entitled Justices’ Opinions Grow in Size, Accessibility and Testiness, Study Finds, by Adam Liptak, which focused upon the three main conclusions stated in the title. I highly recommend click-throughs to read both.

The full-text of the Law Review article contains the very engaging details and methodologies employed. Moreover, it demonstrates the incredible amount of analytical work the authors spent to arrive at their findings. Just as one example, please have a look at the network visualization on Page 29 entitled Figure 5. LANS Graph of Stylistic Similarity Between Justices. It truly brings the authors’ efforts to life. I believe this article is a very instructive, well, case where the graphics and text skillfully elevate each other’s effectiveness.


* To get online then, you needed something called a Lynx browser that only displayed text after you connected with a very zippy 14.4K dial-up modem. What fun it was back then!

IBM’s Watson is Now Data Mining TED Talks to Extract New Forms of Knowledge

"sydneytocairns_385", Image by Daniel Dimarco

“sydneytocairns_385”, Image by Daniel Dimarco

Who really benefited from the California Gold Rush of 1849? Was it the miners, only some of whom were successful, or the merchants who sold them their equipment? Historians have differed as to the relative degree, but they largely believe it was the merchants.

Today, it seems we have somewhat of a modern analog to this in our very digital world: The gold rush of 2015 is populated by data miners and IBM is providing them with access to its innovative Watson technology in order for these contemporary prospectors to discover new forms of knowledge.

So then, what happens when Watson is deployed to sift through the thousands of incredibly original and inspiring videos of online TED Talks? Can the results be such that TED can really talk and, when processed by Watson, yield genuine knowledge with meaning and context?

Last week, the extraordinary results of this were on display at the four-day World of Watson exposition here in New York. A fascinating report on it entitled How IBM Watson Can Mine Knowledge from TED Talks, by Jeffrey Coveyduc, Director, IBM Watson, and Emily McManus, Editor, TED.com, was posted on the TED Blog on May 5, 2015. This was the same day that the newfangled Watson + TED system was introduced at the event. The story also includes a captivating video of a prior 2014 TED Talk by Dario Gil of IBM entitled Cognitive Systems and the Future of Expertise that came to play a critical role in launching this undertaking.

Let’s have a look and see what we can learn from the initial results. I will sum up and annotate this report, and then ask a few additional questions.

One of the key objectives of this new system is to enable users to query it in natural language. An example given in the article is “Will new innovations give me a longer life?”. Thus, users can ask questions about ideas expressed among the full database of TED talks and, for the results, view video excerpts where such ideas have been explored. Watson’s results are further accompanied by a “timeline” of related concepts contained in a particular video clip permitting users to “tunnel sideways” if they wish and explore other topics that are “contextually related”.
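As a rough illustration of what such a query might involve under the hood, here is a toy sketch. The transcript segments, timestamps and scoring method below are all invented for this example, and Watson’s actual language processing is far more sophisticated than this bag-of-words overlap:

```python
def tokenize(text: str) -> set:
    """Crude word tokenizer: lowercase, strip simple punctuation."""
    return set(text.lower().replace("?", "").replace(",", "").split())

# Hypothetical transcript segments: (talk title, start time in seconds, text).
SEGMENTS = [
    ("The future of medicine", 95,
     "new innovations in gene therapy may extend human life"),
    ("Urban design", 210,
     "cities can be redesigned around walking and transit"),
    ("Aging well", 40,
     "longer life spans will change how we plan our careers"),
]

def query(question: str, top_n: int = 2):
    """Rank segments by word overlap with the question and return
    (title, start_time, score) tuples -- a crude timeline of clips."""
    q = tokenize(question)
    scored = []
    for title, start, text in SEGMENTS:
        score = len(q & tokenize(text))
        if score:
            scored.append((title, start, score))
    scored.sort(key=lambda t: -t[2])
    return scored[:top_n]
```

Calling `query("Will new innovations give me a longer life?")` returns the best-matching clips with their start times, a crude analog of the timeline of related concepts that lets users “tunnel sideways” into a video.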

The rest of the article is a dialog between the project’s leaders, Jeffrey Coveyduc from IBM and TED.com editor Emily McManus, that took place at World of Watson. They discussed how this new idea was transformed into a “prototype” of a fresh new means to extract “insights” from within “unstructured video”.

Ms. McManus began by recounting how she had attended Mr. Gil’s TED Talk about cognitive computing. Her admiration of his presentation led her to wonder whether Watson could be applied to TED Talks’ full content whereby users would be able to pose their own questions to it in natural language. She asked Mr. Gil if this might be possible.

Mr. Coveyduc said that Mr. Gil then approached him to discuss the proposed project. They agreed that it was not just the content per se, but rather, that TED’s mission of spreading ideas was so compelling. Because one of Watson’s key objectives is to “extract knowledge” that’s meaningful to the user, it thus appeared to be “a great match”.

Ms. McManus mentioned that TED Talks maintains an application programming interface (API) to assist developers in accessing their nearly 2,000 videos and transcripts. She agreed to provide access to TED’s voluminous content to IBM. The company assembled its multidisciplinary project team in about eight weeks.

They began with no preconceptions as to where their efforts would lead. Mr. Coveyduc said they “needed the freedom to be creative”. They drew from a wide range of Watson’s existing technical services. In early iterations of their work they found that “ideas began to group themselves”. In turn, this led them to “new insights” within TED’s vast content base.

Ms. McManus recently received a call from Mr. Gil asking her to stop by his office in New York. He demoed the new system, which had completely indexed the TED content. Moreover, he showed how it could display, according to her, “a universe of concepts extracted” from the content’s core. Next, using the all-important natural language capabilities to pose questions, they demonstrated how the results, in the form of numerous short clips taken altogether, compiled “a nuanced and complex answer to a big question”, as she described it.

Mr. Coveyduc believes this new system simplifies how users can inspect and inquire about “diverse expertise and viewpoints” expressed in video. He cited other potential areas of exploration such as broadcast journalism and online courses (also known as MOOCs*). Furthermore, the larger concept underlying this project is that Watson can distill the major “ideas and concepts” of each TED Talk and thus give users the knowledge they are seeking.

Going beyond Watson + TED’s accomplishments, he believes that video search remains quite challenging but this project demonstrates it can indeed be done. As a result, he thinks that mining such deep and wide knowledge within massive video libraries may turn into “a shared source of creativity and innovation”.

My questions are as follows:

  • What if Watson was similarly applied to the vast troves of video classes used by professionals to maintain their ongoing license certifications in, among others, law, medicine and accounting? Would new forms of potentially applicable and actionable knowledge emerge that would benefit these professionals as well as the consumers of their services? Rather than restricting Watson to processing the video classes of each profession separately, what might be the results of instead processing them together in various combinations and permutations?
  • What if Watson was configured to process the video repositories of today’s popular MOOC providers such as Coursera or edX? The same as well for universities around the world that are putting their classes online. Their missions are more or less the same in enabling remote learning across the web in a multitude of subjects. The results could possibly hold new revelations about subjects that no one can presently discern.

Two other recent Subway Fold posts that can provide additional information, resources and questions that I suggest checking out include Artificial Intelligence Apps for Business are Approaching a Tipping Point posted on March 31, 2015, and Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World posted on December 27, 2014.


*  See the September 18, 2014 Subway Fold post entitled A Real Class Act: Massive Open Online Courses (MOOCs) are Changing the Learning Process for the full details and some supporting links.

Book Review of “The Age of Cryptocurrency”

"bitcoinlogo", Image by Dennis Sylvester Hurd

“bitcoinlogo”, Image by Dennis Sylvester Hurd

Many college students have had the experience of needing to take a required course in finance or economics that they were reluctant to register for because of the complexity of the subject matter. Nonetheless, they were soon pleasantly surprised to find that a talented and motivated instructor could make the subject spring to life. Indeed, a skillful prof can light up just about any topic. Recently, I had this type of experience again.

I had developed my own, well, academic interest in bitcoin during the past few years, following various events and tech developments in the news. Not overly curious, but just keeping an eye on developments. I felt no particular need to go out and learn too much more about it.

This changed for me on the very cold evening of February 9, 2015, when I attended a terrific presentation at the New York office of the law firm of Latham & Watkins entitled Bitcoin – No Boundaries: Innovation in Bitcoin. The event was very professionally and graciously organized by the financial firm Hedgeable. The agenda consisted of six brief demos by bitcoin startups followed by a panel discussion of experts. The link above contains full videos of the entire program, including links to the startups’ websites, all of which I highly recommend viewing.

My completely unscientific poll of the attendees with whom I rode the elevator on the way out was unanimous: they learned a great deal from all of the speakers and presenters and, in turn, were motivated to go out and learn more about the bitcoin movement. Their enthusiasm reminded me of a favorite quote of mine attributed to the Nobel Prize-winning physicist I.I. Rabi. I recalled it from years ago when I read Silicon Dreams: Information, Man and Machine by Robert Lucky (St. Martin’s Press, 1989). To paraphrase it: When asked about the inspiration for his great scientific achievements, Rabi recalled that each day when he returned home from school, rather than asking him whether he knew the answers to the teacher’s questions, his mother would instead ask him whether he had asked any good questions.

New and intriguing questions, developments and events about bitcoin are now regularly covered by the media.

My own follow-up effort to further satisfy my heightened curiosity after the February 9th event was to soon thereafter acquire a copy of The Age of Cryptocurrency (St. Martin’s Press, 2015) by Paul Vigna and Michael J. Casey. Mr. Casey was one of the four panelists that night, as can be seen in the video. This book has done much to help me to better grok bitcoin, with its deep and wide explanations about virtual currencies’ fundamentals (including, among many others, “blockchain”, “digital wallet”, and “hashrates”), markets and histories.

More specifically, although bitcoin has only been around for six years, grasping the entirety and significance of its remarkable origin and originator (Satoshi Nakamoto), operations, concepts, implications, leaders, investors, benefits, technical flaws, security, encryption protocols, entrepreneurs, communities, supporters, critics, regulations and politics seems to involve more angles than a geometry textbook. Nonetheless, despite the daunting challenge of carefully introducing and then logically mapping out all of this, the authors have constructed, in a very internally consistent manner, a finely detailed travel guide exploring this new world.

Furthermore, The Age of Cryptocurrency can benefit the different levels of understanding about bitcoin being sought by a wide spectrum of readers. From those with just a passing who/what/where/why curiosity, scaling up to people seriously involving themselves in this emerging realm, there is plenty in this text to ponder and consume.

One of bitcoin’s technological operations involves setting up powerful computers (dubbed “mining rigs”) dedicated to solving difficult cryptographic puzzles that will, in turn, release specific quantities of bitcoin on a regular basis. This process is called “mining”. So, too, have authors Vigna and Casey comprehensively mined their subject matter to produce an accessible, informative and spirited book.
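To make the mining metaphor concrete, here is a minimal proof-of-work sketch. It is a toy, not bitcoin’s actual protocol (real mining hashes an 80-byte block header against a 256-bit difficulty target), but it shows the brute-force trial-and-error search that mining rigs perform:

```python
import hashlib

def mine(block_data: str, difficulty: int):
    """Search for a nonce such that SHA-256(block_data + nonce) begins
    with `difficulty` leading zero hex digits -- a toy stand-in for
    bitcoin's difficulty target. Returns (nonce, digest)."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

if __name__ == "__main__":
    nonce, digest = mine("toy block header", 3)
    print(nonce, digest)
```

Each additional required leading zero multiplies the expected number of attempts by 16, which is why dedicated hardware and enormous hashrates matter so much to miners.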

The New York Times Introduces Virtual Reality Tech into Their Reporting Operations

"Mobile World Congress 2015", Image by Jobopa

“Mobile World Congress 2015”, Image by Jobopa

As incredibly vast as New York City is, it has always been a great place to walk around. Its multitude of wonderfully diverse neighborhoods, streets, buildings, parks, shops and endless array of other sites can always be more fully appreciated going on foot here and there in, as we NYC natives like to call it, “The City”.

The April 26, 2015 edition of The New York Times Magazine was devoted to this tradition. The lead-off piece by Steve Duenes was entitled How to Walk in New York. This was followed by several other pieces and then reports on 15 walks around specific neighborhoods. (Clicking on the Magazine’s link above and then scrolling down to the second and third pages will produce links to nearly all of these articles.) I was thrilled by reading this because I am such an avid walker myself.

The very next day, on April 27, 2015, Wired.com carried a fascinating story about how one of the issue’s accompanying and rather astonishing supporting graphics was actually created, in a report by Angela Watercutter entitled How the NY Times is Sparking the VR Journalism Revolution. But even that’s not the half of it: the NYTimes has made available for downloading a full virtual reality file of the construction and deconstruction of the graphic. The Wired.com post contains the link as well as a truly mind-boggling high-speed YouTube video of the graphic’s rapid appearance and disappearance and a screen capture from the VR file itself. (Is “screen capture” really accurate to describe it, or is it something more like a “VR frame”?) This could take news reporting into an entirely new dimension where viewers literally go inside of a story.

I will sum up, annotate and pose a few questions about this story. (For another enthusiastic Subway Fold post about VR, last updated on March 26, 2015, please see Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals.)

This all began on April 11, 2015, when a French artist named JR pieced together, and then removed in less than 24 hours, a 150-foot photograph on the street right across from the landmark Flatiron Building. This New York Times-commissioned image was of “a 20-year-old Azerbaijani immigrant named Elmar Aliyev”, and it was used on the cover of this special NYTimes Magazine edition. Upon its completion, JR photographed it from a helicopter hovering above. (See the March 19, 2015 Subway Fold post entitled Spectacular Views of New York, San Francisco and Las Vegas at Night from 7,500 Feet Up for another innovative project involving highly advanced photography of New York, also taken from a helicopter.)

The NYTimes deployed VR technology from a company called VRSE.tools to transform this whole artistic experience into a fully immersive presentation entitled Walking New York. The paper introduced this new creation at a news conference on April 27th. To summarize the NYTimes Magazine’s editor-in-chief, Jake Silverstein, this project was chosen for a VR implementation because it would so dramatically enhance a viewer’s experience of it; pedestrians simply walking across the image at street level could never have gotten nearly the full effect of it.

Viewing Walking New York in full VR mode will require an app from VRSE’s site (linked above), and a VR viewer such as, among others, Google Cardboard.

The boost to VR as an emerging medium from the NYTimes‘ engagement on this project is quite significant. Moreover, it demonstrates how VR can now be implemented in journalism. Mr. Silverstein, to paraphrase his remarks, believes it can be used to literally and virtually bring someone into a story. Furthermore, by doing so, the effect upon the VR viewer is likely to be increased empathy for the individuals and circumstances that are the subjects of these more immersive reports.

There will more than likely be a long way to go before “VR filming rigs” can be sent out by news organizations to cover stories as they occur. The hardware is just not that widespread or mainstream yet. As well, the number of people trained to use this equipment is still quite small and, even for those who are, preparing such a virtual presentation lags behind today’s pace of news reporting.

Another journalist venturing into VR work is Newsweek reporter Nonny de la Pena, who has reconstructed the shooting in the Trayvon Martin case. (See ‘Godmother of VR’ Sees Journalism as the Future of Virtual Reality by Edward Helmore, posted on The Guardian’s website on March 11, 2015, for in-depth coverage of her innovative efforts.)

Let’s assume that on the not-too-distant horizon, VR journalism gains acceptance, its mobility and ease of use improve, and the rosters of VR-trained reporters and producers grow so that this field achieves some genuine economies of scale. Then, as with the life cycles of many other emergent technologies, the applications in this nascent field would be limited only by the imaginations of its professionals and their audiences. My questions are as follows:

  • What if the leading social media platforms such as Twitter, Facebook (which purchased Oculus, the maker of VR headsets, for $2B last year), LinkedIn, Instagram (VR Instagramming, anyone?), and others integrate VR into their capabilities? For example, Twitter recently added a live video feature called Periscope that its users have quickly and widely embraced. In fact, it is already being used for live news reporting as users turn their phones toward events as they happen. Would they swarm to VR just as readily?
  • What if new startup social media platforms launch that are purely focused on experiencing news, commentary, and discussion in VR?
  • Will previously unanticipated ethical standards be needed, and will new dilemmas likewise arise, as journalists move up the VR experience curve?
  • How would the data and analytics firms that parse and interpret social media looking for news trends add VR newsfeeds into their operations and results? (See the Subway Fold posts on January 21, 2015 entitled The Transformation of News Distribution by Social Media Platforms in 2015 and on December 2, 2014 entitled Startup is Visualizing and Interpreting Massive Quantities of Daily Online News Content.)
  • Can and should VR be applied to breaking news, documentaries and news shows such as 60 Minutes? What could be the potential risks in doing so?
  • Can drone technology and VR news gathering be blended into a hybrid flying VR capture platform?

I am also looking forward to seeing what other applications, adaptations and markets for VR journalism will emerge that no one can possibly anticipate at this point.