Companies Are Forming Digital Advisory Panels To Help Keep Pace With Trending Technologies

"Empty Boardroom", Image by reynermedia

“Empty Boardroom”, Image by reynermedia

As a result of the lightning-fast rates of change in social media, big data and analytics, and online commerce¹, some large corporations have recently created digital advisory panels (also called “boards”, “councils” and “groups”) to assist their executives in keeping pace with implementing the latest technologies. These panels are being patterned as less formal, scaled-down counterparts of traditional boards of directors.

This story was covered in a fascinating and very instructive article in the June 10, 2015 edition of The Wall Street Journal entitled “Companies Set Up Advisory Boards to Improve Digital Savvy” (subscription required; however, the article is fully available here on nasdaq.com). I will sum up, annotate and add a few questions of my own.

These digital advisory panels are often composed of “six outside experts under 50 years old”. In regularly scheduled meetings, their objective is to assist corporate managers in reaching diverse demographics and using new tools such as virtual reality² for marketing purposes. The executives whom the panels serve are appreciative of their “honest feedback”, access to entrepreneurs, and perspectives on these digital matters.

George L. Davis at the executive recruiting firm Egon Zehnder reports that approximately 50 companies in the Fortune 500 have already set up digital advisory panels. These include, among others, Target Corp. (details below) and American Express. However, not all such panels have remained in operation.

Here are the experiences of three major corporations with their digital advisory panels:

1. General Electric

GE’s digital advisory panel has met every quarter since its inception in 2011. Its members are drawn from a diversity of fields such as gaming and data visualization³. The youngest member of its 2014 panel was Christina Xu, a co-founder of a consulting company called PL Data. She found her experience with GE to be “an interesting window” into a corporate environment.

Ms. Xu played a key role in creating something new that has already drawn eight million downloads: the GE Sound Pack, a collection of factory sounds recorded at the company’s own industrial facilities, intended for use by musicians⁴. In effect, with projects like this the company is using the web in new ways to enhance its online presence and reputation.

GE’s panel also participated in the company’s remembrance of the 45th anniversary of the first moon landing. Back then, the company made the silicone rubber for the Apollo 11 astronauts’ boots. To commemorate the occasion in 2014, the panel convinced GE to create and market a limited edition line of “Moon Boot” sneakers online. They sold out in seven minutes. (For more details but, unfortunately, no more chances to get a pair of these way cool sneakers, see an article with photos of them entitled GE Modernizes Moon Boots and Sells Them as Sneakers, by Belinda Lanks, posted on Bloomberg.com on July 16, 2014.)

2. Target Corporation

Ajay Agarwal, the Managing Director of Bain Capital Ventures in Palo Alto, California, is one of the four members of Target’s digital advisory council. He was told by the company that there were “no sacred cows”. Among the council’s recommendations were to increase Target’s staff of data scientists faster than originally planned, and to deploy new forms of in-store and online product displays.

Another council member, Sam Yagan, the CEO of Match.com, viewed a “showcase” Target store and was concerned that it looked just like other locations. He had instead expected advanced and personalized features, such as “smart” shopping carts linked to shoppers’ mobile phones, that would make shopping more individualized. Casey Carl, the chief strategy and innovation officer at Target, agreed with his assessment.

3. Medtronic PLC

This medical device manufacturer’s products include insulin pumps for people with diabetes⁵. The company has been working with its digital advisory board, founded in 2011, to establish a “rapport” on social media with this community. One of the board’s members, Kay Madati, who was previously an executive at Facebook, recommended a more streamlined approach using a Facebook page. The goal was to build patient loyalty. Today, this FB page (clickable here) has more than 230,000 followers. Another initiative was launched to expand Medtronic’s public perception beyond being a medical device manufacturer.

This digital advisory board was suspended following the company’s acquisition and re-incorporation in Ireland. Nonetheless, an executive expects the advisory board to be revived within six months.

My questions are as follows:

  • Would it be advisable for a member of a digital advisory panel to also sit on another company’s panel, provided the other company is not a competitor? Would the individual and both corporations benefit from the possible cross-pollination of ideas from different markets?
  • What guidelines should be established for choosing members of such panels in terms of their qualifications and then vetting them for any possible business or legal conflicts?
  • What forms of ethical rules and guidelines should be imposed upon panel members? And who should draft, approve, and then implement them?
  • What other industries, marketplaces, government agencies, schools and public movements might likewise benefit from their own digital advisory panels? Would established tech companies and/or startups also find benefits from them?
  • Might finding and recruiting members for a digital advisory panel be a new market segment for executive search firms?
  • What new entrepreneurial opportunities might emerge when and if digital advisory panels continue to grow in acceptance and popularity?

1.   All of which are covered in dozens of Subway Fold posts in their respective categories here, here and here.

2.  There are six recent Subway Fold posts in the category of Virtual and Augmented Reality.

3.  There are 21 recent Subway Fold posts in the category of Visualization.

4.   When I first read this, it made me think of Factory by Bruce Springsteen on his brilliant Darkness on the Edge of Town album.

5.   X-ref to the October 3, 2014 Subway Fold post entitled New Startups, Hacks and Conferences Focused Upon Health Data and Analytics concerning Project Night Scout involving a group of engineers working independently to provide additional mobile technology integration and support for people using insulin pumps.

New Chips are Using Deep Learning to Enhance Mobile, Camera and Auto Image Processing Capabilities

"Smartphone Photography", Image by AvenueTheory

“Smartphone Photography”, Image by AvenueTheory

We interface with our devices’ screens for inputs and outputs nearly all day, every day. What many of these gadgets will soon be able to display and, moreover, understand about digital imagery is about to take a significant leap forward. This will be due to the pending arrival of new chips embedded into their circuitry that are enabled by artificial intelligence (AI) algorithms. Let’s have a look.

This story was reported in a most interesting article on TechnologyReview.com entitled Silicon Chips That See Are Going to Make Your Smartphone Brilliant by Tom Simonite on May 14, 2015. I will sum up, annotate and pose some questions about it.

The key technology behind these new chips is an AI methodology called deep learning. In these 10 recent Subway Fold posts, deep learning has been covered in a range of applications in various online and real world marketplaces including, among others, entertainment, news, social media, law, medicine, finance and education. The emergence of these smarter new chips will likely bring additional significant enhancements to all of them and many others insofar as their abilities to better comprehend the nature of the content of images.

Two major computer chip companies, Synopsys and Qualcomm, and the Chinese search firm Baidu are developing systems based upon deep learning for mobile devices, autos and other screen-based hardware. They were discussed by their representatives at the Embedded Vision Summit held on Tuesday, May 12, 2015, in Santa Clara, California. Among the companies’ representatives were:

  • Pierre Paulin, the Director of Research and Development at Synopsys, who presented a demo of a new chip core that “recognized speed limit signs” on the road for vehicles and enabled facial recognition for security apps. This chip uses less power than current chips on the market and, moreover, could add some “visual intelligence” to phone and car apps, and security cameras. (Here is the link to the abstracts of the presentations, listed by speaker, including Mr. Paulin’s entitled Low-power Embedded Vision: A Face Tracker Case Study, from the Summit’s website.)
  • Ren Wu, a Distinguished Scientist at the Baidu Institute of Deep Learning, who said that deep learning-based chips are important for computers used for research, and who called for making such intelligence as ubiquitous as possible. (Here is the link to the abstracts of the presentations, listed by speaker, including Mr. Wu’s entitled Enabling Ubiquitous Visual Intelligence Through Deep Learning, from the Summit’s website.)

Both Wu and Qualcomm’s Gehlhaar said that adding more intelligence to mobile devices’ ability to recognize photos could address the privacy implications of some apps by lessening the quantity of personal data they upload to the web.
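To make that privacy point a bit more concrete, here is a minimal conceptual sketch (my own illustration, not any of these companies’ actual APIs or implementations) of how an app with on-device visual intelligence could upload only a derived label rather than the photo itself. The classifier, payload format and endpoint shown are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Photo:
    pixels: bytes  # raw image data that can stay on the device


def classify_on_device(photo: Photo) -> str:
    """Stand-in for an on-chip deep learning classifier; a real app would
    call the chip vendor's vision runtime here (hypothetical placeholder)."""
    return "speed_limit_sign"


def build_upload_payload(photo: Photo) -> dict:
    """Upload only a coarse label, never the pixels themselves."""
    label = classify_on_device(photo)
    payload = {"label": label}  # tiny, far less personal than the full image
    # A real app might POST this to its backend; the endpoint below is
    # purely hypothetical and left commented out.
    # requests.post("https://example.com/api/photo-labels", json=payload)
    return payload


if __name__ == "__main__":
    print(build_upload_payload(Photo(pixels=b"\x00" * 1024)))
```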

My questions are as follows:

  • Should social networks employ these chips and, if so, how? For example, what if such visually intelligent capabilities were added to the recently rolled out live video apps Periscope and Meerkat on Twitter?
  • Will these chips be adapted to the forthcoming commercial augmented and virtual reality systems (as discussed in the five recent Subway Fold posts)? If so, what new capabilities might they add to these environments?
  • What additional privacy and security concerns will need to be addressed by manufacturers, consumers and regulators as these chips are introduced into their respective marketplaces?

Advertisers Looking for New Opportunities in Virtual and Augmented Spaces

"P1030522.JPG", Image by Xebede

“P1030522.JPG”, Image by Xebede

Are virtual reality (VR) and augmented reality (AR) technologies about to start putting up “Place Your Ad Here” signs in their spaces?

Today’s advertising firms and their clients are constantly searching for new venues and the latest technologies with which to compete in ever more specialized global marketplaces. With so many current and emerging alternatives, investing their resources to reach their optimal audiences and targeted demographics requires highly nimble planning and anticipation of risks. Effective strategies for both of these factors were recently explored in depth in the March 22, 2015 Subway Fold post entitled What’s Succeeding Now in Multi-Level Digital Strategies for Companies.

Virtual worlds might soon become just such a new venue to add to the media buying mix. With VR in the early stages of going more mainstream in the news media (see the May 5, 2015 Subway Fold post entitled The New York Times Introduces Virtual Reality Tech into Their Reporting Operations), and even more so in film (see the March 26, 2015 Subway Fold post entitled Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals), it seems inevitable that VR might turn out to be the next frontier for advertising.

This new marketplace will also include augmented reality, involving the projection of virtual/holographic images within a field of view of the real world. Microsoft recently introduced a very sleek-looking headset for this called the HoloLens, which will be part of their release of Windows 10 expected sometime later this year.

A fascinating report on three new startups in this nascent field entitled Augmented Advertising, by Rachel Metz, appeared in the May/June 2015 edition of MIT Technology Review. (Online, the same article is entitled “Virtual Reality Advertisements Get in Your Face”.) I will sum up, annotate and pose a few additional questions about it. As well, I highly recommend clicking through on the links below to these new companies to fully explore all of the resources on their truly innovative and imaginative sites.

As VR and AR headsets from companies including Oculus, Sony, Microsoft (see the above links), Magic Leap and Samsung are set to enter the consumer marketplace later in 2015, consumers will soon be able to experience video games and movies formatted for these new platforms.

The first company in the article working in this space is called Mediaspike. They develop apps and tools for mobile VR. The demo that the writer Metz viewed with a VR headset placed her in a blimp flying over a city containing billboards for an amusement ride based on the successful movie franchise that began with Despicable Me. The company is developing product placement implementations within these environments using billboards, videos and other methods.

One of the billboards was showing a trailer for the next movie in this series, called Minions. While Metz became a bit queasy during this experience (still a common concern for VR users), she nonetheless found it “a heck of a lot more interesting” than the current types of ads seen on websites and mobile devices.

The second new firm is called Airvertise. They are developing “virtual 3-D models that are integrated with real world locations”. The system uses geographic data to create constructs that you, as a virtual visitor, can readily walk around in. Their first platform will be smartphone apps, followed by augmented reality viewers. At the SXSW Festival last March (please see the link again in the third paragraph above to the post about VR at SXSW), the company demoed an iPad app that, using the tablet’s motion sensors, produced and displayed a virtual drone “hovering above the air about 20 feet away” with a banner attached to it. As the user/viewer walks closer to it, its apparent size increases and its spatial orientation shifts correspondingly.
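As a rough back-of-the-envelope illustration of that scaling effect (my own sketch, not Airvertise’s actual code), an object of fixed virtual size subtends a larger angle in the viewer’s field of view as the distance to it shrinks, which is what the motion-sensor tracking lets the app simulate. The 2-meter drone height below is an assumed figure for the example.

```python
import math


def apparent_angular_size_deg(object_height_m: float, distance_m: float) -> float:
    """Angle (in degrees) that an object of the given height subtends at the
    viewer's position, using a simple pinhole-perspective model."""
    return math.degrees(2 * math.atan((object_height_m / 2) / distance_m))


# A hypothetical 2-meter virtual drone viewed from about 20 feet (~6 m) away,
# then from progressively closer distances as the user walks toward it.
for distance in (6.0, 4.0, 2.0):
    angle = apparent_angular_size_deg(2.0, distance)
    print(f"distance {distance:4.1f} m -> apparent size {angle:5.1f} degrees")
```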

The third startup is called Blippar. Their AR-based app permits commercial content to be viewed on smartphones. Examples include seeing football players when the phone is held up to a can of Pepsi, and shades of nail polish from the cosmetics company Maybelline. The company is currently strategizing about how to create ads in this manner that will appropriately engage consumers but not put them off in any way.

My questions are as follows:

  • Will VR and AR advertising agencies and sponsors open up this field to user-generated ads and commercial content, an approach that has already been successful in a number of ad campaigns for food and cars? Perhaps by open-sourcing their development platforms, crowdsourcing the ads, and providing assistance with such efforts, this new advertising space can gain some additional attention and traction.
  • What exactly is it about the VR and AR experience that will provide the most leverage to advertising agencies and their clients? Is it only the novelty – – which might well wear off after a while – – or is there something unique about these technologies that will inform and entertain consumers about goods and services in ways neither previously conceived of nor achieved? Is a critical must-have app or viral ad campaign going to be needed for this to reach a tipping point?
  • Might countering technologies also appear to block VR and AR advertising? For example, Ad Block Plus is a very popular browser add-on that enables users to filter out today’s banner ads and pop-ups online. How might advertisers respond positively to such avoidance?
  • Advertisers already have major presences on the leading social media services such as, among others, Facebook (which now owns Oculus), Twitter and Instagram. Do VR and AR similarly lend themselves to being populated by advertisers on such a web-wide scale?

The New York Times Introduces Virtual Reality Tech into Their Reporting Operations

"Mobile World Congress 2015", Image by Jobopa

“Mobile World Congress 2015”, Image by Jobopa

As incredibly vast as New York City is, it has always been a great place to walk around. Its multitude of wonderfully diverse neighborhoods, streets, buildings, parks, shops and endless array of other sites can always be more fully appreciated going on foot here and there in – – as we NYC natives like call it – – “The City”.

The April 26, 2015 edition of The New York Times Magazine was devoted to this tradition. The lead off piece by Steve Duenes was entitled How to Walk in New York.  This was followed by several other pieces and then reports on 15 walks around specific neighborhoods. (Clicking on the Magazine’s link above and then scrolling down to the second and third pages will produce links to nearly all of these articles.) I was thrilled by reading this because I am such an avid walker myself.

The very next day, on April 27, 2015, Wired.com carried a fascinating report by Angela Watercutter entitled How the NY Times is Sparking the VR Journalism Revolution, about how one of the issue’s rather astonishing supporting graphics was actually created. But even that’s not the half of it – – the NYTimes has made available for downloading a virtual reality file of the full construction and deconstruction of the graphic. The Wired.com post contains the link, as well as a truly mind-boggling high-speed YouTube video of the graphic’s rapid appearance and disappearance and a screen capture from the VR file itself. (Is “screen capture” really accurate to describe it, or is something more like “VR frame” needed?) This could take news reporting into an entirely new dimension where viewers literally go inside of a story.

I will sum up, annotate and pose a few questions about this story. (For another enthusiastic Subway Fold post about VR, last updated on March 26, 2015, please see Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals.)

This all began on April 11, 2015, when a French artist named JR pieced together, and then removed in less than 24 hours, a 150-foot photograph right across the street from the landmark Flatiron Building. This New York Times-commissioned image was of “a 20-year-old Azerbaijani immigrant named Elmar Aliyev”. It was used on the cover of this special NYTimes Magazine edition. Upon its completion, JR photographed it from a helicopter hovering above. (See the March 19, 2015 Subway Fold post entitled Spectacular Views of New York, San Francisco and Las Vegas at Night from 7,500 Feet Up for another innovative project involving highly advanced photography of New York also taken from a helicopter.)

The NYTimes deployed VR technology from a company called VRSE.tools to transform this whole artistic experience into a fully immersive presentation entitled Walking New York. The paper introduced this new creation at a news conference on April 27th. To summarize the NYTimes Magazine’s editor-in-chief, Jake Silverstein, this project was chosen for a VR implementation because it would so dramatically enhance a viewer’s experience of it. Otherwise, pedestrians walking over the image across the sidewalk would not get nearly the full effect of it.

Viewing Walking New York in full VR mode will require an app from VRSE’s site (linked above), and a VR viewer such as, among others, Google Cardboard.

The boost to VR as an emerging medium from the NYTimes’ engagement on this project is quite significant. Moreover, it demonstrates how VR can now be implemented in journalism. Mr. Silverstein, to paraphrase his points of view, believes this shows how the technology can be used to literally and virtually bring someone into a story. Furthermore, by doing so, the effect upon the VR viewer is likely to be an increased amount of empathy for the individuals and circumstances that are the subjects of these more immersive reports.

There will more than likely be a long way to go before “VR filming rigs” can be sent out by news organizations to cover stories as they occur. The hardware is just not that widespread or mainstream yet. As well, the number of people who are trained and know how to use this equipment is still quite small and, even for those who do, preparing such a virtual presentation lags behind today’s pace of news reporting.

Another journalist venturing into VR work is Newsweek reporter Nonny de la Pena, whose projects include a reconstruction of the shooting in the Trayvon Martin case. (See ‘Godmother of VR’ Sees Journalism as the Future of Virtual Reality by Edward Helmore, posted on The Guardian’s website on March 11, 2015, for in-depth coverage of her innovative efforts.)

Let’s assume that on the not too distant horizon VR journalism gains acceptance, its mobility and ease of use increase, and the rosters of VR-trained reporters and producers grow so that this field undergoes some genuine economies of scale. Then, as with many other life cycles of emergent technologies, the applications in this nascent field would only be limited by the imaginations of its professionals and their audiences. My questions are as follows:

  • What if the leading social media platforms such as Twitter, Facebook (which purchased Oculus, the maker of VR headsets, for $2B last year), LinkedIn, Instagram (VR Instagramming, anyone?), and others integrate VR into their capabilities? For example, Twitter has recently added a live video feature called Periscope that its users have quickly and widely embraced. In fact, it is already being used for live news reporting as users turn their phones towards live events as they happen. Would they just as eagerly swarm to VR?
  • What if new startup social media platforms launch that are purely focused on experiencing news, commentary, and discussion in VR?
  • Will previously unanticipated ethical standards be needed, and will new dilemmas result, as journalists move up the experience curve with VR?
  • How would the data and analytics firms that parse and interpret social media looking for news trends add VR newsfeeds into their operations and results? (See the Subway Fold posts on January 21, 2015 entitled The Transformation of News Distribution by Social Media Platforms in 2015 and on December 2, 2014 entitled Startup is Visualizing and Interpreting Massive Quantities of Daily Online News Content.)
  • Can and should VR be applied to breaking news, documentaries and news shows such as 60 Minutes? What could be the potential risks in doing so?
  • Can drone technology and VR news gathering be blended into a hybrid flying VR capture platform?

I am also looking forward to seeing what other applications, adaptations and markets for VR journalism will emerge that no one can possibly anticipate at this point.

Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals

Image by mconnors

[This post was originally uploaded on December 12, 2014. It has been updated below with new information on December 19, 2014,  January 13, 2015 and March 27, 2015.]

December 12, 2014 Post:

At the 2015 Sundance Film Festival, to be held in Park City, Utah from January 22, 2015 through February 1, 2015, part of this major annual film event is a program called New Frontier. This year it will be presenting 13 virtual reality (VR) films and “experiences”. Advance coverage of this event was reported in an article on Wired.com on December 4, 2014 entitled VR Films Are Going to Be All Over Sundance in 2015 by Angela Watercutter. After reading this exciting preview I wanted to immediately pack a bag and start walking there.

To sum up, annotate and comment upon some of the key points in this story: the platform being used for these presentations will mostly be the Oculus, while Google Cardboard and Samsung’s Gear VR will also be deployed. While the Oculus Rift headset has not yet been released to the consumer public, developers currently do have access to it. As a result, they were able to create and format these soon-to-be-premiered experimental works. This year’s offerings are a much deeper and wider lineup than the much more limited sampling of Oculus-based experiments presented during the 2012 Sundance Festival.

(In a recent Subway Fold post on November 26, 2014 entitled Robots and Diamonds and Drones, Aha! Innovations on the Horizon for 2015, one of the startups briefly mentioned is called Jaunt which is described in the blog post as “… developing an entirely new platform and 360 degree camera to create fully immersive virtual reality movies to be viewed using the versatile new Oculus Rift headset.”)

Attendees at some other recent industry events have responded very favorably to Oculus demonstrations. They included HBO’s presentation of a Game of Thrones experience at this year’s South by Southwest festival, a Jaeger-piloting simulation¹ at the 2014 Comic-Con in San Diego, and demonstrations at the 2014 Electronic Entertainment Expo (E3).

To read what some of the creators involved in Sundance’s VR movies have to say about their creations and some brief descriptions and 2-D graphics of this immersive fare, I very highly recommend clicking through and reading this report in its entirety. They include, among others, news and documentaries, bird flights, travel landscapes, rampaging Kaiju, and several social situations.

I wanna go!

My follow-up questions include:

  • Because VR movie production is entirely digital, can this experience be securely distributed online to other film festivals and film schools to share with and, moreover, inspire new VR cinematic works by writers, directors, producers and actors?
  • Can the Hyve-3D virtual development platform covered in this August 28, 2014 Subway Fold post entitled Hyve-3D: A New 3D Immersive and Collaborative Design System be adapted and formatted for the cinema so that audiences can be fully immersed in virtual films without the need for a VR headset?
  • If entertainment companies, movie producers, investors and other supporters line up behind the development and release of VR movies, will this be seen by the public as being more like 3-D movies, where the novelty has quickly worn off ², or more like a fundamental shift in movie production, presentation and marketing? What if, using the Oculus Rift, users could experience movie trailers, or even entire films, at any location? Would this be a market that might draw the attention of Netflix, Hulu, Amazon, Google and other online content distributors and producers?

____________________________
1.  In another Jaeger and Kaiju-related update, there is indeed good news as reported on June 27, 2014 on the HuffingtonPost.com by Jessica Goodman in a story entitled ‘Pacific Rim 2’ Confirmed For 2017 Release Date.

2.  See 2014 Box Office Will Be Hurt By Diminishing Popularity Of 3D Movies: Analyst by David Lieberman, posted on Deadline.com on February 3, 2014. For other new theater experience innovations, see also To Lure Young, Movie Theaters Shake, Smell and Spritz by Brooks Barnes in the November 29, 2014 edition  of The New York Times.

____________________________

December 19, 2014 Update:

The current release of the movie adaptation of the novel Wild by Cheryl Strayed (Knopf, 2011) has been further formatted into a 3-minute supplemental virtual reality movie, as reported in the December 15, 2014 edition of The New York Times by Michael Cieply in an article entitled Virtual Reality ‘Wild’ Trek. This short film is also scheduled to be presented at the 2015 Sundance festival. Using Oculus and Samsung VR technology, it is an immersive meeting with the lead character, played by actress Reese Witherspoon, while she is hiking in the wilderness. She is quoted as being very pleased with the final results of this VR production.

January 13, 2015 Update:

While VR’s greatest core ability is placing viewers within a totally immersive digital environment, this also presents a challenge in keeping them fully focused upon the main narrative. That is, something happening off to the left or right may draw their attention away and thus detract from the experience.

A startup called Visionary VR has developed a system to reconcile this challenge. It enables creators of VR entertainment to concentrate the viewer’s attention upon the action occurring in their stories and games. This was reported in a most interesting article posted on Recode.com on January 5, 2015 entitled In Virtual Reality Movies, You Are the Camera. That Can Be a Problem, but Here’s One Solution, by Eric Johnson. I believe this will keep your attention as a reader, even in three dimensions in the real world, and recommend clicking through for all of the details. As well, there is a rather spectacular video presented by the founders of the company on the capabilities of their system.

To recap the key points, Visionary VR creates an invisible boundary around the main narrative that alerts viewers when they are looking away into other “zones” within the environment. When this occurs, the narrative is suspended, but viewers can venture into these interactive peripheral areas and further explore elements of the story. Just as easily, they can return their gaze back to the story, which will then re-engage and move forward. Visionary VR has also created a platform and toolkit for VR authors and storytellers to generate and edit their work while within a virtual environment itself. When viewing the accompanying video, the interface reminded me of something out of Minority Report.
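For readers who like to see the mechanics spelled out, here is a minimal sketch (my own simplification, not Visionary VR’s actual system) of that kind of gaze-zone logic: the main narrative pauses whenever the viewer’s gaze drifts past a yaw threshold into a peripheral zone and resumes when it returns. The 45-degree zone width and yaw-only model are assumptions made purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical half-width, in degrees, of the main narrative zone.
MAIN_ZONE_HALF_WIDTH_DEG = 45.0


@dataclass
class PlaybackState:
    narrative_paused: bool = False


def update_playback(state: PlaybackState, gaze_yaw_deg: float) -> PlaybackState:
    """Pause the main narrative when the viewer looks outside the main zone,
    and resume it when their gaze returns (simplified, yaw-only model)."""
    looking_away = abs(gaze_yaw_deg) > MAIN_ZONE_HALF_WIDTH_DEG
    if looking_away and not state.narrative_paused:
        state.narrative_paused = True   # viewer wandered into a peripheral zone
    elif not looking_away and state.narrative_paused:
        state.narrative_paused = False  # gaze is back on the story; re-engage
    return state


# Example: the viewer's gaze sweeps away from the action and then back again.
state = PlaybackState()
for yaw in (0.0, 30.0, 60.0, 90.0, 20.0):
    state = update_playback(state, yaw)
    print(f"yaw {yaw:5.1f} deg -> narrative paused: {state.narrative_paused}")
```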

(Btw, it has just been announced that this movie is going to be turned into a TV pilot for Fox according to a story posted on Deadline.com entitled ‘Minority Report’ Gets Fox Pilot Order, by Nellie Andreeva on January 9, 2015. This post also contains a photo from the movie showing this then fictional and now real interface. How cool would it be to see this new pilot in full VR?!)

March 27, 2015 Update:

VR movie technology continues to gather momentum and accolades at 2015’s artistic festivals. Its latest display was held at last week’s (March 13 through 17, 2015) South By Southwest Festival (SXSW). The page for the VR panel and speakers is linked here. Coverage of the event was posted in a very informative and enthusiastic article on VentureBeat.com entitled The Future of Interactive Cinematic VR is Coming, and Fast by Daniel Terdiman, on March 18, 2015.

Those in attendance were truly wowed by what they saw, and, moreover, the potential of fully immersive experiences and storytelling. Please click-through to this story for the full details. I will briefly sum up some of the main points.

The article mostly highlights and highly praises the demo by Jaunt, a startup emerging as one of the innovators in VR movies, mentioned in the initial December 12, 2014 post above. Other VR companies also presented their demos at SXSW.

The Jaunt demo consisted of Paul McCartney playing Live and Let Die in concert. Here’s the link to Jaunt’s Content page containing the stream for this and eight other VR movies (including the Kaiju Fury! film also mentioned in the December 12th post above). In order to immerse yourself in any of these, you will need either an Oculus Rift headset or a Google Cardboard device.

VR movie technology is indeed presenting filmmakers with “opportunities that have not been possible before”. This is likewise so for a range of content creators including, among others sure to come, musicians, athletes, interviewers and documentary makers.

Another panelist, Jason Rubin, the head of worldwide studios for Oculus, spoke about the level of progress being made to make these narrative experiences more genuinely interactive with viewers. He believes this will lead to entirely new forms of cinematic experiences.

Arthur van Hoff, Jaunt’s founder and CTO, raised the possibility of VR films where users can follow one particular actor’s perspective and story within the production. (Visionary VR’s technology, described in the January 13, 2015 Update above, might also be helpful in this regard.)

While new “companies, technologies and investors” in this nascent field are expected, Jaunt believes its current two-year lead will give its technology and productions an advantage.