Charge of the Light Brigade: Faster and More Efficient New Chips Using Photons Instead of Electrons

"PACE - PEACE" Image by Etienne Valois

“PACE – PEACE” Image by Etienne Valois

Alfred, Lord Tennyson wrote his immortal classic poem, The Charge of the Light Brigade, in 1854 to honor the dead heroes of a doomed cavalry charge at the Battle of Balaclava during the Crimean War. It also strikingly portrayed the horrors of war. In just six short verses, he created a monumental work that has endured for 162 years.

The poem came to mind last week after reading two recent articles on seemingly disparate topics. The first was posted on The New Yorker’s website on December 30, 2015 entitled In Silicon Valley Now, It’s Almost Always Winner Takes All by Om Malik. This is a highly insightful analysis of how and why tech giants such as Google in search, Facebook in social networking, and Uber in transportation have come to dominate their markets. In essence, competition is a fierce and relentless battle in the global digital economy. The second was an article on CNET.com posted on December 23, 2015 entitled Chip Promises Faster Computing with Light, Not Electrical Wires by Stephan Shankland. I highly recommend reading both of them in their entirety.

Taken together, the word “light”, shared by the historical poem and the new technology, ties these two pieces together, insofar as contemporary competition in tech markets is so often described in military terms and metaphors. Focusing here on the second story, about a tantalizing advance in chip design and fabrication: will it survive as it moves forward into a brutal and relentlessly “winner takes all” marketplace? I will summarize and annotate the story, and pose some of my own, hopefully en-light-ening questions.

Forward, the Light Brigade

A team of researchers, all of whom are university professors, led by Vladimir Stojanovic of the University of California at Berkeley and including Krste Asanovic, also from Berkeley, Rajeev Ram from MIT, and Milos Popovic from the University of Colorado at Boulder, has created a new type of processing chip “that transmits data with light”. Its architecture also significantly increases processing speed while reducing power consumption. A report on the team’s work was published in the December 24, 2015 issue of Nature (subscription required) in an article entitled Single-chip Microprocessor That Communicates Directly Using Light by Chen Sun, Mark T. Wade, Yunsup Lee, et al.

This approach of “using silicon as an optical medium”, according to Wikipedia, is called silicon photonics. IBM (see this link) and Intel (see this link) have likewise been involved in R&D in this field, but have yet to introduce anything ready for the market.

However, this team of university researchers believes their new approach might be introduced commercially within a year. While their efforts do not make chips run faster per se, the photonic elements “keep chips supplied with data”, which saves them from losing time by idling. Thus, they can process data faster.

Currently (no pun intended), electrical signals travel over metal wiring in computing and communications devices and networks around the world. For data traveling greater national and international distances, the electrical signals are converted into light and sent along high-speed fiber-optic cables. Nonetheless, this approach “isn’t cheap”.

Half a League Onward

What the university researchers’ team has done is create chips with “photonic components” built into them. If they succeed in scaling up and commercializing their creation, consumers will likely be the beneficiaries. These advantages will probably manifest themselves first in data centers where, in turn, the chips could speed up:

  • Google searches
  • Facebook image recognition
  • Other “performance-intensive features not economical today”

They could also remove processing bottlenecks and conserve battery life in smartphones and other personal computing platforms.

Professor Stojanovic believes that one of the team’s largest challenges is to make this technology affordable before it can later be implemented in consumer-level computing and communications devices. He is sanguine that such economies of scale can be reached. He anticipates further applications of this technology to enable chips’ onboard processing and memory components to communicate directly with each other.

Additional integrations of silicon photonics might be seen in the lidar remote sensing systems of self-driving cars¹, as well as in brain imaging² and environmental sensors. The technology also holds the potential to alter the traditional methods by which computers are assembled. For example, cable length is currently limited by how far data can pass quickly and efficiently before it needs amplification along the way. Optical links may permit data to be transferred significantly farther along network cabling. The research team’s “prototype used 10-meter optical links”, but Professor Stojanovic believes this could eventually be lengthened to a kilometer. This could potentially result in meaningful savings in energy and hardware, and gains in processing efficiency.

Two startups that are also presently working in the silicon photonics space include:

My Questions:

  • Might another one of silicon photonics’ virtues be that it is partially fabricated from more sustainable materials, primarily silicon derived from sand rather than various metals?
  • Could silicon photonics chips and architectures be a solution to the very significant computing needs of the virtual reality (VR) and augmented reality (AR) systems that will be coming onto the market in 2016? This issue was raised in a most interesting article posted on Bloomberg.com on December 30, 2015 entitled Few Computers Are Powerful Enough to Support Virtual Reality by Ian King. (See also these 13 Subway Fold posts on a range of VR and AR developments.)
  • What other new markets, technologies and opportunities for entrepreneurs and researchers might emerge if the university research team’s chips achieve their intended goals and succeed in making it to market?

May 17, 2017 Update: For an update on one of the latest developments in photonics with potential applications in advanced computing and materials science, see Photonic Hypercrystals Are Now a Reality and Light Will Never Be the Same, by Dexter Johnson, posted on May 10, 2017, on IEEESpectrum.com.


1.  See these six Subway Fold posts for references to autonomous cars.

2.  See these four Subway Fold posts concerning certain developments in brain imaging technology.

Virtual Reality Universe-ity: The Immersive “Augmentarium” Lab at the U. of Maryland

"A Touch of Science", Image by Mars P.

“A Touch of Science”, Image by Mars P.

Go to classes. Sit through a series of 50-minute lectures. Drink coffee. Pay attention and take notes. Drink more coffee. Go to the library to study, do research and complete assignments. Rinse and repeat for the rest of the semester. Then take your final exams and hope that you passed everything. More or less, things have traditionally been this way in college since Hector was a pup.

Might students instead be interested in participating in the new and experimental learning laboratory called the Augmentarium at the University of Maryland, where immersing themselves in their studies takes on an entirely new meaning? This is a place where virtual reality (VR) is being tested and integrated into the learning process. (These 14 Subway Fold posts cover a range of VR and augmented reality [AR] developments and applications.)

Where do I sign up for this?¹

The story was covered in a fascinating report that appeared on December 8, 2015 on the website of the Chronicle of Higher Education entitled Virtual-Reality Lab Explores New Kinds of Immersive Learning, by Ellen Wexler. I highly recommend reading this in its entirety, as well as clicking on the Augmentarium link to learn about some of these remarkable projects. I also suggest checking out the hashtag #Augmentarium on Twitter for the very latest news and developments. I will summarize and annotate this story, and pose some of my own questions right after I take off my own imaginary VR headset.

Developing VR Apps in the Augmentarium

In 2014, Brendan Iribe, the co-founder of the VR headset company Oculus² and a University of Maryland alumnus, donated $31 million to the university for its development of VR technology³. During the same year, with additional funding obtained from the National Science Foundation, the Augmentarium was built. Currently, researchers at the facility are working on applications of VR to “health care, public safety, and education”.

Professor Ramani Duraiswami, a PhD and co-founder of a startup called VisiSonics (developers of 3D audio and VR gaming systems), is involved with the Augmentarium. His work is in the area of audio, which he believes has a great effect upon how people perceive the world around them. He further thinks that an audio or video lecture presented via distance learning can be greatly enhanced by using VR to, in his words, make “the experience feel more immersive”. He feels this would make you feel as though you are in the very presence of the instructor⁴.

During a recent showcase there, Professor Duraiswami demoed 3D sound⁵ and a short VR science fiction production called Fixing Incus. (This link is meant to be played on a smartphone that is then embedded within a VR viewer/headset.) This implementation showed the audience what it was like to be immersed in a virtual environment where, when they moved their heads and lines of sight, what they were viewing changed correspondingly and seamlessly.

Enhancing Virtual Immersions for Medicine and Education

Amitabh Varshney, the Director of the University’s Institute for Advanced Computer Studies, is now researching “how the brain processes information in immersive environments” and how this differs from how it is done on a computer screen⁶. He believes that VR applications in the classroom will enable students to immerse themselves in their subjects, such as being able to “walk through buildings they design” and “explore” them beyond “just the equations” involved in creating these structures.

At the lab’s recent showcase, he provided the visitors with (non-VR) 3D glasses and presented “an immersive video of a surgical procedure”. He drew the audience’s attention to the doctors who were “crowding around” the operating table. He believes that the use of 3D headsets would provide medical students a better means to “move around” and get an improved sense of what this experience is actually like in the operating room. (The September 22, 2015 Subway Fold post entitled VR in the OR: New Virtual Reality System for Planning, Practicing and Assisting in Surgery is also on point and provides extended coverage on this topic.)

While today’s early iterations of VR headsets (either available now or coming early in 2016) are “cumbersome”, researchers hope that they will evolve (in a manner similar to mobile phones which, in turn and as mentioned above, are presently a key element in VR viewers) and be applied in “hospitals, grocery stores and classrooms”. Director Varshney can see them possibly developing along an even faster timeline.

My Questions

  • Is the establishment and operation of the Augmentarium a model that other universities should consider as a means to train students in this field, attract donations, and incubate potential VR and AR startups?
  • What entrepreneurial opportunities might exist for consulting, engineering and tech firms to set up comparable development labs at other schools and in private industry?
  • What other types of academic courses would benefit from VR and AR support? Could students now use these technologies to create or support their academic projects? What sort of grading standards might be applied to them?
  • Do the rapidly expanding markets for VR and AR require that some group in academia and/or the government establish technical and perhaps even ethical standards for such labs and their projects?
  • How are relevant potential intellectual property and technology transfer issues going to be negotiated, arbitrated and litigated if needed?

 


1.  Btw, has anyone ever figured out how the very elusive and mysterious “To Be Announced (TBA)”, the professor who appears in nearly all course catalogs, ends up teaching so many subjects at so many schools at the same time? He or she must have an incredibly busy schedule.

2.  These nine Subway Fold posts cover, among other VR and AR related stories, the technology of Oculus.

3.  This donation was reported in an article in The Washington Post on September 11, 2014 entitled Brendan Iribe, Co-founder of Oculus VR, Makes Record $31 Million Donation to U-Md by Nick Anderson.

4.  See also the February 18, 2015 Subway Fold post entitled A Real Class Act: Massive Open Online Courses (MOOCs) are Changing the Learning Process.

5.  See also Designing Sound for Virtual Reality by Todd Baker posted on Medium.com on December 21, 2015, for a thorough overview of this aspect of VR, and the August 5, 2015 Subway Fold post entitled  Latest Census on Virtual Senses: A Seminar on Augmented Reality in New York covering, among other AR technologies, the development work and 3D sound wireless headphones of Hooke Audio.

6.  On a somewhat related topic, see the December 18, 2015 Subway Fold post entitled Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information.

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

"Synapse", Image by Allan Ajifo

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: what do you get for the man who has everything, and where would he put it all?¹ This often comes to mind whenever I have experienced the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the web, almost everyone I know has also had the same overwhelming experience whenever the amount of information they are inundated with every day seems increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

The latest research into how this happens was covered in a fascinating report on Phys.org on December 15, 2015 entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff. I will summarize and annotate this, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet is always recognizable no matter the font, as is an individual regardless of their clothing and grooming. We can also identify objects even when our view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment throughout our lives.

A recent discovery by a team of researchers at the Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision”⁴. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason why humans can quickly make sense of a very complex and robust world is because, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests'”, which measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked whether they could identify them once again using a much smaller segment of the image. This led to one of their two principal discoveries: the test subjects required only 0.15% of the data to repeat their identifications.
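
The article does not include the researchers’ actual stimuli or code, but the core idea of a random projection, that multiplying an image by one fixed random matrix compresses it to a small fraction of its pixels while preserving enough geometry to categorize it, can be sketched in a few lines. Everything below (the three “abstract images”, the 150-pixel size, the noise level, and the choice of 8 projected dimensions) is an invented stand-in, not the study’s setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study's stimuli: three "abstract images",
# each a flattened vector of 150 pixels, well separated in pixel space.
n_pixels = 150
prototypes = rng.normal(size=(3, n_pixels))

# A random projection compresses each image by multiplying it with one fixed
# random matrix, keeping only k numbers instead of all 150 pixels.
k = 8
projection = rng.normal(size=(n_pixels, k)) / np.sqrt(k)

def classify(image):
    """Nearest-prototype classification using only the projected data."""
    z = image @ projection
    dists = [np.linalg.norm(z - p @ projection) for p in prototypes]
    return int(np.argmin(dists))

# Noisy variants of each prototype are still categorized correctly even
# though the classifier never sees the full-resolution pixels.
trials, correct = 300, 0
for _ in range(trials):
    label = int(rng.integers(3))
    noisy = prototypes[label] + 0.3 * rng.normal(size=n_pixels)
    correct += classify(noisy) == label

accuracy = correct / trials
print(f"accuracy using {k} of {n_pixels} dimensions: {accuracy:.2f}")
```

Because random projections approximately preserve distances between well-separated points, the nearest-prototype rule keeps working in the compressed space, which is the same intuition behind the team’s far more rigorous result.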

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics) to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.
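
The article does not specify the network the team used. As a purely hypothetical illustration of the idea, a single logistic “neuron” trained by gradient descent on randomly projected inputs can learn a comparable two-category task; all of the data, dimensions and parameters here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented setup mirroring the experiment's shape: two classes of 150-pixel
# images, compressed by a fixed random projection before reaching the network.
n_pixels, k = 150, 8
proto_a, proto_b = rng.normal(size=(2, n_pixels))
projection = rng.normal(size=(n_pixels, k)) / np.sqrt(k)

def sample(label, n):
    """Draw n noisy examples of one class, already randomly projected."""
    base = proto_b if label else proto_a
    return (base + 0.3 * rng.normal(size=(n, n_pixels))) @ projection

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training data and a single logistic "neuron" (weights + bias).
X = np.vstack([sample(0, 200), sample(1, 200)])
y = np.array([0] * 200 + [1] * 200)
w, b = np.zeros(k), 0.0

# Plain gradient descent on the cross-entropy loss.
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad = p - y
    w -= 0.01 * X.T @ grad / len(y)
    b -= 0.01 * grad.mean()

# Fresh test samples: the tiny network categorizes the projected images.
X_test = np.vstack([sample(0, 100), sample(1, 100)])
y_test = np.array([0] * 100 + [1] * 100)
acc = ((sigmoid(X_test @ w + b) > 0.5) == y_test).mean()
print("test accuracy:", acc)
```

Even this minimal model reaches high accuracy on held-out samples, which gives a feel for why a simple network, like the human subjects, needs only a tiny compressed slice of the original data.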

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, the two performed with nearly equal results. Moreover, these results proved that which data “will be the hardest to learn over time” can be predicted.

In testing their theory, the team prepared three different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) Next, they drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds, and then they were shown 16 of the random sketches. Dr. Vempala was “surprised by how close the performance was” of the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support that it might be a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augmented reality systems that rely on both human vision and the processing of large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator, see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

Summary of the Media and Tech Preview 2016 Discussion Panel Held at Frankfurt Kurnit in NYC on December 2, 2015

"dtv svttest", Image by Karl Baron

“dtv svttest”, Image by Karl Baron

GPS everywhere notwithstanding, there are still maps on the walls in most buildings that have a red circle somewhere on them accompanied by the words “You are here”. This is to reassure and reorient visitors by giving them some navigational bearings. Thus you can locate where you are at the moment and then find your way forward.

I had the pleasure of attending an expert panel discussion last week, all of whose participants did an outstanding job of analogously mapping where media and technology are at the end of 2015 and where their trends are heading going into the New Year. Entitled Digital Breakfast: Media and Tech Preview 2016, it was held at the law firm of Frankfurt Kurnit Klein & Selz in midtown Manhattan. It was organized and presented by Gotham Media, a New York based firm engaged in “Digital Strategy, Marketing and Events”, per their website.

This hour and a half presentation was a top-flight and highly enlightening event from start to finish. My gratitude and admiration for everyone involved in making this happen. Bravo! to all of you.

The panelists’ enthusiasm and perspectives fully engaged and transported the entire audience. I believe that everyone there appreciated and learned much from all of them. The participants included:

The following is a summary based on my notes.

Part 1:  Assessments of Key Media Trends and Events in 2015

The event began on an unintentionally entertaining note when one of the speakers, Jesse Redniss, accidentally slipped out of his chair. Someone in the audience called out “Do you need a lawyer?” and, considering the location of the conference, the room erupted into laughter.¹

Once the ensuing hilarity subsided, Mr. Goldblatt began by asking the panel for their media highlights for 2015.

  • Ms. Bond said it was the rise of streaming TV, citing Netflix and Amazon, among other industry leaders. For her, this is a time of interesting competition as consumers have increasing control over what they view. She also believes that this is a “fascinating time” for projects and investments in this market sector. Nonetheless, she does not think that cable will disappear.
  • Mr. Kurnit said that Verizon’s purchase of AOL was one of the critical events of 2015, as Verizon “wants to be 360” and this type of move might portend the future of TV. The second key development was the emergence of self-driving cars, which he expects to see implemented within the next 5 to 15 years.
  • Mr. Redniss concurred on Verizon’s acquisition of AOL. He sees other activity such as the combination of Comcast and Universal as indicative of an ongoing “massive media play” versus Google and Facebook. He also mentioned the significance of Nielsen’s Total Audience Measure service.²
  • Mr. Sreenivasan stated that social media is challenging, as indicated by the recent appearance of “Facebook fatigue” affecting its massive user base. Nonetheless, he said “the empire strikes back” as evidenced in their strong financial performance and the recent launch of Chan Zuckerberg LLC to eventually distribute the couple’s $45B fortune to charity. He also sees that current market looking “like 2006 again” insofar as podcasts, email and blogs making it easy to create and distribute content.

Part 2: Today’s Golden Age of TV

Mr. Goldblatt asked the panel for their POVs on what he termed the current “Golden Age of TV” because of the increasing diversity of new platforms, expanding number of content providers and the abundance of original programming. He started off by asking them for their market assessments.

  • Ms. Bond said that the definition of “television” is now “any video content on any screen”. As a ubiquitous example she cited content on mobile platforms. She also noted the proliferation of payment methods as driving this market.
  • Mr. Kurnit said that the industry would remain a bit of a “mess” for the next three or four years because of the tremendous volume of original programming, businesses that operate as content aggregators, and pricing differentials. Sometime thereafter, these markets will “rationalize”. Nonetheless, the quality of today’s content is “terrific”, pointing to examples by such media companies as the programs on AMC and HBO‘s Game of Thrones. He also said that an “unbundled model” of content offerings would enable consumers to watch anywhere.
  • Mr. Redniss believes that “mobile transforms TV” insofar as smartphones have become the “new remote control”, providing both access to content and “discoverability” of new offerings. He predicted that content would become “monetized across all screens”.
  • Mr. Sreenivasan mentioned the growing popularity of binge-watching as being an important phenomenon. He believes that the “zeitgeist changes daily” and that other changes are being “led by the audience”.

The panel moved to group discussion mode concerning:

  • Consumer Content Options: Ms. Bond asked how the audience will pay for either bundled or unbundled programming options. She believes that having this choice will provide consumers with “more control and options”. Mr. Redniss then asked how many apps or services consumers will be willing to pay for. He predicted that “everyone will have their own channel”. Mr. Kurnit added that he thought there are currently too many options and that “skinny bundles” of programming will be aggregated. Mr. Sreenivasan pointed towards the “Amazon model”, where much content is available there but also elsewhere, and then to Netflix’s offering of 30 original shows. He also wanted to know “Who will watch all of this good TV?”
  • New Content Creation and Aggregation: Mr. Goldblatt asked the panelists whether a media company can be both a content aggregator and a content creator. Mr. Kurnit said yes, and Mr. Redniss immediately followed by citing the long-tail effect (statistical distributions in business analytics where there are higher numbers of data points away from the initial top or central parts of the distribution)³. Therefore, online content providers are not bound by the same rules as the TV networks. Still, he could foresee some of Amazon’s and Netflix’s original content ending up being broadcast on them. He also gave the example of Netflix’s House of Cards original programming as being indicative of the “changing market for more specific audiences”. Ultimately, he believes that meeting such audiences’ needs is part of “playing the long game” in this marketplace.
  • Binge-Watching: Mr. Kurnit followed up by predicting that binge-watching and the “binge-watching bucket” will go away. Mr. Redniss agreed with him and, moreover, talked about the “need for human interaction” to build up audiences. This now takes the form of “superfans” discussing each episode in online venues. For example, he pointed to the current massive marketing campaign built upon finding out the fate of Jon Snow on Game of Thrones.
  • Cord-Cutting: Mr. Sreenivasan believes that we will still have cable in the future. Ms. Bond said that service offerings like Apple TV will become more prevalent. Mr. Kurnit said he currently has 21 cable boxes. Mr. Redniss identified himself as more of a cord-shaver who, through the addition of Netflix and Hulu, has reduced his monthly cable bill.

Part 3: Virtual Reality (VR) and Augmented Reality (AR)

Moving on to two of the hottest media topics of the day, virtual reality and augmented reality, the panelists gave their views.

  • Mr. Sreenivasan expressed his optimism about the prospects of VR and AR, citing the pending market launches of the Oculus Rift headset and Facebook 360 immersive videos. The emergence of these technologies is creating a “new set of contexts”. He also spoke proudly of the Metropolitan Museum Media Lab using Oculus for an implementation called Diving Into Pollock (see the 10th project down on this page), which enables users to “walk into a Jackson Pollock painting”.
  • Mr. Kurnit raised the possibility of using Oculus to view Jurassic Park. In terms of movie production and immersion, he said “This changes everything”.
  • Mr. Redniss said that professional sports were a whole new growth area for VR and AR, where you will need “goggles, not a screen”. Mr. Kurnit followed up mentioning a startup that is placing 33 cameras at Major League Baseball stadiums in order to provide 360 degree video coverage of games. (Although he did not mention the company by name, my own Googling indicates that he was probably referring to the “FreeD” system developed by Replay Technologies.)
  • Ms. Bond posed the question “What does this do for storytelling?”4

(See also these 12 Subway Fold posts for extensive coverage of VR and AR technologies and applications.)

Part 4: Ad-Blocking Software

Mr. Goldblatt next asked the panel for their thoughts about the impacts and economics of ad-blocking software.

  • Mr. Redniss said that ad-blocking apps will affect how advertisers get their online audience’s attention. He thinks a workable alternative is to use technology to “stitch their ads into content” more effectively.
  • Mr. Sreenivasan believes that “ads must get better” in order to engage their audience, rather than having viewers look for means to avoid them. He noted another alternative used with the show Fargo, where the network’s programming does not permit viewers to fast-forward through the ads.
  • Mr. Kurnit expects that ads will be blocked, given the popularity and extensibility of ad-blocking apps. Thus, he also believes that ads need to improve, but he is not confident of the ad industry’s ability to do so. Furthermore, when advertisers are more highly motivated because of cost and audience size, they produce far more creative work, as for events like the NFL Super Bowl.

Someone from the audience asked the panel how ads will become integrated into VR and AR environments. Mr. Redniss said this will happen in cases where this technology can reproduce “real world experiences” for consumers. An example of this is the Cruise Ship Virtual Tours available on Carnival Cruise’s website.

(See also this August 13, 2015 Subway Fold post entitled New Report Finds Ad Blockers are Quickly Spreading and Costing $Billions in Lost Revenue.)

Part 5: Expectations for Media and Technology in 2016

  • Mr. Sreenivasan thinks that geolocation technology will continue to find new applications in “real-life experiences”. He gave as an example the use of web beacons by the Metropolitan Museum.
  • Ms. Bond foresees more “one-to-one” and “one-to-few” messaging capabilities, branded emojis, and a further examination of the “role of the marketer” in today’s media.
  • Mr. Kurnit believes that drones will continue their momentum into the mainstream. He sees the sky filling up with them as they are “productive tools” for a variety of commercial applications.
  • Mr. Redniss expressed another long-term prospect of “advertisers picking up broadband costs for consumers”. This might take the form of ads being streamed to smartphones during NFL games. In the shorter term, he can foresee Facebook becoming a significant simulcaster of professional sporting events.

 


1.  This immediately reminded me of a similar incident years ago when I was attending a presentation at the local bar association on the topic of litigating cases involving brain injuries. The first speaker was a neurologist who opened by telling the audience all about his brand new laptop and how it was the latest state-of-the-art model. Unfortunately, he could not get it to boot up no matter what he tried. Someone from the back of the audience then yelled out “Hey doc, it’s not brain surgery”. The place went into an uproar.

2.  See also these other four Subway Fold posts mentioning other services by Nielsen.

3.  For a fascinating and highly original book on this phenomenon, I very highly recommend reading The Long Tail: Why the Future of Business Is Selling Less of More (Hyperion, 2005), by Chris Anderson. It was also mentioned in the December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?.

4.  See also the November 4, 2014 Subway Fold post entitled Say, Did You Hear the Story About the Science and Benefits of Being an Effective Storyteller?

Advertisers Looking for New Opportunities in Virtual and Augmented Spaces

"P1030522.JPG", Image by Xebede

“P1030522.JPG”, Image by Xebede

Are virtual reality (VR) and augmented reality (AR) technologies about to start putting up “Place Your Ad Here” signs in their spaces?

Today’s advertising firms and their clients are constantly searching for new venues and the latest technologies with which to compete in evermore specialized global marketplaces. With so many current and emerging alternatives, investing their resources to reach their optimal audiences and targeted demographics requires highly nimble planning and anticipating risks. Effective strategies for both of these factors were recently explored in-depth in the March 22, 2015 Subway Fold post entitled What’s Succeeding Now in Multi-Level Digital Strategies for Companies.

Virtual worlds might soon become just such a new venue to add to the media buying mix. With VR in the early stages of going more mainstream in the news media (see the May 5, 2015 Subway Fold post entitled The New York Times Introduces Virtual Reality Tech into Their Reporting Operations), and even more so in film (see the March 26, 2015 Subway Fold post entitled Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals), it seems inevitable that VR might turn out to be the next frontier for advertising.

This new marketplace will also include augmented reality, involving the projection of virtual/holographic images within a field of view of the real world. Microsoft recently introduced a very sleek-looking headset for this called the HoloLens, which will be part of their release of Windows 10 expected sometime later this year.

A fascinating report on three new startups in this nascent field entitled Augmented Advertising, by Rachel Metz, appeared in the May/June 2015 edition of MIT Technology Review. (Online, the same article is entitled “Virtual Reality Advertisements Get in Your Face”.) I will sum up, annotate and pose a few additional questions to this. As well, I highly recommend clicking through on the links below to these new companies to fully explore all of the resources on their truly innovative and imaginative sites.

As VR and AR headsets from manufacturers including Oculus, Sony, Microsoft (see the above links), Magic Leap and Samsung are set to enter the consumer marketplace later in 2015, consumers will soon be able to experience video games and movies formatted for these new platforms.

The first company in the article working in this space is called Mediaspike. They develop apps and tools for mobile VR. The demo that the writer Metz viewed with a VR headset placed her in a blimp flying over a city containing billboards for an amusement ride based on the successful movie franchise that began with Despicable Me. The company is developing product placement implementations within these environments using billboards, videos and other methods.

One of the billboards was showing a trailer for the next movie in this series called Minions. While Metz became a bit queasy during this experience (a still common concern for VR users), she nonetheless found it “a heck of a lot more interesting” than the current types of ads seen on websites and mobile.

The second new firm is called Airvertise. They are developing “virtual 3-D models that are integrated with real world locations”. It uses geographic data to create constructs where, as a virtual visitor, you can readily walk around in them. Their first platform will be smartphone apps followed by augmented reality viewers. At the SXSW Festival last March (please see the link again in the third paragraph above to the post about VR at SXSW), the company demoed an iPad app that, using the tablet’s motion sensors, produced and displayed a virtual drone “hovering above the air about 20 feet away” with a banner attached to it. As the user/viewer walks closer to it, the drone’s relative size and spatial orientation correspondingly change.
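The growing apparent size of the virtual drone as the viewer approaches is essentially basic perspective at work: the closer an object is, the larger the angle it subtends in the field of view. Here is a minimal sketch of that relationship (my own illustration of the underlying geometry, not Airvertise’s actual code; the function name and the 4-foot banner are assumptions):

```python
import math

def apparent_angular_size(object_height_ft: float, distance_ft: float) -> float:
    """Angular size (in degrees) that an object subtends at a given distance.

    As the viewer walks closer, this angle grows, which is what makes a
    virtual object rendered at a fixed world size appear larger on screen.
    """
    return math.degrees(2 * math.atan((object_height_ft / 2) / distance_ft))

# A hypothetical 4-foot banner viewed from 20 feet away vs. 5 feet away:
far = apparent_angular_size(4, 20)   # roughly 11.4 degrees
near = apparent_angular_size(4, 5)   # roughly 43.6 degrees
```

An AR app combines this with the tablet’s motion-sensor pose estimate to redraw the object at the correct screen size and orientation for each new viewer position.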

The third startup is called Blippar. Their AR-based app permits commercial content to be viewed on smartphones. Examples include seeing football players when the phone is held up to a can of Pepsi, and shades of nail polish from the cosmetics company Maybelline. The company is currently strategizing about how to create ads in this manner that will appropriately engage consumers but not put them off in any way.

My questions are as follows:

  • Will VR and AR advertising agencies and sponsors open up this field to user-generated ads and commercial content, which have already been successful in a number of ad campaigns for food and cars? Perhaps by open-sourcing their development platforms, crowdsourcing the ads, and providing assistance with such efforts, this new advertising space can gain some additional attention and traction.
  • What exactly is it about the VR and AR experience that will provide the most leverage to advertising agencies and their clients? Is it only limited to the novelty of it – – that might well wear off after a while – – or is there something unique about these technologies that will inform and entertain consumers about goods and services in ways neither previously conceived of nor achieved? Is a critical must-have app or viral ad campaign going to be needed for this to reach a tipping point?
  • Might countering technologies also appear to block VR and AR advertising? For example, Ad Block Plus is a very popular browser add-on that enables users to filter out today’s banner ads and pop-ups online. How might advertisers positively react to such avoidance?
  • Just as advertisers have major presences on the leading social media services such as, among others, Facebook (which now owns Oculus), Twitter and Instagram, do VR and AR similarly lend themselves to being populated by advertisers on such a web-wide scale?

The New York Times Introduces Virtual Reality Tech into Their Reporting Operations

"Mobile World Congress 2015", Image by Jobopa

“Mobile World Congress 2015”, Image by Jobopa

As incredibly vast as New York City is, it has always been a great place to walk around. Its multitude of wonderfully diverse neighborhoods, streets, buildings, parks, shops and endless array of other sites can always be more fully appreciated going on foot here and there in – – as we NYC natives like to call it – – “The City”.

The April 26, 2015 edition of The New York Times Magazine was devoted to this tradition. The lead-off piece by Steve Duenes was entitled How to Walk in New York. This was followed by several other pieces and then reports on 15 walks around specific neighborhoods. (Clicking on the Magazine’s link above and then scrolling down to the second and third pages will produce links to nearly all of these articles.) I was thrilled by reading this because I am such an avid walker myself.

The very next day, on April 27, 2015, Wired.com carried a fascinating report by Angela Watercutter entitled How the NY Times is Sparking the VR Journalism Revolution, about how one of the issue’s rather astonishing supporting graphics was actually created. But even that’s not the half of it – – the NYTimes has made available for downloading a full virtual reality file of the graphic’s complete construction and deconstruction. The Wired.com post contains the link as well as a truly mind-boggling high-speed YouTube video of the graphic’s rapid appearance and disappearance and a screen capture from the VR file itself. (Is “screen capture” really accurate to describe it, or is it something more like a “VR frame”?) This could take news reporting into an entirely new dimension where viewers literally go inside of a story.

I will sum up, annotate and pose a few questions about this story. (For another enthusiastic Subway Fold post about VR, last updated on March 26, 2015, please see Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals.)

This all began on April 11, 2015 when a French artist named JR pieced together, and then removed in less than 24 hours, a 150-foot photograph right across the street from the landmark Flatiron Building. This New York Times-commissioned image was of “a 20-year-old Azerbaijani immigrant named Elmar Aliyev”. It was used on the cover of this special NYTimes Magazine edition. Upon its completion, JR then photographed it from a helicopter hovering above. (See the March 19, 2015 Subway Fold post entitled Spectacular Views of New York, San Francisco and Las Vegas at Night from 7,500 Feet Up for another innovative project involving highly advanced photography of New York also taken from a helicopter.)

The NYTimes deployed VR technology from a company called VRSE.tools to transform this whole artistic experience into a fully immersive presentation entitled Walking New York. The paper introduced this new creation at a news conference on April 27th. To summarize the NYTimes Magazine’s editor-in-chief, Jake Silverstein, this project was chosen for a VR implementation because it would so dramatically enhance a viewer’s experience of it. Otherwise, pedestrians walking over the image across the sidewalk would not nearly get the full effect of it.

Viewing Walking New York in full VR mode will require an app from VRSE’s site (linked above), and a VR viewer such as, among others, Google Cardboard.

The boost to VR as an emerging medium from the NYTimes‘ engagement on this project is quite significant. Moreover, it demonstrates how VR can now be implemented in journalism. Mr. Silverstein, to paraphrase his points of view, believes it can be used to literally and virtually bring someone into a story. Furthermore, by doing so, the effect upon the VR viewer is likely to be an increased amount of empathy for the individuals and circumstances that are the subjects of these more immersive reports.

There will more than likely be a long way to go before “VR filming rigs” can be sent out by news organizations to cover stories as they occur. The hardware is just not that widespread or mainstream yet. As well, the number of people who are trained and know how to use this equipment is still quite small and, even for those who do, preparing such a virtual presentation lags behind today’s pace of news reporting.

Another venture into VR journalism is Newsweek reporter Nonny de la Pena’s reconstruction of the shooting in the Trayvon Martin case. (See ‘Godmother of VR’ Sees Journalism as the Future of Virtual Reality by Edward Helmore, posted on The Guardian’s website on March 11, 2015, for in-depth coverage of her innovative efforts.)

Let’s assume that, out on the not-too-distant horizon, VR journalism gains acceptance, its mobility and ease of use increase, and the rosters of VR-trained reporters and producers grow so that this field undergoes some genuine economies of scale. Then, as with many other life cycles of emergent technologies, the applications in this nascent field would only be limited by the imaginations of its professionals and their audiences. My questions are as follows:

  • What if the leading social media platforms such as Twitter, Facebook (which already purchased Oculus, the maker of VR headsets, for $2B last year), LinkedIn, Instagram (VR Instagramming, anyone?), and others integrate VR into their capabilities? For example, Twitter has recently added a live video feature called Periscope that its users have quickly and widely embraced. In fact, it is already being used for live news reporting as users turn their phones towards live events as they happen. Would they just as readily swarm to VR?
  • What if new startup social media platforms launch that are purely focused on experiencing news, commentary, and discussion in VR?
  • Will previously unanticipated ethical standards be needed and likewise dilemmas result as journalists move up the experience curve with VR?
  • How would the data and analytics firms that parse and interpret social media looking for news trends add VR newsfeeds into their operations and results? (See the Subway Fold posts on January 21, 2015 entitled The Transformation of News Distribution by Social Media Platforms in 2015 and on December 2, 2014 entitled Startup is Visualizing and Interpreting Massive Quantities of Daily Online News Content.)
  • Can and should VR be applied to breaking news, documentaries and news shows such as 60 Minutes? What could be the potential risks in doing so?
  • Can drone technology and VR news gathering be blended into a hybrid flying VR capture platform?

I am also looking forward to seeing what other applications, adaptations and markets for VR journalism will emerge that no one can possibly anticipate at this point.

Virtual Reality Movies Wow Audiences at 2015’s Sundance and SXSW Festivals

Image by mconnors

Image by mconnors

[This post was originally uploaded on December 12, 2014. It has been updated below with new information on December 19, 2014,  January 13, 2015 and March 27, 2015.]

December 12, 2014 Post:

At the 2015 Sundance Film Festival, to be held in Park City, Utah from January 22, 2015 through February 1, 2015, part of this major annual film event is a program called New Frontier. This year it will be presenting 13 virtual reality (VR) films and “experiences”. Advance coverage of this event was reported in an article on Wired.com on December 4, 2014 entitled VR Films Are Going to Be All Over Sundance in 2015 by Angela Watercutter. After reading this exciting preview I wanted to immediately pack a bag and start walking there.

To sum up, annotate and comment upon some of the key points in this story, the platform being used for these presentations will mostly be the Oculus, while Google Cardboard and Samsung’s Gear VR will also be deployed. While the Oculus Rift headset has not yet been released to the consumer public, developers currently do have access to it. As a result, they were able to create and format these soon-to-be-premiered experimental works. This year’s offerings are a much deeper and wider lineup than the much more limited sampling of Oculus-based experiments presented during the 2012 Sundance Festival.

(In a recent Subway Fold post on November 26, 2014 entitled Robots and Diamonds and Drones, Aha! Innovations on the Horizon for 2015, one of the startups briefly mentioned is called Jaunt which is described in the blog post as “… developing an entirely new platform and 360 degree camera to create fully immersive virtual reality movies to be viewed using the versatile new Oculus Rift headset.”)

Attendees at some other recent industry events have responded very favorably to Oculus demonstrations. They included HBO’s presentation of a Game of Thrones experience at this year’s South by Southwest festival, a Jaeger-piloting simulation ¹ at the 2014 Comic-Con in San Diego, and another at the 2014 Electronic Entertainment Expo (E3).

To read what some of the creators involved in Sundance’s VR movies have to say about their creations and some brief descriptions and 2-D graphics of this immersive fare, I very highly recommend clicking through and reading this report in its entirety. They include, among others, news and documentaries, bird flights, travel landscapes, rampaging Kaiju, and several social situations.

I wanna go!

My follow-up questions include:

  • Because VR movie production is entirely digital, can this experience be securely distributed online to other film festivals and film schools to share with and, moreover, inspire new VR cinematic works by writers, directors, producers and actors?
  • Can the Hyve-3D virtual development platform covered in this August 28, 2014 Subway Fold Post entitled Hyve-3D: A New 3D Immersive and Collaborative Design System, be adapted and formatted for the cinema so that audiences can be fully immersed in virtual films without the need for a VR headset?
  • If entertainment companies, movie producers, investors and other supporters line up behind the development and release of VR movies, will this be seen by the public as being more like 3-D movies, where the novelty has quickly worn off ², or more like a fundamental shift in movie production, presentation and marketing? What if, using the Oculus Rift, users could experience movie trailers, if not entire films, at any location? Would this be a market that might draw the attention of Netflix, Hulu, Amazon, Google and other online content distributors and producers?

____________________________
1.  In another Jaeger and Kaiju-related update, there is indeed good news as reported on June 27, 2014 on the HuffingtonPost.com by Jessica Goodman in a story entitled ‘Pacific Rim 2’ Confirmed For 2017 Release Date.

2.  See 2014 Box Office Will Be Hurt By Diminishing Popularity Of 3D Movies: Analyst by David Lieberman, posted on Deadline.com on February 3, 2014. For other new theater experience innovations, see also To Lure Young, Movie Theaters Shake, Smell and Spritz by Brooks Barnes in the November 29, 2014 edition  of The New York Times.

____________________________

December 19, 2014 Update:

The current release of the movie adaptation of the novel Wild by Cheryl Strayed (Knopf, 2011) has been further formatted into a 3-minute supplemental virtual reality movie, as reported in the December 15, 2014 edition of The New York Times by Michael Cieply in an article entitled Virtual Reality ‘Wild’ Trek. This short film is also scheduled to be presented at the 2015 Sundance festival. Using Oculus and Samsung VR technology, this is an immersive meeting with the lead character, played by actress Reese Witherspoon, while she is hiking in the wilderness. She is quoted as being very pleased with the final results of this VR production.

January 13, 2015 Update:

While VR’s greatest core ability is in placing viewers within totally immersive digital environments, this also presents a challenge in keeping them fully focused upon the main narrative. That is, something happening off to the left or right may draw their attention away and thus detract from the experience.

A startup called Visionary VR has developed a system to reconcile this challenge. It enables creators of VR entertainment to concentrate the viewer’s attention upon the action occurring in the stories and games. This was reported in a most interesting article posted on Recode.com on January 5, 2015 entitled In Virtual Reality Movies, You Are the Camera. That Can Be a Problem, but Here’s One Solution, by Eric Johnson. I believe this will keep your attention as a reader, even in the three dimensions of the real world, and recommend clicking through for all of the details. As well, there is a rather spectacular video presented by the founders of the company on the capabilities of their system.

To recap the key points, Visionary VR creates an invisible boundary around the main narrative that alerts viewers when they are looking away into other “zones” within the environment. When this occurs, the narrative is suspended, but viewers can venture into these interactive peripheral areas and further explore elements of the story. Just as easily, they can return their gaze back to the story, which will then re-engage and move forward. Visionary VR has created a platform and toolkit for VR authors and storytellers to generate and edit their work while within a virtual environment itself. When viewing the accompanying video, the interface reminded me of something out of Minority Report.
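The gaze-gating concept described above can be sketched in a few lines of logic: check whether the viewer’s head yaw is inside the narrative’s zone, and pause or resume accordingly. This is purely my own hypothetical illustration of the idea, not Visionary VR’s actual implementation; the names and the 45-degree boundary are assumptions:

```python
def gaze_in_narrative_zone(gaze_yaw_deg: float, zone_half_width_deg: float = 45.0) -> bool:
    """True if the viewer's gaze (yaw, with the main narrative centered at 0
    degrees) is inside the invisible boundary around the main story zone."""
    return abs(gaze_yaw_deg) <= zone_half_width_deg

class StoryPlayer:
    """Suspends the main narrative when the viewer looks into a peripheral
    zone, and resumes it when their gaze returns to the story."""

    def __init__(self) -> None:
        self.playing = True

    def update(self, gaze_yaw_deg: float) -> str:
        if gaze_in_narrative_zone(gaze_yaw_deg):
            self.playing = True
            return "narrative playing"
        self.playing = False
        return "paused: exploring peripheral zone"

player = StoryPlayer()
player.update(10)   # gaze near center: narrative plays
player.update(90)   # gaze far to the right: narrative pauses
player.update(0)    # gaze returns: narrative re-engages
```

A real system would presumably smooth the gaze signal and animate the transition rather than toggling instantly, but the core mechanism is this kind of boundary test run every frame.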

(Btw, it has just been announced that this movie is going to be turned into a TV pilot for Fox according to a story posted on Deadline.com entitled ‘Minority Report’ Gets Fox Pilot Order, by Nellie Andreeva on January 9, 2015. This post also contains a photo from the movie showing this then fictional and now real interface. How cool would it be to see this new pilot in full VR?!)

March 27, 2015 Update:

VR movie technology continues to gather momentum and accolades at 2015’s artistic festivals. Its latest display was held at last week’s (March 13 through 17, 2015) South By Southwest Festival (SXSW). The page for the VR panel and speakers is linked here. Coverage of the event was posted in a very informative and enthusiastic article on VentureBeat.com entitled The Future of Interactive Cinematic VR is Coming, and Fast by Daniel Terdiman, on March 18, 2015.

Those in attendance were truly wowed by what they saw, and, moreover, the potential of fully immersive experiences and storytelling. Please click-through to this story for the full details. I will briefly sum up some of the main points.

The article mostly highlights and highly praises the demo by Jaunt, a startup emerging as one of the innovators in VR movies, mentioned in the initial December 12, 2014 post above. Other VR companies also presented their demos at SXSW.

The Jaunt demo consisted of Paul McCartney playing Live and Let Die in concert. Here’s the link to Jaunt’s Content page containing the stream for this and eight other VR movies (including the Kaiju Fury! film also mentioned in the December 12th post above). In order to immerse yourself in any of these you will need either an Oculus Rift headset or a Google Cardboard device.

VR movie technology is indeed presenting filmmakers with “opportunities that have not been possible before”. This is likewise so for a range of content creators including, among others sure to come, musicians, athletes, interviewers and documentary makers.

Another panelist, Jason Rubin, the head of worldwide studios for Oculus, spoke about the level of progress being made to make these narrative experiences more genuinely interactive with viewers. He believes this will lead to entirely new forms of cinematic experiences.

Arthur van Hoff, Jaunt’s founder and CTO, stated the possibility of VR films where users can follow one particular actor’s perspective and story within the production. (Visionary VR’s technology, described in the January 13, 2015 Update above, might also be helpful in this regard.)

While new “companies, technologies and investors” in this nascent field are expected, Jaunt believes its current two-year lead will give its technology and productions an advantage.