Taking Note of Music Tech’s VC and Accelerator Market Trends in 2017

Today’s music industry is supported by a complementary and thriving ecosystem of venture capital firms and music tech startup accelerators that provide a multitude of innovative services. A fascinating examination of the current state of this ecosystem appeared in an article entitled Music Pushes to Innovate Beyond Streaming, But Investors Play It Safe: Analysis, by Cherie Hu, posted on Billboard.com on 7/24/17. I highly recommend reading it in its entirety for its insights, assessments and accompanying graphics.

I will summarize this feature here, add some links and annotations and, well, venture a few of my own questions. Also, I believe this is a logical follow-up to three previous Subway Fold posts about the music biz including:

Tempo

In mid-2017, the music tech market is generating signals as to its direction and viability. For example, Jawbone, the once-thriving manufacturer of wearable audio devices, is currently being liquidated; SoundCloud, the audio distribution platform, recently let go of 40 percent of its staff only days before the firm’s tenth anniversary; and Pandora has experienced high turnover among its executives while seeking a sale.

Nonetheless, the leaders in music streaming are maintaining “the music industry’s growth”. Music tech showcases and accelerators including SXSW Music Startup Spotlight, the Midemlab Accelerator, and Techstars Music are likewise driving market transformation. During 2017 thus far, 54 music startups from more than 25 cities across the globe have taken part in these three programs. Their submissions have spanned everything from “live music activations and automated messaging to analytics tools for labels and artists”.

While companies such as Live Nation, Balderton Capital and Evolution Media have previously invested in music startups, most investors at this mid-year point have never funded a company in this space before. This is despite the fact that investments in this market sector have rarely returned the 30% that VCs generally seek. As well, a number of established music industry stars are participating as first-time or veteran investors this year.

Of the almost $900 million in music tech funding for the first half of this year, 75% was allocated to streaming services, and 82% of that went to just the leading four companies. However, there remains a “stark disconnect” between the areas where music accelerators principally “lend their mentorship” (“hardware, virtual reality¹, chatbots, label tools”) and the areas where VCs concentrate their funding (“streaming, social media, brands”). Moreover, this situation has the potential of “stifling innovation” across the industry.
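
Running the rough numbers behind those percentages (my own back-of-the-envelope arithmetic from the article’s rounded figures, not calculations it publishes):

$$0.75 \times \$900\text{M} \approx \$675\text{M} \quad \text{(streaming overall)}$$

$$0.82 \times \$675\text{M} \approx \$553\text{M} \quad \text{(the top four services)}$$

In other words, over 60 cents of every music tech dollar invested in the first half of 2017 went to just four streaming companies.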

To date, music accelerators have “successfully given a platform and resources” to some sectors of the industry that VCs don’t often consider. For example, automated messaging and AI-generated music², both categories that music accelerators avoided until recently, now account for 15% of accelerator membership. This expansion into new categories reflects much deeper “tech investment and hiring trends”. Leading music companies are now optimistic about virtual digital assistants (VDAs), including chatbots and voice-activated systems such as Amazon Alexa³. As well, Spotify recently hired away a leading AI expert from Sony.

Rhythm

However, this “egalitarian focus” on significant problems has failed to “translate into the wider investing landscape”, insofar as the streaming services have attracted 75% of music tech funding. The data further shows that licensing/rights/catalog management finished second at 11.1%, social music media third at 7.1%, and music, brands and advertising fourth at 3.9%.

These percentages closely match those for 2016. Currently, many VCs in this sector view streaming “as the safest model available”. It is also one upon which today’s music industry depends for its survival.

Turning from the dollar amounts raised to the number of music tech funding rounds by industry segment, a “slightly more egalitarian landscape” emerges:

  • Music hardware, AI-generated music, and VR and Immersive media each at 5.0%
  • Live music; music brands and advertising; streaming; and social music media each at 15.0%
  • Licensing, rights, and catalog management at 25% (for such companies as Kobalt Music, Stem and Dubset)

Categories that did relatively well in both their number of rounds of funding and accelerator membership were “catalog management, social music platforms, and live music”.

More “futuristic” music tech startups, such as those in hardware and VR, are viewed favorably by “accelerators and conference audiences”, but less so by VCs. Likewise, while corporate giants including Live Nation, Universal Music Group, Citi and Microsoft have announced moves into music VR in the past six months, VC funding for this tech has remained “relatively soft”.

Even more pronounced is the situation where artist and label services such as Instrumental (an influencer discovery platform) and chart monitors like Soundcharts have not raised any rounds of funding “despite unmatched attention from accelerators”. This might be due to these services not being large enough to draw “many traditional investors”.

An even more persistent problem here is that not many VCs “are run by people with experience in the music industry” who are familiar with its particular concerns. One exception is Plus Eight Equity Partners, which is trying to address “this ideological and motivational gap”.

Then there are startups such as 8tracks and Chew that are “experimenting with crowdfunding” in this arena but were not figured into this analysis.

In conclusion, the tension between a “gap in industry knowledge” and the VCs’ preference for “safety and convenience” is blurring the line leading from accelerator to investment for many of these imaginative startups.

My Questions

  • Of those music startups who have successfully raised funding, what factors distinguished their winning pitches and presentations that others can learn from and apply?
  • Do VCs and accelerators really need the insights and advice of music industry professionals or are the numbers, projects and ROIs only what really matters in deciding whether or not to provide support?
  • Would the application of Moneyball principles be useful to VCs and accelerators in their decision-making processes?

 


1.  See the category Virtual and Augmented Reality for other Subway Fold posts on a range of applications of these technologies.

2.  For a report on recent developments, see A New AI Can Write Music as Well as a Human Composer, by Bartu Kaleagasi, posted on Futurism.com on 3/9/17.

3.  Other examples of VDAs include Apple’s Siri, Google’s Assistant and Microsoft’s Cortana.

Hacking Matter Really Matters: A New Programmable Material Has Been Developed

Image from Pixabay

The sales receipt from The Strand Bookstore in New York is dated April 5, 2003. It remains tucked into one of the most brain-bendingly different books I have ever bought and read, called Hacking Matter: Levitating Chairs, Quantum Mirages, and the Infinite Weirdness of Programmable Atoms (Basic Books, 2003), by Wil McCarthy. It was a fascinating deep dive into what was then the nascent nanotechnology research on creating a form of “programmable atoms” called quantum dots. This technology has since found applications in the production of semiconductors.

Fast forward thirteen years to a recent article entitled Exoskin: A Programmable Hybrid Shape-Changing Material, by Evan Ackerman, posted on IEEE Spectrum on June 3, 2016. This covers an all-new and entirely different development, quite separate from quantum dots, but nonetheless a current variation on the concept that matter can be programmed for new applications. While we usually think of programming as involving systems and software, this new story takes this long-established process and literally stretches it in some entirely new directions.

I highly recommend reading this most interesting report in its entirety and viewing the two short video demos embedded within it. I will summarize and annotate it, and then pose several questions of my own on this, well, matter. I also think it fits in well with these 10 Subway Fold posts on other recent developments in material science including, among others, such way cool stuff as Q-Carbon, self-healing concrete and metamaterials.

Matter of Fact

The science of programmable matter is still in its formative stages. The Tangible Media Group at the MIT Media Lab is currently working on this challenge as one of its scores of imaginative projects. Basheer Tome, a student pursuing his Master’s Degree in this group, is working on a type of programmable material he calls “Exoskin”, which he describes as a “membrane-backed rigid material”. It is composed of “tessellated triangles of firm silicone mounted on top of a stack of flexible silicone bladders”. By inflating these bladders in specific ways, Exoskin can change its shape in reaction to the user’s touch. This activity can, in turn, be used to relay information and “change functionality”.

Although this might sound a bit abstract, the two accompanying videos make the Exoskin’s operations quite clear. For example, it can be applied to a steering wheel which, through “tactile feedback”, can inform the driver about direction-finding using GPS navigation and other relevant driving data. This is intended to lower driver distractions and “simplify previously complex multitasking” behind the wheel.
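
To make that steering wheel scenario concrete, here is a minimal sketch of what such a control mapping could look like. It is purely illustrative: the bladder count, layout and function names below are my own inventions, as the article does not publish any of Exoskin’s actual control code.

```python
# Hypothetical sketch of mapping a GPS navigation cue to an inflation
# pattern across a ring of Exoskin bladders under a steering wheel rim.
# All names and the 12-bladder layout are assumptions for illustration.

NUM_BLADDERS = 12  # assumed ring of bladders around the rim

def inflation_pattern(turn_direction, intensity=1.0):
    """Return per-bladder inflation levels (0.0 to 1.0) for a turn cue."""
    pattern = [0.0] * NUM_BLADDERS
    # Raise the bladders on the side of the upcoming turn so the driver
    # feels the cue under the corresponding hand.
    if turn_direction == "left":
        side = range(0, NUM_BLADDERS // 2)
    else:
        side = range(NUM_BLADDERS // 2, NUM_BLADDERS)
    for i in side:
        pattern[i] = intensity
    return pattern

print(inflation_pattern("left", 0.8))
```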

By its very nature, the Exoskin makes use of haptics (using touch as a form of interface). One of the advantages of this approach is that it enables “fast reflexive motor responses to stimuli”. Moreover, the Exoskin’s inputs “are both highly tactily perceptible and visually interpretable”.

Fabrication Issues

A gap still exists between the current prototype and a future commercially viable product in terms of the user’s degree of “granular control” over the Exoskin. The number of “bladders” underneath the rigid top material will play a key role in this. Under existing fabrication methods, multiple bladders in certain configurations are “not practical” at this time.

However, this restriction might be changing. Soon it may be possible to produce a bladder for each “individual Exoskin element” rather than a single bladder for all of them. (Again, the videos present this.) This would involve a system of “reversible electrolysis” that alternately separates water into hydrogen and oxygen and then recombines them back into water. Other options to solve this fabrication issue are also under consideration.
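
For reference, the chemistry driving that cycle is the standard electrolysis reaction and its reverse (the equations below are textbook chemistry; their application to bladder inflation is as the article describes):

$$2\,\mathrm{H_2O\,(l)} \;\longrightarrow\; 2\,\mathrm{H_2\,(g)} + \mathrm{O_2\,(g)} \quad \text{(electrolysis)}$$

$$2\,\mathrm{H_2\,(g)} + \mathrm{O_2\,(g)} \;\longrightarrow\; 2\,\mathrm{H_2O\,(l)} \quad \text{(recombination)}$$

Presumably the appeal is that the gases occupy vastly more volume than the liquid water they come from, so switching between the two states could inflate or deflate a tiny bladder purely electrically.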

Mr. Tome hopes this line of research disrupts the distinction between what is “rigid and soft” as well as “animate and inanimate”, and inspires Human-Computer Interaction researchers at MIT to create “more interfaces using physical materials”.

My Questions

  • In what other fields might this technology find viable applications? What about medicine, architecture, education and online gaming just to begin?
  • Might Exoskin present new opportunities to enhance users’ experiences with the current and future releases of virtual reality and augmented reality systems? (These 15 Subway Fold posts cover a sampling of trends and developments in VR and AR.)
  • How might such an Exoskin-embedded steering wheel possibly improve drivers’ and riders’ experiences with Uber and other ride-sharing services?
  • What entrepreneurial opportunities in design, engineering, programming and manufacturing might present themselves if Exoskin becomes commercialized?

Charge of the Light Brigade: Faster and More Efficient New Chips Using Photons Instead of Electrons

"PACE - PEACE" Image by Etienne Valois

“PACE – PEACE” Image by Etienne Valois

Alfred, Lord Tennyson wrote his immortal classic poem, The Charge of the Light Brigade, in 1854 to honor the dead heroes of a doomed cavalry charge at the Battle of Balaclava during the Crimean War. Moreover, it strikingly portrayed the horrors of war. In just six short verses, he created a monumental work that has endured for 162 years.

The poem came to mind last week after reading two recent articles on seemingly disparate topics. The first was posted on The New Yorker’s website on December 30, 2015, entitled In Silicon Valley Now, It’s Almost Always Winner Takes All by Om Malik. This is a highly insightful analysis of how and why tech giants such as Google in search, Facebook in social networking, and Uber in transportation have come to dominate their markets. In essence, competition is a fierce and relentless battle in the global digital economy. The second was an article on CNET.com posted on December 23, 2015, entitled Chip Promises Faster Computing with Light, Not Electrical Wires by Stephan Shankland. I highly recommend reading both of them in their entirety.

Taken together, the word “light”, in both historical poetry and tech, seems to tie these two pieces together, insofar as contemporary competition in tech markets is often described in military terms and metaphors. Focusing here on the second story, about a tantalizing advance in chip design and fabrication: will this technology survive as it moves forward into the brutal and relentlessly “winner takes all” marketplace? I will summarize and annotate the story, and pose some of my own, hopefully en-light-ening questions.

Forward, the Light Brigade

A team of university researchers, including Vladimir Stojanovic of the University of California at Berkeley, who led the development, Krste Asanovic, also from Berkeley, Rajeev Ram from MIT, and Milos Popovic from the University of Colorado at Boulder, has created a new type of processing chip “that transmits data with light”. As well, its architecture significantly increases processing speed while reducing power consumption. A report on the team’s work was published in the December 24, 2015 issue of Nature (subscription required), in an article entitled Single-chip Microprocessor That Communicates Directly Using Light by Chen Sun, Mark T. Wade, Yunsup Lee, et al.

This approach of “using silicon as an optical medium” is, according to Wikipedia, called silicon photonics. IBM (see this link) and Intel (see this link) have likewise been involved in R&D in this field, but have yet to introduce anything ready for the market.

However, this team of university researchers believes their new approach might be introduced commercially within a year. While their efforts do not make chips run faster per se, the photonic elements “keep chips supplied with data”, which keeps them from losing time idling. Thus, they can process data faster.
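
A toy model makes the idling point concrete. This is my own illustrative arithmetic, not a calculation from the article: even at an unchanged clock speed, a processor that spends less time stalled waiting for data completes more useful operations per second.

```python
# Illustrative model: throughput lost to stalls while waiting for data.
# All of the numbers below are invented for demonstration, not measured.

def effective_throughput(peak_ops_per_s, compute_time, stall_time):
    """Achieved ops/s for a chip that computes, then stalls, repeatedly."""
    utilization = compute_time / (compute_time + stall_time)
    return peak_ops_per_s * utilization

# Same chip, fed by a slow electrical link vs. a faster optical link:
print(effective_throughput(1e9, compute_time=0.7, stall_time=0.3))   # 7.0e8 ops/s
print(effective_throughput(1e9, compute_time=0.7, stall_time=0.05))  # ~9.3e8 ops/s
```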

Currently (no pun intended), electrical signals traverse metal wiring in computing and communications devices and networks across the world. For data traveling greater national and international distances, the electronic signals are converted into light and sent along high-speed fiber-optic cables. Nonetheless, this approach “isn’t cheap”.

Half a League Onward

What the university researchers’ team has done is create chips with “photonic components” built into them. If they succeed in scaling up and commercializing their creation, consumers will likely be the beneficiaries. These advantages will probably manifest themselves first in data centers, where they could speed up:

  • Google searches
  • Facebook image recognition
  • Other “performance-intensive features not economical today”
  • The removal of processing bottlenecks, which would also conserve battery life in smartphones and other personal computing platforms

Professor Stojanovic believes that one of their largest challenges is to make this technology affordable before it can later be implemented in consumer-level computing and communications devices. He is sanguine that such economies of scale can be reached. He anticipates further applications of this technology to enable chips’ onboard processing and memory components to communicate directly with each other.

Additional integrations of silicon photonics might be seen in the lidar remote sensing systems of self-driving cars¹, as well as in brain imaging² and environmental sensors. It also holds the potential to alter the traditional methods by which computers are assembled. For example, the length of network cables is limited by how far data can pass through them quickly and efficiently before needing amplification along the way. Optical links may permit data to be transferred significantly further along network cabling. The research team’s “prototype used 10-meter optical links”, but Professor Stojanovic believes this could eventually be lengthened to a kilometer. This could potentially result in meaningful savings in energy, hardware and processing efficiency.

Two startups that are also presently working in the silicon photonics space include:

My Questions:

  • Might another one of silicon photonics’ virtues be that it is partially fabricated from more sustainable materials, primarily silicon derived from sand rather than various metals?
  • Could silicon photonics chips and architectures be a solution to the very significant computing needs of the virtual reality (VR) and augmented reality (AR) systems that will be coming onto the market in 2016? This issue was raised in a most interesting article posted on Bloomberg.com on December 30, 2015 entitled Few Computers Are Powerful Enough to Support Virtual Reality by Ian King. (See also these 13 Subway Fold posts on a range of VR and AR developments.)
  • What other new markets, technologies and opportunities for entrepreneurs and researchers might emerge if the university research team’s chips achieve their intended goals and succeed in making it to market?

May 17, 2017 Update: For an update on one of the latest developments in photonics, with potential applications in advanced computing and materials science, see Photonic Hypercrystals Are Now a Reality and Light Will Never Be the Same, by Dexter Johnson, posted on May 10, 2017, on IEEESpectrum.com.


1.  See these six Subway Fold posts for references to autonomous cars.

2.  See these four Subway Fold posts concerning certain developments in brain imaging technology.

Virtual Reality Universe-ity: The Immersive “Augmentarium” Lab at the U. of Maryland

"A Touch of Science", Image by Mars P.

“A Touch of Science”, Image by Mars P.

Go to classes. Sit through a series of 50-minute lectures. Drink coffee. Pay attention and take notes. Drink more coffee. Go to the library to study, do research and complete assignments. Rinse and repeat for the rest of the semester. Then take your final exams and hope that you passed everything. More or less, things have traditionally been this way in college since Hector was a pup.

Might students instead be interested in participating in the new and experimental learning laboratory called the Augmentarium at the University of Maryland, where immersing themselves in their studies takes on an entirely new meaning? This is a place where virtual reality (VR) is being tested and integrated into the learning process. (These 14 Subway Fold posts cover a range of VR and augmented reality [AR] developments and applications.)

Where do I sign up for this?¹

The story was covered in a fascinating report that appeared on December 8, 2015 on the website of the Chronicle of Higher Education entitled Virtual-Reality Lab Explores New Kinds of Immersive Learning, by Ellen Wexler. I highly recommend reading this in its entirety, as well as clicking on the Augmentarium link to learn about some of these remarkable projects. I also suggest checking out the hashtag #Augmentarium on Twitter for the very latest news and developments. I will summarize and annotate this story, and pose some of my own questions right after I take off my own imaginary VR headset.

Developing VR Apps in the Augmentarium

In 2014, Brendan Iribe, the co-founder of the VR headset company Oculus² and a University of Maryland alumnus, donated $31 million to the University for its development of VR technology³. During the same year, with additional funding obtained from the National Science Foundation, the Augmentarium was built. Currently, researchers at the facility are working on applications of VR to “health care, public safety, and education”.

Professor Ramani Duraiswami, a PhD and co-founder of a startup called VisiSonics (developers of 3D audio and VR gaming systems), is involved with the Augmentarium. His work is in the area of audio, which he believes has a great effect upon how people perceive the world around them. He further thinks that an audio or video lecture presented via distance learning can be greatly enhanced by using VR to, in his words, make “the experience feel more immersive”. He feels this would make you feel as though you are in the very presence of the instructor⁴.

During a recent showcase there, Professor Duraiswami demoed 3D sound⁵ and a short VR science fiction production called Fixing Incus. (This link is meant to be played on a smartphone that is then embedded within a VR viewer/headset.) This implementation showed the audience what it was like to be immersed in a virtual environment where, when they moved their heads and line of sight, what they were viewing changed correspondingly and seamlessly.

Enhancing Virtual Immersions for Medicine and Education

Amitabh Varshney, the Director of the University’s Institute for Advanced Computer Studies, is now researching “how the brain processes information in immersive environments” and how this differs from how it processes information on a computer screen⁶. He believes that VR applications in the classroom will enable students to immerse themselves in their subjects, such as being able to “walk through buildings they design” and “explore” them beyond “just the equations” involved in creating these structures.

At the lab’s recent showcase, he provided the visitors with (non-VR) 3D glasses and presented “an immersive video of a surgical procedure”. He drew the audience’s attention to the doctors “crowding around” the operating table. He believes that the use of 3D headsets would provide medical students with a better means to “move around” and get an improved sense of what the experience in the operating room is actually like. (The September 22, 2015 Subway Fold post entitled VR in the OR: New Virtual Reality System for Planning, Practicing and Assisting in Surgery is also on point and provides extended coverage of this topic.)

While today’s early iterations of VR headsets (either available now or coming early in 2016) are “cumbersome”, researchers hope that they will evolve (in a manner similar to mobile phones which, as mentioned above, are presently a key element in VR viewers) and be applied in “hospitals, grocery stores and classrooms”. Director Varshney can see them possibly developing along an even faster timeline.

My Questions

  • Is the establishment and operation of the Augmentarium a model that other universities should consider as a means to train students in this field, attract donations, and incubate potential VR and AR startups?
  • What entrepreneurial opportunities might exist for consulting, engineering and tech firms to set up comparable development labs at other schools and in private industry?
  • What other types of academic courses would benefit from VR and AR support? Could students now use these technologies to create or support their academic projects? What sort of grading standards might be applied to them?
  • Do the rapidly expanding markets for VR and AR require that some group in academia and/or the government establish technical and perhaps even ethical standards for such labs and their projects?
  • How are relevant potential intellectual property and technology transfer issues going to be negotiated, arbitrated and litigated if needed?

 


1.  Btw, has anyone ever figured out how the very elusive and mysterious “To Be Announced (TBA)”, the professor who appears in nearly all course catalogs, ends up teaching so many subjects at so many schools at the same time? He or she must have an incredibly busy schedule.

2.  These nine Subway Fold posts cover, among other VR and AR related stories, the technology of Oculus.

3.  This donation was reported in a September 11, 2014 article in The Washington Post entitled Brendan Iribe, Co-founder of Oculus VR, Makes Record $31 Million Donation to U-Md by Nick Anderson.

4.  See also the February 18, 2015 Subway Fold post entitled A Real Class Act: Massive Open Online Courses (MOOCs) are Changing the Learning Process.

5.  See also Designing Sound for Virtual Reality by Todd Baker posted on Medium.com on December 21, 2015, for a thorough overview of this aspect of VR, and the August 5, 2015 Subway Fold post entitled  Latest Census on Virtual Senses: A Seminar on Augmented Reality in New York covering, among other AR technologies, the development work and 3D sound wireless headphones of Hooke Audio.

6.  On a somewhat related topic, see the December 18, 2015 Subway Fold post entitled Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information.

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

"Synapse", Image by Allan Ajifo

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything, and where would he put it all?¹ This often comes to mind whenever I experience the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the Web, almost everyone I know has likewise felt overwhelmed as the amount of information we are inundated with every day becomes increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

A fascinating report posted on Phys.org on December 15, 2015, entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff, describes the latest research into how this happens. I will summarize and annotate this, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet, no matter the font, or an individual, regardless of their clothing and grooming, is always recognizable. We can also identify objects even when our view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment throughout our lives.

A recent discovery by a team of researchers at the Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision⁴”. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason humans can quickly make sense of a very complex and robust world is because, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests'”, which measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked whether they could identify them again when shown only a much smaller segment of the image. This led to the first of their two principal discoveries: the test subjects required only 0.15% of the data to repeat their identifications.
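
Random projection is a well-established technique in its own right, and a toy version of it is easy to demonstrate. The sketch below is my own illustration of the general idea, not the authors’ code: a fixed random matrix compresses 150-“pixel” vectors down to a dozen dimensions, and two synthetic categories remain separable afterward.

```python
# Toy demonstration of random projection: the synthetic data, noise level
# and dimensions are invented for illustration, not from the study itself.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, k = 150, 12  # original dimension; much smaller projected dimension

# Two synthetic "categories": noisy variations around two prototype images.
protos = rng.random((2, n_pixels))
images = np.vstack([p + 0.1 * rng.standard_normal((50, n_pixels)) for p in protos])
labels = np.repeat([0, 1], 50)

# Project every image through one fixed random matrix: 150 dims -> 12 dims.
R = rng.standard_normal((n_pixels, k)) / np.sqrt(k)
projected = images @ R

# Nearest-centroid classification still works in the projected space.
centroids = np.array([projected[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((projected[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("accuracy in projected space:", (pred == labels).mean())
```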

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics) to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, the two groups performed with nearly equal results. Moreover, these results proved that it can be predicted which “data will be the hardest to learn over time”.

In testing their theory, the team prepared 3 different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) Next, they drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds, and then they were shown 16 of the random sketches. Dr. Vempala was “surprised by how close the performance was” between the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support that it might be a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications for virtual reality and augmented reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator, see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

Summary of the Media and Tech Preview 2016 Discussion Panel Held at Frankfurt Kurnit in NYC on December 2, 2015

"dtv svttest", Image by Karl Baron

“dtv svttest”, Image by Karl Baron

GPS everywhere notwithstanding, there are still maps on the walls in most buildings that have a red circle somewhere on them accompanied by the words “You are here”. This is to reassure and reorient visitors by giving them some navigational bearings. Thus you can locate where you are at the moment and then find your way forward.

I had the pleasure of attending an expert panel discussion last week, all of whose participants did an outstanding job of analogously mapping where the media and technology are at the end of 2015 and where their trends are heading going into the New Year. Entitled Digital Breakfast: Media and Tech Preview 2016, it was held at the law firm of Frankfurt Kurnit Klein & Selz in midtown Manhattan. It was organized and presented by Gotham Media, a New York based firm engaged in “Digital Strategy, Marketing and Events”, as per their website.

This hour and a half presentation was a top-flight and highly enlightening event from start to finish. My gratitude and admiration for everyone involved in making this happen. Bravo! to all of you.

The panelists’ enthusiasm and perspectives fully engaged and transported the entire audience. I believe that everyone there appreciated and learned much from all of them. The participants included:

The following is a summary based on my notes.

Part 1:  Assessments of Key Media Trends and Events in 2015

The event began on an unintentionally entertaining note when one of the speakers, Jesse Redniss, accidentally slipped out of his chair. Someone in the audience called out “Do you need a lawyer?”, and considering the location of the conference, the room erupted into laughter.¹

Once the ensuing hilarity subsided, Mr. Goldblatt began by asking the panel for their media highlights for 2015.

  • Ms. Bond said it was the rise of streaming TV, citing Netflix and Amazon, among other industry leaders. For her, this is a time of interesting competition as consumers have increasing control over what they view. She also believes that this is a “fascinating time” for projects and investments in this market sector. Nonetheless, she does not think that cable will disappear.
  • Mr. Kurnit said that Verizon’s purchase of AOL was one of the critical events of 2015, as Verizon “wants to be 360” and this type of move might portend the future of TV. The second key development was the emergence of self-driving cars, which he expects to see implemented within the next 5 to 15 years.
  • Mr. Redniss concurred on Verizon’s acquisition of AOL. He sees other activity such as the combination of Comcast and Universal as indicative of an ongoing “massive media play” versus Google and Facebook. He also mentioned the significance of Nielsen’s Total Audience Measure service.²
  • Mr. Sreenivasan stated that social media is challenging, as indicated by the recent appearance of “Facebook fatigue” affecting its massive user base. Nonetheless, he said “the empire strikes back”, as evidenced by the company’s strong financial performance and the recent launch of Chan Zuckerberg LLC to eventually distribute the couple’s $45B fortune to charity. He also sees the current market looking “like 2006 again”, insofar as podcasts, email and blogs make it easy to create and distribute content.

Part 2: Today’s Golden Age of TV

Mr. Goldblatt asked the panel for their POVs on what he termed the current “Golden Age of TV” because of the increasing diversity of new platforms, expanding number of content providers and the abundance of original programming. He started off by asking them for their market assessments.

  • Ms. Bond said that the definition of “television” is now “any video content on any screen”. As a ubiquitous example, she cited content on mobile platforms. She also noted the proliferation of payment methods as driving this market.
  • Mr. Kurnit said that the industry would remain a bit of a “mess” for the next three or four years because of the tremendous volume of original programming, businesses that operate as content aggregators, and pricing differentials. Sometime thereafter, these markets will “rationalize”. Nonetheless, the quality of today’s content is “terrific”, with examples such as the programs on AMC and HBO‘s Game of Thrones. He also said that an “unbundled model” of content offerings would enable consumers to watch anywhere.
  • Mr. Redniss believes that “mobile transforms TV” insofar as smartphones have become the “new remote control”, providing both access to content and “discoverability” of new offerings. He predicted that content would become “monetized across all screens”.
  • Mr. Sreenivasan mentioned the growing popularity of binge-watching as being an important phenomenon. He believes that the “zeitgeist changes daily” and that other changes are being “led by the audience”.

The panel moved to group discussion mode concerning:

  • Consumer Content Options: Ms. Bond asked how the audience will pay for either bundled or unbundled programming options. She believes that having this choice will provide consumers with “more control and options”. Mr. Redniss then asked how many apps or services consumers will be willing to pay for. He predicted that “everyone will have their own channel”. Mr. Kurnit added that he thought there are currently too many options and that “skinny bundles” of programming will be aggregated. Mr. Sreenivasan pointed to the “Amazon model”, where much content is available but is also available elsewhere, and to Netflix’s offering of 30 original shows. He also wanted to know “Who will watch all of this good TV?”
  • New Content Creation and Aggregation: Mr. Goldblatt asked the panelists whether a media company can be both a content aggregator and a content creator. Mr. Kurnit said yes, and Mr. Redniss immediately followed by citing the long-tail effect (the statistical pattern in business analytics where large numbers of data points lie away from the top, or “head”, of the distribution; a brief formalization follows this list)³. Therefore, online content providers are not bound by the same rules as the TV networks. Still, he could foresee some of Amazon’s and Netflix’s original content ending up being broadcast on them. He also gave the example of Netflix’s House of Cards original programming as being indicative of the “changing market for more specific audiences”. Ultimately, he believes that meeting such audiences’ needs is part of “playing the long game” in this marketplace.
  • Binge-Watching: Mr. Kurnit followed up by predicting that binge-watching and the “binge-watching bucket” will go away. Mr. Redniss agreed with him and, moreover, talked about the “need for human interaction” to build up audiences. This now takes the form of “superfans” discussing each episode in online venues. For example, he pointed to the current massive marketing campaign built upon finding out the fate of Jon Snow on Game of Thrones.
  • Cord-Cutting: Mr. Sreenivasan believes that we will still have cable in the future. Ms. Bond said that service offerings like Apple TV will become more prevalent. Mr. Kurnit said he currently has 21 cable boxes. Mr. Redniss identified himself as more of a cord-shaver who, through the addition of Netflix and Hulu, has reduced his monthly cable bill.
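
(As promised above, a brief gloss on the long tail; this formalization is mine, not the panel’s. One common model treats demand for the item of popularity rank r as a power law:

$$d(r) \;\propto\; r^{-\alpha}, \qquad \alpha > 0$$

Because this curve decays slowly, the summed demand across the many niche items in the tail can rival the demand for the few hits at the head, which is why online aggregators can profitably serve audiences that scheduled broadcast networks cannot.)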

Part 3: Virtual Reality (VR) and Augmented Reality (AR)

Moving on to two of the hottest media topics of the day, virtual reality and augmented reality, the panelists gave their views.

  • Mr. Sreenivasan expressed his optimism about the prospects of VR and AR, citing the pending market launches of the Oculus Rift headset and Facebook 360 immersive videos. The emergence of these technologies is creating a “new set of contexts”. He also spoke proudly of the Metropolitan Museum Media Lab using Oculus for an implementation called Diving Into Pollock (see the 10th project down on this page), which enables users to “walk into a Jackson Pollock painting”.
  • Mr. Kurnit raised the possibility of using Oculus to view Jurassic Park. In terms of movie production and immersion, he said “This changes everything”.
  • Mr. Redniss said that professional sports were a whole new growth area for VR and AR, where you will need “goggles, not a screen”. Mr. Kurnit followed up mentioning a startup that is placing 33 cameras at Major League Baseball stadiums in order to provide 360 degree video coverage of games. (Although he did not mention the company by name, my own Googling indicates that he was probably referring to the “FreeD” system developed by Replay Technologies.)
  • Ms. Bond posed the question “What does this do for storytelling?”4

(See also these 12 Subway Fold posts for extensive coverage of VR and AR technologies and applications.)

Part 4: Ad-Blocking Software

Mr. Goldblatt next asked the panel for their thoughts about the impacts and economics of ad-blocking software.

  • Mr. Redniss said that ad-blocking apps will affect how advertisers get their online audience’s attention. He thinks a workable alternative is to use technology to “stitch their ads into content” more effectively.
  • Mr. Sreenivasan believes that “ads must get better” in order to engage their audience, rather than have viewers looking for means to avoid them. He noted another approach used with the show Fargo, where the network’s programming does not permit viewers to fast-forward through the ads.
  • Mr. Kurnit expects that ads will be blocked based on the popularity and extensibility of ad-blocking apps. Thus, he also believes that ads need to improve but he is not confident of the ad industry’s ability to do so. Furthermore, when advertisers are more highly motivated because of cost and audience size, they produce far more creative work for events like the NFL Super Bowl.

Someone from the audience asked the panel how ads will become integrated into VR and AR environments. Mr. Redniss said this will happen in cases where this technology can reproduce “real world experiences” for consumers. An example of this is the Cruise Ship Virtual Tours available on Carnival Cruise’s website.

(See also this August 13, 2015 Subway Fold post entitled New Report Finds Ad Blockers are Quickly Spreading and Costing $Billions in Lost Revenue.)

Part 5: Expectations for Media and Technology in 2016

  • Mr. Sreenivasan thinks that geolocation technology will continue to find new applications in “real-life experiences”. He gave as an example the use of web beacons by the Metropolitan Museum.
  • Ms. Bond foresees more “one-to-one” and “one-to-few” messaging capabilities, branded emojis, and a further examination of the “role of the marketer” in today’s media.
  • Mr. Kurnit believes that drones will continue their momentum into the mainstream. He sees the sky filling up with them as they are “productive tools” for a variety of commercial applications.
  • Mr. Redniss expressed another long-term prospect of “advertisers picking up broadband costs for consumers”. This might take the form of ads being streamed to smart phones during NFL games. In the shorter term, he can foresee Facebook becoming a significant simulcaster of professional sporting events.

 


1.  This immediately reminded me of a similar incident years ago when I was attending a presentation at the local bar association on the topic of litigating cases involving brain injuries. The first speaker was a neurologist who opened by telling the audience all about his brand new laptop and how it was the latest state-of-the-art model. Unfortunately, he could not get it to boot up no matter what he tried. Someone from the back of the audience then yelled out “Hey doc, it’s not brain surgery”. The place went into an uproar.

2.  See also these other four Subway Fold posts mentioning other services by Nielsen.

3.  For a fascinating and highly original book on this phenomenon, I very highly recommend reading The Long Tail: Why the Future of Business Is Selling Less of More (Hyperion, 2005), by Chris Anderson. It was also mentioned in the December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?.

4.  See also the November 4, 2014 Subway Fold post entitled Say, Did You Hear the Story About the Science and Benefits of Being an Effective Storyteller?

Artificial Fingerprints: Something Worth Touching Upon

"030420_1884_0077_x__s", Image by TNS Sofres

“030420_1884_0077_x__s”, Image by TNS Sofres

Among the recent advancements in the replication of various human senses, particularly for prosthetics and robotics, scientists have just made another interesting achievement by creating, of all things, artificial fingerprints that can actually sense certain real-world stimuli. This development could have some potentially very productive, and conductive, applications.

Could someone please cue up Human Touch by Bruce Springsteen for this?

We looked at a similar development in artificial human vision just recently in the October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

This latest digital, and all-digit, story was covered in a fascinating report posted on Sciencemag.org on October 30, 2015 entitled New Artificial Fingerprints Feel Texture, Hear Sound by Hanae Armitage. I will summarize and annotate it, and then add some of my own non-artificial questions.

Design and Materials

An electronic material that “mimics the swirling design” of fingerprints has been created at the University of Illinois, Urbana-Champaign, and is still under development in the lab. It can detect pressure, temperature and sound. The researchers who devised it believe it could be helpful in artificial limbs and perhaps even in enhancing our own organic senses.

Dr. John Rogers, a member of the development team, sees this new material as an addition to the “sensor types that can be integrated with the skin”.

Scientists have been working for years on such materials, called electronic skins (e-skins). Some of them can imitate human skin’s ability to monitor pulse and temperature. (See also the October 18, 2015 Subway Fold post entitled Printable, Temporary Tattoo-like Medical Sensors are Under Development.) Dr. Hyunhyub Ko, a chemical engineer at Ulsan National Institute of Science and Technology in South Korea and another member of the artificial fingerprints development team, noted that further scientific challenges remain “in replicating fingertips” with their ability to sense very small differences in textures.

Sensory Perceptions

In the team’s work, Dr. Ko and the others began with “a thin, flexible material” textured with features much like human fingerprints. Next, they used this to create a “microstructured ferroelectric skin“. This contains small embedded structures called “microdomes” (as shown in an illustration accompanying the AAAS.org article) that enable the following sensory perceptions in the e-skin (a toy readout sketch follows this list)*:

  • Pressure: When outside pressure moves two layers of this material together, it generates a small electric current that is monitored through embedded electrodes. In effect, the greater the pressure, the greater the current.
  • Temperature: The e-skin relaxes in warmer temperatures and stiffens in colder temperatures, likewise generating changes in the electrical current and thus enabling it to sense temperature changes.
  • Sound: While not originally expected, the e-skin was also found to be sensitive to sound. This emerged in testing by Dr. Ko and his team. They electronically measured the vibrations from pronouncing the letters in the word “skin” right near the e-skin. The results showed that the vibrations affected the microdomes and, in turn, registered changes in the electric current.
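
Here is the deliberately simplified sketch promised above of what reading pressure back from the microdome current might look like. The calibration constant and function names are invented for illustration; the article reports the qualitative relationships, not these numbers.

```python
# Hypothetical e-skin readout: pressure estimated from microdome current.
# The linear calibration below is an assumption for illustration only.

K_PRESSURE = 0.5  # assumed calibration slope, in nA of current per kPa

def pressure_from_current(current_na, baseline_na=0.0):
    """Estimate applied pressure (kPa) from measured current (nA)."""
    # Greater pressure squeezes the layers together and raises the current,
    # so subtract the resting baseline and rescale.
    return max(current_na - baseline_na, 0.0) / K_PRESSURE

print(pressure_from_current(4.2))  # ~8.4 kPa under the assumed calibration
```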

Dr. Ko said his next challenge is how to transmit all of these sensations to the human brain. This has been done elsewhere in e-skins using optogenetics (the use of light to control neurons that have been genetically modified), but he plans to research other technologies for this. Specifically, given the increasing scientific interest and development in skin-mounted sensors (such as those described in the October 18, 2015 Subway Fold post linked above), engineering these will require a smart combination of “ideas and materials”.

My Questions

  • Might e-skins have applications in virtual reality and augmented reality systems for medicine, engineering, manufacturing, design, robotics, architecture, and gaming? (These 11 Subway Fold posts cover a range of new developments and applications of these technologies.)
  • What other fields and marketplaces might also benefit from integrating e-skin technology? What entrepreneurial opportunities might emerge here?
  • Could e-skins work in conjunction with the system being developed in the June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch”?

 


For an absolutely magnificent literary exploration of the human senses, I recommend A Natural History of the Senses by Diane Ackerman (Vintage, 1991) in the highest possible terms. It is a gem in both its sparkling prose and engaging subject.


See this Wikipedia page for detailed information and online resources about the field known as haptic technology.