Digital Smarts Everywhere: The Emergence of Ambient Intelligence

Image from Pixabay

The Troggs were a legendary rock and roll band who were part of the British Invasion in the mid-1960s. They have always been best known for their iconic rocker Wild Thing, which was also the only Top 10 hit ever to feature an ocarina solo. How cool is that! The band went on to have two other major hits, With a Girl Like You and Love is All Around.¹

The third of the band’s classic singles can be stretched a bit into a helpful metaphor for an emerging form of pervasive “all around”-edness, this time in a more technological context. A fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, on May 7, 2016, offers a compelling (but not too rocking) analysis of how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day.

All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter, and soon thereafter still smarter, technologies is comparably affecting people’s lives at many different levels.²

We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.

I will summarize and annotate it, add some additional context, and then pose some of my own Troggs-inspired questions.

Internet of Experiences

Digital this, that and everything is everywhere in today’s world. There is a surging confluence of connected personal and business devices, the Internet, and the Internet of Things (IoT)³. Woven closely together on a global scale, we have essentially built “a digital intelligence network that transcends all that has gone before”. In some cases, this quantum of advanced technologies gains the “ability to sense, predict and respond to our needs”, and is becoming part of everyone’s “natural behaviors”.

A fourth industrial revolution might even manifest itself in the form of machine intelligence whereby we will interact with the “always-on, interconnected world of things”. As a result, the Internet may become characterized more by experiences where users will converse with ambient intelligent systems everywhere. The supporting planks of this new paradigm include:

A prediction of what more fully realized ambient intelligence might look like, using travel as an example, appeared in an article entitled Gearing Up for Ambient Intelligence, by Lisa Morgan, on InformationWeek.com on March 14, 2016. Upon leaving the plane, travelers will receive a welcoming message and a request to proceed to the curb to retrieve their luggage. Upon reaching curbside, a self-driving car⁶ will be waiting with information about the hotel booked for the stay.

Listening

Another article about ambient intelligence entitled Towards a World of Ambient Computing, by Simon Bisson, posted on ZDNet.com on February 14, 2014, is briefly quoted for the line “We will talk, and the world will answer”, to illustrate the point that current technology will be morphing into something in the future that would be nearly unrecognizable today. Grossman’s article proceeds to survey a series of commercial technologies recently brought to market as components of a fuller ambient intelligence that will “understand what we are asking” and provide responsive information.

Starting with Amazon’s Echo, this new device can, among other things:

  • Answer certain types of questions
  • Track shopping lists
  • Place orders on Amazon.com
  • Schedule a ride with Uber
  • Operate a thermostat
  • Provide transit schedules
  • Commence short workouts
  • Review recipes
  • Perform math
  • Request a plumber
  • Provide medical advice

Will it be long before we begin to see similar smart devices everywhere in homes and businesses?

Kevin Kelly, the founding Executive Editor of WIRED and a renowned futurist⁷, believes that in the near future, digital intelligence will become available in the form of a utility⁸ and, as he puts it, “IQ as a service”. This is already being done by Google, Amazon, IBM and Microsoft, who are providing open access to sections of their AI coding.⁹ He believes that success for the next round of startups will go to those who enhance and transform something already in existence with the addition of AI. The best example of this is once again self-driving cars.

As well, in a chapter on Ambient Computing from a report by Deloitte UK entitled Tech Trends 2015, it was noted that some companies were engineering ambient intelligence into their products as a means to remain competitive.

Recommending

A great deal of AI is founded upon the collection of big data from online searching, the use of apps and the IoT. This universe of information helps neural networks learn from repeated behaviors, including people’s responses and interests. In turn, it provides a basis for “deep learning-derived personalized information and services” that can make “increasingly educated guesses with any given content”.

An alternative perspective, that “AI is simply the outsourcing of cognition by machines”, has been expressed by Jason Silva, a technologist, philosopher and video blogger on Shots of Awe. He believes that this process, that is, the expansion of intelligence, is the “most powerful force in the universe”. Nonetheless, he sees this as an evolutionary process which should not be feared. (See also the December 27, 2014 Subway Fold post entitled Three New Perspectives on Whether Artificial Intelligence Threatens or Benefits the World.)

Bots are another contemporary manifestation of ambient intelligence. These are a form of software agent, driven by algorithms, that can independently perform a range of sophisticated tasks. Two examples include:

Speaking

Optimally, bots should also be able to listen and “speak” back in return much like a 2-way phone conversation. This would also add much-needed context, more natural interactions and “help to refine understanding” to these human/machine exchanges. Such conversations would “become an intelligent and ambient part” of daily life.

An example of this development path is evident in Google Now. This service combines voice search with predictive analytics to present users with information prior to searching. It is an attempt to create an “omniscient assistant” that can reply to any request for information “including those you haven’t thought of yet”.

Recently, the company created a Bluetooth-enabled prototype of a lapel pin based on this technology that operates just by tapping it, much like the communicators on Star Trek. (For more details, see Google Made a Secret Prototype That Works Like the Star Trek Communicator, by Victor Luckerson, on Time.com, posted on November 22, 2015.)

The configurations and specs of the AI-powered devices supporting such pervasive and ambient intelligence, be they lapel pins, some form of augmented reality¹⁰ headsets or something else altogether, are not exactly clear yet. Their development and introduction will take time but remain inevitable.

Will ambient intelligence make our lives any better? It remains to be seen, but it is probably a viable means to handle some of our more ordinary daily tasks. It will likely “fade into the fabric of daily life” and be readily accessible everywhere.

Quite possibly then, the world will truly become a better place to live upon the arrival of ambient intelligence-enabled ocarina solos.

My Questions

  • Does the emergence of ambient intelligence, in fact, signal the arrival of a genuine fourth industrial revolution or is this all just a semantic tool to characterize a broader spectrum of smarter technologies?
  • How might this trend affect overall employment in terms of increasing or decreasing jobs on an industry by industry basis and/or the entire workforce? (See also this June 4, 2015 Subway Fold post entitled How Robots and Computer Algorithms Are Challenging Jobs and the Economy.)
  • How might this trend also affect non-commercial spheres such as public interest causes and political movements?
  • As ambient intelligence insinuates itself deeper into our online worlds, will this become a principal driver of new entrepreneurial opportunities for startups? Will ambient intelligence itself provide new tools for startups to launch and thrive?

 


1.   Thanks to Little Steven (@StevieVanZandt) for keeping the band’s music in occasional rotation on The Underground Garage (#UndergroundGarage). Also, for an appreciation of this radio show, see this August 14, 2014 Subway Fold post entitled The Spirit of Rock and Roll Lives on Little Steven’s Underground Garage.

2.  For a remarkably comprehensive report on the pervasiveness of this phenomenon, see the Pew Research Center report entitled U.S. Smartphone Use in 2015, by Aaron Smith, posted on April 1, 2015.

3.  These 10 Subway Fold posts touch upon the IoT.

4.  The Subway Fold category Big Data and Analytics contains 50 posts covering this topic in whole or in part.

5.  The Subway Fold category Telecommunications contains 12 posts covering this topic in whole or in part.

6.  These 5 Subway Fold posts contain references to self-driving cars.

7.   Mr. Kelly is also the author of a forthcoming book entitled The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, to be published on June 7, 2016 by Viking.

8.  This September 1, 2014 Subway Fold post entitled Possible Futures for Artificial Intelligence in Law Practice, in part summarized an article by Steven Levy in the September 2014 issue of WIRED entitled Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask. This covered a startup called Viv Labs whose objective was to transform AI into a form of utility. Fast forward to the Disrupt NY 2016 conference going on in New York last week. On May 9, 2016, the founder of Viv, Dag Kittlaus, gave his presentation about the Viv platform. This was reported in an article posted on TechCrunch.com entitled Siri-creator Shows Off First Public Demo of Viv, ‘the Intelligent Interface for Everything’, by Romain Dillet, on May 9, 2016. The video of this 28-minute presentation is embedded in this story.

9.  For the full details on this story see a recent article entitled The Race Is On to Control Artificial Intelligence, and Tech’s Future by John Markoff and Steve Lohr, published in the March 25, 2016 edition of The New York Times.

10.  These 10 Subway Fold posts cover some recent trends and developments in augmented reality.

Applying Origami Folding Techniques to Strands of DNA to Produce Faster and Cheaper Computer Chips

“Origami”, Image by David Wicks

We all learned about the periodic table of elements in high school chemistry class. This involved becoming familiar with the names, symbols and atomic weights of all of the chemical occupants of this display. Today, the only thing I still recall from this academic experience was when the teacher told us on the first day of class that we would soon learn to laugh at the following:

Two hydrogen atoms walk into a bar and the first one says to the other “I’ve lost my electron”. The other one answers “Are you sure?”. The first one says “I’m positive.”

I still find this hilarious but whatever I recall today about learning chemistry would likely get lost at the bottom of a thimble. I know, you are probably thinking “Sew what”.

Facing the Elements

Besides everyone’s all-time favorites like oxygen and hydrogen, which love to get mixed up with each other and with most of the other 116 elements, one element stands alone as the foundation upon which the modern information age was born and continues to thrive today. Silicon has been used to create integrated circuits, much more commonly known as computer chips.

This has been the case since the first chips were fabricated in the late 1950s. Silicon has remained the material of choice for nearly all the chips running every imaginable one of our modern computing and communication devices. Through major advances in design, engineering and fabrication during the last five decades, chip manufacturers have been able to vastly shrink this circuitry and pack millions of components into ever smaller squares of this remarkable material.

A fundamental principle that has held up and guided the semiconductor industry, under relentlessly rigorous testing during silicon’s enduring run, is Moore’s Law. In its simplest terms, it states that the number of transistors that can be written onto a chip doubles nearly every two years. There have been numerous predictions for many years that the end of Moore’s Law is approaching and that another substrate, other than silicon, will be found in order to continue making chips smaller, faster and cheaper. This has not yet come to pass and may not do so for years to come.
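Moore’s Law is easy to see in a quick back-of-the-envelope sketch. Starting from roughly 2,300 transistors, the approximate count of Intel’s first microprocessor in 1971 (a convenient illustrative baseline, not a figure from the article), naively doubling every two years yields:

```python
# Moore's Law, naively applied: transistor counts double about every two years.
transistors = 2300  # approximate transistor count of Intel's 4004 (1971)
year = 1971

while year < 2016:
    transistors *= 2
    year += 2

print(f"Projected transistor count by {year}: {transistors:,}")
```

That projection lands around 19 billion transistors, which is in the same ballpark as the largest chips actually shipping around 2016, a testament to how well the law has held over its enduring run.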

Nonetheless, scientists and developers from a diversity of fields, industries and academia have remained in pursuit of alternative computing materials. This includes elements and compounds to improve or replace silicon’s extensible properties, and other efforts to research and fabricate entirely new computing architectures. One involves exploiting the spin states of electrons in a rapidly growing field called quantum computing (this Wikipedia link provides a detailed and accessible survey of its fundamentals and operations), and another involves using, of all things, DNA as a medium.

The field of DNA computing has actually been around in scientific labs and journals for several decades but has not gained much real traction as a viable alternative ready to produce computing chips for the modern marketplace. Recently though, a new advance was reported in a fascinating article posted on Phys.org on March 13, 2016, entitled DNA ‘origami’ Could Help Build Faster, Cheaper Computer Chips, provided by the American Chemical Society (no author is credited). I will summarize and annotate it in order to add some more context, and then pose several of my own molecular questions.

Know When to Fold ‘Em

A team of researchers reported that fabricating such chips is possible when DNA is folded and “formed into specific shapes” using a process much like origami, the Japanese art of folding paper into sculptures. They presented their findings at the 251st American Chemical Society Meeting & Exposition held in San Diego, CA during March 13 through 17, 2016. Their paper entitled 3D DNA Origami Templated Nanoscale Device Fabrication, appears listed as number 305 on Page 202 of the linked document.  Their presentation on March 14, 2016, was captured on this 16-minute YouTube video, with Adam T. Woolley, Ph.D. of Brigham Young University as the presenter for the researchers.

According to Dr. Woolley, researchers want to use DNA’s “small size, base-pairing capabilities and ability to self-assemble” in order to produce “nanoscale electronics”. By comparison, silicon chips currently in production contain features 14 nanometers wide, which is 10 times “the diameter of single-stranded DNA”. Thus, DNA could be used to build chips on a much smaller and more efficient scale.

However, the problem with using DNA as a chip-building material is that it is not a good conductor of electrical current. To circumvent this, Dr. Woolley and his team are using “DNA as a scaffold” and then adding other materials to the assembly to create electronics. He is working on this with his colleagues at Brigham Young University, Robert C. Davis, Ph.D. and John N. Harb, Ph.D. They are drawing upon their prior work on “DNA origami and DNA nanofabrication”.

Know When to Hold ‘Em

To create this new configuration of origami-ed DNA, they begin with a single long strand of it, which is comparable to a “shoelace” insofar as it is “flexible and floppy”. Then they mix this with shorter strands of DNA called “staples” which, in turn, “use base pairing” to gather and cross-link numerous “specific segments of the long strand” to build an intended shape.

Dr. Woolley’s team is not satisfied with just replicating “two-dimensional circuits”, but is rather pursuing 3D circuitry because it can hold many more electronic components. An undergraduate who works with Dr. Woolley, Kenneth Lee, has already built such a “3-D, tube-shaped DNA origami structure”. He has been further experimenting with adding more components, including “nano-sized gold particles”. He is planning to add still more nano-items to his creations with the objective of “forming a semiconductor”.

The entire team’s lead objective is to “place such tubes, and other DNA origami structures, at particular sites on the substrate”. As well, they are seeking to use gold nanoparticles to create circuits. The DNA is thus being used as “girders” to create integrated circuits.

Dr. Woolley also pointed to the advantageous cost differential between the two methods of fabrication. While traditional silicon chip fabrication facilities can cost more than $1 billion, exploiting DNA’s self-assembling capabilities “would likely entail much lower startup funding” and yield potentially “huge cost savings”.

My Questions

  • What is the optimal range and variety in design, processing power and software that can elevate DNA chips to their highest uses? Are there only very specific applications or can they be more broadly used in commercial computing, telecom, science, and other fields?
  • Can any of the advances currently being made and widely followed in the media using the CRISPR gene editing technology somehow be applied here to make more economical, extensible and/or specialized DNA chips?
  • Does DNA computing represent enough of a potential market to attract additional researchers, startups, venture capital and academic training to be considered a sustainable technology growth sector?
  • Because of the potentially lower startup and investment costs, does DNA chip development lend itself to smaller scale crowd-funded support such as Kickstarter campaigns? Might this field also benefit if it was treated more as an open source movement?

February 19, 2017 Update:  On February 15, 2017, on the NOVA science show on PBS in the US, there was an absolutely fascinating documentary shown entitled The Origami Revolution. (The link is to the full 53-minute broadcast.) It covered many of today’s revolutionary applications of origami in science, mathematics, design, architecture and biology. It was both highly informative and visually stunning. I highly recommend clicking through to learn about how some very smart people are doing incredibly imaginative and practical work in modern applications of this ancient art.

Artificial Swarm Intelligence: There Will be An Answer, Let it Bee

“Honey Bee on Willow Catkin”, Image by Bob Peterson

In almost any field involving new trends and developments, anything attracting rapidly increasing media attention is often referred to in terms of “generating a lot of buzz”. Well, here’s a quite different sort of story that adds a whole new meaning to this notion.

A truly fascinating post appeared on TechRepublic.com this week on January 22, 2016 entitled How ‘Artificial Swarm Intelligence’ Uses People to Make Smarter Predictions Than Experts by Hope Reese. It is about a development where technology and humanity intersect in a highly specialized manner to produce a new means to improve predictions by groups of people. I highly recommend reading it in its entirety. I will summarize and annotate it, and then pose a few of my own bug-free questions.

A New Prediction Platform

In a recent switching of roles, while artificial intelligence (AI) concerns itself with machines executing human tasks¹, a newly developed and highly accurate algorithm “harnesses the power” of crowds to generate predictions of “real world events”. This approach is called “artificial swarm intelligence“.

A new software platform called UNU is being developed by a startup called Unanimous AI. The firm’s CEO is Dr. Louis Rosenberg. UNU facilitates the gathering of people online in order to “make collective decisions”. This is being done, according to Dr. Rosenberg, “to amplify human intelligence”. Thus far, the platform has been “remarkably accurate” in its predictions of the Academy Awards, the Super Bowl² and elections.

UNU is predicated upon the concept of the wisdom of the crowds which states that larger groups of people make better decisions collectively than even the single smartest person within that group.³  Dr. Roman Yampolskiy, the Director of the Cybersecurity Lab at the University of Louisville, has also created a comparable algorithm known as “Wisdom of Artificial Crowds“. (The first time this phenomenon was covered on The Subway Fold, in the context of entertainment, was in the December 10, 2014 post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?)

The Birds and the Bees

Swarm intelligence learns from events and systems occurring in nature, such as the formation of swarms by bees and flocks by birds. These groups collectively make better choices than their single members. Dr. Rosenberg believes there is “a vast amount of intelligence in groups” that, in turn, generates “intelligence that amplifies their natural abilities”. He has transposed the rules of these natural systems onto the predictive abilities of humans in groups.

He cites honeybees as being “remarkable” decision-makers in their environment. On a yearly basis, they divide their colonies and “send out scout bees” by the hundreds for many miles around to check out locations for a new home. When these scouts return to the main hive, they perform a “waggle dance” to “convey information to the group”, which then decides on the intended location. For the entire colony, this is a “complex decision” composed of “conflicting variables”. Bee colonies choose the optimal location more than 80% of the time.
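The colony’s edge over any lone scout can be sketched with a simple majority-vote model. This is only a minimal illustration of the underlying math; the scout count and per-scout accuracy below are hypothetical numbers, not figures from the article:

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a majority of n independent voters,
    each correct with probability p, picks the better of two options."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A lone scout that favors the better site only 60% of the time...
print(majority_accuracy(1, 0.6))    # 0.6
# ...becomes part of a 101-scout colony that is right far more often.
print(majority_accuracy(101, 0.6))
```

With independent voters who are each right more often than not, the group’s accuracy climbs steadily as the group grows, which is consistent with colonies beating any individual bee.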

Facilitating Human Bee-hive-ior

However, humans display a much lower accuracy rate when making their own predictions. Most commonly, polling and voting are used. Dr. Rosenberg finds such methods “primitive” and often incorrect, as they tend to be “polarizing”. In effect, they make it difficult to assess the “best answer for the group”.

UNU is his firm’s attempt to help humans make the best decisions for an entire group. Users log onto it and respond to questions by choosing among a series of displayed options. It was modeled upon such behavior occurring in nature among “bees, fish and birds”, as distinguished from individuals simply casting single votes. Here are two videos of the system in action, one choosing the most competitive Republican presidential candidate and the other selecting the most beloved sidekick from Star Wars⁴. As groups of users make their selections on UNU and are influenced by the visible onscreen behavior of others, this movement is the online manifestation of the group’s swarming activity.

Another instance of UNU’s effectiveness and accuracy involved 50 users trying to predict the winners of the Academy Awards. On an individual basis, they each averaged six correct out of 15. The test swarm was able to get a significantly better nine out of the 15. Beyond movies, the implications may be even more significant if applied in areas such as strategic business decision-making.
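The advantage of aggregating a group’s guesses over trusting any one member can be shown with a toy simulation. This is only a sketch of the general wisdom-of-crowds effect, not of UNU’s proprietary swarming algorithm; the estimation task, the noise model and all of the numbers are assumptions of the sketch:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # hypothetical quantity the crowd is estimating

# Each of 50 individuals makes a noisy guess centered on the true value.
guesses = [random.gauss(TRUE_VALUE, 25.0) for _ in range(50)]

individual_errors = [abs(g - TRUE_VALUE) for g in guesses]
mean_individual_error = statistics.mean(individual_errors)

# Aggregate the crowd with the median, a simple "wisdom of crowds" rule.
crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

print(f"average individual error: {mean_individual_error:.1f}")
print(f"crowd (median) error:     {crowd_error:.1f}")
```

The crowd’s pooled estimate lands much closer to the truth than the typical individual guess because the individual errors largely cancel out, the same intuition behind the swarm beating its average member.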

My Questions

  • Does UNU lend itself to being turned into a scalable mobile app for much larger groups of users on a multitude of predictions? If so, should users be able to develop their own questions and choices for the swarm to decide? Should all predictions posed be open to all users?
  • Might UNU find some sort of application in guiding the decision process of juries while they are resolving a series of factual issues?
  • Could UNU be used to supplement reviews for books, movies, music and other forms of entertainment? Perhaps some form of “UNU Score” or “UNU Rating”?

 


1.  One of the leading proponents and developers of AI for many decades was MIT Professor Marvin Minsky, who passed away on Sunday, January 24, 2016. Here is his obituary from the January 25, 2016 edition of The New York Times entitled Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88, by Glenn Rifkin.

2.  For an alternative report on whether the wisdom of the crowds appears to have little or no effect on the Super Bowl, one not involving UNU in any way, see an article in the January 28, 2016 edition of The New York Times entitled Super Bowl Challenges Wisdom of Crowds and Oddsmakers, by Victor Mather.

3.  For an outstanding and comprehensive treatment of this phenomenon, I highly recommend reading The Wisdom of Crowds, by James Surowiecki (Doubleday, 2004).

4.  I would really enjoy seeing a mash-up of these two demos to see how the group would swarm among the Star Wars sidekicks to select which one of these science fiction characters might have the best chance to win the 2016 election.

Charge of the Light Brigade: Faster and More Efficient New Chips Using Photons Instead of Electrons

“PACE – PEACE”, Image by Etienne Valois

Alfred, Lord Tennyson wrote his immortal classic poem, The Charge of the Light Brigade, in 1854 to honor the dead heroes of a doomed cavalry charge at the Battle of Balaclava during the Crimean War. Moreover, it strikingly portrayed the horrors of war. In just six short verses, he created a monumental work that has endured for 162 years.

The poem came to mind last week after reading two recent articles on seemingly disparate topics. The first was posted on The New Yorker’s website on December 30, 2015 entitled In Silicon Valley Now, It’s Almost Always Winner Takes All by Om Malik. This is a highly insightful analysis of how and why tech giants such as Google in search, Facebook in social networking, and Uber in transportation have come to dominate their markets. In essence, competition is a fierce and relentless battle in the global digital economy. The second was an article on CNET.com posted on December 23, 2015 entitled Chip Promises Faster Computing with Light, Not Electrical Wires by Stephan Shankland. I highly recommend reading both of them in their entirety.

Taken together, the homonym of “light”, both in historical poetry and in tech, seems to tie these two pieces together, insofar as contemporary competition in tech markets is often described in military terms and metaphors. Focusing here on the second story, about a tantalizing advance in chip design and fabrication: will it survive as it moves forward into the brutal and relentless “winner takes all” marketplace? I will summarize and annotate this story, and pose some of my own, hopefully en-light-ening questions.

Forward, the Light Brigade

A team of researchers, all of whom are university professors, including Vladimir Stojanovic from the University of California at Berkeley, who led the development, Krste Asanovic, also from Berkeley, Rajeev Ram from MIT, and Milos Popovic from the University of Colorado at Boulder, have created a new type of processing chip “that transmits data with light”. As well, its architecture significantly increases processing speed while reducing power consumption. A report on the team’s work was published in the December 24, 2015 issue of Nature (subscription required), entitled Single-chip Microprocessor That Communicates Directly Using Light, by Chen Sun, Mark T. Wade, Yunsup Lee, et al.

This approach, according to Wikipedia, of “using silicon as an optical medium”, is called silicon photonics. IBM (see this link) and Intel (see this link)  have likewise been involved in R&D in this field, but have yet to introduce anything ready for the market.

However, this team of university researchers believes their new approach might be introduced commercially within a year. While their efforts do not make chips run faster per se, the photonic elements “keep chips supplied with data”, which keeps them from losing time idling. Thus, they can process data faster.

Currently (no pun intended), electrical signals traverse metal wiring in computing and communications devices and networks across the world. For data traveling greater national and international distances, the electrical signals are converted into light and sent along high-speed fiber-optic cables. Nonetheless, this approach “isn’t cheap”.

Half a League Onward

What the university researchers’ team has done is create chips with “photonic components” built into them. If they succeed in scaling up and commercializing their creation, consumers will likely be the beneficiaries. These advantages will probably manifest themselves first in data centers, where the new chips could:

  • Speed up Google searches
  • Speed up Facebook image recognition
  • Enable other “performance-intensive features not economical today”
  • Remove processing bottlenecks and conserve battery life in smartphones and other personal computing platforms

Professor Stojanovic believes that one of their largest challenges is to make this technology affordable before it can later be implemented in consumer-level computing and communications devices. He is sanguine that such economies of scale can be reached. He also anticipates further applications of this technology to enable chips’ onboard processing and memory components to communicate directly with each other.

Additional integrations of silicon photonics might be seen in the lidar remote sensing systems of self-driving cars¹, as well as in brain imaging² and environmental sensors. The technology also holds the potential to alter the traditional methods by which computers are assembled. For example, the length of cables is limited by how far data can pass through them quickly and efficiently before needing amplification along the way. Optical links may permit data to be transferred significantly farther along network cabling. The research team’s “prototype used 10-meter optical links”, but Professor Stojanovic believes this could eventually be lengthened to a kilometer. This could potentially result in meaningful savings in energy, hardware and processing efficiency.

Two startups that are also presently working in the silicon photonics space include:

My Questions:

  • Might another one of silicon photonics’ virtues be that it is partially fabricated from more sustainable materials, primarily silicon derived from sand rather than various metals?
  • Could silicon photonics chips and architectures be a solution to the very significant computing needs of the virtual reality (VR) and augmented reality (AR) systems that will be coming onto the market in 2016? This issue was raised in a most interesting article posted on Bloomberg.com on December 30, 2015 entitled Few Computers Are Powerful Enough to Support Virtual Reality by Ian King. (See also these 13 Subway Fold posts on a range of VR and AR developments.)
  • What other new markets, technologies and opportunities for entrepreneurs and researchers might emerge if the university research team’s chips achieve their intended goals and succeed in making it to market?

May 17, 2017 Update: For an update on one of the latest developments in photonics with potential applications in advanced computing and materials science, see Photonic Hypercrystals Are Now a Reality and Light Will Never Be the Same, by Dexter Johnson, posted on May 10, 2017, on IEEESpectrum.com.


1.  See these six Subway Fold posts for references to autonomous cars.

2.  See these four Subway Fold posts concerning certain developments in brain imaging technology.

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

"Synapse", Image by Allan Ajifo

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything and then where would he put it all?¹ This often comes to mind whenever I have experienced the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the Web, almost everyone I know has also experienced the same overwhelming experience whenever the amount of information they are inundated with everyday seems increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

A recent advancement covered in a fascinating report on Phys.org on December 15, 2015 entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff describes the latest research into how this happens. I will summarize and annotate this, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet, no matter the font or an individual regardless of their clothing and grooming, are always recognizable. We can also identify objects even if the view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment during our lives.

A recent discover by a team of researchers at Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision4. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason why humans can quickly make sense of the very complex and robust world is because, as he observes “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests'”. These measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked them if they could identify them once again although using a much smaller segment of the image. This led to one of their two principal discoveries that the test subjects required only 0.15% of the data to repeat their identifications.

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics), to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, they each performed with nearly equal results. Moreover, these results proved that “data will be the hardest to learn over time” can be predicted.

In testing their theory, the team prepared 3 different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) Next, they drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds. Next they were shown 16 of the random sketches. Dr. Vempala of the team was “surprised by how close the performance was” of the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support that it might be a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augment reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia’s Tech’s startup incubator see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

Semantic Scholar and BigDIVA: Two New Advanced Search Platforms Launched for Scientists and Historians

"The Chemistry of Inversin", Image by Raymond Bryson

“The Chemistry of Inversion”, Image by Raymond Bryson

As powerful, essential and ubiquitous as Google and its search engine peers are across the world right now, needs often arise in many fields and marketplaces for platforms that can perform much deeper and wider digital excavating. So it is that two new highly specialized search platforms have just come online specifically engineered, in these cases, for scientists and historians. Each is structurally and functionally quite different from the other but nonetheless is aimed at very specific professional user bases with advanced researching needs.

These new systems provide uniquely enhanced levels of context, understanding and visualization with their results. We recently looked at a very similar development in the legal professions in an August 18, 2015 Subway Fold post entitled New Startup’s Legal Research App is Driven by Watson’s AI Technology.

Let’s have a look at both of these latest innovations and their implications. To introduce them, I will summarize and annotate two articles about their introductions, and then I will pose some additional questions of my own.

Semantic Scholar Searches for New Knowledge in Scientific Papers

First, the Allen Institute for Artificial Intelligence (A2I) has just launched its new system called Semantic Scholar, freely accessible on the web. This event was covered on NewScientist.com in a fascinating article entitled AI Tool Scours All the Science on the Web to Find New Knowledge on November 2, 2015 by Mark Harris.

Semantic Scholar is supported by artificial intelligence (AI)¹ technology. It is automated to “read, digest and categorise findings” from approximately two million scientific papers published annually. Its main objective is to assist researchers with generating new ideas and “to identify previously overlooked connections and information”. Because of the of the overwhelming volume of the scientific papers published each year, which no individual scientist could possibly ever read, it offers an original architecture and high-speed manner to mine all of this content.

Oren Etzioni, the director of A2I, termed Semantic Scholar a “scientist’s apprentice”, to assist them in evaluating developments in their fields. For example, a medical researcher could query it about drug interactions in a certain patient cohort having diabetes. Users can also pose their inquiries in natural language format.

Semantic Scholar operates by executing the following functions:

  • crawling the web in search of “publicly available scientific papers”
  • scanning them into its database
  • identifying citations and references that, in turn, are assessed to determine those that are the most “influential or controversial”
  • extracting “key phrases” appearing similar papers, and
  • indexing “the datasets and methods” used

A2I is not alone in their objectives. Other similar initiatives include:

Semantic Scholar will gradually be applied to other fields such as “biology, physics and the remaining hard sciences”.

BigDIVA Searches and Visualized 1,500 Year of History

The second innovative search platform is called Big Data Infrastructure Visualization Application (BigDIVA). The details about its development, operation and goals were covered in a most interesting report posted online on  NC State News on October 12, 2015 entitled Online Tool Aims to Help Researchers Sift Through 15 Centuries of Data by Matt Shipman.

This is joint project by the digital humanities scholars at NC State University and Texas A&M University. Its objective is to assist researchers in, among other fields, literature, religion, art and world history. This is done by increasing the speed and accuracy of searching through “hundreds of thousands of archives and articles” covering 450 A.D. to the present. BigDIVA was formally rolled out at NC State on October 16, 2015.

BigDIVA presents users with an entirely new visual interface, enabling them to search and review “historical documents, images of art and artifacts, and any scholarship associated” with them. Search results, organized by categories of digital resources, are displayed in infographic format4. The linked NC State News article includes a photo of this dynamic looking interface.

This system is still undergoing beta testing and further refinement by its development team. Expansion of its resources on additional historical periods is expected to be an ongoing process. Current plans are to make this system available on a subscription basis to libraries and universities.

My Questions

  • Might the IBM Watson, Semantic Scholar, DARPA and BigDIVA development teams benefit from sharing design and technical resources? Would scientists, doctors, scholars and others benefit from multi-disciplinary teams working together on future upgrades and perhaps even new platforms and interface standards?
  • What other professional, academic, scientific, commercial, entertainment and governmental fields would benefit from these highly specialized search platforms?
  • Would Google, Bing, Yahoo and other commercial search engines benefit from participating with the developers in these projects?
  • Would proprietary enterprise search vendors likewise benefit from similar joint ventures with the types of teams described above?
  • What entrepreneurial opportunities might arise for vendors, developers, designers and consultants who could provide fuller insight and support for developing customized search platforms?

 


October 19, 2017 Update: For the latest progress and applications of the Semantic Scholar system, see the latest report in a new post on the Economist.com entitled A Better Way to Search Through Scientific Papers, dated October 19, 2017.


1.  These 11 Subway Fold posts cover various AI applications and developments.

2.  These seven Subway Fold posts cover a range of IBM Watson applications and markets.

3A new history of DARPA written by Annie Jacobsen was recently published entitled The Pentagon’s Brain (Little Brown and Company, 2015).

4.  See this January 30, 2015 Subway Fold post entitled Timely Resources for Studying and Producing Infographics on this topic.

Summary of the Bitcoin Seminar Held at Kaye Scholer in New York on October 15, 2015

"Bitcoin", Image by Tiger Pixel

“Bitcoin”, Image by Tiger Pixel

The market quote for Bitcoin on October 15, 2015 at 5:00 pm EST was $255.64 US according to CoinDesk.com on the site’s Price & Data page. At that same moment, I was very fortunate to have been attending a presentation entitled the Bitcoin Seminar that was just starting at the law firm of Kaye Scholer in midtown Manhattan. Coincidentally, the firm’s address is numerically just 5.64, well, whatevers¹ away at 250 West 55th Street.

Many thanks to Kaye Scholer and the members of the expert panel for putting together this outstanding presentation. My appreciation and admiration as well for the informative content and smart formatting in the accompanying booklet they provided to the audience.

Based upon the depth and dimensions of all that was learned from the speakers, everyone attending gained a great deal of knowledge and insight on the Bitcoin phenomenon. The speakers clearly and concisely surveyed its essential technologies, operations, markets, regulations and trends.

This was the first of a two-part program the firm is hosting. The second half, covering the blockchain, is scheduled on Thursday, November 5, 2015.

The panelists included:

The following are my notes from this 90-minute session:

1.  What is a “Virtual Currency” and the Infrastructure Supporting It?

  • Bitcoin is neither legal tender nor tied to a particular nation.
  • Bitcoin is the first means available to move value online without third-party trusted intermediaries.
  • Bitcoin involves a series of decentralized protocols, consisting entirely of software, for the transfer of value between parties.
  • Only 21 million Bitcoins will ever be created but they are highly divisible into much smaller units unit called “satoshis” (named after the mysterious and still anonymous creator of Bitcoin who goes by the pseudonym Satoshi Nakamoto).
  • The network structure for these transfers is peer-to-peer, as well as transparent and secure.
  • Bitcoin is a genuine form of “cryptocurrency”, also termed “digital currency”²
  • The networks use strong encryption to secure the value and information being transferred.
  • The parties engaged in a Bitcoin transaction often intend for their virtual currency to be converted into actual fiat currency.

2.  Benefits of Bitcoin

  • Payments can be sent anywhere including internationally.
  • Transactions are borderless and can operate on a 24/7 basis.
  • Just like email, the network operates all the time.

3.  Bitcoin Mining and Bitcoin Miners

  • This is the process by which, and the people by whom, bitcoins are extracted and placed into circulation online.
  • “Miners” are those who use vast amounts of computing power to solve complex mathematical equations that, once resolved, produce new Bitcoins.
  • The miners’ motivations include:
    • the introduction of new Bitcoins
    • their roles as transaction validators and maintainers of the blockchain
  • All newly mined bitcoins need to be validated.
  • Minors are rewarded for their efforts with the bitcoins they extract and any additional fees that were volunteered along with pending transactions.
  • Miners must obey the network’s protocols during the course of their work.

4.  Security

  • Security is the central concern of all participants in Bitcoin operations.
  • Notwithstanding recent bad publicity concerning incidents and indictments for fraud (such as Mt. Gox), the vast majority of bitcoin transactions do not involve illegal activity.
  • The Bitcoin protocols prevent Bitcoins from being spent twice.
  • Measures are in place to avoid cryptography keys from being stolen or misused.
  • There is a common misconception that Bitcoin activity is anonymous. This is indeed not the case, as all transactions are recorded on the blockchain thus enabling anyone to look up the data.
  • Bitcoin operations and markets are becoming more mature and, in turn, relatively more resistant to potential threats.

5.  Using Bitcoins

  • Bitcoin is secured by individual crypto-keys which are required for “signing” in a transaction or exchange.
  • This system is distributed and individual keys are kept in different locations.
  • Once a transaction is “signed” it then goes online into the blockchain ledger³.
  • The crypto keys are highly secure to avoid tampering or interception by unintended parties.
  • Bitcoin can be structured so that either:
    • multiple keys are required to be turned at the same time on both sides of the transaction, or
    • only a single key is required to execute a transaction.
  • By definition, there are no traditional intermediaries (such as banks).

6.  Asset Custody and Valuation

  • Financial regulators see Bitcoin as being a money transmission.
  • Currently, the law says nothing about multi-keys (above).
  • Work is being done on drafting new model legislation in an attempt to define “custody” of Bitcoin as an asset.
  • Bitcoin services in the future will be programmatic and will not require the trusted third parties. For example, in a real estate transaction, if the parties agree to terms then the keys are signed. If not, an arbitrator can be used to turn the keys for the parties and complete the transaction. Thus, this method can be a means to perform settlements in the real world.
  • Auditing this process involves public keys with custodial ownership. In determining valuation, the question is whether “fair value” has been reached and agreed upon.
  • From an asset allocation perspective, it is instructive to compare Bitcoin to gold insofar as there is no fixed amount of gold in the world, but Bitcoin will always be limited to 21 million Bitcoins (see 1. above).

7.  US Regulatory Environment

  • Because of the Bitcoin market’s rapid growth in the past few years, US federal and state regulators have become interested and involved.
  • Bitcoin itself is not regulated. Rather, the key lies at the “chokepoints” in the system where Bitcoin is turned into fiat currency.
  • US states regulate the money transfer business. Thus, compliance is also regulated by state laws. For example, New York State’s Department of Financial Services issues a license for certain service companies in the Bitcoin market operating within the state called a BitLicense. California is currently considering similar legislation.
  • Federal money laundering laws must always be obeyed in Bitcoin transactions.
  • The panelists agreed that it is important for Bitcoin legislation is to protect innovation in this marketplace.
  • The Internal Revenue Service has determined Bitcoin to be a tangible personal asset. As a result, Bitcoin is an investment subject to capital gains. As well, it will be taxed if used to pay for goods and services

8.  Future Prospects and Predictions

  • Current compelling use cases for Bitcoin include high volume of cross-border transactions and areas of the world without stable governments.
  • Bitcoin’s success is not now a matter of if, but rather, when. It could eventually take the emergence of some form of Bitcoin 2.0 to ultimately succeed.
  • Currency is now online and is leading to innovations such as:
    • Programmable money and other new formats of digital currency.
    • Rights management for music services where royalties are sent directly to the artists. (See Footnote 3 below.)

9.  Ten Key Takeaway Points:

  • Bitcoin is a virtual currency but it is not anonymous.
  • The key legal consideration is that it involves a stateless but trusted exchange of value.
  • Bitcoin “miners” are creating the value and increasing in their computing sophistication to locate and solve equations to extract Bitcoins.
  • Security is the foremost concern of everyone involved with Bitcoin.
  • Because Bitcoin exchanges of value occur and settle quickly and transparently (on the blockchain ledger), there are major implications for online commerce and the securities markets.
  • Government regulators are now significantly involved and there are important distinctions between what the states and federal government can regulate.
  • The IRS has made a determination about the nature of Bitcoin as an asset, and its taxable status in paying for goods and services.
  • The crypto-keys and “multi-signing” process are essential to making Bitcoin work securely, with neither borders nor third-party intermediaries.
  • Real estate transactions seem to be well-suited for the blochchain (for example, recording mortgages).
  • Comparing Bitcoin to gold (as a commodity), can be instructive in understanding the nature of Bitcoin.

 


1.   Is there a conversion formula, equivalency or terminology for the transposition of address numerals into Bitcoin? If one soon emerges, it will add a whole new meaning to the notion of “street value”.

2See also this May 8, 2015 Subway Fold post entitled Book Review of “The Age of Cryptocurrency”.

3.  For two examples of other non-Bitcoin adaptations of blockchain technology (among numerous other currently taking place), see the August 21, 2015 Subway Fold post entitled Two Startups’ Note-Worthy Efforts to Adapt Blockchain Technology for the Music Industry and the September 10, 2015 Subway Fold post entitled Vermont’s Legislature is Considering Support for Blockchain Technology and Smart Contracts.