Data Analysis and Visualizations of All U.S. Presidential State of the Union Addresses

"President Obama's State of the Union Address 2013", Word cloud image by Kurtis Garbutt

“President Obama’s State of the Union Address 2013”, Word cloud image by Kurtis Garbutt

Data analytics and visualization tools have accumulated a significant record of accomplishments of their own; now, in turn, this technology is being applied to the actual historical record of significant accomplishments. Let's have a look.

Every year in January, the President of the United States delivers the State of the Union address before both houses of the U.S. Congress. In it, the President addresses the condition of the nation, the administration's legislative agenda and other national priorities. The requirement for this presentation appears in Article II of the U.S. Constitution.

This talk with the nation has been given every year since 1790, with only one exception. The resulting total of 224 speeches presents a remarkable and dynamic record of U.S. history and policy. Researchers at Columbia University and the University of Paris have recently applied sophisticated data analytics and visualization tools to this trove of presidential addresses. Their findings were published in the August 10, 2015 edition of the Proceedings of the National Academy of Sciences in a truly fascinating paper entitled Lexical Shifts, Substantive Changes, and Continuity in State of the Union Discourse, 1790–2014, by Alix Rule, Jean-Philippe Cointet, and Peter S. Bearman.

A very informative and concise summary of this paper was also posted on Phys.org, also on August 10, 2015, in an article entitled Big Data Analysis of State of the Union Remarks Changes View of American History (no author is listed). I will summarize, annotate and pose a few questions of my own. I highly recommend clicking through and reading the full report and the summary article together for a fuller perspective on this achievement. (Similar types of textual and graphical analyses of US law were covered in the May 15, 2015 Subway Fold post entitled Recent Visualization Projects Involving US Law and The Supreme Court.)

The researchers developed custom algorithms for their research. These were applied to the 1.8 million words used across all of the addresses from 1790 to 2014. By identifying the frequencies of "how often words appear jointly" and "mapping their relation to other clusters of words", the team was able to highlight "dominant social and political" issues and their relative historical time frames. (See Figure 1 at the bottom of Page 2 of the full report for this lexical mapping.)
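The paper itself does not publish code, but the co-occurrence counting step described above can be sketched in a few lines of Python. The file layout, window size and stopword list below are my own illustrative assumptions, not the researchers' actual implementation, and the real work builds word networks and clusters on top of counts like these.

```python
# Illustrative sketch only: the co-occurrence counting step, assuming the address
# texts are available as local plain-text files named by year (e.g. "1790.txt").
# That file layout, the window size and the stopword list are all my assumptions.
import re
from collections import Counter
from pathlib import Path

WINDOW = 10  # two words "appear jointly" if they fall within 10 tokens of each other
STOPWORDS = {"the", "of", "and", "to", "in", "a", "that", "for", "be", "is", "it"}

def tokenize(text):
    """Lowercase the text and split into word tokens, dropping common stopwords."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

def cooccurrences(tokens, window=WINDOW):
    """Count unordered word pairs that fall within the same sliding window."""
    pairs = Counter()
    for i, word in enumerate(tokens):
        for other in tokens[i + 1 : i + window]:
            if other != word:
                pairs[tuple(sorted((word, other)))] += 1
    return pairs

def yearly_cooccurrence(corpus_dir):
    """Build a {year: co-occurrence Counter} mapping across all addresses."""
    return {int(p.stem): cooccurrences(tokenize(p.read_text()))
            for p in sorted(Path(corpus_dir).glob("*.txt"))}

if __name__ == "__main__":
    counts = yearly_cooccurrence("sotu_texts")     # hypothetical directory name
    postwar = Counter()
    for year, pairs in counts.items():
        if year >= 1946:                           # echo the paper's post-WWII finding
            postwar.update(pairs)
    print(postwar.most_common(20))
```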

One of the researchers' key findings was that although the topics of "industry, finance, and foreign policy" were predominant and persist throughout all of the addresses, following World War II the recurring keywords focus further upon "nation building, the regulation of business and the financing of public infrastructure". While it is well known that these emergent terms concern modern topics, the researchers were thus able to pinpoint the exact time frames when they first appeared. (See Page 5 of the full report for the graphic charting these data trends.)

Foreign Policy Patterns

The year 1917 struck the researchers as a critical turning point because it represented a dramatic shift in the data containing words indicative of more modern times. This was the year that the US sent its troops into battle in Europe in WWI. It was then that new keywords in the State of the Union including “democracy,” “unity,” “peace” and “terror” started to appear and recur. Later, by the 1940’s, word clusters concerning the Navy appeared, possibly indicating emerging U.S. isolationism. However, they suddenly disappeared again as the U.S. became far more involved in world events.

Domestic Policy Patterns

Over time, the researchers identified changes in the terminology used when addressing domestic matters. These concerned the government's size, economic regulation, and equal opportunity. Although the focus of the State of the Union speeches remained constant, new keywords appeared, as "tax relief," "incentives" and "welfare" replaced "Treasury," "amount" and "expenditures".

An important issue facing this project was that during the more than two centuries being studied, keywords could substantially change in meaning over time. To address this, the researchers applied new network analysis methods developed by Jean-Philippe Cointet, a team member, co-author and physicist at the University of Paris. They were intended to identify changes whereby “some political topics morph into similar topics with common threads” as others fade away. (See Figure 3 at the bottom of Page 4 of the full paper for this enlightening graphic.*)

As a result, they were able to parse the relative meanings of words as they appear with each other and, on a more macro level, in the “context of evolving topics”. For example, it was discovered that the word “Constitution” was:

  • closely associated with the word “people” in early U.S. history
  • linked to “state” following the Civil War
  • linked to “law” during WWI and WWII, and
  • returned to “people” during the 1970’s

Thus, the meaning of “Constitution” must be assessed in its historical context.
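To make that shift-tracking idea more tangible, here is a small continuation of the hypothetical sketch above (it reuses the yearly_cooccurrence helper and the "sotu_texts" directory assumed there). The era boundaries are my own, chosen only to mirror the list; the actual paper uses more sophisticated network methods.

```python
from collections import Counter

def top_associates(pair_counts, target, n=5):
    """Return the n words that most frequently co-occur with a target term."""
    assoc = Counter()
    for (a, b), count in pair_counts.items():
        if target in (a, b):
            assoc[b if a == target else a] += count
    return assoc.most_common(n)

# Illustrative era boundaries (my own, not the paper's).
ERAS = {"early republic": (1790, 1865), "post-civil war": (1866, 1913),
        "world wars": (1914, 1945), "modern": (1946, 2014)}

# `counts` is the {year: co-occurrence Counter} built by the earlier sketch.
counts = yearly_cooccurrence("sotu_texts")   # hypothetical directory, as before

for label, (start, end) in ERAS.items():
    era_pairs = Counter()
    for year, pairs in counts.items():
        if start <= year <= end:
            era_pairs.update(pairs)
    print(label, top_associates(era_pairs, "constitution"))
```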

My own questions are as follows:

  • Would this analytical approach yield new and original insights if other long-running historical records, such as the Congressional Record, were likewise subjected to the research team's algorithms and analytics?
  • Could companies and other commercial businesses derive any benefits from having their historical records similarly analyzed? For example, might it yield new insights and recommendations for corporate governance and information governance policies and procedures?
  • Could this methodology be used as an electronic discovery tool for litigators as they parse corporate documents produced during a case?

 


*  This also resembles, in both methodology and appearance, the graphic on Page 29 of the law review article entitled A Quantitative Analysis of the Writing Style of the U.S. Supreme Court, by Keith Carlson, Michael A. Livermore, and Daniel Rockmore, dated March 11, 2015, which is linked to and discussed in the May 15, 2015 Subway Fold post cited above.

Medical Researchers are Developing a “Smart Insulin Patch”

“Spinning Top”, Image by Creativity103

In an innovative joint project at the University of North Carolina and North Carolina State University, medical researchers are currently developing a "smart insulin patch" that can both measure blood glucose levels and then administer insulin to regulate them as needed for people with Type 1 diabetes. This is yet another approach, at the core of much academic and commercial research and development, to creating a "closed loop" system that senses and responds to changes in blood sugar.

Other ongoing research in this field is attempting to integrate continuous glucose sensors with insulin pumps, both of which are available on the market but not yet working together in a viable product with regulatory approval. Both of these approaches are efforts to create a biomedical system that can act as a fully functioning artificial pancreas for people with Type 1 diabetes.
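The reporting here is about hardware and biology rather than software, but the "closed loop" idea itself can be sketched as a simple feedback cycle: measure, compare to a target, respond. The toy code below illustrates only that control structure; the target, sensitivity and sample readings are arbitrary placeholder numbers, not clinical values and not anything from the researchers' work.

```python
# Toy illustration of a "closed loop": a sensor reading feeds a controller that
# decides a response. All numbers here are arbitrary placeholders, not medical advice.
def closed_loop_step(glucose_mg_dl, target=110, sensitivity=0.02):
    """Return an illustrative insulin response for one sensing cycle."""
    error = glucose_mg_dl - target
    return max(0.0, error * sensitivity)   # respond only when above the target

for reading in [95, 120, 180, 250]:        # simulated sensor values
    print(reading, "->", round(closed_loop_step(reading), 2), "units (illustrative)")
```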

The ongoing work on the smart insulin patch was covered in a fascinating article in the June 22, 2015 edition of The Washington Post entitled The ‘Smart’ Insulin Patch That Might One Day Replace Injections for Diabetic Patients by Brady Dennis. I will summarize, annotate and add a few questions of my own. (Two other recent Subway Fold posts on  October 3, 2014 and June 16, 2015, clickable here and here, respectively, have covered one project to upload glucose monitoring data to the mobile devices of friends and relatives, and another by a medical device manufacturer using social media to reach out to people using insulin pumps.)

This new smart insulin patch is a square as small as a penny and is worn on the skin. One side of it contains numerous tiny "microneedles" that face the skin and contain "both insulin and a glucose-sensing enzyme". Thus, when an increase in blood glucose is detected, the patch can release insulin into the patient's system "quickly and painlessly". As a result, the necessity of delivering insulin by the traditional means of a syringe or insulin pump is eliminated.

To date, the development team has only tested the patch on mice. Early test results, published here in The Proceedings of the National Academy of Sciences (subscription required), showed that the patch worked on the test animals starting within 30 minutes of its application and then lasting for up to nine hours.

Dr. John Buse, one of the co-authors and the director of the UNC Diabetes Center, finds this "exciting", but he also believes it will take years to determine whether this will work in humans. A very informative and detailed news release with photos of the patch and the microneedles, entitled Smart Insulin Patch Could Replace Painful Injections for Diabetes, was recently posted on the UNC Diabetes Center website.

Using current technology requires people with Type 1 diabetes to check their blood glucose levels a number of times each day and then correspondingly regulate their insulin to balance the effects of these up and down readings. Other researchers have endeavored to "close the loop" between insulin pumps and continuous glucose monitors, but these systems still require close attention and adjustments by the patient.

The smart insulin patch, if proven safe and viable, could one day dramatically change protocols for the care of Type 1 diabetes. It is an attempt to more directly emulate the human body's own insulin regulatory system. As well, the microneedles in the patch are designed to be far less invasive, and nearly painless, compared with today's injections, pumps and sensors, all of which require larger needles to pierce the skin. The patch is designed to directly "tap into the blood flowing through the capillaries" in order to become activated.

The research team has also found that they could "fine tune the patch" to attain blood glucose levels within an acceptable range. As a result, they are hopeful that, in the future, the patch could be adjusted to each individual patient's system (including, among other things, weight and insulin sensitivity), and that the duration of the patch's effectiveness could be extended to several days.

My questions are as follows:

  • How exactly will the patch be personalized to meet the biological needs of each user? How will patients manage and regulate this from patch to patch? Is the goal to calibrate a single patch for the user or a series of patches as the user’s needs and environment changes?
  • Can the patches be customized and fabricated using today’s commercial 3D printing technology?
  • Will blood glucose levels still need to be checked regularly using current methods in order to assess and align the patch’s effectiveness and accuracy?
  • Can the patch’s data on blood glucose levels and insulin dosages be uploaded onto mobile devices in order to be monitored by the patient’s health professionals and family members?
  • Might the patch be used in conjunction with or even integrated into the Apple Watch as a medical app?
  • Can other medications that a person with diabetes is taking also be administered, monitored and regulated with the patch, perhaps making it even “smarter”?

The BBC is Testing an Experimental Neural Interface for Television Remote Control

"Brain Power", Image by Allan Arifo

“Brain Power”, Image by Allan Arifo

Experimental research into using human brainwaves as an alternative form of input to control computers and prosthetic devices has been underway for a number of years. This technology is often referred to as neural interfaces or brain-computer interfaces. The results thus far have generally been promising. Here is a roundup of reports on ExtremeTech.com.

Another early phase neural interface project has been undertaken by the BBC to develop a system enabling a user to mentally select a program from an onscreen television guide. This was reported in a most interesting article entitled The BBC Wants You to Use Your Brain as a Remote Control by Abhimanyu Ghoshal, posted on TheNextWeb.com on June 18, 2015. While still using my keyboard for now, I will sum up, annotate and pose a few questions.

This endeavor, called the Mind Control TV Project, is a joint effort between the BBC's digital unit and a user experience ("UX") firm called This Place. In its current form, the input hardware is a headset that can read human brainwave signals. The article contains three pictures of the hardware and the software (which is a customized version of the BBC's iPlayer app normally used for viewing TV shows on the network).

To choose from among a number of options present onscreen, the user is required to “‘concentrate’ on it” while wearing the headset. That is, to choose a particular option, the user must concentrate upon it “for a few seconds”. A meter in the interface indicates the level of brain activity the user is generating and the “threshold” he or she must reach in order to initiate their choice.
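The article does not describe the underlying software, but the selection mechanism it reports (hold concentration above a threshold for a few seconds) can be sketched roughly as below. The signal source, threshold value and timing are all assumptions for illustration, not details from the BBC or This Place.

```python
import time

THRESHOLD = 0.7      # assumed normalized "concentration" level needed to select
HOLD_SECONDS = 3.0   # assumed time the level must be sustained ("a few seconds")

def select_by_concentration(read_level, current_highlight, poll_interval=0.1):
    """Return the highlighted option once concentration stays above the threshold."""
    held_since = None
    while True:
        level = read_level()                 # e.g. a value in [0, 1] from a headset driver
        if level >= THRESHOLD:
            if held_since is None:
                held_since = time.time()
            elif time.time() - held_since >= HOLD_SECONDS:
                return current_highlight()   # commit to the option now highlighted
        else:
            held_since = None                # the on-screen meter resets if focus drops
        time.sleep(poll_interval)
```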

The BBC hopes that this research will, in the future, benefit people with physical and neural disabilities that restrict their movements.

My questions are as follows:

  • Could this system eventually be so miniaturized that it could be integrated into an ordinary pair of glasses, perhaps Google Glass or something else?
  • Notwithstanding the significant benefits mentioned in this article, what other types of apps and systems might also find advantages in adapting neural interfaces?
  • What entrepreneurial opportunities might be waiting out there as a result of this technology?
  • How might neural interfaces be integrated with the current wave of virtual and augmented reality systems (covered in these seven recent Subway Fold posts), about to very soon enter the consumer market?

The New York Times Introduces Virtual Reality Tech into Their Reporting Operations

"Mobile World Congress 2015", Image by Jobopa

“Mobile World Congress 2015”, Image by Jobopa

As incredibly vast as New York City is, it has always been a great place to walk around. Its multitude of wonderfully diverse neighborhoods, streets, buildings, parks, shops and endless array of other sites can always be more fully appreciated on foot, going here and there in what we NYC natives like to call "The City".

The April 26, 2015 edition of The New York Times Magazine was devoted to this tradition. The lead-off piece, by Steve Duenes, was entitled How to Walk in New York. This was followed by several other pieces and then reports on 15 walks around specific neighborhoods. (Clicking on the Magazine's link above and then scrolling down to the second and third pages will produce links to nearly all of these articles.) I was thrilled reading this because I am such an avid walker myself.

The very next day, on April 27, 2015, Wired.com carried a fascinating story, by Angela Watercutter, about how one of the issue's rather astonishing supporting graphics was actually created, entitled How the NY Times is Sparking the VR Journalism Revolution. But even that's not the half of it: the NYTimes has made available for downloading a virtual reality file of the full construction and deconstruction of the graphic. The Wired.com post contains the link as well as a truly mind-boggling high-speed YouTube video of the graphic's rapid appearance and disappearance and a screen capture from the VR file itself. (Is "screen capture" really the accurate term to describe it, or is something more like "VR frame" needed?) This could take news reporting into an entirely new dimension where viewers literally go inside of a story.

I will sum up, annotate and pose a few questions about this story. (For another enthusiastic Subway Fold post about VR, last updated on March 26, 2015, please see Virtual Reality Movies Wow Audiences at 2015's Sundance and SXSW Festivals.)

This all began on April 11, 2015, when a French artist named JR pieced together, and then removed in less than 24 hours, a 150-foot photograph right across the street from the landmark Flatiron Building. This New York Times-commissioned image was of "a 20-year-old Azerbaijani immigrant named Elmar Aliyev". It was used on the cover of this special NYTimes Magazine edition. Upon its completion, JR then photographed it from a helicopter hovering above. (See the March 19, 2015 Subway Fold post entitled Spectacular Views of New York, San Francisco and Las Vegas at Night from 7,500 Feet Up for another innovative project involving highly advanced photography of New York also taken from a helicopter.)

The NYTimes deployed VR technology from a company called VRSE.tools to transform this whole artistic experience into a fully immersive presentation entitled Walking New York. The paper introduced this new creation at a news conference on April 27th. To summarize the NYTimes Magazine’s editor-in-chief, Jake Silverstein, this project was chosen for a VR implementation because it would so dramatically enhance a viewer’s experience of it. Otherwise, pedestrians walking over the image across the sidewalk would not nearly get the full effect of it.

Viewing Walking New York in full VR mode will require an app from VRSE’s site (linked above), and a VR viewer such as, among others, Google Cardboard.

The boost to VR as an emerging medium by the NYTimes' engagement on this project is quite significant. Moreover, this demonstrates how the technology can now be implemented in journalism. Mr. Silverstein, to paraphrase his points of view, believes this demonstrates how it can be used to literally and virtually bring someone into a story. Furthermore, by doing so, the effect upon the VR viewer is likely to be an increased amount of empathy for the individuals and circumstances that are the subjects of these more immersive reports.

There will more than likely be a long way to go before "VR filming rigs" can be sent out by news organizations to cover stories as they occur. The hardware is just not that widespread or mainstream yet. As well, the number of people who are trained and know how to use this equipment is still quite small and, even for those who do, preparing such a virtual presentation lags behind today's pace of news reporting.

Another journalist venturing into VR work is Newsweek reporter Nonny de la Peña, whose project reconstructs the shooting in the Trayvon Martin case. (See 'Godmother of VR' Sees Journalism as the Future of Virtual Reality by Edward Helmore, posted on The Guardian's website on March 11, 2015, for in-depth coverage of her innovative efforts.)

Let's assume that, out on the not too distant horizon, VR journalism gains acceptance, its mobility and ease of use improve, and the rosters of VR-trained reporters and producers grow, so that this field undergoes some genuine economies of scale. Then, as with many other life cycles of emergent technologies, the applications in this nascent field would only be limited by the imaginations of its professionals and their audiences. My questions are as follows:

  • What if the leading social media platforms such as Twitter, Facebook (which already purchased Oculus, the maker of VR headsets, for $2B last year), LinkedIn, Instagram (VR Instagramming, anyone?), and others integrate VR into their capabilities? For example, Twitter has recently added a live video feature called Periscope that its users have quickly and widely embraced. In fact, it is already being used for live news reporting as users turn their phones towards live events as they happen. Would those users just as readily swarm to VR?
  • What if new startup social media platforms launch that are purely focused on experiencing news, commentary, and discussion in VR?
  • Will previously unanticipated ethical standards be needed and likewise dilemmas result as journalists move up the experience curve with VR?
  • How would the data and analytics firms that parse and interpret social media looking for news trends add VR newsfeeds into their operations and results? (See the Subway Fold posts on January 21, 2015 entitled The Transformation of News Distribution by Social Media Platforms in 2015 and on December 2, 2014 entitled Startup is Visualizing and Interpreting Massive Quantities of Daily Online News Content.)
  • Can and should VR be applied to breaking news, documentaries and news shows such as 60 Minutes? What could be the potential risks in doing so?
  • Can drone technology and VR news gathering be blended into a hybrid flying VR capture platform?

I am also looking forward to seeing what other applications, adaptations and markets for VR journalism will emerge that no one can possibly anticipate at this point.

The Next Wave in High Tech Materials Science


Optical Profilometer Metamaterials, Image by Oak Ridge National Laboratory

Metamaterials are not something used by the United Federation of Planets' engineers to build the next iteration of the Starship Enterprise (which, btw, would be designated the NCC-1701-F, although some may differ). Rather, they are materials fabricated in such a manner that they can bend light, sound, radar, radio and seismic waves. The technological implications of applying these materials in antennas, radar, cosmetics and soundproofing may prove to be transformative, according to a fascinating article in the March 23, 2015 edition of The New York Times entitled The Waves of the Future May Bend Around Metamaterials, by John Markoff. I will summarize this, add some links and annotations, and pose some questions.

These substances achieve their remarkable effect by being composed of microscopic “subcomponents” that are smaller than the wavelengths of the types of waves they are engineered to bend in certain ways. That is, they can be used to “manipulate” the waves in designated manners “that do not normally occur”.
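For a rough sense of scale, the wavelength a subcomponent must undercut follows from the familiar relation wavelength = speed / frequency. The quick sketch below uses textbook values for light, radar and sound; it is only meant to show why metamaterial structures for light have to be so much finer than those for radar or audible sound.

```python
def wavelength(speed_m_s, frequency_hz):
    """wavelength = propagation speed / frequency"""
    return speed_m_s / frequency_hz

# Textbook values: subcomponents must be smaller than these scales.
examples = {
    "green light (~545 THz in air)":  wavelength(3.0e8, 5.45e14),   # ~550 nanometers
    "X-band radar (10 GHz)":          wavelength(3.0e8, 1.0e10),    # ~3 centimeters
    "sound at 1 kHz (air, ~343 m/s)": wavelength(343.0, 1.0e3),     # ~34 centimeters
}
for name, wl in examples.items():
    print(f"{name}: {wl:.3e} m")
```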

Researchers have been developing a variety of metamaterials for the past 15 years. Their work has recently begun yielding some genuine innovations in systems that incorporate these advances in original and innovative ways. Some of these latest developments include:

  • Airbus* and Lamda Guard are about to test a coating on airline windows to deter attempts by someone on the ground to blind pilots with laser pointing devices. (See NYC Man Charged With Pointing Laser at Aircraft, in the March 15, 2015 edition of The New York Times for a recent case of this here in New York.)
  • Echodyne is working on several types of antennas, radar-based navigation systems and other devices.
  • Evolv Technology is developing airport security systems.
  • Kymeta has partnered with Intelsat to engineer “land-based and satellite-based intelligent antennas”.
  • Dr. Xiang Zhang at the University of California at Berkeley, is working on, among other metamaterials projects, “superlenses” for microscopes that might increase their magnification powers beyond today’s capabilities. He has received inquiries from “military contractors and commercial companies” and even cosmetics companies concerning metamaterials. As well, he and other developers are creating apps for optical computer networks.
  • Professor Vinod Menon and his research team at the City College of New York, in their Laboratory for Nano and Micro Photonics, have demo-ed "light emission from ultrafast-switching LEDs" made from metamaterials. This and other related developments may also lead to significantly faster optical computer networks.
  • Menard Construction published a paper in 2013 entitled Seismic Metamaterial: How to Shake Friends and Influence Waves? by S. Brûlé, E.H. Javelaud, S. Enoch and S. Guenneau, in which the company successfully tested "a metamaterial grid of empty cylindrical columns bored into soil" in an effort to reduce the effects of a "simulated earthquake". (The phrases in quotes in the last sentence are from the NYTimes article, not the research paper itself.)

The article concludes on a note of great optimism from Professor Zhang about the future of metamaterials. I completely agree. Once these apps and development projects make their way into commercial markets and other scientists and companies from different fields and industries take greater notice, I strongly believe that new forms of metamaterials and their applications will emerge that have not even been imagined yet. Like any dramatically new technology, this will find its applications perhaps in some very unlikely and surprising sectors.

Just to start off, what about medical devices, optical computing and storage devices, visual displays, sound and video recording, and automotive safety technology? Let’s keep watching and see what springs from people’s needs and creativity.

Finally, just a quick mention of a recently published book that has received many excellent reviews for its lively and engaging series of stories about the key developments of basic materials and materials science through history, entitled Stuff Matters: Exploring the Marvelous Materials That Shape Our Man-Made World by Mark Miodownik (Houghton Mifflin Harcourt, 2014).

[While I hope that this blog post will be enlightening, please be assured that no light waves were bent or harmed during the drafting process.]

_______________________

* Another innovative project by Airbus to develop a drone for bringing Net access to remote and under-served regions was covered in the November 26, 2014 Subway Fold post entitled Robots and Diamonds and Drones, Aha! Innovations on the Horizon for 2015.

 

Mapping the Distribution of Mobile Device Operating Systems in New York

“Busy Times Square”, Image by Jim Larrison

Scott Galloway, a Clinical Professor of Marketing at NYU Stern School of Business, consultant and entrepreneur, recently gave a remarkable and captivating 15-minute presentation at this year's Digital Life Design 15 (DLD15) Conference. This event was held in Munich on January 18 through 20, 2015. He examined the four most dominant global companies in the digital world, Amazon, Google, Apple and Facebook, and predicted those among them whose market values might rise or fall. Combined, their current market value is more than $1 trillion (yes, that's trillion with a "t").

The content and delivery of Professor Galloway's talk is something that I think you will not soon forget. Whether his insights are correct in whole or in part, his talk will motivate you to think about these four companies which, individually and as a group, exert such monumental economic, technical, commercial, and cultural influence across the entirety of the web. I highly recommend that you click through and view this video in full.

Towards the end of his presentation, Professor Galloway clicked onto a rather astonishing slide of a heat map of New York City encoded with data points indicating mobile devices using Apple's iOS, Android or Blackberry operating systems. This particular part of the presentation was covered in a most interesting article entitled Fun Maps: Heat Map of Mobile Operating Systems in NYC by Michelle Young on UntappedCities.com on March 31, 2015. The article adds three very informative additional graphics, each individually illuminating the spread of one OS. I will briefly recap this report, provide some links and annotations, and add a few comments of my own.

Professor Galloway interprets the results as indicating a correlation between each OS and the relative wealth of different neighborhoods in NYC: iOS devices are more prevalent in areas of higher incomes, while Android appears more concentrated in lower income areas and suburbia.

However, Ms. Young believes this mapping is “misleading” and cites another article on UntappedCities.com entitled Beautiful Maps and the Lies They Tell, posted on February 20, 2014. This carefully refuted a series of data-mapped visualizations that were first published and interpreted as showing that only wealthier people used fitness apps.

Furthermore, there have been a series of Twitter posts in response to this heat map stating that the colors used (red for iOS, green for Android and purple for Blackberry) might be misleading due to some optical blurring of the colors, and that the underlying data consists of geotagged tweets from 2011 to 2013. (X-ref to the March 20, 2015 Subway Fold post entitled Studies Link Social Media Data with Personality and Health Indicators for other examples of geotagging.) In effect, there may be a structural bias if Twitter users tend to be on Apple products.

The data and heat maps notwithstanding, as a New York City native and life-long resident, my own completely unscientific observations tell me that iOS and Android are more evenly split, both in terms of absolute numbers and in any correlation to the relative wealth of any given neighborhood. The most obvious thing that jumped out at me was that each day millions of people commute all around the city, mostly into and around Manhattan. However, this does not seem to have been taken into account. Thus, while User X's mobile device may show him or her in a wealthier area of Manhattan, he or she might well live in, and commute from, a more working class neighborhood a considerable distance away.

Rather than using such static heat maps, I would propose that a time series of readings and data be taken continuously over a week or so. Next, I suggest applying some customized algorithms and analytics to smooth out, normalize and interpret the data. My instincts tell me that the results would indicate a much more homogeneous mix of mobile OSes across all or most of the neighborhoods here.
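To make that proposal a bit more concrete, here is a rough sketch of the kind of aggregation and normalization I have in mind. The data format (timestamped OS sightings tagged by neighborhood) and the sample values are entirely hypothetical.

```python
from collections import Counter, defaultdict

def os_shares_by_neighborhood(observations):
    """
    observations: iterable of (timestamp, neighborhood, os_name) tuples collected
    continuously over a week or more. Returns per-neighborhood OS shares,
    normalized so each neighborhood's shares sum to 1.0.
    """
    counts = defaultdict(Counter)
    for _, neighborhood, os_name in observations:
        counts[neighborhood][os_name] += 1
    shares = {}
    for neighborhood, c in counts.items():
        total = sum(c.values())
        shares[neighborhood] = {os_name: n / total for os_name, n in c.items()}
    return shares

# Hypothetical sample: readings spread across times of day, so commuters are
# counted where they live as well as where they work.
sample = [
    ("2015-04-01T08:00", "Midtown",  "iOS"),
    ("2015-04-01T08:05", "Midtown",  "Android"),
    ("2015-04-01T22:00", "Flushing", "Android"),
    ("2015-04-01T22:10", "Flushing", "iOS"),
]
print(os_shares_by_neighborhood(sample))
```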