Google’s A/B Testing Method is Being Applied to Improve Government Operations

"A/B", Image by Alan Rothman

“A/B”, Image by Alan Rothman

During my annual visit with my ophthalmologist, he always checks the accuracy of the prescription for my glasses by trying out different pairs of lenses and then asking me to read the letter chart on the wall. For each eye, he switches the lenses back and forth and asks me a series of times “Which is better, 1 or 2?”. This is called a refraction test. My answers either confirm that my current lenses are correct or that I need an updated prescription for new lenses.

I never realized until recently that this method of testing is very similar to one tech companies use to measure and adjust the usability of their products and services. (I view this as my own bit of, well, in-sight.) This process is called “A/B testing”, where test subjects are shown two nearly identical versions of something, one of them containing some slight variation. They are then asked to choose which of the two they prefer.

What if this method was transposed and applied in a seemingly non-intuitive leap to the public sector? A new initiative founded upon this by the US federal government was reported on in a fascinating and instructive article in the September 26, 2015 edition of The New York Times entitled A Better Government, One Tweak at a Time, by Justin Wolfers*. I highly recommend reading it in its entirety. I will summarize and annotate it, and then ask some of my own non-A/B questions. (There is another very informative article on this topic, covering the US and elsewhere, in today’s September 30, 2015 edition of The New York Times entitled Behaviorists Show the U.S. How to Improve Government Operations, by Binyamin Appelbaum.)

Google makes extensive use of this method in their testing and development projects. Their A/B testing has confirmed an effect that social scientists have known for years: “small changes in how choices are presented can lead to big changes in behavior”. Moreover, effective design is not so much about an aesthetically pleasing appearance as it is about testing competing ideas and gathering data to evaluate which of them works best.
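The statistics behind such a comparison can be sketched in a few lines. Below is a minimal, illustrative two-proportion z-test in Python; the sample figures are hypothetical, not Google’s or the government’s actual data:

```python
import math

def ab_test(conversions_a, total_a, conversions_b, total_b):
    """Compare two variants with a two-proportion z-test.

    Returns the z-statistic; |z| > 1.96 indicates significance
    at the conventional 5% level.
    """
    p_a = conversions_a / total_a
    p_b = conversions_b / total_b
    # Pooled conversion rate under the null hypothesis of no difference
    p = (conversions_a + conversions_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se

# Hypothetical example: variant B converts at 68% vs. 65% for variant A
z = ab_test(650, 1000, 680, 1000)
print(round(z, 2))  # with only 1,000 subjects per arm, a 3-point lift
                    # falls short of the 1.96 significance threshold
```

This is why the experiments described below are run at large scale: a small lift only becomes statistically convincing with many subjects.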

Last year, the effectiveness and success of A/B testing were introduced to the public sector when the federal government organized a group of officials (enlisted from a wide variety of backgrounds and professions) called the Social and Behavioral Sciences Team (SBST). It is also referred to as the “Nudge Unit”. Their mandate was to “design a better government”. They set out to A/B test different government functions to see what works and what does not.

After a year in operation, they have recently released their first annual report, detailing the many “small tweaks” they have implemented. Each of these changes was subjected to A/B testing. Their results have been “impressive” and imply that their efforts will save $Millions, if not $Billions. Moreover, because these changes are so relatively inexpensive, “even moderate impacts” could produce remarkably “high cost-benefit ratios”.

Among the SBST’s accomplishments are the following:

  • Improving Printing Efficiency: Some, but not all, printers at the US Department of Agriculture presented users with a pop-up message to encourage two-sided printing. As a result, two-sided printing rose by 6%. While this sounds small, its magnitude quickly scales up because US government printers produce 18 billion pages each year. The SBST report suggests that implementing this for the entire federal government could potentially save more than half a billion pages a year.
  • Reminding High School Graduates to Finish Their College Enrollment: Text messages were sent by the researchers to high school students during the summer after their graduation, urging them to follow up on the next steps needed to enroll in college. Those who received the texts completed their enrollment at a rate of 68%, versus 65% for those who did not. The positive effect was more pronounced for low-income students who got these texts. While this 3% improvement also might not sound so large, at a mere cost of $7 per student, it proved to be tremendously cost-effective as compared to the $Thousands it otherwise costs to offer “grant and scholarship aid”.
  • Increasing Vendors’ Honesty on Tax Forms: Prompts were randomly placed on some versions of a federal-vendor tax collection form asking vendors to be truthful in completing it. Those who used the form containing the prompt reported more taxable sales than those using the untweaked form. In turn, this resulted in vendors voluntarily paying “an additional $1.6 million in taxes”. Again, scaling up this experiment could potentially raise additional $Billions in tax revenue.
  • Raising Applications by Those Eligible for Student Loan Relief: The government knows, through its own methods, who is struggling to repay their federally funded student loans. In another experiment, emails about applying for loan relief were sent to a selected group of them, resulting in “many more” applications than among those who did not receive this message.
  • Lifting Savings Rates for People in the Military: When members of the military were transferred to Joint Base Myer-Henderson Hall in Virginia, they received a prompt to enroll in the military’s savings plan. The result was a significant rise in participants. This contrasts with no increase among others who were transferred to Fort Bragg in North Carolina and not prompted.
  • Other Successful Experimental “Nudges”:
    • Well-written letters resulting in more health care sign-ups
    • Emails urging employees to join workplace savings plans
    • Shortened URLs encouraging more people to pay bills online
    • Telling veterans that they earned, rather than were entitled to, a program increased their participation in it.
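The scaling arithmetic behind these claims is easy to check. Here is a short back-of-envelope sketch using the figures quoted above (the SBST report’s exact baselines may differ):

```python
# Back-of-envelope arithmetic behind the report's scaling claims
# (illustrative figures only, taken from the numbers quoted above).

ANNUAL_PAGES = 18_000_000_000        # pages printed by the US government per year
duplex_lift = 0.06                   # 6% rise in two-sided printing
# Each sheet switched to two-sided printing saves one page of paper
pages_saved = ANNUAL_PAGES * duplex_lift / 2
print(f"{pages_saved:,.0f} pages saved")   # on the order of half a billion

# College-enrollment texts: cost per additional enrollee
cost_per_student = 7                 # dollars per student texted
lift = 0.68 - 0.65                   # 3-percentage-point enrollment gain
cost_per_extra_enrollee = cost_per_student / lift
print(f"${cost_per_extra_enrollee:,.0f} per additional enrollee")
```

Even at roughly $233 per additional enrollee, the texting nudge compares very favorably with the thousands of dollars per student that grant and scholarship aid costs.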

Justin Wolfers, the author of this article, concludes that it is the testing itself that makes for these successes. He very succinctly summarizes this by stating:

“Experiment relentlessly, keep what works, and discard what doesn’t.”

He further asserts that if this is done as Google has done it, the US government might likewise become “clear, user-friendly and unflinchingly effective”.

My own questions about A/B testing by the government include:

  • Would it also produce cost-effective results for state and local governments? Are there any applications that could be done on a multi-national or even global level?
  • Could it be applied to improve electronic and perhaps even online public voting systems?
  • Could it bring incremental improvements in government administered health programs?
  • What would be the result if the government asked the public to submit suggestions online for new A/B testing applications? Could A/B testing itself be done by governments online?
  • Does it lend itself to being open sourced for test projects in the design, collection and interpretation of data?

An earlier and well-regarded book about using a variety of forms of nudges to improve public policies and functions is Nudge: Improving Decisions About Health, Wealth, and Happiness, by Richard H. Thaler and Cass R. Sunstein (Penguin Press, 2009).


*  The author’s bio in this article states he is “a senior fellow at the Peterson Institute for International Economics and professor of economics and public policy at the University of Michigan“. (The links were added by me and not included in the original text.)

Facebook is Now Restricting Access to Certain Data About Its User Base to Third Parties

Image by Gerd Altmann

It is a simple and straightforward business concept in any area of commerce: Do not become overly reliant upon a single customer or supplier. Rather, try to build a diversified portfolio of business relationships to diligently avoid this possibility and, at the same time, assist in developing potential new business.

Starting in May 2015, Facebook instituted certain limits upon commercial and non-commercial third parties’ access to the valuable data about its 1.5 billion user base¹. This has caused serious disruption, and even the end of operations, for some of them who had so heavily depended on the social media giant’s data flow. Let’s see what happened.

This story was reported in a very informative and instructive article in the September 22, 2015 edition of The Wall Street Journal entitled Facebook’s Restrictions on User Data Cast a Long Shadow by Deepa Seetharaman and Elizabeth Dwoskin. (Subscription required.) If you have access to WSJ.com, I highly recommend reading it in its entirety. I will summarize and annotate it, and then pose some of my own third-party questions.

This change in Facebook’s policy has resulted in “dozens of startups” closing, changing their approach or being bought out. It has also affected political data consultants and independent researchers.

This is a significant shift in Facebook’s approach to sharing “one of the world’s richest sources of information on human relationships”. Dating back to 2007, CEO Mark Zuckerberg opened access to Facebook’s “social graph” to outsiders. This included data points, among many others, about users’ friends, interests and “likes”.

However, the company recently changed this strategy due to users’ concerns about their data being shared with third parties without any notice. A spokeswoman for the company stated this is now being done in a manner that is “more privacy protective”. This change has been implemented to give greater control to their user base.

Other social media leaders including LinkedIn and Twitter have likewise limited access, but Facebook’s move in this direction has been more controversial. (These 10 recent Subway Fold posts cover a variety of ways that data from Twitter is being mined, analyzed and applied.)

Examples of the applications that developers have built upon this data include requests to have friends join games, vote, and highlight a mutual friend of two people on a date. The reduction or loss of this data flow from Facebook will affect these and numerous other services previously dependent on it. As well, privacy experts have expressed their concern that this change might result in “more objectionable” data-mining practices.

Others view these new limits as a result of the company’s expansion and “emergence as the world’s largest social network”.

Facebook will provide data to outsiders about certain data types like birthdays. However, information about users’ friends is mostly not available. Some developers have expressed complaints about the process for requesting user data as well as about “unexpected outcomes”.

These new restrictions have specifically affected the following Facebook-dependent websites in various ways:

  • The dating site Tinder asked Facebook about the new data policy shortly after it was announced because they were concerned that limiting data about relationships would impact their business. A compromise was eventually reached, but it limited the site’s access to only the “photos and names of mutual friends”.
  • College Connect, an app that provided forms of social information and assistance to first-generation students, could no longer continue its operations when it lost access to Facebook’s data. (The site still remains online.)
  • An app called Jobs With Friends that connected job searchers with similar interests met a similar fate.
  • Social psychologist Benjamin Crosier was in the process of creating an app searching for connections “between social media activity and ills like drug addiction”. He is currently trying to save this project by requesting eight data types from Facebook.
  • An app used by President Obama’s 2012 re-election campaign was “also stymied” as a result. It was used to identify potential supporters, get them to vote, and encourage their friends on Facebook to vote or register to vote.²

Other companies are trying an alternative strategy to build their own social networks. For example, Yesgraph Inc. employs predictive analytics³ methodology to assist clients who run social apps in finding new users by data-mining, with the user base’s permission, through lists of email addresses and phone contacts.

My questions are as follows:

  • What are the best practices and policies for social networks to use to optimally balance the interests of data-dependent third parties and users’ privacy concerns? Do they vary from network to network or are they more likely applicable to all or most of them?
  • Are most social network users fully or even partially concerned about the privacy and safety of their personal data? If so, what practical steps can they take to protect themselves from unwanted access and usage of it?
  • For any given data-driven business, what is the threshold for over-reliance on a particular data supplier? How and when should their roster of data suppliers be further diversified in order to protect themselves from disruptions to their operations if one or more of them change their access policies?

 


1.   Speaking of interesting data, on Monday, August 24, 2015, for the first time ever in the history of the web, one billion users logged onto the same site, Facebook. For the details, see One Out of Every 7 People on Earth Used Facebook on Monday, by Alexei Oreskovic, posted on BusinessInsider.com on August 27, 2015.

2.  See the comprehensive report entitled A More Perfect Union by Sasha Issenberg in the December 2012 issue of MIT’s Technology Review about how this campaign made highly effective use of social network apps and data analytics in its winning 2012 re-election effort.

3.  These seven Subway Fold posts cover predictive analytics applications in a range of different fields.

VR in the OR: New Virtual Reality System for Planning, Practicing and Assisting in Surgery

"Neural Pathways in the Brain", Image by NICHD

“Neural Pathways in the Brain”, Image by NICHD

The work of a new startup and some pioneering doctors has recently given the term “operating system” an entirely new meaning: They are using virtual reality (VR) to experimentally plan, practice and assist in surgery.

Currently, VR technology is rapidly diversifying into new and exciting applications across a widening spectrum of fields and markets. This surgical one is particularly promising because of its potential to improve medical care. Indeed, this is far beyond VR’s more familiar domains of entertainment and media.

During the past year, a series of Subway Fold posts have covered closely related VR advancements and apps in movies, news reporting, and corporate advisory boards. (These are just three of ten posts here in the category Virtual and Augmented Reality.)

Virtual Surgical Imaging

The details of these new VR surgical systems were reported in a fascinating article posted on Smithsonian.com on September 15, 2015 entitled How Is Brain Surgery Like Flying? Put On a Headset to Find Out by Michelle Z. Donahue. I highly recommend reading it in its entirety. I will summarize, annotate, and pose a few non-surgical questions of my own.

Surgeons and developers are creating virtual environments by combining and enhancing today’s standard two-dimensional medical scans. The surgeons can then use the new VR system to study a particular patient’s internal biological systems. For example, prior to brain surgery, they can explore the virtual representation of the area to be operated upon before as well as after any incision has been made.

For example, Osamah Choudhry, a fourth-year neurosurgery resident at NYU’s Langone Medical Center, has had recent experience doing this in a 3D virtualization of a patient’s glioma, a form of brain tumor. His VR headset is an HTC Vive used in conjunction with a game controller that enables him to move around, view the subject from different perspectives, and see the fine details of connecting nerves and blood vessels. Furthermore, he has been able to simulate a view of some of the pending surgical procedures.

SNAP Help for Surgeons

This new system that creates these fully immersive 3D surgical virtualizations is called the Surgical Navigation Advanced Platform (SNAP). It was created by a company in Ohio called Surgical Theater ( @SurgicalTheater ). It can be used with either the Oculus Rift or HTC Vive VR headsets (neither of which has been commercially released yet). Originally, SNAP was intended for planning surgery, as it is still used in the US. Now it is being tested by a number of hospitals in Europe for actual use during surgery.

Surgeons using SNAP today need to step away from their operations and change gloves. Once engaged with this system, they can explore the “surgical target” and then “return to the patient with a clear understanding of next steps and obstacles”. For instance, SNAP can assist in accurately measuring and focusing upon which parts of a brain tumor to remove as well as which areas to avoid.

SNAP’s chance origin occurred when former Israeli fighter pilots Moty Avisar and Alon Geri were in Cleveland at work on a flight simulator for the US Air Force. While they were having a cup of coffee, some of their talk was overheard by Dr. Warren Selman, the chair of neurosurgery at Case Western Reserve University. He inquired whether they could adapt their system to enable surgeons to “go inside a tumor” in order to see how best to remove it while avoiding “blood vessels and nerves”. This eventually led Avisar and Geri to form Surgical Theater. At first, their system produced a 3D model that was viewed on a 2D screen. The VR headset was integrated later on.

System Applications

SNAP’s software merges a patient’s CT and MRI images to create its virtual environment. Thereafter, a neurosurgeon can, with the assistance of a handheld controller, use the VR headset to “stand next to or even inside the tumor or aneurysm“.  This helps them to plan the craniotomy, the actual surgery on the skull, and additional parts of the procedures. As well, the surgeon can examine the virtual construct of the patient’s vascular system.

At NYU’s Langone Medical Center, the Chair of Neurosurgery, Dr. John Golfinos, believes that SNAP is a significant advancement in this field, as doctors previously had to engage in all manner of “mental gymnastics” when using 2D medical imaging to visualize a patient’s condition. Today, with a system like SNAP, simulations are much more accurate in presenting patients the way that surgeons see them.

Dr. Golfinos has applied SNAP to evaluating “tricky procedures” such as whether or not to use an endoscopic tool for an operation. SNAP was helpful in deciding to proceed in this manner and the outcome was successful.

UCLA’s medical school, the David Geffen School of Medicine, is using SNAP in “research studies to plan surgeries and a procedure’s effectiveness”. The school’s Neurosurgery Chair, Dr. Neil Martin, has been working with Surgical Theater to smooth over the disorientation some users experience with VR headsets.

Dr. Martin and Mr. Avisar believe that SNAP “could take collaboration on surgeries to an international level”. That is, surgeons could consult with each other on a global scale, placing them within a shared virtual space to cooperate on an operation.

Patient Education and Future Developments

Dr. Choudhry further believes that the Oculus Rift or Vive headsets can be used to answer the questions of patients who have done their own research and devised their own questions, as well as improve the doctor/patient relationship. He has seen patients quickly “lose interest” when he uses 2D CT and MRI scans to explain these images. However, he believes that 3D VR “is intuitive” as patients recognize what they are viewing.

He also believes that future developments might lead to the integration of augmented reality systems into surgery. These present, through a transparent viewer and headset, a combination of a virtual data overlay within the user’s line of sight upon the real operating room.  (Please see these eight recent Subway Fold posts about augmented reality.)

My own questions are as follows:

  • Are VR surgical systems used only after a decision to operate has been made or are surgeons also using them to assess whether or not to operate?
  • What other types of surgeries could benefit both doctors and patients by introducing the option of using VR surgical systems?
  • Can such systems be used for other non-invasive medical applications such as physical therapy for certain medical conditions and post-operative care?
  • Can such systems be adapted for non-medical applications such as training athletes? That is, can such imagery assist in further optimizing and personalizing training regimens?

May 11, 2017 Update: For a truly fascinating report on a new surgical system that has just been developed using advanced augmented reality technology, see Augmented Reality Goggles Give Surgeons X-ray Vision, by Matt Reynolds, posted on NewScientist.com on May 11, 2017.


 

Terahertz Spectrum Technology May Produce Major Increases in Wireless Network Speeds

“Communications Hardware”, Image by Tom Blackwell

Remember when upgrading from a 14.4 kbps modem to a 33.6 kbps modem felt as though you had moved from a bicycle to an Indy race car online? What about when you had DSL installed and had never experienced anything like it? How about when you next had a T1 line at the office and then a cable modem hooked up at home? All of these drastic jumps in transmission speeds helped to fuel the exponential growth in the web’s evolving architecture, rich and limitless content, and integration into nearly every aspect of modern life.

While the next disruptive jump in speed has yet to occur, researchers and developers are currently working on technology to exploit a still little used area of the electromagnetic spectrum called terahertz (“THz”) waves. Should this come to pass, wireless bandwidth rates could potentially increase by 100 fold or more over today’s WiFi and mobile networks. Beyond increasing the velocity at which videos of cats playing the piano can be distributed and viewed, this technology could have a major impact on the entire world of wireless access, services and devices.

Nonetheless, despite the alluring promise of THz wireless, some key engineering challenges remain to be solved.

The latest significant advance in this early field was reported in a most interesting article posted on Phys.org on September 14, 2015 entitled Physicists Develop Key Component for Terahertz Wireless. (No author is credited.) I will summarize, annotate and pose some of my own questions derived from the blog-wave portion of the spectrum.

A team of researchers from Brown University and Osaka University has developed the “first system for multiplexing terahertz waves”. This is, by definition, a technological means to share multiple communication streams over a single resource, such as a cable simultaneously carrying multiple TV channels or phone calls. (However, it is distinctly different from the multiplex movie theaters currently showing a dozen or more of the latest movies at a time along with offering way overpriced snacks at the concession stands.) Another device, often needed to reverse this process, is called a demultiplexer.

The development team’s work on this advancement was published in the September 14, 2015 online edition of Nature Photonics in a paper entitled Frequency-division Multiplexing in the Terahertz Range Using a Leaky-wave Antenna by Nicholas J. Karl, Robert W. McKinney, Yasuaki Monnai, Rajind Mendis & Daniel M. Mittleman. (A subscription is required for full access.)

The “leaky wave antenna” at the core of this consists of “two metal plates placed in parallel to form a waveguide”. As the THz waves move across this waveguide, they “leak out at different angles depending on their frequency”. In turn, the various frequencies can disperse individual streams of data riding on these THz waves. Devices at the receiving end will be able to capture the data from a unique stream.
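The principle of frequency-division multiplexing that the antenna exploits can be illustrated with a toy numerical model: several channels each ride on their own carrier frequency, are summed into one signal, and are then recovered by filtering around each carrier. The sketch below uses NumPy and audio-range frequencies purely for illustration; it models the multiplexing concept, not the actual THz hardware:

```python
import numpy as np

# Toy model of frequency-division multiplexing: three data channels,
# each riding on its own carrier frequency, share one signal and are
# recovered by inspecting the spectrum around each carrier (via the FFT).
fs = 10_000                       # sample rate (Hz) for this toy example
t = np.arange(0, 1, 1 / fs)       # one second of samples
carriers = [500, 1500, 2500]      # one carrier frequency per channel (Hz)
amplitudes = [1.0, 0.5, 2.0]      # each channel's "data" is its amplitude

# Multiplex: sum the modulated carriers into a single shared signal
signal = sum(a * np.sin(2 * np.pi * f * t)
             for a, f in zip(amplitudes, carriers))

# Demultiplex: read each channel's amplitude back off the spectrum
spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
recovered = [spectrum[np.argmin(np.abs(freqs - f))] for f in carriers]
print([round(a, 2) for a in recovered])  # ≈ [1.0, 0.5, 2.0]
```

In the researchers’ system, the separation is done physically by the leaky-wave antenna’s frequency-dependent angles rather than by an FFT, but the idea of carving one medium into independent frequency channels is the same.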

According to the researchers, their new approach has the advantage of being able “to adjust the spectrum bandwidth that can be allocated to each channel”. This could be quite helpful if and when their new multiplexer is added to a data network. In effect, bandwidth can be apportioned to the network users’ individual data needs.

The team is planning to continue their development of the THz multiplexer. This includes integrating, testing and improving it in a “prototype terahertz network” they are building. A member of the team and co-author of their paper, Daniel M. Mittleman, hopes that their work will inspire other researchers to join in developing other original THz network technologies.

Assuming that THz wireless networks will be deployed in the future, my questions are as follows:

  • Will today’s wireless service providers adapt their networks if THz technology proves to be technically and economically feasible? Will new providers emerge in the telecom marketplace?
  • What new types of services will become enabled by THz?
  • Will it bring broadband transmission rates to underserved geographic areas around the world?
  • How will providers model and test the elasticity of the pricing for their THz services? Are current pricing schemes sufficient or are new alternatives needed?
  • What entrepreneurial opportunities await for companies developing THz systems and those leveraging its capabilities for content creation and delivery?
  • As more advertising continues to migrate to wireless platforms, how will marketing and content strategists use THz to their advantage?

Vermont’s Legislature is Considering Support for Blockchain Technology and Smart Contracts

"Ledger Detailing External Work Commissioned at Holmes McDougall", Image by Edinburgh City of Print

“Ledger Detailing External Work Commissioned at Holmes McDougall”, Image by Edinburgh City of Print

Merriam-Webster.com lists two definitions for the word legerdemain: “1. Sleight of hand. 2. A display of skill or adroitness”.

If a newly passed bill and a currently pending amendment to it in the state of Vermont’s legislature produce their intended results and, taking the second of the above definitions into consideration, I think that such a combination might give rise to a, well, [bit]coining of a new homophone: ledgerdomain. That is, a conflation of:

  • the blockchain technology upon which the online ledger for Bitcoin and a growing array of other systems is built, and
  • the concept of the domain name, a critical Internet protocol.

The details of this very interesting story involving this unique meeting of the virtual and legislative worlds were reported in an article posted on cointelegraph.com on August 5, 2015 entitled Vermont Considering Blockchain Tech for State Records, Smart Contracts by Brian Cohen. I highly recommend clicking-through and reading it in its entirety. I will summarize just those parts of the article about the blockchain legislation and smart contracts, provide some annotations, and then add some of my own questions to the ledger.

Vermont’s Blockchain Legislation

This new legislation is intended to move the state towards using blockchain technology for “records, smart contracts and other applications”. One of the key distinctions here is that Vermont is not in any manner approving or adopting Bitcoin, but rather, the state is diversifying and adapting the underlying blockchain technology that supports it. Just recently, we examined a comparable effort in the music industry in the August 21, 2015 Subway Fold post entitled Two Startups’ Note-Worthy Efforts to Adapt Blockchain Technology for the Music Industry. (Please see also the May 8, 2015 SF post entitled Book Review of “The Age of Cryptocurrency”.)

In June 2015, a bill entitled No. 51. An Act relating to promoting economic development was signed into law by Vermont’s Governor, Peter Shumlin. On Page 7 is “Sec. A.3 Study and Report: Blockchain Technology”, requiring a report to be completed by January 16, 2016 on the “potential opportunities and risks” of using blockchain technology “for electronic facts and records”.

An as-yet-unsigned amendment to this legislation by Vermont General Assembly Senator Becca Balint provides a “roadmap if there are favorable findings” in this report. This amendment, appearing on Pages 2 and 3 of the PDF file for Sec. 47. 9 V.S.A. Chapter 2: Electronic Verification Of Facts And Records: § 11. Blockchain Enabling, was introduced in April 2015. The relevant text of §11(a) appears as:

Blockchain technology shall be a recognized practice for the verification of a fact or record, and those facts or records established through a valid blockchain technology process shall have a presumption of validity for matters to be determined subject to, or in accordance with, the laws of the State of Vermont

Because of some recent negative publicity about a number of cases of alleged illegality involving Bitcoin, the virtual currency is never mentioned in any of the official text. Instead, the focus of the bill and the amendment is squarely upon exploring the potential of the blockchain. It is the promising technological capabilities of the online ledger system that have drawn this serious attention from Vermont’s legislators.
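What makes a blockchain credible as a registry of “facts and records” is its tamper-evidence: each entry cryptographically commits to the one before it, so altering any past record breaks every later link. Here is a minimal, illustrative hash chain in Python; it assumes nothing about the design of any system Vermont might actually adopt, and the record contents are invented:

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Create a block whose hash commits to the record and its predecessor."""
    block = {"record": record, "prev_hash": prev_hash}
    payload = json.dumps({"record": record, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash and link; any tampering makes this return False."""
    for i, block in enumerate(chain):
        payload = json.dumps({"record": block["record"],
                              "prev_hash": block["prev_hash"]},
                             sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Invented example records, purely for illustration
chain = [make_block("deed: lot 12 to A. Smith", "0" * 64)]
chain.append(make_block("deed: lot 12 to B. Jones", chain[-1]["hash"]))
print(verify_chain(chain))                        # True
chain[0]["record"] = "deed: lot 12 to C. Fraud"   # tamper with an early record
print(verify_chain(chain))                        # False
```

It is this property, that a valid chain is strong evidence no record has been altered, that underlies the amendment’s “presumption of validity” for facts established through a blockchain process.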

Smart Contracts Resources

Oliver R. Goodenough, the Director of the Center for Legal Innovation at Vermont Law School, drafted the amendment. In his state legislative testimony and supporting memorandum of April 1, 2015, Professor Goodenough addressed, among other topics, the need for recognition of smart contracts. He mentioned the advances being made on systems that “permit the statement of contractual obligations in software”, covered in his own academic writing and in the work of the software companies Ethereum (another link here covers this startup on Wikipedia) and Exari. He further recommended making “Vermont a leader in the field”.

Among the citations to the professor’s memo is one from an Office of Financial Research of the U.S. Department of Treasury Working Paper entitled Contract as Automaton: The Computational Representation of Financial Agreements. (This was dated March 26, 2015, just four days prior to his legislative testimony.) In turn, this paper contains a link to a one-hour YouTube video entitled Ethereum Contracts as Legal Contracts. This is an in-depth presentation by patent attorney Tom Johnson where he discusses the legality of smart contracts and documents using Ethereum. (I believe this video is quite informative and enlightening for anyone who is interested in the legal aspects and implications of Bitcoin and blockchain technology.)

My own questions are as follows:

  • How can the operations and possible benefits of adopting blockchain technology be effectively introduced to other states’ legislators and their constituents?
  • Should the US federal government and federal agencies initiate such studies for their operations?
  • Should local, state and federal judicial systems also undertake pilot studies to weigh the risks and rewards of introducing the blockchain applications?
  • What, if any, potential benefits would the large numbers of commercial contractors who deal with government agencies derive from applications of the blockchain?
  • Where should potential entrepreneurs now be looking in this early market to provide guidance and services for the possible evaluation, planning, installation, implementation, rollout, maintenance and upgrades of blockchain-based government IT systems?

For another comprehensive and timely article on the early-stage blockchain work now being done in the private sector within the financial industry, I highly recommend clicking through and reading an article from the August 28, 2015 edition of The New York Times entitled Bitcoin Technology Piques Interest on Wall St., by Nathaniel Popper. It also references and links to the cointelegraph.com report discussed above as well as the August 5, 2015 billboard.com article which was the basis for the August 21, 2015 Subway Fold post linked to in the fourth paragraph above concerning the music industry.

New Visualization Maps Out the Concepts of the “Theory of Everything”

“DC891- Black Hole”, Image by Adriana Arias

January 6, 2017: An update on this post appears below.


While I was a student in the fourth grade at Public School 79, my teacher introduced the class to the concept of fractions. She demonstrated this using the classic example of cutting a pie into different numbers of slices. She explained to the class about slicing it into halves, thirds, quarters and so on. During this introductory lesson, she kept emphasizing that the sum of all the parts always added up to the whole pie, and could never be more or less than the whole.
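Her pie rule amounts to a simple identity: however the whole is sliced, the fractions must sum to exactly one. Taking one illustrative slicing as an example:

```latex
\frac{1}{2} + \frac{1}{4} + \frac{1}{4}
  = \frac{2}{4} + \frac{1}{4} + \frac{1}{4}
  = \frac{4}{4}
  = 1
```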

I thought I could deal with this fractions business back then. As far as I know, it still holds up pretty well today.

On an infinitely grander and brain-bendingly complex scale that is more than just pieces of π, physicists have been working for decades on a Theory of Everything (ToE). The objective is to build a comprehensive framework that fully unites and explains the theoretical foundations of physics across the universe. The greatest minds in this field have approached this ultimate challenge with a variety of highly complex and advanced mathematics, theoretical constructs and proposals. Many individuals and multidisciplinary teams are still at work trying to achieve the ToE. If and when any one of them succeeds in formulating and proving it, the result will be the type of breakthrough that could profoundly change our understanding of our world and the universe we inhabit.

Einstein was one of the early pioneers in this field. He invested a great deal of effort in this challenge, but even a Promethean genius such as he never succeeded at it. His General Theory of Relativity continues to be one of the cornerstones of the ToE endeavor. The entire September 2015 issue of Scientific American is devoted to the 100th anniversary of this monumental accomplishment. I highly recommend reading that issue in its entirety.

I also strongly urge you to check out a remarkable interactive visualization of the component theories and concepts of the ToE that appeared in an August 3, 2015 article on QuantaMagazine.org entitled Theories of Everything, Mapped, by Natalie Wolchover. The author very concisely explains how the map’s builder, developer Emily Fuhrman, created it in order to teach people about the ToE. Furthermore, it shows that substantial “disunions, holes and inconsistencies” remain, comprising the “deep questions that must be answered” in order to achieve the ToE.

The full map is embedded at the top of the article, ready for visitors to click into it and immerse themselves in such topics as, among many others, grand unification, quantum gravity and dark matter. All along the way, there are numerous linked resources within it available for further extensive exploration. In my humble opinion, Ms. Fuhrman has done a brilliant job of creating this.

Having now spent a bit of time clicking all over this bounty of fascinating information, I was reminded of my favorite line from Bob Dylan’s My Back Pages that goes “Using ideas as my maps”. (The Byrds also had a hauntingly beautiful Top 40 hit covering this.)

In these 26 prior Subway Fold posts we have examined a wide range of the highly inventive and creative work that can be done with contemporary visualization tools. This ToE map is yet another inspiring example. Even if subjects like space-time and the cosmological constant are not familiar to you, this particularly engaging visualization expertly arranges and explains the basics of these theoretical worlds. It also speaks to the power of effective visualization in capturing the viewer’s imagination about a subject which, if otherwise only left as text, would not succeed in drawing most online viewers in so deeply.


 

January 6, 2017 Update:

Well, it looks like those Grand Unified Fielders have recently suffered another disappointing bump in the road (or perhaps in the universe), as they have been unable to find any genuine proton decay. Although this might sound like something your dentist has repeatedly warned you about, it is rather an anticipated physical phenomenon on the road to finding the Theory of Everything that has yet to be observed and measured. This puts that quest on hold for the time being unless and until either it is observed or physicists and theorists can work around its absence. The full details appear in a new article entitled Grand Unification Dream Kept at Bay, by Natalie Wolchover (the same author whose earlier article on this was summarized above), posted on QuantaMagazine.com on December 15, 2016.

A Thrilling Visit to the New One World Observatory at the Top of the World Trade Center

World Trade Center One on August 29, 2015, looking west on Vesey Street.

On Saturday, August 29, 2015, I had the great pleasure of visiting the new One World Observatory at the top of the World Trade Center. The 360-degree view from the 101st, 102nd and 103rd floors was absolutely spectacular.

A 48-second elevator ride took visitors all the way up to the 103rd floor. On the four walls of the elevator was an immersive animation of New York rapidly rising and growing over its 500-year history. Once we arrived and exited the elevator, we were guided forward and shown an extraordinary several-minute video about the construction of the new One World Trade Center. The large screen then rose, revealing a breathtaking panoramic view of New York. The Observatory was then open for visitors to walk around, marvel at, and photograph the view. Fortunately, it was a clear and brilliantly sunny day.

I love New York, my hometown and residence for my entire life, more than I really know how to put into words. Seeing all of it in its full glory from so high up was nearly overwhelming.

If you live in the New York Metro area or you are ever here for a visit, I cannot recommend this experience highly enough. You will remember it for a long time.

I took all of the pictures in this post. I have arranged the views below moving from south to east to north to west. I hope they provide you with some sense of the beauty and scope of this great city.

 

Looking south, the Statue of Liberty is in the middle of New York Harbor.

 

Boats traveling south, passing by Governor’s Island in New York Harbor. The yellow boat in the middle is the Staten Island Ferry.

 

The southern end of Manhattan in the Wall Street area. The greenery in the middle right before the water is Battery Park.

 

Looking east, this is the historic Brooklyn Bridge connecting the boroughs of Manhattan and Brooklyn.

 

Looking northeast, top to bottom is Queens, the East River, and the lower eastern side of Manhattan.

 

Looking north, a very long view of Manhattan. The Empire State Building is in the middle left.

 

Still looking north in Manhattan from a different view more towards the western side of the island.

 

Looking northwest, a series of boats sailing north on the Hudson River. Jersey City is to the left.

 

Looking northwest, a longer perspective of the Hudson River.