LinkNYC Rollout Brings Speedy Free WiFi and New Opportunities for Marketers to New York

Link.NYC WiFi Kiosk 5, Image by Alan Rothman

Back in the halcyon days of yore before the advent of smartphones and WiFi, there were payphones and phone booths all over the streets of New York. Most have disappeared, but a few scattered survivors have still managed to hang on. An article entitled And Then There Were Four: Phone Booths Saved on Upper West Side Sidewalks, by Corey Kilgannon, posted on NYTimes.com on February 10, 2016, recounts the stories of some of the last lonely public phones.

Taking their place comes a highly innovative new program called LinkNYC (also @LinkNYC and #LinkNYC). This initiative has just begun to roll out across all five boroughs with a network of what will become thousands of WiFi kiosks providing free and very fast web access and phone calling, plus a host of other online NYC support services. The kiosks occupy the same physical spaces as the previous payphones.

The first batch of them has started to appear along Third Avenue in Manhattan. I took the photos accompanying this post of one kiosk at the corner of 14th Street and Third Avenue. While standing there, I was able to connect to the web on my phone and try out some of the LinkNYC functions. My reaction: This is very cool beans!

LinkNYC also presents some potentially great new opportunities for marketers. The launch of the program and the companies getting into it on the ground floor were covered in a terrific new article on AdWeek.com on February 15, 2016 entitled What It Means for Consumers and Brands That New York Is Becoming a ‘Smart City’, by Janet Stilson. I recommend reading it in its entirety. I will summarize and annotate it to add some additional context, and pose some of my own ad-free questions.

LinkNYC Set to Proliferate Across NYC

Link.NYC WiFi Kiosk 2, Image by Alan Rothman

When completed, LinkNYC will give New York a highly advanced mobile network spanning the entire city. Moreover, it will help to transform it into a very well-wired “smart city”.¹ That is, an urban area comprehensively collecting, analyzing and optimizing vast quantities of data generated by a wide array of sensors and other technologies. It is a network, and a host of network effects, where a city learns about itself and leverages this knowledge for multiple benefits for its citizenry.²

Beyond mobile devices and advertising, smart cities can potentially facilitate many other services. The consulting firm Frost & Sullivan predicts that there will be 26 smart cities across the globe by 2025. Currently, everyone is looking to NYC to see how the implementation of LinkNYC works out.

According to Mike Gamaroff, the head of innovation in the New York office of Kinetic Active, a global media and marketing firm, LinkNYC is primarily a “utility” for New Yorkers as well as “an advertising network”. Its throughput rates are at gigabit speeds, thereby making it the fastest web access available when compared to large commercial ISPs’ average rates of merely 20 to 30 megabits.
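To put those numbers in rough perspective, here is a small back-of-the-envelope comparison. The file size and the two speeds below are illustrative assumptions (a nominal 1 Gbps kiosk connection versus a 25 Mbps home connection), not measured LinkNYC figures:

```python
# Back-of-the-envelope download-time comparison (illustrative figures only).

FILE_SIZE_GB = 4          # e.g., a feature-length HD movie (assumed size)
BITS_PER_GB = 8 * 10**9   # using decimal gigabytes for simplicity

def download_seconds(speed_mbps: float) -> float:
    """Seconds needed to download FILE_SIZE_GB at the given speed in megabits per second."""
    return FILE_SIZE_GB * BITS_PER_GB / (speed_mbps * 10**6)

print(f"At 1 Gbps (gigabit WiFi): {download_seconds(1000):7.0f} seconds")
print(f"At 25 Mbps (typical ISP): {download_seconds(25):7.0f} seconds")
# Roughly 32 seconds versus about 21 minutes for the same file.
```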

Nick Cardillicchio, a strategic account manager at Civiq Smartscapes, the designer and manufacturer of the LinkNYC kiosks, said that LinkNYC is the only place where consumers can access the Net at such speeds. For the AdWeek.com article, he took the writer, Janet Stilson, on a tour of the kiosks, including the one at Third Avenue and 14th Street, where one of the first ones is in place. (Coincidentally, this is the same kiosk I photographed for this post.)

There are a total of 16 kiosks currently operational for the initial testing. The WiFi web access is accessible within 150 feet of a kiosk and can range up to 400 feet. Perhaps those New Yorkers actually living within this range will soon no longer need their commercial ISPs.

Link.NYC WiFi Kiosk 4, Image by Alan Rothman

The initial advertisers appearing in rotation on the large digital screen include Poland Spring (see the photo at the right), MillerCoors, Pager and Citibank. Eventually “smaller tablet screens” will be added to enable users to make free domestic voice or video calls. As well, they will present maps, local activities and emergency information in and about NYC. Users will also be able to charge up their mobile devices.

However, it is still too soon to assess and quantify the actual impact on such providers. According to David Krupp, CEO, North America, for Kinetic, neither Poland Spring nor MillerCoors has produced an adequate amount of data to yet analyze their respective LinkNYC ad campaigns. (Kinetic is involved in supporting marketing activities.)

Commercializing the Kiosks

The organization managing LinkNYC, the CityBridge consortium (consisting of Qualcomm, Intersection, and Civiq Smartscapes), is not yet indicating when the new network will progress into a more “commercial stage”. However, once the network is fully implemented within the next few years, the number of kiosks might end up being somewhere between 7,500 and 10,000. That would make it the largest such network in the world.

CityBridge is also in charge of all the network’s advertising sales. These revenues will be split with the city. Under the 12-year contract now in place, this arrangement is predicted to produce $500M for NYC, with positive cash flow anticipated within 5 years. Brad Gleeson, the chief commercial officer at Civiq, said this project depends upon the degree to which LinkNYC is “embraced by Madison Avenue” and the time needed for the network to reach “critical mass”.

Because of the breadth and complexity of this project, achieving this inflection point will be quite challenging according to David Etherington, the chief strategy officer at Intersection. He expressed his firm’s “dreams and aspirations” for LinkNYC, including providing advertisers with “greater strategic and creative flexibility”, offering such capabilities as:

  • Dayparting – dividing a day’s advertising into several segments dependent on a range of factors about the intended audience, and
  • Hypertargeting – delivering advertising to very highly defined segments of an audience (a brief hypothetical sketch of both ideas follows this list)
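Neither term is unique to LinkNYC, but a tiny, purely hypothetical sketch may make the two ideas more concrete. Every daypart boundary, audience segment and ad name below is invented for illustration and is not drawn from the article:

```python
# Hypothetical dayparting + hypertargeting for a digital kiosk (all values invented).
from datetime import datetime

DAYPARTS = [                       # dayparting: split the day into advertising segments
    (6, 10, "morning_commute"),
    (10, 16, "midday"),
    (16, 20, "evening_commute"),
    (20, 24, "late_night"),
]

CREATIVES = {                      # hypertargeting: match a creative to a narrow segment
    ("morning_commute", "coffee_drinkers"): "Cold brew ad",
    ("evening_commute", "sports_fans"): "Game-night snacks ad",
    ("late_night", "tourists"): "24-hour diner ad",
}

def pick_creative(now: datetime, audience_segment: str, default: str = "House ad") -> str:
    """Choose an ad based on the current daypart and the targeted audience segment."""
    daypart = next((name for start, end, name in DAYPARTS if start <= now.hour < end),
                   "overnight")
    return CREATIVES.get((daypart, audience_segment), default)

print(pick_creative(datetime(2016, 2, 16, 8, 30), "coffee_drinkers"))  # -> Cold brew ad
```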

Barry Frey, the president and CEO of the Digital Place-based Advertising Association, was also along for the tour of the new kiosks on Third Avenue. He was “impressed” by the capability it will offer advertisers to “co-locate their signs and fund services to the public” for such services as free WiFi and long-distance calling.

As to the brand marketers:

  • MillerCoors is using information at each kiosk location from Shazam, for the company’s “Sounds of the Street” ad campaign which presents “lists of the most-Shazammed tunes in the area”. (For more about Shazam, see the December 10, 2014 Subway Fold post entitled Is Big Data Calling and Calculating the Tune in Today’s Global Music Market?)
  • Poland Spring is now running a 5-week campaign featuring a digital ad (as seen in the third photo above). It relies upon “the brand’s popularity in New York”.

Capturing and Interpreting the Network’s Data

Link.NYC WiFi Kiosk 1, Image by Alan Rothman

Thus far, LinkNYC has been “a little vague” about its methods for capturing the network’s data, but has said that it will maintain the privacy of all consumers’ information. One source has indicated that LinkNYC will collect, among other points, “age, gender and behavioral data”. As well, the kiosks can track mobile devices within their WiFi radius, which varies from 150 to 400 feet, to ascertain the length of time a user stops by. Third-party data is also being added to “round out the information”.³
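The article does not explain how that dwell time would actually be computed. One plausible approach, sketched below with entirely made-up sightings and an assumed session-gap threshold, is to group the timestamps at which an anonymized device is seen near a kiosk into visits and measure each visit’s length:

```python
# Hypothetical sketch: estimating how long a device lingers near a kiosk from
# periodic WiFi "sightings". The device IDs, timestamps and threshold are invented.
from itertools import groupby

# (anonymized_device_id, seconds_since_midnight) sightings near one kiosk
sightings = [
    ("device_a", 36000), ("device_a", 36030), ("device_a", 36065),
    ("device_a", 36100), ("device_b", 36500), ("device_b", 36510),
]

SESSION_GAP = 300  # seconds of silence that ends one visit (assumed)

def dwell_times(events):
    """Return {device_id: [visit_length_in_seconds, ...]}."""
    result = {}
    for device, group in groupby(sorted(events), key=lambda e: e[0]):
        times = [t for _, t in group]
        visits, start, prev = [], times[0], times[0]
        for t in times[1:]:
            if t - prev > SESSION_GAP:   # a long gap means a new visit began
                visits.append(prev - start)
                start = t
            prev = t
        visits.append(prev - start)
        result[device] = visits
    return result

print(dwell_times(sightings))  # {'device_a': [100], 'device_b': [10]}
```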

Some industry experts’ expectations of the value and applications of this data include:

  • Helma Larkin, the CEO of Posterscope, a New York-based firm specializing in “out-of-home communications (OOH)”, believes that LinkNYC is an entirely “new out-of-home medium”. This is because the data it will generate “will enhance the media itself”. The LinkNYC initiative presents an opportunity to build this network “from the ground up”. It will also create an opportunity to develop data about its own audience.
  • David Krupp of Kinetic thinks that data that will be generated will be quite meaningful insofar as producing a “more hypertargeted connection to consumers”.

Other US and International Smart City Initiatives

Currently in the US, there is nothing else yet approaching the scale of LinkNYC. Nonetheless, Kansas City is now developing a “smaller advertiser-supported  network of kiosks” with wireless support from Sprint. Other cities are also working on smart city projects. Civiq is now in discussions with about 20 of them.

Internationally, Rio de Janeiro is working on a smart city program in conjunction with the 2016 Olympics. This project is being supported by Renato Lucio de Castro, a consultant on smart city projects. (Here is a brief video of him describing this undertaking.)

A key challenge facing all smart city projects is finding officials in local governments who likewise have the enthusiasm for efforts like LinkNYC. Michael Lake, the CEO of Leading Cities, a firm that helps cities with smart city projects, believes that programs such as LinkNYC will “continue to catch on” because of the additional security benefits they provide and the revenues they can generate.

My Questions

  • Should domestic and international smart cities cooperate to share their resources, know-how and experience for each other’s mutual benefit? Might this in some small way help to promote urban growth and development on a more cooperative global scale?
  • Should LinkNYC also consider offering civic support services such as voter registration or transportation scheduling apps as well as charitable functions where pedestrians can donate to local causes?
  • Should LinkNYC add some augmented reality capabilities to enhance the data capabilities and displays of the kiosks? (See these 10 Subway Fold posts covering a range of news and trends on this technology.)

February 19, 2017 Update:  For the latest status report on LinkNYC nearly a year after this post was first uploaded, please see After Controversy, LinkNYC Finds Its Niche, by Gerald Schifman, on CrainsNewYork.com, dated February 15, 2017.


1.   While Googling “smart cities” might nearly cause the Earth to shift off its axis with its resulting 70 million hits, I suggest reading a very informative and timely feature from the December 11, 2015 edition of The Wall Street Journal entitled As World Crowds In, Cities Become Digital Laboratories, by Robert Lee Hotz.

2.   Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia (W. W. Norton & Company, 2013), by Anthony M. Townsend, is a deep and wide book-length exploration of how big data and analytics are being deployed in large urban areas by local governments and independent citizens. I very highly recommend reading this fascinating exploration of the nearly limitless possibilities for smart cities.

3.   See, for example, How Publishers Utilize Big Data for Audience Segmentation, by Arvid Tchivzhel, posted on Datasciencecentral.com on November 17, 2015


These items just in from the Pop Culture Department: It would seem nearly impossible to film an entire movie thriller about a series of events centered around a public phone, but a movie called – not so surprisingly – Phone Booth managed to do this quite effectively in 2002. It starred Colin Farrell, Kiefer Sutherland and Forest Whitaker. Imho, it is still worth seeing.

Furthermore, speaking of Kiefer Sutherland, Fox announced on January 15, 2016 that it will be making 24: Legacy, a complete reboot of the 24 franchise, this time without him playing Jack Bauer. Rather, they have cast Corey Hawkins in the lead role. Hawkins can now be seen doing an excellent job playing Heath on season 6 of The Walking Dead. Watch out Grimes Gang, here comes Negan!!


New IBM Watson and Medtronic App Anticipates Low Blood Glucose Levels for People with Diabetes

"Glucose: Ball-and-Stick Model", Image by Siyavula Education

“Glucose: Ball-and-Stick Model”, Image by Siyavula Education

Can a new app jointly developed by IBM with its Watson AI technology in partnership with the medical device maker Medtronic provide a new form of  support for people with diabetes by safely avoiding low blood glucose (BG) levels (called hypoglycemia), in advance? If so, and assuming regulatory approval, this technology could potentially be a very significant boon to the care of this disease.

Basics of Managing Blood Glucose Levels

The daily management of diabetes involves a diverse mix of factors including, but not limited to, regulating insulin dosages, checking BG readings, measuring carbohydrate intakes at meals, gauging activity and exercise levels, and controlling stress levels. There is no perfect algorithm for doing this, as everyone with this medical condition is different and each person’s body reacts in its own way as they try to balance all of these factors while striving to maintain healthy short- and long-term control of BG levels.

Diabetes care today operates in a very data-driven environment. BG levels, expressed numerically, can be checked on a hand-held meter with test strips using a single drop of blood, or with a continuous glucose monitoring system (CGM). The latter consists of a thumb drive-size sensor attached with temporary adhesive to the skin and a needle attached to this unit inserted just below the skin. This system provides patients with frequent real-time readings of their BG levels, and whether they are trending up or down, so they can adjust their medication accordingly. That is, for A grams of carbs and B amounts of physical activity and other contributing factors, C amount of insulin can be calculated and dispensed.
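As a purely illustrative example of the kind of arithmetic involved, a simplified bolus estimate adds a carbohydrate dose to a correction dose. The carb ratio, correction factor and target below are invented example values, every patient’s parameters differ, and nothing here is medical advice:

```python
# Simplified, illustrative insulin bolus arithmetic only -- NOT medical advice.
# The carb ratio, correction factor and target are invented example values;
# real parameters are individualized by a patient's care team.

def estimate_bolus(carbs_g: float, current_bg: float,
                   carb_ratio: float = 10.0,        # grams of carbs covered per unit (assumed)
                   correction_factor: float = 50.0, # mg/dL lowered per unit (assumed)
                   target_bg: float = 110.0) -> float:
    carb_units = carbs_g / carb_ratio
    correction_units = max(0.0, (current_bg - target_bg) / correction_factor)
    return round(carb_units + correction_units, 1)

# 60 g of carbs at a reading of 180 mg/dL -> 6.0 + 1.4 = 7.4 units in this toy example
print(estimate_bolus(carbs_g=60, current_bg=180))
```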

Insulin itself can be administered either manually by injection or by an insulin pump (also with a subcutaneously inserted needle). The latter of these consists of two devices: the pump itself, a small enclosed device (about the size of a pager) with an infusion needle placed under the patient’s skin, and a Bluetooth-enabled handheld device (that looks just like a smartphone) used to adjust the pump’s dosage and timing of insulin released. Some pump manufacturers are also bringing to market their latest generation of CGMs that integrate their data and command functions with their users’ smartphones.

(The links in the previous two paragraphs are to Wikipedia pages with detailed descriptions and photos of CGMs and insulin pumps. See also this June 27, 2015 Subway Fold post entitled Medical Researchers are Developing a “Smart Insulin Patch” for another glucose sensing and insulin dispensing system under development.)

The trickiest part of all of these systems is maintaining levels of BG throughout each day that are within an acceptable range of values. High levels can result in a host of difficult symptoms. Hypoglycemic low levels can quickly become serious, manifesting as dizziness, confusion and other symptoms, and can ultimately lead to unconsciousness in extreme cases if not treated immediately.

New App for Predicting and Preventing Low Blood Glucose Levels

Taking this challenge to an entirely new level, at last week’s annual Consumer Electronics Show (CES) held in Las Vegas, IBM and Medtronic jointly announced their new app to predict hypoglycemic events in advance. The app is built upon Watson’s significant strengths in artificial intelligence (AI) and machine learning to sift through and intuit patterns in large volumes of data, in this case generated from Medtronic’s user base for their CGMs and insulin pumps. This story was covered in a most interesting article posted in The Washington Post on January 6, 2016 entitled IBM Extends Health Care Bet With Under Armour, Medtronic by Jing Cao and Michelle Fay Cortez. I will summarize and annotate this report and then pose some of my own questions.

The announcement and demo of this new app on January 6, 2016 at CES showed the process by which a patient’s data can be collected from their Medtronic devices and then combined with additional information from their wearable activity trackers and food intake. Next, all of this information is processed through Watson in order to “provide feedback” for the patient to “manage their diabetes”.

Present and Future Plans for The App and This Approach

Making the announcement were Virginia Rometty, Chairman, President and CEO of IBM, and Omar Ishrak, Chairman and CEO of Medtronic. The introduction of this technology is expected in the summer of 2016. It still needs to be submitted to the US government’s regulatory review process.

Ms. Rometty said that the capability to predict low BG events, in some cases up to three hours before they occur, is a “breakthrough”. She described Watson as “cognitive computing”, using algorithms to generate “prescriptive and predictive analysis”. The company is currently making a major strategic move into finding and facilitating applications and partners for Watson in the health care industry. (These eight Subway Fold posts cover various other systems and developments using Watson.)

Hooman Hakami, Executive VP and President of the Diabetes Group at Medtronic, described how his company is working to “anticipate” how the behavior of each person with diabetes affects their blood glucose levels. With this information they can then “make choices to improve their health”. Here is the page from the company’s website about their partnership with IBM to work together on treating diabetes.

In the future, both companies are aiming to “give patients real-time information” on how their individual data is influencing their BG levels and “provide coaching” to assist them in making adjustments to keep their readings in a “healthy range”. In one scenario, patients might receive a text message that “they have an 85% chance of developing low blood sugar within an hour”. This will also include a recommendation to watch their readings and eat something to raise their BG back up to a safer level.
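Neither company has published how Watson’s prediction would be surfaced to patients. Conceptually, though, the final alerting step could be as simple as comparing a predicted probability against a notification threshold; the probability, threshold and wording below are illustrative assumptions only, not the actual Watson/Medtronic design:

```python
# Hypothetical sketch of the alerting step described above. The predicted
# probability would come from the (unpublished) prediction pipeline; here it
# is simply a hard-coded example value.
from typing import Optional

ALERT_THRESHOLD = 0.80   # assumed cutoff for sending a warning

def hypoglycemia_alert(probability: float, horizon_minutes: int = 60) -> Optional[str]:
    """Return a patient-facing message if the predicted risk crosses the threshold."""
    if probability >= ALERT_THRESHOLD:
        return (f"You have an estimated {probability:.0%} chance of low blood sugar "
                f"within {horizon_minutes} minutes. Check your readings and consider "
                f"eating something to raise your glucose.")
    return None

print(hypoglycemia_alert(0.85))  # mirrors the 85%-within-an-hour scenario in the article
```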

My Questions

  • Will this make patients more or less diligent in their daily care? Is there potential for patients to possibly assume less responsibility for their care if they sense that the management of their diabetes is running on a form of remote control? Alternatively, might this result in too much information for patients to manage?
  • What would be the possible results if this app is ever engineered to work in conjunction with the artificial pancreas project being led by Ed Damiano and his group of developers in Boston?
  • If this app receives regulatory approval and gains wide acceptance among people with diabetes, what does this medical ecosystem look like in the future for patients, doctors, medical insurance providers, regulatory agencies, and medical system entrepreneurs? How might it positively or negatively affect the market for insulin pumps and CGMs?
  • Should IBM and Medtronic consider making their app available on an open-source basis to enable other individuals and groups of developers to improve it as well as develop additional new apps?
  • Whether and how will insurance policies for both patients and manufacturers deal with any potential liability that may arise if the app causes some unforeseen adverse effects? Will medical insurance even cover, encourage or discourage the use of such an app?
  • Will the data generated by the app ever be used in any unforeseen ways that could affect patients’ privacy? Would patients using the new app have to relinquish all rights and interests to their own BG data?
  • What other medical conditions might benefit from a similar type of real-time data, feedback and recommendation system?

Mind Over Subject Matter: Researchers Develop A Better Understanding of How Human Brains Manage So Much Information

"Synapse", Image by Allan Ajifo

“Synapse”, Image by Allan Ajifo

There is an old joke that goes something like this: What do you get for the man who has everything, and then where would he put it all?¹ This often comes to mind whenever I experience the sensation of information overload caused by too much content presented from too many sources. Especially since the advent of the Web, almost everyone I know has had the same overwhelming experience whenever the amount of information they are inundated with every day seems increasingly difficult to parse, comprehend and retain.

The multitudes of screens, platforms, websites, newsfeeds, social media posts, emails, tweets, blogs, Post-Its, newsletters, videos, print publications of all types, just to name a few, are relentlessly updated and uploaded globally and 24/7. Nonetheless, for each of us on an individualized basis, a good deal of the substance conveyed by this quantum of bits and ocean of ink somehow still manages to stick somewhere in our brains.

So, how does the human brain accomplish this?

Less Than 1% of the Data

A recent advancement covered in a fascinating report on Phys.org on December 15, 2015 entitled Researchers Demonstrate How the Brain Can Handle So Much Data, by Tara La Bouff, describes the latest research into how this happens. I will summarize and annotate this, and pose a few organic material-based questions of my own.

To begin, people learn to identify objects and variations of them rather quickly. For example, a letter of the alphabet, no matter the font, or an individual, regardless of their clothing and grooming, is always recognizable. We can also identify objects even if the view of them is quite limited. This neurological processing proceeds reliably and accurately moment-by-moment during our lives.

A recent discovery by a team of researchers at Georgia Institute of Technology (Georgia Tech)² found that we can make such visual categorizations with less than 1% of the original data. Furthermore, they created and validated an algorithm “to explain human learning”. Their results can also be applied to “machine learning³, data analysis and computer vision⁴”. The team’s full findings were published in the September 28, 2015 issue of Neural Computation in an article entitled Visual Categorization with Random Projection by Rosa I. Arriaga, David Rutter, Maya Cakmak and Santosh S. Vempala. (Dr. Cakmak is from the University of Washington, while the other three are from Georgia Tech.)

Dr. Vempala believes that the reason why humans can quickly make sense of the very complex and robust world is because, as he observes, “It’s a computational problem”. His colleagues and team members examined “human performance in ‘random projection tests’”. These measure the degree to which we learn to identify an object. In their work, they showed their test subjects “original, abstract images” and then asked them if they could identify them once again, although using a much smaller segment of the image. This led to one of their two principal discoveries: the test subjects required only 0.15% of the original data to repeat their identifications.
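For readers curious about what “random projection” means computationally, here is a minimal sketch using NumPy with synthetic data standing in for the study’s abstract images (it is not the researchers’ actual code or data). Each high-dimensional “image” is multiplied by one fixed random matrix, compressing it to 15 numbers, or 0.15% of the original 10,000 dimensions, while roughly preserving the distances that keep the two categories distinguishable:

```python
# Minimal random-projection sketch with synthetic data (not the study's images).
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "categories" of 10,000-dimensional images, 50 samples each.
dim, n = 10_000, 50
class_a = rng.normal(loc=0.0, scale=1.0, size=(n, dim))
class_b = rng.normal(loc=1.0, scale=1.0, size=(n, dim))

# Project to just 15 dimensions (0.15% of the original) with one fixed random matrix.
k = 15
projection = rng.normal(size=(dim, k)) / np.sqrt(k)
low_a, low_b = class_a @ projection, class_b @ projection

# A crude nearest-centroid test shows the categories remain largely separable.
centroid_a, centroid_b = low_a.mean(axis=0), low_b.mean(axis=0)

def classify(x):
    return "A" if np.linalg.norm(x - centroid_a) < np.linalg.norm(x - centroid_b) else "B"

accuracy = (sum(classify(x) == "A" for x in low_a) +
            sum(classify(x) == "B" for x in low_b)) / (2 * n)
print(f"Nearest-centroid accuracy after projection: {accuracy:.0%}")
```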

Algorithmic Agility

In the next phase of their work, the researchers prepared and applied an algorithm to enable computers (running a simple neural network, software capable of imitating very basic human learning characteristics), to undertake the same tasks. These digital counterparts “performed as well as humans”. In turn, the results of this research provided new insight into human learning.

The team’s objective was to devise a “mathematical definition” of typical and non-typical inputs. Next, they wanted to “predict which data” would be the most challenging for the test subjects and computers to learn. As it turned out, they each performed with nearly equal results. Moreover, these results proved that which data “will be the hardest to learn over time” can be predicted.

In testing their theory, the team prepared 3 different groups of abstract images of merely 150 pixels each. (See the Phys.org link above containing these images.) Next, they drew up “small sketches” of them. The full image was shown to the test subjects for 10 seconds. Next they were shown 16 of the random sketches. Dr. Vempala of the team was “surprised by how close the performance was” of the humans and the neural network.

While the researchers cannot yet say with certainty that “random projection”, such as was demonstrated in their work, happens within our brains, the results lend support that it might be a “plausible explanation” for this phenomenon.

My Questions

  • Might this research have any implications and/or applications in virtual reality and augmented reality systems that rely on both human vision and processing large quantities of data to generate their virtual imagery? (These 13 Subway Fold posts cover a wide range of trends and applications in VR and AR.)
  • Might this research also have any implications and/or applications in medical imaging and interpretation since this science also relies on visual recognition and continual learning?
  • What other markets, professions, universities and consultancies might be able to turn these findings into new entrepreneurial and scientific opportunities?

 


1.  I was unable to definitively source this online but I recall that I may have heard it from the comedian Steven Wright. Please let me know if you are aware of its origin. 

2.  For the work of Georgia Tech’s startup incubator see the Subway Fold post entitled Flashpoint Presents Its “Demo Day” in New York on April 16, 2015.

3.   These six Subway Fold posts cover a range of trends and developments in machine learning.

4.   Computer vision was recently taken up in an October 14, 2015 Subway Fold post entitled Visionary Developments: Bionic Eyes and Mechanized Rides Derived from Dragonflies.

New Job De-/script/-ions for Attorneys with Coding and Tech Business Skills

"CODE_n SPACES Pattern", Image by CODE_n

“CODE_n SPACES Pattern”, Image by CODE_n

The conventional wisdom among lawyers and legal educators has long been that having a second related degree or skill from another field can be helpful in finding an appropriate career path. That is, a law degree plus, among others, an MBA, engineering or nursing degree can be quite helpful in finding an area of specialization that leverages both fields. There are synergies and advantages to be shared by both the lawyers and their clients in these circumstances.

Recently, this something extra has expanded to include very timely applied tech and tech business skills. Two recently reported developments highlight this important emerging trend. One involves a new generation of attorneys who have a depth of coding skills, and the other is an advanced law degree to prepare them for positions in the tech and entrepreneurial marketplaces. Let’s have a look at them individually and then at what they might mean together for legal professionals in a rapidly changing world. I will summarize and annotate both of them, and compile a few plain text questions of my own.

(These 26 other Subway Fold posts in the category of Law Practice and Legal Education have tracked many related developments.)

Legal Codes and Lawyers Who Code

1.  Associates

The first article features four young lawyers who have found productive ways to apply their coding skills at their law offices. This story appeared in the November 13, 2015 edition of The Recorder (subscription required) entitled Lawyers Who Code Hack New Career Path by Patience Haggin. I highly recommend reading it in its entirety.

During an interview at Apple for a secondment (a form of temporary arrangement where a lawyer from a firm will join the in-house legal department of a client)¹, a first-year lawyer named Canek Acosta was asked whether he knew how to use Excel. He “laughed – and got the job” at Apple. In addition to his law degree, he had majored in computer science and math as an undergraduate.

Earlier, as a law student at Michigan State University College of Law, he participated in LegalRnD – The Center for Legal Services Innovation, a program that teaches students to identify and solve “legal industry process bottlenecks”. The LegalRnD website lists and describes all eight courses in their curriculum. It has also sent out teams to legal hackathons. (See the March 24, 2015 Subway Fold post entitled “Hackcess to Justice” Legal Hackathons in 2014 and 2015 for details on these events.)

Using his combination of skills, Acosta wrote scripts that automated certain tasks, including budget spreadsheets, for Apple’s legal department. As a result, some new efficiencies were achieved. Acosta believes that his experience at Apple was helpful in subsequently getting hired at the law firm of O’Melveny & Myers as an associate.
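The article does not say what Acosta’s scripts actually did. Just to give a flavor of that kind of legal-department automation, here is a small, entirely hypothetical example that rolls the line items in a CSV export of matter expenses up into per-matter totals; the file name and column names are invented:

```python
# Hypothetical sketch of spreadsheet-style automation for a legal department.
# The file name, column names and any figures are invented for illustration.
import csv
from collections import defaultdict

def summarize_matter_spend(path: str) -> dict:
    """Total the 'amount' column per 'matter' from a CSV export of expense line items."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["matter"]] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    for matter, total in sorted(summarize_matter_spend("legal_spend.csv").items()):
        print(f"{matter}: ${total:,.2f}")
```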

While his experience is currently uncommon, law firms are expected to increasingly recruit law students to become associates who have such contemporary skills in addition to their legal education. Furthermore, some of these students are sidestepping traditional roles in law practice and finding opportunities in law practice management and other non-legal staff roles that require a conflation of “legal analysis and hacking skills”.

Acosta further believes that a “hybrid lawyer-programmer” can locate the issues in law office operational workflows and then resolve them. Now at O’Melveny, in addition to his regular responsibilities as a litigation associate, he is being asked to use his programming ability to “automate tasks for the firm or a client matter”.

At the San Francisco office of Winston & Strawn, first-year associate Joseph Mornin has also made good use of his programming skills. While attending UC-Berkeley School of Law, he wrote a program to assist legal scholars in generating “permanent links when citing online sources”. He also authored a browser extension called Bestlaw that “adds features to Westlaw“, a major provider of online legal research services.

2.  Consultants and Project Managers

In Chicago, the law firm Seyfarth Shaw has a legal industry consulting subsidiary called SeyfarthLean. One of their associate legal solutions architects is Amani Smathers.  She believes that lawyers will have to be “T-shaped” whereby they will need to combine their “legal expertise” with other skills including “programming, or marketing, or project management“.² Although she is also a graduate of Michigan State University College of Law, instead of practicing law, she is on a team that provides consulting for clients on, among other things, data analytics. She believes that “legal hacking jobs” may provide alternatives to other attorneys not fully interested in more traditional forms of law practices.

Yet another Michigan State law graduate, Patrick Ellis, is working as a legal project manager at the Michigan law firm Honigman Miller Schwartz and Cohn. In this capacity, he uses his background in statistics to “develop estimates and pricing arrangements”. (Mr. Ellis was previously mentioned in a Subway Fold post on March 15, 2015, entitled Does Being on Law Review or Effective Blogging and Networking Provide Law Students with Better Employment Prospects?.)

A New and Unique LLM to be Offered Jointly by Cornell Law School and Cornell Tech

The second article concerned the announcement of a new 1-year, full-time Master of Laws program (which confers an “LLM” degree), to be offered jointly by Cornell Law School and Cornell Tech (a technology-focused graduate and research campus of Cornell in New York City). This LLM is intended to provide practicing attorneys and other graduates with specialized skills needed to support and to lead tech companies. In effect, the program combines elements of law, technology and entrepreneurship. This news was carried in a post on October 29, 2015 on The Cornell Daily Sun entitled Cornell Tech, Law School Launch New Degree Program by Annie Bui.

According to Cornell’s October 27, 2015 press release, students in this new program will be engaged in “developing products and other solutions to challenges posed by companies”. They will encounter real-world circumstances facing businesses and startups in today’s digital marketplace. This will further include studying the accompanying societal and policy implications.

The program is expected to launch in 2016. It will be relocated from a temporary site and then moved to the Cornell Tech campus on Roosevelt Island in NYC in 2017.

My Questions

  • What other types of changes, degrees and initiatives are needed for law schools to better prepare their graduates for practicing in the digital economy? For example, should basic coding principles be introduced in some classes such as first-year contracts to enable students to better handle matters involving Bitcoin and the blockchain when they graduate? (See these four Subway Fold posts on this rapidly expanding technology.)
  • Should Cornell Law School, as well as other law schools interested in instituting similar courses and degrees, consider offering them online? If not for full degree statuses, should these courses alternatively be accredited for Continuing Legal Education requirements?
  • Will or should the Cornell Law/Cornell Tech LLM syllabus offer the types of tech and tech business skills taught by the Michigan State’s LegalRnD program? What do each of these law schools’ programs discussed here possibly have to offer to each other? What unique advantage(s) might an attorney with an LLM also have if he or she can do some coding?
  • Are there any law offices out there that are starting to add an attorney’s tech skills and coding capabilities to their evaluation of potential job candidates? Are legal recruiters adding these criteria to job descriptions for the searches they are conducting?
  • Are there law offices out there that are beginning to take an attorney’s tech skills and/or coding contributions into account during annual performance reviews? If not, should they now consider adding them, and how should they be evaluated?

May 3, 2017 Update:  For a timely report on the evolution of new careers emerging in law practice for people with legal and technical training and experience, I highly recommend a new article published in the ABA Journal entitled Law Architects: New Legal Jobs Make Technology Part of the Career Path, by Jason Tashea, dated May 1, 2017.


1.  Here is an informative opinion about the ethical issues involved in secondment arrangements, issued by the Association of the Bar of the City of New York Committee on Professional and Judicial Ethics.

2.  I had an opportunity to hear Ms. Smathers give a very informative presentation about “T-shaped skills” at the Reinvent Law presentation held in New York in February 2014.

Semantic Scholar and BigDIVA: Two New Advanced Search Platforms Launched for Scientists and Historians

"The Chemistry of Inversin", Image by Raymond Bryson

“The Chemistry of Inversion”, Image by Raymond Bryson

As powerful, essential and ubiquitous as Google and its search engine peers are across the world right now, needs often arise in many fields and marketplaces for platforms that can perform much deeper and wider digital excavating. So it is that two new highly specialized search platforms have just come online specifically engineered, in these cases, for scientists and historians. Each is structurally and functionally quite different from the other but nonetheless is aimed at very specific professional user bases with advanced researching needs.

These new systems provide uniquely enhanced levels of context, understanding and visualization with their results. We recently looked at a very similar development in the legal professions in an August 18, 2015 Subway Fold post entitled New Startup’s Legal Research App is Driven by Watson’s AI Technology.

Let’s have a look at both of these latest innovations and their implications. To introduce them, I will summarize and annotate two articles about their introductions, and then I will pose some additional questions of my own.

Semantic Scholar Searches for New Knowledge in Scientific Papers

First, the Allen Institute for Artificial Intelligence (A2I) has just launched its new system called Semantic Scholar, freely accessible on the web. This event was covered on NewScientist.com in a fascinating article entitled AI Tool Scours All the Science on the Web to Find New Knowledge on November 2, 2015 by Mark Harris.

Semantic Scholar is supported by artificial intelligence (AI)¹ technology. It is automated to “read, digest and categorise findings” from approximately two million scientific papers published annually. Its main objective is to assist researchers with generating new ideas and “to identify previously overlooked connections and information”. Because of the overwhelming volume of scientific papers published each year, which no individual scientist could possibly ever read, it offers an original architecture and a high-speed means to mine all of this content.

Oren Etzioni, the director of A2I, termed Semantic Scholar a “scientist’s apprentice”, to assist them in evaluating developments in their fields. For example, a medical researcher could query it about drug interactions in a certain patient cohort having diabetes. Users can also pose their inquiries in natural language format.

Semantic Scholar operates by executing the following functions:

  • crawling the web in search of “publicly available scientific papers”
  • scanning them into its database
  • identifying citations and references that, in turn, are assessed to determine those that are the most “influential or controversial” (a toy illustration of this step follows the list)
  • extracting “key phrases” appearing in similar papers, and
  • indexing “the datasets and methods” used
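A2I has not published how Semantic Scholar actually scores citations, but the assessment step flagged above can be pictured with a toy stand-in: build a small citation graph and rank papers by how often they are cited, weighting each citation by how well-cited the citing paper is itself. The papers and links below are invented:

```python
# Toy citation-influence score over an invented citation graph. This is only a
# conceptual stand-in, not Semantic Scholar's actual (unpublished) method.
from collections import defaultdict

# paper -> list of papers it cites (all identifiers are made up)
citations = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_c", "paper_d"],
    "paper_c": ["paper_d"],
    "paper_d": [],
}

def influence_scores(graph: dict) -> dict:
    """Score = citations received, each weighted by 1 + the citing paper's own in-degree."""
    in_degree = defaultdict(int)
    for cited_list in graph.values():
        for cited in cited_list:
            in_degree[cited] += 1

    scores = defaultdict(float)
    for citing, cited_list in graph.items():
        for cited in cited_list:
            scores[cited] += 1 + in_degree[citing]
    return dict(scores)

print(influence_scores(citations))
# paper_d is cited by paper_b (itself uncited) and by paper_c (itself cited twice),
# so it ends up with the highest score in this toy example.
```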

A2I is not alone in their objectives. Other similar initiatives include:

  • IBM’s Watson², and
  • DARPA³

Semantic Scholar will gradually be applied to other fields such as “biology, physics and the remaining hard sciences”.

BigDIVA Searches and Visualizes 1,500 Years of History

The second innovative search platform is called Big Data Infrastructure Visualization Application (BigDIVA). The details about its development, operation and goals were covered in a most interesting report posted online on  NC State News on October 12, 2015 entitled Online Tool Aims to Help Researchers Sift Through 15 Centuries of Data by Matt Shipman.

This is a joint project by the digital humanities scholars at NC State University and Texas A&M University. Its objective is to assist researchers in, among other fields, literature, religion, art and world history. This is done by increasing the speed and accuracy of searching through “hundreds of thousands of archives and articles” covering 450 A.D. to the present. BigDIVA was formally rolled out at NC State on October 16, 2015.

BigDIVA presents users with an entirely new visual interface, enabling them to search and review “historical documents, images of art and artifacts, and any scholarship associated” with them. Search results, organized by categories of digital resources, are displayed in infographic format⁴. The linked NC State News article includes a photo of this dynamic looking interface.

This system is still undergoing beta testing and further refinement by its development team. Expansion of its resources on additional historical periods is expected to be an ongoing process. Current plans are to make this system available on a subscription basis to libraries and universities.

My Questions

  • Might the IBM Watson, Semantic Scholar, DARPA and BigDIVA development teams benefit from sharing design and technical resources? Would scientists, doctors, scholars and others benefit from multi-disciplinary teams working together on future upgrades and perhaps even new platforms and interface standards?
  • What other professional, academic, scientific, commercial, entertainment and governmental fields would benefit from these highly specialized search platforms?
  • Would Google, Bing, Yahoo and other commercial search engines benefit from participating with the developers in these projects?
  • Would proprietary enterprise search vendors likewise benefit from similar joint ventures with the types of teams described above?
  • What entrepreneurial opportunities might arise for vendors, developers, designers and consultants who could provide fuller insight and support for developing customized search platforms?

 


October 19, 2017 Update: For the latest progress and applications of the Semantic Scholar system, see the latest report in a new post on the Economist.com entitled A Better Way to Search Through Scientific Papers, dated October 19, 2017.


1.  These 11 Subway Fold posts cover various AI applications and developments.

2.  These seven Subway Fold posts cover a range of IBM Watson applications and markets.

3.  A new history of DARPA written by Annie Jacobsen was recently published entitled The Pentagon’s Brain (Little, Brown and Company, 2015).

4.  See this January 30, 2015 Subway Fold post entitled Timely Resources for Studying and Producing Infographics on this topic.

Movie Review of “The Human Face of Big Data”

"Blue and Pink Fractal", Image by dev Moore

“Blue and Pink Fractal”, Image by dev Moore

What does big data look like, anyway?

To try to find out, I was very fortunate to have obtained a pass to see a screening of a most enlightening new documentary called The Human Face of Big Data. The event was held on October 20, 2015 at Civic Hall in the Flatiron District in New York.

The film’s executive producer, Rick Smolan, (@ricksmolan), first made some brief introductory remarks about his professional work and the film we were about to see. Among his many accomplishments as a photographer and writer, he was the originator and driving force behind the A Day in the Life series of books where teams of photographers were dispatched to take pictures of different countries for each volume in such places as, among others, the United States, Japan and Spain.

He also added a whole new meaning to having a hand in casting by explaining to the audience that he had recently taken a fall while trying out his son’s scooter and, hence, his right hand was in a cast.

As the lights were dimmed and the film began, someone sitting right in front of me did something that was also, quite literally, enlightening but clearly in the wrong place and at the wrong time by opening up a laptop with a large and very bright screen. This was very distracting so I quickly switched seats. In retrospect, doing so also had the unintentional effect of providing me with a metaphor for the film: From my new perspective in the auditorium, I was seeing a movie that was likewise providing me with a whole new perspective on this important subject.

This film proceeded to provide an engrossing and informative examination of what exactly is “big data”, how it is gathered and analyzed, and its relative virtues and drawbacks.¹ It accomplished all of this by addressing these angles with segments of detailed expositions intercut with interviews of leading experts. In his comments afterwards, Mr. Smolan described big data as becoming a form of “nervous system” currently threading out across our entire planet.

Other documentarians could learn much from his team’s efforts as they smartly surveyed the Big Dataverse while economically compressing their production into a very compact and efficient package. Rather than a paint by, well, numbers production with overly long technical excursions, they deftly brought their subject to life with some excellent composition and editing of a wealth of multimedia content.

All of the film’s topics, and the transitions between them, were appreciably evenhanded. Some segments specifically delved into how big data systems vacuum up this quantum of information and how it positively and negatively affects consumers and other demographic populations. Other passages raised troubling concerns about the loss of personal privacy in light of recent revelations concerning the electronic operations conducted by the government and the private sector.

I found the most compelling part of the film to be an interview with Dr. Eric Topol (@EricTopol), a leading proponent of digital medicine, using smart phones as a medical information platform, and empowering patients to take control of their own medical data.² He spoke about the significance of the massive quantities and online availability of medical data and what this transformation means to everyone. His optimism and insights about big data having a genuine impact upon the quality of life for people across the globe were representative of this movie’s measured balance between optimism and caution.

This movie’s overall impression analogously reminded me of the promotional sponges that my local grocery used to hand out.  When you returned home and later added a few drops of water to these very small, flat and dried out novelties, they quickly and voluminously expanded. So too, here in just a 52-minute film, Mr. Smolan and his team have assembled a far-reaching and compelling view of the rapidly expanding parsecs of big data. All the audience needed to access, comprehend and soak up all of this rich subject matter was an open mind to new ideas.

Mr. Smolan returned to the stage after the movie ended to graciously and enthusiastically answer questions from the audience. It was clear from the comments and questions that nearly everyone there, whether they were familiar or unfamiliar with big data, had greatly enjoyed this cinematic tour of this subject and its implications. The audience’s well-informed inquiries concerned the following topics:

  • the ethics and security of big data collection
  • the degree to which science fiction is now becoming science fact
  • the emergence and implications of virtual reality and augmented reality with respect to entertainment and the role of big data in these productions³
  • the effects and influences of big data in medicine, law and other professions
  • the applications of big data towards extending human lifespans

Mr. Smolan also mentioned that his film will be shown on PBS in 2016. When it becomes scheduled, I very highly recommend setting some time aside to view it in its entirety.

Big data’s many conduits, trends, policies and impacts relentlessly continue to extend their global grasp. The Human Face of Big Data delivers a fully realized and expertly produced means for comprehending and evaluating this crucial and unavoidable phenomenon. This documentary is a lot to absorb yet an apt (and indeed fully app-ed) place to start.

 


One of the premier online resources for anything and everything about movies is IMDB.com. It has just reached its 25th anniversary, which was celebrated in a post on VentureBeat.com on October 30, 2015, entitled 25 Years of IMDb, the World’s Biggest Online Movie Database by Paul Sawers.


1.  These 44 Subway Fold Posts covered many of the latest developments in different fields, marketplaces and professions in the category of Big Data and Analytics.

2.  See also this March 3, 2015 Subway Fold post reviewing Dr. Topol’s latest book, entitled Book Review of “The Patient Will See You Now”.

3.  These 11 Subway Fold Posts cover many of the latest developments in the arts, sciences, and media industries in the category of Virtual and Augmented Reality. For two of the latest examples, see an article from the October 20, 2015 edition of The New York Times entitled The Times Partners With Google on Virtual Reality Project by Ravi Somaiya, and an article on Fortune.com on September 27, 2015 entitled Oculus Teams Up with 20th Century Fox to Bring Virtual Reality to Movies by Michael Addady. (I’m just speculating here, but perhaps The Human Face of Big Data would be well-suited for VR formatting and audience immersion.)

NASA is Providing Support for Musical and Humanitarian Projects

"NASA - Endeavor 2", Image by NASA

“NASA – Endeavor 2”, Image by NASA

In two recent news stories, NASA has generated a world of good will and positive publicity about itself and its space exploration program. It would be an understatement to say their results have been both well-grounded and out of this world.

First, Canadian astronaut Chris Hadfield created a vast following for himself online when he uploaded a video onto YouTube of him singing David Bowie’s classic Space Oddity while on a mission on the International Space Station (ISS).¹ As reported on the October 7, 2015 CBS Evening News broadcast, Hadfield will be releasing an album of 12 songs he wrote and performed in space, today, October 9, 2015. He also previously wrote a best-selling book entitled An Astronaut’s Guide to Life on Earth: What Going to Space Taught Me About Ingenuity, Determination, and Being Prepared for Anything (Little, Brown and Company, 2013). I highly recommend checking out his video, book and Twitter account @Cmdr_Hadfield.

What a remarkably accomplished career in addition to his becoming an unofficial good will ambassador for NASA.

The second story, further enhancing the agency’s reputation, concerns a very positive program affecting many lives that was reported in a most interesting article on Wired.com on September 28, 2015 entitled How NASA Data Can Save Lives From Space by Issie Lapowsky. I will summarize and annotate it, and then pose some of my own terrestrial questions.

Agencies’ Partnership

According to NASA Administrator Charles Bolden, astronauts frequently look down at the Earth from space and realize that borders across the world are subjectively imposed by warfare or wealth. These dividing lines between nations seem to become less meaningful to them while they are in flight. Instead, the astronauts tend to look at the Earth and have a greater awareness of everyone’s responsibilities to each other. Moreover, they wonder what they can possibly do when they return to make some sort of meaningful difference on the ground.

Bolden recently shared this experience with an audience at the United States Agency for International Development (USAID) in Washington, DC, to explain the reasoning behind a decade-long partnership between NASA and USAID. (The latter is the US government agency responsible for the administration of US foreign aid.) At first, this would seem to be an unlikely joint operation between two government agencies that do not seem to have that much in common.

In fact, this combination provides “a unique perspective on the grave need that exists in so many places around the world”, and a special case where one agency sees it from space and the other one sees it on the ground.

They are joined together into a partnership known as SERVIR where NASA supplies “imagery, data, and analysis” to assist developing nations.  They help these countries with forecasting and dealing “with natural disasters and the effects of climate change”.

Partnership’s Results

Among others, SERVIR’s tools have produced the following representative results:

  • Predicting floods in Bangladesh, giving citizens a total of eight days’ notice in order to make preparations that will save lives. This reduced the number of lives lost to 17 during last year’s monsoon season, whereas previously it had been in the thousands.
  • Predicting forest fires in the Himalayas.
  • For Central America, NASA created a map of ocean chlorophyll concentration that assisted public officials in identifying and improving shellfish testing in order to deal with “micro-algae outbreaks” responsible for causing significant health issues.

SERVIR currently operates in 30 countries. As a part of their network, there are regional hubs working with “local partners to implement the tools”. Last week it opened such a hub in Asia’s Mekong region. Both NASA and USAID are hopeful that the number of such hubs will continue to grow.

Google is also assisting with “life saving information from satellite imagery”. They are doing this by applying artificial intelligence (AI)² capabilities to Google Earth. This project is still in its preliminary stages.

My Questions

  • Should SERVIR reach out to the space agencies and humanitarian organizations of other countries to explore similar types of humanitarian joint ventures?
  • Do the space agencies of other countries have similar partnerships with their own aid agencies?
  • Would SERVIR benefit from partnerships with other US government agencies? Similarly, would it benefit from partnering with other humanitarian non-governmental organizations (NGO)?
  • Would SERVIR be the correct organization to provide assistance in global environmental issues? Take for example the report on the October 8, 2015 CBS Evening News network broadcast of the story about the bleaching of coral reefs around the world.

 


1.  While Hadfield’s cover and Bowie’s original version of Space Oddity are most often associated in pop culture with space exploration, I would like to suggest another song that also captures this spirit and then truly electrifies it: Space Truckin’ by Deep Purple. This appeared on their Machine Head album, which will be remembered for all eternity because it included the iconic Smoke on the Water. Nonetheless, Space Truckin’ is, in my humble opinion, a far more propulsive tune than Space Oddity. Its infectious opening riff will instantly grab your attention while the rest of the song races away like a Saturn rocket reaching for escape velocity. Furthermore, the musicianship on this recording is extraordinary. Pay close attention to Ritchie Blackmore’s scorching lead guitar and Ian Paice’s thundering drums. Come on, let’s go space truckin’!

2. These eight Subway Fold posts cover AI from a number of different perspectives involving a series of different applications and markets.