Book review: Building the New Economy

by Lan (Vivien) Zou
15 September 2020





Building the New Economy is a work-in-progress book edited (and in large part authored) by Alex Pentland, Alexander Lipton, and Thomas Hardjono. It lays out a vision for a new economy, with data and AI at its heart, that is more resilient to crises.

Aims of the book

With major crises such as financial recessions, the COVID-19 pandemic and climate change affecting the world, the relationships between governments, individuals, companies, and societies are facing new challenges. This book presents possible solutions, from a technological perspective, for balancing personal privacy, social and governmental responsibility, business policy, and international collaboration. It provides a fundamental economic analysis, and suggests ways of deploying technologies such as AI and blockchain to reinvent areas like data mining, healthcare, financial systems, digital currency, and public policy.

Here we take a look at some of the chapters of the book.

Shared data: backbone of a new knowledge economy

This chapter explains how an efficient data economy can develop while also preserving privacy, trade secrets, and general cybersecurity.

The authors argue that the reason no transparent data market has emerged lies in the nature of the data itself: 1) data is non-fungible, which makes it very different from traditional production factors such as labour, oil and capital, 2) organisations typically need to purchase huge amounts of data to train algorithms and gain insights, 3) it is illegal to access personal data without prior authorisation, and 4) some organisations hold data that is so sensitive that they do not wish to grant direct access to third parties.

Trading data transparently and securely is therefore a problem: not all organisations can access data, and there are no standard terms of purchase.

To solve this problem, the authors suggest data exchanges: “platforms that have the permission to gather, create and aggregate data from many different sources, in order to allow third parties to gain knowledge from these data.” They highlight one such platform, “Open Algorithms” (OPAL), developed by the MIT Media Lab, Imperial College London, Orange, the World Economic Forum and Data-Pop Alliance. The objective of OPAL is to make data available for analysis in a way that does not violate personal privacy. This is achieved by: 1) keeping algorithms and data in the same environment (so that the data remains secure in its original repository), 2) only producing safe or aggregated answers, and 3) ensuring the data is always in an encrypted state.
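As a concrete illustration, here is a minimal sketch in Python of the first two mechanisms (all names are hypothetical; this is not the actual OPAL API): the algorithm travels to the repository, runs next to the data, and only an aggregated answer, suppressed when the cohort is too small, travels back.

```python
# Sketch of the "open algorithms" pattern: the query travels to the data,
# and only a safe, aggregated answer travels back. Hypothetical names; not
# the real OPAL API.

MIN_COHORT = 10  # suppress answers computed over fewer records than this


class DataRepository:
    """Holds raw records; raw rows never leave this object."""

    def __init__(self, records):
        self._records = records  # kept internal to the repository

    def run(self, algorithm):
        """Execute an algorithm next to the data; return aggregates only."""
        result, n = algorithm(self._records)
        return result if n >= MIN_COHORT else None  # safe default


def average_age(records):
    """Example algorithm: returns an aggregate and the cohort size."""
    ages = [r["age"] for r in records if "age" in r]
    return (sum(ages) / len(ages) if ages else None), len(ages)


repo = DataRepository([{"age": 34}, {"age": 51}, {"age": 29}])
print(repo.run(average_age))  # -> None: a cohort of 3 is below MIN_COHORT
```

Note that the caller never sees individual records: with only three participants the query falls back to the safe answer, None.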

The authors believe that the implementation of such data exchanges will open up new opportunities for individuals and organisations.

Health IT: algorithms, privacy, and data

This chapter presents a framework for deploying new IT infrastructure to handle the various aspects of health-related data. The authors explore the open algorithms paradigm and couple it with strategies for confidential computing, which is vital for preserving the privacy of individuals. They consider the following: 1) open algorithms for minimising data movement and loss, 2) federated consent and authorisation across institutions, and 3) data protection in collaborative computations.

The authors propose a privacy-centric data architecture. They stress that this should focus on sharing insights by design (rather than exporting data, which is currently the most prevalent method). Particular attention should also be paid to the quality and provenance of data, and data protection during computations and storage.

Relating to open algorithms, the authors suggest the following fundamental principles: 1) moving the algorithm into the data repository rather than the other way around, 2) never exporting data from its repository, 3) having algorithms studied and vetted by domain experts, 4) defaulting to safe answers, 5) keeping data in an encrypted state, through data-at-rest protection and privacy-preserving computation schemes during processing, and 6) using a decentralised data architecture.
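The sketch below picks out principles 3 and 5 (Python; the registry, key handling and record format are our own assumptions). Only algorithms registered as vetted may run, and the data sits encrypted at rest; a real deployment would additionally use privacy-preserving computation schemes, such as secure enclaves or homomorphic encryption, so that plaintext is never exposed even in memory.

```python
# Sketch of principles 3 and 5: only vetted algorithms may run, and records
# stay encrypted at rest. Hypothetical names throughout.
import json
from cryptography.fernet import Fernet  # pip install cryptography

VETTED = {}  # registry of algorithms approved by domain experts


def vetted(fn):
    """Decorator standing in for expert review and approval."""
    VETTED[fn.__name__] = fn
    return fn


class EncryptedStore:
    def __init__(self, records):
        self._key = Fernet.generate_key()  # in practice, a managed key
        self._blob = Fernet(self._key).encrypt(json.dumps(records).encode())

    def run(self, name):
        if name not in VETTED:
            raise PermissionError(f"algorithm {name!r} has not been vetted")
        # Decrypted only for the duration of the computation; a true
        # privacy-preserving scheme would avoid even this step.
        records = json.loads(Fernet(self._key).decrypt(self._blob))
        return VETTED[name](records)  # the aggregate leaves; plaintext does not


@vetted
def count_diabetes(records):
    return sum(1 for r in records if r.get("diagnosis") == "diabetes")


store = EncryptedStore([{"diagnosis": "diabetes"}, {"diagnosis": "asthma"}])
print(store.run("count_diabetes"))  # -> 1
```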

The authors provide a nice summary of the Precision Medicine Initiative (PMI), which was announced by President Obama in 2015. Precision medicine, as opposed to the current “one-size-fits-all” approach, aims to treat patients individually based on their unique characteristics. The PMI sought to build a cohort of at least one million participants between 2016 and 2020 for the purpose of creating a resource for researchers looking to understand the many aspects that influence health and disease. Participants provide DNA samples, share their records, and take part in mobile health data-gathering activities to collect geospatial and environmental data. The key here is ensuring that all of this data is secure. Rolling this kind of system out more widely would be a challenge, with several ethical, legal and social issues coming into play.

Stablecoins, digital currency, and the future of money

In this chapter, the authors provide a historical summary of stablecoins and their adoption. They review terminology surrounding cryptocurrency and examine the disruptive potential of stablecoins. The authors propose a novel definition of stablecoins, along with use cases and economic incentives for creating them.

Stablecoins have been put forward as a novel form of currency that is technology-neutral, faster, more accessible, and amenable to anti-money-laundering controls. The concept of an alternative currency has been around for a while, one of the more successful examples being the WIR currency (‘CHW’) issued by the Swiss WIR Bank, which the authors briefly summarise in the book. They also review Clayton Christensen’s Disruptive Innovation Theory, and use it to consider four examples: Libra, JP Morgan Coin, TradeCoin, and Tether.

The novel definition that the authors propose is as follows: a stablecoin is a digital unit of value with three properties: 1) it is not a form of any specific currency, 2) it can be used without any direct interaction with the issuer, and 3) it is tradable on a secondary market and has low price volatility in terms of a target quote currency. The authors further classify stablecoins into three groups: 1) claim-based, 2) good-faith-based, and 3) technology-based.
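As a toy rendering of this definition (Python; the field names and the volatility threshold are our own assumptions, not the authors’), property 3 can be checked as the standard deviation of daily log returns of the coin’s price quoted in the target currency:

```python
# Toy encoding of the three defining properties; hypothetical names and
# threshold. Property 3 is measured as the volatility of daily log returns.
import math
from dataclasses import dataclass
from enum import Enum


class StablecoinClass(Enum):
    """The authors' proposed classification."""
    CLAIM_BASED = "claim-based"
    GOOD_FAITH_BASED = "good-faith-based"
    TECHNOLOGY_BASED = "technology-based"


@dataclass
class DigitalUnit:
    is_specific_currency: bool         # property 1: must be False
    issuer_interaction_required: bool  # property 2: must be False
    daily_prices: list                 # quotes in the target currency, e.g. USD


def daily_volatility(prices):
    """Standard deviation of daily log returns."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    return math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))


def is_stablecoin(unit, vol_threshold=0.005):
    return (not unit.is_specific_currency
            and not unit.issuer_interaction_required
            and daily_volatility(unit.daily_prices) < vol_threshold)


token = DigitalUnit(False, False, [1.000, 1.001, 0.999, 1.000, 1.002])
print(is_stablecoin(token))  # -> True for this low-volatility series
```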

The authors conclude that it is vital to understand the history of stablecoins, and why they came about, in order to understand what problems they could solve now and in the future. They believe that stablecoins have the potential to fundamentally change the financial system, but that it is difficult to predict whether they will displace or complement existing systems.

Interoperability of distributed systems

To explore distributed systems, the authors draw comparisons between the development of blockchain and that of the internet. Three of the original goals of the internet are beneficial to consider when developing blockchain: 1) survivability, 2) support for a variety of service types, and 3) support for a variety of networks. In developing their design framework for an interoperable distributed architecture, the authors consider each of these three aspects in the context of blockchain architectures.

The authors note that one important lesson learned from the development of the internet was that interoperability is key to survivability. Thus, interoperability is core to the entire value-proposition of blockchain technology, and interoperability across blockchain systems must be a requirement.

To cover the goal of variety, the authors propose that the minimum requirements for blockchain systems should be: 1) a common standardised transaction format and syntax that is understood by all blockchain systems regardless of their respective technological implementations, and 2) a common standardised minimal operations set that is implemented by all blockchain systems regardless of their technological choices.
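A minimal sketch of what these two requirements might look like in practice (Python; the book does not prescribe a concrete schema, so every field and method name here is a hypothetical illustration):

```python
# Sketch of the two minimum requirements: a common transaction format that
# any chain can parse, and a minimal operations set every chain implements.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass(frozen=True)
class StandardTransaction:
    """Implementation-neutral transaction syntax shared by all chains."""
    tx_id: str       # globally unique identifier
    sender: str
    receiver: str
    amount: int      # value in smallest indivisible units
    signature: bytes


class BlockchainSystem(ABC):
    """Minimal operations set required of every interoperable chain."""

    @abstractmethod
    def submit(self, tx: StandardTransaction) -> str:
        """Record the transaction; return a receipt usable across chains."""

    @abstractmethod
    def verify(self, tx_id: str) -> bool:
        """Confirm a transaction has been committed on this chain."""

    @abstractmethod
    def lookup(self, tx_id: str) -> StandardTransaction:
        """Return the transaction in the common format, however it is
        stored internally."""
```

Any chain that can parse StandardTransaction and implement these operations could, in principle, verify and exchange transactions with any other, regardless of its internal consensus mechanism or storage format.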

Legal algorithms

In their concluding chapter, the authors note that, when it comes to legal frameworks, we need to learn from the human-machine systems framework. Such systems need to be modular, continually tweaked, densely instrumented, reiterated, and redesigned. The authors note that several underdeveloped and missing components of current computational law need close attention. These include: specification of system performance goals, measurement and evaluation criteria, testing, robust and adaptive system design, and continuous auditing.

One of the benefits of an AI-driven legal system could be to help attorneys and legislators deal with large-scale text-based searches, find clauses in documents, and suggest wordings. There are also new opportunities for developing legal agreements using tools originally intended for creating large software systems.
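As a toy illustration of the clause-finding use case (Python; the pattern and documents are invented, and real systems use far richer language models than a regular expression):

```python
# Toy clause finder: flag documents that appear to contain an
# indemnification clause. Hypothetical pattern and documents.
import re

INDEMNITY = re.compile(r"\b(indemnif\w+|hold\s+harmless)\b", re.IGNORECASE)

documents = {
    "msa.txt": "Vendor shall indemnify and hold harmless the Client...",
    "nda.txt": "Each party shall keep the Confidential Information secret...",
}

for name, text in documents.items():
    hits = INDEMNITY.findall(text)
    if hits:
        print(f"{name}: possible indemnification clause ({', '.join(hits)})")
# -> msa.txt: possible indemnification clause (indemnify, hold harmless)
```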

Conclusion

Building the New Economy is a timely book that attempts to construct a view of how state-of-the-art technology could solve, or at least reduce, the social dilemmas of today’s data-driven world. It provides various design concepts, architectures, potential challenges, possible solutions, and proposed prototypes across diverse applications, with an emphasis on healthcare and financial systems. The authors present interesting ideas and the book is well worth a read, particularly for its macro-view of what technology can offer society.

About the book editors/lead authors

Alex (Sandy) Pentland, professor of Media Arts and Sciences at the Massachusetts Institute of Technology (MIT), helped create and direct the MIT Media Lab, and has been declared by Forbes one of the ‘seven most powerful data scientists in the world’. Sandy is also one of the most-cited computational scientists.

Alexander Lipton, an MIT Connection Science Fellow, is a managing director and Quantitative Solutions executive at Bank of America. He is currently an Adjunct Professor of Mathematics at New York University (NYU).

Thomas Hardjono is the CTO of Connection Science and Engineering at MIT. He leads projects in data privacy, security, and identity, and is the technical director of the Internet Trust Consortium at MIT Connection Science and Engineering.

Other authors whose chapters are featured in this review are: Jose Parra-Moyano, Karl Schmedders, Anne Kim, Aetienne Sardon, Fabian Schär, and Christian Schüpbach.




Lan (Vivien) Zou is an AI researcher.



