Lessons from the early Internet on why we need blockchain interoperability. By Everett Muzzy and Mally Anderson, ConsenSys Research.
This piece is the first in a series exploring the state and future of interoperable functionality in the blockchain ecosystem. We define “interoperability” here as the ability for blockchains to exchange data between platforms—including off-chain data and transactions—without the aid of third parties. By examining the progress of Web2 architecture from early theory to mass adoption, the series argues that blockchain protocol interoperability is nothing short of a fundamental requirement to realize the full potential of the technology. The series demonstrates how the ecosystem is currently in danger of “Balkanization”—i.e. becoming a series of unconnected systems operating alongside, but siloed from, each other—in the face of competition and commercial pressure. In order for the ecosystem to prioritize interoperability, it must establish a secure, radically decentralized, and trustless settlement layer onto which simultaneously-operating blockchains can anchor their transactions. Given the current state of blockchain systems, Ethereum’s architecture most closely resembles what is required of this universal root chain.
The Risk of Balkanization
The problems of today’s Web2 architecture—in particular the siloing, vulnerability, and mismanagement of user data—are traceable to the industry’s deviation from early Internet values, which originally prioritized interoperability as a key to a sustainable and equitable web-connected world. At its current pace, the blockchain ecosystem is at risk of similar “balkanization,” where protocol interoperability is deprioritized as companies race to demonstrate their own blockchain’s use case more quickly than their competitors’. The risk is that the pressures for mainstream adoption could arrive before the Web3 infrastructure is sufficiently interoperable and secure to manifest the full vision of its original architects. Web3 could end up looking much like Web2 does today in terms of financial exclusion, information siloing, and data insecurity—but instead underwritten by a series of blockchains which by competitive design do not interoperate at the protocol level.
Lessons from the Early Internet
The Internet was developed as a publicly funded, academic research project beginning in the 1960s to augment humans’ capability to create, transmit, and share information. Early iterations of online information took the form of basic text and images, connected and shared by a web of hyperlinks. Since then, “information” on the Web has evolved to represent asset ownership (especially money), user profiles, and identity (specifically, scattered digital fragments of one’s identity).
Regardless of how broad the definition of digitally-represented information has become, the theory of online information management finds its roots in early Web theory. While building the next evolution in information transmission, early Internet pioneers sought to ensure that information on the Web would flow in a manner that mimicked natural patterns of human behavior. Tim Berners-Lee, inventor of the World Wide Web, positioned his mission to build a Web structure that enabled humanistic transmission of information in contrast to the hierarchical structure of a corporation—until that point, one of the dominant structures through which humans produced and managed large volumes of information. Whereas a company’s rigid, top-down structure sought to dictate information movement in an established, patterned, and traceable manner, the reality of how people communicated and shared was far messier and more amorphous. In order to emulate the natural peer-to-peer social exchange of information, Berners-Lee recommended simplicity in Web architecture. By providing just the bare bones of a digital system, information could grow and evolve in its most natural—and thus most scalable—manner. The minute the “method of storage…place[d] its own restraints” on how things could be transferred, information would suffer. Berners-Lee solidified his conviction that the Web should mimic natural structures by describing the growth of the Web as “forming cells within a global brain” [source] and hoping it might one day “mirror” the way humans interact, socialize, and live on a daily basis [source].
The goal of achieving scalable, humanistically-transmitted digital information was contingent on a crucial concept: the “end to end effect” [source]. The “end to end” effect meant that users of the Internet (i.e. those who were on either end of the transmission of a piece of information) experienced that information in a consistent manner. Humans needed to be able to adopt repetitive behaviors that would allow them to retrieve, process, and submit information in roughly the same way every time they interacted with the Web. Stated differently, the technology that served the consumer a piece of information had to do so in a consistent manner time after time, across geographies, and across content types.
The end-to-end effect could be achieved in two ways: 1) Third parties could establish themselves as middlemen, providing services to render information in a consistent form as it was sent from point A to point B. These companies and their engineers would “have to learn the art of designing systems” to negotiate and control the passage of information through the digital boundaries that separated incompatible protocols. 2) The second option was for all protocols through which information might need to pass to be interoperable, ensuring that data could seamlessly travel from user to user without barriers that would need additional negotiation to breach. Native protocol interoperability would create the “end to end effect” automatically, instead of relying on exploitative third parties to provide that uniformity behind the scenes.
Of these two methods, interoperability was the preferred approach of those leading the charge in early Web development. Berners-Lee often described this goal as “universality,” suggesting that the future of the Web would include a series of distinct protocols, but they would all exist in the same macrocosm, thus ensuring compatibility. Berners-Lee implored technologists to consider universal interoperability a more important goal than “fancy graphics techniques and complex extra facilities” [source]. He felt it was less important to succumb to the growing appetite for profit and commercialization (which demanded fancy graphics and extra facilities) than it was to focus on protocol design.
As commercialization accelerated and the Internet’s public origins gradually receded, a new set of incentives entered a previously largely academic industry. As a result, a series of siloed standards began to emerge as private companies competed to outperform each other, threatening irreparable fragmentation of the Web ecosystem. The creation of separate, individual systems was antithetical to long-term economic optimization. In one of the foundational papers of the Internet, Paul Baran observed in 1964 that “In communications, as in transportation, it is most economic for many users to share a common resource rather than each to build his own system” [source]. In 1994, the World Wide Web Consortium was formed to establish industry-wide standards to ensure that the message of interoperability remained a core priority in the development of the Web. The Consortium’s goal to “realize the full potential of the web” [source] depended on the belief that only through interoperability—achieved by establishing standardization across protocols—could that full potential be met.
Shifting Information Incentives
A look at content management on the Web provides a poignant example of the early ideology of interoperability and standardization. The issue of content management—specifically, the matters of capturing value, establishing ownership, and protecting copyright—was frequently called upon to highlight the potential shortcomings of the Internet and to stir developers, regulators, and technologists to begin discussing these matters early on.
“Information wants to be free” is often traced back to Stewart Brand at a 1984 convention. Information, the thinking went, should spread openly and organically in digital form just as it had between members of the species throughout human history. The Web allowed for the near-infinite dissemination of information, providing the ultimate venue to express its penchant for freedom beyond the limits of analog communication methods to date. The Web presented a magnified stage for information to be broadcast, but did so at the cost of clear definitions of ownership, scarcity, and value to which global markets had become accustomed. The Web allowed information to be free, but also exposed the opportunity for it to be economically exploited. (This had been true in other periods of information technological advancement, such as the printing revolution of the fifteenth century and radio in the early twentieth – granted, at exponentially smaller scales). This consequence relates to the second and less-frequently referenced part of Brand’s quote: “Information wants to be expensive” [The Media Lab, pg. 202-203]. Looking back, Brand’s argument might more accurately be rephrased as “information wants to be valued for what it is worth,” which means sometimes—though not always—it is expensive. New patterns and capabilities of information circulation, powered by the Web, made the proper valuation of digital information impossible. One could not, for instance, accurately trace the origin of a piece of content to provide its original creator with appropriate compensation. This lack of standard ownership protocols for content allowed third parties to step in and provide that standardization—or, more accurately, the illusion of standardization—by facilitating the end-to-end effect that was identified as crucial for scaled use of the Internet. And they did this for all types of information, not just visual and written content. 
The illusion of back-end protocol interoperability was augmented by a growing sterilization of what users experienced on the front end. Kate Wagner, writing about the disappearance of early Internet design idiosyncrasies through the ’90s and early 2000s, refers to the “…dying gasp of a natively vernacular web aesthetic, one defined by a lack of restriction on what the page could or should look like” [source]. The consumer-facing Web became more and more standardized, but the back-end remained siloed, and consequently remained ripe for data exploitation and profit.
As third parties stepped in and became crucial to the standard transmission of information, they began dictating the “value” of information. This early economic dynamic incentivized the creation of artificial information scarcity. Denying information its natural inclination to be free created artificially high price tags for different data rather than allowing information to be valued for what it was worth. These companies did well by restricting the flow of the information they controlled. They treated information like most other commodities on Earth, where simple supply-demand theory dictates that scarcity equals value. As John Perry Barlow noted in his 1994 essay “The Economy of Ideas,” however, “digital technology is detaching information from the physical plane” [source]. By treating information as a physical product and controlling or restricting its ability to flow freely, third parties suppressed the unique quality of information—that it becomes more valuable the more common it is. “If we assume that value is based on scarcity, as it is with regard to physical objects,” Barlow argues, the world would be at risk of developing technologies, protocols, laws, and economies contrary to the true, human nature of information [source].
“The significance of the [Internet] lies not in the networking technology but in the fundamental shifts in human practices that have resulted,” wrote Peter Denning in a 1989 reflection on the first twenty years of the Internet [source]. At the end of the day, Web2 proliferated because the end-to-end effect was successfully implemented, achieving mass adoption and giving everyday users the illusion of a single, global Internet. Though interoperability was a core aspiration of Berners-Lee and other early Internet architects, all that mattered to the end-consumers (and thus to the companies seeking to profit from them) was that the Internet scaled to everyday utility as quickly as possible. Information appeared to travel organically and humanistically; content appeared to be sourced and verified; and data appeared to be widely available and trustworthy. Behind the scenes, however, the same third-party companies (or their descendants) from the earliest days remained the gatekeepers of information transmission on the Internet—with notable consequences.
Early Internet theorists didn’t intend for the technology to remain independent from private companies forever. In fact, the realization of the Internet’s potential relied on the assumption that the desire for wide-scale use would push private companies to step in and fund more rapid and global development. The arrival of private companies, however, precipitated the eventual balkanization of the ecosystem.
The Emergence of Balkanization
The original vision of the Internet’s architects was an open, distributed, and decentralized “network of networks” [source]. Funded by billions of public U.S. research dollars and initially conceived as an academic project, the first twenty years of Internet development unfolded in relative obscurity. Its initial funders, most notably ARPA (the Advanced Research Projects Agency, which later became DARPA) and the National Science Foundation (NSF), did not necessarily expect profit from the project, so the early Internet scaled slowly and deliberately [source].
The first instances of networking were practical: mainframe computers at research universities were prohibitively expensive, so sharing resources between them would result in better research. The government controlled those networks, meaning all participants were incentivized to share their code in order to secure continued funding and maintain an open-source ethos. Protocols had emerged in the mid-1970s, and interoperable digital communications standards emerged shortly thereafter for practical reasons: the machines had to be able to talk to one another. By 1985, the NSFNET network had connected all major university mainframes, forming the first backbone of the Internet as we know it. In the late 1980s, more participants flocked to this backbone network—enough that traffic began to outpace the network’s capacity to host it.
Congestion of the network was a primary concern as activity and enthusiasm for the technology increased. In 1991, Vinton Cerf—co-designer of the TCP/IP protocols and another major Internet architect—acknowledged the mounting challenge of scaling infrastructure: “In the boiling ferment of modern telecommunications technology, a critical challenge is to determine how the Internetting architecture developed over the past 15 years will have to change to adapt to the emerging gigabit-speed technologies of the 1990s” [source]. The NSFNET enforced a ban on commercial activity, but that still wasn’t enough to limit traffic. The ban precipitated a parallel development of private networks to host commercial activity.
In response to this parallel networking trend and the strain on NSFNET, the NSF’s Stephen Wolff proposed privatizing the infrastructure layer. This would alleviate congestion by bringing private investment into enhancing the network’s capacity, allow NSFNET to integrate with private networks into a single interoperating system, and release the project from government control so the Internet could become a mass medium. By 1995, NSFNET was eliminated altogether and an ecosystem of private networks took its place. In its wake, five companies (UUNET, ANS, SprintLink, BBN, and MCI) emerged to form the new infrastructure layer of the Internet [source]. They had no real competitors, no regulatory oversight, no policies or governance guiding their interaction, and no requirements for minimum performance issued by any government entity. This totally open, competitive environment, while quite unprecedented, met little opposition from the thought leaders of the early Internet, because they had always intended for the networks to be handed over to private infrastructure providers once there was sufficient mainstream interest to uphold them. In other words, they expected the incentives to shift when the public embraced the technology. The protocol and link layers of the Web had developed in relative obscurity; only at the networking or infrastructure layer did markets form.
The five new major providers connected and integrated local and small-scale networks across the United States. Essentially, these companies began as mediators and became de facto providers by virtue of the fact that they oversaw all data in the system at some point in its transmission. This arrangement appears counterintuitively centralized compared to the distributed, resilient system architecture prioritized up to that point, but the Internet’s architects were aware of the trade-off. Because there was more than one provider in play, the advocates of privatization felt there would be sufficient competition to prevent the balkanization of the infrastructure service layer. In the years following the dismantling of NSFNET, this proved not to be the case. Privatizing the infrastructure layer resulted in an oligopoly of providers essentially controlling the data flow of the entire Internet, completely in secret, by virtue of controlling the information’s movement and throughput. They could grant one another shortcuts to overcome overall network congestion and offer preferential treatment to websites that paid for faster content delivery. Agreements between these providers were entirely private, since they were not obligated to disclose their terms, so smaller provider networks could not compete in the marketplace.
So, an attempt in the early 1990s to avoid balkanization of the Internet eventually resulted in accidental, extreme centralization in which a cabal of five infrastructure providers gained control of the entire infrastructure layer. In one sense, this is a lesson in the importance of native governance protocols and of reasonable regulation in developing healthy markets for new technologies. Good regulation that results in fairer, more open competition ultimately results in a richer market overall. Some retention of public interest also introduces a feedback loop of checks on the development of a novel technology as it scales. One shortcoming of the private infrastructure layer as it took shape was that inattention to security carried over from the NSFNET, where it had not been as critical a concern; the absence of security mechanisms, and of R&D into security issues generally, introduced vulnerabilities that still exist today [source]. The almost total lack of intentional governance has also resulted in the erosion of so-called “net neutrality,” and hence in unfair prioritization of network speeds for the highest bidder and vastly unequal access to networks overall. Measures taken to prevent balkanization instead resulted in an all-but-irreversibly balkanized infrastructure layer.
The lessons of this early-1990s centralization of providers are quite relevant to today’s phase of blockchain ecosystem development. The establishment of interoperability standards is likely to emerge at scale as a functional necessity. This was true of the protocol layer of the Internet and will likely prove true in Web3 once sufficient network pressures, and therefore economic incentives, emerge. But whereas the protocol layer of the Web was publicly funded and therefore free from profit expectations for more than twenty years, the first wave of blockchains has been fundamentally financial in nature, with financial incentives present from inception and central all the way down to the protocol layer. So while there are shared patterns in Web2 and Web3 development, the risk of balkanization emerges at very different points in their timelines.
Though predictions of its existence have been around for decades, and cryptographic theory for decades longer than that, blockchain technology in practice—let alone programmable, usable blockchain technology—is still nascent. At such an early stage, breakneck innovation and competition are important for ecosystem growth. Today’s early blockchain industry, however, is subject to the same pressures as the early Internet industry of the 1980s and 90s. The opportunity of blockchain is world-changing—and therefore so is the risk.
The opportunity of blockchain technology, as this series will argue, hinges on interoperability among all major blockchain projects as fundamental to the development of those protocols. Only by ensuring that all blockchains, whether entirely unrelated or fiercely competitive with one another, embed compatibility into their foundational functionality can the capabilities of the technology scale to global use and consequence.
With the sheer media force that crypto, token sales, and token markets have generated in the past two years, blockchain companies are under tremendous pressure to prove the technology’s use, profitability, and commercialization. In this way, the incentives that pushed the Internet to deprioritize interoperability and focus on the everyday usability of the technology are no different from those at work today. If anything, our ability today to be always connected and receive real-time updates anywhere in the world ensures the blockchain ecosystem is under more pressure to demonstrate its commercial capabilities than the early Internet was at a similar stage in its development. As companies race to prove themselves “better” or more “market-ready” than existing protocols, they abandon interoperability in order to focus on—to recycle Berners-Lee’s words—the “fancy graphics techniques and complex extra facilities” that appeal more to short-sighted investors and consumers.
The race to promise immediate functionality is economically effective, but its continuation could compromise the entire development of the blockchain industry. Should companies continue to ignore interoperability and instead each build their own proprietary blockchain and attempt to pit it against a supposed market competitor, the ecosystem could, in a matter of years, look very much like the early days of the un-interoperable Internet. We would be left with a scattered collection of siloed blockchains, each supported by a weak network of nodes and susceptible to attack, manipulation, and centralization.
Imagining an un-interoperable future for blockchain technology is not too difficult. All the material and imagery to paint the picture exists in early Internet doctrine, and has already been discussed in the first section of this piece. Just as in today’s Internet, the most important quality of data in Web3 is the “end to end” effect. For the technology to scale to mass adoption, consumers interacting with Web3 must have a seamless experience regardless of what browser, wallet, or website they are using. In order for this end-to-end goal to be achieved, information must be allowed to flow in its organic, humanistic manner. It must be allowed to be free. A blockchain today, however, has no knowledge of information that might exist on a different blockchain. Information that lives on the Bitcoin network has no knowledge of the information that lives on the Ethereum network. Information, therefore, is denied its natural desire and ability to flow freely.
The consequences of information being siloed in the blockchain in which it was created are straight from the history books of the Internet. The Internet centralized at the infrastructure layer due to scaling pressures to meet public enthusiasm and mass adoption. Should the Web3 ecosystem reach that point before protocol interoperability is sufficiently pervasive, the same thing will happen again. Without native blockchain interoperability, third parties will step in to manage the transfer of information from one blockchain to another, extracting value for themselves in the process and creating the kind of friction the technology is meant to eliminate. They will have access to and control over that information, and they will have the ability to create artificial scarcity and inflated value. The vision of a blockchain-powered Internet future the industry so often evokes is nothing without interoperability. Without it, we will find ourselves in a future with a global network nearly identical to the dominant Web2 landscape today. Everyday consumers will still enjoy their smooth and consistent interaction with Web3, but their data will not be secure, their identity will not be whole, and their money will not be theirs.
All this is not to say that the industry has entirely forgotten or abandoned the importance of interoperability. Proofs of concept such as BTC Relay, consortiums such as the Enterprise Ethereum Alliance, and projects such as Wanchain demonstrate that some people do still acknowledge the critical value of interoperability. There is a good chance that market pressures will incentivize the blockchain ecosystem towards interoperability regardless of how things evolve in the short-term. However, reactionary vs. proactive interoperability can still spell the difference between where value is captured and how data is exploited. Reactionary interoperability—i.e. only deciding that interoperability should be a crucial factor of blockchain many years down the road, when the market demands it—provides opportunities for third parties to step in and facilitate that interoperability. They profit from their services and they have asymmetric access to users’ data. Proactive interoperability—i.e. ensuring interoperability is coded into protocols at this nascent phase of the ecosystem—on the other hand, ensures data can be securely and efficiently transmitted between blockchains without having to pass control over to a mediating third party.
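Proofs of concept like BTC Relay hint at what proactive interoperability can look like in practice: Bitcoin block headers are relayed into an Ethereum contract, which can then confirm that a given Bitcoin transaction was included in a block by checking a Merkle proof against a relayed header, with no trusted middleman. The sketch below—in Python rather than contract code, and with hypothetical transaction data—illustrates only that underlying Merkle-proof primitive, not BTC Relay’s actual interface:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Double SHA-256, the hash Bitcoin uses for headers and Merkle trees.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf hashes up to a single root, Bitcoin-style."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:            # odd-sized level: duplicate the last hash
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Collect (sibling_hash, sibling_is_left) pairs proving leaves[index]."""
    proof, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1           # the neighbor hashed alongside our node
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Re-derive the root from a leaf and its proof; no third party required."""
    acc = leaf
    for sibling, sibling_is_left in proof:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == root

# A verifier holding only the (relayed) root can check inclusion of one
# hypothetical transaction without trusting whoever supplied the proof.
txs = [h(b"tx-a"), h(b"tx-b"), h(b"tx-c")]
root = merkle_root(txs)
assert verify_inclusion(txs[1], merkle_proof(txs, 1), root)
assert not verify_inclusion(h(b"tx-x"), merkle_proof(txs, 1), root)
```

The design point is that the verifying chain never needs the full foreign chain—only a commitment (the root) and a logarithmic-size proof—which is what makes this kind of cross-chain verification cheap enough to embed at the protocol level rather than outsource to a mediator.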
There is, without a doubt, a necessary and healthy balance between commercialization and open-source interoperability. Commercialization promotes competition and innovation, incentivizing developers and entrepreneurs to build systems that work best for their customers. The balance, however, has proven precarious in the past. As pressure mounts for blockchain to deliver on its promise, we will find commercialization placing more and more stress on blockchain to be market-ready, no matter what ideologies it must sacrifice in the short term.