While the terms “internet” and “web” have become synonymous, they function very differently. The web identifies and retrieves information using the server-client architecture enabled by the internet.
At its heart, the web consists of three components. First is a unique identifier for the information, the Uniform Resource Locator (URL) that we know as the “web address.” The common language for publishing that information is “hypertext markup language” (HTML). And the “hypertext transfer protocol” (HTTP) is the protocol for requesting and transferring the information. Taken together, this trio takes your web browser to the specific information you have requested, retrieves it, and then returns and displays it for you.
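To make the trio concrete, here is a minimal sketch in Python of the round trip a browser performs, using only the standard library; the address is the placeholder domain example.com, not any site discussed here:

    # One web request: the URL names the resource, HTTP carries the
    # request and response, and the payload comes back as HTML.
    from urllib.request import urlopen

    with urlopen("https://example.com/") as response:      # HTTP GET to the URL
        print(response.status, response.headers.get("Content-Type"))
        html = response.read().decode("utf-8")             # the HTML document
    print(html[:80])                                       # start of the markup

A browser does the same thing, then renders the HTML instead of printing it.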
The internet made things possible. The web made it usable.
Rewriting the Rules
Moving network routing activity away from a central point and distributing it across multiple points closer to the network’s edge changed the nature of networks and ended the century-long run of Theodore Vail’s vision.
An analog telephone call required a continuously connected circuit running through a switching center that kept the lines open for the duration of the call. A TCP/IP transmission breaks whatever is being transported into packets of data that travel to their destination by whatever network routes are available, and then reassembles those packets at the other end. The result: IP dispenses with keeping a circuit open, while replacing the switch at the center of a starburst pattern with a cobweb of connected routers moving tightly bunched packets at lightning-like speeds.
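The packet mechanics can be sketched in a few lines of Python; this is a toy illustration of the principle, not the actual TCP/IP machinery:

    import random

    def packetize(message: bytes, size: int = 8):
        """Split a message into (sequence number, chunk) packets."""
        return [(i, message[i:i + size]) for i in range(0, len(message), size)]

    message = b"Packets may travel any route, in any order."
    packets = packetize(message)
    random.shuffle(packets)        # each packet takes whatever path is available

    # The receiver sorts by sequence number and reassembles the original.
    reassembled = b"".join(chunk for _, chunk in sorted(packets))
    assert reassembled == message

No circuit is ever held open; the sequence numbers let the far end restore order no matter how the packets traveled.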
In other words, the move from requiring a dedicated circuit to a shared architecture that utilizes every millisecond of capacity destroyed the foundation of network inefficiency on which Theodore Vail had built AT&T.
Because digital transmission seeks out microseconds of unused capacity on multiple paths and fills them with packets of data pressed cheek by jowl with other, unrelated packets, the cost to carry each incremental piece of data is virtually zero. Vail’s economics of inefficiency no longer hold.
The relative absence of transport cost is illustrated by how Voice over Internet Protocol (VoIP) services such as Skype can offer “free” phone calls from computer to computer regardless of distance covered. A traditional phone line would require unused capacity waiting for someone to make a call and then dedicating an entire circuit to that single call. The longer the distance over which that circuit must be maintained, the costlier the call. When a telephone call is digitized, however, the packetized information is sent across the distributed fishnet network in an almost infinite number of route permutations based on the availability of tiny units of capacity in disparate networks. Because of the efficiency of high network utilization, the incremental cost of digital transmission approaches zero regardless of the distance traveled. One monthly internet access subscription from the phone company, cable company, or independent provider pays the cost of maintaining the available capacity, thus making each individual use of the network “free.”52
Early applications of digital efficiency began to surface outside the AT&T system. New networks arose to challenge the Bell System—often using Bell’s own lines. Led by men like Bill McGowan, companies such as MCI built their own long-haul networks and leased additional capacity from AT&T. Because they used digital technology, the upstarts could underprice AT&T, even when using Bell lines. AT&T, channeling Theodore Vail, tried everything possible to get the government to stop or otherwise rein in the upstarts. But the digital cat was out of the bag.
The other characteristic of an IP network is that all information looks alike. Previously, as each new means of communication emerged, it required its own unique network. Radio and television, for instance, required a network different from that used to distribute telephone calls.53 The advent of the lingua franca of IP brought about an era of convergence of previously separate networks where voice, video, and data were all the same—a collection of zeros and ones riding a common network to be converted into their final format by software on the receiving computer. An IP telephone call is a series of zeros and ones indistinguishable from an IP bank record or an IP television video.
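A toy Python sketch makes the point; the type labels below are invented for illustration and belong to no real protocol:

    # On an IP network the payload is only bytes; the receiving software's
    # choice of decoder is what makes them a call, a record, or a video.
    decoders = {
        "text":   lambda b: b.decode("utf-8"),
        "number": lambda b: int.from_bytes(b, "big"),
    }

    payload = bytes([72, 105])             # the same two bytes...
    print(decoders["text"](payload))       # ...decoded as text:   Hi
    print(decoders["number"](payload))     # ...decoded as number: 18537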
Perhaps even more powerful than its trans-platform capability is the way IP opened up a world of new applications and opportunities. The existence of a common IP platform allows digital information to be reorganized, repurposed, and redirected to create new or improved products and services. These new products are iterative in that they allow new applications to be built on old ones (for instance, Facebook began as a platform for friends to post messages, but its all-IP technology has allowed the introduction of video and messaging). IP is also compounding in its ability to create something new by combining pieces of previously incompatible information into a new product (for instance, digital medical records have opened up new fields of medical research by allowing the records to be searched and related by treatment and outcome). Creativity, too, reached new heights because IP technology makes everyone with internet access a publisher and videographer. Finally, IP is measurable, creating new data measurement points every time it is used.
The combination of low-cost computing power and ubiquitous digital distribution redefined the nature of networks and their applications. The computing power that once had been locked away in special rooms and tended by its own priesthood was available to everyone. The centralized network that drew economic and social activities to a common point was dispersed, and commerce and culture followed. The information age had begun.
Connections
The technology originally developed for the purpose of improving national security has, fifty years later, enabled a new generation of threats to the security of nations, the sanctity of corporations, and the privacy of individuals. As one wag observed, the internet is “a lab experiment that got loose” to infect everything it touches.54
New networks have always introduced new threats to the traffic they convey. These threats, in turn, have stimulated new safeguards. Monks typically copied their texts only in daylight lest an overturned candle ignite not just the book, but the entire library. Railroads introduced onboard security to protect both passengers and freight from train robbers. Because telegraph wires could be tapped, elaborate cipher systems were developed to encode messages.
Paul Baran developed packet switching as a response to the threat of nuclear war. His goal was to ensure the ability to respond to an attack. Now the distributed architecture he developed is being turned in the opposite direction, enabling cyberattacks on information, individuals, and infrastructure. The technology created to secure the old network became the basis for a new network that, as networks have always done, has opened new threats that demand new solutions.
That network security challenge is made manifestly more difficult by the hallmark of the new network: its distributed architecture that is open to all. Theodore Vail was able to secure the telephone network through incessant centralization of access, switching, and innovation. The security challenges of the twentieth century, from nuclear to chemical and biological weapons, also tended to be centralized and, thus, open to control. The national security strategy of “containment”—which protected the world after World War II—is possible only when the threat is centrally containable.
But containment is the opposite of the distributed forces of the internet. The twenty-first-century challenge is to reorient how we think about network security and to replace centralized containment practices with a decentralized dispersal of responsibility for our individual, corporate, and national security. Hiding behind firewalls and other static responses is about as effective against a cyberattack as the Maginot Line was in stopping the blitzkrieg.
In a distributed network the responsibility for protecting the network and those who use it is—like the network itself—dispersed. Individuals have a greater responsibility to protect their data as well as prevent unauthorized access to their computers. Corporations have the responsibility to use the new connectivity to establish collective, but distributed, defenses that share threat and mitigation information. Government must be a partner and facilitator in this highly un-government-like distributed response.
Until this point, the history of networks has been one of centralization, for both the private and public sectors. As the new networks reshape economic activity in the opposite direction, it is necessary to rethink how we embrace network solutions to the new security challenges of a decentralized and open network.
Seven
The Planet’s Most Powerful and Pervasive Platform
The village of Siankaba lies along the Zambezi River in the southern African nation of Zambia. Home to 180 people and countless chickens, the village has no running water and no electricity. Aside from its inhabitants’ huts, the predominant architecture of Siankaba consists of chicken coops built high on stilts as protection from nighttime predators. After a day of foraging through the village, the chickens, as if guided by GPS, return to the proper coop and climb its ladder to safety.
I wandered through Siankaba as the women were preparing dinner over open wood fires. Some of the men were setting up the evening’s news and entertainment by hooking up an old radio to a car battery.
Adjacent to one of the dinner campfires, nailed slightly askew to a tree limb, was a crudely painted sign proclaiming “Latest Fresh Eggs on Sale.” It made sense that someone identified on the sign as “Mrs. DR” would be selling the product of the ubiquitous chickens. What seemed out of place in this village, however, was the lettering squeezed onto the bottom of the sign: “Cell 0979724518.”
In remote rural Africa, in a village of huts without running water or electricity, the cell phone is changing the basic patterns of life. Siankaba’s inhabitants are part of the 95 percent of the world’s population now covered by a mobile phone signal.1
Villages such as Siankaba have no water or electricity because the construction of the necessary infrastructure is prohibitively expensive compared with the potential users’ ability to pay. And because the same market dynamics apply to telephone wires, the village was cut off from the outside world as well—until the advent of the wireless phone network. Nature’s airwaves provide a low-cost pathway that enables the new network to be sustained by pay-as-you-use fees. Thanks to the economics of airwave distribution and low-cost phones, places like Siankaba can no longer be described as isolated.
Sign in Siankaba, Zambia.
When a villager in Siankaba can receive orders for eggs from a purchaser miles away, or can call a doctor about an ailing child, or can reach out to a distant family member, life in remote villages in Africa has been changed forever. When Mrs. DR can be connected to billions of other mobile phone users located anywhere on the planet, life on all parts of that planet will never be the same.
In 2002, the penetration of mobile phones worldwide overtook the penetration of wired phones.2 The telephone had been around for 125 years, yet all the telephone networks in the world combined had not become as pervasive as the mobile technology that had its first commercial trial only twenty-four years earlier, in 1978. In the intervening years mobile connectivity has soared to the point that many individuals have more than one device, and the number of commercial wireless connections is greater than the population of the planet.3
Today the mobile phone of 2002 is a museum antique; that it made phone calls without a physical connection was a wonder in its time.4 In the new network revolution, however, wireless delivery and the internet have merged. The computing engine that started with Babbage is now a powerful processor in pocket or purse, and the universal network envisioned by Vail has become as ubiquitous as the air. Together they have created the most powerful and pervasive platform in the history of the planet.
The Path to Ubiquity
In 1873, the Scottish physicist James Clerk Maxwell published “A Treatise on Electricity and Magnetism,” in which he hypothesized “that if an electric current were to surge back and forth through a wire very rapidly, then some of the energy in this current would radiate from the wire into space as a so-called electromagnetic wave.” He called the wire from which the wave emanated an “aerial” or “antenna.” Sixteen years later a German physicist, Heinrich Hertz, proved Maxwell’s theory by generating electromagnetic waves in his laboratory. Hertz’s name was subsequently applied to the unit of measurement of those waves.
It was the young Italian Guglielmo Marconi, however, who captured the world’s attention by harnessing electromagnetic waves to send telegraph signals. In 1901 Marconi achieved the impossible by sending a wireless telegraph signal from one side of the Atlantic to the other. Five years later the Canadian-born inventor Reginald Fessenden transmitted an audio signal to ships at sea.5
The ability to transmit sound—including the human voice—without wires was the ultimate threat to Theodore Vail’s concept of universal service provided by AT&T. Shortly after Marconi’s and Fessenden’s feats, Wall Street began to worry about AT&T’s future. Who needed wires if the ether could deliver a conversation? Vail responded in January 1915, when the AT&T board of directors appropriated $250,000 to develop a radiotelephone.6
Only nine months later, on September 29, AT&T engineers moved Vail’s voice from his desk telephone in New York via phone lines to an antenna in Arlington, Virginia, where it was cast into the air and received as far away as Honolulu.7
In a congratulatory telegram to his chief engineer Vail wrote, “Your work has indeed brought us one long step nearer our ‘ideal’—a ‘Universal Service.’ ”8
The following year, in his 1916 annual report to shareholders, Vail, comfortable in his dominance of the technology, reassured those who worried about wireless competition. “The true place of the wireless telephone, when further perfected,” he wrote, “has been ascertained to be for uses supplementary to, and in cooperation with, the wire system, and not antagonistic to it or displacing it.”9
It would take sixty years for the technology to be “further perfected.” The result of that development would belie Vail’s assertion that wireless networks were “not antagonistic” to the wired network.
The first phase of mobile communications began in 1921 when the Detroit police department took the initiative to put mobile radios in squad cars. “Calling all cars” was followed in 1929 by ship-to-shore radio, which connected an ocean liner passenger directly into the Bell System.10 During World War II mobile radios for police and emergency vehicles evolved into portable radio telephones the troops nicknamed “walkie-talkies” and “handie-talkies.”
Thirty years after Vail’s vision of a supplementary mobile telephone service, AT&T began offering the capability. On June 17, 1946, AT&T’s Southwestern Bell subsidiary launched its Mobile Telephone Service (MTS) when a St. Louis trucker was connected directly into the wired Bell network. Shortly thereafter the service was rolled out to twenty-five other cities.11
The problem with MTS was the limited amount of available airwaves. MTS was a “high tower–high power” technology in which a multi-hundred-foot antenna blasted out a signal as far as possible. The return path to the tower also required a powerful signal, necessitating an eighty-pound transmitter-receiver in the car or truck. The mobile unit sucked so much power that its use would cause the vehicle’s headlights to dim. Only a handful of individual channels were available to carry the MTS calls, limiting the number of individuals who could use the service at any one time. In all of Manhattan, for instance, the network could serve only about a dozen users simultaneously.12
The allocation of the airwaves was controlled by the Federal Communications Commission (FCC), the same agency that regulated AT&T’s wire monopoly. In 1947, AT&T petitioned the agency to make more of the airwaves (technically called the electromagnetic spectrum) available for mobile telephones. Two years later the FCC allocated a few more channels. In a break from the “natural monopoly” concept, however, the FCC allocated half of the new channels to non-Bell entrepreneurs. It would be another thirty years before the Bell System tasted real competition, but the door had cracked open.
Also in 1947, researchers at Bell Labs began investigating whether there might be a technological solution to the laws of physics that limited the number of signals available for mobile communications. Drawing on the work of Rae Young, Doug Ring wrote “Mobile Telephony: Wide Area Coverage” and advanced the innovative idea that if the spectrum could be geographically divided into a honeycomb of hexagons, each on a different channel and operating at low power, then the common block of spectrum used for the “high tower–high power” solution could be subdivided into smaller noninterfering cells capable of serving more subscribers. It is this concept that is at the heart of modern mobile networks. The work was filed away and never published.13
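The idea can be sketched in Python; the seven-group coloring below illustrates frequency reuse in a hexagonal grid, not Ring and Young's actual engineering:

    # Tile the service area with hexagonal cells in axial coordinates (q, r)
    # and assign each cell one of 7 channel groups so that no neighboring
    # cells share a group; the same spectrum is reused in every cluster.
    def channel_group(q: int, r: int) -> int:
        return (2 * q + 3 * r) % 7

    NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    # Check a patch of the grid: no cell matches any of its six neighbors.
    for q in range(-5, 6):
        for r in range(-5, 6):
            for dq, dr in NEIGHBORS:
                assert channel_group(q, r) != channel_group(q + dq, r + dr)

    # With 70 channels split 7 ways, each cell carries 10 calls, yet the
    # same 70 channels serve every cluster, so capacity grows with cells.
    cells, channels = 100, 70
    print("simultaneous calls:", cells * (channels // 7))   # 1000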
For the next twenty years, the concept of a cellular network languished at both AT&T and the FCC. Ring and Young had conceptualized a breakthrough, but neither existing technology nor the necessary innovative corporate or regulatory vision was available to develop that concept. The need for portable computing power to handle both signal sensing and handoff to the next cell would also have to await the development of the microprocessor. The FCC continued to focus its spectrum allocation efforts on big blocks of airwaves for broadcasters. All the while, AT&T continued to exploit its “natural monopoly.”
In 1958, AT&T petitioned the FCC for additional spectrum to be used for a mobile phone service. The agency sat on the request for over a decade.
As the federal agency responsible for the efficient and innovative use of the public’s airwaves sat on its hands, another federal agency stepped in with a wake-up call. In 1968 the Department of Transportation hired Bell Labs to develop a way to provide pay phones on the Metroliner, the new high-speed train between Washington and New York. Bell Labs recommended dividing the 225 miles between Union Station in Washington and Penn Station in New York into nine cells. Just as originally proposed by Ring and Young, each cell had its own unique frequency. When the train reached the edge of one cell, it tripped a sensor on the track that signaled a computer in Philadelphia, which handed off the call to the next cell. The Metroliner pay-phone service—the first cellular system—became operational in January 1969.
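A toy model of that handoff logic in Python (the even cell boundaries and five-mile sampling are invented for illustration):

    # Nine cells cover the 225-mile route, each on its own frequency; a
    # call is handed to the next cell as the train crosses a boundary.
    ROUTE_MILES, CELLS = 225, 9
    CELL_LENGTH = ROUTE_MILES // CELLS                  # 25 miles per cell

    def serving_cell(mile: float) -> int:
        return min(int(mile // CELL_LENGTH), CELLS - 1)

    current = serving_cell(0)
    for mile in range(0, ROUTE_MILES + 1, 5):           # the train rolls on
        cell = serving_cell(mile)
        if cell != current:                             # trackside sensor fires
            print(f"mile {mile}: handoff from cell {current} to cell {cell}")
            current = cell                              # call shifts frequency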