  While Burroughs and others turned out mechanical calculators powered by cranks and levers, a former employee of the U.S. Census Bureau, Herman Hollerith, was marrying electricity to mechanics. The impetus was the magnitude of the growing nation’s constitutionally mandated decennial census. The roughly 50 million returns from the census of 1880 took 1,500 clerks seven years to tally. There was fear the 1890 census wouldn’t be completed before the next one was mandated to begin.17

  Hollerith’s solution, a punch-card-driven tabulator, was inspired by a Frenchman and a train trip. To protect against stolen tickets, train conductors of the time would punch each ticket to indicate the passenger’s physical characteristics. “I was travelling in the West and I had a ticket with what I think was called a punch photograph,” Hollerith recalled. By the strategic placement of punch holes on each passenger’s ticket, “the conductor … punched out a description of the individual, as light hair, dark eyes, large nose, etc.” Hollerith’s aha moment was to record the census information on punch cards in the same manner.

  Punch cards themselves were not a new concept. In 1801, the French weaver and merchant Joseph Marie Jacquard had invented an automatic weaving loom controlled by punch cards. As the punch cards moved through Jacquard’s reader, wooden pegs fell into the holes to create the loom’s patterns.18 It revolutionized the textile business (and automated many people out of jobs). Babbage had envisioned punch cards as the input mechanism for the analytical engine.

  Hollerith took Jacquard’s process, applied it to numerical calculations, and electrified it. As the punch cards ran through Hollerith’s machine, a collection of electrified pins would slide across the card; when a pin dropped through the punched hole, it fell into a vial of mercury that completed a circuit and sent a signal to the counter.

  Six weeks after beginning the tabulation of the 1890 census, the Hollerith machines reported the population of the United States was 62,622,250. In addition to speed, the Hollerith machines offered the electromechanical ability to sort the information with unprecedented granularity. Pose any question, such as the number of farmers, or widows, or multigeneration families in a state or county, pull the appropriate cards, line up the pins, and the numbers came to life.

  Hollerith started a company to exploit his machine. The Tabulating Machine Company ultimately became International Business Machines, IBM.

  From Calculating to Computing

  While Hollerith’s speedy tabulations were impressive, they weren’t computing. To move in that direction—and to move back to where Charles Babbage was a century earlier—we need to meet two men on opposite sides of history’s greatest conflagration.

  Alan Turing was not the kind of person with whom you’d like to grab a beer. Rude, gruff, disheveled, and brilliant, he had few friends and even fewer intellectual equals.19 In a 1936 paper, “On Computable Numbers,” the twenty-four-year-old Englishman proposed that one of mathematical logic’s great conundrums could be settled by imagining a machine capable of carrying out any algorithm. At a time when computation was done with paper and pencil, Turing proposed something his contemporaries had never envisioned: a machine that automated algorithms.

  Amazingly, Alan Turing had never heard of Charles Babbage, yet both conceptualized a computing machine—100 years apart. Babbage was obsessed with the mechanics of constructing such a machine. Turing couldn’t have cared less about building the machine; his was a purely intellectual exercise. It was an intellectual breakthrough of such magnitude, however, that the concept of an automated algorithm computer came to be dubbed a “Turing machine.”20

  On the other side of the English Channel, the rudiments of a Turing machine were coming together on a Berlin apartment floor. At the same time that Turing was intellectualizing in 1936, a twenty-six-year-old aircraft engineer named Konrad Zuse began building a Babbage-like universal calculator in the living room of his parents’ apartment. Like Turing, Zuse had never heard of Charles Babbage.

  The Zuse machine, like Babbage’s, had a central processing unit where the computations were done, a control unit, memory, and a punch card/punch tape reader for inputting instructions. Unlike Babbage, however, Zuse substituted a base 2 binary numbering system for the traditional base 10 numbering. Zuse then used Boolean algebra (a form of algebra in which every value is one of the two binary symbols, 0 or 1).21
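
  By way of illustration (a simple example of the arithmetic involved, not Zuse’s own notation): in base 10 the number 13 means 1 ten plus 3 ones, while in base 2 the same quantity is written 1101, because 13 = 1×2³ + 1×2² + 0×2¹ + 1×2⁰. Boolean algebra then supplies the rules for combining those 0s and 1s (1 AND 1 gives 1, 1 AND 0 gives 0, 1 OR 0 gives 1), rules that map naturally onto switches that are either on or off.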

  The switch to a binary system simplified the mechanics. Whereas Babbage’s device required a great number of complex interacting gears and levers, Zuse’s binary calculator used simple slotted metal plates. A pin’s position in the slot, to the left or the right, determined whether it represented a zero or a one. In a fit of creativity, Konrad Zuse named his machine the Z1.

  The Z1 had been a breakthrough in conceptualizing the operation of a computer. Zuse’s next unit, the Z2, was a breakthrough in construction. In lieu of mechanical parts, the Z2 used secondhand telephone relays. Derived from Joseph Henry’s 1831 telegraph relay, the technology that allowed telegraph signals to be reamplified became the source of on-off signals for a binary computing device. The on-off functioning of the relays made the Z2 like a self-contained telegraph network, except that the on-off pulses that had once carried dots and dashes now carried the 0s and 1s of binary code.

  With World War II under way, Zuse’s engineering skills were put to work in the German aircraft industry. In December 1941, he completed the Z3 to speed the solution of computational problems related to the movement of aircraft wings under stress. Whereas the Z2 had been a prototype of a relay-based machine, the Z3 became the first operational general-purpose, program-controlled calculator.22

  Konrad Zuse had, in effect, built Turing’s machine. All three Zuse machines were ultimately destroyed by Allied bombing raids.23 Wartime secrecy, coupled with the Allied air supremacy that destroyed Zuse’s work, meant Konrad Zuse’s innovations had no effect on the path to electronic computing. As with Babbage, the world moved ahead ignorant of Konrad Zuse’s discoveries.

  And so we return to the cornfields of central Iowa. The trip to the Illinois roadhouse had taken place in December 1937; by August 1940 John Atanasoff’s bourbon-based breakthrough was taking form in the basement of the Iowa State physics building. In December of that year Atanasoff traveled to Philadelphia for the annual meeting of the American Association for the Advancement of Science. There he met John Mauchly, the sole member of the physics department at nearby Ursinus College. Mauchly also had an interest in automating algorithms; he was developing an analog machine to trace the cycles of the weather. Atanasoff shared with Mauchly the basic concepts of his electronic computer. He even invited Mauchly to visit him in Ames to see the machine for himself.

  A 1941 road trip halfway across the country was a major undertaking. Yet in June John Mauchly made it for the sole purpose of seeing John Atanasoff’s computer. The visitor from the East moved into the Atanasoffs’ home, saw the Atanasoff-Berry Computer (ABC) in operation, visited with its developer at length about its details, read the step-by-step thirty-five-page technical description of the breakthrough, and basically sucked out all the knowledge he could get from the Iowa professor. Then he returned to the East.

  Less than three months after he left Ames and the ABC, John Mauchly wrote a paper in which he presented, as his own, concepts similar to those Atanasoff had shared. Proving that science is also the domain of chutzpah, Mauchly continued to correspond with Atanasoff about the Iowan’s breakthrough while conveniently failing to mention how his own work had appropriated Atanasoff’s ideas.

  Then war came.

  John Atanasoff left Iowa State in September 1942 for wartime duty at the Naval Ordnance Laboratory in Washington, D.C. The ABC stayed behind in Ames.

  Back in Philadelphia, John Mauchly had joined the faculty at the Moore School of Engineering at the University of Pennsylvania. When the Moore School received a government contract for a machine to calculate ballistic tables for artillery, Mauchly became part of the team. The ideas purloined from John Atanasoff became the heart of the project.24 Mauchly’s machine, the ENIAC (for Electronic Numerical Integrator and Computer), would be heralded as the first electronic computer. He would leverage that claim to fame and fortune.25

  ENIAC was a thirty-ton monster comprising almost 18,000 vacuum tubes and miles of wiring. It accomplished the previously unimaginable. The calculation for a ballistic trajectory that took humans twenty hours to solve took ENIAC thirty seconds!26 In fairness, John Mauchly and his collaborator, J. Presper Eckert, must be acknowledged for the manner in which they expanded upon Atanasoff’s concepts and took them to scale. But the fact remains: they were Atanasoff’s concepts.

  While working on ENIAC, Mauchly and Eckert envisioned a new innovation. The ENIAC had been programmed by means of a plug board in which cables running between different inputs provided the necessary instructions. The next Moore School computer replaced the plug board with instructions stored in memory.27 The Electronic Discrete Variable Automatic Computer (EDVAC) also finally broke with decimal mathematics in favor of binary numbers.28

  When the war ended, Mauchly and Eckert left the Moore School in 1946 to start their own company. Like Hollerith, they saw opportunity in the calculations necessary for the national census. Their new company delivered the UNIVAC (UNIVersal Automatic Computer) to the Census Bureau in 1951. It was the first commercial electronic computer. Interestingly, however, while the UNIVAC incorporated the innovation of storing commands in memory as the EDVAC did, it did not use the EDVAC’s binary system but retained the base 10 decimal system.

  The cash-flow realities of the new company ultimately led Mauchly and Eckert to sell the firm in 1950 to office equipment manufacturer Remington Rand.29 Remington Rand merged with Sperry Gyroscope five years later to become Sperry Rand. In 1986 Sperry Rand merged with Burroughs Corporation (which had grown out of William S. Burroughs’s calculator company) to become Unisys.

  It was then that John Atanasoff got his due.

  Mauchly’s and Eckert’s resignations from the Moore School in 1946 were preceded by a dispute with the school over whether they or the university would own the patent rights to the ENIAC concepts. The subsequent patents the duo filed were the underpinning of their commercial activities and the bane of competitors, who were obliged to pay royalties on the intellectual property. Things looked solid for the Mauchly-Eckert patents and Sperry Rand when in 1963 a U.S. district court judge ruled in their favor against a challenge to the patents. The judge specifically found an absence of evidence “of prior public use” of the ENIAC concepts.30

  The royalty fees assessed by Sperry Rand on its competitors prompted one of them, Honeywell, Inc., to try again to invalidate the Mauchly-Eckert patent. This time the plaintiff had something new: the long-forgotten and never-heralded work of John Atanasoff, including how he had shared his discovery with John Mauchly. After nine and a half months of complex testimony, U.S. District Court Judge Earl R. Larson ruled that “Eckert and Mauchly did not themselves first invent the automatic electronic digital computer, but instead derived that subject matter from one Dr. John Vincent Atanasoff.”31

  However, John Vincent Atanasoff’s jinx on being recognized as the father of the electronic digital computer continued even as a court of law validated that fact. Judge Larson’s ruling—surely worthy of national headlines proclaiming the real Father of the Computer—was handed down on October 19, 1973. The following day President Richard Nixon, knee-deep in the Watergate scandal, ordered the firing of the Watergate special prosecutor; the attorney general resigned in protest rather than carry out the order. The media, fixated on the “Saturday Night Massacre,” overlooked the courtroom revelation about scientific intrigue at the dawn of the digital age.

  Computing on a Thumbnail

  Six years after the court decision, Mauchly’s partner, Eckert, appeared in a Sperry Rand advertisement in the Wall Street Journal. Standing in front of Sperry’s compact new 1100/60 computer, Eckert is juxtaposed with a photo of the huge ENIAC with the caption “Who would have thought that the father of Goliath would be introducing David?”32 It epitomized the progress in computing machinery. Like the steam engines that first got Charles Babbage thinking about mechanizing calculations, computing engines were becoming ever smaller and ever more powerful.

  The thirty-ton ENIAC would ultimately fit on a thumbnail. The first breakthrough on this path was the development of the transistor. It won Bell Labs scientist William Shockley, along with his colleagues John Bardeen and Walter Brattain, the Nobel Prize in Physics.33

  As we have seen, early computers used vacuum tubes as both an amplifier for electric signals and a switch. Shockley’s transistor performed the functions of the vacuum tube but at a fraction of the weight, size, and power consumption. It was a trifecta made possible by a silicon semiconductor sandwich.

  A semiconductor is a substance with an ability to carry an electric current that lies somewhere between that of a good conductor and an effective insulator.34 By adding impurities to a semiconductor such as silicon, Shockley could control its conductive characteristics. By sandwiching different types of silicon together, he discovered it was possible to make electricity flow in one direction across the semiconductor. The word “transistor” was derived from transferring a current across a resistor.

  Shockley left Bell Labs in 1955; moved to the Santa Clara Valley outside San Jose, California; and formed Shockley Semiconductor Laboratory. His Nobel laureate reputation attracted bright young minds to join him; Shockley then proceeded to alienate them with his autocratic management style. In 1957 eight of his engineers—Jay Last, Julius Blank, Eugene Kleiner, Robert Noyce, Gordon Moore, Jean Hoerni, Sheldon Roberts, and Victor Grinich—announced they were leaving en masse. Even though Shockley himself had set the precedent by leaving Bell Labs and taking his knowledge with him, he labeled them the “Traitorous Eight.”

  Soon, however, this group became the “Fairchild Eight.” With investment support from Fairchild Camera and Instrument Corporation, they created Fairchild Semiconductor. There were now two semiconductor firms among the Santa Clara Valley’s fruit farms. Silicon Valley was born. The eight built not only new technology but also a management model that featured stock options, little hierarchy, and a physically and functionally open work environment.35

  Adapting semiconductor technology to produce an entire electronic circuit in one solid piece, rather than as multiple components connected by soldered wires, became the Holy Grail of the research world. Early iterations by Texas Instruments used soldered wires so small that placing them onto a circuit required tweezers and a microscope. The concept was the same as that of a typical circuit board, but rather than wiring various components onto a board, all the components sat on one solid piece of semiconductor. It was called an integrated circuit.36 At this early stage, however, the fragile soldered connections limited the solution’s practicability.

  Two years after the founding of Fairchild Semiconductor, in January 1959, Robert Noyce invented the first practicable integrated circuit. By etching circuits onto a piece of silicon, Noyce created microchips that could move electric signals quickly and efficiently. Others on the team developed and built the processes and equipment necessary to produce Noyce’s development.37

  Throughout these technological breakthroughs, Noyce and Gordon Moore were unsuccessfully trying to convince their parent corporation to embrace a new management philosophy that would make stock options available to a broad group of employees. While successful in their technological innovation, they did not succeed in finding favor for their management innovation at Fairchild headquarters. Once again Noyce and Moore took what was between their ears and departed (other members of the Fairchild Eight had pulled the ripcord earlier). It was a gutsy move; the company in which they had invested a decade of their lives had seen its revenues grow from nothing to $130 million by 1967.

  Noyce and Moore’s new outfit opened for business in 1968. They named their new company Intel.

  The business plan for Intel was to produce integrated circuits that could be used as memory for computers. This was an ambitious goal, as at the time such technology was at least a hundred times more expensive than storing data on magnetic tapes.38 In 1970, however, Intel invented the dynamic random-access memory chip (DRAM) that reduced those storage costs.

  When Japanese calculator manufacturer Busicom approached Intel about fabricating a dozen different chips, each with a separate function, that could be strung together for use in a programmable desktop calculator, a young Intel engineer, thirty-one-year-old Marcian (Ted) Hoff, saw an opening for a new kind of integrated circuit. Instead of purpose-built logic chips, Hoff envisioned a programmable chip that could act like a conventional central processing unit. The client liked the idea, and the Intel 4004 was developed, a programmable computer on a single chip—a microprocessor.

  Such an idea did not harmonize with Intel’s business plan, however. The company produced memory chips, and Hoff’s idea was a far cry from those (profitable!) products. To make matters even more challenging, Intel did not own the microprocessor technology Hoff had invented. Because the work had been done under contract to Busicom, the Japanese firm owned it. Busicom, however, had fallen into difficult financial straits. In order to raise cash, it was willing to sell the technology back to Intel for $60,000.39

  Other than Busicom’s desk calculators, however, there was no established market for microprocessors. Would Intel risk $60,000 of its limited capital to buy back a technology with no current market and no place in its business plan? More important, could Intel grow two breakthrough technologies simultaneously, and how would the company’s pursuit of the DRAM market change if it redirected resources to try to build a microprocessor market? Except for Moore and Noyce, the senior managers were against the move into microprocessors.

  Gordon Moore and Robert Noyce, however, championed the microprocessor idea in a creative way. Since every microprocessor would also require DRAM, they argued that it was a strategy to sell more memory.40 The pitch prevailed—it was like investing in the steam locomotive as a way to sell coal! It was “literally betting the company,” an Intel executive would tell me years later.41