History of the Computer
“Those who cannot remember the past are condemned to repeat it”
George Santayana, 1905
“The past is a foreign country; they do things differently there.”
L. P. Hartley, The Go-Between, 1953
“History repeats itself, first as tragedy, then as farce.”
Karl Marx, Der 18te Brumaire des Louis Napoleon, 1852
“Everything that can be invented has been invented."
Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.
Students of computer architecture all too frequently see the microprocessor only in the light of the latest high-performance technology.
History is important because it teaches us how the world develops and enables us to understand the forces that control events. Today's computers are not the best possible machines designed by the brightest and best engineers and programmers. They're the products of a development path that has relied as much on whim and commercial considerations as on good engineering practice. In this chapter, we put the microprocessor in a historical context and discuss some of the issues related to its development.
My views on computer history have been strongly influenced by an article I read by Kaila Katz who wrote a criticism of the quality of the historical information found in the introductory chapters of typical computer science texts. Katz stated that the condensed histories provided by some authors gave a misleading account of the development of the computer. For example, Katz criticized the many technical and historical inaccuracies found in material written about Charles Babbage and Ada Byron King. While we can forgive Hollywood for romanticizing the past, textbooks should attempt to put events in the correct historical context.
Giving an account of the history of computers is a difficult task because there are so many points in history from where one can begin the discussion. It's tempting to introduce computing with the early electromechanical devices that emerged around the time of World War I. However, we really need to go back much further to find the origins of the computer. We could even go back to prehistoric times and describe the development of arithmetic and early astronomical instruments, or to ancient Greek times when a control system was first described. Instead, I have decided to begin with the mechanical calculator that was designed to speed up arithmetic calculations.
We then introduce some of the ideas that spurred the evolution of the microprocessor, plus the enabling technologies that were necessary for its development; for example, the watchmaker's art in the nineteenth century and the expansion of telecommunications in the late 1880s. Indeed, the introduction of the telegraph network in the nineteenth century was responsible for the development of components that could be used to construct computers, networks that could connect computers together, and theories that could help to design computers. The final part of the first section briefly describes early mechanical computers.
The next step is to look at early electronic mainframe computers. These physically large and often unreliable machines were the making of several major players in the computer industry such as IBM. We also introduce the minicomputer that was the link between the mainframe and the microprocessor. Minicomputers were developed in the 1960s for use by those who could not afford dedicated mainframes (e.g., university CS departments). Minicomputers are important because many of their architectural features were later incorporated in microprocessors.
We begin the history of the microprocessor itself by describing the Intel 4004, the first CPU on a chip, and then show how more powerful 8-bit microprocessors followed it.
The last part of this overview looks at the PC revolution that introduced a computer into so many homes and offices. We do not cover modern developments here.
Before the Microprocessor
It’s impossible to cover computer history in a few web pages or a short article—we could devote an entire book to each of the numerous mathematicians and engineers who played a role in the computer's development. In any case, the history of computing extends to prehistory and includes all those disciplines contributing to the body of knowledge that eventually led to what we would now call the computer.
Had the computer been invented in 1990, it might well have been called an information processor or a symbol manipulation machine. Why? Because the concept of information processing already existed – largely because of communications systems. However, the computer wasn’t invented in 1990, and it has a very long history. The very name computer describes the role it originally performed—carrying out tedious arithmetic operations called computations. Indeed, the term computer was once applied not to machines but to people who carried out calculations for a living. This is the subject of D. A. Grier’s book When Computers were Human (Princeton University Press, 2007).
Even politics played a role in the development of computing machinery. Derek de Solla Price writes that, prior to the reign of Queen Elizabeth I, brass was not manufactured in England and cannon had to be imported. After 1580, brass was made in England and brass sheet became available for the manufacture of the precision instruments required in navigation. Price also highlights how prophetic some of the inventions of the 1580s were. An instrument maker in Augsburg, Germany, devised a machine that recorded the details of a journey on paper tape. The movement of a carriage's wheels advanced a paper tape and, once every few turns, a compass needle was pressed onto the paper’s surface to record the direction of the carriage. By examining the paper tape, you could reconstruct the journey for the purpose of map making.
By the middle of the seventeenth century, several mechanical aids to calculation
had been devised. These were largely analog devices in contrast to the digital calculators
we discuss shortly. Analog calculators used moving rods, bars, or disks to perform
calculations. One engraved scale was moved against another and then the result of
the calculation was read. The precision of these calculators depended on how fine
the engravings on the scale were made and how well the scale was read. Up to the
1960s, engineers used a modern version of these analog devices called a slide rule.
Even today, some aircraft pilots (largely those flying for fun in light aircraft)
use a mechanical contraption with a rotating disk and a sliding scale to calculate
their true airspeed and heading from their indicated airspeed, track, wind speed
and direction (see the panel). However, since the advent of the pocket calculator, many pilots now use electronic devices to perform flight-related calculations.
Mathematics was invented for two reasons. The most obvious reason was to blight the lives of generations of high school students by forcing them to study geometry, trigonometry, and algebra. A lesser reason is that mathematics is a powerful tool enabling us to describe the world, and, more importantly, to make predictions about it. Long before the ancient Greek civilization flowered, humans had devised numbers, the abacus (a primitive calculating device), and algebra. The very word digital is derived from the Latin word digitus (finger) and calculate is derived from the Latin word calculus (pebble).
Activities from farming to building require calculations. The mathematics of measurement and geometry were developed to enable the construction of larger and more complex buildings. Extending the same mathematics allowed people to travel reliably from place to place without getting lost. Mathematics allowed people to predict eclipses and to measure time.
As society progressed, trading routes grew and people traveled further and further. Longer journeys required more reliable navigation techniques and provided an incentive to improve measurement technology. The great advantage of a round Earth is that you don’t fall off the edge after a long sea voyage. On the other hand, a round Earth forces you to develop spherical trigonometry to deal with navigation over distances greater than a few miles. You also have to develop the sciences of astronomy and optics to determine your location by observing the position of the sun, moon, stars, and planets. Incidentally, the ancient Greeks measured the diameter of the Earth and, by 150 AD, the Greek cartographer Ptolemy had produced a world atlas that placed the prime meridian through the Fortunate Islands (now called the Canaries, located off the west coast of Africa).
The development of navigation in the eighteenth century was probably one of the most important driving forces towards automated computation. It’s easy to tell how far north or south of the equator you are—you simply measure the height of the sun above the horizon at midday and then use the sun’s measured elevation (together with the date) to work out your latitude. Unfortunately, calculating your longitude relative to the prime meridian through Greenwich in England is very much more difficult. Longitude is determined by comparing your local time (obtained by observing the angle of the sun) with the time at Greenwich; for example, if you find that the local time is 8 am and your chronometer tells you that it’s 11 am in Greenwich, you must be three hours west of Greenwich. Since the Earth rotates once in 24 hours, 3 hours is 3/24 or 1/8 of a revolution; that is, you are 360°/8 = 45° west of Greenwich.
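The arithmetic just described is simple enough to sketch in a few lines. This is an illustration only; the function name is my own, and the constant 15 is just 360° divided by 24 hours.

```python
# Longitude from the difference between local solar time and Greenwich
# time: the Earth turns 360 degrees in 24 hours, i.e. 15 degrees/hour.

def longitude_from_times(local_hour, greenwich_hour):
    """Return degrees of longitude; negative values are west of Greenwich."""
    hours_difference = local_hour - greenwich_hour
    return hours_difference * (360.0 / 24.0)

# The example from the text: local time 8 am, Greenwich time 11 am.
print(longitude_from_times(8, 11))   # -45.0, i.e. 45 degrees west
```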
The rush to develop a chronometer in the eighteenth century that could keep time to an accuracy of a few seconds during a long voyage was as exciting as the space race of the twentieth century.
The mathematics of navigation uses trigonometry, which is concerned with the relationship between the sides and the angles of a triangle. In turn, trigonometry requires an accurate knowledge of the sine, cosine, and tangent of an angle. Not very long ago (prior to the 1970s), high school students obtained the sine of an angle in exactly the same way as they did hundreds of years ago—by looking it up in a book containing a large table of sines. In the 1980s, students simply punched the angle into a pocket calculator and hit the appropriate button to obtain the sine, cosine, square root, or any other common function. Today, the same students have an application on their cell phones or iPads that does the same thing.
Those who originally devised tables of sines and other mathematical functions (e.g., square roots and logarithms) had to do a lot of calculation by hand. If x is in radians (where 2π radians = 360°) and x < 1, the expression for sin(x) can be written as an infinite series of the form

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + …
In order to calculate a sine, you convert the angle from degrees to radians and then apply the above formula. Although the calculation of sin(x) requires the summation of an infinite number of terms, you can obtain a good approximation to sin(x) by adding just a handful of terms together, because xⁿ tends towards zero as n increases for x < 1.
Let’s test this formula. Suppose we wish to calculate the value of sin 15°. This angle corresponds to 15 × π/180 = 0.2617993878 radians.
Step 1: sin(x) = x = 0.2617993878
Step 2: sin(x) = x − x³/3! = 0.2588088133
Step 3: sin(x) = x − x³/3! + x⁵/5! = 0.2588190618
The actual value of sin 15° is 0.2588190451, which differs from the calculated value only at the eighth decimal position.
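A short program reproduces the steps above. This is a sketch (the function name is mine); it generates each term of the series from its predecessor, so it needs only the basic arithmetic operations that were available to the clerks who compiled the original tables.

```python
import math

def sin_series(x, terms):
    """Approximate sin(x) by summing the first `terms` terms of its
    Taylor series: x - x^3/3! + x^5/5! - ...

    Each term is derived from the previous one by multiplying by
    -x^2 / ((2n)(2n+1)), so only addition, multiplication, and
    division are required."""
    term = x
    total = x
    for n in range(1, terms):
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total

x = math.radians(15)        # 15 degrees ≈ 0.2617993878 radians
print(sin_series(x, 1))     # step 1: ≈ 0.2617993878
print(sin_series(x, 3))     # step 3: ≈ 0.2588190618
print(math.sin(x))          # library value: ≈ 0.2588190451
```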
When the tables of values for sin(x) were compiled many years ago, armies of clerks had to do all the arithmetic the hard way—by means of pencil and paper. As you can imagine, people looked for a better method of compiling these tables.
An important feature of the formula for sin(x) is that it involves nothing more than the repetition of fundamental arithmetic operations (addition, subtraction, multiplication, and division). The first term in the series is x itself. The second term is −x³/3!, which is obtained from the first term by multiplying it by −x² and dividing by 2 × 3; each subsequent term is derived from its predecessor in the same way.
The Era of Mechanical Computers
Let’s return to the history of the computer. Although the electronic computer belongs to the twentieth century, mechanical computing devices have existed for a very long time. The abacus was in use in the Middle East and Asia several thousand years ago. In about 1590, the Scottish nobleman John Napier invented logarithms and an aid to multiplication called Napier’s Bones. These so-called bones were numbered rods that allowed the partial products of a multiplication to be read off and then added together.

A few years later, William Oughtred invented the slide rule, a means of multiplying numbers that used scales engraved on two wooden (and later plastic) rules that were moved against each other. The number scales were logarithmic and converted multiplication into addition, because adding the logarithms of two numbers performs their multiplication. The slide rule became the standard means of carrying out engineering calculations and was in common use until the 1960s. After the mid-1970s, it was rapidly displaced by the electronic pocket calculator.

During the seventeenth century, major advances were made in watch-making, and the precision gears and mechanisms developed by watchmakers became available to the builders of calculating machines. In 1642, the French mathematician Blaise Pascal constructed a mechanical calculator, the Pascaline, that could add and subtract (Fig. 1).

Fig. 1 Pascal’s calculator: the Pascaline (David Monniaux)

In fact, Wilhelm Schickard, rather than Pascal, is now generally credited with the invention of the first mechanical calculator. His device, created in 1623, was more advanced than Pascal's because it could also perform partial multiplication. Schickard died in a plague and his invention didn’t receive the recognition it merited. Such near simultaneous developments have been a significant feature of the history of computer hardware.

The German mathematician Gottfried Wilhelm Leibnitz was familiar with Pascal’s work and built a mechanical calculator in 1694 that could perform addition, subtraction, multiplication, and division. Later versions of Leibnitz's calculator were used until electronic computers became available in the 1940s.

Within a few decades, mechanical computing devices advanced to the stage where they could perform addition, subtraction, multiplication, and division—all the operations required by armies of clerks to calculate the trigonometric functions we mentioned earlier.

If navigation generated a desire for mechanized computing, other developments provided important steps along the path to the computer. By about 1800, the Industrial Revolution in Europe was well under way. Weaving was one of the first industrial processes to be mechanized. A weaving loom passes a shuttle pulling a horizontal thread to and fro between vertical threads held in a frame. By changing the color of the thread pulled by the shuttle and selecting whether the shuttle passes in front of or behind the vertical threads, you can weave a particular pattern. Controlling the loom manually is tedious and time-consuming. In 1801, Joseph Marie Jacquard designed a loom that used punched cards to control the pattern woven into the cloth (Fig. 2).
The notion of a program appears elsewhere in the mechanical world. Consider the music box that plays a tune when you open it. A clockwork mechanism rotates a drum whose surface is embedded with spikes or pins. A row of thin metal strips, the teeth of a steel comb, are located along the side of the drum, but don’t quite touch the drum’s surface. As the drum rotates, a pin sticking out of the drum meets one of the strips and drags the strip along with it. Eventually, the pin rotates past the strip’s end and the strip falls back with a twang. By tuning each strip to a suitable musical note, a tune can be played as the drum rotates. The location of the pegs on the surface of the drum determines the sequence of notes played.
The computer historian Brian Randell points out that this pegged drum mechanical programmer has a long history. Heron of Alexandria described a pegged drum control mechanism about 100 AD. A rope is wound round a drum and then hooked around a peg on the drum's surface. Then the rope is wound in the opposite direction. This process can be repeated as many times as you like, with one peg for each reversal of direction. If you pull the rope, the drum will rotate. However, when a peg is encountered, the direction of rotation will change. The way in which the string is wound round the drum (i.e., its helical pitch) determines the drum’s speed of rotation. This mechanism allows you to predefine a sequence of operations; for example, clockwise/slow/long; counterclockwise/slow/short. Such technology could be used to control temple doors.
Although Heron's mechanism may seem a long way from the computer, it demonstrates that the intellectual notions of control and sequencing have existed for a very long time.
Fig. 2 The Jacquard loom (Edal Anton Lefterov)
Charles Babbage, a British mathematician born in 1791, made two significant advances in computing: his difference engine and his analytical engine. Like other mathematicians of his day, Babbage had to perform all calculations by hand, and sometimes he had to laboriously correct errors in published mathematical tables. Living in the age of steam, it was quite natural for Babbage to ask himself whether mechanical means could be applied to arithmetic calculations. Babbage’s difference engine took mechanical calculation one step further by performing a sequence of additions and subtractions in order to evaluate polynomials using the method of finite differences.
Babbage wrote papers about the concept of mechanical calculators, applied to the British Government for funding to implement them, and received what was probably the world’s first government grant for computer research. Unfortunately, Babbage never completed his difference engine. However, he and his engineer, Clement, constructed a working portion of the machine between 1828 and 1833 (Fig. 3).
Babbage’s difference engine project was cancelled in 1842 because of increasing costs.
He did design a simpler difference engine using 31-digit numbers to handle seventh-order differences. This machine was eventually constructed by the Science Museum in London in 1991, and it worked.
The difference engine mechanized the calculation of polynomial functions and automatically printed the result. The difference engine was a complex array of interconnected gears and linkages that performed addition and subtraction rather like Pascal’s mechanical adder. However, it was a calculator rather than a computer because it could carry out only a set of predetermined operations.
Fig. 3 A portion of Babbage’s difference engine (Geni)
Babbage’s difference engine employed a technique called the method of finite differences to calculate polynomial functions. Remember that trigonometric functions can be approximated by polynomials of the form a₀ + a₁x + a₂x² + … The difference engine can evaluate such expressions automatically.
Table 1 demonstrates the use of finite differences to create a table of squares without using multiplication. The first column contains the integers 1, 2, 3, ... The second column contains the squares of these integers (i.e., 1, 4, 9, ...). Column 3 contains the first difference between successive pairs of numbers in column 2; for example, the first value is 4 − 1 = 3 and the second value is 9 − 4 = 5. The final column contains the second difference between successive pairs of first differences; note that the second difference is always 2.

Table 1: The use of finite differences to calculate squares © Cengage Learning 2014

Number  Number squared  First difference  Second difference
1       1
2       4               3
3       9               5                 2
4       16              7                 2
5       25              9                 2
6       36              11                2
7       49              13                2
Suppose we want to calculate the value of 8² using finite differences. We simply extend the table by starting with the second difference and working forward to the result. Since the second difference is 2, the next first difference (after 7²) is 13 + 2 = 15. Therefore, the value of 8² is the value of 7² plus this first difference; that is, 49 + 15 = 64. We have generated 8² without using multiplication. This technique can be extended to evaluate many other mathematical functions.
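The difference-engine idea can be sketched in a few lines of code. This illustrative function (the name is mine) generates successive squares exactly as in Table 1, by repeatedly adding an updated first difference; no multiplication is performed.

```python
# Generate a table of squares using only addition, as in Table 1.
# For n^2 the second difference is the constant 2, so each new square
# is the previous square plus an updated first difference.

def squares_by_differences(count):
    squares = [1]            # 1 squared
    first_difference = 3     # 4 - 1, the first entry in column 3
    for _ in range(count - 1):
        squares.append(squares[-1] + first_difference)
        first_difference += 2    # the constant second difference
    return squares

print(squares_by_differences(8))   # [1, 4, 9, 16, 25, 36, 49, 64]
```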
Charles Babbage went on to design the analytical engine, which was to be capable of performing any mathematical operation automatically. This truly remarkable and entirely mechanical device was nothing less than a general-purpose programmable computer.
Babbage envisaged that his analytical engine would be controlled by punched cards similar to those used to control the operation of the Jacquard loom. Two types of punched card were required. Operation cards specified the sequence of operations to be carried out by the analytical engine and variable cards specified the locations in store of inputs and outputs.
One of Babbage's contributions to computing was the realization that it is better to construct one arithmetic unit and share it between other parts of the difference engine than to construct multiple arithmetic units. The part of the analytical engine that performed the calculations was the mill (now called the arithmetic logic unit [ALU]) and the part that held information was called the store. In the 1970s, mainframe computers made by ICL recorded computer time in mills in honor of Babbage.
A key, if not the key, element of the computer is its ability to make a decision based on the outcome of a previous operation; for example, the action IF x > 4, THEN y = 3 represents such a conditional action because the value 3 is assigned to y only if x is greater than 4. Babbage described the conditional operation that was to be implemented by testing the sign of a number and then performing one of two operations depending on the sign.
Because Babbage’s analytical engine used separate stores (i.e., punched cards) for data and instructions, it lacked one of the principal features of modern computers—the ability of a program to operate on its own code. However, Babbage’s analytical engine incorporated more computer-like features than any machine that had gone before it.
One of Babbage’s collaborators was Ada Byron King, a mathematician who became interested in the analytical engine when she translated a paper on it from French into English. When Babbage saw her translation, he asked her to expand it. She added about 40 pages of notes about the machine and provided examples of how the proposed analytical engine could be used to solve mathematical problems.
Ada worked closely with Babbage, and it’s been reported that she even suggested the
use of the binary system rather than the decimal system to store data. She noticed
that certain groups of operations are carried out over and over again during the
course of a calculation and proposed that a conditional instruction be used to force
the analytical engine to perform the same sequence of operations many times. This
action is the same as the repeat or loop function found in most of today’s high-level languages.
Ada devised algorithms to perform the calculation of Bernoulli numbers, which makes
her one of the founders of numerical computation that combines mathematics and computing.
Some regard Ada as the world’s first computer programmer. She constructed an algorithm
a century before programming became a recognized discipline and long before any real
computers were constructed. In the 1970s, the US Department of Defense commissioned a language for real-time computing and named it Ada in her honor.
Mechanical computing devices continued to be used in compiling mathematical tables
and performing the arithmetic operations used by everyone from engineers to accountants
until about the 1960s. The practical high-speed computer, however, had to await the development of electronic technology.
Before we introduce the first electromechanical computers, we describe a very important step in the history of the computer, the growth of the technology that made electrical and electronic computers possible.
Enabling Technology—The Telegraph
In my view, one of the most important stages in the development of the computer was
the invention of the telegraph. In the early nineteenth century, King Maximilian
had seen how the French visual semaphore system had helped Napoleon's military campaigns,
and in 1809 he asked the Bavarian Academy of Sciences to devise a scheme for high-speed communication over long distances.
In 1819, H. C. Oersted made one of the greatest discoveries of all time when he found that an electric current creates a magnetic field round a conductor. Passing a current through a coil made it possible to create a magnetic field at will, and that field could deflect a compass needle. Since the power source and on/off switch (or key) could be miles away from the compass needle, this invention became the telegraph.
Fig. 4 The Wheatstone and Cooke telegraph (SSPL via Getty Images)
The growth of the railway network in the early nineteenth century was one of the
driving forces behind the development of the telegraph because stations down the
line had to be warned that a train was arriving. By 1840, a 40-mile stretch of railway line had been equipped with an electric telegraph.
The First Long-Distance Links
We now take wires and cables for granted. In the early nineteenth century, plastic
hadn't been invented and the only materials available for insulation and waterproofing
were substances such as asphaltum, a type of pitch. In 1843, a form of rubber called
gutta percha was discovered and used to insulate the signal-carrying conductors of submarine cables.
Submarine cable telegraphy began with a cable crossing the English Channel to France in 1850. The cable failed after only a few messages had been exchanged, but a more successful attempt was made the following year.
Transatlantic cable laying from Ireland began in 1857, but was abandoned when the
strain of the cable descending to the ocean bottom caused it to snap under its own
weight. The Atlantic Telegraph Company tried again in 1858. Again, the cable broke
after only three miles, but the two cable-laying ships eventually succeeded in completing the link in August 1858.
It soon became clear that this cable wasn't going to be a commercial success. The receiver used the magnetic field from current in the cable to deflect a magnetized needle. The voltage at the transmitting end used to drive a current down the cable was approximately 600 V. However, after crossing the Atlantic the signal was too weak to be reliably detected, so they raised the voltage to about 2,000 V to drive more current along the cable and improve the detection process. Unfortunately, such a high voltage burned through the primitive insulation, shorted the cable, and destroyed the first transatlantic telegraph link after about 700 messages had been transmitted in three months.
Progress continued. In England, the Telegraph Construction and Maintenance Company designed a new cable. This was 2300 miles long, weighed 9,000 tons, and was three times the diameter of the failed 1858 cable. Laying this cable required the largest ship in the world, the Great Eastern (Fig. 5). After a failed attempt in 1865, a transatlantic link was finally established in 1866. In those days, it cost $100 in gold to transmit 20 words (including the address) across the first transatlantic cable at a time when a laborer earned $20 per month.
Fig. 5 The Great Eastern: largest ship in the world (State Library of Victoria)
Telegraph Distortion and the Theory of Transmission Lines
It was soon realized that messages couldn’t be transmitted rapidly. Signals suffered from a phenomenon called telegraph distortion that limited the speed at which they could be transmitted. As the length of cables increased, it soon became apparent that a sharply rising pulse at the transmitter end of a cable was received at the far end as a highly distorted pulse with long rise and fall times. It was so bad that the 1866 transatlantic telegraph cable could transmit only eight words per minute. Figure 6 illustrates the effect of telegraph distortion.
Fig. 6 Effect of telegraph distortion
Limitations imposed by telegraph distortion worried the sponsors of the transatlantic cable project and the problem was eventually handed to William Thomson at the University of Glasgow. Thomson, who later became Lord Kelvin, was one of the nineteenth century's greatest scientists. He published more than 600 papers, developed the second law of thermodynamics, and created the absolute temperature scale. The unit of temperature with absolute zero at 0° is called the Kelvin in his honor. Thomson also worked on the dynamical theory of heat and carried out fundamental work in hydrodynamics.
Thomson analyzed this problem and was responsible for developing some of the fundamental theories of electricity and magnetism that govern the propagation of signals in circuits. In 1855, Thomson presented a paper to the Royal Society, analyzing the effect of pulse distortion. This paper became the cornerstone of what is now called transmission line theory. The cause of the problems investigated by Thomson lies in the physical properties of electrical conductors and insulators. At its simplest, the effect of a transmission line is to reduce the speed at which signals can change state. Thomson's theories enabled engineers to construct data links with much lower levels of distortion.
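Thomson’s full analysis treats the cable as distributed resistance and capacitance; as a rough illustration only, we can model the received signal as the step response of a simple RC low-pass filter. The component values below are invented for the sketch and are not historical.

```python
import math

# Toy model of telegraph distortion: a sharp step sent into an RC
# low-pass filter arrives as a slow exponential rise,
#   v(t) = V * (1 - exp(-t / RC)).

R = 5.0e3      # ohms (assumed value)
C = 2.0e-6     # farads (assumed value)
V = 1.0        # normalized step amplitude
tau = R * C    # time constant of the model

def received(t):
    """Received voltage t seconds after a unit step is transmitted."""
    return V * (1.0 - math.exp(-t / tau))

# Time for the received signal to reach 90% of its final value:
t90 = -tau * math.log(0.1)
print(round(t90 / tau, 3))   # about 2.303 time constants
```

The longer the rise time relative to the pulse spacing, the slower the sending rate must be, which is why the 1866 cable managed only eight words per minute.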
Thomson contributed to computing by providing the theory that describes the flow of pulses in circuits, enabling the development of the telegraph and telephone networks. In turn, the switching circuits used to route messages through networks were used to construct the first electromechanical computers.
Developments in Communications Networks
The next step along the path to the computer was the development of the telephone. Alexander Graham Bell started work in 1872 on a method of transmitting several signals simultaneously over a single wire, called the harmonic telegraph. Until recently, Bell was credited with developing this work further and inventing the telephone in 1876. However, Elisha Gray, an American inventor, is thought by some to have a better claim to the telephone than Bell. On February 14, 1876, Bell filed his patent and Gray filed his caveat (i.e., a claim to an invention that is soon to be patented) two hours later. In hindsight, it appears that Gray’s invention was actually more practical than Bell’s. After years of patent litigation, Bell’s claim won. I think it can be argued that Gray has as much right to the title of inventor of the telephone as Bell, not least because it has been reported that Bell was aware of Gray’s work and that Bell’s patent application incorporates some of Gray’s ideas.
It is believed by some that Antonio Meucci, an immigrant from Florence, invented the telephone in 1849 and filed a notice of intention to take out a patent (i.e., a caveat) in 1871. Unfortunately, Meucci was unable to pay the patent fees and abandoned the project. A lawsuit against Bell later revealed that he had worked in a laboratory at the Western Union Telegraph Company where there was a model of Meucci's invention. In 2002, the US House of Representatives righted the wrong and passed a resolution recognizing Meucci as the inventor of the telephone.
In fact, many inventors can claim the telephone as their own. Given the technology of the time, the need for instantaneous long-distance communication, and the relative simplicity of the telephone, it was an inevitable invention. Such near simultaneous inventions are a characteristic of the history of the computer.
Although the first telegraph systems operated from point to point, the introduction of the telephone led to the development of switching centers. The first generation of switching centers employed telephone operators who manually plugged a subscriber's line into a line connected to the next switching center in the link. By the end of the nineteenth century, the infrastructure of computer networks was already in place.
The inventions of the telegraph and telephone gave us signals that we could create at one place and receive at another, and telegraph wires and undersea cables gave us the means of carrying those signals over long distances.
At the end of the nineteenth century, telephone lines were switched from center to center by human operators in manual telephone exchanges. In 1897, an undertaker called Strowger was annoyed to find that he was not getting the trade he expected. Strowger believed that the local telephone operator was connecting prospective clients to his competitor. So, he cut out the human factor by inventing the automatic telephone exchange that used electromechanical devices to route calls between exchanges. This invention required the rotary dial that has all but disappeared today.
When you dial a number using a rotary dial, a series of pulses is sent down the line to an electromechanical device in the next exchange called a uniselector (Fig. 7). This is, in fact, a form of two-dimensional rotary switch with one input and ten outputs labeled 0 to 9. The arm can rotate in both the horizontal and the vertical direction. This allows one of the ten levels to be selected by dialing, and one of several lines at that level to be chosen, because there may be several paths to the next switching stage in the next exchange. If you dial, for example, "5", the five pulses move the switch five steps to connect you to line number five, which routes your call to the next switching center. Consequently, when you phoned someone using Strowger’s technology, the number you dialed determined the route your call took through the system.
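The pulse-dialing scheme described above can be sketched in a few lines. Each digit is sent as a train of that many line-break pulses (ten pulses for "0", the standard rotary-dial encoding); the function below is illustrative only.

```python
# Decode rotary-dial pulse trains into the dialed digits.  Each digit
# steps a selector: dialing "5" moves the switch five steps, choosing
# outlet 5 toward the next exchange.

def decode_pulse_trains(trains):
    """Convert a list of pulse counts (1-10) into the dialed digits."""
    return "".join("0" if n == 10 else str(n) for n in trains)

print(decode_pulse_trains([5, 10, 3]))   # "503": digit 5 selects the
                                         # route at the first selector,
                                         # then 0, then 3
```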
Fig. 7 The uniselector (Brock Craft / Thatbrock)
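The digit-by-digit routing performed by Strowger-style selectors can be sketched in a few lines of code. This is an illustrative model only (the function and its output format are invented, not a description of any real exchange): each dialed digit sends a train of pulses that steps a selector to one of ten outlets, and the chosen outlet is the trunk to the next switching stage.

```python
# Toy model of step-by-step (Strowger) call routing.
# Each dialed digit steps the selector at one switching stage;
# the dialed number therefore fixes the route, digit by digit.

def route_call(number: str) -> list[str]:
    """Return the sequence of selector settings for a dialed number."""
    path = []
    for stage, digit in enumerate(number, start=1):
        pulses = 10 if digit == "0" else int(digit)  # "0" sends ten pulses
        path.append(f"stage {stage}: {pulses} pulses -> outlet {digit}")
    return path

for step in route_call("503"):
    print(step)
```

Because the route is chosen digit by digit, a busy trunk at any stage blocks the call, which is one reason later networks separated routing logic from the dialed number.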
Let There be Light
The telephone and telegraph networks and their switching systems provided a good basis for the next step: the invention of the computer. However, we need something more if we are to create fast computers. We need electronic devices that amplify signals. Next, we describe the invention of the vacuum tube in 1906, followed by the invention of the transistor in 1947.
By the time the telegraph was well established, radio was being developed. James Clerk Maxwell predicted radio waves in 1864 following his study of light and electromagnetic waves, Heinrich Hertz demonstrated their existence in 1887, and Marconi used them to span the Atlantic in 1901.
In 1906, Lee de Forest extended the diode by placing a third electrode, a wire mesh,
between the cathode and anode. This was the triode vacuum tube, which de Forest called the Audion. A small voltage applied to the grid controls a much larger current flowing between the cathode and anode; that is, the triode can amplify weak signals.
Fig. 8 The Lee de Forest Audion (SSPL via Getty Images)
Without a vacuum tube (or transistor) to amplify weak signals, modern electronics would have been impossible. In general, the term electronics is used to refer to circuits with amplifying or active devices such as transistors. The first primitive computers using electromechanical devices did not use vacuum tubes and, therefore, these computers were not electronic computers.
The telegraph, telephone, and vacuum tube were all steps on the path to the computer and the computer network. As each of these practical steps was taken, there was a corresponding development in the accompanying theory (in the case of radio, the theory came before the discovery).
Typewriters, Punched Cards, and Tabulators
Another important part of computer history is the humble keyboard, which is still
the prime input device of most computers. As early as 1711, Henry Mill, an Englishman,
described a mechanical means of printing text on paper, one character at a time.
In 1829, an American inventor, William Burt was granted the first US patent for a
typewriter, although his machine was not practical. It wasn’t until 1867 that three
Americans, Sholes, Glidden, and Soule invented their Type-Writer, the forerunner of the modern typewriter.
Another enabling technology that played a key role in the development of the computer was the tabulating machine, a development of the mechanical calculator that processes data on punched cards. One of the largest data processing operations carried out in the USA during the nineteenth century was the US census. A census involves taking the original data, sorting and collating it, and tabulating the results—all classic data preparation operations. Because of the sheer volume of data involved, people attempted to automate data processing. In 1872, Colonel Charles W. Seaton invented a device to mechanize some of the operations involved in processing census data.
In 1879, Herman Hollerith became involved in the evaluation of the 1880 US Census data. He devised an electric tabulating system that could process data stored on cards punched by clerks from the raw census data. Hollerith's electric tabulating machine could read cards, tabulate the information on the cards (i.e., count and record), and then sort the cards. These tabulators employed a new form of electromechanical counting mechanism. Moreover, punched cards reduced human reading errors and provided an effectively unlimited storage capacity. By copying cards, processing could even be carried out in parallel. During the 1890s and 1900s, Hollerith made a whole series of inventions and improvements, all geared towards automatic data collection, processing, and printing.
Hollerith's tabulating company eventually became one of the three companies that merged in 1911 to form the Computing-Tabulating-Recording Company, which was renamed International Business Machines (IBM) in 1924.
Three threads converged to make the computer possible: Babbage’s calculating machines
that perform arithmetic (indeed, even scientific) calculations, communications technology
that laid the foundations for electronic systems and even networking, and the tabulator.
The tabulator provided a means of controlling computers, inputting and outputting data, and storing information until low-cost storage technology became available.
The Analog Computer
We now take a short detour and introduce the largely forgotten analog computer that
spurred the development of its digital sibling. When people use the term computer
today, they are self-evidently referring to the digital computer.
Although popular mythology regards the computer industry as having its origins in World War II, engineers were thinking about computing machines within a few decades of the first practical applications of electricity. As early as 1872, the Society of Telegraph Engineers held their Fourth Ordinary General Meeting in London to discuss electrical calculations. One strand of computing led to the development of analog computers that simulated systems either electrically or mechanically.
Analog computers are mechanical or electronic systems that perform computations by simulating the system they are being used to analyze or model. Probably the oldest analog computer is the hourglass. The grains of sand in an hourglass represent time; as time passes, the sand flows from one half of the glass into the other half through a small constriction. An hourglass is programmed by selecting the grade and quantity of the sand and the size of the constriction. The clock is another analog computer, where the motion of the hands simulates the passage of time. Similarly, a mercury thermometer is an analog computer that represents temperature by the length of a column of mercury.
An electronic analog computer represents a variable (e.g., time or distance) by an electrical quantity and then models the system being analyzed. For example, in order to analyze the trajectory of a shell fired from a field gun, you would construct a circuit using electronic components that mimic the effect of gravity and air resistance, etc. The analog computer might be triggered by applying a voltage step, and then the height and distance of the shell would be given by two voltages that change with time. The output of such an analog computer might be a trace on a CRT screen. Analog computers lack the accuracy of digital computers because their precision is determined by the nature of the components and the ability to measure voltages and currents. Analog computers are suited to the solution of scientific and engineering problems such as the calculation of the stress on a beam in a bridge, rather than, for example, financial or database problems.
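The shell-trajectory problem described above can also be solved digitally, by stepping through the same equations of motion that an analog computer would wire up with integrators. The sketch below assumes a simple quadratic drag model; the constants are illustrative, not data for any real gun.

```python
# Digital rendering of the shell-trajectory problem: an analog computer
# would integrate these equations continuously with electronic integrators;
# here we integrate them in discrete time steps (Euler's method).

g = 9.81      # gravity, m/s^2
k = 0.00005   # drag per unit mass (assumed, illustrative)
dt = 0.01     # time step, s

x, y = 0.0, 0.0
vx, vy = 300.0, 300.0   # muzzle velocity components, m/s (assumed)

while y >= 0.0:                      # until the shell returns to the ground
    speed = (vx * vx + vy * vy) ** 0.5
    ax = -k * speed * vx             # air resistance opposes motion
    ay = -g - k * speed * vy         # gravity plus drag
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"range ≈ {x:.0f} m")
```

An analog computer would present the whole trajectory continuously as changing voltages; the digital version produces it as a sequence of discrete steps whose precision depends on the step size rather than on component tolerances.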
Vannevar Bush is regarded as the father of the analog computer, although in 1876,
the British mathematician Lord Kelvin devised a mechanical analog computer that could
be used to predict the behavior of tides. Bush developed his electromechanical differential
analyzer at MIT in the early 1930s. The differential analyzer was based on the use
of interconnected mechanical integrators, torque amplifiers, drive belts, shafts,
and gears. This 100-ton machine was used to solve differential equations.
Fig. 9 Bush's differential analyzer 1931 (public domain)
Fig. 9 Harmonic analyzer disc and sphere (Andy Dingley)
The key to Bush's machine was his disk-and-wheel integrator: the position of a small wheel resting on a rotating disk determines how fast the wheel turns, so the wheel's total rotation accumulates, that is, integrates, the input.
Fig. 10 Disk-and-wheel integrator
In 1945, Bush wrote an essay in Atlantic Monthly proposing an information system he called Memex that would act as "…a personalized storehouse for books, records, correspondence, receipts…and employ a sophisticated indexing system to retrieve facts on demand." Such a system is not that far removed from today's World Wide Web.
Later analog computers used electronic integrators to simulate complex dynamic systems well into the 1970s. In many ways, the electric organ was a very sophisticated analog computer that used analog techniques to model the processes used by real musical instruments to create sound. Modern electric organs now employ digital techniques.
One advantage of the analog computer was that you did not need to program it in the sense you use programming today. If you wanted to simulate the behavior of a car's suspension on a rough road, you would select components that model the car's suspension in terms of its mass, stiffness, and damping.
Theoretical Developments on the Way to the Computer
Although much of the development of early telegraph and radio systems was by trial and error, scientists rapidly developed the underlying theories. For example, we have already described how Lord Kelvin laid the foundations of transmission line theory when he investigated the performance of telegraph systems. Now we are going to mention some of the other intellectual concepts that contributed to the birth of the computer industry.
In 1854, George Boole described a means of manipulating logical variables that provided the foundation for binary digital systems and logic design. Claude Shannon at Bell Labs extended Boole’s work in the 1930s to the algebra of the switching circuits used to design computers. Boolean algebra is now used by engineers to design logic circuits.
In the late 1930s, Alan M. Turing, a British mathematician, provided some of the theoretical foundations of computer science. In particular, Turing developed the notion of a universal machine (now called a Turing machine) capable of executing any algorithm that can be described. The structure of a Turing machine bears no resemblance to any real machine before or after Turing’s time. However, it is a simple device with memory elements, a processor, and a means of making decisions. A Turing machine has an infinitely long tape containing symbols in cells (i.e., the memory). A read/write head reads symbol X in the cell currently under the head and uses a processor to write a new symbol, Y, in the cell previously occupied by X (note that Y may be the same as X). Having read a symbol, the processor can also move the tape one cell to the left or the right. The control unit of a Turing machine is a finite state machine; it is the unbounded tape that gives the machine its power.
Turing’s work led to the concept of computability and the idea that one computer can emulate another computer and, therefore, that a problem that can be solved by one computer can be solved by every other computer. A consequence of Turing’s work is that problems that can be shown to be unsolvable by a Turing machine cannot be solved by any future computer, no matter what advances in technology take place.
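The behavior described above is easy to make concrete. The following is a minimal Turing-style machine (the rule-table encoding is my own, not Turing's notation): a finite-state control repeatedly reads the symbol under the head, writes a symbol, moves left or right, and changes state. The rule table shown makes the machine increment a binary number.

```python
# Minimal Turing machine sketch. The tape is unbounded in both
# directions; unvisited cells read as the blank symbol "_".

from collections import defaultdict

def run(rules, tape, state, halt="HALT"):
    tape = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    while state != halt:
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]   # consult rule table
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

# Rules for binary increment: walk right to the end, then carry leftward.
rules = {
    ("SEEK", "0"): ("0", "R", "SEEK"),
    ("SEEK", "1"): ("1", "R", "SEEK"),
    ("SEEK", "_"): ("_", "L", "CARRY"),
    ("CARRY", "1"): ("0", "L", "CARRY"),   # 1 + carry = 0, carry on
    ("CARRY", "0"): ("1", "L", "HALT"),    # absorb the carry
    ("CARRY", "_"): ("1", "L", "HALT"),    # number was all 1s
}

print(run(rules, "1011", "SEEK"))   # 1011 + 1 = 1100
```

Swapping in a different rule table gives a different algorithm on the same machinery, which is the essence of Turing's universality argument.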
Alan Turing played a major role in World War II when he worked on code-breaking at Bletchley Park.
Suppose neither analog nor digital computers had been invented. Does that mean we would be without computational devices of some type or another? The answer is probably “no”. Other forms of computing may well have arisen based on, for example, biological, chemical, or even quantum computers.
As you would expect, the human brain has been studied for a long time by the medical profession, and many attempts have been made to determine how it is constructed and how it operates. In the 1940s, scientists began to study the fundamental element of the brain, the neuron. A neuron is a highly specialized cell that is connected to other neurons to form a complex network of about 10^11 neurons. In 1943, the US neurophysiologist Warren McCulloch worked with Walter Pitts to create a simple model of the neuron from analog electronic components.
Scientists have attempted to create computational devices based on networks of neurons. Such networks have properties of both analog and digital computers. Like analog computers, they model a system and like digital computers they can be programmed, although their programming is in terms of numerical parameters rather than a sequence of operations. Like analog computers, they cannot be used for conventional applications such as data processing and are more suited to tasks such as modeling the stock exchange.
The McCulloch and Pitts model of a neuron has n inputs and a single output. The ith input xi is weighted (i.e., multiplied) by a constant wi, so that the ith input becomes xi ∙ wi. The neuron sums all the weighted inputs to get x0 ∙ w0 + x1 ∙ w1 + … + xn-1 ∙ wn-1. If this sum exceeds a threshold, the neuron fires and its output is 1; otherwise, its output is 0.
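A minimal rendering of this model in code (the function name, weights, and threshold below are illustrative): the neuron forms the weighted sum of its inputs and fires only if the sum reaches a threshold. With weights (1, 1) and a threshold of 2, it computes the logical AND of its two inputs.

```python
# McCulloch-Pitts neuron: weighted sum followed by a threshold.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the neuron fires only when
# both inputs are 1 - i.e., it realizes the logical AND function.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron((a, b), (1, 1), 2))
```

Changing the weights and threshold changes the function the neuron computes, which is exactly the sense in which such networks are "programmed" by numerical parameters rather than by a sequence of operations.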
A single neuron is not very interesting. Figure 11 shows a simple neural network composed of nine neurons in three layers. This network is programmed by changing the weights of the inputs at each of the nine neurons.
Fig. 11 A neural network
A neural network is very different from a digital computer. It doesn’t execute a program instruction by instruction; its behavior is determined by the weights of the connections between its neurons.
Although the neural net isn’t normally regarded as part of computing history, we are making the point that research into calculating devices took place on a number of different fronts, and that there is more than one way of synthesizing computing machines.
The First Electromechanical Computers
The forerunners of today’s digital computers used electromechanical devices such as the relay, a switch whose contacts are opened and closed by an electromagnet.
In 1914, Torres y Quevedo, a Spanish scientist and engineer, wrote a paper describing how electromechanical technology, such as relays, could be used to implement Babbage's analytical engine. The computer historian Randell comments that Torres could have successfully produced Babbage’s analytical engine in the 1920s. Torres was one of the first to appreciate that a necessary element of the computer is conditional behavior; that is, its ability to select a future action on the basis of a past result. Randell quotes from a paper by Torres:
"Moreover, it is essential – being the chief objective of Automatics – that the automata be capable of discernment; that they can at each moment, take account of the information they receive, or even information they have received beforehand, in controlling the required operation. It is necessary that the automata imitate living beings in regulating their actions according to their inputs, and adapt their conduct to changing circumstances."
One of the first electromechanical computers was built in Germany by Konrad Zuse. His Z3, completed in 1941, used relays and read its program from punched tape.
In the 1940s, at the same time that Zuse was working on his computer in Germany, Howard Aiken at Harvard University constructed his Harvard Mark I computer with both financial and practical support from IBM. Aiken’s electromechanical computer, which he first envisaged in 1937, operated in a similar way to Babbage’s proposed analytical engine. The original name for the Mark I was the Automatic Sequence Controlled Calculator, which perhaps better describes its nature.
Aiken's programmable calculator was used by the US Navy until the end of World War
II. Curiously, Aiken's machine was constructed to compute mathematical and navigational
tables, the same goal as Babbage's machine. Just like Babbage, the Mark I used decimal
counter wheels to implement its main memory, which consisted of 72 words of 23 digits
plus a sign. Arithmetic operations used a fixed-point decimal format.
Because the Harvard Mark I treated data and instructions separately (as did several of the other early computers), the term Harvard Architecture is now applied to any computer that has separate paths (i.e., buses) for data and instructions. Aiken’s Harvard Mark I did not support conditional operations, and therefore his machine was not strictly a computer. However, his machine was later modified to permit multiple paper tape readers with a conditional transfer of control between the readers.
The First Mainframes
Relays have moving parts and can’t operate at very high speeds. Consequently, the electromechanical computers of Zuse and Aiken had no long-term future; the way forward lay with the electronic vacuum tube.
Although vacuum tubes were originally developed for radios and audio amplifiers, they were used in other applications. In the 1930s and 1940s, physicists required fast electronic circuits to count the pulses produced in their experiments, and such counting circuits were a step on the road to digital computing.
John V. Atanasoff is now credited with the partial construction of the first completely
electronic computer. Atanasoff worked with Clifford Berry at Iowa State College on
their computer from 1937 to 1942. Their machine, which used a rotating drum of capacitors as its memory and operated on 50-bit binary numbers, was designed to solve systems of linear equations.
Although John W. Mauchly’s ENIAC (to be described next) was originally granted a
patent, in 1973 US Federal Judge Earl R. Larson declared the ENIAC patent invalid
and named Atanasoff the inventor of the electronic digital computer. Atanasoff argued
that Mauchly had visited him in Ames in 1941 and that he had shown Mauchly his machine and had let Mauchly read his 35-page manuscript describing it in detail.
The first electronic general-purpose computer was the ENIAC (Electronic Numerical Integrator And Computer), designed by John W. Mauchly and J. Presper Eckert at the Moore School of Electrical Engineering at the University of Pennsylvania and completed in 1945.
The ENIAC used 17,480 vacuum tubes and weighed about 30 tons. ENIAC was a decimal machine capable of storing twenty 10-digit numbers.
IBM card readers and punches implemented input and output operations. Many of the fundamental elements of digital design (e.g., timing circuits, logic circuits, and control circuits) that are now so commonplace were first implemented with the construction of ENIAC. Because ENIAC had 20 independent adding circuits, all running in parallel, the ENIAC could also be called a parallel processor.
Goldstine's report on the ENIAC, published in 1946, refers to one of the features found in most first-generation computers.
ENIAC was programmed by means of a plug board that looked like an old pre-war telephone switchboard; setting up a new problem meant rewiring the machine.
Like the Harvard Mark I and Atanasoff’s computer, ENIAC did not support dynamic conditional operations (e.g., IF...THEN or REPEAT…UNTIL). An operation could be repeated a fixed number of times by hard wiring the loop counter to an appropriate value. Since the ability to make a decision depending on the value of a data element is vital to the operation of all computers, the ENIAC was not a computer in today's sense of the word. It was an electronic calculator (as was the ABC machine).
Eckert and Mauchly left the Moore School and established the first computer company,
the Electronic Control Corporation. They planned to build the Universal Automatic
Computer (UNIVAC), but their company was taken over by Remington-Rand, under whose name the UNIVAC I was delivered in 1951.
According to Grier, Mauchly was the first to introduce the term "to program" in his 1942 paper on electronic computing. However, Mauchly used "programming" to mean the setting up of a computer by means of plugs, switches, and wires, rather than in the modern sense. The modern use of the word program first appeared in 1946, when a series of lectures on digital computers was given at a summer class at the Moore School.
John von Neumann and EDVAC
As we’ve said, a lot of work was carried out on the design of electronic computers from the early 1940s onward by many engineers and mathematicians. John von Neumann, however, is the name most closely associated with the stored-program computer.
The first US computer to use the stored program concept was the Electronic Discrete Variable Automatic Computer (EDVAC). EDVAC was designed by some members of the same team that designed the ENIAC at the Moore School of Engineering at the University of Pennsylvania. The story of the EDVAC is rather complicated because there were three versions; the EDVAC that von Neumann planned, the version that the original team planned, and the EDVAC that was eventually constructed.
By 1944, Eckert and Mauchly appreciated that one of the limitations of the ENIAC was the way in which it was set up to solve a problem. Along with von Neumann, they realized that operating instructions should be stored in the same memory device as the data.
By the time John von Neumann became acquainted with the ENIAC project, it was too late for him to get involved with its design. He did participate in the EDVAC’s design and is credited with the "First Draft of a Report on the EDVAC" that compiled the results of various design meetings. Although only von Neumann's name appears on this document, other members of the Moore School contributed to the report. Indeed, Williams writes that this document annoyed some members of the design team so much that they left the Moore School and further development of the EDVAC was delayed.
The EDVAC described in von Neumann's unpublished report was a binary machine with a serial arithmetic unit, in contrast to the decimal ENIAC.
Although EDVAC is generally regarded as the first stored program computer, Randell states that this is not strictly true. EDVAC did indeed store data and instructions in the same memory, but data and instructions did not have a common format and were not interchangeable.
EDVAC also helped to promote the design of memory systems. The capacity of EDVAC's
mercury delay line memory was 1024 words of 44 bits. A mercury delay line operates
by converting serial data into pulses that are fed to an ultrasonic transducer at
one end of a column of mercury in a tube. These pulses travel down the tube in the
form of ultrasonic acoustic vibrations in the mercury. A microphone at the other
end of the delay line picks up the pulses and they are regenerated into digital form.
Finally, the pulses are fed back to the transducer and sent down the delay line again.
This type of memory stores data as pulses traveling through the mercury and is no
longer used. EDVAC's memory was organized as 128 individual delay lines, each holding eight 44-bit words.
Because data in a delay line memory is stored in this dynamic or sequential mode,
the time taken to access an instruction depends on where it is in the sequence of
pulses. The delay lines were 58 cm long with an end-to-end delay of a few hundred microseconds. Each EDVAC instruction used a four-address format.
This instruction specifies the action to be carried out, the address of the two source operands, the location where the result is to be deposited (i.e., destination), and the location of the next instruction to be executed.
The programmer would write code in such a way that the next instruction was just about to be available from the delay line. Such code optimization techniques are not far removed from those used on modern RISC processors to minimize the effects of data dependency and branches.
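The programmer's trick of placing the next instruction where it is just about to emerge from the delay line can be illustrated with a toy timing model (the figures below are invented, not EDVAC's actual parameters): fetching a word costs however many word-times remain until that word next passes the read head.

```python
# Toy model of delay-line timing. Words circulate past the read head,
# one per "minor cycle"; a word that has just gone by cannot be read
# again until it has travelled the whole line.

LINE_LENGTH = 8   # words per delay line (assumed, illustrative)

def wait_cycles(now: int, slot: int) -> int:
    """Cycles until the word in `slot` next passes the read head."""
    return ((slot - now) % LINE_LENGTH) or LINE_LENGTH

# An instruction finishing at cycle 3 wants its successor:
print(wait_cycles(3, 4))   # placed in the very next slot: ready in 1 cycle
print(wait_cycles(3, 3))   # placed in the slot just missed: full revolution
```

A careless layout can therefore make every fetch cost a full revolution of the line, which is why such "optimum programming" mattered so much on serial-memory machines.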
Another interesting aspect of EDVAC was the use of test routines to check its hardware; for example, the "Leap Frog Test" executed a routine that included all its instructions, and then moved itself one location on in memory before repeating the test. EDVAC also implemented a register that kept a copy of the last instruction executed and its address. The operator could access this register to aid debugging when the machine crashed. Yes, the Windows unrecoverable applications error has a very long history.
Sadly, EDVAC was not a great success in practical terms. Its construction was largely completed by April 1949, but it didn’t run its first applications program until October 1951. Moreover, EDVAC was not very reliable and suffered a lot of down time.
Because of its adoption of the stored program concept, EDVAC became a topic in the first lecture course given on computers. These lectures took place before the EDVAC was actually constructed.
An important early computer was the IAS constructed by von Neumann and his colleagues
at the Institute for Advanced Studies in Princeton. This project began in 1947 and
is significant because the IAS is remarkably similar to modern computers. The IAS
supported a 40-bit word that could hold either a single number or a pair of 20-bit instructions.
The IAS’s 20-bit instructions consisted of an 8-bit operation code and a 12-bit address; that is, an instruction has the one-address form OPERATION address,
where the operation takes place between the contents of the specified memory location
and the accumulator. This format is, of course, identical to that of first-generation 8-bit microprocessors.
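The one-address, accumulator-based format described here is simple enough to sketch as a tiny interpreter (the opcodes and program below are invented for illustration, not the IAS's actual instruction set): every operation combines the contents of one memory location with the accumulator.

```python
# Minimal one-address, accumulator-based machine. Each instruction
# names an operation and a single memory address; the accumulator is
# the implicit second operand and destination.

def execute(program, memory):
    acc = 0
    for op, addr in program:
        if op == "LOAD":
            acc = memory[addr]          # acc <- mem[addr]
        elif op == "ADD":
            acc += memory[addr]         # acc <- acc + mem[addr]
        elif op == "SUB":
            acc -= memory[addr]         # acc <- acc - mem[addr]
        elif op == "STORE":
            memory[addr] = acc          # mem[addr] <- acc
    return acc

mem = {0: 7, 1: 5, 2: 0}
execute([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
print(mem[2])   # 12
```

The economy of the format is clear: with the accumulator implicit, each instruction needs only one address field, which is what allowed two instructions to fit in a single word.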
In the late 1940s, the first computer intended for real-time applications, the Whirlwind, was developed at MIT; it later became the first machine to use ferrite-core memory.
Computers at Manchester and Cambridge Universities
In the 1940s, one of the most important centers of early computer development was
Manchester University in England where F. C. Williams used the cathode ray tube,
CRT, to store binary data (2048 bits in 1947). In 1948, T. Kilburn turned the Williams memory into a prototype computer called the Manchester Baby with 32 words of 32 bits.
This was a demonstration machine that tested the concept of the stored program computer
and the Williams CRT store. CRTs were also used to implement the accumulator and
logical control. Input was read directly from switches and output had to be read
from the Williams tube. The first program to run on the machine was designed to find the highest factor of an integer. Some regard the Manchester Baby as the world's first stored-program computer.
Maurice V. Wilkes at the University of Cambridge built another early computer, the Electronic Delay Storage Automatic Calculator, EDSAC. This became operational in May 1949, used paper tape input, binary numbers, and mercury delay line storage. Like the IAS, EDSAC allowed you to modify instructions in order to implement indexed addressing.
Scientists sometimes make startlingly incorrect prophecies; for example, the British Astronomer Royal said, "Space travel is bunk," just before the USSR launched its first sputnik. Similarly, not everyone immediately appreciated the potential of the computer. In 1951, Professor Hartree at Cambridge University said, "We have a computer in Cambridge; there is one in Manchester and at the NPL. I suppose that there ought to be one in Scotland, but that's about all.”
Cooperation between Ferranti and Manchester University led to the development of
the Ferranti Mercury, which pioneered the use of floating-point arithmetic hardware.
Ferranti went on to develop their Atlas computer with Manchester University (and
with relatively little funding). When it was completed in 1962, it was the world's
most powerful computer. It had one hundred twenty-eight index registers and pioneered techniques, such as virtual memory and paging, that are found in all of today's computers.
Another significant British computer was the Manchester University MU5 that became
operational in 1975. This was used as the basis for ICL's 2900 series of large scientific
computers. ICL was the UK's largest mainframe manufacturer in the 1970s and many
students of my generation used ICL 1900 series computers in their postgraduate research.
Someone once told me the origin of the designations ICL 1900 and ICL 2900. The 1900 series computers were so-called, the story went, because that was the year to which their technology belonged, whereas the 2900 series was named for the year in which it would finally be delivered.
The MU5 implemented several notable architectural features. In particular, it provided
support for block-structured high-level languages.
Table 2 gives the characteristics of some of the early computers and provides a cross-reference to early innovations. The dates provided are problematic because there can be a considerable gap between the concept of a computer, its initial construction and testing, and its first use.
Table 2: Characteristics of some of the early computers and a cross-reference to early innovations

Date | Inventors | Computer | Technology | Claims | Limitations
1941 | Zuse | Z3 | Electromechanical | |
1940 | Atanasoff, Berry | ABC | Electronic | First electronic computer |
1943 | Aiken | Harvard Mark I | Electromechanical | |
1947 | Mauchly, Eckert | ENIAC | Electronic | First general-purpose electronic computer | Set up by rewiring the hardware. Could not handle conditional execution
1948 | Williams, Kilburn, Newman | Manchester Mark I | Electronic | First stored program computer |
1949 | Mauchly, Eckert, von Neumann | EDVAC | Electronic | |
1949 | Wilkes | EDSAC | Electronic | First fully functional, stored-program computer |
1950 | Turing | ACE | Electronic | First programmable digital computer |
1951 | Mauchly and Eckert | UNIVAC I | Electronic | First commercial computer intended for business data-processing |
IBM’s Place in Computer History
No history of the computer can neglect the giant of the computer world, IBM, which has had such an impact on the computer industry.
IBM’s first contact with computers was via its relationship with Aiken at Harvard University. In 1948, Thomas J. Watson Senior at IBM gave the order to construct the Selective Sequence Electronic Calculator (SSEC). Although this was not a stored program computer, it was IBM's first step from the punched card tabulator to the computer.
IBM under T. J. Watson Senior didn’t wholeheartedly embrace the computer revolution
in its early days. However, T. J. Watson, Jr., was responsible for building the Type
701 Electronic Data Processing Machine (EDPM) in 1953 to convince his father that
computers were not a threat to IBM's conventional business. Only nineteen models
of this binary fixed-point machine were built.
Although IBM’s 700 series computers were incompatible with their punched card processing equipment, IBM created the 650 EDPM that was compatible with the 600 series calculators and used the same card processing equipment. This provided an upward compatibility path for existing IBM users, a process that was later to become commonplace in the computer industry.
IBM’s most important mainframe was the 32-bit System/360, announced in 1964: a family of machines sharing a common architecture but spanning a wide range of price and performance.
An interesting feature of the System/360 was its ability to run the operating system in a protected state, called the supervisor state. Applications programs running under the operating system ran in the user state. This feature was later adopted by Motorola’s 680x0 microprocessor series.
In 1968, the System/360 Model 85 became the first computer to implement cache memory, a concept described by Wilkes in 1965. Cache memory keeps a copy of frequently used data in very high-speed memory close to the processor, reducing the average time taken to access data.
IBM introduced one of the first computers to use integrated circuits, ICs, in the 1970s. This was the System/370 that could maintain backward compatibility by running System/360 programs.
In August 1981, IBM became the first major manufacturer to market a personal computer. IBM had been working on a PC since about 1979, when it was becoming obvious that IBM’s market would eventually come under threat from the newly emerging personal computer manufacturers, such as Apple and Commodore. Although IBM is widely known by the general public for its mainframes and personal computers, IBM also introduced the floppy disk, computerized supermarket checkouts, and early automatic teller machines.
We now take a slight deviation into microprogramming, a technique that had a major impact on the architecture and organization of computers in the 1960s and 1970s. Microprogramming was used to provide members of the System/360 series with a common architecture.
Early mainframes were often built on an ad hoc basis with little attempt to use formal design techniques or to create a regular structure for all processors. In 1951, Maurice V. Wilkes described the fundamental concepts of microprogramming that offered a systematic means of constructing all computers.
A microprogrammed computer decomposes a machine-level instruction into a sequence of primitive operations called microinstructions; each machine-level instruction is interpreted by executing the corresponding sequence of microinstructions held in a fast control store.
Figure 12 illustrates the concept of a microprogrammed control unit. The microprogram counter provides the address of the next microinstruction, which is stored in a microprogram memory and read into the microinstruction register.
Fig. 12 A microprogrammed control unit
Some of the bits in the microinstruction register control the CPU (i.e., to provide the signals that interpret the machine level macroinstruction).
The address mapper converts the often arbitrary bit pattern of the machine-level instruction's op-code into the address of the first microinstruction of the microprogram that interprets it.
A microprogrammed control unit can be used to implement any digital machine. In order to implement the control unit of any arbitrary architecture, all that you have to change is the microprogram itself and the structure of the CPU control field.
Without microprogramming, a computer has to be constructed from digital circuits that are designed to execute the machine code instructions. Such a design is specific to a particular architecture and organization. Microprogramming offers a general technique that can be applied to any computer; for example, you could construct a microprogrammed control unit that is capable of implementing, say, the Intel IA32 instruction set, the ARM processor’s instruction set, the PowerPC instruction set, or even the ENIAC’s instruction set.
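The scheme just described can be sketched in miniature (every name, signal string, and address below is invented for illustration): an address mapper translates an opcode into the start of its microprogram, and each microinstruction supplies control signals plus the address of its successor, with address 0 standing for "fetch the next machine instruction".

```python
# Toy microprogrammed control unit. Each machine-level opcode maps to
# the address of a microprogram; each microinstruction holds a control
# field (here a readable string) and the address of the next
# microinstruction. Address 0 means "return to instruction fetch".

MICROCODE = {
    # addr: (control signals, next microinstruction address)
    1: ("MAR <- PC; read", 2),         # shared fetch sequence
    2: ("IR <- MDR; PC <- PC + 1", 0),
    3: ("ALU <- ACC + MDR", 4),        # ADD microprogram
    4: ("ACC <- ALU", 0),
}

ADDRESS_MAP = {"FETCH": 1, "ADD": 3}   # the "address mapper"

def interpret(opcode):
    """Return the sequence of control signals issued for one opcode."""
    trace = []
    upc = ADDRESS_MAP[opcode]          # map opcode to microprogram start
    while upc != 0:
        signals, upc = MICROCODE[upc]
        trace.append(signals)
    return trace

print(interpret("ADD"))
```

Changing the contents of MICROCODE and ADDRESS_MAP changes the instruction set without redesigning any hardware, which is precisely what made the technique attractive for implementing a whole family of machines.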
Because the microprogrammed control unit is such a regular structure, it was possible
to develop formal methods for the design of microprograms and to construct tools
to aid the creation, validation, and testing of microprograms. Microprogramming also
made it relatively easy to include self-test and diagnostic facilities in a processor.
The historical significance of microprogramming is its role in separating architecture and organization. Prior to the introduction of microprogramming, there was no real difference between architecture and organization, because a computer was built to directly implement its architecture. Microprogramming made it possible to divorce architecture from organization. By means of a suitable microprogram you could implement any architecture on the simplest of CPUs.
In practical terms, a company could design a powerful architecture and then implement it, for example, across an entire range of machines, from cheap, slow models to expensive, fast ones, all sharing the same instruction set.
The microprogram store is normally a read-only memory, because the microprogram is fixed when the machine is designed; some machines, however, provided a writable control store that allowed the instruction set to be modified or extended.
By the late 1970s, companies like AMD were fabricating integrated circuits that made it easy to construct microprogrammed control units. These devices, which also included ALUs and register files, were called bit-slice chips because they were 4 bits wide and you could wire several chips together to construct a system with any wordlength that was a multiple of 4 bits.
In 1980, two engineers at AMD with the improbable names Mick and Brick published Bit-Slice Microprocessor Design, which became a standard text on building processors from bit-slice components.
The rise of microprogrammed bit-slice architectures coincided with the introduction of the single-chip microprocessor, which would eventually displace them.
Microprogramming was still used by microprocessor manufacturers to implement CPUs; for example, Motorola’s 16/32-bit 68000 had a largely microprogrammed control unit.
By the 1980s, the cost of memory had fallen dramatically and its access time had dropped to 100 ns. Under such circumstances, the microprogrammed architecture lost its appeal. Moreover, a new class of highly efficient non-microprogrammed processors was emerging: the RISC architectures, whose simpler instructions could be decoded directly by hardware.
Microprogramming has come and gone, although some of today’s complex processors still use microcode to execute some of their more complex instructions. It enabled engineers to implement architectures in a painless fashion and was responsible for ranges of computers that shared a common architecture but different organizations and performances. Such ranges of computer allowed companies like IBM and DEC to dominate the computer market and to provide stable platforms for the development of software. The tools and techniques used to support microprogramming provided a basis for firmware engineering and helped expand the body of knowledge that constituted computer science.
The Birth of Transistors and ICs
Since the 1940s, computer hardware has become smaller and smaller and faster and
faster. The power-hungry, unreliable vacuum tube gave way to the far smaller, faster, and more reliable transistor.
If you can put one transistor on a slice of silicon, you can put two or more transistors on the same piece of silicon. The integrated circuit (IC), a complete functional unit on a single chip, was an invention waiting to be made. The idea occurred to Jack St. Clair Kilby at Texas Instruments in 1958 who built a working model and filed a patent early in 1959.
However, in January of 1959, Robert Noyce at Fairchild Semiconductor was also thinking of the integrated circuit. He too applied for a patent and it was granted in 1961. Today, both Noyce and Kilby are regarded as the joint inventors of the IC.
By the 1970s, entire computers could be produced on a single silicon chip. The progress of electronics has been remarkable. Today you can put over 2,000,000,000 transistors in the same space occupied by a tube in 1945. If human transport had evolved at a similar rate, and we assume someone could travel at 20 mph in 1900, we would be able to travel at 40,000,000,000 mph today (i.e., about 60 times the speed of light!).
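The arithmetic behind the transport analogy is easy to check (the speed of light is about 670 million mph):

```python
# One vacuum tube in 1945 versus roughly 2,000,000,000 transistors today,
# applied to 20 mph transport in 1900.
scale = 2_000_000_000
speed_today_mph = 20 * scale                        # 40,000,000,000 mph
speed_of_light_mph = 186_282 * 3600                 # miles per second -> mph
print(speed_today_mph)                              # 40000000000
print(round(speed_today_mph / speed_of_light_mph))  # 60
```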
Final Thoughts on the Invention of the Computer
So, who did invent the computer? By now you should appreciate that this is, essentially, a meaningless question. Over a long period, people attempted to design and construct calculating machines. As time passed, more and more enabling technologies were developed, each contributing something to the modern computer.
It seems that we always like to associate a specific person with an invention. From 1950, Mauchly and Eckert shared the original computer patent rights because of their work on the ENIAC. However, in 1973 Judge Larson in Minneapolis presided over the controversial lawsuit brought by Sperry Rand against Honeywell Incorporated for patent infringement. Judge Larson ruled that John Atanasoff should be credited with the invention of the computer, and that the Mauchly and Eckert patent was invalid because Atanasoff met John Mauchly at a conference in 1940 and discussed his work at Iowa State with Mauchly. Furthermore, Mauchly visited Atanasoff and looked at his notes on his work.
Had these computers not been developed, modern computing might have taken a different path. However, the computing industry would probably have ended up not too far from today's current state.
Table 3 provides a timescale that includes some of the key stages in the development of the computer. Some of the dates are approximate because an invention can be characterized by the date the inventor began working on it, the date on which the invention was first publicly described or patented, the date on which it was manufactured, the date on which the inventor got it working, or the date on which the inventor stopped working on it.
Table 3: Timescale for the development of the computer
50 Heron of Alexandria invented various control mechanisms and programmable mechanical devices. He is said to have constructed
the world’s first vending machine.
1614 John Napier invents logarithms and develops Napier's Bones for multiplication
1622 William Oughtred invents the horizontal slide rule
1642 Blaise Pascal invents the Pascaline, a mechanical adder
1673 Gottfried Wilhelm von Leibniz demonstrates his Stepped Reckoner, a machine that extends Pascal's design to perform multiplication
1801 Joseph Jacquard develops a means of controlling a weaving loom using holes punched through wooden cards
1822 Charles Babbage works on the Difference Engine
1833 Babbage begins to design his Analytical Engine, capable of computing any mathematical function
1843 Augusta Ada King (Ada Lovelace) begins working with Babbage and publishes notes describing how the Analytical Engine could be programmed
1854 George Boole develops Boolean logic, the basis of switching circuits and computer logic.
1890 Herman Hollerith develops a punched-card tabulating machine used to process US census data
1906 Lee De Forest invents the vacuum tube (an electronic amplifier)
1940 John V. Atanasoff and Clifford Berry build the Atanasoff-Berry Computer (ABC)
1941 Konrad Zuse constructs the first programmable computer, the Z3, which was the first machine to use binary arithmetic.
The Z3 was an electromechanical computer.
1943 Tommy Flowers designs Colossus, a machine used at Bletchley Park to break German ciphers during WW2.
1943 Howard H. Aiken builds the Harvard Mark I computer. This was an electromechanical computer.
1945 John von Neumann describes the stored-program computer in his First Draft of a Report on the EDVAC.
1946 Turing writes a report on the ACE, a design for a stored-program digital computer.
1946 ENIAC (Electronic Numerical Integrator and Computer) is developed by John W. Mauchly and J. Presper Eckert, Jr. at the
University of Pennsylvania to compute artillery firing tables. ENIAC is not programmable in the modern sense and is set up by hard wiring it to
perform a specific function. Moreover, it cannot execute conditional instructions.
1947 William Shockley, John Bardeen and Walter Brattain of Bell Labs invent the transistor.
1948 Freddy Williams, Tom Kilburn and Max Newman build the Manchester Baby (the Small-Scale Experimental Machine), the world's first operating stored-program computer.
1949 Mauchly, Eckert, and von Neumann build EDVAC (Electronic Discrete Variable Automatic Computer). The machine was
first conceived in 1945 and a contract to build it issued in 1946.
1949 In Cambridge, Maurice Wilkes builds EDSAC, the first fully functional stored-program computer.
1951 Mauchly and Eckert build the UNIVAC I, the first commercial computer intended specifically for business data processing.
1959 Jack St. Clair Kilby and Robert Noyce independently develop the first integrated circuits
1964 IBM announces the System/360 series of mainframes, with Gene Amdahl as one of its chief architects
1971 Ted Hoff and Federico Faggin construct the first microprocessor chip, the Intel 4004. This is commonly regarded as the beginning of the microprocessor era.
The Minicomputer Era
The microprocessor did not immediately follow the mainframe computer. Between the
mainframe and the micro lies the minicomputer. Mainframes were very expensive indeed
and only large institutions could afford a mainframe computer during the 1950s and
1960s. Advances in semiconductor technology and manufacturing techniques allowed
computer companies to build cut-down machines, minicomputers, that were far cheaper than mainframes.
Minicomputers were affordable at the departmental level rather than at the individual
level; that is, a department of computer science in the 1960s could afford its own
minicomputer. In the 1960s and 1970s, a whole generation of students learned computer
science from DEC's PDP-8 and PDP-11 minicomputers.
It is interesting to read the definition of a minicomputer made by Kraft and Toy in 1970:
A minicomputer is a small, general-purpose computer that:
1. has a word length ranging from 8 bits to 32 bits.
2. provides at least 4096 words of program memory.
3. sells for under $20,000 in a minimum usable configuration, consisting of a teletypewriter, a power supply, and a front panel to be used as a programmer’s console.
One of the first minicomputers was Digital Equipment Corporation's 12-bit PDP-8, introduced in 1965.
The principal limitations of the PDP-8 were its short 12-bit word and its limited address space.
Because of its byte-oriented memory, DEC's 16-bit PDP-11 was far better suited to text processing than the PDP-8.
DEC built on their success with the PDP-8 and PDP-11 and later developed the 32-bit VAX range of superminicomputers.
It would be wrong to suggest that DEC was the only minicomputer manufacturer. There
were several other major players in the minicomputer market. For example, Data General
produced its Nova range of computers in the late 1960s and announced its Eclipse
series of minicomputers in 1974. Hewlett-Packard also competed in this market with machines such as its HP 2100 series.
We next look at the development of the device that was to trigger the microprocessor revolution of the 1970s.
Birth of the Microprocessor: From 4004 to Golden Age
Before we describe the birth of the microprocessor, we need to briefly introduce the integrated circuit that made the microprocessor possible. The transistor operates by controlling the flow of electrons through a semiconductor. When the transistor was first invented, the semiconducting element used to fabricate it was germanium, whereas today most transistors are made from silicon. A transistor is composed of nothing more than adjoining regions of silicon doped with different concentrations of impurities. These impurities are atoms of elements like boron, phosphorus, and arsenic. Combining silicon with oxygen creates silicon dioxide, SiO2, a powerful insulator that allows you to separate regions of differently doped silicon. Electrical contacts are made by evaporating (or sputtering) aluminum onto the surface of a silicon chip. In other words, an integrated circuit is made simply by modifying the properties of atoms in a semiconductor and by adding conducting and insulating layers.
Not only is the transistor a tiny device, it is manufactured by fully automated techniques. The basic fabrication process involves covering the silicon with a photosensitive layer. Then an image is projected onto the photosensitive layer and developed to selectively remove parts of the photosensitive layer. The silicon is then heated in an atmosphere containing the impurities used to dope the silicon, and the impurities diffuse into the surface to change the silicon's electrical properties. This entire sequence is repeated several times to build up layers with different types of doping material, and then the insulating and conducting layers are added.
As manufacturing technology evolved, more and more transistors were fabricated on
single silicon chips with the maximum number of transistors per chip doubling every
year between 1961 and 1971. The basic functional units evolved from simple gates
to arithmetic units, small memories, and special-purpose logic elements.
It was inevitable that someone would eventually invent the microprocessor because,
by the late 1960s, computers built from discrete transistors and simple integrated
circuits already existed. Integrated circuits were getting more and more complex
day by day, and only one step remained: putting everything together on one chip. The
only real issue was when a semiconductor manufacturer would decide that a general-purpose computer on a single chip was a commercially viable product.
The Intel 4004
Credit for creating the world's first microprocessor, the Intel 4004, goes to Ted Hoff and Federico Faggin, although William Aspray in the Annals of the History of Computing points out that the microprocessor's development was a more complex and interesting story than many realize. In 1969, Bob Noyce and Gordon Moore set up the Intel Corporation to produce semiconductor memory chips for the mainframe industry. A year later Intel began to develop a set of calculator chips for a consortium of two Japanese companies. These chips were to be used in the Busicom range of calculators.
Three engineers from Japan worked with M. E. Hoff at Intel to implement the calculator's
digital logic circuits in silicon. Hoff had a Ph.D. from Stanford University and
a background in the design of interfaces for several IBM computers. When Hoff studied
the calculator's logic, he was surprised by its complexity compared with the simple, general-purpose architecture of minicomputers such as the PDP-8.
Bob Noyce encouraged Hoff to look at the design of the calculator. One of Hoff's major contributions was to replace the complex and slow shift registers used to store data in the calculator with the DRAM memory cells that Intel was developing as storage elements. This step provided the system with more and faster memory. Hoff also suggested adding subroutine calls to the calculator's instruction set in order to reduce the amount of hardwired logic in the system.
These ideas convinced Hoff to go further and develop a general-purpose processor that could be programmed to carry out the calculator's functions.
Towards the end of 1969, the structure of a programmable calculator had emerged and
Intel and Busicom chose the programmable calculator in preference to Busicom's original
model. However, the project was delayed until Federico Faggin joined Intel in 1970
and helped transform the logic designs into silicon. In order to create a chip of
such complexity, Faggin had to develop new semiconductor design technologies. The
4004 used about 2300 transistors and is considered the first general-purpose single-chip microprocessor.
It is interesting to note that Faggin states that Intel discouraged the use of computer simulation because of its cost and Faggin did most of his circuit design with a slide rule, a device that few of today's students have ever seen.
The first functioning 4004 chip was created in 1971 and Busicom's calculator was constructed from a 4004 CPU, four 4001 ROMs, two 4002 RAMs, and three 4003 shift registers. By the end of 1971, the 4004 was beginning to generate a significant fraction of Intel's revenue.
Faggin realized that the 4004 was much more than a calculator chip and set about
trying to convince Intel's management to get the rights to the chip from Busicom.
Both Faggin and Hoff used the 4004 to control in-house equipment, demonstrating that it was a general-purpose device rather than just a calculator chip.
Because Busicom was having financial problems, Intel was able to negotiate a deal
that gave Busicom cheaper chip prices in return for the right to sell the 4004 in non-calculator markets.
The 4004 was a 4-bit processor, a word length well suited to the binary-coded decimal digits used in calculators.
The 4004 was followed rapidly by the 8-bit 8008 in 1972.
The invention of the 4004 in 1971 eclipsed an equally important event in computing,
the invention of the 8-inch floppy disk drive by IBM. The personal computer revolution
could never have taken place without the introduction of a low-cost means of storing programs and data.
The Golden Era—The 8-bit Microprocessor
A golden era is a period of history viewed through rose-tinted spectacles.
The first 8-bit microprocessor was Intel's 8008, which was soon followed by the far more successful 8080 in 1974.
Intel didn’t have the market place to itself for very long. Shortly after the 8080
went into production, Motorola created its own microprocessor, the 8-bit 6800.
Both the 8080 and 6800 had single-address, accumulator-based architectures.
The division of the world into Intel and Motorola hemispheres continued when two breakaway companies improved on each camp's processors.
Zilog's Z80 was a success because it was compatible with the 8080 and yet incorporated
many advances such as extra registers and instructions. It also incorporated some
significant electrical improvements such as an on-chip DRAM refresh mechanism and operation from a single 5 V power supply.
You could also say that the Z80 had a devastating effect on the microprocessor industry: it introduced the curse of compatibility. The Z80's success demonstrated that it was advantageous to stretch an existing architecture, rather than to create a new architecture. By incorporating the architecture of an existing processor in a new chip, you can appeal to existing users who don't want to rewrite their programs to suit a new architecture.
The downside of backward compatibility is that a new architecture can't take a radical step forward. Improvements are tacked on in an almost random fashion. As time passes, the architecture becomes more and more unwieldy and difficult to program efficiently.
Just as Faggin left Intel to create the Z80, Chuck Peddle left Motorola to join MOS
Technology and to create the 6502. The 6502's object code was not backward compatible
with the 6800. If you wanted to run a 6800 program on the 6502, you had to recompile
it. The relationship between the 8080 and the Z80, and between the 6800 and the 6502
is not the same. The Z80 is a super 8080, whereas the 6502 is a 6800 with a modified
architecture and different instruction encoding. For example, the 6800 has a single 16-bit index register, whereas the 6502 has two 8-bit index registers.
In 1976, Motorola got involved with Delco Electronics who was designing an engine
control module for General Motors. The controller was aimed at reducing exhaust emissions
in order to meet new US government regulations. Motorola created a processor (later
known as the 6801) that was able to replace a 6800 plus some of the additional chips
required to turn a 6800 microprocessor into a computer system. This processor was
backward compatible with the 6800, but included new index register instructions and on-chip RAM, ROM, a timer, and a serial interface.
Daniels describes how he was given the task of taking the 6801 and improving it.
They removed instructions that took up a lot of silicon area (such as the decimal
adjust instruction used in BCD arithmetic) and added more useful instructions. Later,
on a larger scale, this re-engineering of an existing design was repeated when Motorola derived its ColdFire processors from the 68K family.
Today, the term PC or personal computer is taken to mean the IBM PC or a clone thereof. That was not always so. For some time after the microprocessor had burst onto the scene with the 4004 and 8008, the personal computer was most conspicuous by its absence. Everyone was expecting it. By 1979, it seemed surprising that no major corporation had taken one of the new microprocessors and used it to build a personal computer.
Perhaps no major company wanted to create a personal computer market because, at
that time, there were no low-cost peripherals, such as displays and disk drives, to support it.
Six months after the 8008 was introduced, the first ready-made computer based on it, the Scelbi-8H, was advertised.
Quite a lot of interest in microprocessors came from the amateur radio community,
because they were accustomed to constructing electronic systems and were becoming
more and more interested in digital electronics (e.g., Morse code generators and
decoders, and teletype displays). In June 1974, Radio Electronics magazine published
an article by Jonathan Titus on an 8008-based microcomputer called the Mark-8.
Fig. 13 Scelbi-8H microcomputer
In January 1975, Popular Electronics magazine published one of the first articles
on microcomputer design by Ed Roberts, the owner of a small company called MITS based
in Albuquerque, NM. MITS was a calculator company going through difficult times and
Roberts was gambling on the success of his 8080-based kit computer, the Altair 8800.
Although the Altair was intended for hobbyists, it had a significant impact on the
market and sold 2000 kits in its first year. Altair’s buoyant sales increased the
number of microprocessor users and helped encourage the early development of software.
Moreover, the Altair had a bus, the so-called S-100 bus, which made it easy for third parties to design plug-in cards and helped create a market for add-on hardware.
Fig. 14 Altair 8800: Ed Roberts's microcomputer was called the Altair 8800 (Michael Holley)
Early microprocessors were expensive. In 1985, Mark Garetz wrote an article in the microcomputer magazine Byte. He described a conversation he had with an Intel spokesman in 1975 who told him that the cost of a microprocessor would never go below $100. On the same day, Garetz was able to buy a 6502 for $25 at the WESCON conference. With prices at this level, enthusiasts were able to build their own microcomputers and lots of homemade computers sprang up during this time. Some were based on the 6502, some on the 8080, some on the Z80, and some on the 6800. The very first systems were all aimed at the electronics enthusiast because you had to assemble them from a kit of parts.
Typical machines of the early 8-bit era were sold as kits, had at most a few kilobytes of memory, and were programmed through switches and simple displays.
It was surprising that no large organization wanted to jump on the personal computer bandwagon. Tredennick stated that there was a simple reason for this phenomenon. Microprocessors were designed as controllers in embedded systems such as calculators and cash registers, and the personal computer market in the 1970s represented, to a first approximation, zero percent of a manufacturer's chip sales.
In March 1976, Steve Wozniak and Steve Jobs designed their own 6502-based computer, the Apple I, the forerunner of the phenomenally successful Apple II.
The next development was unanticipated, but of great importance: the arrival of the spreadsheet, VisiCalc, the killer application that turned the personal computer into a serious business tool.
We now look at the development of modern architectures, the 16/32-bit microprocessors.
The CISC Comes of Age
Electronic engineers loved the microprocessor. Computer scientists seemed to hate it. One of my colleagues even called it the last resort of the incompetent. Every time a new development in microprocessor architecture excited me, another colleague said sniffily, “The Burroughs B6600 had that feature ten years ago.” From the point of view of some computer scientists, the world seemed to be going in reverse with microprocessor features being developed that had been around in the mainframe world for a long time. What they were missing was that the microcomputer was being used by a very large number of people in a correspondingly large number of applications.
The hostility shown by some computer scientists to the microprocessor was inevitable.
By the mid 1970s, the mainframe von Neumann computer had reached a high degree of
sophistication with virtual memory, advanced operating systems, and 32-bit architectures.
As time passed and microprocessor technology improved, it became possible to put more and more transistors on larger and larger chips of silicon. Microprocessors of the early 1980s were not only more powerful than their predecessors in terms of the speed at which they could execute instructions, they were more sophisticated in terms of the facilities they offered. For example, they supported complex addressing modes, data structures and memory management.
The first mass-produced 16-bit microprocessor was Intel's 8086, introduced in 1978.
Intel took the core of their 8080 microprocessor and converted it from an 8-bit into a 16-bit machine, the 8086.
When moving from 8 bits to 16 bits, you have to deal with several issues. First,
the increased instruction length allows you to have more on-chip registers and a richer instruction set.
The 8086 had a superset of the 8080's registers; that is, each of the 8080's user-visible registers had a counterpart in the 8086.
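For illustration, the register correspondence can be written out explicitly. This is the conventional 8080-to-8086 mapping as I understand it (the mapping used when translating 8080 assembly language to the 8086); treat the details as a sketch rather than a definitive reference.

```python
# Conventional mapping of the 8080's user-visible registers onto the 8086.
# The 8086's 16-bit AX, BX, CX, DX registers have separately addressable
# 8-bit halves (e.g., AX = AH:AL), which is what makes the mapping work.
reg_8080_to_8086 = {
    "A":  "AL",  # 8-bit accumulator
    "HL": "BX",  # memory-pointer register pair
    "BC": "CX",
    "DE": "DX",
    "SP": "SP",  # stack pointer
    "PC": "IP",  # program counter -> instruction pointer
}
for old, new in reg_8080_to_8086.items():
    print(f"8080 {old:>2} -> 8086 {new}")
```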
Motorola didn’t extend their 8-bit 6800 into a 16-bit design; instead, they created an entirely new architecture, the 68000.
The 68000 was one of the first microprocessors to use microcoding to define its instruction set. The earlier microprocessors had random logic instruction decoding and control units.
Ironically, the 68000 was not a 16-bit machine at all: it had 32-bit registers and a 32-bit architecture, implemented with 16-bit buses and ALUs.
At the time, the only other microprocessor in the same class was Zilog's Z8000, which
had been introduced not long after the 8086. Although nominally a 16-bit machine, the Z8000 never achieved the commercial success of its rivals.
Several personal computer manufacturers adopted the 68000. Apple used it in the Macintosh,
and it was incorporated in the Atari and Amiga computers. All three of these computers
were regarded as technically competent and had many very enthusiastic followers.
The Macintosh was sold as a relatively high-priced machine at the top end of the personal computer market.
The popularity of the IBM PC and the competition between suppliers of PC clones led to ever cheaper hardware. In turn, this led to the growth of the PC's software base.
Although the 68000 developed into the 68020, 68030, 68040, and 68060, this family ceased to be a major contender in the personal computer world. Versions of the family were developed for the embedded processor market, but Motorola’s 68K family played no further role in the PC world, until Apple adopted Motorola’s PowerPC processor. The PowerPC was not a descendant of the 6800 and the 68K families.
In the early 1980s, semiconductor technology didn’t permit much more than a basic
CPU on a chip. Advanced features such as floating-point arithmetic and memory management had to be provided by separate coprocessor chips.
In 1984, Motorola introduced the 68020 that expanded the 68000's architecture to
include bit field instructions and complex memory-addressing modes; it also included a small on-chip instruction cache.
Motorola's 68030, launched in 1987, was a 68020 with a larger cache and an on-chip memory management unit.
Intel's share of the PC market ensured that it would remain heavily committed to providing a continuing upgrade path for its old 86x family. In 1995, Motorola introduced its ColdFire line of processors. These were based on the 68K architecture and were intended for embedded applications. Motorola eventually left the semiconductor market, spinning off its processor business as Freescale Semiconductor Inc. in 2004.
The RISC Challenge
A new buzzword in computer architecture arose in the 1980s, RISC. The accepted definition of RISC is reduced instruction set computer, although the term regular instruction set computer would be more appropriate. In practice, there is no such thing as a pure RISC processor. The term RISC simply describes a general historic trend in computer architecture that stresses speed and simplicity over complexity. The characteristics of processors that are described as RISC are as follows:
1. a regular instruction set with fixed-length, easily decoded instructions.
2. a load/store architecture in which only load and store instructions access memory; all data processing takes place between registers.
3. a large register file.
4. an emphasis on completing, on average, one instruction per clock cycle.
The origins of RISC go back to John Cocke at the IBM research center in Yorktown Heights, NY, in the mid 1970s when IBM was working on the 801 project in an attempt to improve the cost/performance ratio of computers. IBM later used the experience gained in the 801 project when it developed its PC RT system for small workstations in engineering and scientific applications. The RT chip had some of the characteristics of RISC architectures, but was fabricated in relatively old MOS technology and clocked at only 6 MHz. Consequently, the RT chip was not a commercial success, although it laid the foundations for the more successful PowerPC.
It was the work carried out by David Patterson at the University of California at Berkeley in the early 1980s that brought the RISC philosophy to a wider audience. Patterson was also responsible for coining the term RISC in a paper he wrote in 1980.
The Berkeley RISC is an interesting processor for many reasons. Although it was constructed at a university (like many of the first mainframes, such as EDSAC), the Berkeley RISC required only a very tiny fraction of the resources consumed by these early mainframes. Indeed, the Berkeley RISC is hardly more than an extended graduate project. It took about a year to design and fabricate the RISC I in silicon. By 1983, the Berkeley RISC II had been produced and that proved to be both a testing ground for RISC ideas and the start of a new industry.
The Berkeley RISC was later transformed into a commercial product, the SPARC architecture,
which is one of the few open architectures in the microprocessor world. An architecture
is open if more than one manufacturer can produce it (this is not the same as second-sourcing, in which one company licenses another to manufacture its part).
By 1986, about ten companies were marketing processors described as RISCs.
RISC architectures were quite controversial in the 1980s, partially because the large processor manufacturers were being told that they had been getting it wrong for the last few years and partially because a lot of the enthusiasm for RISC architectures came out of universities rather than industry.
It is important to emphasize that at least two developments in semiconductor technology made RISC possible. The first was the development of low-cost, relatively fast memory. When memory was expensive and slow, it made sense to let a single compact instruction do a lot of work; consider, for example, a CISC instruction that copies an element from one data structure to another while automatically updating the associated pointers.
This CISC instruction can do an impressive amount of computation and is specified in relatively few bits. Carrying out the same operation with more primitive instructions would require a much larger instruction space and hence increase system cost if memory is very expensive.
The second development that made RISC architectures possible was the increase in
bus widths. An 8-bit bus needs two memory accesses to fetch a 16-bit instruction, whereas a 32-bit bus allows a complete 32-bit RISC instruction to be fetched in a single memory access.
Because RISC data processing instructions are simple, regular and don’t access memory,
they can be executed rapidly compared to CISC instructions that, for example, perform memory-to-memory operations involving complex addressing modes.
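A toy simulation makes the contrast concrete. All encodings and names below are invented for illustration: the CISC form is a single memory-to-memory instruction, while the equivalent RISC form needs four simple instructions, of which only the loads and the store touch memory.

```python
# One invented CISC instruction: z = x + y with all three operands in memory.
cisc_program = [("ADDM", "x", "y", "z")]

# The equivalent RISC sequence: only LOAD and STORE access memory.
risc_program = [
    ("LOAD",  "r1", "x"),
    ("LOAD",  "r2", "y"),
    ("ADD",   "r3", "r1", "r2"),   # register-to-register, no memory access
    ("STORE", "r3", "z"),
]

def run_risc(program, memory):
    """Execute the simple load/store instruction set above."""
    regs = {}
    for inst in program:
        op = inst[0]
        if op == "LOAD":
            regs[inst[1]] = memory[inst[2]]
        elif op == "STORE":
            memory[inst[2]] = regs[inst[1]]
        elif op == "ADD":
            regs[inst[1]] = regs[inst[2]] + regs[inst[3]]
    return memory

mem = {"x": 3, "y": 4, "z": 0}
run_risc(risc_program, mem)
print(mem["z"])                                # 7
print(len(cisc_program), len(risc_program))    # 1 4
```

The trade-off described in the text is visible here: the CISC program is far more compact (one instruction against four), which mattered when memory was expensive, but each RISC instruction is trivially simple to decode and execute.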
Another early 32-bit RISC processor was the MIPS, which grew out of John Hennessy's work at Stanford University.
The DEC Alpha Processor
Having dominated the minicomputer market with the PDP-11 and the VAX, DEC set up a group to
investigate how the VAX customer base could be preserved in the 1990s and beyond; the result was the Alpha microprocessor. According to a special edition of Communications of the ACM devoted to the Alpha architecture (Vol. 36, No. 2, February 1993), the Alpha was the largest engineering project in DEC’s history. This project spanned more than 30 engineering groups in 10 countries.
The group decided that a RISC architecture was necessary (hardly surprising in 1988)
and that its address space should break the 32-bit barrier.
Apart from high performance and a life span of up to 25 years, DEC’s primary goals for the Alpha were an ability to run the OpenVMS and Unix operating systems and to provide an easy migration path from VAX and MIPS customer bases. DEC was farsighted enough to think about how the advances that had increased processor performance by a factor of 1000 in the past two decades might continue in the future. That is, DEC thought about the future changes that might increase the Alpha’s performance by a factor of 1,000 and allowed for them in their architecture. In particular, DEC embraced the superscalar philosophy with its multiple instruction issue. Moreover, the Alpha’s architecture was specifically designed to support multiprocessing systems.
The Alpha had a linear 64-bit virtual address space and 64-bit registers.
Because the Alpha architecture was designed to support multiple instruction issue and pipelining, it was decided to abandon the traditional condition code register, CCR. Branch instructions test an explicit register. If a single CCR had been implemented, there would be significant ambiguity over which CCR was being tested in a superscalar environment.
Digital’s Alpha project is an important milestone in the history of computer architecture because it represents a well-thought-out road map stretching up to 25 years into the future. Unfortunately, DEC did not survive and the Alpha died.
The PowerPC was the result of a joint effort between IBM, Motorola, and Apple. Essentially, IBM provided the architecture, Motorola fabricated the chip, and Apple used it in their range of personal computers.
IBM was the first company to incorporate RISC ideas in a commercial machine, the
801 minicomputer. The 801 implemented some of the characteristics of RISC architectures
and its success led IBM to develop more powerful RISC architectures. IBM created
the POWER architecture for use in their RS/6000 series workstations. The POWER architecture
had RISC features, superscalar instruction issue, but retained some traditional CISC
features such as complex bit manipulation instructions. Furthermore, POWER also provided sophisticated floating-point facilities.
A consortium of IBM, Motorola, and Apple engineers took the POWER architecture and
developed it into the PowerPC family of microprocessors. As in the case of Digital’s
Alpha architecture, the PowerPC was designed to allow for future growth and a clear
distinction was made between architecture and implementation. The POWER’s architecture
was somewhat simplified and any architectural features that stood in the way of superscalar
dispatch and out-of-order execution were removed.
The first member of the PowerPC architecture was the 601, originally designed by
IBM and modified by Motorola to include some of the features of Motorola’s own RISC
device, the 88110. Some of the later members of the PowerPC family were the 603 (a low-power version aimed at portable computers), the 604, and the 620.
The PowerPC architecture never thrived in the face of Intel’s IA32 architecture.
Motorola sold off its semiconductor division, and Apple adopted Intel’s architecture.
IBM continued to support the POWER architecture for their high-end servers and workstations.
PC Revolution and the Rise of WinTel
Personal computing has been dominated by two giants, Intel and Microsoft. Just as Windows has dominated the operating system world, Intel architectures have dominated the PC's hardware and architecture. The dominance of the Windows operating system and the Intel family in the PC market led to the coining of the term WinTel to describe the symbiotic relationship between the Microsoft Corporation and Intel.
The relationship between Microsoft and Intel is not entirely cosy. Although Intel's
chips and Microsoft's operating system form the foundations of the personal computer,
each of these two organizations views the world from a different perspective. Microsoft
is happy to see other semiconductor manufacturers produce clones of Intel processors.
If chip prices are driven down by competition, people can afford to pay more for
Microsoft's software. Similarly, if the freely available Linux operating system becomes
more widely available, Intel can sell more chips for Linux boxes. Linux is an open-source operating system modeled on Unix.
IBM PCs – The Background
The success of the first serious personal computer, the 6502-based Apple II, was eventually eclipsed by the IBM PC, launched in 1981.
Apple's failure to displace the IBM PC demonstrates that those in the semiconductor
industry must realize that commercial factors are every bit as important as architectural
excellence and performance. IBM adopted open standards, and, by making the IBM PC an open architecture, encouraged other companies to produce hardware and software for it.
Apple favored a very different approach. They marketed the computer. And the operating
system. And the peripherals. Apple refused to publish detailed hardware specifications
or to license their BIOS and operating system. Apple may well have had the better
processor and an operating system that was regarded as more user friendly. In particular, the Macintosh offered a graphical user interface years before Windows matured.
As time passed, the sheer volume of PCs and their interfaces (plus the software base) pushed PC prices down and down. The Apple was perceived as overpriced. Even though Apple adopted the PowerPC, it was too late and Apple's role in the personal computer world was marginalized. By the mid 1990s, it was possible to joke, "Question: What do you get if you merge Apple with IBM? Answer: IBM". Unlike many other computer companies, Apple did not fade away. They developed stylish mobile devices, beginning with MP3 players, then cell phones that incorporated many of the functions of a personal computer, and then tablet computers that included everything from a camera to a satnav system to a cellphone to an electronic book.
Tredennick points out that the fundamental problem facing anyone who wishes to break into the PC field is amortization. A manufacturer has to recover its development costs over its future sales. Intel can spread the cost of a processor over, say, millions of units. In other words, the amortization cost of each processor is just a few dollars and is well below its manufacturing costs. On the other hand, a competitor attempting to break into the market will have to amortize development costs over a much smaller number of processors.
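Tredennick's amortization argument is just arithmetic. With invented illustrative numbers (chosen only to show the effect of volume, not actual industry figures):

```python
# Invented illustrative numbers, chosen only to show the effect of volume.
development_cost = 500_000_000   # dollars to design and verify a processor
incumbent_units = 100_000_000    # units an established vendor might ship
newcomer_units = 1_000_000       # units a new entrant might hope to sell

print(development_cost / incumbent_units)  # 5.0   dollars per chip
print(development_cost / newcomer_units)   # 500.0 dollars per chip
```

Under these assumptions, the incumbent adds a few dollars of development cost to each chip while the newcomer must add hundreds, which is exactly the barrier to entry described above.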
Development of the IA32 Architecture
Due to the inherent complexity of the 86x processors, we can provide only an overview
of their development here. In 1975, work started on the 16-bit 8086, which Intel introduced in 1978 (Fig. 15).
Fig. 15 Structure of the 8086 (Intel Corporation)
Intel was the first company to introduce the coprocessor (i.e., an auxiliary external
processor that augments the operation of the CPU) to enhance the performance of its
processors. Intel added the 8087 coprocessor to 8086-based systems to provide hardware floating-point arithmetic.
In order to simplify systems design, Intel introduced the 8088, a version of the
8086 that had the same internal architecture, but which communicated with memory
via an 8-bit data bus. This allowed the 8088 to use cheaper 8-bit support chips, and it was the processor IBM chose for the original PC.
At the beginning of 1982, Intel introduced the 80286, the first of many major upgrades
to the x86 family that would span two decades (Fig. 16). The 80286 had a 16-bit data bus like the 8086, but a considerably enhanced internal architecture.
Fig. 16 Structure of the 80286 (Intel Corporation)
The 80286 had 24 address lines, making it possible to access 2^24 bytes (i.e., 16 Mbytes)
of memory, the same as the Motorola 68000. It also incorporated a basic on-chip memory management unit.
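The relationship between address lines and memory capacity is simple arithmetic: n address lines select one of 2^n byte locations. A few lines of Python confirm the figures quoted in this section:

```python
# n address lines select one of 2**n byte locations.
def addressable_bytes(address_lines):
    return 2 ** address_lines

assert addressable_bytes(20) == 1 * 1024**2   # 8086:  1 Mbyte
assert addressable_bytes(24) == 16 * 1024**2  # 80286: 16 Mbytes
assert addressable_bytes(32) == 4 * 1024**3   # 80386: 4 Gbytes
```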
In 1985, the 80286 was followed by the 80386, which had 32-bit address and data buses.
In 1989, Intel introduced the 80486, which was a modest step up from the 80386 with
relatively few architectural enhancements. The 80486 was Intel's first microprocessor
to include an on-chip floating-point unit and an on-chip cache.
Intel developed its Pentium in 1993. This was Intel's 80586 but, because Intel couldn't
trademark a number, they chose the name Pentium™. The initial speed was 60 MHz (rising
to 166 MHz in later generations). The Pentium was architecturally largely the same
as a 32-bit 80486, although its 64-bit data bus and superscalar organization gave it significantly higher performance.
Intel has continued to develop processors. The Pentium line was long-lived; it evolved through the Pentium Pro, Pentium II, Pentium III, and Pentium 4, and its successors eventually became today's Core family.
Just as the IBM PC was copied by other manufacturers to create so-called clones, Intel's processors were copied by rival semiconductor manufacturers.
The first Intel clone was the Nx586, produced by NexGen, which was later taken over by AMD. This chip provided a similar level of performance to early Pentiums, running at about 90 MHz, but was sold at a considerably lower price. The Nx586 had several modern architectural features such as superscalar execution with two integer execution units, pipelining, branch prediction logic, and separate data and instruction caches. The Nx586 could execute up to two instructions per cycle.
The Nx586 didn’t attempt to execute Intel’s instruction set directly. Instead, it translated IA32 instructions into a simpler form and executed those instructions.
Other clones were produced by AMD and Cyrix. AMD's K5 took a similar approach to
NexGen by translating Intel's variable-length instructions into simpler fixed-length internal operations.
By early 1999, some of the clone manufacturers were attempting to improve on Intel's processors rather than just creating lower-cost, functionally equivalent copies. Indeed, AMD claimed that its K7, or Athlon, architecture was better than the corresponding Pentium III; for example, the Athlon provided 3DNow! technology that boosted graphics performance, a level 1 cache four times larger than that in Intel's then-competing Pentium III, and a system bus running at twice the speed of Intel's (note that AMD's processors required a different motherboard from Intel's chips).
In 1995, Transmeta was set up specifically to market a processor called Crusoe that
would directly compete with Intel's Pentium family, particularly in the low-power mobile market.
Although hardware and software inhabit different universes, there are points of contact;
for example, it is difficult to create a new architecture in the absence of software,
and computer designers create instruction sets to execute real software. Users interact
with operating systems in one of two ways: via a command language, as in UNIX, or
via a graphical user interface, as in Windows.
Operating systems have been around for a long time and their history is as fascinating as that of the processor architectures themselves. In the early days of the computer when all machines were mainframes, manufacturers designed operating systems to run on their own computers.
One of the first operating systems that could be used on a variety of different computers
was UNIX, which was designed by Ken Thompson and Dennis Ritchie at Bell Labs. UNIX
was written in C, a systems programming language designed by Ritchie. Originally intended
to run on DEC's primitive PDP-7 minicomputer, UNIX was soon rewritten for the more powerful PDP-11 and later ported to many other machines.
UNIX is a powerful and popular operating system because it operates in a consistent and elegant way. When a user logs in to a UNIX system, a program called the shell interprets the user’s commands. These commands take a little getting used to, because they are heavily abbreviated and the abbreviations are not always what you might expect. UNIX’s immense popularity in the academic world has influenced the thinking of a generation of programmers and systems designers.
The first command-line operating system encountered by most PC users was Microsoft's MS-DOS.
Version 1.0 of MS-DOS was released in 1981 to accompany IBM's original PC.
Over the years, Microsoft refined MS-DOS through a series of releases that added support for larger disks, more memory, and new peripherals.
Some believe that one of the most important factors in encouraging the expansion
of computers into non-specialist markets was the development of the graphical user interface (GUI).
Like UNIX, MS-DOS was a command-driven operating system.
Before we discuss the history of Windows, we have to say something about Linux. Essentially,
Linux is a freely available UNIX-like operating system, originally written by Linus Torvalds in 1991 and since developed collaboratively by programmers around the world.
Apple's operating system, OS X, is also a relative of UNIX; it is built on a BSD-derived UNIX foundation.
Many of today's computer users will be unaware of Microsoft's early history and MS-DOS, because the command line is now hidden behind a graphical interface.
We should also point out that other GUIs were developed during the 1980s. In 1984,
the X Window graphical interface was created at MIT, and in 1985 Digital Research
introduced its GEM (Graphics Environment Manager). The X Window Consortium created
X as an open system, and it became a popular front-end for UNIX workstations.
However, it was Microsoft's Windows running on the PC that brought computing to the
masses, because it is intuitive and relatively easy to use. In many ways, the first
versions of Microsoft's Windows were not really operating systems; they were front-ends that ran on top of MS-DOS.
Table 4: A brief chronology of the development of Windows
Date Product Characteristics
November 1983 Windows Microsoft announces Windows
November 1985 Windows 1.0 First version of Windows goes on sale. Only tiled windows are supported.
December 1987 Windows 2.0 Windows 2.0 ships – this version allows overlapping windows. Support for protected-mode 80286 systems is added, allowing programs larger than 640 KB.
December 1987 Windows 386 This version of Windows 2.0 is optimized for 80386 machines and better supports multitasking of DOS programs
December 1987 OS/2 Microsoft and IBM begin joint development of OS/2, an operating system that better exploits the 80286’s architecture than Windows. By 1990 OS/2 had emerged as a very sophisticated multitasking, multithreading operating system. Microsoft later dropped out of the OS/2 project leaving its development to IBM. In spite of OS/2’s continuing development, it failed to attract users and software developers.
May 1990 Windows 3.0 This is the first really popular version of Windows and is widely
supported by third-party software developers.
April 1992 Windows 3.1 This version of Windows is more stable than its predecessor and provides scalable fonts. From this point on, MS Windows becomes the dominant PC operating system.
October 1992 Windows for Workgroups 3.1 This version of Windows 3.1 includes networking
for workgroups (mail tools, file and printer sharing). This operating system makes
it easy to link small groups of PCs.
May 1993 Windows NT Windows New Technology is launched. This professional operating
system is not compatible with earlier versions of Windows. Users are forced to take
one of two paths, Windows 3.0 (and later) or Windows NT. Windows NT is a true 32-bit operating system.
August 1995 Windows 95 Windows 95 is the first version of Windows that doesn't require MS-DOS to be installed first; it is sold as a complete operating system.
July 1996 Windows NT 4.0 Microsoft closes the gap between Windows NT 3.5 and Windows 95 by providing Win NT with a Win 95 user interface.
October 1996 Windows 95 OSR2 Microsoft improves Windows 95 by releasing a free service
pack that fixes bugs in Win 95, includes a better file structure called FAT32 that
uses disk space more efficiently, and improves dial-up networking.
June 1998 Windows 98 Windows 98 provides better support for the growing range of PC peripherals, such as USB devices. This was to be Microsoft’s last operating system based on the old DOS kernel.
May 1999 Windows 98 Second Edition Win 98 SE is an incremental upgrade to Windows 98 and offers little new apart from improved USB and DVD support.
February 2000 Windows 2000 Windows 2K is really an update to Windows NT 4.0 rather
than Windows 98. However, Windows 2000 was priced with the domestic consumer in mind. This
is a full 32-bit operating system built on the NT kernel.
September 2000 Windows ME Windows Millennium Edition was aimed at the home user who
wished to upgrade from Windows 98 SE without going to Windows 2K. Windows ME includes
strong support for the growth of multimedia applications such as digital video and
sound processing. Windows ME still included some 16-bit code.
2001 Windows XP Windows XP represented a merging of the Windows NT/2000 line and the series
of Windows 95, 98, and ME. Microsoft developed a range of XP operating systems, each
targeted at a different market sector: Home Edition, Professional, Media Center Edition,
Tablet Edition, Embedded, and a 64-bit edition.
November 2006 Windows Vista Vista was an improved version of XP that had a higher level of security (i.e., greater resistance to malware). Like XP, Vista was sold in a range of different editions.
July 2009 Windows 7 Windows 7 was also available in multiple editions and represented
another incremental step in the Windows development cycle. Most new PCs were soon sold with Windows 7 preinstalled.
2012 Windows 8
Windows 8 appeared at a time when Microsoft was coming under threat from new forms of user-friendly operating system that had appeared on tablet computers and mobile phones (e.g., Android, BlackBerry, and Symbian). These operating systems are app (application) driven. Windows 8 ventured into mobile phone, tablet, and touch-screen territory while attempting to retain traditional professional users.
© Cengage Learning 2014
The Phenomenon of Mass Computing and the Rise of the Internet
By the late 1990s, the PC was everywhere (at least in developed countries). Manufacturers had to sell more and more of their products in order to expand, so PCs had to be sold into markets that were hitherto untouched; that is, beyond the semiprofessional user and the games enthusiast.
Two important applications have driven the personal computer expansion, the Internet and digital multimedia. The Internet provides interconnectivity on a scale hitherto unimagined. Many of the classic science fiction writers of the 1940s and 1950s (such as Isaac Asimov) predicted the growth of the computer and the rise of robots, but they never imagined the Internet and the ability of anyone with a computer to access the vast unstructured source of information that now comprises the Internet.
Similarly, the digital revolution has extended into digital media—sound and vision.
All these applications have had a profound effect on the computer world. Digital
video requires truly vast amounts of storage; even a few minutes of high-quality video can consume gigabytes of disk space.
We can't neglect the effect of computer games on computer technology. The demand
for increasing realism in video games and real-time animation has driven the development of special-purpose graphics processors.
The multimedia revolution has led to the commoditization of the PC,
which is now just another consumer item like a television or a stereo player. Equally,
the growth of multimedia has forced the development of higher-speed processors, low-cost high-density storage, and high-bandwidth communications.
It would be wrong to give the impression that the PC and the Internet are the only applications of the computer. The largest user of the microprocessor is probably the automobile, with tens of microprocessors embedded in each vehicle (e.g., in the radio to tune the receiver and operate the display, in the CD player to perform motor control and audio decoding, in engine management, in antilock braking systems, and in lighting, door, and window controls). Similarly, one or more microprocessors are embedded in every cell phone, where the system has to be optimized for both performance and power consumption.
The Internet Revolution
No history of computers can ignore the effect of the Internet on computer development.
Although the Internet is not directly related to computer architecture, the Internet
has had a major effect on the way in which the computer industry developed because
users want computers that can access the Internet easily and run video-intensive applications.
It is impossible to do justice to the development of the Internet and the World Wide Web in a few paragraphs and we will, therefore, describe only some of the highlights of its development.
Just as the computer itself was the result of many independent developments (the need for automated calculation, the theoretical development of computer science, the enabling technologies of communications and electronics, the keyboard, and data processing industries), the Internet was also the fruit of multiple developments.
The principal ingredients of the Internet are communications, protocols, and hypertext. Communications systems have been developed throughout human history, as we have already pointed out when discussing the enabling technology behind the computer. The USA's Department of Defense created a scientific organization, the Advanced Research Projects Agency (ARPA), in 1958 at the height of the Cold War. ARPA had some of the characteristics of the Manhattan Project that had preceded it during World War II; for example, a large group of talented scientists was assembled to work on a project of national importance. From its early days, ARPA concentrated on computer technology and communications systems. Moreover, much of ARPA's work was carried out in the academic world, which gave it a rather different ethos from that of the commercial world: academics tend to cooperate and share information.
One of the reasons that ARPA concentrated on networking was the fear that a future war involving nuclear weapons would begin with an attack on communications centers, limiting the capacity to respond in a coordinated manner. By networking computers and ensuring that a message can take many paths through the network to get from its source to its destination, the network can be made robust and able to cope with the loss of some of its links or switching centers.
In 1969, ARPA began to construct a test bed for networking, a system that linked
four nodes: the University of California at Los Angeles, SRI (the Stanford Research
Institute), the University of California at Santa Barbara, and the University of Utah.
Data was sent in the form of individual packets or frames rather than as complete end-to-end messages.
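The packet idea can be sketched in a few lines: a message is cut into small numbered packets that may arrive in any order and are reassembled at the destination. The message and packet size below are invented for illustration, and real networks use far more elaborate framing:

```python
# Toy illustration of packet switching.

def packetize(message, size):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Rebuild the message regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("HELLO ARPANET", 4)
packets.reverse()           # simulate out-of-order arrival
print(reassemble(packets))  # HELLO ARPANET
```

Because each packet carries its own sequence number, the network is free to route packets independently, which is exactly what makes the scheme robust to the loss of individual links.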
In 1973, the Transmission Control Protocol/Internet Protocol, TCP/IP, was developed
at Stanford; this is the set of rules that governs the routing of a packet from node
to node through a computer network. Another important step on the way to the Internet
was Robert Metcalfe's development of Ethernet, which enabled computers to communicate
with each other over a local area network using a simple, low-cost shared cable.
In 1979, Steve Bellovin and others at the University of North Carolina constructed
a news group network called USENET based on UUCP (the Unix-to-Unix Copy Protocol).
Up to 1983, an ARPANET user had to use a numeric Internet Protocol (IP) address to access another host. In 1983, the University of Wisconsin created the Domain Name System (DNS), which allowed users to refer to a host by a domain name rather than an IP address.
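Conceptually, DNS is a distributed lookup table from human-readable names to numeric addresses. The toy resolver below uses an invented table (its addresses come from the reserved documentation range 192.0.2.0/24); real resolution is performed by a hierarchy of name servers:

```python
# Invented name-to-address table standing in for the global DNS.
dns_table = {
    "example.edu": "192.0.2.10",
    "mail.example.edu": "192.0.2.25",
}

def resolve(name):
    """Return the IP address registered for a domain name."""
    try:
        return dns_table[name]
    except KeyError:
        raise LookupError(f"no such domain: {name}")

print(resolve("example.edu"))  # 192.0.2.10
```

In practice a program simply asks the operating system's resolver, e.g. via Python's socket.gethostbyname().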
The world's largest community of physicists is at CERN in Geneva. In 1990, Tim Berners-Lee at CERN developed the hypertext-based system that became the World Wide Web, together with the first browser and web server, to make it easier for physicists to share information.
Servers—The Return of the Mainframe
Do mainframes exist in the new millennium? We still hear about supercomputers, the very specialized, highly parallel computers used for simulation in large scientific projects. Some might argue that the mainframe has not so much disappeared as changed its name to "server".
The client–server model of computing enables users to get the best of both worlds—the personal computer and the corporate mainframe. Users have their own PCs and all that entails (graphical interface, communications, and productivity tools) that are connected to a server that provides data and support for authorized users.
Client–server computing facilitates open system computing by letting you create applications without regard to the hardware platforms or the technical characteristics of the software. A user at a workstation or PC may obtain client services and transparent access to the services provided by database, communications, and applications servers.
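The client–server exchange described above can be sketched with ordinary TCP sockets. This is a minimal, single-request illustration on the loopback interface, not a realistic server:

```python
import socket
import threading

def server(listener):
    """Accept one connection and answer one request."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"server reply to: {request}".encode())

# Bind to an ephemeral port on the loopback interface.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

threading.Thread(target=server, args=(listener,), daemon=True).start()

# The client connects, sends a request, and reads the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET report")
    reply = client.recv(1024).decode()

print(reply)  # server reply to: GET report
```

The division of labor is the point: the client owns the user interface, while the shared server owns the data and answers requests from many such clients.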
The significance of the server in computer architecture is that it requires computational
power to respond to client requests; that is, it provides an impetus for improvements
in computer architecture. By their nature, servers require large, fast random access
memories and very large secondary storage mechanisms. The server provides such an
important service that reliability is a key aspect of its architecture, so
server system architectures promote the development of high-availability, fault-tolerant hardware.