Tuesday, July 1, 2008

An Illustrated History of Computers Part 3

___________________________________


IBM continued to develop mechanical calculators for sale to businesses to help with financial accounting and inventory accounting. One characteristic of both kinds of accounting is that although you need to subtract, you don't need negative numbers, and you don't really need to multiply either, since multiplication can be accomplished by repeated addition.
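
To see why addition alone suffices, here is a minimal sketch (in Python, purely for illustration; the actual IBM machines were of course mechanical) of multiplication by repeated addition:

```python
def multiply(a, b):
    """Multiply two non-negative integers using only repeated addition."""
    total = 0
    for _ in range(b):
        total += a   # add 'a' to the running total, 'b' times
    return total

print(multiply(7, 6))  # prints 42
```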

But the U.S. military desired a mechanical calculator more optimized for scientific computation. By World War II the U.S. had battleships that could lob shells weighing as much as a small car over distances up to 25 miles. Physicists could write the equations that described how atmospheric drag, wind, gravity, muzzle velocity, etc. would determine the trajectory of the shell, but solving such equations was extremely laborious. This was the work performed by the human computers. Their results were compiled into ballistic "firing tables" and published in gunnery manuals. During World War II the U.S. military scoured the country looking for (generally female) math majors to hire for the job of computing these tables, but not enough humans could be found to keep up with the need for new tables. Sometimes artillery pieces had to be delivered to the battlefield without the necessary firing tables, which made them close to useless because they couldn't be aimed properly. Faced with this situation, the U.S. military was willing to invest in even harebrained schemes to automate this type of computation.
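
To get a feel for what computing a single firing-table entry involved, here is a hedged sketch: a generic point-mass trajectory with simple quadratic air drag, stepped forward with Euler's method. The constants and the drag model are illustrative assumptions, not taken from any actual gunnery manual, and the human computers worked with far more elaborate models:

```python
import math

def range_of_shot(muzzle_velocity, elevation_deg, drag_coeff=0.0001, dt=0.01):
    """Return the approximate horizontal range (meters) of a shell fired
    at the given muzzle velocity (m/s) and elevation angle (degrees)."""
    g = 9.81                                  # gravity, m/s^2
    angle = math.radians(elevation_deg)
    x, y = 0.0, 0.0
    vx = muzzle_velocity * math.cos(angle)
    vy = muzzle_velocity * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # quadratic drag decelerates the shell along its direction of motion
        vx -= drag_coeff * speed * vx * dt
        vy -= (g + drag_coeff * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

# one firing-table entry: 800 m/s muzzle velocity at 15 degrees elevation
print(round(range_of_shot(800, 15)), "meters")
```

A real table would repeat this calculation, by hand, for every combination of elevation, charge, and wind the gunners might face.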

One early success was the Harvard Mark I computer which was built as a partnership between Harvard and IBM in 1944. This was the first programmable digital computer made in the U.S. But it was not a purely electronic computer. Instead the Mark I was constructed out of switches, relays, rotating shafts, and clutches. The machine weighed 5 tons, incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50 ft rotating shaft running its length, turned by a 5 horsepower electric motor. The Mark I ran non-stop for 15 years, sounding like a roomful of ladies knitting. To appreciate the scale of this machine note the four typewriters in the foreground of the following photo.

The Harvard Mark I: an electro-mechanical computer


You can see the 50 ft rotating shaft at the bottom of the prior photo. This shaft was a central power source for the entire machine. This design feature was reminiscent of the days when waterpower was used to run a machine shop and each lathe or other tool was driven by a belt connected to a single overhead shaft which was turned by an outside waterwheel.

A central shaft driven by an outside waterwheel and connected to each machine by overhead belts was the customary power source for all the machines in a factory

Here's a close-up of one of the Mark I's four paper tape readers. A paper tape was an improvement over a box of punched cards as anyone who has ever dropped -- and thus shuffled -- his "stack" knows.

One of the four paper tape readers on the Harvard Mark I (you can observe the punched paper roll emerging from the bottom)

One of the primary programmers for the Mark I was a woman, Grace Hopper. Hopper found the first computer "bug": a dead moth that had gotten into the Mark I and whose wings were blocking the reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at least 1889 but Hopper is credited with coining the word "debugging" to describe the work to eliminate program faults.


The first computer bug [photo © 2002 IEEE]

In 1953 Grace Hopper invented the first high-level language, "Flow-matic". This language eventually became COBOL which was the language most affected by the infamous Y2K problem. A high-level language is designed to be more understandable by humans than is the binary language understood by the computing machinery. A high-level language is worthless without a program -- known as a compiler -- to translate it into the binary language of the computer and hence Grace Hopper also constructed the world's first compiler. Grace remained active as a Rear Admiral in the Navy Reserves until she was 79 (another record).
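
As a rough illustration of what a compiler does (a Python toy, not a reconstruction of Hopper's actual compiler or of Flow-matic), consider translating a single high-level statement into low-level instructions:

```python
def compile_statement(stmt):
    """Compile 'dest = src1 + src2' into a list of pseudo machine instructions."""
    dest, expr = [s.strip() for s in stmt.split("=")]
    src1, src2 = [s.strip() for s in expr.split("+")]
    return [
        ("LOAD",  src1),   # load the first operand into the accumulator
        ("ADD",   src2),   # add the second operand
        ("STORE", dest),   # store the result
    ]

print(compile_statement("c = a + b"))
# [('LOAD', 'a'), ('ADD', 'b'), ('STORE', 'c')]
```

A real compiler does the same job at a vastly larger scale, ending in actual binary machine codes rather than named pseudo instructions.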

The Mark I operated on numbers that were 23 digits wide. It could add or subtract two of these numbers in three-tenths of a second, multiply them in four seconds, and divide them in ten seconds. Forty-five years later computers could perform an addition in a billionth of a second! Even though the Mark I had three quarters of a million components, it could only store 72 numbers! Today, home computers can store 30 million numbers in RAM and another 10 billion numbers on their hard disk. Today, a number can be pulled from RAM after a delay of only a few billionths of a second, and from a hard disk after a delay of only a few thousandths of a second. This kind of speed is obviously impossible for a machine that must move a rotating shaft, and that is why electronic computers killed off their mechanical predecessors.

On a humorous note, the principal designer of the Mark I, Howard Aiken of Harvard, estimated in 1947 that six electronic digital computers would be sufficient to satisfy the computing needs of the entire United States. IBM had commissioned this study to determine whether it should bother developing this new invention into one of its standard products (up until then computers were one-of-a-kind items built by special arrangement). Aiken's prediction wasn't actually so bad as there were very few institutions (principally, the government and military) that could afford the cost of what was called a computer in 1947. He just didn't foresee the micro-electronics revolution which would allow something like an IBM Stretch computer of 1959:


(that's just the operator's console; here's the rest of its 33-foot length:)


to be bested by a home computer of 1976 such as this Apple I, which sold for only $666.66:



The Apple I, which was sold as a do-it-yourself kit (without the lovely case seen here)

Computers had been incredibly expensive because they required so much hand assembly, such as the wiring seen in this CDC 7600:

Typical wiring in an early mainframe computer [photo courtesy The Computer Museum]

The microelectronics revolution is what allowed the amount of hand-crafted wiring seen in the prior photo to be mass-produced as an integrated circuit, which is a small sliver of silicon the size of your thumbnail.

An integrated circuit ("silicon chip") [photo courtesy of IBM]

The primary advantage of an integrated circuit is not that the transistors (switches) are minuscule (that's the secondary advantage), but rather that millions of transistors can be created and interconnected in a mass-production process. All the elements on the integrated circuit are fabricated simultaneously via a small number (maybe 12) of optical masks that define the geometry of each layer. This speeds up the process of fabricating the computer -- and hence reduces its cost -- just as Gutenberg's printing press sped up the fabrication of books and thereby made them affordable to all.

The IBM Stretch computer of 1959 needed its 33-foot length to hold the 150,000 transistors it contained. These transistors were tremendously smaller than the vacuum tubes they replaced, but they were still individual elements requiring individual assembly. By the early 1980s this many transistors could be simultaneously fabricated on an integrated circuit. Today's Pentium 4 microprocessor contains 42,000,000 transistors in this same thumbnail-sized piece of silicon.

It's humorous to remember that in between the Stretch machine (which would be called a mainframe today) and the Apple I (a desktop computer) there was an entire industry segment referred to as mini-computers, such as the following PDP-12 computer of 1969:


The DEC PDP-12

Sure looks "mini", huh? But we're getting ahead of our story.

One of the earliest attempts to build an all-electronic (that is, no gears, cams, belts, shafts, etc.) digital computer was made in 1937 by J. V. Atanasoff, a professor of physics and mathematics at Iowa State University. By 1941 he and his graduate student, Clifford Berry, had succeeded in building a machine that could solve 29 simultaneous equations with 29 unknowns. This machine was the first to store data as a charge on a capacitor, which is how today's computers store information in their main memory (DRAM, or dynamic RAM). As far as its inventors were aware, it was also the first to employ binary arithmetic. However, the machine was not programmable, it lacked a conditional branch, its design was appropriate for only one type of mathematical problem, and it was not further pursued after World War II. Its inventors didn't even bother to preserve the machine, and it was dismantled by those who moved into the room where it lay abandoned.

The Atanasoff-Berry Computer [photo © 2002 IEEE]
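
For a sense of the kind of problem the Atanasoff-Berry machine attacked, here is a sketch of textbook Gaussian elimination on a small system. The ABC's own elimination procedure differed in its details, so this is an illustration of the task, not a reconstruction of the machine's method:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with back-substitution."""
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # pivot: swap in the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # eliminate this column from all rows below
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[col])]
    # back-substitution, from the last row upward
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5 and x + 3y = 10  ->  x = 1, y = 3
print(solve([[2, 1], [1, 3]], [5, 10]))
```

Scale this up to 29 equations in 29 unknowns, done by hand, and the appeal of automating it becomes obvious.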

Another candidate for granddaddy of the modern computer was Colossus, built during World War II by Britain for the purpose of breaking the cryptographic codes used by Germany. Britain led the world in designing and building electronic machines dedicated to code breaking, and was routinely able to read coded German radio transmissions. But Colossus was definitely not a general-purpose, reprogrammable machine. Note the presence of pulleys in the two photos of Colossus below:

Two views of the code-breaking Colossus of Great Britain



The Harvard Mark I, the Atanasoff-Berry computer, and the British Colossus all made important contributions. American and British computer pioneers were still arguing over who was first to do what when, in 1965, the work of the German Konrad Zuse was published for the first time in English. Scooped! Zuse had built a sequence of general-purpose computers in Nazi Germany. The first, the Z1, was built between 1936 and 1938 in the parlor of his parents' home.


The Zuse Z1 in its residential setting

Zuse's third machine, the Z3, built in 1941, was probably the first operational, general-purpose, programmable (that is, software-controlled) digital computer. Without knowledge of any calculating machine inventors since Leibniz (who lived in the 1600s), Zuse reinvented Babbage's concept of programming and decided on his own to employ binary representation for numbers (Babbage had advocated decimal). The Z3 was destroyed by an Allied bombing raid. The Z1 and Z2 met the same fate, and the Z4 survived only because Zuse hauled it in a wagon up into the mountains. Zuse's accomplishments are all the more incredible given the context of the material and manpower shortages in Germany during World War II. Zuse couldn't even obtain paper tape, so he had to make his own by punching holes in discarded movie film. Because these machines were unknown outside Germany, they did not influence the path of computing in America. But their architecture is identical to that still in use today: an arithmetic unit to do the calculations, a memory for storing numbers, a control system to supervise operations, and input and output devices to connect to the external world. Zuse also invented what might be the first high-level computer language, "Plankalkul", though it too was unknown outside Germany.
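
A small Python sketch, included only to show the idea behind Zuse's choice of binary: each binary digit has just two values, so it can be held by a single two-state device such as a relay, whereas a decimal digit needs a mechanism with ten positions:

```python
def to_binary(n):
    """Return the binary digits of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    bits = []
    while n > 0:
        bits.append(n % 2)   # remainder on division by 2 is the low-order bit
        n //= 2
    return bits[::-1]

print(to_binary(25))  # [1, 1, 0, 0, 1], i.e. 16 + 8 + 1 = 25
```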



by: http://www.computersciencelab.com/

