
[Introduction to Computer Systems] 3.1 Overview



Writing plan and basic logic:

  1. Processor overview
  2. Instruction processing flow
    1. Single-instruction execution
    2. Adding pipelining
    3. Pipeline improvements (superscalar, multiple issue, Tomasulo)
  3. Instruction set architecture

How Digital Computers Compute

All digital computers work on the same principle: manipulating on/off signals to implement logic functions.

There have been many ways to generate those on/off signals, from mechanical devices to electromagnetic relays, vacuum tubes, transistors, and integrated circuits (ICs). This evolution brought ever-faster, smaller components, yielding dramatic improvements in capacity and cost that transformed computers from specialty tools to everyday devices.

How Do Digital Computers “Think”?

All digital computers rely on a binary system of ones and zeros, and on rules of logic set out in the 1850s by English mathematician George Boole.

A computer can represent the binary digits (bits) zero and one mechanically with wheel or lever positions, or electronically with voltage or current. The underlying math remains the same. Bit sequences can represent numbers or letters.
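As a quick illustration (a minimal Python sketch of my own, not part of the original exhibit text), the same eight-bit pattern can be read either as a number or as a letter:

```python
# Minimal illustration: one 8-bit pattern read as a number or as a letter.

bits = "01000001"            # a sequence of eight binary digits (bits)

as_number = int(bits, 2)     # interpret the bits as an unsigned integer -> 65
as_letter = chr(as_number)   # interpret the same value as an ASCII code -> 'A'

print(bits, "as a number:", as_number)
print(bits, "as a letter:", as_letter)
```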

Boolean Logic

Just three operations (AND, OR, and NOT) can perform all logic functions. So argued self-taught mathematician George Boole in his 1847 work, The Mathematical Analysis of Logic. In 1854, as Professor of Mathematics at Queen's College, Cork, Ireland, Boole expanded his concept in An Investigation of the Laws of Thought.
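To make the claim concrete, here is a minimal sketch (my own illustration; exclusive-OR is just an example function, not one mentioned above) showing how AND, OR, and NOT compose into another logic function:

```python
# Boole's three operations, written as Python functions on 0/1 values.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Other logic functions can be composed from these three.
# Example (illustrative): exclusive-OR, true when exactly one input is 1.
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```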

For decades, Boole’s ideas had no apparent practical use. His work was largely ignored until applied by Claude Shannon to telephone switch design in the 1930s. Today it is called Boolean Algebra, a foundation of digital logic.

Putting Boolean Logic to Work

Claude Shannon encountered George Boole’s ideas in a college philosophy class in the 1930s. He recognized their value for real-world problems.

Shannon’s 1937 MIT master’s thesis, A Symbolic Analysis of Relay and Switching Circuits, applied Boolean algebra to the design of logic circuits using electromechanical relays. Shannon is also remembered for a seminal 1948 paper on information theory, A Mathematical Theory of Communication.

Claude Shannon wasn’t the first to apply Boole’s concepts. Victor Shestakov proposed similar ideas in 1935, but didn’t publish until 1941—and then only in Russian.

What Makes A Computer Circuit?

Computer circuits are built from simple elements called “gates,” made from either mechanical or electronic switches. They operate according to Boolean algebra to determine the value of an output signal (one or zero), or to save a value in a “flip-flop,” a storage unit built from several gates.

Three basic gate types are AND, OR, and NOT. But others, such as NAND (NOT AND), can by themselves form any computer circuit, including those for arithmetic, memory, and executing instructions. Modern computers have the equivalent of hundreds of millions of NAND gates.
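A sketch of that universality, with gates modeled as Python functions (my own illustration): NOT, AND, and OR rebuilt from NAND alone, plus a simple cross-coupled-NAND latch standing in for the "flip-flop" storage idea. The settling loop is a simulation convenience, not real hardware behavior.

```python
def NAND(a, b): return 1 - (a & b)

# NOT, AND, and OR rebuilt from NAND alone.
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

# A simple storage element: two cross-coupled NAND gates (an SR latch
# with active-low set/reset inputs). This is one way gates can "save a value".
class NandLatch:
    def __init__(self):
        self.q, self.q_bar = 0, 1
    def update(self, set_n, reset_n):
        # Iterate the two gates until the feedback loop settles.
        for _ in range(4):
            self.q = NAND(set_n, self.q_bar)
            self.q_bar = NAND(reset_n, self.q)
        return self.q

latch = NandLatch()
print(latch.update(0, 1))  # pull set low   -> stores 1
print(latch.update(1, 1))  # both high      -> holds 1
print(latch.update(1, 0))  # pull reset low -> stores 0
print(latch.update(1, 1))  # both high      -> holds 0
```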

Digital Machines

Digital logic machines may seem the embodiment of 21st century technology. Yet they are centuries old.

The earliest examples used gears, rods, wheels, or sliding plates as switches. Charles Babbage designed several mechanical calculators in the 1800s, and a computer called the “Analytical Engine.” In 1930s Germany, Konrad Zuse built a computer with logic switches made from sliding metal plates and pins.

Relay Switches

Early logic switches were purely mechanical. Relays, by comparison, use mechanical switches that are opened or closed with electromagnets.

George Stibitz used relays in 1937 for a demonstration adder (called “Model K” because he built it on his kitchen table). This led to the Bell Labs “Model 1 Complex Calculator” in 1939. That same year, Konrad Zuse built a computer using 600 relays.

In 1944, IBM built Howard Aiken’s design for the Automatic Sequence Controlled Calculator (Harvard Mark 1) with 3,500 relays—nearly six times the number Zuse used just five years earlier.

Electronic Vacuum Tubes

Ambrose Fleming patented the two-electrode vacuum tube diode in 1904, which swiftly replaced “cat’s whisker” crystal detectors in early radios. Radio pioneer Lee de Forest added a third electrode in 1906 to create the triode tube, with which he built two key electronic building blocks (amplifiers and oscillators) in 1911.

For decades, the soft glow of vacuum tubes lit up radios and by the 1950s had largely replaced relays as computing switches. The clack of mechanical relays yielded to the hum and heat of power hungry, but far faster tubes.

How a Vacuum Tube Works

Vacuum tubes were developed at the turn of the 20th century. A vacuum tube is an electronic valve (like a faucet) that controls the flow of electricity, allowing a small signal to control a larger one. Tubes can also be used as switches, representing a zero or a one, which is how they were used in early electronic computers. Vacuum tubes use a heated filament, called a cathode, to boil off electrons into a vacuum. These electrons pass through one or more grids that control their flow, then strike the anode (plate) and are absorbed. By designing the cathode, grid(s), and plate properly, the tube can be made either to amplify or to switch.

The U.S. Army’s ENIAC represented the world’s first large-scale use of electronics for computing. It had about 18,000 vacuum tubes.

The Next Generation: Semiconductors

Smaller. Cheaper. Faster. Cooler. The invention in 1947 of semiconductor transistors – miniature electronic devices based on the principles of solid-state physics – brought a remarkable new alternative to vacuum tube amplifiers and switches.

Today, multiple transistors connected in “integrated circuits” are fabricated with a process similar to printing. Steady reduction in transistor size has revolutionized computing, making systems ever tinier, speedier, and more power efficient.

What is a Semiconductor?

There are two basic types of materials: conductors, which let electricity flow freely, and insulators, which don’t. Semiconductors have a foot in both camps. Their conductivity can change depending on electrical, thermal, or physical stimulation. This lets them act as amplifiers, switches, or other electrical components.

British physicist Michael Faraday experimented with semiconductors in 1833. German physicist Ferdinand Braun discovered in 1874 that galena crystals could function as diodes, letting electricity flow in just one direction. Indian physicist Jagadish Chandra Bose patented their use as a crystal radio signal detector in 1901.

Inventing the Transistor

Scientists in the 1920s proposed building amplifiers from semiconductors. But they didn’t understand the materials well enough to actually do it. In 1939, William Shockley at AT&T’s Bell Labs revived the idea as a way to replace vacuum tubes.

Under Shockley’s direction, John Bardeen and Walter Brattain demonstrated in 1947 the first semiconductor amplifier: the point-contact transistor, with two metal points in contact with a sliver of germanium. In 1948, Shockley invented the more robust junction transistor, which was first built in 1951.

The three shared the 1956 Nobel Prize in Physics for their inventions.

How Bardeen and Brattain’s Transistor Worked

Bardeen and Brattain’s transistor consisted of a sliver of germanium with two closely spaced gold point contacts held in place by a plastic wedge. They selected germanium material that had been treated to contain an excess of electrons, called N-type. When they caused an electric current to flow through one contact (called the emitter) it induced a scarcity of electrons in a thin layer (changing it locally to P-type) near the germanium surface. This changed the amount of current that could flow through the collector contact. A small change in the current through the emitter caused a larger change in the collector current. They had created a current amplifier.

Transistors Take Off

AT&T, which had invented the transistor, licensed the technology in 1952. It hoped to benefit from others’ improvements.

Transistors swiftly left the lab and entered the marketplace. Although costlier than vacuum tubes, they were ideal when portability and battery operation were important. The 1952 Sonotone hearing aid was America’s first transistorized consumer product. AT&T also used transistor amplifiers in its long distance telephone system. They soon appeared as switches, beginning with an experimental computer at Manchester University in 1953.

As prices dropped, uses multiplied. By 1960, most new computers were transistorized.

Switching to Silicon

America’s high-tech home might have been “Germanium Valley” if named for the material in early transistors. Silicon offered better performance, but was too hard to work with.

That changed in 1954. “Contrary to what my colleagues have told you about the bleak prospects for silicon transistors,” announced Texas Instruments’ Gordon Teal at a conference, “I happen to have a few of them here.” He then demonstrated a record player that failed when its germanium transistors were heated, but not with silicon transistors.

By 1960, most transistors were silicon. TI was their leading manufacturer.

The silicon transistor’s ability to operate at temperatures up to 150°C made it an essential component in U.S. space and defense programs.

Mass Producing Semiconductors

Manufacturing semiconductors commercially isn’t like making them in a lab. Production must be cost effective and materials pure to one part in ten billion—less than a pinch of salt in three freight cars of sugar.

Better crystal growing and refining techniques brought bigger wafers and higher quality. Innovations in complex fabrication processes, such as oxide masking, photolithography, high-temperature diffusion, ion-implantation, film deposition, and etching, increased yields and improved reliability.

High-volume assembly of transistors and diodes in miniature packages, together with sophisticated testing equipment, reduced costs and increased production to meet growing demand.

The Integrated Circuit

Throughout history, military needs (and military budgets) have spurred technological innovation. During the Cold War, demand for increasingly complex yet smaller, lighter, and more reliable electronic equipment fed the quest for better ways to package transistors.

Modules and “hybrid” microcircuits squeezed components into miniature enclosures. But engineers dreamed of fabricating multiple devices and interconnections on a single piece of semiconductor material.

A Solid Block Without Wires

Beginning in the mid-1950s, several research groups and scientists embarked on projects aimed at realizing British engineer G.W.A. Dummer’s vision of a complete electronic circuit on a single piece of semiconductor material, with Dummer himself among them in the U.K.

Different teams followed different paths.

Engineers at Bell Labs and IBM independently built complex multi-junction devices that operated as digital counters. The Air Force funded RCA to make integrated logic gates and shift registers. William Shockley became obsessed with developing a four-layer diode switch, which led to his company’s downfall. Westinghouse even pursued an idea proposed by MIT professor Arthur von Hippel that involved arranging materials at the molecular level to perform electronic functions.

Until 1958, however, nobody had demonstrated a general-purpose solution.

Kilby’s Flying Wires

Texas Instruments hired Jack Kilby to design transistor circuit modules. Kilby had other ideas.

Believing the modules a dead end, he spent TI’s company-wide summer vacation in 1958 looking for an alternative. Kilby etched separate transistor, capacitor, and resistor elements on a single germanium slice, then connected them with fine gold “flying” wires into oscillator and amplifier “solid circuits.”

TI introduced Kilby’s Type 502 Binary Flip-Flop in 1959. Although Kilby’s hand-crafted solid-circuit approach was impractical for mass production, his work pointed the way to a practical monolithic solution.

Fairchild’s Approach: The Planar Process

The next step in IC evolution after Kilby’s “flying wire” circuits came at Fairchild Semiconductor in 1959.

Jean Hoerni’s “planar” process improved transistor reliability by creating a flat surface structure protected with an insulating silicon dioxide layer. Robert Noyce then proposed interconnecting transistors on the wafer by depositing aluminum “wires” on top. Following Noyce’s lead, Jay Last’s team built the first planar IC in 1960, spawning the modern computer chip industry.

Kilby and Noyce are considered co-inventors of the IC. Kilby alone received the 2000 Nobel Prize because Noyce had died in 1990.

The first production version of the Micrologic “F” element flip-flop planar IC was made by Isy Haas and Lionel Kattner in September 1960.

ICs Rocket to Success

Early integrated circuits were expensive. So engineers used them sparingly, where small size and low power consumption were paramount. Aerospace fit that description.

The Apollo Guidance Computer used Fairchild ICs and inspired new manufacturers such as Philco. Westinghouse joined Texas Instruments to build custom circuits for the Minuteman II missile.

Tortoise of Transistors Wins the Race

MOS shift register, 1964: the first commercial MOS IC, Robert Norman’s 20-bit shift register using 120 p-channel transistors.

Engineers conceived of semiconductor amplifiers in the 1920s. Four decades later, they finally got them to work.

A simpler structure, the MOS (Metal Oxide Semiconductor) transistor promised higher density than junction transistors at lower cost and power. Technical problems delayed its use in ICs until the late 1960s, but more than 99% of today’s chips are MOS.

The MOS Integrated Circuit

The first ICs used bipolar transistors, so-called because both the electrons and “holes” (an electron deficit) acted as charge carriers. MOS transistors, named for their sandwich-like layers of Metal, Oxide and Semiconductor (silicon), employ only one type of carrier: either electrons or holes.

In general, MOS transistors are slower than bipolar ones. But they are smaller, cheaper, and less power-hungry. Once developed, they were quickly adopted for consumer products like clocks and calculators.

The pioneering MOS manufacturers at first had trouble building reliable chips. But by the early 1970s, innovations from many companies—including Fairchild, IBM, Philips, and RCA—had fixed the reliability problems, opening the door to a new wave of start-up MOS companies such as Intel and Mostek.

“Moore’s Law” inspired manufacturing advances that continually produced faster and more complex MOS chips. And electronic design automation (EDA) tools made possible chips with hundreds of thousands, and eventually hundreds of millions, of transistors.

Watching Power Use

Reducing power use was particularly important in small, battery operated instruments or portable consumer products such as digital watches. These devices were first to use the new CMOS (Complementary MOS), which significantly cut chip power consumption.

As ICs held more transistors, low-power CMOS became the predominant technology, helping prevent overheating.

Memory Integrated Circuits

Graphics engine HD 3800: much of the die area of this ATI/AMD Mobility Radeon high-performance graphics engine chip is consumed by semiconductor memory.

A chain is only as strong as its weakest link. That principle applies to computer performance, too. As fast ICs replaced vacuum tubes and transistors for logic, slow magnetic core memory became a performance bottleneck.

Happily, ICs also offered the solution. Falling cost made them economical for memory applications.

By the early 1980s, semiconductors were the dominant memory type. And since memory cells can share the same chip with logic, new architectures such as microprocessors emerged. Much of the chip area of modern microprocessors and graphics engines holds memory, not logic.

Semiconductor Memory Integrated Circuits

In the mid-1960s, with capacities of just 8 to 64 bits, memory ICs were used only for high-speed local scratchpad storage. Bipolar technology eventually allowed sizes of 128 to 1,024 bits. In the 1970s, the higher density of the metal-oxide-semiconductor (MOS) process let semiconductors compete with magnetic core on price. By 2000, DRAM chips holding more than a billion bits were available.

Moore’s Law

The number of transistors and other components on integrated circuits will double every year for the next 10 years. So predicted Gordon Moore, Fairchild Semiconductor’s R&D Director, in 1965.

“Moore’s Law” came true. In part, this reflected Moore’s accurate insight. But Moore also set expectations, inspiring a self-fulfilling prophecy.

Doubling chip complexity doubled computing power without significantly increasing cost. The number of transistors per chip rose from a handful in the 1960s to billions by the 2010s.

The Man Behind The “Law”

A Ph.D. in chemistry and physics, Gordon Moore joined Shockley Semiconductor in 1956, left with Robert Noyce and other Shockley colleagues to create Fairchild in 1957, and in 1968 co-founded Intel—serving in roles from Executive Vice President to Chairman/CEO. He became Chairman Emeritus in 1997.

Moore made his original prediction to encourage sales of ever more complex Fairchild Semiconductor chips. With new data, in 1975 he revised his prediction, forecasting that IC density would double every two years. Meeting Moore’s timetable became the goal for engineers who design chips.
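As a rough sanity check (my arithmetic, not a figure from the exhibit), projecting a doubling every two years from the Intel 4004's roughly 2,300 transistors in 1971 (a figure that appears later in this article) does land in the billions by the 2010s:

```python
# Back-of-the-envelope check of "doubling every two years",
# starting from the Intel 4004's roughly 2,300 transistors in 1971.
start_year, start_count = 1971, 2300

for year in (1981, 1991, 2001, 2011):
    doublings = (year - start_year) // 2
    projected = start_count * 2 ** doublings
    print(year, f"~{projected:,} transistors")
```

Real chips do not track the curve exactly, but the order of magnitude matches the text: a few thousand transistors in the early 1970s, billions by the 2010s.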

Bigger Wafers, Cheaper Chips

As IC chips grew larger and more densely packed with smaller transistors, the wafers they were fabricated on also grew. This combination reduced the cost per transistor from several dollars in the early 1960s to far less than the price of a grain of rice today.

Over this 50-year progression, the smallest physical element on an IC has shrunk from 50 microns (μ) — smaller than a human hair, which is 80-100 μ in diameter — to less than 0.1 μ. (One micron is one thousandth of a millimeter, or one millionth of a meter.)
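Working through the figures above (the density gain per unit area is my own rough inference from the linear shrink, not a number from the exhibit):

```python
# Feature-size arithmetic from the figures above (values in microns).
early_feature = 50.0    # smallest element in the early 1960s, about 50 microns
modern_feature = 0.1    # less than 0.1 micron today

linear_shrink = early_feature / modern_feature   # 500x smaller features
area_gain = linear_shrink ** 2                   # ~250,000x more devices per unit
                                                 # area (rough inference only)
print(f"Linear shrink: {linear_shrink:.0f}x")
print(f"Approximate density gain: {area_gain:,.0f}x")
print(f"0.1 micron = {0.1 / 1000} mm = {0.1 / 1_000_000} m")
```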

The Smart IC: Microprocessors

The CPU (“Central Processing Unit”) is the heart of a computer – the part that decodes and executes instructions. As integrated circuits grew more complex, with more computer logic squeezed onto each device, it became clear that eventually an entire CPU would fit on a single chip, called a “microprocessor.”

But the first step was putting the CPU onto just a few chips.
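To make "decodes and executes instructions" concrete, here is a toy fetch-decode-execute loop; the four-instruction machine is invented purely for illustration and does not correspond to any real microprocessor's instruction set:

```python
# Toy CPU: fetch, decode, and execute a tiny made-up instruction set.
# (Illustrative only -- not the instruction set of any real chip.)

def run(program):
    registers = [0, 0, 0, 0]        # four general-purpose registers
    pc = 0                          # program counter
    while pc < len(program):
        op, a, b = program[pc]      # fetch the next instruction
        if op == "LOADI":           # decode and execute it
            registers[a] = b        #   load an immediate value
        elif op == "ADD":
            registers[a] += registers[b]
        elif op == "PRINT":
            print("r%d =" % a, registers[a])
        pc += 1                     # step to the next instruction
    return registers

run([("LOADI", 0, 2), ("LOADI", 1, 3), ("ADD", 0, 1), ("PRINT", 0, 0)])
```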

The Intel 4004

Commercial success rests on the ability to develop new technology… and the vision to recognize its potential.

When a customer requested custom ICs for its new calculator, Intel’s Ted Hoff proposed an alternate solution: a general-purpose 4-bit computer on just four chips. Federico Faggin adapted the company’s MOS memory technology to squeeze the 4004 microprocessor’s 2,300 transistors onto a single chip.

Intel, seeing the potential for sales to other customers, secured marketing rights. They introduced this groundbreaking microcomputer chip family to the world in 1971 as the MCS-4 Micro Computer Set.

The Chip Champ: Intel

Consumers generally buy products with little thought to who made the individual components inside. Intel changed that equation in 1991 with its bold “Intel Inside” campaign, making the company a household name.

Founded in 1968 by Fairchild Semiconductor alumni Robert Noyce and Gordon Moore, Intel (Integrated Electronics) began making semiconductor memory chips, then focused on microprocessors when Japanese companies surpassed it in memories.

Intel’s ultimate dominance as the largest chipmaker, along with Microsoft’s as the leading PC software company, reshaped the industry from vertical (each company making everything) to horizontal (specialists making each element).

The 8-bit Generation

Intel’s 4004 processed one 4-bit “nibble” at a time. But broader use required microprocessors able to manipulate at least 8-bit “bytes.” When the Computer Terminal Corporation (CTC) ordered a custom-designed chip, Intel began developing an 8-bit solution: the 8008.

In the end, CTC didn’t use the 8008. But Intel, recognizing the value of a general-purpose chip, marketed it in 1972.

Customer response to the 8008 inspired Intel’s more powerful 8080 and Motorola’s 6800 in 1974. Both succeeded as replacements for lower-complexity logic chips, and as the heart of new personal computers.

Intel “x86” Family and the Microprocessor Wars

Xeon wafer: this 300 mm wafer holds 94 Xeon x86-compatible microprocessors. The Xeon processor was designed for the server, workstation, and embedded system markets.

More is never enough. As cheaper memory encouraged bigger programs, 8 bits became insufficient.

Intel developed the 16-bit 8086 as a stopgap while it worked on a more sophisticated chip. But after IBM adopted the 8088, a low-cost version of the 8086, the stopgap became an industry standard.

Intel’s 80386 later extended the architecture to 32 bits.

Generations of the Intel x86 Family

Shown below are generations of Intel microprocessors derived from the original 8086 architecture. As the number of bits in the CPU increased from 16 to 32 to 64, the number of input/output and power-supply leads and the power consumption of the chip also grew, resulting in significant increases in the size and complexity of the packages.

  • 8086, 1978
  • 8088, 1979
  • 80286, 1982
  • 80386, 1985
  • 80486, 1989
  • Pentium, 1993
  • Pentium Pro, 1995
  • Pentium III, 1999
  • Pentium 4, 2000

Intel’s success inspired competitors. AMD, NEC, and Nexgen pursued variants of Intel’s x86 devices. Others, including Motorola, National, and Zilog, introduced competing architectures. Seeking higher performance and lower cost, large workstation manufacturers developed their own RISC (Reduced Instruction Set Computer) chips.

Ultimately, the x86 architecture dominated the PC market.

RISC: Is Simpler Better?

As microprocessor instruction sets grew more complex, it was proposed that sequences of simpler instructions could perform the same functions faster with smaller chips.
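A minimal sketch of that idea (the instructions are invented for illustration, not drawn from any real instruction set): one "complex" memory-to-memory add versus the same work done as a sequence of simple load, add, and store steps.

```python
# Illustrative only: one "complex" memory-to-memory add versus the same work
# expressed as a sequence of simpler load/add/store steps (the RISC idea).

memory = {"x": 2, "y": 3, "z": 0}

# Complex-instruction style: a single instruction reads memory twice and adds.
def add_mem(dst, src1, src2):
    memory[dst] = memory[src1] + memory[src2]

# RISC style: only loads and stores touch memory; arithmetic uses registers.
def risc_sequence():
    r1 = memory["x"]          # LOAD  r1, x
    r2 = memory["y"]          # LOAD  r2, y
    r3 = r1 + r2              # ADD   r3, r1, r2
    memory["z"] = r3          # STORE r3, z

add_mem("z", "x", "y")
print(memory["z"])            # 5
memory["z"] = 0
risc_sequence()
print(memory["z"])            # 5
```

Each simple step does less work, so it can be executed faster on smaller, simpler hardware; the compiler strings the steps together.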

IBM developed a Reduced Instruction Set Computer (RISC) in 1980. But the approach was widely adopted only after the U.S. government funded university research programs and workstation vendors developed their own RISC chips. In 1991, IBM, Motorola, and Apple allied to produce the PowerPC.

None of the RISC suppliers were able to prevail in the PC market, but the approach thrived in microcontroller and specialized applications.

Designing Integrated Circuits

Integrated circuits give new meaning to the cliché, “big things come in small packages.” Each holds millions of microscopic electronic components—diodes, transistors, capacitors, and resistors—configured to amplify, condition, store, and switch electronic signals.

Packing so much in so little space is challenging. It begins with a circuit diagram that is translated into a physical layout of devices and interconnections on a chip.

How Many Engineers Does it Take to Design an IC?

One or two engineers armed with slide rules or calculators could design early integrated circuits. A layout person then created masks on plastic sheets, which were methodically checked and converted to glass plates used for photolithographic “printing” onto silicon wafers.

As ICs grew to millions of transistors, however, slide rules yielded to computers. Teams of engineers now design circuits, programming each step in high-level languages that automate the process.

The detailed chip layouts are now also generated automatically, having grown more complex than a street map of the entire United States.

From a Slice of Crystal to an IC Wafer

Building an integrated circuit requires a series of manufacturing steps that introduce precise quantities of chemicals onto selected areas of the silicon wafer to form microscopic devices and interconnections. And each step must be performed in an ultra-clean environment to avoid contaminants.

Completed wafers are tested, sliced into individual chips, assembled into protective packages, and then tested again.

Below is a brief introduction to the entire manufacturing process.

Evolution of the Manufacturing Process

Few industrial processes ever devised can match the complexity of manufacturing modern integrated circuits.

Early ICs were built on custom equipment costing a few thousand dollars, housed in conventional laboratories, and operated by workers in street clothes. Not any more!

As circuit dimensions shrank toward atomic levels, equipment became more sophisticated—and more expensive. Each step of the process became a specialized industry in itself.

Manufacturing now requires ultra-clean, block-long, multi-billion-dollar factories that operate with very few people present.

The Manufacturing Process

A step-by-step process transforms a semiconductor crystal into an integrated circuit.

First, silicon wafers sawn from an ingot are polished to a mirror finish. After processing using photolithographic techniques—similar to printing processes—the completed wafer is tested, cut into individual chips, and assembled in robust packages for insertion into computer boards.
