
[Introduction to Computer Systems] 3.1 The Evolution of Computing

The central processing unit is the "brain" of a computer, and its evolution can be seen as a microcosm of the evolution of computer systems as a whole. To make computation faster and cheaper, every part of the computer system has kept changing. In this section we briefly survey those changes and the reasons behind them.


  • Boolean operations
  • Gates and flip-flops
  • Computer organization and architecture
  • Structure and function
  • Designing for performance; the power wall
  • Evolution of the Intel x86 architecture
  • Embedded systems and ARM
  • Performance evaluation
  • The shift from uniprocessors to multiprocessors

3.1.1 How Digital Computers Compute

All digital computers work on the same principle: manipulating on/off signals to implement all kinds of logic functions.

There have been many ways to produce these on/off signals, from mechanical devices to electromagnetic relays, vacuum tubes, transistors, and integrated circuits. Generation after generation, computers have become faster, smaller, able to store more, and cheaper, until, as the Tang poem puts it, "the swallows that once nested before the halls of the nobles now fly into the homes of ordinary folk."

All digital computers are based on the binary system of 0s and 1s, together with the rules of logic laid down by the English mathematician George Boole in the 1850s.

A computer can represent binary digits (bits) in different ways: mechanically (with wheels or levers) or electronically (with voltages or currents). Whatever the implementation, the underlying principle is the same, and sequences of bits can represent numbers and letters.
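As a concrete illustration (not part of the original text), the same 8-bit pattern can be read either as a number or as a letter, depending entirely on interpretation. A minimal C sketch:

```c
#include <stdio.h>

int main(void) {
    unsigned char bits = 0x41;          /* the bit pattern 01000001 */

    /* Interpreted as an unsigned number, the pattern means 65;    */
    /* interpreted as an ASCII character, the same pattern is 'A'. */
    printf("as a number: %u\n", bits);  /* prints 65 */
    printf("as a letter: %c\n", bits);  /* prints A  */
    return 0;
}
```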

Boolean Logic

The self-taught mathematician George Boole showed in 1847 that only three operations (AND, OR, and NOT) are needed to express any logical function. In 1854, by then a professor of mathematics at Queen's College, Cork, Boole extended and refined these ideas in his study of the laws of thought.

For decades, however, Boole's ideas seemed to have no practical use, until Claude Shannon applied them to the design of telephone switching circuits in the 1930s. Today Boole's system is known as Boolean algebra, and it is the foundation of digital logic.

The most basic building block of computer circuitry is the "gate", which can be implemented with mechanical or electronic switches. A gate computes a Boolean function to determine its output signal (0 or 1), or stores a value in a flip-flop, a memory element built from several gates.

The three most basic gates are AND, OR, and NOT; others, such as NAND (NOT AND), can be built by combining them, and arithmetic, memory, and instruction execution can all be implemented on top of these. A modern computer is, in effect, many millions of NAND gates.
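To make that claim concrete, here is a minimal C sketch (an illustration, not from the original text) that models a NAND gate on 0/1 values and then builds NOT, AND, and OR out of NAND alone, which is why NAND is called a universal gate:

```c
#include <assert.h>
#include <stdio.h>

/* Model each signal as 0 or 1. NAND is the only primitive. */
static int nand(int a, int b) { return !(a && b); }

/* NOT, AND, and OR built purely from NAND gates. */
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return nand(nand(a, b), nand(a, b)); }
static int or_(int a, int b)  { return nand(nand(a, a), nand(b, b)); }

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            /* Check against C's built-in logical operators. */
            assert(not_(a)    == !a);
            assert(and_(a, b) == (a && b));
            assert(or_(a, b)  == (a || b));
            printf("a=%d b=%d  AND=%d OR=%d NOT(a)=%d\n",
                   a, b, and_(a, b), or_(a, b), not_(a));
        }
    return 0;
}
```

A flip-flop is equally direct: cross-coupling two NAND gates yields an SR latch, the simplest storage element.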

Digital Machines

Digital logic machines may seem the embodiment of 21st century technology. Yet they are centuries old.

The earliest examples used gears, rods, wheels, or sliding plates as switches. Charles Babbage designed several mechanical calculators in the 1800s, and a computer called the “Analytical Engine.” In 1930s Germany, Konrad Zuse built a computer with logic switches made from sliding metal plates and pins.

Relay Switches

Early logic switches were purely mechanical. Relays, by comparison, use mechanical switches that are opened or closed with electromagnets.

George Stibitz used relays in 1937 for a demonstration adder (called “Model K” because he built it on his kitchen table). This led to the Bell Labs “Model 1 Complex Calculator” in 1939. That same year, Konrad Zuse built a computer using 600 relays.

In 1944, IBM built Howard Aiken’s design for the Automatic Sequence Controlled Calculator (Harvard Mark 1) with 3,500 relays—nearly six times the number Zuse used just five years earlier.

Electronic Vacuum Tubes

Ambrose Fleming patented the two-electrode vacuum tube diode in 1904, which swiftly replaced “cat’s whisker” crystal detectors in early radios. Radio pioneer Lee de Forest added a third electrode in 1906 to create the triode tube, with which he built two key electronic building blocks (amplifiers and oscillators) in 1911.

For decades, the soft glow of vacuum tubes lit up radios and by the 1950s had largely replaced relays as computing switches. The clack of mechanical relays yielded to the hum and heat of power hungry, but far faster tubes.

How a Vacuum Tube Works

Vacuum tubes were developed at the turn of the 20th century. A vacuum tube is an electronic valve (like a faucet) that controls the flow of electricity, allowing a small signal to control a larger one. Tubes can also be used as switches, representing a zero or a one, which is how they were used in early electronic computers. Vacuum tubes use a heated filament, called a cathode, to boil off electrons into a vacuum. These electrons then pass through a grid (or several grids) that controls their flow. The electrons then strike the anode (plate) and are absorbed. By designing the cathode, grid(s), and plate properly, the tube will either amplify or switch.

The U.S. Army’s ENIAC represented the world’s first large-scale use of electronics for computing. It had about 18,000 vacuum tubes.

The Next Generation: Semiconductors

Smaller. Cheaper. Faster. Cooler. The invention in 1947 of semiconductor transistors – miniature electronic devices based on the principles of solid-state physics – brought a remarkable new alternative to vacuum tube amplifiers and switches.

Today, multiple transistors connected in “integrated circuits” are fabricated with a process similar to printing. Steady reduction in transistor size has revolutionized computing, making systems ever tinier, speedier, and more power efficient.

What is a Semiconductor?

There are two basic types of materials: conductors, which let electricity flow freely, and insulators, which don’t. Semiconductors have a foot in both camps. Their conductivity can change depending on electrical, thermal, or physical stimulation. This lets them act as amplifiers, switches, or other electrical components.

British physicist Michael Faraday experimented with semiconductors in 1833. German physicist Ferdinand Braun discovered in 1874 that galena crystals could function as diodes, letting electricity flow in just one direction. Indian physicist Jagdish Chandra Bose patented its use as a crystal radio signal detector in 1901.

Inventing the Transistor

Scientists in the 1920s proposed building amplifiers from semiconductors. But they didn’t understand the materials well enough to actually do it. In 1939, William Shockley at AT&T’s Bell Labs revived the idea as a way to replace vacuum tubes.

Under Shockley’s direction, John Bardeen and Walter Brattain demonstrated in 1947 the first semiconductor amplifier: the point-contact transistor, with two metal points in contact with a sliver of germanium. In 1948, Shockley conceived the more robust junction transistor, which was first built in 1951.

The three shared the 1956 Nobel Prize in Physics for their inventions.

How Bardeen and Brattain’s Transistor Worked

Bardeen and Brattain’s transistor consisted of a sliver of germanium with two closely spaced gold point contacts held in place by a plastic wedge. They selected germanium material that had been treated to contain an excess of electrons, called N-type. When they caused an electric current to flow through one contact (called the emitter) it induced a scarcity of electrons in a thin layer (changing it locally to P-type) near the germanium surface. This changed the amount of current that could flow through the collector contact. A small change in the current through the emitter caused a larger change in the collector current. They had created a current amplifier.
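The essential relationship the text describes is simply that a small change in emitter current produces a proportionally larger change in collector current. The numbers in the sketch below are purely illustrative; the gain of the actual 1947 device is not given here.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical, illustrative numbers only. */
    double gain          = 2.5;     /* assumed current gain           */
    double delta_emitter = 0.10e-3; /* 0.10 mA change at the emitter  */

    /* The amplifier property: collector change = gain * emitter change. */
    double delta_collector = gain * delta_emitter;

    printf("emitter change:   %.2f mA\n", delta_emitter * 1e3);
    printf("collector change: %.2f mA\n", delta_collector * 1e3);
    return 0;
}
```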

Transistors Take Off

AT&T, which had invented the transistor, licensed the technology in 1952. It hoped to benefit from others’ improvements.

Transistors swiftly left the lab and entered the marketplace. Although costlier than vacuum tubes, they were ideal when portability and battery operation were important. The 1952 Sonotone hearing aid was America’s first transistorized consumer product. AT&T also used transistor amplifiers in its long distance telephone system. They soon appeared as switches, beginning with an experimental computer at Manchester University in 1953.

As prices dropped, uses multiplied. By 1960, most new computers were transistorized.

Switching to Silicon

America’s high-tech home might have been “Germanium Valley” if named for the material in early transistors. Silicon offered better performance, but was too hard to work with.

That changed in 1954. “Contrary to what my colleagues have told you about the bleak prospects for silicon transistors,” announced Texas Instruments’ Gordon Teal at a conference, “I happen to have a few of them here.” He then demonstrated a record player that failed when its germanium transistors were heated, but not with silicon transistors.

By 1960, most transistors were silicon. TI was their leading manufacturer.

The silicon transistor’s ability to operate at temperatures up to 150°C made it an essential component in U.S. space and defense programs.

Mass Producing Semiconductors

Manufacturing semiconductors commercially isn’t like making them in a lab. Production must be cost effective and materials pure to one part in ten billion—less than a pinch of salt in three freight cars of sugar.

Better crystal growing and refining techniques brought bigger wafers and higher quality. Innovations in complex fabrication processes, such as oxide masking, photolithography, high-temperature diffusion, ion-implantation, film deposition, and etching, increased yields and improved reliability.

High-volume assembly of transistors and diodes in miniature packages, together with sophisticated testing equipment, reduced costs and increased production to meet growing demand.

The Integrated Circuit

Throughout history, military needs (and military budgets) have spurred technological innovation. During the Cold War, demand for increasingly complex yet smaller, lighter, and more reliable electronic equipment fed the quest for better ways to package transistors.

Modules and “hybrid” microcircuits squeezed components into miniature enclosures. But engineers dreamed of fabricating multiple devices and interconnections on a single piece of semiconductor material.

A Solid Block Without Wires

Beginning in the mid-1950s, several research groups and scientists, including the British engineer G.W.A. Dummer, who had first articulated the idea, embarked on projects aimed at realizing Dummer's vision of a complete electronic circuit on a single piece of semiconductor material.

Different teams followed different paths.

Engineers at Bell Labs and IBM independently built complex multi-junction devices that operated as digital counters. The Air Force funded RCA to make integrated logic gates and shift registers. William Shockley became obsessed with developing a four-layer diode switch, which led to his company’s downfall. Westinghouse even pursued an idea proposed by MIT professor Arthur von Hippel that involved arranging materials at the molecular level to perform electronic functions.

Until 1958, however, nobody had demonstrated a general-purpose solution.

Kilby’s Flying Wires

Texas Instruments hired Jack Kilby to design transistor circuit modules. Kilby had other ideas.

Believing the modules a dead end, he spent TI’s company-wide summer vacation in 1958 looking for an alternative. Kilby etched separate transistor, capacitor, and resistor elements on a single germanium slice, then connected them with fine gold “flying” wires into oscillator and amplifier “solid circuits.”

TI introduced Kilby’s Type 502 Binary Flip-Flop in 1959. Although Kilby’s hand-crafted solid-circuit approach was impractical for mass production, his work pointed the way to a practical monolithic solution.

Fairchild’s Approach: The Planar Process

The next step in IC evolution after Kilby’s “flying wire” circuits came at Fairchild Semiconductor in 1959.

Jean Hoerni’s “planar” process improved transistor reliability by creating a flat surface structure protected with an insulating silicon dioxide layer. Robert Noyce then proposed interconnecting transistors on the wafer by depositing aluminum “wires” on top. Following Noyce’s lead, Jay Last’s team built the first planar IC in 1960, spawning the modern computer chip industry.

Kilby and Noyce are considered co-inventors of the IC. Kilby alone received the 2000 Nobel Prize because Noyce had died in 1990.

This first production version of the Micrologic “F” element flip-flop planar IC was made by Isy Haas and Lionel Kattner in September 1960.

ICs Rocket to Success

Early integrated circuits were expensive. So engineers used them sparingly, where small size and low power consumption were paramount. Aerospace fit that description.

The Apollo Guidance Computer used Fairchild ICs and inspired new manufacturers such as Philco. Westinghouse joined Texas Instruments to build custom circuits for the Minuteman II missile.

Tortoise of Transistors Wins the Race

MOS shift register, 1964

This was the first commercial MOS IC: Robert Norman’s 20-bit shift register using 120 p-channel transistors.

Engineers conceived of semiconductor amplifiers in the 1920s. Four decades later, they finally got them to work.

A simpler structure, the MOS (Metal Oxide Semiconductor) transistor promised higher density than junction transistors at lower cost and power. Technical problems delayed its use in ICs until the late 1960s, but more than 99% of today’s chips are MOS.

The MOS Integrated Circuit

The first ICs used bipolar transistors, so-called because both the electrons and “holes” (an electron deficit) acted as charge carriers. MOS transistors, named for their sandwich-like layers of Metal, Oxide and Semiconductor (silicon), employ only one type of carrier: either electrons or holes.

In general, MOS transistors are slower than bipolar ones. But they are smaller, cheaper, and less power hungry. Once developed, they were quickly adopted for consumer products like clocks and calculators.

The pioneering MOS manufacturers at first had trouble building reliable chips. But by the early 1970s, innovations from many companies—including Fairchild, IBM, Philips, and RCA—had fixed the reliability problems, opening the door to a new wave of start-up MOS companies such as Intel and Mostek.

“Moore’s Law” inspired manufacturing advances that continually produced faster and more complex MOS chips. And electronic design automation (EDA) tools made possible chips with hundreds of thousands, and eventually hundreds of millions, of transistors.

Watching Power Use

Reducing power use was particularly important in small, battery operated instruments or portable consumer products such as digital watches. These devices were first to use the new CMOS (Complementary MOS), which significantly cut chip power consumption.

As ICs held more transistors, low-power CMOS became the predominant technology, helping prevent overheating.
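Why CMOS saves power comes down to its complementary structure: each n-channel transistor is paired with a p-channel one, so in any stable state only one side of the pair conducts and there is essentially no path from the power supply to ground. The following minimal C sketch treats each MOS transistor as an ideal voltage-controlled switch; it is an illustration of the idea, not a circuit simulator.

```c
#include <stdio.h>

/* Treat each MOS transistor as an ideal switch controlled by its gate:
 * an NMOS device conducts when its gate is 1, a PMOS device when it is 0. */
static int nmos_on(int gate) { return gate == 1; }
static int pmos_on(int gate) { return gate == 0; }

int main(void) {
    for (int in = 0; in <= 1; in++) {
        int pull_up   = pmos_on(in);  /* PMOS connects output to Vdd    */
        int pull_down = nmos_on(in);  /* NMOS connects output to ground */
        int out       = pull_up ? 1 : 0;

        /* Exactly one of the two transistors conducts, so the inverter
         * drives a clean 0 or 1 and draws essentially no static current. */
        printf("in=%d  out=%d  static path Vdd->GND: %s\n",
               in, out, (pull_up && pull_down) ? "yes" : "no");
    }
    return 0;
}
```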

Memory Integrated Circuits

Graphics engine HD 3800

Much of the die area of this ATI/AMD Mobility Radeon high-performance graphics engine chip is consumed by semiconductor memory.

A chain is only as strong as its weakest link. That principle applies to computer performance, too. As fast ICs replaced vacuum tubes and transistors for logic, slow magnetic core memory became a performance bottleneck.

Happily, ICs also offered the solution. Falling cost made them economical for memory applications.

By the early 1980s, semiconductors were the dominant memory type. And since memory cells can share the same chip with logic, new architectures such as microprocessors emerged. Much of the chip area of modern microprocessors and graphics engines holds memory, not logic.

Semiconductor Memory Integrated Circuits

In the mid-1960s, with capacities of only 8 to 64 bits, memory ICs were used only for high-speed, local scratchpad storage. Bipolar technology eventually allowed sizes from 128 to 1,024 bits. In the 1970s, the higher density of the metal-oxide-semiconductor (MOS) process let semiconductor memory compete with magnetic core on price. By 2000, DRAM chips of over 1,000 megabits (one gigabit) were available.

Moore’s Law

The number of transistors and other components on integrated circuits will double every year for the next 10 years. So predicted Gordon Moore, Fairchild Semiconductor’s R&D Director, in 1965.

“Moore’s Law” came true. In part, this reflected Moore’s accurate insight. But Moore also set expectations, inspiring a self-fulfilling prophecy.

Doubling chip complexity doubled computing power without significantly increasing cost. The number of transistors per chip rose from a handful in the 1960s to billions by the 2010s.

The Man Behind The “Law”

A Ph.D. in chemistry and physics, Gordon Moore joined Shockley Semiconductor in 1956, left with Robert Noyce and other Shockley colleagues to create Fairchild in 1957, and in 1968 co-founded Intel—serving in roles from Executive Vice President to Chairman/CEO. He became Chairman Emeritus in 1997.

Moore made his original prediction to encourage sales of ever more complex Fairchild Semiconductor chips. With new data, in 1975 he revised his prediction, forecasting that IC density would double every two years. Meeting Moore’s timetable became the goal for engineers who design chips.
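The arithmetic behind the "law" is plain exponential growth: doubling every two years multiplies the transistor count by roughly a factor of 1,000 every twenty years. The sketch below starts from the 2,300 transistors of the Intel 4004 (discussed later in this section) and applies the revised two-year rule; the projection is illustrative arithmetic, not historical data.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Start from a chip with 2,300 transistors in 1971, growing under
     * Moore's revised 1975 rule of doubling every two years. */
    double transistors = 2300.0;
    for (int year = 1971; year <= 2011; year += 10) {
        printf("%d: ~%.0f transistors\n", year, transistors);
        transistors *= pow(2.0, 10.0 / 2.0);  /* five doublings per decade */
    }
    return 0;
}
```

Four decades of doubling every two years turns a few thousand transistors into a few billion, which matches the growth described above.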

Bigger Wafers, Cheaper Chips

As IC chips grew larger and more densely packed with smaller transistors, the wafers they were fabricated on also grew. This combination reduced the cost per transistor from several dollars in the early 1960s to cheaper than a grain of rice today.

During the 50-year progression shown here, the smallest physical element on an IC has shrunk from 50 microns (μ) — smaller than a human hair, which is 80-100 μ in diameter — to less than 0.1 μ. (One micron is one thousandth of a millimeter, or one millionth of a meter.)
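The shrink quoted above is easy to quantify: going from 50 μ to 0.1 μ is a 500-fold reduction in linear dimension, so roughly 500 × 500 = 250,000 times as many of the smallest features fit in the same area. A quick check of that arithmetic in C:

```c
#include <stdio.h>

int main(void) {
    double early_feature = 50.0;  /* microns, early 1960s       */
    double late_feature  = 0.1;   /* microns, fifty years later */

    double linear_shrink = early_feature / late_feature;
    double area_factor   = linear_shrink * linear_shrink;

    printf("linear shrink: %.0fx\n", linear_shrink);              /* 500x     */
    printf("features per unit area: %.0fx more\n", area_factor);  /* 250000x  */
    return 0;
}
```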

The Smart IC: Microprocessors

The CPU (“Central Processing Unit”) is the heart of a computer – the part that decodes and executes instructions. As integrated circuits grew more complex, with more computer logic squeezed onto each device, it became clear that eventually an entire CPU would fit on a single chip, called a “microprocessor.”

But the first step was putting the CPU onto just a few chips.
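Before turning to those chips, the phrase "decodes and executes instructions" can be made concrete with a minimal fetch-decode-execute loop. The toy accumulator machine below is invented purely for illustration; it is not the instruction set of any real microprocessor.

```c
#include <stdio.h>

/* A toy CPU: one accumulator, a tiny memory, and three invented opcodes. */
enum { LOAD = 0, ADD = 1, HALT = 2 };

int main(void) {
    /* Each instruction is an opcode followed by an operand (a memory address). */
    int program[] = { LOAD, 6, ADD, 7, HALT, 0, /* data: */ 40, 2 };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        int opcode  = program[pc];      /* fetch */
        int operand = program[pc + 1];
        pc += 2;

        switch (opcode) {               /* decode and execute */
        case LOAD: acc = program[operand];  break;
        case ADD:  acc += program[operand]; break;
        case HALT: running = 0;             break;
        }
    }
    printf("accumulator = %d\n", acc);  /* prints 42 */
    return 0;
}
```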

The Intel 4004

Commercial success rests on the ability to develop new technology… and the vision to recognize its potential.

When a customer requested custom ICs for its new calculator, Intel’s Ted Hoff proposed an alternate solution: a general-purpose 4-bit computer on just four chips. Federico Faggin adapted the company’s MOS memory technology to squeeze the 4004 microprocessor’s 2,300 transistors onto a single chip.

Intel, seeing the potential for sales to other customers, secured marketing rights. They introduced this groundbreaking microcomputer chip family to the world in 1971 as the MCS-4 Micro Computer Set.

The Chip Champ: Intel

Consumers generally buy products with little thought to who made the individual components inside. Intel changed that equation in 1991 with its bold “Intel Inside” campaign, making the company a household name.

Founded in 1968 by Fairchild Semiconductor alumni Robert Noyce and Gordon Moore, Intel (Integrated Electronics) began making semiconductor memory chips, then focused on microprocessors when Japanese companies surpassed it in memories.

Intel’s ultimate dominance as the largest chipmaker, along with Microsoft’s as the leading PC software company, reshaped the industry from vertical (each company making everything) to horizontal (specialists making each element).

The 8-bit Generation

Intel’s 4004 processed one 4-bit “nibble” at a time. But broader use required microprocessors able to manipulate at least 8-bit “bytes.” When the Computer Terminal Corporation (CTC) ordered a custom-designed chip, Intel began developing an 8-bit solution: the 8008.

In the end, CTC didn’t use the 8008. But Intel, recognizing the value of a general-purpose chip, marketed it in 1972.

Customer response to the 8008 inspired Intel’s more powerful 8080 and Motorola’s 6800 in 1974. Both succeeded as replacements for lower-complexity logic chips, and as the heart of new personal computers.

Intel “x86” Family and the Microprocessor Wars

Xeon wafer

This 300 mm wafer holds 94 Xeon x86-compatible microprocessors. The Xeon processor was designed for server, workstation and embedded system markets.

More is never enough. As cheaper memory encouraged bigger programs, 8 bits became insufficient.

Intel developed the 16-bit 8086 as a stopgap while it worked on a more sophisticated chip. But after IBM adopted the 8088, a low-cost version of the 8086, the stopgap became an industry standard.

Intel’s 80386 later extended the architecture to 32 bits.
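The practical meaning of "4-bit", "8-bit", "16-bit", and "32-bit" is the range of values the processor handles in one step: n bits distinguish 2^n values. A small illustration in C:

```c
#include <stdio.h>

int main(void) {
    int widths[] = { 4, 8, 16, 32 };
    for (int i = 0; i < 4; i++) {
        int n = widths[i];
        /* n bits can represent 2^n distinct values (0 .. 2^n - 1 unsigned). */
        unsigned long long count = 1ULL << n;
        printf("%2d bits: %llu values (0 .. %llu)\n", n, count, count - 1);
    }
    return 0;
}
```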

Generations of the Intel x86 Family

Shown below are generations of Intel microprocessors derived from the original 8086 architecture. As the number of bits in the CPU increased from 16 to 32 to 64, the number of input/output and power supply leads and the power consumption of the chip increased, resulting in significant increases in the size and complexity of the packages.

  • 8086, 1978
  • 8088, 1979
  • 80286, 1982
  • 80386, 1985
  • 80486, 1989
  • Pentium, 1993
  • Pentium Pro, 1995
  • Pentium III, 1999
  • Pentium 4, 2000

Intel’s success inspired competitors. AMD, NEC, and Nexgen pursued variants of Intel’s x86 devices. Others, including Motorola, National, and Zilog, introduced competing architectures. Seeking higher performance and lower cost, large workstation manufacturers developed their own RISC (Reduced Instruction Set Computer) chips.

Ultimately, the x86 architecture dominated the PC market.

RISC: Is Simpler Better?

As microprocessor instruction sets grew more complex, it was proposed that sequences of simpler instructions could perform the same functions faster with smaller chips.

IBM developed a Reduced Instruction Set Computer (RISC) in 1980. But the approach was widely adopted only after the U.S. government funded university research programs and workstation vendors developed their own RISC chips. In 1991, IBM, Motorola, and Apple allied to produce the PowerPC.

None of the RISC suppliers were able to prevail in the PC market, but the approach thrived in microcontroller and specialized applications.
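The core RISC argument in the text, that sequences of simple instructions can do the work of one complex instruction, can be sketched as follows. The "instructions" here are invented C functions for illustration only: a CISC-style memory-to-memory add is expanded into RISC-style load/add/store steps that produce the same result.

```c
#include <stdio.h>

static int memory[8] = { 0, 10, 32, 0, 0, 0, 0, 0 };
static int reg[4];

/* CISC-style: one complex instruction, memory[dst] = memory[a] + memory[b]. */
static void add_mem_mem(int dst, int a, int b) {
    memory[dst] = memory[a] + memory[b];
}

/* RISC-style: the same work as a sequence of simple register operations. */
static void load (int r, int addr)        { reg[r] = memory[addr]; }
static void add  (int rd, int r1, int r2) { reg[rd] = reg[r1] + reg[r2]; }
static void store(int r, int addr)        { memory[addr] = reg[r]; }

int main(void) {
    add_mem_mem(3, 1, 2);   /* complex instruction: memory[3] = 42 */

    load(0, 1);             /* equivalent simple-instruction sequence */
    load(1, 2);
    add(2, 0, 1);
    store(2, 4);            /* memory[4] = 42 */

    printf("CISC result: %d, RISC result: %d\n", memory[3], memory[4]);
    return 0;
}
```

The RISC bet was that these simple operations could be made fast enough, and their hardware small enough, that the longer sequence still wins.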

Designing Integrated Circuits

Integrated circuits give new meaning to the cliché, “big things come in small packages.” Each holds millions of microscopic electronic components—diodes, transistors, capacitors, and resistors—configured to amplify, condition, store, and switch electronic signals.

Packing so much in so little space is challenging. It begins with a circuit diagram that is translated into a physical layout of devices and interconnections on a chip.

How Many Engineers Does it Take to Design an IC?

One or two engineers armed with slide rules or calculators could design early integrated circuits. A layout person then created masks on plastic sheets, which were methodically checked and converted to glass plates used for photolithographic “printing” onto silicon wafers.

As ICs grew to millions of transistors, however, slide rules yielded to computers. Teams of engineers now design circuits, programming each step in high-level languages that automate the process.

The detailed chip layouts are now also generated automatically, having grown more complex than a street map of the entire United States.

From a Slice of Crystal to an IC Wafer

Building an integrated circuit requires a series of manufacturing steps that introduce precise quantities of chemicals onto selected areas of the silicon wafer to form microscopic devices and interconnections. And each step must be performed in an ultra-clean environment to avoid contaminants.

Completed wafers are tested, sliced into individual chips, assembled into protective packages, and then tested again.

The following sections give a brief overview of the manufacturing process.

Evolution of the Manufacturing Process

Few industrial processes ever devised can match the complexity of manufacturing modern integrated circuits.

Early ICs were built on custom equipment costing a few thousand dollars, housed in conventional laboratories, and operated by workers in street clothes. Not any more!

As circuit dimensions shrank toward atomic levels, equipment became more sophisticated—and more expensive. Each step of the process became a specialized industry in itself.

Manufacturing now requires ultra-clean, block-long, multi-billion-dollar factories that operate with very few people present.

The Manufacturing Process

A step-by-step process transforms a semiconductor crystal into an integrated circuit.

First, silicon wafers sawn from an ingot are polished to a mirror finish. After processing using photolithographic techniques—similar to printing processes—the completed wafer is tested, cut into individual chips, and assembled in robust packages for insertion into computer boards.

Key Principles

The design of a processor's datapath and control can start from the instruction set architecture and an understanding of the basic characteristics of the underlying technology. That technology in turn influences many design decisions, such as which components are available for the datapath and whether a single-cycle implementation even makes sense.

Pipelining improves throughput but does not reduce the inherent execution time of an individual instruction; for some instructions, the latency is similar to that of a single-cycle implementation. Multiple issue adds extra datapath hardware so that more than one instruction can be issued per clock cycle, but it increases the effective latency. Pipelining was introduced to shorten the clock cycle of the simple single-cycle datapath; multiple issue, by contrast, focuses on reducing the average number of clock cycles per instruction (CPI). Both techniques try to exploit instruction-level parallelism, and the main limits on exploiting more of it are data dependences and control dependences. Scheduling and speculation, applied in both hardware and software, are the main ways to reduce the impact of these dependences.

To sustain the performance gains promised by parallel processors, Amdahl's Law warns that some other part of the system will become the bottleneck; that bottleneck is the memory system.
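Amdahl's Law quantifies the point made above: if only a fraction f of a program's execution time is accelerated by a factor s, the overall speedup is 1 / ((1 - f) + f / s). A small sketch in C with illustrative numbers:

```c
#include <stdio.h>

/* Amdahl's Law: overall speedup when a fraction f of execution time
 * is accelerated by a factor s and the rest is unchanged. */
static double amdahl(double f, double s) {
    return 1.0 / ((1.0 - f) + f / s);
}

int main(void) {
    /* Illustrative example: 80% of the execution time is parallelizable. */
    double f = 0.8;
    int processors[] = { 2, 4, 16, 256 };

    for (int i = 0; i < 4; i++)
        printf("%3d processors -> overall speedup %.2fx\n",
               processors[i], amdahl(f, processors[i]));

    /* Even with unlimited processors, speedup is capped at 1/(1-f) = 5x here,
     * which is why an unimproved part such as the memory system ends up
     * limiting the whole system. */
    return 0;
}
```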

