[Introduction to Computer Systems] 4.1 Overview

How Computers “Remember”


Computers are master jugglers, multitasking as we play music, solve equations, surf the web, and write novels. They also have become vast, searchable libraries of everything from banking records and encyclopedias to grandma’s recipes.

These abilities require two kinds of memory: main memory (fast and comparatively expensive) and storage (big, slower, and cheap). Both types have rapidly and continually improved.

Preserving Information

Practically every physical medium that could plausibly hold information has been tried, and how much a medium can store ultimately comes down to the size of its smallest storable unit. What follows is a brief tour of how those technologies evolved.

Human history is built on memories. Civilizations flourish by preserving and sharing what people have previously experienced and learned.

Centuries ago, oral traditions began yielding to other ways of recording information, from tally sticks to writing. Today, cheap and abundant digital memory—able to hold all of history’s accumulated information—has changed what we can remember and how we use it.

The Quipu

The Inca Empire spawned a vast bureaucracy sending and receiving information, from tax records to census data. Much of it traveled via the quipu, an assembly of knotted, colored cords encoding the data.

Quipucamayu (quipu makers), specialists responsible for encoding and decoding the information, were the scribes of their day.

Tally Sticks

Roughly 30,000 years ago, people carved information on “tally sticks.” They were still doing it 100 years ago, making tally sticks among history’s most enduring storage devices.

In England, from the 13th-19th centuries, notched sticks recorded financial transactions. Notch sizes indicated currency units. Splitting the stick lengthwise gave each party a “receipt.”

Magnetic tape library

As computer centers moved away from punched cards in the 1970s, their libraries of magnetic tapes grew. A typical reel of 9-track tape could hold up to 160 MB.


At any given moment, most information a computer holds is not being used: programs not being run; files not being worked on. These “wait their turn” in storage, often called mass storage.

Storage is larger and more permanent than main memory. Speed is less critical than size, reliability, cost, and data integrity. The co-evolution of processors, memory, and storage dramatically improved computers.

Memory and Storage: Ever More Dense

Hard disk storage has become denser at an exponential rate over the last 50 years, just like main memory. The dramatic increase in capacity and speed of both has fueled the increasing power of computers.

Since the invention of writing, people have found ways to store information. The methods have changed over the millennia, but the goals have remained the same: a medium that’s long lasting, affordable, easy to use, reproducible, and holds lots of data.

In the computer age, a succession of technologies has offered those qualities. Each brought unique strengths, from the simple readability of punched cards, to the low cost of magnetic tape, the fast, random access of disk drives, and the easy duplication of optical disks.

Punched Cards & Paper Tape

Programmer standing beside punched cards

This stack of 62,500 punched cards — 5 MB worth — held the control program for the giant SAGE military computer network.

Many people were at first dubious that hole-filled cards were better than ledger books. Nonetheless, punched cards dominated data processing from the 1930s to 1960s. Clerks punched data onto cards using keypunch machines without needing computers.

Magnetic Tape

Magnetic tape began as a medium for audio recordings in the 1930s. In 1951—six years before the first magnetic disks—UNIVAC introduced tape drives for computers.

Tape was a storage mainstay for many years and still survives, thanks to its low cost, portability, unlimited offline capacity, and standardized formats that make tapes interchangeable.

The First Disk Drive: RAMAC 350

RAMAC actuator and disk stack

This is the heart of the world’s first disk drive: fifty 24-inch disks spinning at 1,200 RPM, holding 5 million characters of information.

Computers hold thousands of data records. Imagine if finding the one you wanted required starting with the first, then going through them in order.

High speed, random access memory—plucking information from storage without plodding through sequentially—is essential to the way we use computers today. IBM’s RAMAC (Random Access Method of Accounting and Control) magnetic disk drive pioneered this ability.

The RAMAC 350 storage unit could hold the equivalent of 62,500 punched cards: 5 million characters.
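
The equivalence is easy to check, assuming the standard 80-column card: 62,500 cards × 80 characters per card = 5,000,000 characters.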

Why: The Need

Computers were victims of their own success. As businesses came to rely on them, it became increasingly cumbersome to process towering stacks of punched cards or read data sequentially from magnetic tapes.

IBM recognized the need—and business opportunity—for a storage device that swiftly accessed data in any order. Its RAMAC magnetic disk drive, much faster than any previous technology, unleashed computers’ processing speed by providing inexpensive, fast, large capacity storage.

This first disk drive begat an entire industry that has been fundamental to the computer’s success.

Magnetic Hard Disks

Ever since engineers first took the RAMAC disk for a spin in the 1950s, magnetic disk storage has held sway.

There are periodic predictions of its demise. In 1975, Andrew Bobeck, developing “bubble memory” at Bell Labs, forecast the end of “those marvelous mechanical whirling dervishes….” But the dervishes remain.

Disks endure because they store data permanently, and are rewritable, relatively fast, and portable. Capacity continually grows thanks to improvements in read/write heads and a reduction of the “flying” height between head and disk surface, which allows more tightly packed data bits.

The Market Gets Crowded

As personal computer use exploded, so did demand for hard disks. Competition to meet this burgeoning demand peaked in 1984 with 77 manufacturers. After that, decreasing profit margins—and the strength of more established manufacturers—brought consolidations and closures. Only a handful of disk drive companies remain.

IBM, inventor of the hard disk, saw its market share shrink to 10%. It exited the business by selling it to Hitachi in 2002.

10GB iPod, SanDisk Memory Stick

Disks Go Global

Disk drives were born in San Jose, California. But they exploded into an international industry.

Fierce price competition drove manufacturers to make the drives where labor costs were lower. Seagate moved production to Singapore in 1982, where assemblers were paid a dollar an hour. Today, East Asia produces most disks.

Floppy Disks

3.5-inch floppy disk drive

The 3.5-inch format was the last mass-produced floppy disk format, replacing 5.25-inch floppies by the mid-1990s. It was more durable than previous floppy formats since the packaging was rigid plastic with a sliding metal shutter. It was eventually made obsolete by CDs and flash drives.

Magnetic hard disks transformed data storage, but were initially large and expensive. That was fine for mainframes, but personal computers needed something else. And the alternative already existed: the floppy disk.

In the 1970s and 1980s, floppy disks were the primary storage device for word processors and personal computers, and became the standard way to distribute software.

The Floppy Disk: from Mainframe to PC

How to preserve the data when the power goes off? That was the conundrum confronting IBM engineers.

The System/370 was IBM’s first computer using read/write semiconductor memory for its microcode. But without power, its microcode disappeared and had to be reloaded. The solution, delivered in 1971, was an 8” diameter flexible Mylar disk holding 80KB.

Al Shugart left IBM to make floppy disk drives for small computers. Competition soon stimulated smaller sizes and higher capacities, and floppy disks played a critical role in the rapid growth of PCs.

Optical Storage

Sometimes it’s better to borrow than invent. DEC worked with Philips and Sony to adapt their music CD for use as a read-only, optical storage medium for distributing software in high volumes.

David Paul Gregg’s original optical disk for video, patented in 1961, was read-only. Today, many CDs and DVDs are rewritable.

Rewritable formats such as CD-RW, DVD+R, and DVD+RW are typical examples.

Memory and Storage

Memory & Storage: Different Tasks, Different Technologies

Computers have employed various technologies to preserve information. Most fall into two broad categories: memory and storage.

Memory holds running programs and information the processor is currently using. Storage preserves data and programs for future use.

Why treat them differently? Memory must be fast and flexible. Storage has to be big, permanent, and affordable. No single technology meets all those requirements.

Pretending You Have More Memory Than You Do

Abundant memory lets computers juggle many tasks simultaneously and swiftly. But memory is comparatively expensive. In the 1950s, engineers developed a way to “fake it.”

Virtual memory (paging systems) uses disk storage as an extension of memory, letting even small computers run big programs. All modern operating systems use virtual memory.

How Does Virtual Memory Work?

Main memory is expensive. And like most expensive things, there’s never enough. “Virtual memory,” first prototyped in 1959 for the University of Manchester’s Atlas computer, uses drum or disk memory to simulate a larger main memory. How does it work?

Imagine a large “virtual” memory divided into big (1,024 word) blocks called pages. Only some pages can fit in main memory. The rest are temporarily stored on the disk. The computer hardware knows which pages are where.

If the program references a page in main memory, it uses it immediately. But when the program references a page not in memory, the operating system interrupts and reads the missing page from the disk. If the page displaced by the new addition has changed, it is written to the disk. The program then resumes and accesses the new page as if nothing had happened—except a time delay.
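
To make that sequence concrete, here is a minimal Python sketch of demand paging. The 1,024-word page size follows the example above; the four-frame memory, the FIFO replacement policy, and all of the names are illustrative assumptions, not details of Atlas or of any real operating system.

```python
# A minimal demand-paging sketch (illustrative policies and names only).
# Pages live on "disk" until touched; a page fault loads the missing page,
# evicting and possibly writing back an older one if memory is full.

PAGE_SIZE = 1024          # words per page, matching the example above
FRAMES = 4                # how many pages fit in "main memory" (assumed)

disk = {p: [0] * PAGE_SIZE for p in range(64)}   # backing store: 64 virtual pages
memory = {}               # resident pages: page number -> list of words
load_order = []           # FIFO queue used to choose a victim page
dirty = set()             # resident pages modified since they were loaded

def access(virtual_address, write=False, value=None):
    """Read or write one word of virtual memory, handling a fault if needed."""
    page, offset = divmod(virtual_address, PAGE_SIZE)

    if page not in memory:                    # page fault: the OS steps in
        if len(memory) >= FRAMES:             # memory full, pick a victim
            victim = load_order.pop(0)
            if victim in dirty:               # changed pages go back to disk
                disk[victim] = memory[victim]
                dirty.discard(victim)
            del memory[victim]
        memory[page] = list(disk[page])       # read the missing page from disk
        load_order.append(page)

    if write:
        memory[page][offset] = value
        dirty.add(page)
        return None
    return memory[page][offset]

access(5 * PAGE_SIZE + 10, write=True, value=42)   # faults, then writes into page 5
print(access(5 * PAGE_SIZE + 10))                  # 42; the page is now resident
```

Note that the calling code never mentions the disk at all; whether a page is resident shows up only as extra time, which is exactly the illusion virtual memory is meant to create.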

Main Memory

Fast, reliable computers depend on fast, reliable main memory to hold actively running programs and data.

Recognizing the need was relatively straightforward. Meeting the need, not so much. Designers of the first computers struggled to find a memory technology that combined speed, dependability, and affordability. In its pursuit they were creative, persistent, sometimes frustrated…and eventually, successful.

One Bit at a Time

Why random access matters

Early memory technologies, such as delay lines and magnetic drums, were serial. To read or write data, the computer waited for information that circulated in a loop to arrive at a place where it could be read or written.

Random access memory (RAM) eliminated the wait, enabling much faster operation.

Main Memory: The Winners

There were reliable forms of memory. And cheap ones. There were random access devices and fast, electronic memory. But not until the early 1950s did all those qualities come together.

Two memory breakthroughs transformed computers from laboratory equipment to household tools: Magnetic core memory, which reigned supreme after 1953; and semiconductor memory, today’s technology, which became dominant around 1980.

Magnetic Drums

The drum’s read/write heads are in the open top cover. Its surface has been scratched by misaligned heads.

The Cold War was gathering steam in 1948. Eager to enhance America’s code-breaking capabilities, the U.S. Navy contracted with Engineering Research Associates (ERA) for a stored program computer. The result was Atlas, completed in 1950.

Atlas used magnetic drum memory, which stores information on the outside of a rotating cylinder coated with magnetic iron (ferromagnetic) material and circled by read/write heads in fixed positions.

Earlier drum systems included a 1932 non-rotating model by Austrian inventor Gustav Tauschek. Faster spinning drums improved data rates and cut waiting times to locate needed data.

Williams-Kilburn Tubes

Williams-Kilburn tube from an IBM 701 computer

Electrostatic memory tubes could store 512 to 2048 bits of data as dots on the screen.

Electronic computers offered unprecedented speed. But mechanical memory—slowed by moving parts—was a nagging speed bump.

The Williams-Kilburn tube, tested in 1947, offered a solution. This first high-speed, entirely electronic memory used a cathode ray tube (as in a TV) to store bits as dots on the screen’s surface. Each dot lasted a fraction of a second before fading.

Its roots stretched back to 1946, when British researcher F.C. Williams saw cathode ray tube storage at MIT. Ultimately, however, the unreliable Williams-Kilburn Tube proved a technological dead end.

Delay Lines

Advances in radar during World War II had an unanticipated spinoff: delay lines as computer memory.

Delay lines were developed to store radar blips so that screens displayed only new, moving blips. In computers, delay lines converted data bits (ones and zeros) into sound waves, transmitted them acoustically, then converted them back into bits. They circulated forever until changed by the computer.

Mercury-filled tubes had transducers at the ends to generate and receive bits. In magnetostrictive delay lines, an electromagnet twisted a long wire one way or the other to represent ones or zeros.
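
To see why this kind of serial storage was slow, here is a toy Python model of a delay line: the bit pattern circulates continuously, and reading a particular bit means waiting until it comes around to the read point. The class name, loop length, and tick counting are illustrative assumptions, not parameters of any real mercury or magnetostrictive line.

```python
from collections import deque

class DelayLine:
    """Toy serial memory: bits circulate, and a read must wait its turn."""

    def __init__(self, bits):
        # Pair each bit with its logical address so we can tell when it
        # arrives back at the read point (position 0 of the loop).
        self.loop = deque(enumerate(bits))
        self.ticks = 0

    def step(self):
        self.loop.rotate(-1)                 # the whole pattern advances one slot
        self.ticks += 1

    def read(self, address):
        while self.loop[0][0] != address:    # wait for the wanted bit to arrive
            self.step()
        return self.loop[0][1]

line = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
print(line.read(5), "after waiting", line.ticks, "ticks")   # cost grows with position
```

A random access memory, by contrast, reaches any address in roughly the same time, which is the advantage described earlier under “One Bit at a Time.”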

Magnetic Core Memory

Whirlwind core plane

Whirlwind was originally designed to use cathode ray tube (CRT) memory. Its slow speed and unreliability led inventor Jay Forrester to use core instead.

Tiny donuts made of magnetic material strung on wires into an array: the idea revolutionized computer memory. Each donut was a bit, magnetized one way for “zero,” and the other way for “one.” The wires could both detect and change the magnetization. In 1953, MIT’s Whirlwind became the first computer to use this technology.
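
The addressing trick that made core arrays practical is coincident-current selection: each drive wire carries only half the current needed to flip a core, so only the core where an energized X wire crosses an energized Y wire actually switches. The Python sketch below is an illustrative model of that idea and of the destructive read-then-rewrite cycle; the plane size and all names are assumptions made for illustration.

```python
SIZE = 8
plane = [[0] * SIZE for _ in range(SIZE)]     # one bit per ferrite core

def pulse(x_line, y_line, to_one):
    """Send half the switching current down one X wire and one Y wire.
    Only the core at their intersection sees full current and can flip."""
    flipped = False
    for x in range(SIZE):
        for y in range(SIZE):
            current = (0.5 if x == x_line else 0.0) + (0.5 if y == y_line else 0.0)
            if current >= 1.0:                        # full select
                if plane[x][y] != int(to_one):
                    flipped = True                    # the sense wire sees the flip
                plane[x][y] = int(to_one)
    return flipped

def write(x, y, bit):
    pulse(x, y, to_one=bool(bit))

def read(x, y):
    # Reads are destructive: drive the core toward 0 and watch the sense wire.
    was_one = pulse(x, y, to_one=False)
    write(x, y, was_one)          # restore the bit the read just erased
    return int(was_one)

write(3, 5, 1)
print(read(3, 5))                 # 1, after a destructive read and a rewrite
```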

Core memory swiftly swept away competing technologies. But manufacturing it was a delicate job, entrusted mostly to women using microscopes and steady hands to thread thin wires through holes about the diameter of a pencil lead.

Who Invented Core Memory?

Success has a thousand fathers. Or in this case, at least four.

Amateur inventor (and street inspector for Los Angeles) Frederick Viehe filed a core memory patent in 1947. Harvard physicist An Wang filed one in 1949. RCA’s Jan Rajchman and MIT’s Jay Forrester filed in 1950 and 1951 respectively.

Core memory proved extraordinarily successful. Success brought extraordinary profits…which in turn ignited ownership disputes. Whose invention was it?

In 1964, after years of legal wrangling, IBM paid MIT $13 million for rights to Forrester’s patent—the largest patent settlement to that date.

Semiconductor Memory: Fast, Cheap, or Dense?

Static RAM to the Rescue

Daniel Slotnick’s ideas for the high-performance ILLIAC IV computer were ambitious. Developed for the Department of Defense, ILLIAC IV featured 64 parallel processing elements, each requiring 131,072 bits of memory. But finding the right memory was challenging.

In 1970, Fairchild Semiconductor provided its new 256-bit bipolar SRAMs. “ILLIAC IV was the first machine to have all-semiconductor memories,” recalled Slotnick. “Fairchild did a magnificent job of pulling our chestnuts out of the fire.”

The path was paved considerably earlier: Robert Norman had patented a semiconductor static RAM design at Fairchild in 1963.

Dynamic RAM: Smaller is better

The key to higher density is minimizing the number of transistors required to access the storage capacitor.

Early DRAMs, like Fairchild’s in 1968, used four to six. Honeywell’s Bill Regitz proposed a three-transistor design, which Intel built for them. Intel’s next DRAM, the 1K bit “1103,” was the first commercially available. In 1976, Mostek used IBM’s 1967 patent for a one-transistor cell to create a 16K-bit DRAM.

The secret to smaller packages is reusing (“multiplexing”) pins that address a desired bit. Mostek achieved that first with its 1973 4K-bit DRAM.
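
To see why multiplexing saves pins: a 4K-bit part needs 12 address bits (2^12 = 4,096). Presenting them as a row address followed by a column address on the same six pins, with two strobe signals to say which half is on the bus, leaves room for power, data, and control lines in a small package, roughly the arithmetic behind the 16-pin part discussed in the packaging section below.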

How Does Semiconductor Memory Work?

The various types of semiconductor memory store bits in one of two ways: transistorized “flip-flops” (which switch between two states representing one and zero), or capacitors (storing a charge for one, no charge for zero).

Shift Registers store bits of either kind in a serially connected string. Data is read out in order, so access to a specific bit is slow.

SRAMs use flip-flops organized to access any bit directly. They use more transistors per bit and are faster than shift registers, but more expensive.

DRAMs, which use capacitors, need as little as one transistor per bit and have simpler access circuits. DRAMs are denser than SRAMs, but slower and must be refreshed periodically.

NVM chips retain data when power is switched off. The manufacturer permanently configures Read-Only Memory (ROM). Forms of memory based on charge storage, such as Flash, can be changed, but only a limited number of times.
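
As a toy illustration of the refresh requirement, the Python sketch below models a single DRAM cell as a leaking capacitor. The leak rate, threshold, and refresh interval are made-up numbers chosen only to show the effect, not properties of any real device.

```python
class DramCell:
    LEAK_PER_TICK = 0.02        # fraction of charge lost per tick (assumed)
    THRESHOLD = 0.5             # charge above this level reads back as a 1

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        self.charge *= 1.0 - self.LEAK_PER_TICK    # the capacitor slowly leaks

    def read(self):
        return 1 if self.charge > self.THRESHOLD else 0

cell = DramCell()
cell.write(1)
for t in range(60):
    cell.tick()
    if t % 20 == 19:            # periodic refresh: read the bit, write it back
        cell.write(cell.read())
print(cell.read())              # still 1, thanks to the refreshes

# Without refresh, 60 ticks would leave 0.98**60 (about 0.30) of the charge,
# below the threshold, so the stored 1 would have been lost.
```

An SRAM flip-flop, by contrast, actively holds its state for as long as power is applied, which is why it needs no refresh but spends more transistors per bit.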

Dynamic RAM: Trade Wars

Sales soared when DRAMs entered commercial production in the early 1970s. With customer demand in the millions, DRAMs became the first “mass market” chips, sparking fierce international competition.

In 1976, the Japanese Trade Ministry saw a chance to make Japan a leader in this new industry. It funded Fujitsu, Hitachi, Mitsubishi, NEC, and Toshiba to develop 64K DRAMs. The consortium triumphed, decimating American memory suppliers and provoking the U.S. government to threaten trade sanctions.

Eventually, a Japanese-American agreement eased tensions. But it didn’t ease competition. Korea soon eclipsed both.

Memory Packaging

Fierce battles for market dominance in the infant semiconductor memory business included the packaging. Intel used an 18-pin DIP (Dual In-line Package) for its breakthrough 1K DRAM in 1970. TI and Intel used 22 pins for their competing, next-generation 4K devices in 1973. But Mostek soon dominated the 4K market by squeezing its device into a 16-pin package. By 1976, everyone had adopted Mostek’s approach for 16K and larger DRAMs.

MOS DRAMs Replace Magnetic Core Arrays

By the early 1970s, dynamic circuit designs, coupled with the silicon gate MOS (Metal-Oxide-Silicon) process, made DRAM chips competitive in cost with magnetic cores.

Introduced at 1¢ per bit, Intel’s 1024-bit 1103 DRAM opened the market for semiconductor main memory. In 1972, IBM pioneered a fast N-channel MOS process for its System/370 Model 158 and quickly became the world’s largest semiconductor memory manufacturer.

For the next 30 years, advances in technology—coupled with aggressive international competition—increased density by a factor of four approximately every three years. A 4K-bit device arrived in 1973, the 16K in 1976 and the 64K in 1979. The first gigabit (1,000 million bits) DRAM arrived in 2000. Every increase in density brought a corresponding decrease in cost that opened up new applications in PCs, games, and portable electronic products.
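
Those milestones are consistent with the quoted rate: quadrupling every three years for 30 years is 4^10 = 2^20, roughly a million-fold increase, which is the step from the 1K-bit parts of 1970 to the gigabit parts of 2000 (1K × 2^20 = 2^30, about a billion bits).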

Inventing Memory, but Feeling Forgotten

Fujio Masuoka

Fujio Masuoka invented flash memory while at Toshiba.

Fujio Masuoka invented Flash memory in 1984 while working for Toshiba. Masuoka’s idea won praise. Masuoka didn’t.

Unhappy with what he saw as Toshiba’s failure to reward his work, Masuoka quit to become a professor at Tohoku University. Bucking Japan’s culture of company loyalty, he sued his former employer demanding compensation, settling in 2006 for a one-time payment of ¥87m ($758,000).

Flash memory, named for its ability to erase data in a split second, has since become a key component in digital cameras, phones, and portable music players.

Databases

Every hour, every day, digital databases quietly store, cross-reference, and return information on every aspect of our lives. Discover the history of these powerful tools, including one man’s struggle to convince his employer to listen to his idea, the idea that led to a billion-dollar industry.

The more information a computer holds, the more essential it is to arrange and organize that information in manageable ways. Databases, which create collections of easily searched “records,” can be structured in many ways—by relationships, in hierarchies, or other methods.

Oracle

Larry Ellison bet $2,000 on an idea he’d read in a paper by mathematician Ted Codd about organizing information in relational databases. With co-founders Robert Miner and Ed Oates, he created a database software firm. By 2000, that company—Oracle—was the world’s second biggest software company, after Microsoft.

The company debuted in 1979 with Oracle V2. There was no “V1.” The “V2” was a marketing ploy to create the illusion of a more mature product.

Oracle narrowly avoided bankruptcy in 1990, but the company’s ultimate success underscores the importance of databases.

MySQL

The rise of the NoSQL family of databases.

Digital Dark Age

This part deserves fuller treatment: how the record can simply break off between generations.

Imagine a future where humans are unable to access the data, literature, art, photographs, discoveries, and vital records of previous generations. That bleak future may be on the horizon! Learn how our fragile, rapidly obsolescing systems of storing data could lead to a digital dark age.

http://www.computerhistory.org/revolution/memory-storage/8/325/2208
