Sunday, September 28, 2008

Random Access Memory (RAM)



Random-access memory (usually known by its acronym, RAM) is a type of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order, i.e. at random. The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1]
This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.
The word RAM is mostly associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. However, many other types of memory offer random access as well, including most types of ROM and a kind of flash memory called NOR flash.
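The difference between random and sequential access can be sketched in a few lines of illustrative Python (a toy model, not from the original article): indexing a list reaches any element in one step, whereas a tape-like scan must step past everything before it.

```python
ram_like = list(range(1_000_000))

# Random access: any element is one indexing operation away,
# regardless of its position.
print(ram_like[999_999])   # 999999
print(ram_like[0])         # 0

def tape_read(tape, position):
    # Sequential (tape-like) access: step through every element
    # until the requested position is reached.
    steps = 0
    for i, value in enumerate(tape):
        steps += 1
        if i == position:
            return value, steps

value, steps = tape_read(ram_like, 999_999)
print(steps)   # 1000000 steps to reach the last element
```

The indexed read costs the same wherever the element sits; the tape-like read costs grow with the element's position, which is exactly the distinction the term "random access" captures.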

History

An early type of widespread writable random access memory was the magnetic core memory, developed in 1949-1951, and subsequently used in most computers up until the development of the static and dynamic integrated RAM circuits in the late 1960s and early 1970s. Before this, computers used relays, delay lines or various kinds of vacuum tube arrangements to implement "main" memory functions (i.e. hundreds or thousands of bits), some of which were random access, some not. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers and (random access) register banks. Prior to the development of integrated ROM circuits, permanent (or read-only) random access memory was often constructed using semiconductor diode matrices driven by address decoders.
Overview

Types of RAM

Modern types of writable RAM generally store a bit of data in either the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults called memory errors in the stored data, using parity bits or error correction codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable/disable selected transistors, instead of storing a charge in them.
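As a simple illustration of how a parity bit detects a single-bit memory error (an illustrative Python sketch, not how the detection is implemented in hardware):

```python
def parity_bit(bits):
    # Even parity: choose the extra bit so the total number of 1s is even.
    return sum(bits) % 2

def check(word):
    # A stored word (data bits plus parity bit) passes when its 1s count is even.
    return sum(word) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 0, 1]
word = data + [parity_bit(data)]
print(check(word))   # True: the word is consistent
word[3] ^= 1         # flip one bit to simulate a random memory error
print(check(word))   # False: the single-bit error is detected
```

Note that parity alone can only detect an odd number of flipped bits; error correction codes (such as Hamming codes) are needed to also locate and repair the faulty bit.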
As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as "permanent" storage in traditional computers. Many newer products instead rely on flash memory to maintain data between sessions of use: examples include PDAs, small music players, mobile phones, synthesizers, advanced calculators, industrial instrumentation and robotics, and many other types of products; even certain categories of personal computers, such as the OLPC XO-1, Asus Eee PC, and others, have begun replacing magnetic disks with so-called flash drives (similar to fast memory cards equipped with an IDE or SATA interface).
There are two basic types of flash memory: the NOR type, which is capable of true random access, and the NAND type, which is not; the former is therefore often used in place of ROM, while the latter is used in most memory cards and solid-state drives, due to a lower price.

Memory hierarchy

Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that the access time of rotating storage media or a tape is variable. (Generally, the memory hierarchy is ordered by access time, with the fast CPU registers at the top and the slow hard drive at the bottom.)
In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or too small for current purposes. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system. The overall goal of using a memory hierarchy is to obtain the highest possible average access speed while minimizing the total cost of the entire memory system.
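To make the ordering concrete, the hierarchy can be sketched as a table of access times; the figures below are rough order-of-magnitude approximations from general literature, not values taken from this article:

```python
# Rough order-of-magnitude access times (illustrative approximations only).
hierarchy = [
    ("CPU register",        1e-9),   # around a nanosecond
    ("on-die SRAM cache",   5e-9),   # a few nanoseconds
    ("DRAM main memory",  100e-9),   # on the order of 100 ns
    ("hard drive (swap)",  10e-3),   # milliseconds: mechanical seek + rotation
]
for level, seconds in hierarchy:
    print(f"{level:>20}: {seconds * 1e9:>12,.0f} ns")
```

Each step down the hierarchy trades roughly one to four orders of magnitude in speed for a much lower cost per bit, which is why systems combine all of these levels instead of building everything from the fastest memory.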

Swapping

If a computer runs low on RAM while running memory-intensive applications, it can perform an operation known as "swapping". When this occurs, the computer temporarily uses hard drive space as additional memory. Constantly relying on this type of backup memory is called thrashing, which is generally undesirable because it lowers overall system performance. To reduce the dependency on swapping, more RAM can be installed.
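On Linux, for example, current swap usage can be read from /proc/meminfo. The sketch below is a hypothetical helper (not from the original article) that parses the two relevant fields; it is shown against a hard-coded sample so the example is self-contained:

```python
def swap_stats(meminfo_text):
    # Pick the swap-related fields out of /proc/meminfo-style text.
    stats = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("SwapTotal", "SwapFree"):
            stats[key] = int(rest.split()[0])  # values are reported in kB
    return stats

# Hard-coded sample in /proc/meminfo format
# (in real use: swap_stats(open("/proc/meminfo").read())).
sample = "SwapTotal:  8388604 kB\nSwapFree:   8123392 kB\nMemTotal:  16000000 kB"
print(swap_stats(sample))   # {'SwapTotal': 8388604, 'SwapFree': 8123392}
```

A large gap between SwapTotal and SwapFree while applications feel sluggish is a typical sign that the system is leaning on swap, and possibly thrashing.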

Other uses of the "RAM" term

Other physical devices with read/write capability can have "RAM" in their names: for example, DVD-RAM. "Random access" is also the name of an indexing method: hence, disk storage is often called "random access" because the reading head can move relatively quickly from one piece of data to another, and does not have to read all the data in between. However, the final "M" is crucial: "RAM" (provided there is no additional term as in "DVD-RAM") always refers to a solid-state device.

RAM disks

Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive, called a RAM disk. Unless the memory used is non-volatile, a RAM disk loses the stored data when the computer is shut down. However, volatile memory can retain its data when the computer is shut down if it has a separate power source, usually a battery.

Shadow RAM

Sometimes, the contents of a ROM chip are copied to SRAM or DRAM to allow for shorter access times (as ROM may be slower). The ROM chip is then disabled while the initialized memory locations are switched in on the same block of addresses (often write-protected). This process, sometimes called shadowing, is fairly common in both computers and embedded systems.
As a common example, the BIOS in typical personal computers often has an option called “use shadow BIOS” or similar. When enabled, functions that rely on data from the BIOS’s ROM instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM sections). Depending on the system, this may or may not give a performance boost. On some systems the benefit may be hypothetical because the BIOS is not used after booting, in favour of direct hardware access. Of course, somewhat less free memory is available when shadowing is enabled.[2]
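The mechanism can be sketched in a few lines (illustrative Python, not actual firmware code): copy the ROM image into RAM once, then serve reads at the same addresses from the faster copy.

```python
rom = bytes(range(16))      # stands in for a slow BIOS ROM image
shadow = bytearray(rom)     # one-time copy into fast, writable RAM

def read(address, shadowing=True):
    # With shadowing enabled, reads are served from the RAM copy;
    # otherwise they fall through to the original ROM.
    store = shadow if shadowing else rom
    return store[address]

print(read(5))                    # 5, served from the shadow copy
print(read(5, shadowing=False))   # 5, the same data straight from ROM
```

The caller sees identical data either way; only the access time differs, which is why real systems usually also write-protect the shadow copy so it behaves exactly like the ROM it replaces.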

Recent developments

Several new types of non-volatile RAM, which will preserve data while powered down, are under development. The technologies used include carbon nanotubes and the magnetic tunnel effect. In summer 2003, a 128 KB magnetic RAM (MRAM) chip manufactured with 0.18 µm technology was introduced; the core technology of MRAM is based on the magnetic tunnel effect. In June 2004, Infineon Technologies unveiled a 16 MB[3] prototype, again based on 0.18 µm technology. Nantero built a functioning carbon nanotube memory prototype with a 10 GB[3] array in 2004. Whether any of these technologies will eventually take significant market share from DRAM, SRAM, or flash-memory technology, however, remains to be seen.
Since 2006, "Solid-state drives" (based on flash memory) with capacities exceeding 150 gigabytes and speeds far exceeding traditional disks have become available. This development has started to blur the definition between traditional random access memory and "disks", dramatically reducing the difference in performance.

Memory wall

The "memory wall" is the growing disparity of speed between CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance. [4]
Currently, CPU speed improvements have slowed significantly, partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in its Platform 2015 documentation:
“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat (more on power consumption below). Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.”
The RC delays in signal transmission were also noted in "Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures", which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014. The data on Intel processors clearly shows a slowdown in performance improvements in recent processors. However, Intel's newer Core 2 Duo processors (codenamed Conroe) show a significant improvement over previous Pentium 4 processors; due to a more efficient architecture, performance increased while the clock rate actually decreased.
