



What's Coming Up in Storage Technology?

Originally published June, 1998
by Carlo Kopp
© 1998, 2005 Carlo Kopp

Storage technology is central to modern computing and the last decade has seen numerous important advances at every level of the storage hierarchy. In this feature we will look at some of the most notable recent developments in the marketplace, and also explore some of the technology which is currently in the R&D pipeline.

In perspective, there are three major technological areas of interest in the storage hierarchy. The first of these is the storage device itself, characterised most importantly by its capacity and internal access bandwidth. The second area of interest is storage connectivity, a measure of what bandwidth is available between hosts and storage devices, and how these devices are organised for access. The third issue is the logical architecture of access, ie how Gigabytes of storage appear to the host operating system's filesystems. Over the last twelve months we have seen interesting developments in all of these areas, and these will now be explored briefly.

Disk Drive Head Technology

Without doubt one of the most important technological developments to hit the disk drive market in the last six months is the IBM DeskStar 16GP. This disk drive is the first to use what are termed "Spin Valve" or "Giant MagnetoResistance" (GMR) effect read heads. The 3.5" DeskStar 16GP can store 2.69 Gigabits per square inch of platter surface; with 3 platters the drive stores a staggering 16.2 Gigabytes in the volume of a standard half height 3.5" drive.
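
As a rough sanity check, the quoted capacity can be reconstructed from the areal density and the platter geometry. The Python sketch below is a back of envelope estimate; the recording band radii are assumed figures for a typical 3.5" platter, not IBM's published numbers.

    # Back of envelope capacity estimate from areal density.
    # The recording band radii are assumptions for a typical
    # 3.5" platter, not IBM's published figures.
    from math import pi

    AREAL_DENSITY = 2.69           # Gigabits per square inch (quoted)
    R_OUTER, R_INNER = 1.77, 0.75  # band radii in inches (assumed)
    SURFACES = 6                   # 3 platters, both sides used

    band_area = pi * (R_OUTER**2 - R_INNER**2)  # sq. inches per surface
    capacity_gbits = AREAL_DENSITY * band_area * SURFACES
    print("approximate capacity: %.1f GBytes" % (capacity_gbits / 8))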

Without wishing to sound like an IBM commercial, credit is certainly due for the extensive research effort which made the GMR head design possible. Conventional disk drive heads in the early days employed miniature electrical coils for both writing and reading of data. It was very soon found that with increasing data density less and less magnetic flux was produced per bit of data, and this in turn set practical limits on how much data could be read back from the disk before errors swamped the ones and zeroes.

The next important development in this area was the adoption of the magneto-resistive (MR) read head, the technology which is the basis of the typical 2 to 4 Gigabyte 3.5" disk which is the market standard today. In a typical MR drive head, writing is performed by a coil, but reading employs an MR sensor made from a ferromagnetic alloy. The resistance of the MR element changes with the magnetic flux it encounters on the platter, and by passing a small current through it, voltage changes are produced. These can then be amplified by a head amplifier.

The GMR effect occurs in sandwiched multilayered structures of alloys, such as permalloy/copper/permalloy, and typically provides five times the resistance change per flux change, in comparison with established MR head technology. In practical terms, this means that a similar electrical output can be produced for a magnetic recording which is about five times weaker than with established technology.
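
To put some illustrative numbers on this, the sketch below compares MR and GMR readback signals using the simple relation V = I x dR. The bias current, element resistance and resistance change ratios are assumed values for illustration, not figures from any production head.

    # Illustrative readback signal comparison, V = I * delta_R.
    # All element values are assumptions, not production figures.
    BIAS_CURRENT = 0.005      # 5 mA sense current (assumed)
    RESISTANCE = 30.0         # ohms, element resistance (assumed)
    MR_RATIO = 0.02           # ~2% resistance change (assumed)
    GMR_RATIO = 5 * MR_RATIO  # GMR: about five times the change

    for name, ratio in (("MR", MR_RATIO), ("GMR", GMR_RATIO)):
        v_signal = BIAS_CURRENT * RESISTANCE * ratio
        print("%s head: ~%.1f mV readback signal" % (name, v_signal * 1e3))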

Research papers published by IBM suggest that head bandwidths of 80 MBits/s or more are achievable with this technology, and a doubling of track density on the disk platters is feasible. For this improvement in density, the price to be paid is a more finicky fabrication process for the read head structures. Readers interested in more detail are directed to the January, 1998 issue of the IBM Journal of Research and Development.

Holographic Optical Storage

While holographic bulk storage may be commonly regarded as a Babylon 5 or Star Trek technology, the truth is much less dramatic, with the technology approaching a standard where it may become commercially viable inside the decade. Importantly, the paradigm is not only applicable to the storage cube of sci-fi soapies; when applied to more conventional media such as optical disks, it can significantly enhance storage densities over and above what we are accustomed to.

The fundamental idea behind all holographic storage media is that of using an electrooptical crystal to trap optical interference patterns produced by two laser beams, one of which is spatially modulated with data. Most readers will be familiar with the planar hologram, produced on a flat film by combining a direct reference beam of laser light with a so called object beam which is reflected off an imaged object.

The two laser beams interfere, and the resulting interference fringes are recorded on the planar film. When the film is developed, illuminating it with a readout beam causes reflections off the interference fringes recorded in the film, and these reflections create the illusion of a 3D spatial image in front of the film (or behind it). Holographic storage techniques employ the same principles, but their operation is a little more subtle.

The easiest example is the bulk or volume hologram (see diagram), commonly used for experimental work in this area and a likely candidate for future manufacture. In such a bulk hologram, a crystal of a photorefractive material, such as LiNbO3, is doped with iron or rare earth ions, in a process not unlike that by which semiconductors are made. A piece of this material is then cut into a cube, slab or similar regular shape and polished.

Such crystals have an interesting optical property: variations in applied light, such as interference patterns, produce a relatively long lived localised variation in the crystal's index of refraction. In the simplest of terms, the material can "remember" an applied interference pattern.

This effect is produced by charge within the material migrating between areas of different local light intensity, and becoming trapped within the crystal when the source of illumination is removed. The effect is typically temporary, and the trapped charge bleeds out after several months or years of storage in the dark. To produce a holographic memory device from a chunk of such material requires that we illuminate it with a modulation (ie object) beam, spatially modulated with a data pattern, and a reference beam.

This produces an interference pattern within the bulk of the material, which is recorded. To read out the recorded data, it is necessary to illuminate the material with a readout beam which duplicates the reference beam, and use a device such as a CCD camera to detect the resulting light radiated from the device, which contains the same spatial brightness variations as the original modulation beam did.

In this manner, the array of bits which modulated the brightness of the original modulation beam is replicated on the face of the CCD and may then be read out again. In practice a spatial modulator might be built with an LCD light valve, which allows pixels to be turned on and off easily. As it stands, this arrangement would allow us to record only one array of data bits, which exploits but a small fraction of the capacity of the material. If we change the angle of the reference beam slightly, we can in turn record another array. In this fashion, termed angular multiplexing, it is possible to record hundreds to thousands of arrays of pixels, forming a "stack" of holographic data pages.
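
In software terms, page addressing then reduces to a mapping between a page index and a reference beam angle. The Python sketch below illustrates the idea; the starting angle, angular step and page count are assumptions for illustration only.

    # Minimal model of angular multiplexing: each stored page is
    # indexed by a reference beam angle. All figures are assumed.
    START_ANGLE = 30.0   # degrees, first reference beam angle
    ANGLE_STEP = 0.002   # degrees between adjacent pages
    NUM_PAGES = 1000     # pages recorded in the "stack"

    def page_to_angle(page):
        """Map a page index to the readout beam angle selecting it."""
        if not 0 <= page < NUM_PAGES:
            raise IndexError("no such page in this stack")
        return START_ANGLE + page * ANGLE_STEP

    # Reading page 42 means steering the readout beam to this angle,
    # then capturing the reconstructed bit array on the CCD.
    print("page 42 -> %.4f degrees" % page_to_angle(42))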

The record to date stands at about 10,000 holograms. To select a specific array of bits, we merely need to adjust the readout beam to the appropriate angle, and the indexed "page" of data appears on the readout CCD, whence it can be converted into an electrical readout. This is alas easier said than done, since the angular changes required are extremely fine, of the order of 0.002 degrees of arc.

Therefore selecting a specific "page" of data can be a little tricky. Another important issue with bulk holographic memory technology is managing the effects of noise. Because the data bit array readout is very faint, the noise in the CCD detector elements can swamp the data. Even small misalignments in the optics can cause light to spill over from neighbouring bit cells, and scattered light within the optics will further degrade the detection thresholds.

The solution to these limitations is in principle no different from that used in established technology such as disk drives. A certain proportion of the total data bits recorded will be used for Forward Error Control (FEC), and are redundant to allow the recovery of the data with a very low error rate. Other tricks from established technology can also be used, such as spatially precoding the data array to avoid data patterns which exacerbate inter-pixel interference, in a manner not unlike ISI control techniques in comms channels.

Differential encoding in holograms is a direct analogue to the popular Manchester code used in LANs, and research at this stage is moving toward more sophisticated coding schemes. The aimpoint for raw Bit Error Rates (BER) in hardware at this stage is around 10^-4 BER, which combined with suitable coding techniques can produce system level BERs of about 10^-15, which is competitive with established magnetic media.
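
The leverage which coding provides can be illustrated with a simple binomial model: if a block of n bits is lost only when more than t bit errors occur, the uncorrectable block probability is a binomial tail. The block size and correction power below are assumed parameters, chosen so that the result lands in the vicinity of the system level figures quoted above.

    # Binomial model of block FEC: a block of n bits at raw bit
    # error rate p is lost only if more than t bits are in error.
    # The parameters n and t are illustrative assumptions.
    from math import comb

    def block_failure_prob(n, t, p):
        """P(more than t errors in an n-bit block at raw BER p)."""
        # Sum the tail directly to avoid 1-minus-x cancellation.
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(t + 1, n + 1))

    n, t = 1024, 8        # assumed block size and correction power
    raw_ber = 1e-4        # raw BER target quoted above
    print("uncorrectable block probability: %.2g"
          % block_failure_prob(n, t, raw_ber))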

A holographic RAM (HRAM) based on this technology has its idiosyncrasies. One is that data can only be accessed and read out a page at a time. Typical readout times are dominated by the performance of the CCD detectors, but existing research indicates that a 1 millisecond readout time is practically achievable. If we assume one Megabit per page, ie about 100 kiloBytes with coding, and one thousand pages read out per second, this provides the HRAM with a read access bandwidth of about 1 Gbit/s or 100 Megabytes/sec, which is respectable performance by any measure. Recording rates for HRAMs are much slower, typically 10 to 100 milliseconds per page, thus resulting in write bandwidths between 1 to 10 Megabytes/sec, which is still competitive with magnetic disk technology.
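
The bandwidth arithmetic is worth spelling out; the sketch below simply reproduces it from the page size and timing figures quoted in the preceding paragraph.

    # HRAM throughput arithmetic, using the figures quoted above.
    PAGE_BYTES_CODED = 100000      # ~100 kBytes usable per page
    READ_TIME = 0.001              # 1 ms per page readout
    WRITE_TIMES = (0.010, 0.100)   # 10 to 100 ms per page record

    print("read bandwidth: ~%.0f MBytes/sec"
          % (PAGE_BYTES_CODED / READ_TIME / 1e6))
    for t in WRITE_TIMES:
        print("write bandwidth at %3.0f ms/page: ~%.0f MBytes/sec"
              % (t * 1e3, PAGE_BYTES_CODED / t / 1e6))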

An important limitation of HRAMs is that data can only be manipulated at the page level, and thus changing a byte as is done in solid state RAM is not practical. Difficulties have also been encountered with selective writing of pages in a "stack" of hologram pages in the crystal.

The approach mostly followed at this time is to treat the HRAM as a WORM (Write Once Read Many) device: write it until full, use it for high speed data readout, and then bulk erase it for reuse. Another issue is recording lifetime, which is subject to the materials used; given the difficulty of selective page refreshing, this is an area requiring some further effort.

The WORM-like characteristics of the HRAM suggest that its best application lies in areas where high bandwidth readout and infrequent writes are encountered, and web servers have been suggested as a suitable application. While the HRAM is as yet not mature enough for production purposes, the technology is rapidly approaching this point. A major DARPA funded program in the US includes IBM, Rockwell, Kodak, Polaroid and Optitek, while individual research programs are being pursued by Bell Labs and Holoplex. Readers interested in a more comprehensive discussion are directed to the February, 1998 issue of IEEE Computer.

Advanced Parallel SCSI

In earlier features in this series we reviewed the 20 MHz clocked UltraSCSI standard, and the rapidly growing Fibre Channel SCSI variant. The latter discussion included some speculation on the performance limits of parallel SCSI bus technology, and the likely directions the technology will take. Recent research publications indicate that the latter speculation is nearer to practical hardware than many might anticipate. At this time the ANSI SCSI community have defined the terminology for the next growth phases in parallel SCSI interfaces, thus producing the following family of SCSI interfaces:

  • Slow SCSI at 5 Megabytes/sec narrow (8 bit) and 10 Megabytes/sec wide (16 bit)
  • Fast SCSI at 10 Megabytes/sec narrow (8 bit) and 20 Megabytes/sec wide (16 bit)
  • Ultra/Ultra1/Fast-20 SCSI at 20 Megabytes/sec narrow (8 bit) and 40 Megabytes/sec wide (16 bit)
  • Ultra2/Fast-40 SCSI at 40 Megabytes/sec narrow (8 bit) and 80 Megabytes/sec wide (16 bit)
  • Ultra3/Fast-80/Fast-100 SCSI at 80-100 Megabytes/sec narrow (8 bit) and 160-200 Megabytes/sec wide (16 bit)

At this time only the Ultra1/Fast-20 is in the marketplace, as the Ultra2 and Ultra3 standards are still in definition. The speed doubling of Ultra1/Fast-20 against the established Fast SCSI standard required several design changes to the standard, which nevertheless retain backward compatibility.

A number of important new changes were introduced with the Ultra1/Fast-20 standard, and these will introduce some interesting changes to SCSI bussing downstream.

The primary change in Ultra was to double the clock speed in synchronous burst transfers; this was enabled by setting much tighter controls on the timing and threshold performance of receiver chips and the timing of transmitter chips.
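
The burst bandwidths in the family listing above follow directly from this arithmetic, being the product of the transfer clock and the bus width in bytes, as the short sketch below reconstructs.

    # Parallel SCSI burst bandwidth = transfer clock (MHz) x bus
    # width in bytes. Reconstructs the family listing above.
    VARIANTS = [
        ("Slow SCSI",         5),
        ("Fast SCSI",        10),
        ("Ultra/Fast-20",    20),
        ("Ultra2/Fast-40",   40),
        ("Ultra3/Fast-80",   80),
        ("Ultra3/Fast-100", 100),
    ]
    for name, clock_mhz in VARIANTS:
        narrow = clock_mhz * 1   # 8-bit bus moves one byte per clock
        wide = clock_mhz * 2     # 16-bit bus moves two bytes per clock
        print("%-16s %3d MBytes/sec narrow, %3d MBytes/sec wide"
              % (name, narrow, wide))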

The second important change with the Ultra family of standards is the adoption of new cable and connector formats, the latter designed to better handle mating/de-mating operations (hot plugging) and the higher speeds.

The third change was the adoption of SCSI expanders, hardware bus repeaters which, via a star topology, allow access to much larger numbers of drives per interface, as well as single drive interfaces.

From an integrator's practical perspective the new connector and cable arrangements are of most interest. External connections out of chassis will use the Very High Density Cable Interconnect (VHDCI) connectors, which are similar in appearance and layout to the established 50/68 pin high density SCSI connector, but considerably smaller. The VHDCI standard defines 26 different styles of connector, all with the same connector pin style to allow interoperability between connectors of like pin counts.

A typical arrangement on an Ultra controller board will be a pair or quartet of backpanel VHDCI connectors, mounted on a mezzanine card and often electrically separated by expander chips on the board. Round "Micro SCSI" cables using 30 or 32 gauge wire, with 7.62 mm and 6.35 mm external diameters, will be typical for Ultra installations. Internal connections will utilise the new SCA-2 standard 80-pin unshielded connector.

The SCA-2 is cleverly designed with different contact lengths for ground, power and signal pins. As a result, ground pins will engage first and disengage last during mating and de-mating of connectors, followed by power pins, and finally data. As a result, on mating the signal pins are only connected once the ground/power environment is stable, and on de-mating are disconnected before the power and ground connections are broken. This should allow reliable hot-plugging on devices, something RAID array administrators and maintainers will no doubt appreciate.
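
The sequencing behaviour can be captured in a trivial model: a pin engages once the insertion depth reaches the depth its contact length dictates, so longer contacts make first and break last. The contact depths below are invented for illustration and are not the actual SCA-2 dimensions.

    # Toy model of staggered SCA-2 contacts. A pin is engaged once
    # the insertion depth reaches its engagement depth; the depths
    # here are invented, not the real SCA-2 dimensions.
    PINS = {"ground": 1.0, "power": 2.0, "signal": 3.0}  # mm to engage

    def engaged(depth_mm):
        return [pin for pin, need in PINS.items() if depth_mm >= need]

    for depth in (0.5, 1.5, 2.5, 3.5):  # progressive mating
        print("depth %.1f mm: %s" % (depth, engaged(depth) or "nothing"))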

The SCSI expander used in Ultra is essentially a customised bidirectional repeater, built to propagate signals without the reflection, jitter, noise, crosstalk and interference effects which tend to add up on multidrop SCSI busses. This will allow the crafting of tree structured Ultra cabling arrangements, where expanders are used to fan out in a manner no different from that used with the 10/100-Base-T LAN hub.

Moreover, expanders will allow arbitrary transitions between single-ended and differential Ultra devices. In this manner a long differential cable can for instance be used to access a chassis, within which expanders convert to single ended Ultra to access single ended drives. It is expected that another idea from the LAN domain will be adapted, which is the bridging expander, which will allow further expansion of the usable address space. Readers interested in more detail should consult the relevant ANSI drafts and Bill Ham's excellent paper in No.3, Vol.9 1997 issue of the Digital Technical Journal.
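
As a closing illustration, the fan-out arithmetic for a tree of bridging expanders is sketched below. The port count, and the assumption that each bridged segment offers a fresh set of 15 usable device IDs, are both invented for illustration and are not drawn from the ANSI drafts.

    # Fan-out arithmetic for a tree of bridging expanders. Assumes,
    # purely for illustration, that each expander port opens a new
    # segment with 15 usable device IDs; not from the ANSI drafts.
    IDS_PER_SEGMENT = 15    # 16 wide SCSI IDs, less one for the bridge
    PORTS_PER_EXPANDER = 4  # assumed expander fan-out

    def max_devices(depth):
        """Devices reachable through an expander tree of given depth."""
        if depth == 0:
            return IDS_PER_SEGMENT
        return PORTS_PER_EXPANDER * max_devices(depth - 1)

    for d in range(4):
        print("tree depth %d: up to %d devices" % (d, max_devices(d)))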

IRAMs

The Intelligent RAM or IRAM is the brainchild of Prof David Patterson, at Berkeley, one of the authors of the RISC and RAID paradigms. Both RISC and RAID have proven to be ideas with significant long term impact on the industry. IRAM promises to do the same, in the longer term. The central idea behind the IRAM is that of fabricating a large DRAM and a microprocessor CPU on a single die.

The idea of putting CPUs and RAM on single dies is not new; indeed many micros exist for embedded applications which share a CPU, EPROM and static RAM on the one die. The drawback with SRAMs, however, is density, which is significantly lower than that achieved with state of the art DRAM fabrication processes.

The current performance bottleneck with most microprocessor designs is memory bandwidth, ie how many Megabytes per second can be transferred between the host's main memory and the CPU's instruction buffering and pipeline. While clever use of caching techniques has proven to be a very effective means of alleviating this performance shortfall, it is nevertheless an ongoing problem.

The concept of the IRAM is to bypass this bottleneck by embedding the memory to CPU bus on the single die, thereby allowing for a much cheaper, wider and faster bus. While this in a more general sense produces a single chip computer system, rather than just a single chip computer, in the shorter term we are unlikely to see the IRAM displace established machine architectures. The first generation of IRAMs, when they appear, is unlikely to challenge the performance of established CPUs, which employ production processes optimised for performance rather than DRAM cost/density. The importance of the IRAM will lie in its ability to significantly reduce the cost of embedded CPUs with a respectable amount of DRAM attached.
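
The bandwidth case for the IRAM reduces to simple arithmetic: an on-die bus can be made both wider and faster than a bus which must cross package pins. The widths and clocks below are assumptions for illustration, not Berkeley's figures.

    # Why an on-die memory bus wins: bandwidth = width x clock.
    # The widths and clock rates are illustrative assumptions.
    def bandwidth_mbytes(width_bits, clock_mhz):
        return width_bits / 8 * clock_mhz    # MBytes/sec

    off_chip = bandwidth_mbytes(64, 100)     # pin-limited external bus
    on_die = bandwidth_mbytes(1024, 200)     # wide bus on a single die
    print("off-chip: %6.0f MBytes/sec" % off_chip)
    print("on-die:   %6.0f MBytes/sec (%.0f times)"
          % (on_die, on_die / off_chip))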

The first area in which we can expect to see the benefits of the IRAM is disk drives, since with the move to multi-Gigabyte technology, caching on the drive becomes all the more important to maintaining throughput performance. An IRAM which is programmed to perform as an intelligent cache is an obvious and clearly profitable early application for this technology.

The second area where the IRAM is likely to make an impact is in storage arrays and clusters, since the IRAM as a building block packs a lot of bandwidth into a small and relatively cheap package. Combining suitable high speed switching techniques and multiple disk drives with an array of IRAMs as controllers would provide a high performance storage array or cluster at modest cost. While the IRAM is still in the research phase, we can expect to see early products inside the next half decade. It is clearly a promising technology.

Summary

As is evident, the strong market demand for storage capacity and speed is stimulating a vigorous research and development effort across the technology base used for storage products. How soon we see the full impact of technologies like GMR heads, UltraN SCSI, HRAM and the IRAM is still unclear; the example of RAID suggests that anything up to a decade is required for significant market penetration. However, once these technologies do hit the market, we will have another set of learning curves to climb before the user base comes to grips with the practical implications of these changes. What is certain is that we have yet to see growth in the storage market falter.



Artwork and text © 2005 Carlo Kopp

