Storage Area Networks

Originally published July, 1999
by Carlo Kopp
© 1999, 2005 Carlo Kopp

The Storage Area Network (SAN) is finally coming of age. With the proliferation of Fibre Channel hardware and supporting software, it is very much the latest technological fad in the US storage market.

So, understandably, the questions arise: what are the benefits of the SAN over established technology, and is the SAN the answer to every site's storage needs?

In this month's feature we will explore the limitations of established storage access technologies and architectures, and compare the attributes of the SAN against them.

Established Storage Architectures

The traditional model for accessing bulk central storage, in the current networked computing paradigm, is the use of a central server host supporting a package of disk and tape drives.

This model is essentially an extension of the sixties and seventies mainframe architecture model. In that model, the large mainframe, often equipped with multiple processors, would support multiple parallel I/O adaptors of one or another proprietary flavour, which in turn would support large chains of disk drives and tape drives.

High I/O rates were supported by using parallelism and spreading I/O requests across multiple adaptors. Indeed many currently popular ideas such as striping owe their origins to this period.

The seventies and eighties saw the ascendancy of the minicomputer, and the larger superminicomputer, over the mainframe, and as a result little happened in the way of advancement in storage architectures. The storage hot spot of the large mainframe was distributed across multiple minicomputers, and where bulk storage was still required there was always the fallback to the trusty central mainframe, or its equivalent in the large supermini.

The first serious architectural deviation from the norm during this period was DEC's clustering scheme, which entered the market in the mid to late eighties. This arrangement, tied very much to the proprietary VMS operating system, used a PDP-11 based storage controller, which employed 70 MBit/s serial coaxial links to provide access to VAX superminis. The then proliferating 10 Mbit/s Ethernet II simply did not have the capacity to support the required throughput. The 16-bit mini based storage controller had multiple proprietary I/O interfaces to support in effect a storage farm of disks and tape drives.

The next big paradigm shift in the technology base was the ascendancy of the smart desktop, with Unix workstations running NFS occupying the upper niche of the market, and PCs running proprietary file sharing protocols over proprietary transport and network protocols such as IPX and XNS, and later the open CIFS. Storage was accessed initially over proprietary LANs, but over time the Ethernet/802.3 protocol displaced the opposition and became the industry standard.

The typical architecture of a medium sized site would employ a room full of servers, each with hundreds of Megabytes to several Gigabytes of disk, with one or more local high density tape drives attached to each server.

Software to support storage applications was also evolving, as the backup of desktop storage became increasingly an issue for many sites. Many users did not fancy running "daily incrementals" or even "weekly globals" on their desktop systems, and this became a major issue for system administrators. Remote backup became very popular, and by the late eighties and early nineties a number of software tools became available which allowed the system administrator the hitherto unavailable luxury of backing up desktops over the network to a central server with tape drives.

Needless to say, this created other headaches, the foremost of which was network saturation. Sites which experienced little after-hours activity could manage; others experienced genuine pain.

The insatiable greed of the user base for storage placed much pressure on the industry in this period, and this resulted in an explosive growth of storage capacity and speed, per dollar, during the nineties.

The Gigabyte storage capacity barrier for single Winchester disks was crossed during this period, and the industry settled upon the ANSI SCSI standard as the industry standard, seeing the departure of the remaining proprietary disk and tape protocols. SCSI throughputs progressed from several Megabytes/sec on 8-bit busses clocked at a few Megahertz, up to tens of Megabytes/sec over 8, 16 and 32 bit wide SCSI variants. The speed of disk drives also grew, incrementally, from the initial industry standard of 3600 RPM to 4400, 5400, and more recently 7200 RPM. The higher recording density of drives and higher RPM resulted in increasingly higher internal data transfer rates, which often outstripped the transfer rates of the SCSI interface. Importantly, internal caches in drives became increasingly popular, and also increasingly larger. The current high performance commodity SCSI drive will have a 7200 RPM rotational speed, many Gigabytes of capacity, a 20 to 40 Megabyte/sec SCSI interface, and an internal cache of Megabytes in size.
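
The arithmetic behind the drive internals outpacing the bus is easily illustrated. A minimal sketch follows, in which the bytes-per-track figure is an assumed, purely illustrative value rather than a measured one:

    # Back-of-envelope internal media rate of a disk: bytes per track times
    # revolutions per second. The bytes-per-track figure is assumed for
    # illustration only.
    rpm = 7200
    bytes_per_track = 250 * 1024                 # assume roughly 250 KB per track
    media_rate = bytes_per_track * rpm / 60.0    # bytes coming off the platter per second
    print(round(media_rate / 1e6, 1), "MB/s")    # roughly 30 MB/s

A single such drive already nudges the lower end of the 20 to 40 Megabyte/sec SCSI interface, and two or three drives sharing one bus will saturate it outright.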

Another very important technological development seen since the early nineties has been the RAID array. The RAID array was initially devised by UCB researchers as a means of breaking the storage capacity and throughput barriers inherent in the modestly sized and modestly fast disks of the late eighties and early nineties. The central idea of RAID is to create a larger and faster "virtual disk" by using arrays of cheap disks hooked up to a RAID controller with many SCSI interfaces.
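
To make the "virtual disk" idea concrete, the sketch below shows how a simple RAID-0 style striping scheme might map a logical block address onto a member disk and an offset within that disk. It is purely illustrative: the disk count and stripe depth are arbitrary, and real controllers layer parity, caching and error handling on top of this.

    # Illustrative RAID-0 style striping: map a logical block address onto
    # one of several member disks and a block offset within that disk.
    # DISKS and STRIPE_BLOCKS are arbitrary example values.

    DISKS = 4            # member disks in the array
    STRIPE_BLOCKS = 64   # blocks per stripe chunk on each disk

    def map_logical_block(lba):
        chunk = lba // STRIPE_BLOCKS        # which chunk of the virtual disk
        offset = lba % STRIPE_BLOCKS        # position within that chunk
        disk = chunk % DISKS                # chunks rotate across the disks
        chunk_on_disk = chunk // DISKS      # chunks already placed on that disk
        return disk, chunk_on_disk * STRIPE_BLOCKS + offset

    # Consecutive chunks land on different spindles, so a large sequential
    # transfer is serviced by all of the member disks concurrently.
    for lba in (0, 64, 128, 192, 256):
        print(lba, "->", map_logical_block(lba))

Because consecutive chunks are spread across all of the spindles, both capacity and sequential throughput scale roughly with the number of member disks.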

The RAID array has matured over the last decade, with large current technology arrays offering Terabytes of storage, Gigabytes of internal DRAM cache, frequently non-volatile, multiple fast/wide SCSI or Fibre Channel (FC) I/O interfaces, embedded support for high speed tape backup, redundant power supplies and often various other features designed to produce a highly reliable and exceptionally fast, by established standards, bulk storage device.

It is the high performance large scale RAID array which is the driving force for the current SAN paradigm.

The primary issue lies in the inability of many host machines, short of the very largest, to handle the throughput of such RAID arrays. Many of the larger servers we now see in the marketplace, with switched hypercube or matrix internal bussing architectures and dozens or more 64-bit RISC CPUs, will have the capability to drive such RAID boxes without undue throughput bottlenecking on their I/O interfaces.

However if we step down a tier or two, to medium sized servers, then such machines frequently do not have the CPU or internal bus or I/O bus bandwidth to genuinely exploit the top end RAID technology appearing in the current marketplace.

Many sites have a genuine need for the kind of bulk storage provided by large RAID arrays, but often have neither the budgets nor the infrastructure to support a very large server host. For many sites, the single point of failure which such a host could represent, with the caveat that many such hosts now have a genuine hot-swapping capability for many components, is simply not an acceptable mode of operation. Indeed often a preferable approach is to employ multiple servers, which access a shared storage system.

This of course brings us to the central subject of this discussion, the SAN itself.

Storage Area Networks

In the absence of a robust and formal definition of what a SAN is, we can employ the following loose definition:

A Storage Area Network is a topological arrangement wherein a very high speed shared bus is employed to connect multiple server hosts with one or more dedicated high speed storage systems, such as RAID arrays.

The typical generic SAN arrangement would see several machine room servers connected via a SAN to one or more RAID boxes, and possibly other storage devices like tape libraries or jukeboxes. The servers are in turn connected to the site LAN which provides connectivity to other systems, such as user desktop machines.
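
As a minimal sketch, the generic arrangement can be written down as a set of connections; every name below is a hypothetical placeholder rather than a real device:

    # Minimal sketch of a generic SAN topology, expressed as adjacency lists.
    # All names are hypothetical placeholders.
    topology = {
        "server-a":     ["fc-fabric", "site-lan"],
        "server-b":     ["fc-fabric", "site-lan"],
        "raid-array-1": ["fc-fabric"],
        "raid-array-2": ["fc-fabric"],
        "tape-library": ["fc-fabric"],
        "desktops":     ["site-lan"],
    }
    # Bulk storage traffic stays on the FC fabric; only the servers straddle
    # both the SAN and the site LAN which serves the desktops.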

The heart of the SAN paradigm is the Fibre Channel (FC) protocol and the hardware which implements it (FC was discussed in detail in the September and October 1997 features).

The FC protocol is designed to transparently encapsulate higher level protocols, such as IP or SCSI-3, and provides high speed serial access at bit rates of between 133 Mbits/sec and 4 Gigabits/sec. The current state of the art in this technology typically delivers a 1062.5 Mbit/sec line rate, which in practical terms amounts to roughly 100 Megabytes/sec of usable data rate once line coding and framing overheads are accounted for.
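
The arithmetic behind that figure is straightforward, assuming the 8B/10B line coding used on FC links, in which ten line bits carry eight data bits:

    # Back-of-envelope Fibre Channel payload rate, assuming 8B/10B line coding.
    line_rate_mbaud = 1062.5
    data_rate_mbit = line_rate_mbaud * 8 / 10    # strip the line coding overhead
    data_rate_mbyte = data_rate_mbit / 8         # convert bits to bytes
    print(data_rate_mbit, "Mbit/s =", round(data_rate_mbyte, 1), "MB/s")
    # 850.0 Mbit/s = 106.2 MB/s, which framing overheads trim to roughly 100 MB/s.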

An important feature of the FC model is that it is designed to be quite transparent to the link level hardware being employed. This means that an FC loop can run for instance over a combination of multimode graded index fibre links, single mode fibre links, and if distances are short, also coaxial copper links.

The transparency is typically achieved at the interface by providing line drivers and receivers on daughter-boards with typically vendor specific 8 bit wide parallel interfaces. This means that often equipment can be configured to provide the specific style of interface required, and later upgraded to a different interface, eg starting with copper coax and moving to fibre.

The flexibility provided by the choice of copper or various fibre interfaces is important insofar as it decouples the SAN implementation from the traditional aches and pains of colocating machinery in a single area.

Where the choice exists to place all elements of the SAN into a single machine room, then copper coax is frequently the medium of choice since it is cheaper to install and less demanding of connector handling.

However, optical fibre allows runs of hundreds of metres with graded index fibre interfaces, and many kilometres with single mode fibre interfaces. This means that a SAN can be implemented through a large complex of buildings, using fibre to interconnect equipment in various areas with no loss in throughput performance.

For an organisation which likes diversity in equipment location to protect from power failures, or other uglier calamities, an FC based SAN provides the means of mirroring bulk storage virtually instantaneously, over significant distances.

The equipment interfaces are also important. Beyond requiring compatibility between the various flavours of FC link level interface, further issues arise.

One issue is the exploitation of existing storage assets which employ parallel SCSI interfaces, or the use of storage equipment which is supplied only with a parallel SCSI interface. An organisation with a $250,000 investment in a SCSI RAID array or tape library which is only available with parallel SCSI interfaces is unlikely to be enthused about the prospects of replacing the equipment to get a newfangled FC interface board.

Such scenarios can be accommodated via the use of SCSI routers, which typically provide one or more parallel SCSI ports and one or more high speed FC ports. The firmware in the router bridges SCSI transfers between the FC interfaces and the parallel SCSI controllers in the router. In this manner, hosts can access the parallel SCSI devices through the FC loop, and those devices appear as FC SCSI devices to the rest of the SAN.
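
A sketch of the address mapping such a router performs appears below. The bus numbers, target IDs and LUN assignments are hypothetical examples, not those of any particular product:

    # Conceptual sketch of a SCSI router's address mapping: each device behind
    # the router's parallel SCSI buses is presented as a LUN on its FC port.
    # All addresses here are hypothetical examples.

    # (parallel bus, target ID, LUN)  ->  LUN presented on the FC loop
    MAPPING = {
        (0, 0, 0): 0,   # RAID controller on parallel bus 0
        (0, 1, 0): 1,   # second RAID logical unit
        (1, 5, 0): 2,   # tape library robot on parallel bus 1
        (1, 6, 0): 3,   # tape drive
    }

    def route_command(fc_lun, cdb):
        """Forward a SCSI command block arriving on the FC port to the
        matching parallel SCSI device, if one is mapped."""
        for (bus, target, lun), presented in MAPPING.items():
            if presented == fc_lun:
                return "bus %d, target %d, lun %d gets %r" % (bus, target, lun, cdb)
        raise LookupError("no parallel SCSI device behind FC LUN %d" % fc_lun)

    print(route_command(2, b"\x12\x00\x00\x00\x24\x00"))   # an INQUIRY command block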

The growing range of FC hardware available now encompasses hubs and switches, which allow considerable topological flexibility and redundancy within a SAN design.

Increasingly we are also seeing the availability of FC interfaces on storage devices, RAID arrays, and other bulk storage peripherals such as tape libraries.

Therefore, at this time, almost any reasonable SAN topology can be implemented with existing off the shelf hardware, provided that your hosts have suitably supported FC interfaces. The SAN topology can be manipulated to yield the best tradeoff in throughput performance, redundancy, physical location and functionality.

The SAN thus has the potential to improve the throughput of bulk storage access, while also improving system level reliability and, most importantly, segregating bulk storage traffic from the existing site LAN.

At a first glance the SAN is clearly the answer to every medium to large site's storage related woes.

The question is: what are the "gotchas" in the SAN paradigm?

To SCSI or to IP?

The current weakness in the SAN paradigm lies in the immaturity of software support at the device driver and upper protocol level.

From a simple throughput perspective, the preferred mode of SAN operation is SCSI-3 based, since both the computational overheads of the protocol and the protocol overheads in the channel itself are very modest. This means that a SAN oriented toward high throughput performance, arguably the greatest benefit the model has to offer, is best implemented running the FC/SCSI-3 protocol. Moreover, accommodating existing or specialised bulk storage devices which employ only parallel SCSI interfaces dictates this mode of operation.

The problem in playing this particular game lies in the unavailability, generally, of SCSI device drivers and filesystem support for shared storage between multiple server hosts. While such support will exist for hosts which are supplied by their principal vendor with optional SAN hardware, this is not generally true of all hosts, and certainly unlikely to be true of a highly inhomogeneous computing environment.

The basic assumption made in most operating systems, and this is generally true of Unix, is that SCSI devices are private to the host and not shared with another system. Therefore, to provide shared access to a single filesystem on a SCSI device between two or more hosts, a protocol must exist which allows state information to be exchanged between the hosts. Otherwise there is potential for much chaos and mutual trashing of write operations.
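
The toy sketch below illustrates the point. Two "hosts", modelled for simplicity as threads in a single process, allocate blocks from a shared bitmap, with a lock standing in for whatever inter-host protocol exchanges the state. Without that serialisation, both hosts can claim the same block, and each will then trash the other's data.

    # Toy model of why shared SCSI storage needs inter-host coordination.
    # The two "hosts" are threads purely for illustration; the lock stands in
    # for an inter-host state exchange protocol, not a real implementation.

    import threading

    bitmap = [False] * 1024          # stands in for on-disk allocation metadata
    lock = threading.Lock()

    def allocate_block():
        with lock:                   # serialise the read-modify-write cycle
            for i, used in enumerate(bitmap):
                if not used:
                    bitmap[i] = True
                    return i
        return None

    def host(count, claimed):
        for _ in range(count):
            claimed.append(allocate_block())

    a, b = [], []
    threads = [threading.Thread(target=host, args=(200, a)),
               threading.Thread(target=host, args=(200, b))]
    for t in threads: t.start()
    for t in threads: t.join()
    print("blocks claimed by both hosts:", len(set(a) & set(b)))   # 0 with the lock held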

Of course, if the storage is segregated and individual hosts only see their own portions, then this is not an issue. However, this approach denies many of the benefits of high throughput shared storage.

An issue which remains unresolved in the SCSI domain is security, since the protocol model essentially assumes that the channel and all devices on it are basically trusted devices. This may or may not be true, depending upon the topology of the network.

The alternative strategy is to fall back upon that mainstay of current networked storage, the NFS protocol (or variants such as XFS), and run the SAN using FC-IP rather than FC-SCSI. This approach requires that every shared storage device has NFS (or XFS) protocol support available to its FC-IP interface.

This strategy provides a genuinely transparent access model, in that multiple servers can robustly access shared storage in the manner which we are accustomed to.

The drawback lies in complexity and performance. Complexity, because most devices currently available do not have this capability. With the exception of Network Appliance or Meridian Data, who provide turnkey NFS capable bulk storage devices, most existing hardware will require that a Unix server of decent performance be piggybacked on to it to provide the FC-SCSI to FC-IP/NFS/XFS protocol mapping function. How big a machine is required to deliver the goods depends quite critically upon the required throughput; suffice to say that this is neither the most efficient nor necessarily the cheapest approach to solving the problem.

Performance is the other issue with the FC-IP/NFS/XFS strategy, since the protocol overheads, even with a decent MTU size on the channel, are not by any means modest either on the channel or in terms of compute cycles to be devoured.

At this time the only strategy which allows the full exploitation of the flexibility inherent in the SAN paradigm is the FC-IP/NFS/XFS strategy. In turn the additional overheads in complexity and cost will reduce the attractiveness of a SAN solution for many sites, especially smaller to medium in size.

There is considerable market pressure in the US at this time to extend the capability of SCSI support in SANs; however, we have yet to see a general trend in the market in this direction. Some protocol support will be required, and this will need to appear on various vendors' hosts.

Will the SAN displace the established model of networked storage? Over the longer term, in larger sites, this is almost a certainty. However, in medium to small sites, the balance between benefits and cost overheads is likely to delay proliferation of the model into the marketplace, especially in Australia, where IT managers are not renowned for being either technologically adventurous or well endowed with computing budgets.

There is much to be gained from the SAN model from a performance perspective; however, the technology base is still immature at this time, and we are unlikely to see this change dramatically until Fibre Channel becomes more widely deployed and the overheads of a transition become much cheaper. By that time the outstanding issues in protocol support are likely to be resolved.

Whether the SAN becomes another technology within the reach of the "do-it-yourself" integration market remains to be seen.



Artwork and text © 2005 Carlo Kopp

