



Beating the Bandwidth Bottleneck:
Part 2: Gigabit Speed IP LANs

Originally published October 1997
by Carlo Kopp
© 1997, 2005 Carlo Kopp

In the first part of this feature we explored the emergence of Fibre-Channel (FC) as a high speed storage bus, and reviewed the basic technological ideas behind the FC protocol. In this second part, we will explore Fibre Channel as a medium for Gigabit/s speed LANs, and some of the practical implications of this technology.

Networking bandwidth, much like memory and storage, has Parkinsonian properties - however much we have, we always seem to need more. This should come as no surprise, since the ever-bloating "productivity" applications which, productively or not, occupy the CPUs on most users' desktops have this annoying habit of sprouting features with every new release. In practical terms, this pressure translates into an ever increasing demand for computational bandwidth, which in turn translates into a demand for storage and networking bandwidth. Moreover, the headroom left in established networking technology has taken a great dive with the growth of the W3 and the Internet in general over the last three years.

Whereas the networking bandwidth needs of desktop users may continue to be satisfied in the near future by 100 MHz clock rate Fast Ethernet, in its many incarnations, at the server end this technology is becoming hard pressed to keep up. This is essentially for several reasons:

  • a server providing NFS or HTTP to a large site will have to service the aggregate packet rates of a large number of users. If each user has a 100 MHz Ethernet port to themselves, they can and will generate bursts of packets at high rates.

  • the self-similar statistical properties of Ethernet traffic will cause bursts to appear additive at the server end - this means that the burstiness seen at the server will often reflect the sum of the peak burst rates at user level platforms.

  • ever increasing amounts of storage translate into a need for very fast backup tape drives, such as DLT. As a result, the network becomes a backup channel and must have adequate bandwidth to cope with multiple Gigabyte backups, if it is not to become a critical performance bottleneck in the backup mechanism.

  • clustering of servers is becoming a popular concept, yet again (funny how these fads wax and wane), and a very fast IP channel is seen to be the most appropriate means of hooking a cluster together at this time.

As a result, the same pressures which have pushed RAID and high speed storage interfaces into the forefront are also manifesting themselves at the network level. Therefore, computer room high speed networking looks to be another big growth area in the next several years. Indeed, one manufacturer has already coined the term "Storage Area Network" to describe a FC computer room network using a shared FC Arbitrated Loop to connect servers, routers and RAID storage arrays, with traffic using either FC-SCSI or FC-IP as required.

The Limitations of Existing LAN Technology

Existing 100 MHz clock speed Ethernet variants will be hard pressed to operate at faster speeds, certainly using the established 100 Base T cabling infrastructure. Unlike SCSI, where there is still some headroom to play with, Ethernet, even using a star topology, is pushing the performance limits of twisted pair cabling.

Because a building-wide LAN must be capable of covering at least 30-100 metre distances between hubs, and between hubs and user devices, squeezing more bits/sec out of this model will not be all that easy to do. Optical fibre is certainly feasible, but the reluctance of the market to embrace fibre Ethernet variants and FDDI suggests that the technology has yet to be fully accepted. Given all of its performance advantages, security and RF emission/immunity strengths, the scale of fibre proliferation to date is, to say the least, very disappointing.

Twisted pair is on its last legs for high speed LANs. Going beyond 100 Mbit/s will be problematic. Even with ECL signalling, tight control of cable specifications, pulse shaping networks at the receivers and transmitters, and Schottky clippers, designers will be hard pressed to beat the existing 100 Base T design. Using a custom driver design, with a higher voltage swing and low impedance bipolar differential drivers, should allow a doubling of the existing clock speed, although it is likely that the end product could well become a significant RF interference source.

Of course, there is the option of going to a coaxial cable, with either a shared bidirectional channel, or a transmit and receive cable. Again, doing this will require a custom high speed driver, receiver, and pulse shaping and clipping circuits. A high quality coax would allow up to about one Gigabit/s speeds over tens to hundreds of metres of cable length, subject to driver and receiver design, neither of which would be trivial.

However, going to a pair of coaxial cables (twinax) really takes us full circle, since it is one of the standard interfaces defined for the Fibre Channel standard. The only difference between a coaxial star topology Gigabit/s Ethernet and a copper Fibre Channel LAN would lie in the structure of the protocol. So why bother?

Fibre-Channel - A Gigabit/s LAN

A Fibre Channel LAN or Storage Area Network could in theory be implemented using the FC fabric model, or the Arbitrated Loop model. In practice, at this time, most sites will have no choice but to opt for the Arbitrated Loop model, simply because nearly all currently available hardware falls into this category.

To set up a Fibre Channel LAN we need host interfaces, suitable optical fibre or copper coax cables, one or more hubs or concentrators, and if we are serious about bandwidth in and out, a router.

The host interface for a Fibre Channel LAN could in theory be identical to that used for Fibre Channel SCSI. This is usually implemented as a standard bus adaptor card (VME, PCI, Sbus, Turbochannel etc), which contains the Fibre Channel controller ASIC (Application Specific IC = Silicon), and either an embedded or piggyback parallel/serial interface adaptor, containing the 8B10B bidirectional code converter, oscillator and the line interface, be it optical or coax (described in some detail in the last issue).

At this time much of the adaptor hardware has been squarely aimed at the storage market, and thus contains an embedded SCSI protocol engine, which essentially precludes the carriage of IP traffic. A purchaser should therefore be cautious when attempting to integrate such equipment, to ensure that the Fibre Channel bus adaptor does indeed support IP encapsulation.

The other critical component at the host end is a suitable device driver, which can provide support for IP character mode traffic. Needless to say such a driver will need to provide interfaces for the TCP/IP protocol stack, and have appropriate provisions for device/interface state management. In a well integrated design the Fibre Channel adaptor should simply appear as yet another network device, manageable by ifconfig.
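To make this concrete, the sketch below in C shows the rough shape such a driver interface might take; the structure and function names are purely illustrative assumptions, not any particular operating system's driver API.

/*
 * A minimal sketch only: the rough shape of the network-device interface
 * a Fibre Channel IP driver might expose to the TCP/IP stack. All names
 * are hypothetical and do not correspond to any real driver API.
 */
#include <stddef.h>
#include <stdint.h>

struct fc_ip_netdev {
    char     name[8];      /* interface name, e.g. "fc0", as seen by ifconfig */
    uint32_t ip_addr;      /* configured IP address                           */
    uint32_t mtu;          /* FC-IP allows an MTU of up to 65280 bytes        */
    int      link_up;      /* loop or fabric login has completed              */

    /* entry points the protocol stack calls into the driver */
    int  (*init)(struct fc_ip_netdev *dev);              /* FC login, ARP setup  */
    int  (*output)(struct fc_ip_netdev *dev,
                   const void *ip_packet, size_t len);   /* encapsulate and send */
    int  (*ioctl)(struct fc_ip_netdev *dev,
                  int cmd, void *arg);                   /* ifconfig up/down etc.*/
};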

The simplest network topology which can be used is a simple ring, where a cable connects all devices in the loop, with the last device connected through to the first. This arrangement is very simple, indeed the cheapest possible supported by the protocol, but it does require that each and every device has a robust failover mechanism, to ensure that the failure of the device does not bring down the loop.

The alternative to the simple loop is the use of a hub or concentrator, which allows a physical star topology, while retaining a logical loop topology. The hub/concentrator provides pairs of Fibre Channel interfaces (a pair for each device or node), and provides internally the loop connection between the ports. Many hub/concentrators will also provide automatic bypass of a failed device/connection, and some also have provisions for manual bypass when debugging the loop. Many hub/concentrator products also have provisions for cascading, which allows multiple hub/concentrators to be stacked and interconnected to expand the size of the loop. Hubs or concentrators are available with optical fibre, copper, or both types of interface. Needless to say, a hub/concentrator does represent a possible single point of failure for the whole loop.
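The bypass behaviour is conceptually straightforward; the C sketch below illustrates the per-port decision a hub might make (the data structure and names are assumptions made for illustration, not any vendor's firmware).

#include <stdbool.h>
#include <stdio.h>

#define NPORTS 10

struct hub_port {
    bool device_present;   /* a valid transmit signal is detected on the port  */
    bool manual_bypass;    /* the operator has forced the port out of the loop */
};

/* A port is spliced into the internal loop only if it carries a valid
 * signal and has not been manually bypassed; otherwise the hub routes
 * the loop straight past it, so a single dead node cannot take the
 * whole loop down. */
static bool port_in_loop(const struct hub_port *p)
{
    return p->device_present && !p->manual_bypass;
}

int main(void)
{
    struct hub_port ports[NPORTS] = {0};
    ports[0].device_present = true;    /* host adaptor                   */
    ports[1].device_present = true;    /* RAID array                     */
    ports[2].device_present = true;    /* suspect node, bypassed by hand */
    ports[2].manual_bypass  = true;

    for (int i = 0; i < NPORTS; i++)
        printf("port %d: %s\n", i,
               port_in_loop(&ports[i]) ? "in loop" : "bypassed");
    return 0;
}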

Having fitted out our hosts with Fibre Channel adaptors, and tied them all together with copper or fibre cables via a stack of hubs, we now must deal with the subject of getting IP traffic in and out of the computer room.

The simplest strategy is to sacrifice a host of appropriate performance, connect it into the loop, and then load it up with suitable network adaptors to support other LAN or WAN protocols. The host then becomes a router.

The other approach to providing routing in and out of the Fibre Channel LAN or cluster is to acquire a dedicated Fibre Channel router. Fibre Channel routers may be existing protocol routers with a Fibre Channel adaptor added, or more customised designs. A typical router would have one or more Fibre Channel interfaces, and then multiple Ethernet, Fast Ethernet, ATM, Token Ring or other interfaces. It is generally envisaged that "trendy" sites would use ATM for fast WAN traffic, and Fast Ethernet for connectivity to the building-wide LAN. A more practical approach is to simply hook the Fibre Channel router to your existing battery of routers via the fastest channel each will support.

Clearly the paradigm retains the flexibility of existing LAN technology, and there should be no difficulties in integrating it into an existing corporate LAN/WAN.

The primary technical issue at this time is that most off-the-shelf Fibre Channel hardware and software technology is dedicated either to SCSI or to IP, and thus the full flexibility of this model cannot be exploited. A site which commits to Fibre Channel at this early stage will most likely employ separate Fibre Channel interfaces between hosts and their respective RAID arrays, and between peer level hosts and routers. Given the potential performance gains to be had, this is hardly a critical limitation to early deployment.

Encapsulating IP in FC-4

No discussion of the use of Fibre Channel for networking would be complete without some exploration of the protocol level issues involved in carrying IP over the FC medium. As noted in the first part of this feature, the basic model used is that of a generic transmission medium, designed to encapsulate other, established, protocols. For each protocol, a Fibre Channel Profile is defined. The Profile specifies the configuration parameters for Fibre Channel nodes intending to carry the specific protocol.

The profile for carrying IP traffic also defines the specific operations required to carry IP traffic.

The starting point when a node becomes active is that it executes the standard Fibre Channel login to determine whether it can connect to the fabric, or access a specific node's Port ID in a loop. Once this is done, the host must execute an ARP (Address Resolution Protocol) operation to determine the mapping between the node's Port ID and IP address. A two layer scheme is used, in which the IP address is mapped to a MAC (Medium Access Control) address, and the MAC address mapped to the Fibre Channel Port ID.
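A hedged sketch of this two layer mapping is shown below in C; the table layout and helper names are assumptions made purely for illustration.

#include <stdint.h>
#include <string.h>

/* One cached binding: IP address -> 48 bit MAC address -> 24 bit FC Port ID.
 * The layout is an illustrative assumption, not a defined data structure. */
struct fc_arp_entry {
    uint32_t ip_addr;       /* IP address of the peer node              */
    uint8_t  mac[6];        /* IEEE 48 bit MAC address                  */
    uint32_t port_id;       /* 24 bit FC Port ID used to address frames */
};

static struct fc_arp_entry arp_cache[32];   /* stands in for the driver's ARP table */
static int arp_entries;

/* Resolve an IP address to the MAC address and FC Port ID a frame
 * must be addressed to; returns -1 on a cache miss, in which case a
 * real driver would issue an FC-encapsulated ARP request. */
int fc_resolve(uint32_t ip_addr, uint8_t mac[6], uint32_t *port_id)
{
    for (int i = 0; i < arp_entries; i++) {
        if (arp_cache[i].ip_addr == ip_addr) {
            memcpy(mac, arp_cache[i].mac, 6);
            *port_id = arp_cache[i].port_id & 0xFFFFFF;  /* 24 bit value */
            return 0;
        }
    }
    return -1;
}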

An ARP message carried over Fibre Channel comprises an IEEE 802 format LLC/SNAP header, a protocol header and a data field. The protocol header contains the following fields:

  • 16 bits defining ARP hardware type

  • 16 bits identifying the ARP protocol

  • two 8 bit fields defining the length of the hardware/MAC address, and the length of the IP address

  • 16 bits defining either ARP request or reply

  • 48 bits defining the sender MAC address

  • 32 bits defining the sender IP address

  • 48 bits defining the target MAC address

  • 32 bits defining the target IP address

The ARP packet is limited to a size of 532 bytes. A generic IP packet is allowed an MTU size of up to 65280 bytes, although specific implementations may carry less.
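As a minimal sketch (not a conforming implementation), the protocol header listed above might be laid out in C as a packed structure; the field names are illustrative, while the widths follow the listing in the text.

#include <stdint.h>

#pragma pack(push, 1)           /* no compiler padding: on-the-wire layout */
struct fc_arp_header {
    uint16_t hw_type;           /* ARP hardware type                       */
    uint16_t proto_type;        /* protocol being resolved, i.e. IP        */
    uint8_t  hw_addr_len;       /* length of the hardware/MAC address      */
    uint8_t  proto_addr_len;    /* length of the IP address                */
    uint16_t operation;         /* ARP request or reply                    */
    uint8_t  sender_mac[6];     /* 48 bit sender MAC address               */
    uint8_t  sender_ip[4];      /* 32 bit sender IP address                */
    uint8_t  target_mac[6];     /* 48 bit target MAC address               */
    uint8_t  target_ip[4];      /* 32 bit target IP address                */
};
#pragma pack(pop)
/* 28 bytes of protocol header (multi-byte fields in network byte order),
 * carried after the IEEE 802 LLC/SNAP header inside the FC frame payload,
 * well under the 532 byte ARP packet limit. */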

Once the address mapping has been done, IP traffic can be carried between the two nodes. The Fibre Channel protocol header for generic IP packets is identical to the header for ARP, but with differences in specific fields which identify the protocol. The IP packet is then embedded in the data payload, and extracted by the receiving node.

To improve DMA performance at the host end, the FC-IP protocol headers are designed to align on 32-bit boundaries. To ensure that bridging between FC and other protocols can be implemented, the protocol header includes both sender and target MAC addresses, even though this is partly redundant since the FC frame itself contains destination addressing.
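A brief sketch of the encapsulation step follows, assuming the standard IEEE 802.2 LLC/SNAP encoding mentioned earlier; the FC frame headers and the protocol header carrying the MAC addresses are omitted for brevity.

#include <stdint.h>

#pragma pack(push, 1)
struct llc_snap_header {
    uint8_t  dsap;          /* 0xAA: SNAP                                 */
    uint8_t  ssap;          /* 0xAA: SNAP                                 */
    uint8_t  control;       /* 0x03: unnumbered information               */
    uint8_t  oui[3];        /* 00-00-00: an EtherType follows             */
    uint16_t ethertype;     /* 0x0800 for IP, 0x0806 for ARP (big endian) */
};
#pragma pack(pop)

/* The 8 byte LLC/SNAP header keeps the IP datagram that follows it on a
 * 32 bit boundary, which is what lets the receiving adaptor DMA the
 * payload into host memory without a realignment copy. */
_Static_assert(sizeof(struct llc_snap_header) % 4 == 0,
               "encapsulation header must preserve 32-bit alignment");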

The Fibre Channel IP encapsulation scheme is of some technical interest because it combines both IEEE 802 standard mechanisms and IP protocol suite mechanisms, all embedded in the rather unique Fibre Channel model.

FC Hub and Router Products

At the time of writing a number of vendors were offering products to support IP networking over Fibre Channel. A typical contemporary FC-AL hub/concentrator supports the following features:

  • automatic insertion of devices without disrupting the existing loop configuration

  • automatic removal of devices without disrupting the existing loop configuration

  • provision of a de facto star topology for networking

  • provision of network management features

Several vendors are actively marketing such products; examples include Emulex, Gadzoox, Jaycor Networks Inc., Lextel, Prisa, Transoft and Vixel.

Emulex offer the LightPulse Fibre HUB, supporting up to ten optical fibre or copper interfaces; three configurations are supported: 10 x fibre, 10 x copper, or 8 x fibre and 2 x copper. Automatic bypass is provided, HUBs may be stacked to provide for up to 126 FC-AL devices, and the full 1062.5 Megabit/s speed is supported. The smaller sibling of this device, the LightPulse Mini Hub, supports only 5 copper interfaces.

Gadzoox are touting the FCL1063TW Fibre Channel Arbitrated Loop Active Hub, which provides 9 copper interfaces at 1062.5 Megabit/s with automatic failover and reconfiguration; optical adaptors may be fitted, and the device may be cascaded to connect up to 126 nodes. The Hub provides full regenerative repeating, so that retimed frames are jitter free.

Prisa supply the NetFX Loop Hub, a device which offers 10 interfaces at 1062.5 Megabit/s, in 10 x fibre, 10 x copper, or 8 x fibre + 2 x copper configurations. Automatic bypass is provided, and the device is also a fully regenerative repeater.

Lextel offer the FC-2000, which is a larger 16 port 1062.5 Megabit/s hub. The device provides automatic failover, an optional optical port to cluster a remote (not very) site, and serial port control. The FC-2000 may also be clustered to connect up to 126 devices.

Transoft will supply the StudioBOSS FC Hub, which supports nine devices.

Vixel Corporation's IntraLink 1000 Storage Interconnect Manager has 7 full speed 1062.5 Megabit/s interfaces, with automatic failover and fully regenerative repeating. Plug in media adaptors allow for fibre or copper interfaces.

It is worth noting that since these hubs are relatively dumb and typically do not decode the FC frames, they can be used to cluster storage devices as well as IP hosts.

The router market is perhaps a little less buoyant at this time, arguably since FC routers will need to compete against hosts kitted out to act as routers. A number of vendors offer FC routers.

At the time of writing the author was only aware of the ANCOR FCS 266/1062 Fibre Link System, which is designed to provide a modular router architecture to route between FC and Ethernet, Token Ring, FDDI, and ATM networks.

The state of the art in this technology suggests that it is still a little immature, but with the caveat of taking care with integration, it is certainly a viable tool for clustering hosts, storage and backup devices. It will be most interesting to observe the rate at which the market adopts this technology.




Artwork and text © 2005 Carlo Kopp

