
Networking - A Perspective

Originally published November 1996
by Carlo Kopp
© 1996, 2005 Carlo Kopp

Over the last decade, networking has become ubiquitous. The idea of a typical office site using standalone machines seems quite strange today. The tremendous growth which we have seen in the Internet over the last three years, and the impending proliferation of this medium into the household, heralds what is likely to become a new era in computing.

Attempting exact long term predictions in an area as volatile as today's networking is very risky. Nevertheless, there are a number of evident trends in contemporary technology in this area which are worth some exploration. This is what we will now proceed to do.

Current Trends in the Datalink Layer

The datalink layer in any networking environment comprises the hardware and the lowest level of protocols used, typically referred to as datalink protocols. The importance of this layer lies in its throughput performance, robustness and interoperability.

At the bottom of the performance tier, the Point to Point Protocol (PPP) has finally been widely adopted as the standard for dialup access. While PPP is often classed as an IP variant, because it provides HDLC style encapsulation of IP packets, it should actually be regarded as a datalink protocol at a peer level to the LAN protocols. The growth in the PPP base can be directly linked to the growing demand for domestic IP access. The very limited SLIP protocol is finally beginning to fade out.
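
To make the framing point concrete, the sketch below (in Python, purely as an illustration) applies the HDLC style flag and byte stuffing rules which PPP uses on asynchronous links, per RFC 1662; the address, control and checksum fields are omitted for brevity.

    # Illustrative sketch of PPP's HDLC style byte stuffing (RFC 1662):
    # frames are delimited by 0x7E flag bytes, and any flag or escape
    # byte inside the payload is replaced by 0x7D followed by the byte
    # XORed with 0x20. Header and FCS fields are omitted for brevity.

    FLAG, ESCAPE = 0x7E, 0x7D

    def frame(payload: bytes) -> bytes:
        stuffed = bytearray([FLAG])
        for b in payload:
            if b in (FLAG, ESCAPE):
                stuffed += bytes([ESCAPE, b ^ 0x20])
            else:
                stuffed.append(b)
        stuffed.append(FLAG)
        return bytes(stuffed)

    print(frame(b"\x7e an IP datagram \x7d").hex())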

It is now fair to say that Ethernet and its IEEE 802.3 offspring have wholly dominated the established base of LAN sites. A decade ago many other alternatives competed for the central role in the LAN marketplace, five years ago it was clear that Ethernet would prevail, and so it has happened.

Within the Ethernet base, half a decade ago the "thinwire" 10 Base-2 was the dominant medium, with the "backbone" 10 Base-5 and optical fibre variants occupying a much smaller proportion of the site population. In the intervening period, the unshielded twisted pair variant, 10 Base-T, has clearly overwhelmed most of the market. An observation from that period was that "10 Base-T Ethernet will become the RS-232 of the 1990s...", and so it has indeed transpired.

Using a hierarchical star topology rather than the cable TV inspired linear segmented topology of the coaxial Ethernet variants, 10 Base-T became very popular very quickly. The fundamental reason lay in the nature of the topology, as a star topology was easily adapted to the established infrastructure, and also separated individual hosts electrically. The familiar "...has anybody moved their PC..." call across the office is largely a thing of the past.

At this time there are two strong trends in the Ethernet LAN marketplace. These are the adoption of Ethernet switches, and the increasing penetration of 100 Mbit/s variants of the protocol. Both of these trends are the direct result of the increasing amount of computing power on the user desktop. Even three or four years ago, state of the art Unix workstations and workgroup servers could push out enough packets on a single Ethernet interface to saturate the 10 Mbit/s channel (which occurs at 8-9 Mbit/s throughput). With the increasing market penetration of fast 32-bit CPUs such as the Pentium and PowerPC in the PC base, and the dominance of 64-bit or pseudo-64-bit engines in the Unix base, every single platform has the capacity to saturate a shared 10 Mbit/s channel.
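
A back-of-envelope calculation shows why the useful limit sits below the nominal 10 Mbit/s: every frame on the wire pays for a preamble and an enforced inter-frame gap, and short frames pay proportionally more. A rough Python sketch:

    # Packet rates needed to saturate a 10 Mbit/s Ethernet, ignoring
    # collisions. Each frame costs an 8 byte preamble plus a 12 byte
    # inter-frame gap on the wire (standard 802.3 figures).

    LINK_BPS = 10_000_000
    OVERHEAD = 8 + 12                      # preamble + inter-frame gap

    for size in (64, 512, 1518):           # min, mid and max 802.3 frames
        wire_bits = (size + OVERHEAD) * 8
        pps = LINK_BPS / wire_bits
        goodput = pps * size * 8 / 1e6
        print(f"{size:5d} byte frames: {pps:8.0f} pkt/s, "
              f"{goodput:4.1f} Mbit/s of frame data")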

The conventional Ethernet, whatever medium it uses, employs a carrier sense, collision detection (CSMA/CD) protocol on a logically shared channel. What this means is that the channel bandwidth must be shared by all machines connected to the segment in question.

The Ethernet switch improves performance by emulating the behaviour of a crossbar switch, and logically connecting only the two machines involved in the transfer of any given packet. A good switch will allow concurrent connections between all pairs of ports on the switch. In this fashion it produces the illusion of each pair of machines having a private Ethernet to themselves, with a slight loss in performance. The author supervised a throughput benchmark on such a device last year, and the results pretty much fulfilled expectations.
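
The per-frame logic involved is simple enough to sketch. The toy model below (hypothetical port numbers and addresses) learns source addresses as frames arrive, forwards to a known port where it can, and floods where it cannot:

    # A toy model of the learning and forwarding logic an Ethernet
    # switch applies to each frame.

    NUM_PORTS = 8
    mac_table = {}                      # MAC address -> output port

    def handle_frame(src, dst, in_port):
        mac_table[src] = in_port        # learn the sender's location
        if dst in mac_table:            # known: forward to one port
            return [mac_table[dst]]
        # unknown destination: flood to all other ports
        return [p for p in range(NUM_PORTS) if p != in_port]

    print(handle_frame("aa:01", "bb:02", 0))    # unknown dst: flooded
    print(handle_frame("bb:02", "aa:01", 3))    # learned: port 0 only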

The limitation of the switch lies in the logical rather than physical topology of the network in question: sites with a single server will still face the performance bottleneck of that server's network interface. With an increasing proportion of peer-to-peer rather than client to server traffic, the switch offers an increasing improvement in aggregate network performance.

The answer to the performance problems of sites with single or multiple servers is to move to a higher channel bandwidth than the established 10 Mbit/s, and to equip servers with high throughput network interface adaptors. This indeed is the reason behind the increasing popularity of 100 Mbit/s Ethernet schemes and the ATM protocol.

Regardless of whether the site is using 100 Base-T, a proprietary 100 Mbit/s Ethernet variant, 155 Mbit/s ATM or 100 Mbit/s FDDI, the central caveat of adaptor performance does not change. Putting a 100 Mbit/s Ethernet chip on to a board which can pump only enough packets to saturate a 10 Mbit/s channel is likely to result in an adaptor which can successfully fill only 10% of the potential channel bandwidth. This is the reason why many sites have found that 100 Base-T did not fulfil their perhaps overly optimistic expectations of a tenfold increase in performance.
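
The arithmetic is blunt. As a minimal sketch, utilisation is capped by whichever is smaller, the adaptor's sustainable packet rate or the channel rate:

    # Channel utilisation is bounded by the adaptor, not the medium.
    def utilisation(adaptor_mbps, link_mbps):
        return min(adaptor_mbps, link_mbps) / link_mbps

    # An adaptor good for ~10 Mbit/s of packets, on faster channels:
    for link in (10, 100, 155, 1000):
        print(f"{link:4d} Mbit/s link: {utilisation(10, link):6.1%} filled")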

This of course brings us to the other increasingly popular standard in the LAN market, Asynchronous Transfer Mode (ATM). The conventional approach to ATM is to use an ATM switch in place of an Ethernet switch, and to fit the machines in question with ATM adaptors rather than Ethernet adaptors. The long term argument promoted by ATM fans is that ATM switches will eventually allow direct integration with ATM telecommunications services, thus providing a seamless interface to the WAN. Whether the latter is a reasonable expectation within the lifespan of the switch in question remains to be seen, given that the deployment of wide area ATM services is still some time away. It can be argued that there is little to be gained by moving to ATM early, as the throughput performance of adaptors will be similar to that of 100 Mbit/s Ethernet variants, and the user is paying a price premium on switches and adaptors which may no longer be in use when the potential long term payoff of ATM access comes about.
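
It is also worth remembering that ATM's quoted link rates carry a "cell tax". Each 53 byte cell holds 48 bytes of payload, and the AAL5 adaptation layer adds an 8 byte trailer plus padding out to a whole number of cells, as the sketch below illustrates (SONET framing overheads are ignored here):

    import math

    def aal5_cells(pdu_bytes):
        # payload plus 8 byte AAL5 trailer, padded to whole 48 byte cells
        return math.ceil((pdu_bytes + 8) / 48)

    LINK = 155.52e6
    for pdu in (64, 576, 1500):
        cells = aal5_cells(pdu)
        eff = pdu / (cells * 53)
        print(f"{pdu:5d} byte PDU -> {cells:3d} cells, {eff:5.1%} efficient, "
              f"{LINK * eff / 1e6:5.1f} Mbit/s effective")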

The FDDI protocol has evidently not caught the imagination of the customer base, and has been rapidly overtaken by the 100 Mbit/s Ethernet variants in the market. FDDI is most often found in computer room situations, connecting servers via single or dual attachments to its counter-rotating ring pair. FDDI has in many respects been a victim of too early an arrival in the marketplace, inadequate adaptor throughput performance and high cost. Part of the latter has to do with the network's ring topology and the use of optical cabling, and it may be argued that the early mass marketing of an FDDI hub device to provide for a star topology could have helped this protocol to proliferate. As things stand, it appears that FDDI will die a quiet death, overtaken by 100 Mbit/s Ethernet and ATM.

Regardless of the merits or lack thereof in the FDDI protocol, its failure to secure a strong position in the market will delay the proliferation of optical fibre LAN cabling for at least another generation of networking hardware. This is unfortunate for a number of fundamental reasons. The first is that the twisted pair cabling used with 100 Mbit/s Ethernet and ATM will not cope with subsequent bandwidth increases into the hundreds of megabits or gigabits per second. These increases are inevitable, if Moore's Law is anything to go by (and it hasn't failed yet!), and it follows that we will see yet another round of expensive cabling reworks in five to ten years. There are other good reasons why fibre should be preferred in LAN environments. The first is electromagnetic compatibility and the potential for interference to be radiated. The second is related to the first, in that radiated transients can be eavesdropped under the right conditions, thus producing a serious security hole. The final reason is the growing incidence of info-terrorism, where systems are maliciously brought down: a copper LAN medium provides a shared channel via which a whole site can be hit (this will be addressed in much more detail in a future feature). Those readers who may doubt the last statements should consider attaching an antenna to a spectrum analyser, and taking a look at their building from 50 yards away. It may prove to be a somewhat rude surprise.

So far we have addressed established and thus reasonably familiar trends in the area of datalink protocols. We will now look ahead and indulge in some speculation.

Emerging Trends in the Datalink Layer

There are a number of technologies which are currently in the process of standardisation which will have the potential to significantly extend the utility of networking as we know it.

The first of these is the proliferation of cable modem technology, addressed in some detail in a previous issue. What the cable modem will offer is downstream bandwidths of up to 36 Mbit/s to domestic and corporate customers with suitable access. This will in turn be an enabling technology base for a large range of interactive services, as well as providing almost ubiquitous HTTP protocol access.

The second technology with a similar enabling effect will be the provision of ADSL/SDSL/HDSL/VDSL subscriber services to domestic or corporate premises. Both cable modems and evolved DSL services will hopefully produce a significant drop in the cost of medium to high bandwidth services in metropolitan areas.

The metropolitan area networking (MAN) model was initially proposed with the IEEE 802.6 DQDB (QPSX) protocol, developed at the University of Western Australia in the mid eighties. Sadly, this protocol did not proliferate widely, and has since been displaced by ATM, which is likely to provide the metropolitan networking infrastructure foundation for the evolved DSL services.

The MAN model is expected to mesh with the provision of interactive consumer services. It may well be that some time in the near future you will receive your phone bill, electricity bill and parking ticket via email!

The ANSI Fibre Channel standard is another technology with the potential to produce some big changes. The FC standard is intended to provide a common copper or optical fibre based medium for supporting FDDI, SCSI, HIPPI and IPI peripherals, as well as very high speed LANs. It is expected that the FC protocol will eventually encompass 802.2 and IP encapsulation, providing the potential for LAN speeds of up to a gigabit per second. With an emerging Gigabit Ethernet (802.3z) standard as well, it appears that a gigabit per second to the server is not an "if" question, but rather a "when" question. Again the central issue for users will be the performance of network adaptors: pushing out packets at current or slightly better than current rates into a one gigabit per second pipe may not yield the anticipated result!

Another important technology and emerging standard is the 802.11 Wireless LAN protocol. What 802.11 will provide is a Code Division Multiple Access (CDMA) protocol for short range radio (or infrared) access to a LAN, on a building or site wide scale. The bandwidth is set at this time at 2 Mbit/s, which is arguably inadequate for serious use, but still significantly better than your 28.8 kbps domestic PPP connection.

The 802.11 protocol utilises spread spectrum techniques, a technology which has been used in military applications (e.g. GPS) for many years, and which has recently been adopted for mobile phone use (a future feature will address 802.11 in detail).
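
The essence of the direct sequence variant can be sketched in a few lines: each data bit is XORed with a fast pseudo-random chip sequence, spreading its energy across a much wider band, and the receiver correlates against the same sequence to recover the bit. The 11-chip sequence below is the Barker code specified for 802.11 direct sequence operation; the rest is a toy illustration, not the real modulation path.

    BARKER = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]   # 11 chip Barker code

    def spread(bits):
        # each bit becomes 11 chips: bit XOR chip sequence
        return [b ^ c for b in bits for c in BARKER]

    def despread(chips):
        bits = []
        for i in range(0, len(chips), len(BARKER)):
            block = chips[i:i + len(BARKER)]
            agree = sum(x == c for x, c in zip(block, BARKER))
            bits.append(0 if agree > len(BARKER) // 2 else 1)
        return bits

    data = [1, 0, 1, 1]
    assert despread(spread(data)) == data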

The impact of wireless computing is yet to be seen, but we can say with certainty at this time that it will never displace the cabled network infrastructure, because its bandwidth performance is simply too low. What we can expect to see is wireless access on laptops, palmtops, personal organisers and Newtons or Newton clones. If you are the kind of person who likes to carry laptops or Newtons into meetings to suitably awe and impress the techno-peasants present, then the wireless LAN will be the answer to your problems!

Another technology which we are yet to see but which has the potential to produce significant changes in the state of the art is the Low Earth Orbit (LEO) communications satellite.

Conventional communications satellites sit in geostationary orbits, which means that they occupy a single spot in the sky from the perspective of any given user. You point your dish at the satellite, and subject to the service provided, you receive or even send data/voice/picture.

The LEO satellite is a very different beast, as it sits in a polar orbit of several hundred kilometres altitude and traverses the surface of the Earth, repeating its orbital track every so many hours or days, subject to the geometry of the orbit. The idea of using such satellites for mobile communications is based on the model of a constellation of tens or hundreds of satellites, the footprint of each being a circle or ellipse on the surface of the Earth, with these footprints forming individual cells similar to those used in a mobile phone network. Whereas in a mobile phone network the cell is geographically fixed, in an LEO satellite network the cells themselves move, far faster than the users within them.
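
Kepler's third law puts numbers on just how fast those cells move. A rough sketch, using standard values for the Earth's radius and gravitational parameter:

    import math

    MU = 3.986e14         # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6     # mean Earth radius, m

    for alt_km in (400, 780, 1400):
        a = R_EARTH + alt_km * 1e3               # orbital radius
        period_s = 2 * math.pi * math.sqrt(a**3 / MU)
        sweep_kms = 2 * math.pi * R_EARTH / period_s / 1e3
        print(f"{alt_km:5d} km orbit: period {period_s / 60:5.1f} min, "
              f"footprint sweeps the ground at ~{sweep_kms:3.1f} km/s")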

The potential of LEO satellite networks is to provide truly global coverage, with the user of a satellite mobile phone or modem being able to connect to any other user worldwide at any time.

The player most likely to get into the market first is Motorola, with its TDMA protocol based Iridium network, which will provide a basic 64 kbps phone service. A large number of other players are involved, and many are proposing far superior CDMA based protocols which allow multiple LEO constellations to coexist in the same bandwidth.

True to form, the most visible player in the LEO satellite comms market is Bill Gates, who has proposed a huge network with individual (presumably Microsoft proprietary) protocol routers in each and every satellite. As yet it is unclear whether these satellite-borne routers will be running W95 or NT, and it will certainly be interesting to speculate on the possible outcome.

The eventual result of properly implemented and deployed LEO satellite networks will be totally ubiquitous worldwide access to a basic low bandwidth service. While telephony is likely to be the big money earner in the short term, we can expect to see dialup style network access follow very quickly. You will then be able to browse your favourite web site while sitting on a flight en route to LA, or while indulging yourself on a safari holiday. A frightening prospect indeed, as the IT manager will be able to track you down no matter where you have hidden!

The medium to long term trends in the networking technology base can be summarised as an ongoing growth in available bandwidth for fixed services, and the proliferation of mobile services on a local and eventually global scale.

Current Trends in the Upper Layers

The biggest happening in the IP layer is the impending deployment of the IPng or IPv6 version of the Internet Protocol. This protocol has been reviewed extensively in earlier issues of OSR, so we won't indulge in unnecessary detail.

The pressure for IPv6 came from the impending exhaustion of the existing IPv4 address space. With the expected address space of IPv6, don't be surprised to see your toaster or microwave eventually acquire its own address.
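
The numbers behind the joke are easily checked: IPv4's 32-bit addresses against IPv6's 128 bits.

    v4, v6 = 2**32, 2**128
    print(f"IPv4: {v4:.2e} addresses")
    print(f"IPv6: {v6:.2e} addresses")
    # roughly 6.7e23 addresses per square metre of the Earth's surface
    print(f"per square metre of Earth: {v6 / 5.1e14:.2e}")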

While IPv6 is the biggest impending event for the Internet community, it isn't the only one in the pipeline.

A draft standard is being worked on for mobile IP, intended to support remote access by mobile hosts (laptops, organisers, etc.). At this time it is expected that mobile IP will use a model similar to that used with mobile phones. In such schemes, the remote host is given a connection to its "home" router, whence it connects to the "outside" world. With mobile IP, the connection to the "home" router would be via the IP tunneling protocol, another recent development. The tunneling protocol allows IP to transparently encapsulate other protocols, including IP itself, and "tunnel" such traffic through an IP network to its intended destination.
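
The encapsulation itself is conceptually trivial, as the toy model below shows: the whole inner datagram simply becomes the payload of a fresh outer IP header addressed to the tunnel endpoint. All names and fields here are hypothetical simplifications.

    from dataclasses import dataclass

    @dataclass
    class IPPacket:
        src: str
        dst: str
        proto: str        # "TCP", "UDP", or "IPIP" when encapsulated
        payload: object

    def encapsulate(inner, tunnel_src, tunnel_dst):
        return IPPacket(tunnel_src, tunnel_dst, "IPIP", inner)

    def decapsulate(outer):
        assert outer.proto == "IPIP"
        return outer.payload            # the inner datagram, untouched

    original = IPPacket("192.0.2.1", "mobile-host", "TCP", b"data")
    tunnelled = encapsulate(original, "home-router", "care-of-address")
    print(decapsulate(tunnelled).dst)   # -> mobile-host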

Version 2 of the Routing Information Protocol is now in the committees, and is intended to fix security issues in the existing RIP. Security in the IP environment is now a major issue, with no fewer than 17 IETF drafts under discussion which are specifically concerned with adding security features such as authentication and encryption. We can also expect to see a substantially revised DNS with cryptographic authentication features built in.
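
The common thread in much of this work is keyed-hash authentication: sender and receiver share a secret, the sender appends a hash computed over the message and the secret, and the receiver recomputes and compares. The sketch below illustrates the principle only; the actual RIP and DNS wire formats differ in detail, and the key is hypothetical.

    import hashlib, hmac

    SECRET = b"shared-routing-key"       # hypothetical shared secret

    def sign(update: bytes) -> bytes:
        return update + hmac.new(SECRET, update, hashlib.md5).digest()

    def verify(message: bytes) -> bytes:
        update, digest = message[:-16], message[-16:]
        expected = hmac.new(SECRET, update, hashlib.md5).digest()
        if not hmac.compare_digest(digest, expected):
            raise ValueError("authentication failed, update rejected")
        return update

    print(verify(sign(b"route 10.1.0.0/16 via 10.0.0.2 metric 3")))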

Taking a longer term perspective, we will see improvements in both the utility and the security of the Internet protocol suite over the next few years. There is much pressure to upgrade the security of the protocols, understandably so, as the existing protocols are not robust enough to carry transactions securely over a network in which any node could be compromised.

Summary

We are now at the threshold of a period of massive change and growth in the networking infrastructure and technology base. Over the next decade we can expect to see significant growth in available bandwidth, a wide range of interfaces and access media, and hopefully significant improvements in IP security. Areas which the current standards effort has yet to address are electrical security and survivability, and we can hope to see further work on both. The outlook for the next decade is a faster, more secure and vastly more diverse networking environment. We have a very interesting period ahead of us.



Artwork and text © 2005 Carlo Kopp

