Remote Access - Coming of Age?

Originally published April 1999
by Carlo Kopp
© 1999, 2005 Carlo Kopp

Remote access to domestic or small business premises has traditionally been the decisive performance bottleneck in the data connectivity game. While the performance of LANs and WANs has increased by at least an order of magnitude in the last decade, the remote access channel has been mostly constrained by the limits of the humble telephone line.

As we approach the new millennium, it is perhaps appropriate to reflect upon recent trends in technology and explore the possibilities which exist in the most recent generation of digital communications technology.

This last decade has seen the optical fibre mature as a medium, while LANs have crossed the 100 Mbit/s threshold, and 1 Gbit/s hardware is beginning to proliferate. Yet the humble modem, for all of its digital signal processing sophistication, is rapidly approaching the theoretical limits set down by Shannon half a century ago, and is unlikely to provide much further growth.
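
To see why, consider Shannon's capacity formula, C = B log2(1 + SNR). A back of the envelope sketch, assuming a nominal 3.1 kHz voiceband and a spread of plausible line SNRs, lands squarely in the territory today's modems already occupy:

    import math

    def shannon_capacity(bandwidth_hz, snr_db):
        """Shannon channel capacity C = B * log2(1 + SNR)."""
        snr_linear = 10 ** (snr_db / 10.0)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Nominal POTS voiceband: roughly 300-3400 Hz, i.e. ~3.1 kHz usable.
    bandwidth = 3100.0
    for snr_db in (30, 35, 40):
        c = shannon_capacity(bandwidth, snr_db)
        print(f"SNR {snr_db} dB -> capacity ~{c/1000:.1f} kbit/s")

At a typical 35 dB line SNR the ceiling is around 36 kbit/s, which is why analogue upstream rates stall near 33.6 kbit/s; the 56 kbit/s downstream figure is only reachable because the ISP end is digitally connected and sidesteps one analogue conversion.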

The abundance of LAN and WAN bandwidth has fostered the evolution of genuinely bandwidth hungry applications, thus putting increasing pressure on the bandwidth of all channels.

What happens next? This is a critical question, and one for which there are a number of possible answers. To gain a better insight into the alternatives and their implications, we will take a brief look at the evolving technology base.

Technology for Remote Access

The starting point for any discussion of remote access technology is to consider briefly what it is to be used for. Most domestic remote access falls into the category of web browsing, email, Usenet access for those brave enough, and the odd bit of file transfer. Small business access falls into a similar category, perhaps with slightly different loading statistics between services.

The power of a LAN/WAN to deliver services such as NFS, X11, CIFS and similar bandwidth hungry protocols does not exist in the established remote access model, since performance is so abysmal that most users will find it futile. An xterm or gnuplot window will run, lethargically, over a low speed link. Try NFS, CIFS or a large X11 application such as Netscape or the Citrix Windoze X11 front ends, and you are simply out of the game.

The standard peripheral we all use today for remote access is the data compressing voiceband modem running over POTS. A modern modem will typically contain a microprocessor of quite decent performance, and a high performance modem chipset.

The basic functional model is that the datastream passing through the modem is compressed or decompressed, depending on direction, and then fed into the modem chipset. A typical contemporary design will employ one or another variant of Quadrature Amplitude Modulation (QAM), allowing an aggregate throughput of 56 kbit/s or more.
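
The arithmetic behind such figures is straightforward: an M-point QAM constellation carries log2(M) bits per symbol, so raw throughput is the symbol rate times the constellation density. A small sketch, with an assumed 3200 baud symbol rate chosen purely for illustration:

    import math

    def qam_bitrate(symbol_rate_baud, constellation_size):
        """Raw bit rate of M-ary QAM: symbols/s * log2(M) bits per symbol."""
        return symbol_rate_baud * math.log2(constellation_size)

    # Illustrative figures only, not drawn from any specific standard.
    for m in (16, 64, 256, 1024):
        rate = qam_bitrate(3200, m)
        print(f"{m:>4}-QAM at 3200 baud -> {rate/1000:.1f} kbit/s raw")

Each doubling of constellation density buys one extra bit per symbol, but demands a cleaner line, which is exactly where the limits discussed below begin to bite.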

Such performance is however predicated upon an acceptable line quality, in terms of interference, crosstalk and simple signal to noise ratio. The longer the line, the poorer the performance typically becomes, and the less of the design's potential bandwidth is achievable. Another basic obstacle to pushing the voiceband modem further is that of bandlimited digital switches and PABXs, which essentially strip the baseband spectrum away beyond some limit. This can be exacerbated in some designs by a propensity to phase distortion in anti-aliasing filters, which may also impair throughput even if the bandwidth is acceptable. Since the information is encoded as a combination of amplitude and phase, phase distortion, incidentally also a familiar nuisance in US NTSC TV colour encoding, can be a genuine problem.

As noted earlier, we are reaching the limits of the voiceband modem. There is little left to squeeze from it, especially since bulk traffic such as web pages frequently contains GIF and JPEG files which do not usually lend themselves to much in the way of further compression.

The next step up is the ISDN service, providing one or two 64 kbit/s synchronous B channels, via a V.110 or V.120 interface. ISDN is still a relatively expensive service, running into hundreds of dollars a month for moderate traffic loads. For the privilege of approximately doubling (or less) the throughput of an expensive voiceband modem, a substantial cost is incurred.

ISDN, like the voiceband modem, runs over a twisted pair cable and was originally intended as a direct replacement for the analogue voice service, using a much faster digital modulation over the cable. The block replacement of analogue telephones has yet to transpire.

ISDN is now approaching the level of maturity where the quality of service is acceptable for most applications. However, the marginal performance gain over a voiceband modem raises serious questions about its worth, given the costs involved.

It would be fair to describe ISDN as a failure, insofar as the performance/price ratio of the service is arguably not competitive, and the throughput limit of 2 x 64 kbit/s has for most of the last decade lagged well behind that needed to provide a full LAN quality service to an end user.

The successor to the ISDN scheme for digital end user connectivity is the family of Digital Subscriber Line (xDSL) protocols, which extend the same idea to much faster speeds by using more sophisticated modulation schemes.

Several protocols fall under this description. The "slowest" is ADSL (Asymmetric Digital Subscriber Line), which is to run over standard twisted pairs, trading speed for distance. ADSL is by definition asymmetric, with an uplink much slower than the downlink path. In this respect it carries a heavy optimisation for services such as the web and interactive "broadcast".

In terms of speed, ADSL is to provide a T1 (1.544 Mbit/s) service to 6 km, an E1 (2.048 Mbit/s) service to 5 km, a DS2 (6.312 Mbit/s) service to 4 km, and an 8.448 Mbit/s service to 3 km, subject to cable and junction box quality. Uplinks vary between 64 and 640 kbit/s.
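
To put those figures into perspective, a quick sketch of transfer times for a hypothetical 10 Megabyte file over each tier, against a 56k voiceband modem (protocol overheads ignored):

    # Hypothetical 10 MB transfer over the quoted ADSL tiers versus a 56k modem.
    TIERS_KBITS = {
        "56k modem": 56,
        "T1 (~6 km)": 1544,
        "E1 (~5 km)": 2048,
        "DS2 (~4 km)": 6312,
        "8.448 Mbit/s (~3 km)": 8448,
    }

    file_bits = 10 * 1024 * 1024 * 8  # 10 MB payload, overheads ignored
    for name, kbits in TIERS_KBITS.items():
        seconds = file_bits / (kbits * 1000)
        print(f"{name:>22}: {seconds:7.1f} s")

A 25 minute download at modem rates collapses to under a minute at T1 rates and to around ten seconds at the top tier.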

Such performance is clearly well ahead of ISDN, and getting into the bracket of acceptable for web browsing. However, the asymmetric channel is not well suited to NFS/CIFS file serving and other LAN-like services. Should the remote client wish to do a lot of writes, ADSL will quickly lose its attractiveness. Indeed, this is the penalty paid by the professional user who must try to survive on a service designed for mass market domestic users.

The next step in the DSL scheme is the follow-on to ADSL, VDSL (Very high speed DSL). Like ADSL it is intended to provide an asymmetric service, up to about 55 Mbit/s, and also trades speed against distance, operating over much shorter ranges.

Two other xDSL variants exist, but to date have been mostly confined to internal telco use between switches. These are the symmetric HDSL and SDSL, providing T1/E1 speeds to distances between 3.3 and 4 km. Whether these become more widely available to end users remains to be seen. As with other DSL family schemes, cable quality is an issue.

The xDSL family of protocols is, with the exception of VDSL, sitting at the lower boundary of acceptable performance for the professional end user, and in its asymmetric variants may fall below this boundary due to uplink speed limits.

In the low density Australian environment, with typically longer cable runs on average, only those subscribers close to switches are likely to see the full performance of the DSL services. As these are bound to the twisted pair infrastructure, they are in many respects operating with an a priori handicap.

Another contender for remote access has cropped up in recent times, and has quite a bit of technological potential. It is the Cable Modem.

The cable modem is a radio frequency modem, coupled in effect to a small IP router, which runs over otherwise uncommitted 6 MHz television channel slots on the cable TV network. As such it has an immediate advantage over the twisted pair technology of ISDN/DSL, since the basic channel in use is a high quality UHF band coaxial cable, with an aggregate bandwidth of at least a GHz for a run of modest length.

Cable modems also achieve throughput without the "smoke and mirrors" of clever line equalisation, echo cancelling and other tricks of the twisted pair trade designed to overcome the pathology of the twisted pair. Thus a cable modem is potentially much simpler and cheaper in design than an xDSL modem. Cable modems will most frequently use a QAM modulation variant, since the medium is relatively well behaved in terms of phase distortion and interchannel interference. Top of the line cable modem technology can provide 30 Mbit/s or more downstream throughput, albeit shared between multiple users.
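
The sharing is worth keeping in mind. A crude sketch, assuming a 30 Mbit/s downstream channel and a hypothetical 200 subscriber segment, shows how quickly the headline figure dilutes:

    # Rough per-user throughput on a shared cable segment. The subscriber
    # count and activity fractions are assumptions for illustration.
    channel_mbits = 30.0
    subscribers = 200
    for active_fraction in (0.05, 0.25, 0.5):
        active = max(1, int(subscribers * active_fraction))
        share = channel_mbits / active
        print(f"{active:3d} active users -> ~{share:.2f} Mbit/s each")

With half the segment active, each user sees around 0.3 Mbit/s, barely ahead of ISDN, which is consistent with the load sensitivity noted below.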

But the cable modem is not without its warts. The first of these is asymmetry in channel speeds, common to many modem types. Again, the target domestic user and his/her preference for web browsing means that much of the cable modem technology in the marketplace carries this basic optimisation and the issues which come with it.

In some respects, asymmetry is as much a design optimisation as a consequence of the difficulties involved in pumping a signal upstream in a tree structured cabling network, designed for a broadcast medium.

In Australia cable modem services are now available in some of the capitals, where the cable TV network reaches into the suburbs. At a monthly cost typically of the order of one hundred dollars or more, subject to traffic levels, you can enjoy a downstream bit rate which frequently reaches megabits per second. My enquiries with users of this service suggest that quality and throughput tend to vary, often strongly, with load, and the service can at times be quite marginal.

Other difficulties can also arise with a service which is heavily optimised for domestic web browsing. One is the common practice of dynamically allocating IP addresses upon connection, in a manner analogous to a modem pool. This, needless to say, much complicates routing from a company or organisation site. The standard practice is to cobble up a script via which a routing node on the central site is told what the "route of the day (read session)" is at connect time. Trivial for a computer scientist who likes to play with scripts and understands IP routing well, but clearly beyond the reach of the typical end user.
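
For the curious, a minimal sketch of such a "route of the day" script follows. The host name, network addresses and the use of ssh as the update channel are all assumptions for illustration; the route command syntax shown is the Linux flavour, and any real deployment would want the exchange properly authenticated.

    import socket
    import subprocess

    CENTRAL_ROUTER = "router.central.example.com"  # hypothetical central node
    REMOTE_LAN = "192.168.10.0"                    # assumed LAN behind the modem
    NETMASK = "255.255.255.0"

    def current_address():
        """Discover the dynamically assigned address on the cable modem side."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.connect(("192.0.2.1", 53))  # connect() on UDP sends no packets
        addr = s.getsockname()[0]
        s.close()
        return addr

    def repoint_central_route(gateway):
        """Tell the central routing node this session's gateway address."""
        cmd = (f"route del -net {REMOTE_LAN} netmask {NETMASK}; "
               f"route add -net {REMOTE_LAN} netmask {NETMASK} gw {gateway}")
        subprocess.run(["ssh", CENTRAL_ROUTER, cmd], check=True)

    if __name__ == "__main__":
        repoint_central_route(current_address())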

Asymmetry being what it is, services such as NFS, CIFS and X11 do not always find a cable modem channel entirely agreeable, and thus the utility of the service for the professional user may be marginal. Crossmounting drives between remote and central sites is hardly a practical proposition here, throughput instability aside. So yet again, the desired LAN like performance is not really available.

Clearly what we see in both the DSL and Cable Modem services is the mass market bias of the major vendors, who evidently take the position that if you are a professional user, you ought to be paying for the privilege of high bandwidth, technological issues aside. Running a symmetrical 2.048 Mbit/s service, let alone a faster one, to remote premises or a household is simply unaffordable under the current pricing regimes.

Are the xDSL family of services, or cable modems, the only available choices for high speed remote access? If you wish to buy the service off your Telco, odds are this is all you will be seeing for the foreseeable future. If you don't like the cost or the characteristics of the service, tough luck. They own the cables and thus hold the monopoly.

Are there other technological alternatives out there? Most definitely. Whether they will become available, and at what cost, is an interesting question in itself.

Future Alternatives for Remote Access

Were the telcos to have a genuine commitment to servicing the professional user of a remote access service, or commercial incentives to do so, then arguably both xDSL and cable modem services could be adapted for this purpose and priced competitively. However, since no such short term incentives exist, no Telco will ever bite the bullet of a strategic ten year timeline investment in these days of hungry shareholders, technology averse bankers, undersubscribed cable TV services, indifferent governments, and mostly technically illiterate end users.

An example of a strategic technology investment would be the replacement of the copper subscriber end network with single mode optical fibre. Providing virtually unlimited bandwidth, a fibre distribution network at the end user level would provide for basically unrestricted growth in the longer term. Since a robust glass fibre in a suitable jacket can last for decades, the bandwidth problem simply goes away; the only performance constraint is set by the equipment at the end user and exchange ends of the fibre.

Is this likely to happen in our lifetimes? Probably not, since it is not in the short or medium term commercial interest of any Telco to do so. In the days of a single government owned monopoly, a massive investment of this scale might still have been possible. In the current deregulated and short term profit centred scheme of things, it is a virtual impossibility.

Do other technological options exist for getting decent amounts of bandwidth into end user premises and suburbs? If we choose to abandon the fixed cable infrastructure and the de facto monopolies which preside over it, then the answer is definitely yes.

The technology is the microwave link, in its various incarnations.

Many Australian universities and larger corporate bodies have already chosen this route and set up their own private inter-campus microwave networks for the carriage of internal digital traffic. The costs of setting up and maintaining chains of microwave repeater sites, often spanning distances of a hundred kilometres or more, were found to be significantly cheaper than buying the same bandwidth from the telco community.

So the telco community basically missed out on one of the largest domestic markets for raw bandwidth, in order to maintain its price structures in other segments of the market, which were less geographically or technically able to adopt this model.

The long to medium haul microwave link, with its bandwidth licensing overheads and specified power limits, is not a viable proposition for end user distribution, since it requires some critical mass in volume to be cost competitive, and requires that the operator know what they are doing.

However, other alternatives are beginning to appear. One, which is yet to become firmly established, and for which costs are yet to be determined, is the Low Earth Orbit (LEO) satellite network. The two most prominent examples are the Motorola Iridium network and the Gates/McCaw sponsored Teledesic network. The latter now appears to be a viable concern, after many teething problems with basic technology, a break with a principal subcontractor, and the recent announcement that an additional MEO polar orbit constellation will need to be added to "decongest" traffic levels within the existing architecture (a fundamental design weakness of the original proposal which I pointed out in Systems Magazine some time ago).

The LEO constellations exploit the idea of "crosslinks" between satellites, which can operate at extremely high bandwidths in the millimetric bands, since they do not have to punch their way through the tropospheric soup of oxygen, water vapour, clouds and rain, i.e. the traditional bandwidth and quality limiters of satellite communications.

The traditional Clarke orbit or GEO (Geostationary Earth Orbit) satellite sits in space suspended above one point on the equator, and floods a fixed footprint with a broadcast service. Such systems are termed in the satellite trade "bent pipe" systems, since they are basically dumb analogue repeaters. Because of the large distances to GEO stations, such links are typically limited in bandwidth for reasonable antenna sizes, and suffer significant latencies due to the large distance up and down from the satellite.

Teledesic is the first to genuinely break this well established paradigm, by putting intelligence on the satellites, building each to carry a high speed router (we are yet to find out whether these routers will be based upon NT ... if your Teledesic link drops out with a "general protection fault", then this may be the case). The Teledesic model, like all LEO schemes, does not suffer the latency problems of a GEO system, since the data it carries goes up from the surface no more than a few hundred or so kilometres, subject to slant angle, then hops across a chain of satellites, and then drops back to the surface.
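
The latency argument is easily quantified. A back of the envelope sketch follows; the Clarke orbit altitude of 35,786 km is standard, while the 700 km LEO altitude and the five 3000 km crosslink hops are assumed figures for illustration:

    # Back-of-the-envelope propagation delay: GEO "bent pipe" versus a LEO chain.
    C = 299_792.458  # speed of light, km/s

    geo_altitude = 35_786.0   # km, Clarke orbit
    leo_altitude = 700.0      # km, assumed LEO shell

    # GEO: up to the satellite and back down, one way end to end.
    geo_one_way = 2 * geo_altitude / C

    # LEO: up, a few crosslink hops, then down. Hop geometry is assumed.
    hops, hop_km = 5, 3000.0
    leo_one_way = (2 * leo_altitude + hops * hop_km) / C

    print(f"GEO one-way: {geo_one_way * 1000:.0f} ms")
    print(f"LEO one-way: {leo_one_way * 1000:.0f} ms")

Even with a generous hop count, the LEO path comes in around 55 ms against roughly 240 ms for the GEO bounce, well under a quarter of the figure.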

The existing Teledesic model is optimised for long haul backbone services and is thus not likely to be the kind of thing which you can cheaply tap into by putting a dish on your roof. Teledesic do however claim that they will offer bandwidth at about 1/3 to 1/4 the cost of GEO systems. Until the network is up and running, this assertion is untestable.

However, it illustrates the longer term potential of Iridium-like digital satellite services for end subscribers. Providing that a sufficient number of subscribers exist, costs could become quite competitive for end users in country and remoter rural areas, who are not well served by the existing model.

Is this the limit of what is achievable using microwave links? By no means. The recent adoption of the IEEE 802.11 standard for wireless Ethernet is a very important milestone, which is likely to produce the technology base for some very high quality end user services.

The 802.11 standard is optimised for the provision of building, campus or site level 2.45 GHz Industrial Scientific Medical (ISM) band wireless RF connectivity to portable computers, using spread spectrum low power transmission techniques.

The ISM bands (0.9, 2.45 and 5.8 GHz) are available for use without licensing, providing that output field strengths fall under a specified threshold (specifically an EIRP of 0 dBW, or 1 W, at the transmitter). It is worth noting that the ISM band is regarded as "garbage" spectrum, in that it is polluted by leakage from 2.45 GHz microwave ovens, and therefore could not be sold off like cleaner parts of the spectrum.
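
The EIRP ceiling is simply transmitter power plus antenna gain, so the 0 dBW limit can be met by trading one against the other. A small sketch, with power and gain combinations chosen purely for illustration:

    import math

    def eirp_dbw(tx_power_w, antenna_gain_dbi):
        """EIRP in dBW: transmitter power (dBW) plus antenna gain (dBi)."""
        return 10 * math.log10(tx_power_w) + antenna_gain_dbi

    # Illustrative combinations against the 0 dBW (1 W) ISM ceiling.
    for power_w, gain_dbi in ((1.0, 0), (0.1, 10), (0.25, 6)):
        e = eirp_dbw(power_w, gain_dbi)
        print(f"{power_w * 1000:5.0f} mW + {gain_dbi:2d} dBi -> EIRP {e:5.1f} dBW")

A 100 mW transmitter into a 10 dBi antenna sits at exactly the same legal limit as 1 W into a simple dipole, which is why directional antennas matter so much in this band.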

Basic 802.11 hardware is optimised for LAN applications, both in terms of the protocols used and the off-the-shelf hardware which is available. Bandwidth provided by the current standard is either 1 or 2 Mbit/s, symmetrical, which sits at the bottom end of usable LAN class performance, assuming that no heavy interference is present.

1-2 Mbit/s is nevertheless easily adequate for the intended provision of connectivity to laptops, palmtops and "wearables".

However, extensions to the standard currently under debate would push this as far as 20 Mbit/s, which is by any measure decent performance for a LAN application. Backward compatible implementations of these draft proposals are being marketed and sold now as proprietary products, an example being the popular 5.5-11.0 Mbit/s Harris chipset.

The importance of 802.11 in the context of remote access lies in the technology base which it will foster. Chipsets and board level products designed for 802.11 applications will be cheap, high volume commodity products, much like current LAN adaptors. However, by wrapping additional hardware and code around them, they can very easily be adapted to form the basis of very low cost ISM band, low power spread spectrum radio modems for point to point links.

Microwave 802.3 bridges operating in the ISM band have been available in the US market for quite some time now, intended for uses such as connecting two campuses, or two buildings, up to about 10-20 km apart, providing that a direct line of sight exists. However, due to small production volumes such devices have been relatively expensive, with the cost of a link running into thousands of dollars.
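
A rough link budget shows why such distances are workable given line of sight. The sketch below uses the standard free space path loss formula; the 0 dBW EIRP and the 24 dBi receive dish gain are assumed figures for illustration:

    import math

    def fspl_db(freq_mhz, dist_km):
        """Free space path loss (Friis): 32.45 + 20log10(f_MHz) + 20log10(d_km)."""
        return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

    # Assumed link: 0 dBW (30 dBm) EIRP at 2450 MHz into a 24 dBi receive dish.
    EIRP_DBM, RX_GAIN_DBI = 30.0, 24.0
    for d_km in (5, 10, 20):
        loss = fspl_db(2450.0, d_km)
        rx_dbm = EIRP_DBM - loss + RX_GAIN_DBI
        print(f"{d_km:2d} km: path loss {loss:.1f} dB, received ~{rx_dbm:.1f} dBm")

Received levels in the -60 to -72 dBm region over 5-20 km leave a comfortable margin over typical receiver sensitivities for this class of hardware, provided the path is genuinely unobstructed.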

Nevertheless, for a medium sized company or ISP requiring several Mbit/s between two sites 5-20 km apart, this one-off cost will be amortised very quickly given prevailing rates for bandwidth in the telco marketplace. It is therefore quite curious that the technology has not been more popular in the Australian marketplace.

As the proliferation of 802.11 hardware drives down the cost of this class of product, we can expect to see ISM band microwave radio modems drop well below the magic $1k barrier, indeed a $500 device or adaptor of this ilk is entirely technically and commercially feasible in the next several years.

Once this technology crosses such a unit price threshold, various interesting technological possibilities arise.

One which is obvious is that of small, medium and larger ISPs running their own ISM band private microwave networks out into the suburbs. Once established within a suburb, the ISP can then run individual links from a central site to individual households. The same is true for commercial organisations, educational institutions or government bodies, supporting their staff or remote offices.

Other more evolved alternatives also exist, such as cooperative community networks. At least two research projects aimed at developing such schemes exist, one in the US and one in Australia. In such cooperative networks, each node acts as a router thereby decentralising the distribution architecture at the bottom end of the network. Numerous technical issues remain to be resolved before such a distributed model can hit the streets, however none of these fall into the category of genuinely insurmountable problems. Security and robustness are of major importance in this context, since existing technology does not address these well.
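
As a toy illustration of the routing idea, the sketch below computes each node's first hop with Dijkstra's algorithm over a shared topology map, which is the essence of a link state approach; the mesh and its link costs are invented for the example.

    import heapq

    def next_hops(graph, source):
        """Toy link-state routing: Dijkstra over a shared topology map,
        returning the first hop used toward every reachable destination."""
        dist = {source: 0.0}
        first = {}
        heap = [(0.0, source, None)]
        while heap:
            d, node, via = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            if via is not None:
                first[node] = via
            for nbr, cost in graph[node].items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr, via if via else nbr))
        return first

    # Hypothetical suburban mesh: household nodes A-E with link costs.
    mesh = {
        "A": {"B": 1, "C": 4},
        "B": {"A": 1, "C": 1, "D": 5},
        "C": {"A": 4, "B": 1, "D": 1},
        "D": {"B": 5, "C": 1, "E": 1},
        "E": {"D": 1},
    }
    print(next_hops(mesh, "A"))  # e.g. traffic for E leaves A via B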

Spread spectrum microwave technology thus has the technological potential to effectively bypass the existing end user bandwidth bottleneck and provide a genuine symmetrical high speed remote access channel, with LAN-like performance, out to a domestic or small business user.

The sad truth is that the technology to do this has existed for many years now, but has remained unexploited by the telcos since it has never been in their interests to provide competitively priced bandwidth in the short haul, end user domain. The supporting infrastructure to provide this class of service has existed for some time now in the profusion of mobile telephone repeater towers throughout the suburbs. Since these are frequently positioned in optimal locations for local geographical coverage, piggybacking additional antennas and point to point hookups would hardly be a technically difficult task to perform. A good section of any inner suburb could be flooded from a single station, using perhaps $10-20k of volume production hardware to support around a hundred end users.

It will be interesting to see whether the telcos bite the bullet and deal with the issue of low cost end user connectivity. All evidence at this point in time suggests that they will desperately cling to their existing monopoly and continue to constipate the market until such time as users begin to jump ship, something likely to happen very quickly once viable alternatives begin to proliferate.

If you have the choice of paying hundreds of dollars a month for a marginal cable modem or DSL service, or of investing in a thousand bucks of ISM band RF modem and antenna hardware and tapping into a local ISP, who will be highly competitive for bandwidth outside the suburb and likely free for bandwidth locally, would you continue to use the telco?

The telco community will then have the choice of either becoming genuinely competitive, or abandoning the market altogether and concentrating upon long haul bulk bandwidth.

The future presents us with some very interesting prospects, and interesting commercial opportunities for those bold enough to break the existing remote access bandwidth bottlenecks.

From a strategic perspective, we must never lose sight of the fact that every cent expended on paying for connectivity is a cent which reduces our collective national economic competitiveness, whatever line of business we are in, other than being a telco.

It may be argued that the economic rationalist doctrine which several past governments have espoused in the context of connectivity has been counterproductive. Granted, it has produced much excess capacity in various areas of the infrastructure and driven down the costs of some bulk services, but the mechanics of competition between small numbers of large players will prevent substantial downward shifts in end user remote connectivity costs, bias available services toward large markets only, and paralyse long term strategic investment in infrastructure. The equilibrium point about which current market forces hold the telco community is not serving the national interest particularly well, even if it does serve the telcos' interests.

The Internet provides tremendous potential as a marketing vehicle for businesses of all sizes, both domestically and, more importantly, on the international scene. To exploit this effectively, bandwidth must be abundant and cheap, especially to the end user. Otherwise, whatever advantages are gained are eroded by the cost of connectivity.

A good case can be made for the government to sponsor a national information infrastructure program to provide the investment base and strategic leadership to achieve the objective of universal, low cost high speed IP connectivity. Whilst it may not be an ideologically proper thing to do, it has the potential to stimulate the economy in the short, medium and long term.

The existing model via which remote access connectivity is provided is clearly not going to achieve this.


