Remote Connectivity - THE Bandwidth Bottleneck?
Originally published May, 1998
by Carlo Kopp
© 1998, 2005 Carlo Kopp
Remote connectivity is without any doubt the weakest link in our established networking paradigm, be it from the perspective of achievable throughput performance or link robustness. A very large proportion of the pain suffered by users and administrators with contemporary computing is directly attributable to problems associated with remote access. In this feature we will explore the established technology base and its limitations, and then some of the emerging technology and its limitations. Clearly the mobile or domestic computing platform is the most disadvantaged in connectivity, and this situation is unlikely to change in the near future.

Was this situation inevitable or impossible to predict, in hindsight? Hardly so, since fifteen years ago the technology was available to run a multiple Gigabit/s optical fibre out to every telephone subscriber, be they corporate or domestic. Has this yet transpired? When is it likely to transpire? These are very good questions, to which we have yet to hear serious answers from the vendors of connectivity services, be they private or publicly owned. During the early eighties Australia was a world leader in optical fibre technology R&D, and the adoption of a fully optical subscriber network, costly as it may have been, would have provided an opportunity to build up a world class domestic and export industry in this vital technology. This opportunity was squandered and, as is typical, nothing happened.

More recently we have seen the deployment of cable television in our larger cities, an inherently wideband medium. Again, optical fibre was rejected in favour of cheaper copper based installations, despite the fact that the copper cables deployed have an in-service life of about a decade, and will need to be replaced periodically downstream. Given that optical fibre cables have a lifetime of decades, this is yet another case of making short term investments. Another opportunity was lost.

So where does this leave the hapless consumer of digital bandwidth? At this time mostly with POTS (Plain Old Telephone Service), some ISDN services, and some basic cable modem services. The limitations of these we will explore a little more closely.

The Limitations of Voiceband POTS Services

The analogue telephone line and its supporting technology, the analogue modem, have made tremendous progress in basic capability over the last decade and a half. We have seen achievable throughput improve by basically an order of magnitude in this period, no mean feat for a piece of technology which was arguably conceptually obsolete over a decade ago.

To best understand the basic limitations of the voiceband modem it is helpful to look at the fundamentals of this technology. The earliest voiceband modems, limited to several hundred bits per second of throughput, are not a bad starting point. Such devices used very simple schemes such as frequency or phase modulation of a voiceband carrier wave. Basically a sine-wave carrier was deviated in frequency or phase to signal either a logical 1 or a logical 0. The simplest modem would in fact accept a clocked bit stream and produce a clocked bit stream at the other end, and the computer's I/O interface would have to worry about the data content and make sense of it.

The first big innovation in voiceband modems was the introduction of a UART (Universal Asynchronous Receiver and Transmitter) into the modem. Built around a shift register, a UART can receive and send basic asynchronous serial data. On receipt a UART uses the start and stop bits to frame the incoming 7 or 8 bit character of data, and on transmission it adds the same framing to the data to be sent to the remote end. A more sophisticated modem would employ a microprocessor to load and unload the UART; a very simple modem would do it with a mere hardwired state machine.
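The framing overhead of asynchronous transmission is easy to quantify. Below is a minimal C sketch, purely for illustration, of the common 8N1 format: one start bit, eight data bits, one stop bit. Ten line bits carry eight data bits, which is why a 9600 bit/s line moves only 960 characters per second.

    #include <stdio.h>

    /* Frame one byte as 8N1: a start bit (0), eight data bits sent LSB
       first, and a stop bit (1). Returns the number of line bits. */
    static int frame_8n1(unsigned char byte, int out[10])
    {
        int i, n = 0;
        out[n++] = 0;                    /* start bit */
        for (i = 0; i < 8; i++)
            out[n++] = (byte >> i) & 1;  /* data bits, LSB first */
        out[n++] = 1;                    /* stop bit */
        return n;
    }

    int main(void)
    {
        int bits[10];
        int n = frame_8n1('A', bits);
        printf("%d line bits per byte -> %d chars/s at 9600 bit/s\n",
               n, 9600 / n);
        return 0;
    }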
The next big innovation in modems was the adoption of more sophisticated modulation schemes based upon ideas like Quadrature Amplitude Modulation (QAM), in which several bits can be sent at once by allowing the modem to signal multiple levels at its given Baud rate (using the term Baud rate properly, to denote the signalling rate on the line rather than the rate at which bits are pumped). As usual there are no free lunches, since multilevel signalling is inherently more sensitive to interfering signals on the line and to degraded signal to noise ratio.
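The relationship between Baud rate and bit rate is worth a quick illustration. The sketch below, a hypothetical example rather than any particular modem standard, computes the bit rate as the symbol rate times log2 of the number of signalling levels:

    #include <stdio.h>
    #include <math.h>

    /* Bit rate of a multilevel line code: symbol (Baud) rate times the
       number of bits per symbol, i.e. log2 of the constellation size. */
    static double bit_rate(double baud, int levels)
    {
        return baud * (log((double)levels) / log(2.0));
    }

    int main(void)
    {
        /* At 2400 Baud, denser constellations buy throughput at the
           cost of noise margin. */
        printf("2 levels:  %5.0f bit/s\n", bit_rate(2400.0, 2));
        printf("16-QAM:    %5.0f bit/s\n", bit_rate(2400.0, 16));
        printf("256-QAM:   %5.0f bit/s\n", bit_rate(2400.0, 256));
        return 0;
    }

A 2400 Baud line carrying 16-QAM thus moves 9600 bit/s, which is precisely why the noise behaviour of the line, rather than its nominal bandwidth, became the limiting factor.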
Two factors made the adoption of more sophisticated modulations possible. One was the increasing proliferation of digital telephone networks and switches, which have much better signal to noise ratio performance than basic analogue networks. The other was the adoption of clever error correction schemes. A modem would not transmit just raw data; it would send encapsulated packets of data with embedded error control or cyclic redundancy check data. In this manner data corrupted by errors was detected and either corrected or resent.

Clearly throughput was still a major issue, and we saw the adoption of data compression schemes in modems. The modem would accumulate a batch of characters and, subject to the statistics of the data, compress them down to something between 1/2 and 1/4 of the original size before sending. At this time modems providing 28.8 to 33.6 kbps using compression schemes are the established market standard, with modems capable of 56 kbps becoming increasingly common. At a bit rate of 56 kbps a voiceband modem running over a "pseudo" analogue telephone line rivals a single 64 kbps ISDN B-channel for throughput.

The real throughput of such high performance analogue modems is limited by several factors, all of which relate to the basic supporting infrastructure. The copper twisted pair which runs from the exchange switch to the end user is one of these. Earlier features in this series have explored the limitations of this medium for transmission of higher speed (up to Megabits/s) signals. While the basic bandwidth of the cable is not an issue for most voiceband applications, crosstalk from adjacent cables and the presence of other interfering signals coupled into the cable most certainly are. Both introduce errors into the data stream, which must be handled by the error correction mechanism. The use of clever multi-bit signalling exacts a penalty in exactly this area.

The other serious throughput limitation, and one which sets an upper limit on the channel throughput, is the performance of the exchanges and switches between the modems. While the nominal throughput limit is set by 8 bits of data transmitted at 8000 samples per second (setting a basic 64 kbps limit), in practice the performance of switch Analogue to Digital and Digital to Analogue converters can be a major factor. These are usually equipped with lowpass anti-aliasing filters. The combination of a poor quality A/D or D/A, and the phase distortion and frequency roll-off introduced by such filters, can often severely impair the performance of sophisticated high speed analogue modems. This I can state from lengthy experience over many years.

One blunder committed by many organisations is to route their modem pool through a PABX, in order to simplify exchange wiring and gain more flexible control over rotary group allocations. In practice this means that the channel must go through a cheap PABX-grade D/A converter and filter set (and often an A/D as well). The result is often a highly error prone modem connection, with frequent dropouts and impaired throughput performance. It is always amusing to dial interstate over a pair of genuine comms-grade A/D and D/A converters and get much better behaviour from an interstate connection than a local one.

In practice, both domestic and travelling remote access users will have to grapple with these limitations. While we are now seeing some proliferation of mobile phone interfaces for data, they in turn are constrained in throughput by the limitations of the voiceband radio channels used. Most interfaces I have seen to date are limited to 9600 bps for mobile phones.

What more can we expect to see from the voiceband modem in coming years? Incremental refinements in signalling and compression will asymptotically approach the theoretical limits imposed by the 64 kbps digital network. We may see smarter modems optimised to carry web browsing traffic, with the modem containing an embedded cache for HTTP traffic; with the declining cost of DRAMs this will become feasible in coming years. Cable modems at this time already contain embedded routers, and adding a cache is prevented only by the cost of the modem CPU and DRAM. For all practical purposes we are very close to the throughput limits of voiceband technology.

Certainly there are other areas where existing technology leaves much to be desired. One area is that of call setup and PPP/SLIP protocol link establishment. The current paradigm is that we connect to a terminal server, and have either a program or a script fire ASCII strings at the terminal server, and decode the ASCII strings it returns, to log in and establish the PPP/SLIP connection. Once this is set up, IP traffic may be carried. Needless to say there is nothing more delightful to behold than a system's startup and login script falling out of sync with the terminal server and disconnecting itself to retry. I can recall the woes of at least one friend who, tied to an unnamed comms package on a proprietary OS, typically has to go through up to eight consecutive connection attempts to get in, all due to this category of problem. Under such conditions it probably pays to get a second phone line and leave the connection permanently up. Clearly it would be nice to see a standardised login and startup protocol for terminal servers and hosts, one which would be adhered to widely. The current profusion of kludges and home-grown solutions has distinct limitations.
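To illustrate the fragility, here is a minimal C sketch of the expect/send idea such scripts implement. The prompt strings, account name and file descriptor handling are hypothetical, and real implementations need timeouts and retries, whose absence is precisely what lets a session fall out of sync:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Accumulate input until the expected substring appears on the line. */
    static int expect(int fd, const char *pattern)
    {
        char buf[256];
        size_t len = 0;
        ssize_t n;
        while (len < sizeof(buf) - 1 &&
               (n = read(fd, buf + len, sizeof(buf) - 1 - len)) > 0) {
            len += (size_t)n;
            buf[len] = '\0';
            if (strstr(buf, pattern))
                return 0;                 /* prompt seen */
        }
        return -1;                        /* EOF or overflow: out of sync */
    }

    static void send_str(int fd, const char *s)
    {
        (void)write(fd, s, strlen(s));
    }

    /* fd is assumed to be an already opened modem line. */
    int login_sequence(int fd)
    {
        if (expect(fd, "login:")) return -1;
        send_str(fd, "remoteuser\r");     /* hypothetical account name */
        if (expect(fd, "Password:")) return -1;
        send_str(fd, "secret\r");         /* never hardwire this in practice */
        return expect(fd, "PPP session");
    }

    int main(void)
    {
        /* For demonstration only, run the sequence against stdin. */
        return login_sequence(STDIN_FILENO) ? 1 : 0;
    }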
Whatever tweaks may be applied to voiceband technology, it is clearly inadequate for the provision of serious bandwidth for professional use. Any attempt to run serious protocols like NFS, CIFS, X or others in this class is doomed to inadequate performance.

Digital Remote Connectivity - Will It Measure Up?

Alternatives do exist to voiceband modems, and many of these are becoming more available and often more affordable. The cost factor here is the greatest impediment to the provision of serious multi-Megabit/s remote access bandwidth, and the existing pricing policies of connectivity vendors are clearly aimed at milking every cent out of the hapless consumer.

The starting point for the discussion of digital remote access must be ISDN, the Integrated Services Digital Network scheme. Historically ISDN dates back to the early eighties, when farsighted telecommunications vendors identified the need for "high speed" direct digital access to the end user of telephone services. ISDN was thus devised to extend the digital exchange-to-exchange connectivity out to the telephone handset. The basic 2B+D ISDN service comprises a pair of 64 kbit/s synchronous data channels, termed B-channels, and a single 16 kbit/s D-channel, which uses the X.25-like LAP-D protocol. The ITU V.110 and V.120 interfaces were defined to provide data access.

Fifteen years ago, when ISDN was at the cutting edge of technology and voiceband modems ran at 2400 bit/s, ISDN may indeed have seemed to be the ultimate answer for remote connectivity. Today a basic ISDN service is marginally competitive against a 56 kbit/s voiceband modem. Both are clearly inadequate to carry even the most trivial of required end user services, web browsing. The sad truth is that the deployment of ISDN fell well behind the growth in required bandwidth for remote access, and the advances in voiceband modem technology narrowed the gap to the point where there is little to be gained in paying for an ISDN service unless it is priced at a similar level to a POTS connection. Given that the original objective of the ISDN model was to provide a uniform replacement for the analogue household telephone, with the capacity to handle additional digital services, it is fair to say that the whole ISDN model was a dismal failure. Even should we see the adoption of ISDN as the standard end user interface, it will be at least half a decade behind the demand curve for end user bandwidth.

Is there potential to squeeze more bandwidth out of ISDN? Assuming that we have raw access to a pair of 64 kbps digital synchronous channels, ie 128 kbps, and we apply existing compression technology to a data stream being pumped through this channel, then the ultimate throughput potential of the ISDN channel will fall somewhere between 128 kbps and 512 kbps. The cost penalty will lie in the more expensive ISDN service, and the required data compressing modem device, which may not be cheap. Again, many of the caveats applying to error correcting and compressing voiceband modems will apply. I have yet to see such a device in commonplace usage.
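The arithmetic behind that estimate is straightforward, and is sketched below; the 2:1 to 4:1 ratios are hedged assumptions for typical compressible traffic, with already compressed data yielding close to no gain:

    #include <stdio.h>

    int main(void)
    {
        const int b_channel = 64;          /* kbit/s per B-channel        */
        const int bonded = 2 * b_channel;  /* both B-channels: 128 kbit/s */

        /* Compression gain depends entirely on the data statistics. */
        printf("raw bonded pair:  %d kbit/s\n", bonded);
        printf("2:1 compressed:   %d kbit/s\n", bonded * 2);
        printf("4:1 compressed:   %d kbit/s\n", bonded * 4);
        return 0;
    }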
The follow-on protocols to ISDN are soon to become a reality, at least outside Australia, with the ADSL (Asymmetric Digital Subscriber Line) and VDSL (Very high speed Digital Subscriber Line) protocols in the pipeline. Because both were discussed in considerable detail earlier in this series, we will confine this discussion to the basics.

The ADSL protocol is designed to provide an asymmetric service, optimised to carry heavy downstream loads and light upstream loads. In effect it has been optimised for tasks such as digitised video distribution (H.26X series protocols) and web browsing. Clearly mass market forces are at play, and thus ADSL will not provide the symmetric bandwidth needed to accommodate NFS, CIFS and other such protocols. In terms of throughput, ADSL trades bandwidth for distance, offering a T.1 throughput (1.5 Mbit/s) to 6,000 metres, an E.1 throughput (2.048 Mbit/s) to 5,000 metres, a DS2 throughput (6.312 Mbit/s) to 4,000 metres and an 8.448 Mbit/s service to 3,000 metres. The upstream rate will vary between 64 kbit/s and 640 kbit/s. Under the best of circumstances, the ADSL service will not match the company LAN at 10 Mbit/s.

VDSL is still at this time a development item, and is intended to provide up to tens of Mbit/s over short distances. Clearly the short to medium term expectations of remote users will not be met by ADSL, and the odds are that by the time it deploys on a large scale, it will, like ISDN, be hopelessly behind the requirements for on-site levels of throughput performance.

The cable modem is now becoming available in Australia, but has yet to see wider deployment and genuinely competitive pricing. The idea behind the cable modem is to pump a QAM or similar high speed modulated carrier through unused 6 MHz television channels in the cable TV network. This technology has considerable potential in that it can support downstream bit rates of tens of Mbit/s; certainly numbers as high as 36 Mbit/s have been quoted. Once equipment compliant with the new IEEE 802.14 cable modem protocol becomes widely available, the technology could make web browsing from domestic and remote sites a little less painful.

There are several problematic issues with cable modem technology, which may not be overcome in the shorter term. One is that coverage is limited to areas with cable TV access. Another is that the service is in most instances asymmetric, again reducing its utility for general connectivity other than inherently asymmetric activities such as web browsing. The genuinely problematic issue will be standardisation, since it is in the best interests of a cable TV provider to offer only proprietary rather than industry standard connectivity protocols. As well, it is in the best interests (ie short term profitability) of a provider to oversubscribe channels, to the detriment of the bandwidth available to individual users. Unless pressure is applied by our reluctant regulators, the odds are that the cable modem may suffer a similar fate to ISDN, and never achieve its originally intended availability. Hopefully it will not become yet another case of too little too late.
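The oversubscription problem is easy to quantify. The sketch below, assuming the quoted 36 Mbit/s channel and equal sharing among the active users on a cable segment, shows how quickly the per-user share collapses:

    #include <stdio.h>

    int main(void)
    {
        const double channel_bps = 36.0e6;  /* quoted downstream capacity */
        int users;

        /* All subscribers on a segment share one channel; assume equal
           sharing among those active at any instant. */
        for (users = 1; users <= 1000; users *= 10)
            printf("%4d active users -> %10.0f bit/s each\n",
                   users, channel_bps / users);
        return 0;
    }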
What are the Alternatives?

One of the fundamental problems inherent in remote access, be it from home to a company site, between small remote sites and central sites, or providing a service to employees on the road, is that we are reliant upon the large providers, who can name a price for their bandwidth. The provider of connectivity holds what is termed a natural monopoly. It has been argued that deregulation of the communications industry will fix this. In practice, deregulation only favours those market segments which are very large and thus have the collective purchasing power to influence provider behaviour. The hapless staff member who needs to work from home or a remote site is thus in the unfortunate position of representing a small market segment, and one where the commercial pressures may dictate that whatever tribute a service provider names, that tribute is paid. Or work is done differently, and the enormous technological potential of universal computer networking is not exploited. Clearly the communications provider community has strong commercial incentives to minimise available bandwidth and maximise profit margins, in effect extorting top dollar for high speed data. Beating them up for exhibiting normal commercial behaviour is clearly not the answer.

By the same token, deregulation and more competition only work where you have a sufficient number of providers with large networks, and a sufficiently large market segment to provide clear commercial incentives to compete. High speed data for remote access purposes is not a service which falls into this category, and thus will continue to be the Cinderella in this game.

Are there alternatives? Clearly yes, but all require some level of investment by one or another participant. As noted earlier, were universal optical fibre access to be provided as a standard service, many of the bandwidth problems would vanish in the medium to long term. The problem is that such a fundamental piece of infrastructure would be a huge national investment which somebody would have to pay for in the short term, even if it would provide a tremendous long term payoff. In a deregulated multi-player market no single entity will have the capital to finance such an effort. That by default leaves it to a taxpayer already heavily burdened with existing large scale infrastructure outlays. For the foreseeable future universal optical fibre access will simply not happen.

Another approach to beating the monopolistic mechanics of the connectivity game is for large to medium size organisations to set up their own microwave networks. Indeed, many Australian universities and corporate users already do this, wholly bypassing the cost structures imposed by the providers. Campus to campus, or site to site, links provide high speed connectivity between LANs. Alas, the cost structures still make this uneconomical for the provision of access to smaller sites and individual users.

The recent emergence of cheap ISM band wireless networking technology provides the potential for cheap, user owned high speed data connections in metropolitan areas. Already we are seeing a number of smaller ISPs providing feeder services to client sites using 900 MHz, 2.45 GHz and 5 GHz band wireless LAN equipment. Top of the line remote repeaters can cover tens of kilometres and do not incur the licensing fees of conventional narrowband microwave channels. Can this technology support the remote access user? The answer is yes, but with the caveat that some changes will be required in how organisations and employees participate in the process. There is no reason why an employer or ISP cannot build up a network of point to point links between antennas on employees' or clients' rooftops, and produce in effect a cooperative network across the suburbs. The upfront investment for this could vary between $1k and $3k per site, but importantly 2 Mbit/s or better bandwidth is available with virtually no ongoing costs. Obviously geographical location and line of sight accessibility will be issues, but the technology is now available for bypassing the fixed connectivity providers.
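A rough feel for the distances involved can be had from the standard free space path loss formula. The sketch below is an illustration rather than a link design, and also hints at why clear line of sight and good antenna gain matter so much over tens of kilometres:

    #include <stdio.h>
    #include <math.h>

    /* Free space path loss in dB for a distance d in km and frequency f
       in MHz: FSPL = 32.44 + 20*log10(d) + 20*log10(f). Real links must
       also keep the Fresnel zone clear, hence the line of sight caveat. */
    static double fspl_db(double d_km, double f_mhz)
    {
        return 32.44 + 20.0 * log10(d_km) + 20.0 * log10(f_mhz);
    }

    int main(void)
    {
        double d;
        for (d = 1.0; d <= 32.0; d *= 2.0)
            printf("%5.1f km at 2450 MHz: %6.1f dB path loss\n",
                   d, fspl_db(d, 2450.0));
        return 0;
    }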
In summary, it is fair to say that the existing national infrastructure for the provision of remote connectivity is suffering a systemic failure born of its basic commercial and structural nature, one which is not going to be easily fixed by legislation or simple commercial measures. In the short to medium term it would appear that significant potential exists in the use of ISM band wireless LAN technology to bypass the cabled infrastructure and its prohibitive cost structures for fast data services. Whether our industry can adapt to such a radical change of paradigm remains to be seen.

$Revision: 1.1 $
Last Updated: Sun Apr 24 11:22:45 GMT 2005
Artwork and text © 2005 Carlo Kopp