Satellite Communications - Part 2
A Technical Critique
|Originally published April, 1997|
|© 1997, 2005 Carlo Kopp|
In last issue's feature we surveyed the new generation of mobile communications satellites, and briefly reviewed some of the basic technical issues surrounding this new alternative in communications. In this follow-up feature, we will take a closer look at some of the technical issues and fundamental limitations of the existing schemes.
The best starting point for any such discussion is to articulate the basic properties which users of long distance computer communications expect of a transmission medium. Using these as a baseline, we will then look at a number of existing schemes and determine what their strengths and weaknesses are in relation to the ideal model.
What Would the Ideal Mobile Communications Scheme Offer?
The trivial answer to this question is infinite, zero-latency, error-free bandwidth at zero cost to consumers, with global all-weather coverage. As nice as this sounds, fundamental physics and information theory tell us that it cannot be, so the consumer will have to accept some significant compromises. Even so, producing and implementing a scheme which delivers good performance for computer users, as compared to voice users, is not an easy task. To better understand why, it helps to review the most important criteria:
If you wish to feed individual users with mobile laptops or portables, you immediately incur penalties in cost and complexity, particularly at the satellite end. Satellite schemes intended to support individual users will run into a major issue: the population density within each satellite's footprint. Consider an LEO scheme where each satellite has a circular footprint 300 kilometres in diameter.
While this satellite is over Tahiti, Bougainville, Baluchistan, the Kalahari or the Simpson desert, it will probably take a handful of connections from geologists, missionaries, the odd tourist and possibly a local government. Consider however the load upon such a satellite over Tokyo, New York, LA, London or Singapore? If we assume that 5% of the population will each want a 2 Mbit/s connection, then we immediately run up an aggregate bandwidth requirement of the order of hundreds of Gigabits/s for that satellite alone, together with the overhead of managing the state of hundreds of thousands of connections. If we assume that only 0.5% of the population wants a connection, the numbers are still very problematic. This is indeed the Achilles heel of most of the mobile satellite schemes proposed to date.
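The footprint arithmetic above is easily sketched in a few lines of code. The city population, take-up rates and per-user rate below are illustrative assumptions only, chosen to match the scenario in the text:

```python
# Back-of-envelope aggregate demand within one satellite footprint.
# Population, take-up rate and per-user rate are illustrative assumptions,
# not measured figures.

def aggregate_demand_bps(population, take_up, per_user_bps):
    """Total offered load in bits/s from users inside the footprint."""
    return population * take_up * per_user_bps

# A large First World city inside a single 300 km footprint:
city = 5_000_000
print(f"  5% take-up: {aggregate_demand_bps(city, 0.05,  2e6) / 1e9:.0f} Gbit/s")
print(f"0.5% take-up: {aggregate_demand_bps(city, 0.005, 2e6) / 1e9:.0f} Gbit/s")
```

Even at the pessimistic 0.5% take-up, a single satellite must switch tens of Gigabits/s; at 5% the demand reaches hundreds of Gigabits/s, and the largest metropolitan areas would push it higher still.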
They will indeed provide an unprecedented service for users out in Woop-woop, but are likely to suffer significant difficulties once confronted with the high population densities of the First World. Since most of the world's computer comms users live in the First World, which also produces most of the world's GDP, we must ask a fundamental question: why should shareholders of global communications schemes want to provide a low cost worldwide service, when most of the best revenue sources will be unable to extract a truly high quality, high speed service from the system, and are thus unlikely to subscribe in viable numbers?
Let us however assume that some clever engineering tricks are played and a satellite can be built to carry many Gigabits/s of traffic within its footprint. We are then confronted with the issue of carrying this traffic to adjacent satellite borne routers, and forwarding it to its destination. Should we adopt conventional shortest path routing algorithms, we could simply trace great big lines across the globe between the First World's major population centres, and expect that satellites along these lines will be extremely busy simply carrying traffic between their neighbours. Again, we are likely to confront similar problems in saturation of routers, and thus performance problems due to queuing delays. Suppose then, to avoid saturating satellites along the paths between Europe, the US and the Far East, we adopt a routing scheme which channels the traffic through less geometrically advantageous satellites which are lightly loaded with traffic, as they are passing over Third World countries.
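The congestion behaviour of shortest path routing can be illustrated with a toy constellation. The topology below is invented purely for illustration: a direct "great circle" chain of satellites between two busy regions, plus a geometrically longer detour over lightly loaded regions:

```python
# Sketch: under shortest-path routing, all traffic between two busy regions
# piles onto the links of the direct route, leaving the detour idle.
from collections import Counter, deque

# Satellites A..D form the direct route; X, Y, Z form a longer detour.
adjacency = {
    "A": ["B", "X"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "Z"], "X": ["A", "Y"], "Y": ["X", "Z"], "Z": ["Y", "D"],
}

def shortest_path(src, dst):
    """Unweighted shortest path by breadth-first search."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adjacency[node]:
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)

# Route 100 identical flows between the two busy endpoints and tally
# how many flows cross each inter-satellite link:
link_load = Counter()
for _ in range(100):
    path = shortest_path("A", "D")
    for u, v in zip(path, path[1:]):
        link_load[frozenset((u, v))] += 1
```

Every link on the direct route ends up carrying all 100 flows, while the detour links carry none; a real routing scheme must trade this concentration against the longer paths discussed below.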
We then begin to incur latency penalties through having to hop across more satellites, and traverse greater distances. In any event, we still end up with ever increasing traffic density as we approach the First World. This raises serious questions about the technical and commercial viability of a number of mobile satellite schemes, particularly in relation to the carriage of high speed computer traffic. Indeed, the only viable near term consumer of high bandwidth digital satellite comms will be the military in the First World. The US DoD Milstar constellation, with four cross-linked GEO satellites using 60 GHz crosslinks, provides a T1 service for a limited number of channels, and is limited to 2,400 bits/s for its standard high volume, low data rate service. The Milstar I/II is both large and expensive to build and deploy.
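The latency cost of the detour can also be estimated. The orbital altitude (780 km, typical of an Iridium-class LEO constellation) and the inter-satellite spacing used below are illustrative assumptions, as is the nominal per-node processing delay:

```python
# Rough one-way latency comparison: the detour adds hops and path length.
# Altitude, spacing and processing delay are illustrative assumptions.
C_M_PER_S = 299_792_458  # speed of light in vacuum

def one_way_latency_ms(hops, inter_sat_km, altitude_km, proc_ms_per_hop=1.0):
    """Propagation delay plus a nominal per-node processing delay."""
    path_km = 2 * altitude_km + hops * inter_sat_km  # up, across, down
    return path_km * 1000 / C_M_PER_S * 1000 + proc_ms_per_hop * (hops + 1)

direct = one_way_latency_ms(hops=5, inter_sat_km=4000, altitude_km=780)
detour = one_way_latency_ms(hops=9, inter_sat_km=4000, altitude_km=780)
print(f"direct: {direct:.0f} ms, detour: {detour:.0f} ms")
```

Under these assumptions the detour roughly doubles the one-way delay, before any queuing delay at busy nodes is even counted.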
The conclusion we can draw from a basic analysis is that the current generation of proposed mobile satellite communication schemes suffers significant technical limitations in the carriage of computer traffic, which will in turn reduce their utility in the highest density, and thus best revenue generating, parts of the world. They do however provide useful, if limited, connectivity to those parts of the world's geography which are not provided with viable terrestrial links. Whether provision of service to such areas will generate sufficient revenue to fund a follow-on generation of satellites remains to be seen.
|Last Updated: Sun Apr 24 11:22:45 GMT 2005|
|Artwork and text © 2005 Carlo Kopp|