Evolving in Realtime
Originally published June, 2001
by
Carlo Kopp
© 2001, 2005 Carlo Kopp
Realtime programming has traditionally been a bit of a black art, beloved by its practitioners and seen as a little esoteric by much of the wider code-cutting community. For decades, realtime was the province of programmers who coded in assembler, C or Jovial, and who lovingly hand-crafted their code around the hardware it ran on, to squeeze every clock cycle of speed out of the processor in use. Evolution being what it is, we are now seeing explosive growth in cheap computing power, and this in turn is driving some very fundamental shifts away from the traditional approach to coding realtime. Does this mean that the classical approach to realtime programming is about to become an artifact of computing history, of interest only to those who like to indulge in computational archaeology? Perhaps not, in that the nature of realtime means that many of the fundamental and traditional skills which make for a good realtime code cutter remain essential.

Realtime vs Non-realtime Programming

The fundamental distinction between realtime and non-realtime, or conventional, application programming can be summarised in two words: timing matters. A conventional application will be run on a multiuser timesharing host or a desktop platform. The primary aim of the developer will be to implement some specified function and, given the opportunity, make it run quickly. If a menu pops up half a second late, or even two seconds late, it might be annoying or even embarrassing, but it does not usually amount to a disaster. The opposite is true of realtime systems. Whether they are used in industrial process control, to control a machine or network appliance, a jet engine in an airliner, an air traffic control system, or a railway signalling system, a failure to respond within some design timing window can frequently spell complete disaster.
A great number of case studies exist in which poorly thought out realtime system design resulted in very costly consequences. How is this critical timing requirement addressed in a realtime system? Two key elements come into play:
In a most fundamental sense, a failure to properly address either of these aspects of a realtime design will cause problems, the severity of which depends wholly on the system in question. Would you like to travel on an airliner whose digital engine control system, from time to time, causes the engine compressor to overspeed? This underscores another important aspect of realtime systems - integrity and reliability are frequently non-negotiable design requirements. Having your air traffic control system crash due to traffic saturation on a Monday morning is not a great way to prevent mid-air collisions.

The issue of realtime software reliability is closely tied to the ability of the application to handle stringent timing requirements. While bugginess in code and problems with control flow and decision logic can cause fatal disasters in a realtime system, as they can in any other system, the failure of a perfectly robust realtime system to service an event in a timely manner can yield the same result.

The changes we are seeing at this time in realtime development techniques and software tools reflect not only maturing software technology and development technique, but also Moore's Law and the commodification of high performance processors. Without the raw computing power of a modern microprocessor, many of the more sophisticated software environments used for modern realtime systems could never run fast enough to do the job. The persistence of assembly language and C programming in the realtime world is a good indicator of this very fundamental reality. The three key trends we are seeing in the realtime game today can be broadly divided into hardware, operating systems and application environments.

Hardware for Realtime Systems

Hardware is the lowest layer in any realtime system, and one which a developer will ignore at his or her peril.
In designing a realtime system, hardware considerations must be addressed very early in the development cycle. Choosing an inappropriate platform early may yield serious problems much later in the development cycle, when the cost of changing the basic platform could become prohibitive in both project timelines and project budgets. Several considerations come into play:
Distilling these questions down yields two central issues. The first is whether the basic architecture of the platform is suitable for the application, in terms of its ability to support the number and type of I/O ports required for the system, and whether the gross sizing of the system CPU and I/O rates fits the application. The second is whether the platform has proper growth paths in compute performance, memory capacity, and I/O capacity and throughput. A prudent realtime designer treats the second issue religiously, since any blunders committed in sizing around the first issue inevitably flow into the second. Defending against uncertainties in early estimation of hardware needs has an additional benefit, since it provides robust insurance against specification creep flowing down from customers or marketeers. Inevitably, somebody at some time will want or expect more from the system than it was originally designed to handle. Having once been down the route of redesigning my hardware three times, each time to fit a bigger EPROM for the runtime firmware, I can recommend playing safe in this game from the perspective of prior experience.

In terms of available hardware platforms for realtime applications, the market offers a range which would have left a realtime developer of a decade ago bedazzled. Two basic trends are evident in the current market. For non-embedded realtime control applications, the platform of choice is very frequently an Intel based desktop or deskside PC. With higher levels of integration, reliability is now superb, and with commodity 1 GHz class Intel architecture CPUs cheaply available, this platform is hard to beat. PCI and older ISA/EISA format I/O boards, including discrete I/O, Analogue/Digital and Digital/Analogue converters, IEEE 1394 Firewire, Mil-Std-1553B/Arinc and various other interfaces, are widely available.
Therefore, a judicious choice of motherboard, chassis, power supply and I/O boards will allow a very capable system to be built up, if required. Moreover, ruggedised Intel PCs, originally targeted at the military computing market, have become more readily available and thus viable for environmentally benign mobile and industrial applications. Commodity Intel desktops, desksides and rack-mounts are likely to dominate those portions of the market where size, packaging and environment are secondary considerations.

Where the operating environment is more demanding, such as embedded industrial applications, military platforms, aviation/marine/vehicular applications and other environments where size and vibration/shock tolerance matter, the trusty 6U VMEbus and newer embedded PCI/Intel formats continue to occupy the lion's share of the market. The Motorola 68K family of CPUs in VME packaging remains one of the mainstays of this market, in a large part due to the ongoing through-life support and further development of legacy systems originally built around this CPU. The natural inheritor of this market niche is the Motorola/IBM PowerPC RISC processor family, now very widely available in VME packaging, including conduction cooled militarised variants for airborne and space launch applications. With the latest G4 Altivec short vector processing variants coming onto the market, embedded image processing, signal processing and graphics applications can be easily accommodated, a market which has traditionally been dominated by VME hosted TI TMS320 series signal processor chips. As with the PCI/EISA I/O adaptor market, there is a plethora of discrete I/O, Analogue/Digital and Digital/Analogue converters, IEEE 1394 Firewire, Mil-Std-1553B/Arinc and other adaptors on the market, as well as a wide range of VME chassis, including airborne Milspec rated types.
It is unlikely that this market will swing dramatically in the direction of either Intel motherboard or VMEbus systems in the foreseeable future, since both occupy discrete environmental niches. The evident trend is that high performance commodity CPUs will continue to occupy an increasing proportion of the realtime market in coming years, at the expense of traditional specialised and legacy architectures.

Operating Systems for Realtime Applications

The operating system has traditionally been one of the do-or-die pillars of realtime system design. A poorly chosen operating system is guaranteed to nobble a realtime system just as effectively as poorly chosen hardware or a poorly written application. Four key parameters drive the choice of a realtime operating system (RTOS):
Over the last two decades we have seen a move away from hardware vendor developed and supported realtime operating systems, such as the DEC (Compaq), Motorola and Intel in-house offerings, to third party realtime operating systems. Products such as Wind River VxWorks, LynxOS, QNX and others dominate this market, in a large part because they support Unix-like services, libraries, languages and tools, yet provide the essential context switching performance and priority pre-emptive process scheduling capabilities needed. This generation of RTOS will continue to occupy a large portion of the market, since it is mature and robust technology with a large base of established applications and ported platforms. The key issue is cost, since the vendors must continue to maintain and requalify a large number of hardware ports, drivers and custom tools. For a niche market this represents a large overhead, which is paid for by the end consumer of the realtime product.

The latest boom trend in this market is the adoption of Linux, and to a lesser degree BSD, as RTOS platforms. Traditionally Unix has not been the system of choice for realtime, since sluggish context switching performance and the lack of a rigorous priority pre-emptive scheduling scheme in most Unix variants yielded a platform which was far from optimal, even if vastly better than many of the proprietary OSs in the market. With an order of magnitude jump in CPU speeds over the last decade, context switching performance has improved to the point where it is more than viable for many realtime applications. The main issue for Unix derivatives therefore becomes the scheduling model, which is something that lends itself to manipulation. The rising stars of the RTOS market are now Linux derivatives, either run as tasks on top of a proprietary RTOS kernel, or, with a suitable package of kernel modifications, as a hybrid Unix-like RTOS.
The attraction of Linux derivatives is the very same open source phenomenon which has seen explosive growth in the number of Linux applications in the web arena. Royalty free runtime systems, a vast array of public domain applications and tools, and an increasing number of competent Linux systems and kernel programmers provide the low capital overhead environment needed by startup companies in the realtime/embedded industries. We are likely to see realtime Linux variants occupying an increasing share of the realtime market over time, especially for new applications. Established RTOS products will continue to dominate the defence and aviation markets, where maturity and extreme reliability can be easily afforded.

Programming Environments

The programming environment is the third vital pillar of the realtime technology base. While hardware and RTOS technologies have pursued an incremental evolutionary path, programming environments and development tools have seen explosive technological growth in recent years. As noted earlier, this is in part a consequence of more available computing power, but it also reflects a maturing Object Oriented (OO) technology base.

Until a decade ago, most realtime systems were meticulously hand-crafted, very frequently written in low level languages or assembly code to meet demanding performance requirements given limited computing resources on the runtime platform. The design of such systems has always been demanding in its own right. Most realtime systems involve the association of particular event handling algorithms with specific events. Very frequently these algorithms operate on some central status table(s) or other data structure(s) which maintain state information for the system. At a very fundamental level, therefore, such systems reflect a model in which objects are manipulated by operators.
Whether the object is an I/O device being controlled, or a data structure maintaining system state, the basic model fits well with the OO programming paradigm. What OO languages and programming environments provide, therefore, is a means of producing a rigorous and syntactically robust system design and implementation. If the system can be modelled and its behaviour defined exactly in a programming environment, the result is a design which can not only be modified and evolved cleanly over its life cycle, but also more readily ported to a new platform. This contrasts dramatically with the traditional realtime model, which frequently involved piecemeal modifications to spaghetti code.

The rigorous demands for reliability and provability provided further impetus for this trend. A system whose behaviour can be wholly modelled using exactly defined syntactic constructs is a system which can be much more easily tested in simulations. While the proof-of-the-pudding test of a realtime system can only occur in a live runtime environment, the costs of doing so can be prohibitive. Consider a system controlling a refinery, chemical plant, airport luggage system, or another extremely complex and large system with thousands or more state variables. Testbeds are simply not affordable for such systems, and the only practical approach to risk minimisation and design validation before going to a live platform is to perform either realtime or non-realtime simulations.

Booch's OO design method was rapidly adopted in the realtime programming community, in concert with OO languages such as C++ and Smalltalk. The much maligned Ada has also found its adherents in this market. Over the last decade C++ has become the dominant language and supporting environment for commercial realtime systems, and has also penetrated hitherto closed markets such as defence. Two technologies are now climbing to prominence in realtime environments.
The first is the UML paradigm, with realtime extensions derived from ROOM. Its attraction lies in its capacity to provide a robust modelling syntax which can be ported across platforms and gives the developer a high level of abstraction to work with, yet promotes modularity and reusability of code modules, with well defined interfaces between modules. For large and complex systems, UML provides a technology base offering an escape from the traditional heartache of implementation.

The second technology which has captured the imagination of the market is Java, which, due to its portability and OO syntactic features, has become a popular environment for small realtime systems. Run as interpreted portable byte-code, Java has achieved success where the cumbersome interpreted byte-code of Forth crashed and burned a decade ago.

Gazing into the crystal ball, it is a reasonable judgement that UML and its associated systematic design methodology will occupy an increasing share of the realtime market, especially for new large systems. Legacy systems in older toolsets and languages may or may not make the transition. In a broader context, the realtime market will see ongoing growth, as manufacturers embed increasing levels of intelligence into commodity products such as cars, domestic appliances, entertainment products, and personal communications and networking devices. The trend to commodity hardware and open source operating system derivatives will accelerate this growth by reducing the overheads of developing new designs. Whether we are looking at large and complex realtime systems, or cheap consumer commodity products, the longer term trend is towards more affordable development and more supportable designs over product life cycles.
The big question is whether abundant computing power, and toolsets capable of supporting high levels of complexity, will follow the established trend of pushing system design complexity to the limits of what the technology can reliably support. That is worth watching carefully over the coming years.
Last Updated: Sun Apr 24 11:22:45 GMT 2005
Artwork and text © 2005 Carlo Kopp