This chapter has been excerpted from the book "Flow-Based Programming: A New Approach to Application Development" (van Nostrand Reinhold, 1994), by J.Paul Morrison.
Imagine that you have a large and complex application running in your shop, and you discover that you need what looks like fairly complex changes made to it in a hurry. You consult your programmers and they tell you that the changes will probably take several months, but they will take a look. A meeting is called of all the people involved - not just programmers and analysts, but users and operations personnel as well. The essential logic of the program is put up on the wall, and the program designers walk through the program structure with the group. During the ensuing discussion, they realize that two new modules have to be written and some other ones have to change places. Total time to make the changes - a week!
Quite a few parts of this scenario sound unlikely, don't they? Users, operations people and programmers all talking the same language - unthinkable! But it actually did happen just the way I described. The factor that made this experience so different from most programmers' everyday experience is the truly revolutionary technology I will be describing in this book.
While this technology has been in use for productive work for the last 20 years, it has also been waiting in the wings, so to speak, for its right time to come on stage. Perhaps because there is a "paradigm shift" involved, to use Kuhn's phrase (Kuhn 1970), it has not been widely known up to now, but I believe now is the time to open it up to a wider public.
This technology provides a consistent application view from high level design all the way down to implementation. It requires applications to be built using reusable "black boxes" and encourages developers to construct such black boxes, which can then improve the productivity of other developers. It forces developers to focus on data and its transformations, rather than starting with procedural code. It encourages rapid prototyping and results in more reliable, more maintainable systems. It is compatible with distributed systems, and appears to be on a convergent path with Object-Oriented Programming. In this book, I will describe the concepts underlying this technology and give examples of experience gained using it. Does it sound too good to be true? You be the judge! In the following pages, we will be describing what I believe is a genuine revolution in the process of creating application programs to support the data processing requirements of companies around the world.
Today, in the early 90's, the bulk of all business programming is done using techniques which have not changed much in 30 years. Most of it is done using what are often called Higher-Level Languages (HLLs), by far the most popular of which is COBOL, the COmmon Business-Oriented Language. A distant second is probably PL/I, not as widespread in terms of number of customers, but in use at some of the biggest organizations in North America. C appears to be gaining steadily in popularity, especially as it is often the first programming language students encounter at university. It appears to be especially convenient for writing system software, due to its powerful and concise pointer manipulation facilities, but by the same token, it may be less well adapted for writing business applications. Some languages are used by particular sectors of the programming community or for certain specialized purposes. There are also the "4th generation languages", which are higher level than the HLLs but usually more specialized.
There are plenty of design methodologies and front-end tools to do them with, but most of these do not really affect the mechanics of creating programs. After the design has been done, the programmer still has the job of converting his or her elegant design into strings of commands in the chosen programming language. Although generators have had some success, by and large most of today's programmers painstakingly create their programs by hand, like skilled artisans hand-crafting individual pieces of cabinetry. One of the "grand old men" of the computing fraternity, Nat Rochester, said a number of years ago that programming probably absorbs more creativity than any other professional pursuit, and most of it is invisible to the outside world. Things really haven't changed all that much since those days. There are also what might be called procedural or organizational approaches to improving the application development process, e.g. structured walk-throughs, the buddy system, chief programmer teams, third-party testing. My experience is that the approaches of this type which have been successful will still be valid whatever tool we eventually use for producing applications. However, if all you do is take the existing hand-crafting technology and add a massive bureaucracy to cross-check every chisel stroke and hammer blow, I believe you will only get minor improvements in your process, at a considerable cost in productivity and morale. What is needed instead is a fundamental change in the way we do things, after which we will be able to see which procedures and organizations fit naturally into the new world.
It is a truism that most businesses in the Western world would stop functioning if it were not for the efforts of tens of thousands, if not hundreds of thousands, of application programmers. These people are practising a craft which most of the population does not understand, and would not be willing to do if it did. The archetypal programmer is viewed as a brilliant but impractical individual who has a better rapport with computers than with people, slaving long hours at a terminal which is at the very least damaging to his or her eyesight. In fact, of course, the programmer is the key interface between his clients, who speak the language of business, and the computer and its systems, which speak the language of electrons. The more effectively and reliably the programmer can bridge between these worlds, the better will be the applications which he or she builds, but this requires an unusual combination of talents. If you have any of these paragons in your organization, guard them like the treasures they are! In what follows, one of the recurring themes will be that the problems with today's programming technology arise almost entirely from the continuing mismatch between the problem area the programmer works in and the tools he or she has to work with. Only if we can narrow the gap between the world of users and that of application developers, can we produce applications which fit the needs of users and do it in a timely and cost-effective manner.
The significant fact I have come to realize over the last twenty years is that application programming in its present form really is hard and in fact has not progressed all that much since the days of the first computers. This lack of progress is certainly not due to any shortage of advocates of this or that shiny new tool, but very few of these wonder products have delivered what was promised. When I started in the business in 1959, we already had higher-level languages, interpreters and subroutine calls - these are still the basic tools of today's programming professionals. The kind of programming most of us do has its roots in the procedural programming that arose during the 40's and 50's: this new invention called a computer filled a growing need for repetitive, mainly mathematical calculations, such as tide tables, ballistics and census calculations. In these areas, computers were wildly successful. However, even then, some of the experts in this new world of computing were starting to question whether procedural application programming was really appropriate for building business applications. The combination of more and more complex systems, the sheer difficulty of the medium programmers work in and the need for businesses to reduce overhead is resulting in more and more pressure on today's programming professionals.
In addition, as programmers build new systems, these add to the amount of resources being expended on maintaining them, to the point where the ability of many companies to develop new applications is being seriously impacted by the burden of maintaining old systems. This in turn adversely affects their ability to compete in the new competitive global market-place. Many writers have talked about the programming backlog - the backlog of programming work that DP departments are planning to do but can't get to because of lack of resources. I have also heard people use the phrase "hidden backlog" - this is programming work that users would like to get done but know there's no point in even talking to their DP department about, so it tends not to show up in the statistics! I think this is at least partly why non-DP departments have been buying up PCs in recent years - they feel that having their own machines will make them independent of the DP department, but of course this only means they face the same old programming problems on their own machines!
At one time, it was predicted that more telephone switchboard operators would be needed than the total number of available young ladies. Of course, this problem was solved by the development of automatic telephone switching systems. Similarly, many people believe the present situation in computing can only be solved by a quantum jump in technology, and of course each new software technology claims to be the long-awaited solution. I and a number of other people believe that the concepts described in what follows really do have the potential to solve this problem, and I hope that, as you read this book, you will come to agree with us. However, they represent a true paradigm change which fundamentally changes the way we look at the programming process. Like many important discoveries, this new paradigm is basically quite simple, but far-reaching in its implications.
Mention of a new paradigm makes one immediately think of another new paradigm which is growing steadily in popularity, namely Object-Oriented Programming (usually abbreviated to OOP). What I am about to describe is not OOP, but bears certain similarities to it, and especially to the more advanced OOP concepts, specifically the concept of "active objects". In the long run, these two paradigms appear to be on a converging path, and, as I will be describing in a later chapter, I believe that it may well be possible to fuse the two sets of concepts to achieve the best of both worlds. In most of this book, however, I will be presenting our concepts and experience as they evolved historically, using our own terminology.
After a few years in the computer business, I found myself puzzling over why application programming should be so hard. Its complexity is certainly not the complexity of complex algorithms or logic. From an arithmetic point of view, one seldom encounters a multiplication or division in business programming, let alone anything as arcane as a square root. The vast majority of business applications do such things as transforming data from one format to another, accumulating totals or looking up information in one file and incorporating it into another file or a report. Given what seems like a fairly simple problem space, I wondered why application development should be so arduous and why, once built, a program should be so hard to maintain. Over the last few years, I and a number of other workers in the field have come to believe that the main cause of the problem is in fact the same thing that powered the computer revolution itself, namely the von Neumann computer model.
This model is the traditional one that has been so productive over the last few decades, designed around a single instruction counter which walks sequentially through strings of codes deciding what to do at each step. These codes can be treated both as data (e.g. by compilers) and as commands. This design is usually, but not necessarily, combined with a uniform array of memory cells from which the instructions take data, and into which they return it. As described in a recent article by Valiant (1990), the power of this model derives from the fact that it has acted as a bridge between the twin "diverse and chaotic" worlds (as Valiant calls them) of hardware and software, while allowing them to evolve separately. But, by the same token, its very success convinced its practitioners that the problems we are facing cannot possibly be due to any fundamental problems with this set of concepts. Programmers are not bright enough, they don't have good enough tools, they don't have enough mathematical education or they don't work hard enough - I'm sure you've run into all of these explanations. I don't believe any of these are valid - I believe there is a far more fundamental problem - namely that, at a basic level, the medium is simply inappropriate for the task at hand. In fact, when you look at them objectively, the symptoms our business is experiencing now are quite characteristic of what one would expect when people try to do a complicated job using the wrong tools. Take a second and really try to imagine building a functioning automobile out of clay! It's highly malleable when wet, so you should be able to make anything, but after it has been fired it is very hard but very brittle! In fact that's quite a good analogy for the "feel" of most of our applications today!
The time is now ripe for a new paradigm to replace the von Neumann model as the bridging model between hardware and software. The one we will be describing is similar to the one Valiant proposes (I'll talk about his in more detail in Chapter 27) and in fact seems to be one of a family of related concepts which have appeared over the last few years in the literature. The common concept underlying much of this work is basically that, to solve these problems, we have to relax the tight sequential programming style characteristic of the von Neumann machine, and structure programs as collections of communicating, asynchronous processes. If you look at applications larger than a single program or go down inside the machine, you will find many processes going on in parallel. It is only within a single program (job step or transaction) that you still find strict traditional, sequential logic. We have tended to believe that the tight control of execution sequence imposed by this approach is the only way to get predictable code, and that it is therefore necessary for reliable systems. It turns out that machines (and people) work more efficiently if you only retain the constraints that matter and relax the ones that don't, and you can do this without any loss of reliability. The intent of this book is to try to describe a body of experience which has been built up using a particular set of implementations of this concept over the years, so I will not go into more detail at this point. In this chapter, we will be talking more about the history of this concept than about specific implementations or experience gained using them.
Another factor which makes me think it is timely for this technology to be made public is that we are facing a growing crisis in application development. At the same time as new requirements are appearing, the underlying technology is changing faster and faster. The set of concepts I will be describing seems to fit well with current directions for both software and hardware. Not only can it support in a natural manner the requirements of distributed, heterogeneous applications, but it also seems an appropriate programming technology for the new multiprocessor machines being worked on by universities and leading-edge computer manufacturers all over the world. As the late Wayne Stevens, the noted writer on the subject of application design methodologies, has pointed out in several of his articles (e.g. Stevens 1985), the paradigm we will be describing provides a consistent, natural way to view applications from the workings of whole companies all the way down to the smallest component. Since you can describe manual applications with data-flow diagrams, the connection between manual and system procedures can be shown seamlessly.
In what follows, I will be using the term "Flow-Based Programming" (or FBP for short) to describe this new set of concepts and the software needed to support it. We have in the past used the term "Data Flow" as it conveys a number of the more important aspects of this technology, but there is a sizable body of published work on what is called "dataflow architectures" in computer design and their associated software (for instance the very exciting work coming out of MIT), so the term dataflow may cause confusion in some academic circles. It was also pointed out to me a few years ago that, when control flow is needed explicitly, FBP can provide it by the use of such mechanisms as triggers, so the term Flow-Based Programming avoids the connotation that we cannot do control flow. This is not to say that the two types of data flow do not have many concepts in common - dataflow computer architectures arise also from the perception that the von Neumann machine design that has been so successful in the past must be generalized if we are to move forward, whether we are trying to perform truly huge amounts of computation such as weather calculations or simply produce applications which are easier to build and maintain.
One significant difference between the two schools, at least at this time, is that most of the other data flow work has been mathematically oriented, so it tends to work with numbers and arrays of numbers. Although my early data flow work during the late 60s also involved simple numeric values travelling through a network of function blocks, my experience with simulation systems led me to the realization that it would be more productive in business applications to have the things which flow be structured objects, which I called "entities". This name reflected the idea that these structured objects tended to represent entities in the outside world. (In our later work, we realized that the name "entity" might cause confusion with the idea of entities in data modelling, although there are points of resemblance, so we decided to use a different word). Such a system is also, not coincidentally, a natural design for simulating applications, so the distinction between applications and their simulations becomes much less significant than in conventional programming. You can think of an entity as being like a record in storage, but active (in that it triggers events), rather than passive (just being read or written). Entities flow through a network of processes, like cars in a city, or boats in a river system. They differ from the mathematical tokens of dataflow computers or my early work chiefly in that they have structure: each entity represents an object with attributes, for example an employee will have attributes such as salary, date of hire, manager, etc. As you read this book, it should become clear why there has to be at least one layer of the application where the entities move as individual units, although it may very well be possible to integrate the various dataflow approaches at lower levels.
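To make this distinction concrete, here is a minimal sketch in Python. All the names here are purely illustrative - neither AMPS nor DFDM used this notation - but it shows the essential point: what flows is a structured object with named attributes, and a process transforms it as it passes by, rather than merely reading or writing it.

```python
# Illustrative sketch only: a structured "entity" (later called an
# "information packet") with attributes, transformed by a process step.
from dataclasses import dataclass

@dataclass
class Entity:
    salary: float
    date_of_hire: str
    manager: str

def raise_salary(e: Entity, pct: float) -> Entity:
    """A process step: the entity is transformed as it flows past."""
    e.salary *= 1 + pct / 100
    return e

e = raise_salary(Entity(50000.0, "1990-03-01", "Smith"), 10)
```

Contrast this with the bare numeric tokens of the dataflow machines: here the unit travelling through the network carries its whole structure with it.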
At this point I am going to have to describe FBP briefly, to give the reader something to visualize, but first a caveat: the brief description that follows will probably not be enough to let you picture what FBP is and how it does it. If we don't do this at this point, however, experience shows that readers find it hard to relate what I am describing to their own knowledge. The reverse risk is that they may jump to conclusions which may prevent them from seeing what is truly new about the concepts I will be describing later. I call this the "It's just..." syndrome.
In conventional programming, when you sit down to write a program, you write code down the page - a linear string of statements describing the series of actions you want the computer to execute. Since we are of course all writing structured code now, we start with a main line containing mostly subroutine calls, which can then be given "meaning" later by coding up the named subroutines. A number of people have speculated about the possibility of instead building a program by just plugging prewritten pieces of logic together. This has sometimes been called 'Legoland' programming. Even though that is essentially what we do when we use utilities, there has always been some doubt whether this approach has the power to construct large scale applications, and, if it has, whether such applications would perform. I now have the pleasure to announce that the answer is 'Yes' to both these questions!
The "glue" that FBP uses to connect the pieces together is an example of what Yale's Gelernter and Carriero (1992) have called a "coordination language". I feel the distinction between coordination languages and procedural languages is a useful one, and helps to clarify what is different about FBP. Conventional programming languages instruct the machine what logic to execute; coordination languages tell the machine how to coordinate multiple modules written in one or several programming languages. There is quite a bit of published material on various approaches to coordination, but much of that work involves the use of special-purpose languages, which reduces the applicability of these concepts to traditional languages and environments. Along with Gelernter and Carriero, I feel a better approach is to have a language-independent coordination notation, which can coordinate modules written in a variety of different procedural languages. The individual modules have to have a common Application Programming Interface to let them talk to the coordination software, but this can be relatively simple.
Coordination and modularity are two sides of the same coin, and several years ago Nate Edwards of IBM coined the term "configurable modularity" to denote an ability to reuse independent components just by changing their interconnections, which in his view characterizes all successful reuse systems, and indeed all systems which can be described as "engineered". Although I am not sure when Nate first brought the two words "configurable" and "modularity" together, the report on a planning session in Palo Alto in 1976 uses the term, and Nate's 1977 paper (Edwards 1977) contains both the terms "configurable architecture" and "controlled modularity". While Nate Edwards' work is fairly non-technical and pragmatic, his background is mainly in hardware, rather than software, which may be why his work has not received the attention it deserves. One of the important characteristics of a system exhibiting configurable modularity, such as most modern hardware or Flow-Based Programming, is that you can build systems out of "black box" reusable modules, much like the chips which are used to build logic in hardware. You also, of course, have to have something to connect them together with, but they do not have to be modified in any way to make this happen. Of course, this is characteristic of almost all the things we attach to each other in real life - in fact, almost everywhere except in conventional programming. In FBP, these black boxes are the basic building blocks that a developer uses to build an application. New black boxes can be written as needed, but a developer tries to use what is available first, before creating new components. In FBP, the emphasis shifts from building everything new to connecting preexisting pieces and only building new when building a new component is cost-justified. Nate Edwards played a key role in getting the hardware people to follow this same principle - and now of course, like all great discoveries, it seems that we have always known this! 
We have to help software developers to move through the same paradigm shift. If you look at the literature of programming from this standpoint, you will be amazed at how few writers write from the basis of reuse - in fact the very term seems to suggest an element of surprise, as if reuse were a fortuitous occurrence that happens seldom and usually by accident! In real life, we use a knife or a fork - we don't reuse it!
We will be describing similarities between FBP and other similar pieces of software in later chapters, but perhaps it would be useful at this point to use DOS pipes to draw a simple analogy. If you have used DOS you will know that you can take separate programs and combine them using a vertical bar (|), e.g.
A | B
This is a very simple form of what I have been calling coordination of separate programs. It tells the system that you want to feed the output of A into the input of B, but neither A nor B have to be modified to make this happen. A and B have to have connection points ("plugs" and "sockets") which the system can use, and of course there has to be some software which understands the vertical bar notation and knows what to do with it. FBP broadens this concept in a number of directions which vastly increase its power. It turns out that this generalization results in a very different approach to building applications, which results in systems which are both more reliable and more maintainable. In the following pages I hope to be able to prove this to your satisfaction!
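To make the analogy concrete, here is a minimal sketch in Python of the essence of A | B. The names (producer, upcase, the queue-based connection) are purely illustrative and are not the API of any FBP implementation; the point is that neither component is modified - the connection between them is supplied from outside.

```python
# Illustrative sketch: two unmodified components, connected from
# outside by a bounded connection, running as asynchronous processes.
import queue
import threading

END = object()  # sentinel marking end of the data stream

def producer(out):
    """Component A: writes a stream of items to its output."""
    for word in ["flow", "based", "programming"]:
        out.put(word)
    out.put(END)

def upcase(inp, results):
    """Component B: reads items from its input and transforms them."""
    while (item := inp.get()) is not END:
        results.append(item.upper())

conn = queue.Queue(maxsize=2)   # bounded, like an FBP connection
results = []
a = threading.Thread(target=producer, args=(conn,))
b = threading.Thread(target=upcase, args=(conn, results))
a.start(); b.start(); a.join(); b.join()
# results == ["FLOW", "BASED", "PROGRAMMING"]
```

Note that the connection is bounded: A cannot run arbitrarily far ahead of B, which is one of the constraints that does matter, and is retained.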
The FBP systems which have been built over the last 20 years have therefore basically all had the following components:
- a number of precoded, pretested functions, provided in object code form, not source code form ("black boxes") - this set is open-ended and (hopefully) constantly growing
- a "Driver" - a piece of software which coordinates the different independent modules, and implements the API (Application Programming Interface) which is used by the components to communicate with each other and with the Driver
- a notation for specifying how the components are connected together into one or more networks (an FBP application designer starts with pictures, and then converts them into specifications to be executed by the Driver); this notation can be put into a file for execution by the Driver software. In the most successful implementation of FBP so far (DFDM - described in the next section of this chapter), the network could either be compiled and link edited to produce an executable program, or it could be interpreted directly (with of course greater initialization overhead). In the interpreted mode, the components are loaded in dynamically, so you can make changes and see the results many times in a few minutes. As we said before, people find this mode extremely productive. Later, when debugging is finished, you can convert the interpretable form to the compilable form to provide better performance for your production version.
- procedures to enable you to convert, compile and package individual modules and partial networks
- documentation (reference and tutorial) for all of the above
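To give a feel for what such a network notation might look like, here is a purely hypothetical flavor of it - this is not the actual syntax of DFDM or THREADS - together with the beginnings of a Driver that reads it. Each line names a connection from one process's output port to another process's input port.

```python
# Hypothetical network notation (not DFDM's or THREADS' real syntax):
# "<process> <output port> -> <input port> <process>" per line.
network = """
Reader OUT -> IN Transform
Transform OUT -> IN Writer
"""

def parse(spec):
    """A Driver's first job: turn the notation into a connection list."""
    conns = []
    for line in spec.strip().splitlines():
        src, out_port, _, in_port, dst = line.split()
        conns.append(((src, out_port), (dst, in_port)))
    return conns

print(parse(network))
# → [(('Reader', 'OUT'), ('Transform', 'IN')), (('Transform', 'OUT'), ('Writer', 'IN'))]
```

A real Driver would go on to load the named components and animate the connections; the point here is only that the network is data, separate from the components it connects, which is what makes both the interpreted and compiled modes possible.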
In the above list I have not included education - but of course this is probably the most important item of all. To get the user started, there is a need for formal education - this may only take a few days or weeks, and I hope that this book will get the reader started on understanding many of the basic concepts. However, education also includes the practical experience that comes from working with many different applications, over a number of months or years. In this area especially, we have found that FBP feels very different from conventional programming. Unlike most other professions, in programming we tend to underestimate the value of experience, which may in fact be due to the nature of the present-day programming medium. In other professions we do not recommend giving a new practitioner a pile of books, and then telling him or her to go out and do brain surgery, build a bridge, mine gold or sail across the Atlantic. Instead it is expected that there will be a series of progressive steps from student or apprentice to master. Application development using FBP feels much more like an engineering-style discipline: we are mostly assembling structures out of preexisting components with well-defined specifications, rather than building things from scratch using basic raw materials. In such a medium, experience is key: it takes time to learn what components are available, how they fit together and what trade-offs can be made. However, unlike bridge-builders, application developers using FBP can also get simple applications working very fast, so they can have the satisfaction of seeing quite simple programs do non-trivial things very early. Education in FBP is a hands-on affair, and it is really a pleasure seeing people's reactions when they get something working without having to write a line of code!
Now that graphics hardware and software have become available at reasonable cost and performance, it seems very desirable to have graphical front-ends for our FBP systems. Since FBP is a highly visual notation, we believe that a graphical front-end will make it even more usable. Some prototype work has already been done along these lines and seems to bear this idea out. Many potential users of FBP systems will already have one or more graphical design tools, and, as we shall see, there is an especially good match between Structured Analysis and FBP, so that it seems feasible, and desirable, to base FBP graphical tools on existing graphical tools for doing Structured Analysis, with the appropriate information added for creating running FBP programs.
Now I feel it would be useful to give you a bit of historical background on FBP: the first implementation of this concept was built by myself in 1969 and 1970 in Montreal, Quebec. This proved very productive - so much so that it was taken into a major Canadian company, where it was used for all the batch programming of a major on-line system. This system was called the Advanced Modular Processing System (AMPS). This system and the experience gained from it are described in a fair amount of detail in an article I wrote a few years later for the IBM Systems Journal (Morrison 1978). I am told this was the first article ever published in the Systems Journal by an author from what was then called the Americas/Far East area of IBM (comprising Canada, South America and the Far East).
Although the concepts are not well known, they have actually been in the public domain for many years. The way this happened is as follows: in late 1970 or early '71 I approached IBM Canada's Intellectual Property department to see if we could take out a patent on the basic idea. Their recommendation, which I feel was prescient, was that this concept seemed to them more like a law of nature, which is not patentable. They did recommend, however, that I write up a Technical Disclosure Bulletin (TDB), which was duly published and distributed to patent offices world-wide (Morrison 1971). A TDB is a sort of inverse patent - while a patent protects the owner but requires him or her to try to predict all possible variations on a concept, a TDB puts a concept into the public domain, and thereby protects the registering body from being restricted or impeded in the future in any use they may wish to make of the concept. In the case of a TDB, it places the onus on someone else who might be trying to patent something based on your concept to prove that their variation was not obvious to someone "skilled in the art".
Towards the end of the 80's, Wayne Stevens and I jointly developed a new version of this software, called the Data Flow Development Manager (DFDM). It is described in Appendix A of Wayne Stevens' latest book (Stevens 1991) (which, by the way, contains a lot of good material on application design techniques in general). What I usually refer to in what follows as "processes" were called "coroutines" in DFDM, after Conway (1963), who described an early form of this concept in a paper back in the 60's, and foresaw even then some of its potential. "Coroutine" is formed from the word "routine" together with the Latin prefix meaning "with", as compared with "subroutine", which is formed with the prefix meaning "under". (Think of "cooperative" vs. "subordinate").
DFDM was used for a number of projects (between 40 and 50) of various sizes within IBM Canada. A few years later, Kenji Terao got a project started within IBM Japan to support us in developing an improved version for the Japanese market. This version is, at the time of writing, the only dialect of FBP which has been made available in the market-place, and I believe enormous credit is due to Kenji and all the dedicated and forward-looking people in IBM Japan who helped to make this happen. While this version of DFDM was in many ways more robust or "industrial strength" than the one which we had been using within IBM Canada, much of the experience which I will be describing in the following pages is based on what we learned using the IBM Canada internal version of DFDM, or on the still earlier AMPS system. Perhaps someone will write a sequel to this book describing the Japanese experience with DFDM...
Last, but I hope not least, there is a PC-based system written in C, which attempts to embody many of the best ideas of its ancestors. [Reference to HOMEDATA in book removed from this web page, as they are (to the best of my knowledge) no longer involved in this effort.] It [THREADS] has been available since the summer of 1993, running on Intel-based machines, and has been tested on 286, 386 and 486-based machines. Since it is written in C, we are hoping that it will also be possible to port it later to other environments, although there is a small amount of environment-dependent code which will have to be modified by hand. This software is called THREADS - THREads-based Application Development System (I love self-referential names!) [see THREADS]. Like DFDM, it has both interpreted and compiled versions, so applications can be developed iteratively, and then compiled to produce a single EXE file, which eliminates the network decoding phase.
The terminology used in this book is not exactly the same as that used by AMPS and DFDM, as a number of these terms turned out to cause confusion. For instance, the data chunks that travel between the asynchronous processes were called "entities" in AMPS and DFDM, but, as I said above, this caused confusion for people experienced in data modelling. They do seem to correspond with the "entities" of data modelling, but "entities" have other connotations which could be misleading. "Objects" would present other problems, and we were not comfortable with the idea of creating totally new words (although some writers have used them effectively). The "tuples" of Carriero and Gelernter's Linda (1989) are very close, but this name also presents a slightly different image from the FBP concept. We therefore decided to use the rather neutral term "information packet" (or "IP" for short) for this concept. This term was coined as part of work that we did following the development of DFDM, in which we also tied FBP concepts in with other work appearing in the literature or being developed in other parts of IBM. Some of the extensions to the basic AMPS and DFDM substructure that I will be talking about later were also articulated during this period. When I need to refer to ideas drawn from this work I will use the name FPE (for Flow-Based Programming Environment), although that is not the acronym used by that project. THREADS follows this revised terminology, and includes a number of ideas from FPE.
As I stated in the prologue, for most of my 33 years in the computer business I have been almost exclusively involved with business applications. Although business applications are often more complex than scientific applications, the academic community generally has not shown much interest in this area up until now. This is a "Catch-22" situation: business would benefit from the work done in academia, yet academia (with some noteworthy exceptions) tends not to regard business programming as an interesting area to work in. My hope is that FBP can act as a bridge between these two worlds, and in later chapters I will be attempting to tie FBP to related theoretical work which working programmers probably wouldn't normally encounter. My reading in the field suggests that FBP has sound theoretical foundations, yet it performs well enough that you can run a company on it, and it is accessible to trainee programmers (sometimes more easily than to experienced ones!). AMPS has been in use for 20 years, supporting one of the biggest companies in North America, and as recently as this year (1992), one of their senior people told me, "AMPS has served us well, and we expect it will continue to do so for a long time to come." Business systems have to evolve as market requirements change, so clearly their system has been able to grow and adapt over the years as the need arose - this is a living system, not some outdated curiosity which has become obsolete with the advance of technology.
And now I would like to conclude this chapter with an unsolicited testimonial from a DFDM user, which we received a few years ago:
"I have a requirement to merge 23 ... reports into one .... As all reports are of different length and block size this is more difficult in a conventional PLI environment. It would have required 1 day of work to write the program and 1 day to test it. Such a program would use repetitive code. While drinking coffee 1 morning I wrote a DFDM network to do this. It was complete before the coffee went cold [my italics]. Due to the length of time from training to programming it took 1 day to compile the code. Had it not been for the learning curve it could have been done in 5 minutes. During testing a small error was found which took 10 minutes to correct. As 3 off-the-shelf coroutines were used, PLI was not required. 2 co-routines were used once, and 1 was used 23 times. Had it not been for DFDM, I would have told the user that his requirement was not cost justified. It took more time to write this note than the DFDM network."
Notice that in his note, Rej (short for Réjean), who, by the way, is a visually impaired application developer with many years of experience in business applications, mentioned all the points that were significant to him as a developer - he zeroed right in on the amount of reuse he was getting, because functions he could get right off the shelf were ones he didn't have to write, test and eventually maintain! In DFDM, "coroutines" are the basic building blocks, which programmers can hook together to build applications. They are either already available ("on the shelf"), or the programmer can write new ones, in which case he or she will naturally try to reuse them as often as possible - to get the most bang for the proverbial buck. Although it is not very hard to write new PL/I coroutines, the majority of application developers don't want to write new code - they just want to get their applications working for the client, preferably using as little programming effort as will suffice to get a quality job done. Of course there are always programmers who love the process of programming and, as we shall see in the following pages, there is an important role for them also in this new world which is evolving.
Rej's note was especially satisfying to us because he uses special equipment which converts whatever is on his screen into spoken words. Since FBP has always seemed to me a highly visual technique, I had worried about whether visually impaired programmers would have any trouble using it, and it was very reassuring to find that Rej was able to make such productive use of this technology. In later discussions with him, he has stressed the need to keep application structures simple. In FBP, you can use hierarchic decomposition to create multiple layers, each containing a simple structure, rather than being required to create a single, flat, highly complex structure. In fact, structures which are so complex that he would have trouble with them are difficult for everyone. He also points out that tools which he would find useful, such as something which can turn network diagrams into lists of connections, would also significantly assist normally sighted people as they work with these structures.
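Hierarchic decomposition can be sketched in the same spirit (again a hypothetical Python sketch, not actual FBP code): a composite step is simply a named wiring of a small internal network, so each layer the developer looks at stays simple, even when the overall structure is deep:

```python
# Hypothetical sketch of hierarchic decomposition, not actual FBP code.
# Two simple processes are packaged behind one name, so the outer layer
# of the structure shows a single box while the detail lives one level down.

def strip_blanks(stream):
    """Discard empty or whitespace-only records."""
    for record in stream:
        if record.strip():
            yield record

def normalize_spacing(stream):
    """Collapse runs of whitespace inside each record."""
    for record in stream:
        yield " ".join(record.split())

def clean_text(stream):
    """Composite step: one box at the outer layer, a two-process network inside."""
    return normalize_spacing(strip_blanks(stream))

raw = ["  hello   world ", "", "flow  based"]
print(list(clean_text(iter(raw))))   # prints ['hello world', 'flow based']
```

A caller of clean_text never needs to see the internal network, which is the flat-versus-layered distinction Rej was making: each level can be kept simple enough to hold in your head (or hear read aloud) at once.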
Rej's point about the advantages of keeping the structures simple is also borne out by the fact that another application of DFDM resulted in a structure of about 200 processes, but the programmer involved (another very bright individual) never drew a single picture! He built it up gradually using hierarchical decomposition, and it has since had one of the lowest error rates of any application in the shop. I hope that, as you read on, you will be able to figure out some of the reasons for this high level of reliability for yourself.
In what follows, I will be describing the main features of FBP and what we have learned from developing and using its various implementations. Information on some of these has appeared in a number of places, and I feel it is time to pull together the results of 20 years of experience with these concepts, so that the ideas can be seen in context. A vast number of papers have appeared over the years, written by different writers in different fields of computing, which I believe are all facets of a single diamond; by bringing these connected ideas together in one place, I hope to give the reader a glimpse of the total picture. Perhaps there is someone out there who is waiting for these ideas, and will be inspired to carry them further, either in research or in the marketplace!