Flow-Based Programming - Chap. XXIX
Endings and Beginnings

This chapter has been excerpted from the book "Flow-Based Programming: A New Approach to Application Development" (Van Nostrand Reinhold, 1994), by J. Paul Morrison.

A second edition (2010) is now available from CreateSpace eStore and Amazon.com.

The 2nd edition is also available in e-book format from Kindle (Kindle format) and Lulu (epub format).

[Some of my predictions have turned out to be amazingly accurate (smile!) - and other things I mention (back in 1994, mind you) have fallen on stony ground or been superseded by more modern technologies.  Wait for the revised edition of my book - hopefully due out some time in 2010.]

Material from book starts here:

I did not want to call this chapter "Conclusion", as I hope it marks more of a beginning than an ending. We have certainly travelled a long distance, and if you have stuck with me you are definitely to be congratulated! In many ways, though, the journey is only starting: we have come a long way, but these are still the first small steps.

Programming in the year 2010 [darn it, we're there already - you will have to judge for yourselves how much change there has been!] is going to be very different from what we are used to today, it is going to feel very different, and it is going to be done by different types of people from the practitioners of today. Brad Cox wants to get his "Software Industrial Revolution" started, and I agree the concepts are there and the time is right. He points out that today we are still software craftsmen, making once-off items one piece at a time with very simple tools. If you want to read about how things were done before mass production, read "A Book of Country Things" (Needham 1965). These were smart people who knew their materials, and many of their techniques were excellent. But you didn't go to the store to buy a standard part - you made whatever you needed yourself. Now, isn't it rather strange that so many programmers today would still rather build a new piece of code than (re)use one that already exists? Some of them certainly do it because they love it, and there is a role for these people to play, but modern business cannot afford to go on having code built laboriously by hand. In the old days, people made everything themselves because they had to; today, we divide up the work - some people do the designs, others build to them - and machines make all of us more productive. Are people less happy today? I believe only a Luddite would maintain that the old days were better - yes, some of those old tools were works of art, but we have magical things at our fingertips that one of those old-timers would have given his eye-teeth for. Cheaper doesn't have to mean nastier. And if you believe programmers avoid reuse out of love of their craft, I think that's wrong: most programmers don't reuse because it is simply more trouble than building new. Our experience with FBP, you see, has been completely different - in one case I mentioned earlier, a programmer used an off-the-shelf component to perform a function even though she could have done the same thing by adding a single line to an adjacent PL/I component she was writing! So there are profound psychological differences between the two environments.

I believe that an important part of the change is that application developers who are trained in FBP are moving away from procedural thinking towards what you might call "spatial" thinking. Procedural thinking is quite rare in ordinary life, so it's not surprising that people find it difficult, and that its practitioners are (viewed as) different in some ways from ordinary mortals.

If you start to look for examples of procedural thinking in real life, you discover that there are in fact very few areas where we do pure procedural thinking, but there is one activity we have been carrying on since we lived in caves, and that is cooking. Some years ago at the IBM Research Center in Yorktown, Dr. Lance Miller started studying the differences between recipes and programs. He noticed that recipes often contain implicit parallelism: for example, "add boiling water" implies that the water must have started boiling while some other step was going on, so that it would be available when needed. The term "preheated oven" is so common we probably don't even notice that it also violates sequentiality. People say and write things like this all the time without thinking - it is only if you try to execute the instructions in a rigidly serial manner (i.e. play computer) that you may run into some surprises!
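In fact this implicit parallelism is easy to make explicit in any language with cheap concurrency. The following is a minimal sketch in Go - the step names and the two-second "time to boil" are invented for the illustration: the water is set boiling as a concurrent activity, and the recipe synchronizes with it only at the moment the water is actually needed.

package main

import (
	"fmt"
	"time"
)

func main() {
	boiled := make(chan string)

	// Put the water on first, as the recipe implies - it comes to a
	// boil concurrently with the steps below.
	go func() {
		time.Sleep(2 * time.Second) // stand-in for the time to reach a boil
		boiled <- "boiling water"
	}()

	fmt.Println("chop the vegetables") // these steps proceed meanwhile
	fmt.Println("brown the onions")

	// "Add boiling water": synchronize only when the water is needed.
	fmt.Println("add the", <-boiled)
}

Executed rigidly serially, the same steps would take the sum of their times; here the water boils while the chopping and browning go on, exactly as the recipe assumes.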

The other thing he noticed was that the individual steps of the process very often start with a verb and then add qualifiers, e.g. "boil until done", "beat egg whites until soft peaks form", so you know very early in the sentence what kind of activity you will be doing. In programming, we usually bury the "verb" deep inside a nest of do-loops to get the same effect, e.g.

do while a...
    if c is true
        then
            do until b...
                compute
            enddo
    endif
enddo

The effect of this is that you don't know what operation you are going to do until you are deep inside a nest of do-loops or conditional statements. This would be like telling a visitor to town, "Until you come to the fountain after the church across from the train station, keep going straight". The visitor would get a pretty strange impression of your town and its inhabitants!

The other main kind of procedural behaviour is in fact just this one: following directions. Have you ever tried following someone's directions, only to find that they forgot one item, or someone changed a street sign, and you are now facing north instead of west (if you can even figure that out)? If you want to make a grown man or woman cry, ask them to assemble a child's tricycle, whose instructions have been translated into English from another language by a speaker of a third language, and which probably describe the wrong model anyway, if you can figure out what they mean at all. A lot like programming, isn't it? Maps are much easier because they let you visualize relationships between places synoptically, so you can handle unexpected changes, make corrections, and even figure out how to get to your destination from somewhere your informant never imagined you'd land up! On a recent trip to England, I found myself very impressed with the amount of information packed into the signs announcing "roundabouts": general shape of the roundabout, angles, destinations and relative sizes of the roads entering it - and all specified visually!

This is something like the difference between conventional programming and FBP - FBP gets you away from procedural thinking and lets you think map-style.
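To make "map-style" a little more concrete, here is a minimal sketch of an FBP-like network - not DFDM or THREADS themselves, just an illustration in a modern language (Go), whose goroutines and bounded channels happen to correspond quite closely to FBP's processes and connections. The component names and the data flowing through them are invented for the example.

package main

import (
	"fmt"
	"strings"
)

// reader sends a stream of IPs (here, simple strings) downstream.
func reader(out chan<- string) {
	for _, ip := range []string{"alpha", "", "beta", "gamma"} {
		out <- ip
	}
	close(out) // signal end of data
}

// filter passes on only non-blank IPs; it knows nothing about
// who produced them or who will consume them.
func filter(in <-chan string, out chan<- string) {
	for ip := range in {
		if strings.TrimSpace(ip) != "" {
			out <- ip
		}
	}
	close(out)
}

// writer disposes of whatever reaches it.
func writer(in <-chan string) {
	for ip := range in {
		fmt.Println(ip)
	}
}

func main() {
	// The network definition - the "map": reader -> filter -> writer,
	// connected by bounded buffers with a capacity of 10 IPs each.
	c1 := make(chan string, 10)
	c2 := make(chan string, 10)
	go reader(c1)
	go filter(c1, c2)
	writer(c2)
}

Notice that the "verbs" - reader, filter, writer - are visible up front, recipe-style, and that changing the application means redrawing the map in main, not rewriting the components.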

It occurred to me recently that we finally have a unique opportunity: to take the CPU power available on most programmers' desks today and actually use it in support of natural human abilities. Just like other human abilities we take for granted, visual processing is extremely difficult to program into a computer, and requires a lot of computer horsepower. However, many programmers today have powerful machines sitting on their desks which are basically being used as "dumb" terminals. A few years ago, Bucky Pope of IBM did a study in which he concluded that "editing" text really doesn't exercise a machine much - most of the time the programmer is just thinking. So entering a linear string of COBOL doesn't take advantage of the power on his or her desk. And what power! If you remember Chapter 22 on Performance, I said that an API call took 10 microseconds on the IBM 168 on which we ran a bank (5,000,000 accounts) 20 years ago, and takes 50 microseconds on the machine on my desk at home today - so that means I have the processing power to run 1/5 of a bank right here on my desk (yes, I know that's stretching it a bit!). What we might call "visual programming" could actually start to exploit the relatively enormous processing power available to each programmer today. And visual programming means much more than drawing pictures to represent logic - it means developing a synergy between human and machine which plays to one of the human's strong suits, not his or her weakest. A recent article by K. Kahn and V. Saraswat (1990), alluded to earlier, which I found absolutely visionary, describes an approach to a totally visual style of programming, which would not only have a visual syntax, but would also show computation as visual transformations. Software supporting this would have to be able to perform and understand topological transforms, just as humans do without effort. Interestingly, my group at IBM built a visual animation showing the creation and movement of IPs through an FBP network, and showed it at a CASE conference a few years ago - this proved a very effective way of conveying some of the basic concepts of FBP. We have also speculated that visual interaction with a picture of a network would be a very natural debugging environment. At this juncture, 7 years from the end of the century, I believe we have the computer horsepower to make such approaches economically feasible - now we just have to develop the technology.

Up to now, I have concentrated on technology, and I confess to being technologically oriented, so let us assume we have gotten those details out of the way. However, we also have to look at the sociological and psychological factors. What will be needed to get such a technology into use in the workplace? Well, for one thing it is going to need extensive cooperation between business and academia. As long as business and academia are two solitudes, staring at each other across a deep chasm of noncommunication, we are not going to be able to make the transition to a new way of thinking. Business has gotten the impression that its people have to become mathematical geniuses to do the new programming, because that is the image the academics are broadcasting. So it retreats into its corner, and keeps trying to build and maintain systems using linear COBOL text. However, I don't believe we all have to become mathematicians - nothing in this book would give a bright 16-year-old any difficulty at all.

Instead, at this point in time, business people are more willing to hire hundreds of COBOL programmers than to invest in new technologies. The problem is, if you were a CIO (maybe you are), which technology would you invest in? Well, currently it is not a very hard economic decision - it makes more sense to stay with the COBOL coders. At least you can do anything with COBOL (no, I am not refuting everything I have said in this book) - but it takes ages to do it, and the result is almost unmaintainable. Now suppose you were running a cotton plantation a century or so ago, using all manual labour - not very productive, but at least output was predictable, if slow. Imagine, now, that some city slicker comes along with a cotton-picking machine, which he claims is going to improve productivity enormously. But you, the land-owner, figure that you are going to have more highly trained people running it, it may break down at awkward moments, it's going to need parts from halfway across the country, and so forth. You'll do the "smart" thing, right? It wasn't until the prevailing morality started to take into account the feelings and needs of everybody involved and the technology reached a certain level of maturity that the balance tilted in favour of using technology (I wish I could say that this has happened universally, but at least it's a start). I date many of my feelings about technology to a visit I made to a match factory as a schoolboy (circa 1948) - that was the nearest I've ever come to seeing humans used as robots, and I never want to see anything like it again...

I often think the attitude of business is best summed up by a cartoon I saw a few years ago - a scientist type is trying to sell a medieval king a machine-gun, and the caption says, "Don't bother me - I've got a war to fight"! Why should we change the way we do application development? Everyone's happy with the status quo, right? I don't think so. I think in fact there is a general dissatisfaction with the way things are now - it's just that nobody has shown a clear way to solve it that will benefit all the stakeholders. Over the last few decades, there has been an unending series of snake-oil salesmen, each peddling their own panacea. Why do you think each new remedy gets adopted so enthusiastically? Because there is a real need out there. So far, they have all turned out to be inadequate in some way, and usually get dropped. This is understandable, but it is very important that we learn from each such experience, so that collectively we move forward, rather than just jigging in one place all the time.

But, to give all groups a chance to take potshots at me, academia is partly to blame as well. Some academics, I am afraid, are doing the modern equivalent of fiddling while Rome burns. A professor of computer science told me a few years ago that, at his university, a thesis on application development technology just wouldn't be accepted. I found this shortsightedness absolutely incredible - and he agreed! It seems obvious to me that application development technology is fascinating stuff, and it's even useful! Hopefully this attitude has changed in the years since, but I'm sure it has cost application development research a good decade or two. And even if theses on application development are now being written, how do we get these ideas into programming shops around the world?

I have a warning for any academics who may be reading this: there is so much to read now in this area that it would be quite understandable if you ignored papers like the one that appeared in the IBM Systems Journal back in 1978, because they aren't full of Greek letters. However, we were using these concepts to run a real live bank, and building up real experience trying to make them work. This experience is priceless, and perfectly complements the interesting theoretical work that you are doing all over the world.

Inflexible systems not only cost money, but they contribute to users' perception of computers as inhuman, inflexible, and oppressive. How many times have you been told, "It's the computer", when confronted with some particularly asinine bit of bureaucracy? We know it's not the hardware's fault - too often it's the fault of some short-sighted or just over-burdened programmer. Does the public know this? If they do, they've probably been told, "Yes, we know it's awkward, but it would cost too much to change it." Why does it cost so much? If there is only one message I want to leave you with after reading this book, it's that the root cause of the present state of programming is the von Neumann paradigm. It's that simple, and that difficult (you know we humans prefer things to be complex but easy, like taking a pill, but life isn't like that).

We started this book by talking about the need to relax the tight procedural nature of most of today's programming - imposed not so much by the von Neumann machine as by the mistaken belief that we still have to code according to the von Neumann model. Internally, today's computers are no longer tightly synchronous, and neither are the environments they run in. This can now be seen as a special case of a much larger issue: the key to improving productivity and maintainability in application development, and to making programming accessible to a wider public, is to make the world inside the computer match the world outside it more closely. The real world is full of many entities all doing things in parallel: you do not stop breathing when I draw a breath. It is therefore not surprising that we were specifically designed to be able to function in such a world, and that we get frustrated when we are forced to do only one thing at a time.

In a similar development, but at a different scale, we are starting to distribute our systems across different machines and different geographical locations. Imagine a world of multiprocessor machines communicating over LANs, WANs or satellite links, and you get a vision of a highly asynchronous, massively parallel data processing network of world-wide scope. Since this is clearly the way our business is going, why should we be restricted to using synchronous, non-parallel, von Neumann machines as the processing nodes?! So our applications would be networks, networking with other networks, extending eventually around the planet.
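As a purely illustrative sketch (nothing like this appears in the FBP implementations described in this book), the component style shown earlier extends quite naturally across machines: replace an in-memory connection with a socket, and the upstream half of the network can run anywhere on the planet. Continuing in Go, with an arbitrary port number and a simple line-per-IP framing invented for the example:

package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// Listen for one upstream connection; the port is arbitrary.
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		panic(err)
	}
	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Treat each line received as one IP arriving from the remote
	// half of the network, and dispose of it locally - here, print it.
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}

A matching upstream process would simply write its output IPs, one per line, to the socket instead of to a local channel; neither component needs to know where the other lives.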

Fact is starting to catch up with fiction: John Brunner is thought to have originated the use of the word "worm", in his 1975 novel The Shockwave Rider, to describe an autonomous program travelling the computer net. Hackers (using both the positive and negative connotations of that term) have developed computer viruses, whose behaviour mirrors that of biological viruses. If you have had a system attacked by one of these critters, it may be hard for you to think kindly thoughts about what are usually perceived today as nuisances at best, and at worst as downright dangerous and destructive. However, a lot of recent work by responsible scientists has suggested that crossbreeding these two species may result in very powerful tools for making computers more user-friendly. Read what E. Rietman and M. Flynn (1993) have to say about worms and viruses in the future. They describe scenarios such as: worm programs assembling personalized newspapers for subscribers, using data extracted from databases all over the world; "vampire worms" taking advantage of available CPU time to do useful jobs (at night, mostly - hence the name); viruses automatically inserting hypertext buttons into text databases; viruses doing automatic database compression and expansion as time becomes available, scavenging dead data, monitoring for broken data chains, and on and on. In fact, there is already a precursor of this kind of facility roaming the net, called "Gopher" (Martin 1993), which lets you look at the weather in Australia from a terminal in Ohio, and generally roam around "gopherspace" from any terminal connected to the Internet, or from a terminal talking to a remote computer that can access the Internet. Gopher now seems to have been upstaged by the WorldWideWeb, which sounds truly marvellous (Cramer 1994)! [I seem to have been truly prescient here - remember this was written in 1994!] As an interesting aside, Rietman and Flynn also point out that worms could be a useful method to "program massively parallel computers" (their italics).

Talking about fact catching up with fiction, some recent work coming out of Xerox PARC is even more incredible. Instead of creating make-believe environments which people can move through using the computer (Virtual Reality), how about enhancing our real-world environment with a myriad of small computing devices that we can talk to, using voice and gestures, and that talk to each other, perhaps about us? How about an office that automatically adjusts the temperature and humidity to suit whoever is occupying it, and plays soothing music if you want it to? How about children's building blocks that can be assembled into working computerized structures, or a pipe that you can use to point at a virtual blackboard, that has a small microphone and speaker inside it, and that perhaps monitors its user's health as well?! A lecturer once asked us: "Where will computers be used?" and he answered: "Anywhere that it makes sense to put the word intelligent." For example, not just intelligent cars or planes, but intelligent offices, desks and blackboards, intelligent bookshelves - maybe even intelligent paper and pencils. This work goes under the general name "ubiquitous computing" or "ubicomp", and you can read about it in Vol. 36, No. 7 of Communications of the ACM (Wellner et al. 1993). This enormously exciting work seems to me to be totally compatible with everything that has gone before in this book.

The IBM scientist Nat Rochester once described programmers as working more closely than any other profession with what he called pure "mind-stuff". If you imagine a world-wide network of "mind stuff", then this corner of the universe is starting to see a new type of intelligence, or at least a new vehicle for intelligence. Baird Smith of IBM used to describe software as "explosive" while hardware is "implosive". While software, which is built out of mind-stuff, is becoming more and more powerful and complex, hardware is getting smaller and smaller (although more complex!) and cheaper and cheaper - as it should do, since, after all, its basic building material is sand! This is a perfect example of what Buckminster Fuller called "doing more with less".

I think we are starting to see a truly massive convergence of ideas. It is unfortunate that most of the science fiction accepted by the mainstream is dystopian, because most good science fiction is very optimistic and upbeat. It has always surprised me how few programmers, who live their professional lives on the cutting edge of change, actually read science fiction. Does this reflect a perception that what they do is not imaginative and exciting? Is this still another vicious circle?

J.W. Campbell Jr., the editor of Analog, originally Astounding, was an inspiration to a generation of science-fiction readers. He taught us the value of considering new ideas objectively, avoiding the extremes either of rejecting ideas out of hand because they are new and different, or accepting them instantly without proper evaluation. It was from his editorials that I learned the idea of "rope logic", which I believe this book embodies: the ideas described in this book are not built up on a basis of Aristotelian syllogisms, but as a multitude of small threads. While individually none of the threads may be very strong, they complement each other, resulting in a rope of logic which, in its totality, is strong enough to move pyramid blocks around (and maybe even a whole industry)! By the same token, you, my readers, may be able to snap a few of these threads, but my conviction is that the rope as a whole will remain as strong as ever. Am I deluding myself? The only possible test is the systems built using these concepts, and they are some of the sturdiest systems the companies that use them have in their collection.

While it may be true that some science fiction is naïve, some of the most exciting and forward-looking thinking going on today is described in the fiction and fact pages of today's science-fiction magazines. Read them and stretch your minds! If we have the will, we can make pretty good lives for ourselves. In my experience it is the pessimists (have you noticed they usually call themselves realists?) who show a certain naïveté - the naïveté of believing that things will go on just the way they are, that there will be no technological, political, commercial, social, artistic or spiritual breakthroughs. In fact, the pace of change in all areas is on an accelerating curve - if you doubt this, just look back 50 years and see how far we have come. We are only limited by our imaginations, and that's a resource that is never going to be used up! Even if you feel solving the world's problems is too big a task, I see no reason why we can't at least tackle the smaller task of making programmers more productive and programming more fun - there is no law that says work has to be drudgery. In fact, when one masters a medium, and the medium fits the hand, there is a feeling of being at one with one's tools which can border on the transcendent. That's what training is about - not to turn out a generation of button-pushers, but to produce "masters". Our goal in our profession is not to be able to push a button and turn out a payroll program, but to be more like those ancient Celtic artisans who made a drinking vessel a work of art. Were they having fun? You just bet they were! And so can we - so why not start right now?! And if this isn't sufficient justification by itself, wouldn't it be neat if we could have happier, more productive programmers, working for companies that are more responsive to their clients, and saving everybody money as well (and therefore improving the quality of life for you and me)? Utopian? Perhaps, but if we have a goal, we can start moving in that direction, and we can measure how much progress we have made towards getting there. If we have no goal, we'll just be wasting energy running round in small circles. Programming is a part of life, so this idea really shouldn't surprise anybody!

So what's going to happen over the next couple of decades? I used to think that if you built a better mousetrap, everyone would beat a path to your door. I learned differently by personal experience - and then I read Kuhn (1970), who put it all in perspective for me. Did you know that the phlogiston theory didn't yield instantly to the oxygen theory the moment someone did the deciding experiment? Some people never really took to this weird oxygen idea! Me, I'm betting on the next generation. Here we have a new paradigm which is clearly superior, but experience shows that it may take as much as a generation for it to catch on. But it could take a lot less than that! By the way, if you want to learn more about paradigms, how they affect how we think and how they get adopted, Kuhn's ideas have been extended over the last few years in a series of thought-provoking books and video tapes by Joel Barker (e.g. Barker 1992). Necessary reading!

As I have tried to show in Chapter 26, some of the most advanced practitioners of OOP are discovering FBP or something very close to it. There will be people who say programming will never become simple, or that the man or woman in the street will never see it as enjoyable. I personally believe that computers and people will always need go-betweens, just as people from different cultures do (sometimes so do members of the same family!). Programmers are skilled at interpreting between people and machines and there will always be a need for their services. However, by giving programmers inadequate tools, and then blaming them if they aren't successful, we scare off the very people who would be best at this work. It is time we stopped doing that! It's never too late to realize that we took a wrong tack a few decades ago, and change direction.

One of the most exciting things about FBP for me is that it provides a bridge between ideas that are currently restricted to very technical papers and businesses which think they are stuck with COBOL assembly lines for ever. We also have today the potential to create a new era of productivity, based on a marketplace of reusable components. Wayne Stevens, who is responsible for a number of the ideas in this book, was very optimistic about the potential of these concepts and was tireless in promoting them within the DP community, putting his not inconsiderable reputation behind them. Where I was the paradigm shifter, to use Joel Barker's phrase, he was the paradigm pioneer. He believed, as I do, that these concepts will have a big effect on our future as an industry, and I deeply regret that he was not able to see them become widely accepted in his lifetime. I have always liked the phrase, "Will those who say it can't be done please move aside and let the rest of us get on with doing it" - this was very much his attitude to life! My colleagues and I over the years have had a glimpse of the future of the DP industry, and I have tried to share this vision with my readers. I hope that you have enjoyed reading about it as much as I have enjoyed telling you about it!