1249 Pineview Dr., Apt 4
Morgantown, WV 26505
adtodd@mail.wvnet.edu (formerly U46A8@WVNVM.WVNET.EDU)
Interfaces and standards are worthless unless large numbers of people do a great deal of work to give them effect. Most of this work is uncelebrated, in the form of learning to use application software. At some point, people are going to simply stop making the effort, because the costs outweigh the gains. A technology's proprietary status is obviously a considerable potential cost and an additional deterrent to its adoption. "Thus conscience does make cowards of us all."
The spread of open source fundamentally reflects a shift of priorities. The kind of hacking required to make a program run fast on a computer of limited performance is a kind of lie told to a machine. What Microsoft was good at was making sure that everyone told the machine the same lie. Conversely, the open-source movement adopted the social arrangements of traditional science, the science of Einstein, Bohr, Watson and Crick, etc., which were aimed at producing truth. Bug-free software is an expression of truth. Open-source software is reliable precisely because it is open to public criticism. It is not simply Eric Raymond's "many eyes," but something more fundamental. Hierarchical organizations such as corporations are rarely very self-critical. At a certain point criticism becomes treason, and errors are perpetuated until they encounter the outside world. I should hasten to add that Microsoft is by no means the worst offender in this regard, as corporations go. The notorious temper tantrums of Bill Gates are not to be compared with Henry Ford II's habitual firing (often by deputy) of anyone who ever disagreed with him. But then, no one ever suggested that the Ford Motor Company could produce a viable windowing operating system. The point is that open source permits its participants to address Richard M. Stallman himself with such robust language as "chill out," and the usual range of Anglo-Saxon terms. There is no suggestion that bug reports can be suppressed or denied. Once software disengages itself from Moore's Law, open-source software will more or less inevitably prevail because it offers greater prospects of reliability. Network programming will no doubt be based on open software and open standards.
But at the same time, this is likely to be an empty victory. Network programming seems to be very largely associated with "programming in the large," that is, the kind of programming done by firms such as Oracle for large, dispersed organizations. Such programming is primarily concerned with accounting and logistics, and the related management information systems. The characteristic customer firms are those for which accounting and logistics are synonymous with action.
More specifically, network programming is largely keyed to the idea that different computer systems belonging to different organizations will become extensively interoperable, far beyond the level of the existing standard public protocols. A classic definition of interoperability seems to be that one computer, in charge of monitoring one company's inventory, decides that it is low on some item, and automatically places an order with the computer of another company.
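To make that classic definition concrete, here is a minimal sketch in Python of one machine ordering from another: a program watches its own stock levels and places an order with a supplier's computer when an item runs low. The supplier URL, item names, and JSON order format are all hypothetical illustrations, not any real trading protocol.

    # A minimal sketch of machine-to-machine ordering. The endpoint URL,
    # item names, and order format are hypothetical, for illustration only.
    import json
    import urllib.request

    REORDER_POINT = 100     # reorder when stock falls below this
    REORDER_QTY = 500       # quantity to order each time

    SUPPLIER_ENDPOINT = "https://supplier.example.com/orders"  # hypothetical

    inventory = {"widget-7": 82, "widget-9": 240}  # illustrative stock levels

    for item, on_hand in inventory.items():
        if on_hand < REORDER_POINT:
            # Place the order directly with the other company's computer.
            order = json.dumps({"item": item, "quantity": REORDER_QTY}).encode()
            request = urllib.request.Request(
                SUPPLIER_ENDPOINT, data=order,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(request) as response:
                print(item, "reordered, status", response.status)

Real systems of this kind run over negotiated EDI or similar protocols rather than an open web endpoint, but the shape of the transaction is the same.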
However, this kind of programming is tied to a declining sector of the economy. For example, the number of commercial banks is being steadily decreased by merger. When one bank absorbs another, the customers of one bank are given accounts in the other, their balances transferred, and after a reasonable transition lag, one of the two banks simply ceases to exist as an organization. Its software is simply discarded.
Likewise, not only has the number of automakers declined, but programs of internal standardization have reduced the number of distinct components and manufacturing operations at each automaker. Automakers are such a major and archetypal part of the manufacturing economy that one is justified in dealing with them at some length.
Automobiles are in the process of becoming software. The physical mechanisms are becoming simpler, and more dominated by computerized controls. For example, each gear ratio in a conventional automotive transmission is represented by a pair of gears. In short, the gear ratios are "hard-wired" in the most fundamental way. This means that the automakers are obliged to produce different kinds of transmissions for different uses. By contrast, look at the electric transmissions used in up-to-date railroad locomotives. The diesel engine turns a generator, which feeds current to a solid-state commutator, which can switch the current's polarity back and forth, so that the electricity coming out of the commutator is at any desired frequency. This current is fed to an electric motor, and it happens that the "gear ratio" of an alternating-current induction motor depends on the frequency of its electric power. Thus, the gear ratios are nothing more than numbers inside the computer which controls the commutator. The automakers are gradually experimenting with railroad-style electric drives. Furthermore, the simplest, and at the same time, the most advanced kind of engine is something called a "free-piston engine," a single, double-acting cylinder similar to that in a steam locomotive. Instead of a crankshaft, it has a solenoid to take off power at a rate dictated by the computer. A free-piston engine operates on a two-stroke diesel cycle, and because its compression is variable, it can use a wide range of liquid fuels. This combined package would of course be radically simpler to manufacture than a conventional engine and transmission. Most of the complexity has been displaced into the control program, and, not surprisingly, automobile control programs are pushing up into the hundred-thousand-line range.
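The point about gear ratios becoming numbers can be put in a few lines of code. The ideal (no-slip) speed of an AC induction motor follows the standard formula rpm = 120 x frequency / poles, so the controller realizes any "gear" simply by choosing an inverter frequency. The ratios, engine speed, and pole count below are illustrative:

    # A sketch of gear ratios as numbers in software. The formula
    # rpm = 120 * f / poles is the standard synchronous-speed relation
    # for an AC induction motor; all other constants are illustrative.

    def synchronous_rpm(frequency_hz, poles):
        """Ideal (no-slip) shaft speed of an AC induction motor."""
        return 120.0 * frequency_hz / poles

    def frequency_for_ratio(engine_rpm, ratio, poles):
        """Inverter frequency that makes the motor turn at engine_rpm / ratio."""
        target_rpm = engine_rpm / ratio
        return target_rpm * poles / 120.0

    # The whole "transmission" is now just a table of numbers:
    GEAR_RATIOS = [3.5, 2.1, 1.4, 1.0, 0.8]

    for gear, ratio in enumerate(GEAR_RATIOS, start=1):
        f = frequency_for_ratio(engine_rpm=1800, ratio=ratio, poles=4)
        print(f"gear {gear}: ratio {ratio} -> inverter frequency {f:.1f} Hz")

Changing the table changes the transmission; no new gears are cut.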
The same principle applies to the manufacturing process itself. Inevitably, the manufacture of automobiles will consolidate itself into fewer processes, carried out by fewer workers in fewer plants in a smaller region. The design process will come to look more and more like software development, with the same characteristic emphasis on accumulating a corpus of stable, debugged code, and reusing it.
When information is used for feedback, it becomes invalid, because it has been used to change itself, and is therefore no longer true. Thus, the large-scale aggregation and collation of information is fundamentally incompatible with extensive use of feedback. An organization which collates information is not one which can respond with the greatest rapidity to changing circumstances. Effective use of feedback is typically going to mean building versatile automatic machines of one kind or another. In practice, these machines will be sold to end-users when possible, thus removing their repertoire from the scope of commerce. To put this in concrete terms: you buy a printer, paper, and ink cartridges. That is commerce. However, you use the printer to print off things you download from the web (or Gnutella) for free, and that is outside the scope of commerce. Well, imagine an increasing range of goods and services delivered by machines with the basic flexibility of a computer printer. Much of the high-end programming work in the next few years is going to be in creating the necessary bundled software for these machines.
Of course, in the short term programming employment in the old economy will increase, but it will increase by allowing the old economy to downsize further. Eventually programmers will have to shift over to the new economy.
The growth sectors of the economy are concerned with people rather than things. The two biggest growth sectors are education and health care. However, for these sectors, command-and-control functions are not so central as for logistics-oriented businesses. People are simply too complicated to be dealt with en masse. The types of computer systems which prove useful for these growth sectors are not likely to lend themselves to "programming in the large."
For example, a physical therapist may have a requirement for a smart exercise machine, which gives the user feedback at intervals of 1/100 second, probably by adjusting the machine's internal brake settings. However, it is in the nature of feedback information that it becomes invalid as soon as it is acted on, so there is little useful purpose in aggregating or storing such information. Certainly, the aggregate horsepower generated by all the physical therapy patients in the United States is a meaningless factoid.
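A minimal sketch of such a control loop follows, assuming hypothetical read_speed() and set_brake() device calls; the target speed, gain, and loop period are illustrative. The point to notice is that each reading is acted on and then thrown away:

    # A sketch of a 100 Hz feedback loop for a smart exercise machine.
    # read_speed() and set_brake() are hypothetical stand-ins for device I/O.
    import time

    TARGET_SPEED = 0.5   # desired handle speed, m/s (illustrative)
    GAIN = 0.8           # control gain (illustrative)
    PERIOD = 0.01        # 1/100 second, as in the example above

    def read_speed():
        """Hypothetical sensor read; a real machine would query hardware."""
        return 0.6

    def set_brake(setting):
        """Hypothetical actuator write, 0.0 (free) to 1.0 (locked)."""
        pass

    brake = 0.5
    for _ in range(300):                       # three seconds of control
        error = read_speed() - TARGET_SPEED
        brake = min(1.0, max(0.0, brake + GAIN * error * PERIOD))
        set_brake(brake)
        # The reading is now stale: acting on it changed the system,
        # so there is nothing worth aggregating or storing centrally.
        time.sleep(PERIOD)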
Here is another example, from my own field, higher education. The closest analogy we have to logistics in the corporate sense is computerized course registration and grade reporting. The grading system is static, in the sense that it has not been expanded for years. Even plus/minus grading is far from universal. Furthermore, the emphasis has shifted from undergraduate to graduate programs, and here the substantive requirement is usually a thesis, a comprehensive exam (often with an oral component), or both. Likewise, enrollment limits become practically inoperative. There is always space in a sufficiently advanced course for a suitable person. The tendency is for the central administration to know less, not more, about what a student is doing.
People who are using computers in constructive ways for education are usually trying to make the freshman year more like graduate school. For example, in one major pilot project, Prof. Edward Ayers at U. Virginia has created the Valley of the Shadow project, a large on-line, web-accessible archive of documents relating to the Civil War. The idea is that instead of teaching freshmen History of Western Civilization from a textbook, you send them into the archives to actually do history, in the same sense that a professional historian does history. From an engineering point of view, Valley of the Shadow is a more-or-less conventional website. Prof. Ayers is in effect using computers and the internet as a more flexible and inexpensive replacement for photocopiers and microfilm. The kind of software which would benefit his project would probably be better optical character recognition software, much more sophisticated than anything now available commercially, with emphasis on interactive handwriting recognition, and with special facilities to aid a human in proofreading the output against the original. This kind of program really does not tend to push the limits of networking.
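To suggest the shape of such a proofreading aid, here is a toy sketch: recognized words carry confidence scores, and only the doubtful ones are queued for a human to check against the original page image. The recognize() stub, its output, and the threshold are assumptions for illustration; no real OCR package is invoked here.

    # A sketch of an interactive proofreading aid for OCR output.
    # recognize() is a hypothetical stand-in for a real recognizer.

    CONFIDENCE_THRESHOLD = 0.90

    def recognize(page_image):
        """Hypothetical recognizer: yields (word, confidence, region) triples."""
        return [("Shenandoah", 0.98, (10, 10, 90, 30)),
                ("Vallcy", 0.41, (95, 10, 140, 30))]   # a likely misreading

    def proofread(page_image):
        text = []
        for word, confidence, region in recognize(page_image):
            if confidence < CONFIDENCE_THRESHOLD:
                # Show the human the image region beside the machine's guess.
                print(f"check region {region}: recognized as {word!r}")
                correction = input("correction (blank to accept): ").strip()
                word = correction or word
            text.append(word)
        return " ".join(text)

    print(proofread(page_image=None))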
Similarly, in engineering education, the emphasis is not on trying to grade students more precisely, but rather to enable them to actually be engineers as soon as possible. That is of course a considerable part of the logic built into GNU HURD -- enabling more undergraduates to do real work in building components of GNU/Linux. Architecture schools are using CAD/CAM to enable students to design and build community centers in slums. Computers and wood carpentry are made to fit together in an integrated whole. The whole principle underlying such initiatives is that once you get the student actually making things, he's going to get drunk on the process of creation, and work like a hacker.
Computer video is the major upcoming project for mass-production software. More specifically, the biggest project within computer video is computer animation. Progress in computer animation is going to require the creation of a considerable quantity of object-oriented model layers, forming a "standard library."
The first layer of computer animation is 3D rendering. I think one may safely say that this has been largely implemented. Both Microsoft and the open source movement (e.g., Povray, VRML) have developed software, and much of the functionality of this software has been implemented in hardware to attain the necessary speed (e.g., Microsoft's Xbox).
The next software layer is what is called "physics," that is, generalized finite-element modeling systems which take an immensely detailed static description of an object and generate masses of dynamic 3D data to feed into the rendering engine. This layer is still at the proprietary stage, with people developing programs and selling them either to game developers or to the major proprietary video software vendors, such as RealNetworks. Physics is at present still partially bound by computer power, which is another way of saying that it is in a position to benefit from Moore's Law over the next several years.
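To give a flavor of what this layer does, here is a toy sketch: a static description of an object (two point masses joined by a spring) is integrated forward in time, and every step emits fresh 3D data for the renderer. The constants and the render() stub are illustrative; a real system solves far larger finite-element problems.

    # A toy sketch of the "physics" layer: static description in,
    # per-frame 3D data out. All constants are illustrative.
    import math

    DT = 1.0 / 30.0      # one physics step per frame at 30 frames/sec
    STIFFNESS = 50.0     # spring constant, N/m
    REST_LEN = 1.0       # spring rest length, m
    MASS = 1.0           # kg
    GRAVITY = -9.8       # m/s^2

    anchor = (0.0, 2.0, 0.0)     # fixed point
    pos = [0.0, 0.5, 0.0]        # free mass, initially stretched below rest
    vel = [0.0, 0.0, 0.0]

    def render(frame, point):
        """Stand-in for handing the frame's 3D data to the rendering layer."""
        print(f"frame {frame}: y = {point[1]:.3f}")

    for frame in range(90):
        # Hooke's law along the anchor-to-mass axis, plus gravity.
        dx = [p - a for p, a in zip(pos, anchor)]
        length = math.sqrt(sum(c * c for c in dx))
        force = [-STIFFNESS * (length - REST_LEN) * c / length for c in dx]
        force[1] += MASS * GRAVITY
        # Semi-implicit Euler integration.
        for i in range(3):
            vel[i] += force[i] / MASS * DT
            pos[i] += vel[i] * DT
        render(frame, pos)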
Beyond "physics" is the problem of providing finite element models for the finite element software to use. This level has two forks, one of which is comparatively trivial. The vendors of CAD/CAM software can be expected to support "physics" as standards emerge, on the same basis that they would support an automatic manufacturing tool. A Hollywood properties man has always had the option of buying as many properties as possible in ordinary stores. To the extent that animation software is linked to CAD/CAM, much the same situation will prevail for animation "properties," scenery, etc.
The more complicated fork, however, is the animate body, such as the human body. Here, one might have a program, substantially equivalent to Gray's Anatomy, which implements several thousand anatomical landmarks, and also a large number of parameters describing the departure of a particular human body from the archetypal human body. Once you have this module in place, you can create extremely realistic electronic marionettes in about 10K of data each. This module is not going to present a great strain on computer power, because it need run only a few times per viewing session (rather than at the 30 frames/sec rate required of 3D and "physics"). Body generation is therefore a prime candidate for early open-sourcing. There will be a number of other equivalent modules going into an animation "word processor," which will also serve as a viewer. The catch, however, is that such a program will run on a desktop supercomputer.
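A toy sketch of such a body-generation module: an archetypal landmark table ships with the viewer, and a small per-person parameter vector deforms it into an individual. The landmark names and blending scheme below are invented for illustration; a real module would follow actual anatomy.

    # A toy sketch of body generation from an archetype plus parameters.
    # Landmark names and offsets are invented for illustration.

    ARCHETYPE = {
        # landmark name -> (x, y, z) in the archetypal body, metres
        "vertex_of_skull": (0.00, 1.75, 0.00),
        "left_acromion":  (-0.20, 1.45, 0.00),
        "right_acromion": ( 0.20, 1.45, 0.00),
        # ... several thousand more in a full module
    }

    # Offset applied to each landmark per unit of each parameter.
    MODES = {
        "height": {"vertex_of_skull": (0.0, 0.10, 0.0)},
        "shoulder_width": {"left_acromion": (-0.03, 0.0, 0.0),
                           "right_acromion": (0.03, 0.0, 0.0)},
    }

    def build_body(params):
        """Reconstruct one individual's landmarks from a small parameter dict.

        A few hundred floats like this -- roughly 10K of data -- specify a
        marionette, since the archetype itself ships with the viewer.
        """
        body = {}
        for name, (x, y, z) in ARCHETYPE.items():
            for param, value in params.items():
                dx, dy, dz = MODES.get(param, {}).get(name, (0.0, 0.0, 0.0))
                x, y, z = x + dx * value, y + dy * value, z + dz * value
            body[name] = (x, y, z)
        return body

    marionette = build_body({"height": 1.2, "shoulder_width": -0.5})
    print(marionette["vertex_of_skull"])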
The object of all of this is to create a body of software which can support a give-and-take form of communication, similar to what exists for the written word. Every viewer should have the capability to tell a counter-story, using the very elements of a narrative to say, in effect: "no, it didn't happen like that, it happened like this instead." I should like to suggest that MAD magazine represents a prototype of sorts. MAD is of course primarily based around using drawn cartoons to respond to movies, television, and television advertising. The magazine puts little effort into responding to print media.
From the perspective of a historian, capitalism is comparatively recent; from the perspective of an anthropologist, it is comparatively parochial. Business is only one of the possible economic systems. Perhaps the internet is the means of transition to something else.