TALK TO EACH OTHER
ASK EVERYONE TO FREE HUMANITY
UNTIL ALL OF HUMANITY PICKS THE DATE
TO BOYCOTT MONEY & POVERTY FOREVER
P.S. P.ARADIGM S.HIFT
GIVE UNTIL EVERYONE
RECEIVES ABUNDANCE FOREVER
ONLY THEN CAN WE COOPERATE FOR A
HEALTHY, SUSTAINABLE AND PEACEFUL WORLD!
PASS IT ON
***
No more hate, fear, insecurity, profiteering or lies. No more poverty, politics, militarism or war.
No more slavery or child prostitution or sickness. No more drug laws or prosecution for personal choices.
No more ignorance or apathy. No more nuclear bombs or power plants, or burning fossil fuel. No more one-time or plastic packaging and toxic science. No more borders…No more enemies…No more lies…
End of The World Report
This is T H E conversation that will create a sustainable world to live in.
We must act globally…NOW… on these life-sustaining priorities in complete cooperation. These priorities include recovering, regenerating, and maintaining our oceans, waterways, forests, mountains, plains, deserts, atmosphere, and orbital space, NOW.
Although we have scientifically micromanaged our planet for 100 years, we have allowed profiteering to dominate its condition. While we compete for our needs, we never act with the level of resolve needed to recover our dying oceans, lakes, and rivers. We decimate our forests and coastlines, while species extinction continues at light speed. The air smells of toxic science. We continue soil depletion and the chemical nightmare that has affected all our senses. We ignore the massive global problem of packaging and trade. Thus our waterways and oceans have become global waste sites for non-degradable plastics, chemicals, nuclear reactor cores, ships and weapons. We forget about the oils, gases, and chemicals from transportation and machinery oozing under our feet and into our foods. We compete so vehemently that most of humanity is starving for decency, dignity, mental health, and sufficiency.
We are, by all rational and scientific perceptions, too late to create a sustainable world for oxygen-breathing life forms…and ONLY because we compete to prejudice each other's value for money!
NOW… we must change the paradigm! A cooperative world would never use prejudice, or use money for need, want and desire. A cooperative world would resolve its problems before they become cyclical. A cooperative world would never burn fossil fuels or use nuclear fission, for any reason. A cooperative world would not allow poverty for anyone, ever. A cooperative world would not program its young to fit the mold of a profiteering society. A cooperative world would never need to lie or have enemies. A cooperative world is a for-each-other world.
Participating in this conversation is our only hope for a sustainable future. We must Talk To Each Other…UNITE NOW...WORLD WIDE... without budgets or prejudice to solve our social, environmental and resource problems. We MUST UNITE by simultaneously, all at once, boycotting money forever and giving everyone abundance regardless of contribution. Only money and budgets have kept us from a healthy and peaceful world.
OUR SYSTEMS THAT PROVIDE FOR HUMANITY'S NEEDS ARE COLLAPSING. SIMPLY PUT, WE MUST UNITE TO PUT OUR PLANET INTO EMERGENCY ROOM STATUS, NOW, BEFORE OUR SYSTEMS COLLAPSE AND MAKE UNITING IMPOSSIBLE.
This is not a mountain of change...it is the relief our humanity and our planet need, and must have NOW…for sustainability to have a fighting chance…A free and kind world, Tevin
What We Should All Know
We humans tend to see ourselves as less than capable of understanding the way of the world. The fact is, we all understand what the sources of our problems are, and yet we feel or think we can't do anything about it.
However, no matter our position in life we are equal in importance and value to each other. Human nature is not the choices we make. Human nature is in the fact that we have the choice to choose whatever we want. This means we can change our lives and our world into a healthy abundant life for everyone.
Our choices, right now, are infinite because we have all the solutions to the world's problems and have had since time began. But the profiteers have enslaved our choices with money, war, enemies and, most of all, by prejudicing each other's value for our effort.
Can we recover from the toxic madness our earth has become? Yes! Can we feed the world in abundance? Yes, and while doing so, eliminate pesticides and chemicals from our lives forever. Can we respect each culture's right to live and choose as they do? Yes, because the reign of freedom for humanity will be so great that those who hold onto oppression will soon be seduced by freedom in balance with human (choice) nature. Can we fend off the greed of the few who seem to seduce us again and again? Yes, because we have chosen abundance for everyone. This leaves no one wanting and no one willing to be a slave.
Can we talk to each other, pass on the P.S. (Paradigm Shift) idea, and continue to make money? Yes, of course. We keep doing as we have while we talk to each other about the Paradigm Shift. The more people talk to each other, the faster the Paradigm Shift happens. While making a movie, or putting new tires on a customer's car, or getting gas, or buying groceries, we must ask each other to free the world of poverty by uniting to boycott money forever and give each other, everyone, abundance.
What is hard to maintain is how we live with money as our choice variable. What is easy is to give abundance to everyone first and foremost. Abundance for everyone frees humanity to cooperate with each other to recover the environment and retrieve humanity from the hell we live by: politics, special interests, militarism, enemies, poverty, unnecessary suffering, disease, famine, genocide, and profiteering.
What do we want? A dying planet, or a thriving healthy planet? The choice is always ours. Talk to each other or perish in a dying world. A free and kind world, Tevin
Friday, June 5, 2009
Cosas/Reliability
http://www.rebelscience.org/Cosas/Reliability.htm
Abstract: There is something fundamentally wrong with the way we create software. Contrary to conventional wisdom, unreliability is not an essential characteristic of complex software programs. In this article, I will propose a silver bullet solution to the software reliability and productivity crisis. The solution will require a radical change in the way we program our computers. I will argue that the main reason that software is so unreliable and so hard to develop has to do with a custom that is as old as the computer: the practice of using the algorithm as the basis of software construction (*). I will argue further that moving to a signal-based, synchronous (**) software model will not only result in an improvement of several orders of magnitude in productivity, but also in programs that are guaranteed free of defects, regardless of their complexity.
Software Is Bad and Getting Worse
The 'No Silver Bullet' Syndrome
Not long ago, in an otherwise superb article [pdf] on the software reliability crisis published by MIT Technology Review, the author blamed the problem on everything from bad planning and business decisions to bad programmers. The proposed solution: bring in the lawyers. Not once did the article mention that the computer industry's fundamental approach to software construction might be flawed. The reason for this omission has to do in part with a highly influential paper that was published in 1987 by a now famous computer scientist named Frederick P. Brooks. In the paper, titled "No Silver Bullet--Essence and Accidents of Software Engineering", Dr. Brooks writes:
But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity....
Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any--no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors, and large-scale integration did for computer hardware.
No other paper in the annals of software engineering has had a more detrimental effect on humanity's efforts to find a solution to the software reliability crisis. Almost single-handedly, it succeeded in convincing the entire software development community that there is no hope in trying to find a solution. It is a rather unfortunate chapter in the history of programming. Untold billions of dollars and even human lives have been and will be wasted as a result.
When Brooks wrote his famous paper, he apparently did not realize that his arguments applied only to algorithmic complexity. Most people in the software engineering community wrongly assume that algorithmic software is the only possible type of software. Non-algorithmic or synchronous reactive software is similar to the signal-based model used in electronic circuits. It is, by its very nature, extremely stable and much easier to manage. This is evident in the amazing reliability of integrated circuits. See Targeting the Wrong Complexity below.
Calling in the lawyers and hiring more software experts schooled in an ancient paradigm will not solve the problem. It will only be costlier and, in the end, deadlier. The reason is threefold. First, the complexity and ubiquity of software continue to grow unabated. Second, the threat of lawsuits means that the cost of software development will skyrocket (lawyers, experts and trained engineers do not work for beans). Third, the incremental stop-gap measures offered by the experts are not designed to get to the heart of the problem. They are designed to provide short-term relief at the expense of keeping the experts employed. In the meantime, the crisis continues.
Ancient Paradigm
Why ancient paradigm? Because the root cause of the crisis is as old as Lady Ada Lovelace, who invented the sequential stored program (or table of instructions) for Charles Babbage's analytical engine around 1842. Built out of gears and rotating shafts, the analytical engine was the first true general-purpose numerical computer, the ancestor of the modern electronic computer. But the idea of using a step-by-step procedure in a machine is at least as old as Jacquard's punched cards, which were used to control the first automated loom in 1801. The Persian mathematician Muhammad ibn Mūsā al-Khwārizmī is credited with having invented the algorithm in 825 AD as a problem-solving method. The word algorithm derives from 'al-Khwārizmī.'
Why The Experts Are Wrong
Turing's Baby
Early computer scientists of the twentieth century were all trained mathematicians. They viewed the computer primarily as a tool with which to solve mathematical problems written in an algorithmic format. Indeed, the very name computer implies the ability to perform a calculation and return a result. Soon after the introduction of electronic computers in the 1950s, scientists fell in love with the ideas of famed British computer and artificial intelligence pioneer, Alan Turing. According to Turing, to be computable, a problem has to be executable on an abstract computer called the universal Turing machine (UTM). As everyone knows, a UTM (an infinitely long tape with a movable read/write head) is the quintessential algorithmic computer, a direct descendant of Lovelace's sequential stored program. It did not take long for the Turing computability model (TCM) to become the de facto religion of the entire computer industry.
A Fly in the Ointment
The UTM is a very powerful abstraction because it is perfectly suited to the automation of all sorts of serial tasks for problem solving. Lovelace and Babbage would have been delighted, but Turing's critics could argue that the UTM, being a sequential computer, cannot be used to simulate real-world problems which require multiple simultaneous computations. Turing's advocates could counter that the UTM is an idealized computer and, as such, can be imagined as having infinite read/write speed. The critics could then point out that, idealized or not, an infinitely fast computer introduces all sorts of logical/temporal headaches since all computations are performed simultaneously, making it unsuitable for inherently sequential problems. As the saying goes, you cannot have your cake and eat it too. At the very least, the TCM should have been extended to include both sequential and concurrent processes. However, having an infinite number of tapes and an infinite number of heads that can move from one tape to another would destroy the purity of the UTM ideal.
The Hidden Nature of Computing
The biggest problem with the UTM is not so much that it cannot be adapted to certain real-world parallel applications but that it hides the true nature of computing. Most students of computer science will recognize that a computer program is, in reality, a behaving machine (BM). That is to say, a program is an automaton that detects changes in its environment and effects changes in it. As such, it belongs in the same class of machines as biological nervous systems and integrated circuits. A basic universal behaving machine (UBM) consists, on the one hand, of a couple of elementary behaving entities (a sensor and an effector) or actors and, on the other, of an environment (a variable).
Universal Behaving Machine
  Actors: Sensor, Effector
  Environment: Variable
More complex UBMs consist of arbitrarily large numbers of actors and environmental variables. This computing model, which I have dubbed the behavioral computing model (BCM), is a radical departure from the TCM. Whereas a UTM is primarily a calculation tool for solving algorithmic problems, a UBM is simply an agent that reacts to one or more environmental stimuli. In order for a UBM to act on and react to its environment, its sensors and effectors must be able to communicate with each other.
The main point of this argument is that, even though communication is an essential part of the nature of computing, this is not readily apparent from examining a UTM. Indeed, there are no signaling entities, no signals and no signal pathways on a Turing tape or in computer memory. The reason is that, unlike hardware objects which are directly observable, software entities are virtual and must be logically inferred.
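To make this concrete, here is a minimal sketch of a UBM in Python: one environment variable, one sensor that detects changes to it, and one effector that acts back on it. The class and method names are illustrative assumptions for this article, not part of any COSA specification.

    class Environment:
        def __init__(self):
            self.variable = 0

    class Sensor:
        """Detects a change in the environment and signals its listeners."""
        def __init__(self, env):
            self.env = env
            self.last = env.variable
            self.listeners = []

        def sense(self):
            if self.env.variable != self.last:        # the change is the event
                self.last = self.env.variable
                for listener in self.listeners:
                    listener.signal()                 # sensor -> effector signal

    class Effector:
        """Reacts to a signal by acting back on the environment."""
        def __init__(self, env):
            self.env = env

        def signal(self):
            self.env.variable = 0                     # e.g., reset what it watches

    env = Environment()
    sensor, effector = Sensor(env), Effector(env)
    sensor.listeners.append(effector)                 # the two actors communicate

    env.variable = 5                                  # an environmental stimulus
    sensor.sense()                                    # detection triggers the effector
    print(env.variable)                               # -> 0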
Fateful Choice
Unfortunately for the world, it did not occur to early computer scientists that a program is, at its core, a tightly integrated collection of communicating entities interacting with each other and with their environment. As a result, the computer industry had no choice but to embrace a method of software construction that sees the computer simply as a tool for the execution of instruction sequences. The problem with this approach is that it forces the programmer to explicitly identify and resolve a number of critical communication-related issues that, ideally, should have been implicitly and automatically handled at the system level. The TCM is now so ingrained in the collective mind of the software engineering community that most programmers do not even recognize these issues as having anything to do with either communication or behavior. This would not be such a bad thing except that a programmer cannot possibly be relied upon to resolve all the dependencies of a complex software application during a normal development cycle. Worse, given the inherently messy nature of algorithmic software, there is no guarantee that they can be completely resolved. This is true even if one had an unlimited amount of time to work on it. The end result is that software applications become less predictable and less stable as their complexity increases.
Emulation vs. Simulation
It can be convincingly argued that the UBM described above should have been adopted as the proper basis of software engineering from the very beginning of the modern computer era. Note that, whereas a UBM can be used to simulate a UTM, a UTM cannot be used to simulate a UBM. The reason is that a UBM is synchronous (**) by nature, that is to say, more than two of its constituent objects can communicate simultaneously. In a UTM, by contrast, only two objects can communicate at a time: a predecessor and a successor. The question is, given that all modern computers use a von Neumann (UTM-compatible) architecture, can such a computer be used to emulate (as opposed to simulate) a synchronous system? An even more important question is this: Is an emulation of a synchronous system adequate for the purpose of resolving the communication issues mentioned in the previous paragraph? As explained below, the answer to both questions is a resounding yes.
Turing's Monster
It is tempting to speculate that, had it not been for our early infatuation with the sanctity of the TCM, we might not be in the sorry mess that we are in today. Software engineers have had to deal with defective software from the very beginning. Computer time was expensive and, as was the practice in the early days, a programmer had to reserve access to a computer days and sometimes weeks in advance. So programmers found themselves spending countless hours meticulously scrutinizing program listings in search of bugs. By the mid 1970s, as software systems grew in complexity and applicability, people in the business began to talk of a reliability crisis. Innovations such as high-level languages, structured and/or object-oriented programming did little to solve the reliability problem. Turing's baby had quickly grown into a monster.
Vested Interest
Software reliability experts (such as the folks at Cigital) have a vested interest in seeing that the crisis lasts as long as possible. It is their raison d'être. Computer scientists and software engineers love Dr. Brooks' ideas because an insoluble software crisis affords them a well-paying job and a lifetime career as reliability engineers. Not that these folks do not bring worthwhile advances to the table. They do. But looking for a breakthrough solution that will produce Brooks' order-of-magnitude improvement in reliability and productivity is not on their agenda. They adamantly deny that such a breakthrough is even possible. Brooks' paper is their new testament and 'no silver bullet' their mantra. Worst of all, most of them are sincere in their convictions.
This attitude (pathological denial) has the unfortunate effect of prolonging the crisis. Most of the burden of ensuring the reliability of software now rests squarely on the programmer's shoulders. An entire reliability industry has sprouted, with countless experts and tool vendors touting various labor-intensive engineering recipes, theories and practices. But more than thirty years after people began to refer to the problem as a crisis, it is worse than ever. As the Technology Review article points out, the cost has been staggering.
There Is a Silver Bullet After All
Reliability is best understood in terms of complexity vs. defects. A program consisting of one thousand lines of code is generally more complex and less reliable than one with a hundred lines of code. Due to its sheer astronomical complexity, the human brain is the most reliable behaving system in the world. Its reliability is many orders of magnitude greater than that of any complex program in existence (see devil's advocate). Any software application with the complexity of the brain would be so riddled with bugs as to be unusable. Conversely, given their low relative complexity, any software application with the reliability of the brain would almost never fail. Imagine how complex it is to be able to recognize someone's face under all sorts of lighting conditions, velocities and orientations. Just driving a car around town (taxi drivers do it all day long, every day) without getting lost or into an accident is incredibly more complex than anything any software program in existence can accomplish. Sure, brains make mistakes, but the things that they do are so complex, especially the myriads of little things that we are oblivious to, that the mistakes pale in comparison to the successes. And when they do make mistakes, it is usually due to physical reasons (e.g., sickness, intoxication, injuries, genetic defects, etc...) or to external circumstances beyond their control (e.g., they did not know). Mistakes are rarely the result of defects in the brain's existing software.
The brain is proof that the reliability of a behaving system (which is what a computer program is) does not have to be inversely proportional to its complexity, as is the case with current software systems. In fact, the more complex the brain gets (as it learns), the more reliable it becomes. But the brain is not the only proof that we have of the existence of a silver bullet. We all know of the amazing reliability of integrated circuits. No one can seriously deny that a modern CPU is a very complex device, what with some of the high-end chips from Intel, AMD and others sporting hundreds of millions of transistors. Yet, in all the years that I have owned and used computers, only once did a CPU fail on me, and it was because its cooling fan stopped working. This seems to be the norm with integrated circuits in general: when they fail, it is almost always due to a physical fault and almost never to a defect in the logic. Moore's law does not seem to have had a deleterious effect on hardware reliability since, to my knowledge, the reliability of CPUs and other large-scale integrated circuits did not degrade over the years as they increased in speed and complexity.
Deconstructing Brooks' Complexity Arguments
Frederick Brooks' arguments fall apart in one important area. Although Brooks' conclusion is correct as far as the unreliability of complex algorithmic software is concerned, it is correct for the wrong reason. I argue that software programs are unreliable not because they are complex (Brooks' conclusion), but because they are algorithmic in nature. In his paper, Brooks defines two types of complexity, essential and accidental. He writes:
The complexity of software is an essential property, not an accidental one.
According to Brooks, one can control the accidental complexity of software engineering (with the help of compilers, syntax and buffer overflow checkers, data typing, etc...), but one can do nothing about its essential complexity. Brooks then explains why he thinks this essential complexity leads to unreliability:
From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability.
This immediately raises several questions: Why must the essential complexity of software automatically lead to unreliability? Why is this not also true of the essential complexity of other types of behaving systems? In other words, is the complexity of a brain or an integrated circuit any less essential than that of a software program? Brooks is mum on these questions even though he acknowledges in the same paper that the reliability and productivity problem has already been solved in hardware through large-scale integration.
More importantly, notice the specific claim that Brooks is making. He asserts that the unreliability of a program comes from the difficulty of enumerating and/or understanding all the possible states of the program. This is an often repeated claim in the software engineering community but it is fallacious nonetheless. It overlooks the fact that it is equally difficult to enumerate all the possible states of a complex hardware system. This is especially true if one considers that most such systems consist of many integrated circuits that interact with one another in very complex ways. Yet, in spite of this difficulty, hardware systems are orders of magnitude more robust than software systems (see the COSA Reliability Principle for more on this subject).
Brooks backs up his assertion with neither logic nor evidence. But even more disturbing, nobody in the ensuing years has bothered to challenge the validity of the claim. Rather, Brooks has been elevated to the status of a demigod in the software engineering community and his ideas on the causes of software unreliability are now bandied about as infallible dogma.
Targeting the Wrong Complexity
Obviously, whether essential or accidental, complexity is not, in and of itself, conducive to unreliability. There is something inherent in the nature of our software that makes it prone to failure, something that has nothing to do with complexity per se. Note that, when Brooks speaks of software, he has a particular type of software in mind:
The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions.
By software, Brooks specifically means algorithmic software, the type of software which is coded in every computer in existence. Just like Alan Turing before him, Brooks fails to see past the algorithmic model. He fails to realize that the unreliability of software comes from not understanding the true nature of computing. It has nothing to do with the difficulty of enumerating all the states of a program. In the remainder of this article, I will argue that all the effort in time and money being spent on making software more reliable is being targeted at the wrong complexity, that of algorithmic software. And it is a particularly insidious and intractable form of complexity, one which humanity, fortunately, does not have to live with. Switch to the right complexity and the problem will disappear.
The Billion Dollar Question
The billion (trillion?) dollar question is: What is it about the brain and integrated circuits that makes them so much more reliable in spite of their essential complexity? But even more important, can we emulate it in our software? If the answer is yes, then we have found the silver bullet.
The Silver Bullet
Why Software Is Bad
Algorithmic software is unreliable because of the following reasons:
Brittleness
An algorithm is not unlike a chain. Break a link and the entire chain is broken. As a result, algorithmic programs tend to suffer from catastrophic failures even in situations where the actual defect is minor and globally insignificant.
Temporal Inconsistency
With algorithmic software it is virtually impossible to guarantee the timing of various processes because the execution times of subroutines vary unpredictably. They vary mainly because of a construct called ‘conditional branching’, a necessary decision mechanism used in instruction sequences. But that is not all. While a subroutine is being executed, the calling program goes into a coma. The use of threads and message passing between threads does somewhat alleviate the problem but the multithreading solution is way too coarse and unwieldy to make a difference in highly complex applications. And besides, a thread is just another algorithm. The inherent temporal uncertainty (from the point of view of the programmer) of algorithmic systems leads to program decisions happening at the wrong time, under the wrong conditions.
Unresolved Dependencies
The biggest contributing factor to unreliability in software has to do with unresolved dependencies. In an algorithmic system, the enforcement of relationships among data items (part of what Brooks defines as the essence of software) is solely the responsibility of the programmer. That is to say, every time a property is changed by a statement or a subroutine, it is up to the programmer to remember to update every other part of the program that is potentially affected by the change. The problem is that relationships can be so numerous and complex that programmers often fail to resolve them all.
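As a concrete (and hypothetical) illustration of such an unresolved dependency, consider the toy class below: whenever 'radius' changes, both 'area' and 'circumference' must be recomputed, and nothing in the language forces the programmer to remember that.

    import math

    class Circle:
        def __init__(self, radius):
            self.radius = radius
            self.area = math.pi * radius ** 2
            self.circumference = 2 * math.pi * radius

        def set_radius(self, radius):
            self.radius = radius
            self.area = math.pi * radius ** 2
            # Bug: the programmer forgot to update 'circumference' here,
            # so it silently goes stale -- an unresolved dependency.

    c = Circle(1.0)
    c.set_radius(2.0)
    print(c.area)            # correct: ~12.57
    print(c.circumference)   # stale:   ~6.28 (should be ~12.57)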
Why Hardware is Good
Brains and integrated circuits are, by contrast, parallel signal-based systems. Their reliability is due primarily to three reasons:
Strict Enforcement of Signal Timing through Synchronization
Neurons fire at the right time, under the right temporal conditions. Timing is consistent because of the brain's synchronous architecture (**). A similar argument can be made with regard to integrated circuits.
Distributed Concurrent Architecture
Since every element runs independently and synchronously, the localized malfunctions of a few (or even many) elements will not cause the catastrophic failure of the entire system.
Automatic Resolution of Event Dependencies
A signal-based synchronous system makes it possible to automatically resolve event dependencies. That is to say, every change in a system's variable is immediately and automatically communicated to every object that depends on it.
Programs as Communication Systems
Although we are not accustomed to thinking of it as such, a computer program is, in reality, a communication system. During execution, every statement or instruction in an algorithmic procedure essentially sends a signal to the next statement, saying: 'I'm done, now it's your turn.' A statement should be seen as an elementary object having a single input and a single output. It waits for an input signal, does something, and then sends an output signal to the next object. Multiple objects are linked together to form a one-dimensional (single-path) sequential chain. The problem is that, in an algorithm, communication is limited to only two objects at a time, a sender and a receiver. Consequently, even though there may be forks (conditional branches) along the way, a signal may only take one path at a time.
My thesis is that this mechanism is too restrictive and leads to unreliable software. Why? Because there are occasions when a particular event or action must be communicated to several objects simultaneously. This is known as an event dependency. Algorithmic development environments make it hard to attach orthogonal signaling branches to a sequential thread and therein lies the problem. The burden is on the programmer to remember to add code to handle delayed reaction cases: something that occurred previously in the procedure needs to be addressed at the earliest opportunity by another part of the program. Every so often we either forget to add the necessary code (usually, a call to a subroutine) or we fail to spot the dependency.
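The chain model described above can be sketched in a few lines. In the toy code below (names invented for illustration), each statement object has exactly one successor, so notifying several objects of the same event requires the programmer to wire in extra calls by hand.

    class Statement:
        def __init__(self, action, successor=None):
            self.action = action          # what this statement does
            self.successor = successor    # the single object it signals next

        def signal(self):
            self.action()
            if self.successor:            # "I'm done, now it's your turn."
                self.successor.signal()

    # A one-dimensional chain: s1 -> s2 -> s3.
    s3 = Statement(lambda: print("statement 3"))
    s2 = Statement(lambda: print("statement 2"), successor=s3)
    s1 = Statement(lambda: print("statement 1"), successor=s2)
    s1.signal()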
Event Dependencies and the Blind Code Problem
The state of a system at any given time is defined by the collection of properties (variables) that comprise the system's data, including the data contained in input/output registers. The relationships or dependencies between properties determine the system's behavior. A dependency simply means that a change in one property (also known as an event) must be followed by a change in one or more related properties. In order to ensure flawless and consistent behavior, it is imperative that all dependencies are resolved during development and are processed in a timely manner during execution. It takes intimate knowledge of an algorithmic program to identify and remember all the dependencies. Due to the large turnover in the software industry, programmers often inherit strange legacy code which aggravates the problem. Still, even good familiarity is not a guarantee that all dependencies will be spotted and correctly handled. Oftentimes, a program is so big and complex that its original authors completely lose sight of old dependencies. Blind code leads to wrong assumptions which often result in unexpected and catastrophic failures. The problem is so pervasive and so hard to fix that most managers in charge of maintaining complex mission-critical software systems will try to find alternative ways around a bug that do not involve modifying the existing code.
The Cure For Blind Code
To cure code blindness, all objects in a program must, in a sense, have eyes in the back of their heads. What this means is that every event (a change in a data variable) occurring anywhere in the program must be detected and promptly communicated to every object that depends on it. The cure consists of three remedies, as follows:
Automatic Resolution of Event Dependencies
The problem of unresolved dependencies can be easily solved in a change-driven system through the use of a technique called dynamic pairing whereby change detectors (comparison sensors) are associated with related operators (effectors). This way, the development environment can automatically identify and resolve every dependency between sensors and effectors, leaving nothing to chance.
One-to-many Connectivity
One of the factors contributing to blind code in algorithmic systems is the inability to attach one-to-many orthogonal branches to a thread. This problem is non-existent in a synchronous system because every signal can be channeled through as many pathways as necessary. As a result, every change to a property is immediately broadcast to every object that is affected by the change.
Immediacy
During the processing of any element in an algorithmic sequence, all the other elements in the sequence are disabled. Thus, any change or event that may require the immediate attention of either preceding or succeeding elements in the chain is ignored. Latency is a major problem in conventional programs. By contrast, immediacy is an inherent characteristic of synchronous systems.
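The three remedies can be sketched together. The change-driven cell below pairs a comparison sensor with any number of registered effectors and broadcasts every change to all of them immediately; it is a rough illustration of the idea under assumed names, not the COSA mechanism itself. Compare it with the Circle example earlier: here the dependency is declared once and can never be forgotten at a call site.

    class ChangeDrivenCell:
        """A variable that broadcasts every change to its registered dependents."""
        def __init__(self, value):
            self._value = value
            self._dependents = []                 # one-to-many connectivity

        def bind(self, effector):
            self._dependents.append(effector)     # pairing done once, at build time

        def set(self, value):
            if value != self._value:              # the comparison sensor
                self._value = value
                for effector in self._dependents:
                    effector(value)               # immediate, automatic notification

        def get(self):
            return self._value

    radius = ChangeDrivenCell(1.0)
    area = ChangeDrivenCell(3.14159)
    radius.bind(lambda r: area.set(3.14159 * r * r))  # the dependency, stated once

    radius.set(2.0)
    print(area.get())   # updated automatically: ~12.57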
Software Design vs. Hardware Design
All the good things that are implicit and taken for granted in hardware logic design become explicit and a constant headache for the algorithmic software designer. The blindness problem that afflicts conventional software simply does not exist in electronic circuits. The reason is that hardware is inherently synchronous. This makes it easy to add orthogonal branches to a circuit. Signals are thus promptly dispatched to every element or object that depends on them. Furthermore, whereas sensors (comparison operators) in software must be explicitly associated with relevant effectors and invoked at the right time, hardware sensors are self-processing. That is, a hardware sensor works independently of the causes of the phenomenon (change) it is designed to detect. As a result, barring a physical failure, it is impossible for a hardware system to fail to notice an event.
By contrast, in software, sensors must be explicitly processed in order for a change to be detected. The result of a comparison operation is likely to be useless unless the operator is called at the right time, i.e., immediately after or concurrent with the change. As mentioned previously, in a complex software system, programmers often fail to update all relevant sensors after a change in a property. Is it any wonder that logic circuits are so much more reliable than software programs?
As Jiantao Pan points out in his excellent paper on software reliability, "hardware faults are mostly physical faults, while software faults are design faults, which are harder to visualize, classify, detect, and correct." This raises the question: why can't software engineers do what hardware designers do? In other words, why can't software designers design software the same way hardware designers design hardware? (Note that, by hardware design, I mean the design of the hardware's logic). When hardware fails, it is almost always due to some physical malfunction, and almost never to a problem in the underlying logic. Since software has no physical faults, only design faults, by adopting the synchronous reactive model of hardware logic design we can bring software reliability to a level at least on a par with that of hardware. Fortunately for software engineering, all the advantages of hardware can also be made intrinsic to software. And it can be done in a manner that is completely transparent to the programmer.
Thinking of Everything
When it comes to safety-critical applications such as air traffic control or avionics software systems, even a single defect is not an option since it is potentially catastrophic. Unless we can guarantee that our programs are logically consistent and completely free of defects, the reliability problem will not go away. In other words, extremely reliable software is just not good enough. What we need is 100% reliable software. There is no getting around this fact.
Jeff Voas, a leading proponent of the 'there is no silver bullet' movement and a co-founder of Cigital, a software-reliability consulting firm in Dulles, VA, once said that "it's the things that you never thought of that get you every time." It is true that one cannot think of everything, especially when working with algorithmic systems. However, it is also true that a signal-based, synchronous program can be put together in such a way that all internal dependencies and incompatibilities are spotted and resolved automatically, thus relieving the programmer of the responsibility to think of them all. In addition, since all conditions to which the program is designed to react are explicit, they can all be tested automatically before deployment. Guaranteed bug-free software is an essential aspect of the COSA Project and the COSA operating system. Refer to the COSA Reliability Principle for more on this topic.
Addendum (3/5/2006): The COSA software model makes it possible to automatically find design inconsistencies in a complex program based on temporal constraints. There is a simple method that will ensure that a complex software system is free of internal logical contradictions. With this method, it is possible to increase design correctness simply by increasing complexity. The consistency mechanism can find all temporal constraints in a complex program automatically, while the program is running. The application designer is given the final say as to whether or not any discovered constraint is retained.
Normally, logical consistency is inversely proportional to complexity. The COSA software model introduces the rather counterintuitive notion that higher complexity is conducive to greater consistency. The reason is that both complexity and consistency increase with the number of constraints without necessarily adding to the system's functionality. Any new functionality will be forced to be compatible with the existing constraints while adding new constraints of its own, thereby increasing design correctness and application robustness. Consequently, there is no limit to how complex our future software systems will be. Eventually, time permitting, I will add a special page to the site to explain the constraint discovery mechanism, as it is a crucial part of the COSA model.
Plug-Compatible Components
Many have suggested that we should componentize computer programs in the hope of doing for software what integrated circuits did for hardware. Indeed, componentization is a giant step in the right direction but, even though the use of software components (e.g., Microsoft's ActiveX® controls, Java beans, C++ objects, etc...) in the last decade has automated much of the pain out of programming, the reliability problem is still with us. The reason should be obvious: software components are constructed with things that are utterly alien to a hardware IC designer: algorithms. Also, a thoroughly tested algorithmic component may work fine in one application but fail in another. The reason is that its temporal behavior is not consistent; it varies from one environment to another. This problem does not exist in a synchronous model, making it ideal as a platform for components.
Another known reason for bad software has to do with compatibility. In the brain, signal pathways are not connected willy-nilly. Connections are made according to their types. Refer, for example, to the retinotopic mapping of the visual cortex: signals from a retinal ganglion cell ultimately reach a specific neuron in the visual cortex, all the way in the back of the brain. This is accomplished via a biochemical identification mechanism during the brain's early development. It is a way of enforcing compatibility between connected parts of the brain. We should follow nature's example and use a strict typing mechanism in our software in order to ensure compatibility between communicating objects. All message connectors should have unique message types, and all connectors should be unidirectional, i.e., they should be either male (sender) or female (receiver). This will eliminate mix-ups and ensure robust connectivity. The use of libraries of pre-built components will automate over 90% of the software development process and turn everyday users into software developers. These plug-compatible components should snap together automatically: just click, drag and drop. Thus the burden of assuring compatibility is the responsibility of the development system, not the programmer.
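As a rough sketch of such strict typing, the toy connectors below are unidirectional and refuse to plug together unless their message types match; the class names and methods are assumptions made for this example, not an existing API.

    class FemaleConnector:                        # receiver end
        def __init__(self, message_type, handler):
            self.message_type = message_type
            self.handler = handler

    class MaleConnector:                          # sender end
        def __init__(self, message_type):
            self.message_type = message_type
            self.receiver = None

        def plug_into(self, female):
            if female.message_type != self.message_type:
                raise TypeError("incompatible connectors: "
                                f"{self.message_type} -> {female.message_type}")
            self.receiver = female                # unidirectional: sender -> receiver

        def send(self, payload):
            self.receiver.handler(payload)

    display = FemaleConnector("temperature", lambda t: print(f"{t} C"))
    probe = MaleConnector("temperature")
    probe.plug_into(display)                      # snaps together: the types match
    probe.send(21.5)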
Some may say that typed connectors are not new and they are correct. Objects that communicate via connectors have indeed been tried before, and with very good results. However, as mentioned earlier, in a pure signal-based system, objects do not contain algorithms. Calling a function in a C++ object is not the same as sending a typed signal to a synchronous component. The only native (directly executable) algorithmic code that should exist in the entire system is a small microkernel. No new algorithmic code should be allowed since the microkernel runs everything. Furthermore, the underlying parallelism and the signaling mechanism should be implemented and enforced at the operating system level in such a way as to be completely transparent to the software designer. (Again, see the COSA Operating System for more details on this topic).
Event Ordering Is Critical
Consistent timing is vital to reliability but the use of algorithms plays havoc with event ordering. To ensure consistency, the prescribed scheduling of every operation or action in a software application must be maintained throughout the life of the application, regardless of the host environment. Nothing should be allowed to happen before or after its time. In a signal-based, synchronous software development environment, the enforcement of order must be deterministic in the sense that every reaction must be triggered by precise, predetermined and explicit conditions. Luckily, this is not something that developers need to be concerned with because it is a natural consequence of the system's parallelism. Note that the term 'consistent timing' does not mean that operations must be synchronized to a real time clock (although they may). It means that the prescribed logical or relative order of operations must be enforced automatically and maintained throughout the life of the system.
Von Neumann Architecture
The astute reader may point out that the synchronous nature of hardware cannot be truly duplicated in software because the latter is inherently sequential due to the von Neumann architecture of our computers. This is true but, thanks to the high speed of modern processors, we can easily emulate (although not truly simulate) the parallelism of integrated circuits in software. This is not new. We already emulate nature's parallelism in our artificial neural networks, cellular automata, computer spreadsheets, video games and other types of applications consisting of large numbers of entities running concurrently. The technique is simple: Essentially, within any given processing cycle or frame interval, a single fast central processor does the work of many small virtual processors residing in memory.
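A sketch of that technique, under assumed names: a single sequential loop first delivers the previous cycle's signals to every virtual cell and then lets every cell react, so within a cycle the relative order of updates does not matter and all reactions are effectively simultaneous.

    class Cell:
        def __init__(self, react):
            self.react = react                    # how this cell responds to its inputs
            self.inbox, self.pending = [], []     # this cycle's inputs / next cycle's inputs

        def send(self, target, msg):
            target.pending.append(msg)            # delivered at the start of the next cycle

    def run(cells, cycles):
        for _ in range(cycles):
            for c in cells:                       # 1. deliver last cycle's signals
                c.inbox, c.pending = c.pending, []
            for c in cells:                       # 2. every cell reacts "at once"
                c.react(c)

    # Example: cell a signals cell b once per cycle.
    b = Cell(lambda self: [print("b got", m) for m in self.inbox])
    a = Cell(lambda self: self.send(b, "tick"))
    run([a, b], cycles=3)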
One may further argue that in an emulated parallel system, the algorithms are still there even if they are not visible to the developer, and that therefore, the unreliability of algorithmic software cannot be avoided. This would be true if unreliability were due to the use of a single algorithm or even a handful of them. This is neither what is observed in practice nor what is being claimed in this article. It is certainly possible to create one or more flawless algorithmic procedures. We do it all the time. The unreliability comes from the unbridled proliferation of procedures, the unpredictability of their interactions, and the lack of a surefire method with which to manage and enforce dependencies (see the blind code discussion above).
As mentioned previously, in a synchronous software system, no new algorithmic code is ever allowed. The only pure algorithm in the entire system is a small, highly optimized and thoroughly tested execution kernel which is responsible for emulating the system's parallelism. The strict prohibition against the deployment of new algorithmic code effectively guarantees that the system will remain stable.
Software ICs with a Twist
In a 1995 article titled "What if there's a Silver Bullet..." Dr. Brad Cox wrote the following:
Building applications (rack-level modules) solely with tightly-coupled technologies like subroutine libraries (block-level modules) is logically equivalent to wafer-scale integration, something that hardware engineering can barely accomplish to this day. So seven years ago, Stepstone began to play a role analogous to the silicon chip vendors, providing chip-level software components, or Software-ICs[TM], to the system-building community.
While I agree with the use of modules for software composition, I take issue with Dr. Cox's analogy, primarily because subroutine libraries have no analog in integrated circuit design. The biggest difference between hardware and conventional software is that the former operates in a synchronous, signal-based universe where timing is systematic and consistent, whereas the latter uses algorithmic procedures which result in haphazard timing.
Achieving true logical equivalence between software and hardware necessitates a signal-based, synchronous software model. In other words, software should not be radically different than hardware. Rather, it should serve as an extension to it. It should emulate the functionality of hardware by adding only what is lacking: flexibility and ease of modification. In the future, when we develop technologies for non-von Neumann computers that can sprout new physical signal pathways and new self-processing objects on the fly, the operational distinction between software and hardware will no longer be valid.
As an aside, it is my hope that the major IC manufacturers (Intel, AMD, Motorola, Texas Instruments, Sun Microsystems, etc...) will soon recognize the importance of synchronous software objects and produce highly optimized CPUs designed specifically for this sort of parallelism. This way, the entire execution kernel could be made to reside on the CPU chip. This would not only completely eliminate the need for algorithmic code in program memory but would result in unparalleled performance. See the description of the COSA Operating System Kernel for more on this.
Failure Localization
An algorithmic program is like a chain, and like a chain, it is only as strong as its weakest link. Break any link and the entire chain is broken. This brittleness can be somewhat alleviated by the use of multiple parallel threads. A malfunctioning thread usually does not affect the proper functioning of the other threads. Failure localization is a very effective way to increase a system's fault tolerance. But the sad reality is that, even though threaded operating systems are the norm in the software industry, our systems are still susceptible to catastrophic failures. Why? The answer is that threads do not entirely eliminate algorithmic coding. They encapsulate algorithms into concurrent programs running on the same computer. Another even more serious problem with threads is that they are, by necessity, asynchronous. Synchronous processing (in which all elementary operations have equal durations and are synchronized to a common clock) is a must for reliability.
Threads can also carry a heavy price because of the performance overhead associated with context switching. Increasing the number of threads in a system so as to encapsulate and parallelize elementary operations quickly becomes unworkable. The performance hit would be tremendous. Fortunately, there is a simple parallelization technique that does away with threads altogether. It is commonly used in such applications as cellular automata, neural networks, and other simulation-type programs. See the COSA Operating System for more details.
Boosting Productivity
The notion that the computer is merely a machine for the execution of instruction sequences is a conceptual disaster. The computer should be seen as a behaving system, i.e., a collection of synchronously interacting objects. The adoption of a synchronous model will improve productivity by several orders of magnitude for the following reasons:
Visual Software Composition
The synchronous model lends itself superbly to a graphical development environment for software composition. It is much easier to grasp the meaning of a few well-defined icons than to decipher dozens of keywords in a language which may not even be one's own. It takes less mental effort to follow signal activation pathways on a diagram than to unravel someone's obscure algorithmic code spread over multiple files. The application designer can get a better feel for the flow of things because every signal propagates from one object to another along a unidirectional pathway. A drag-and-drop visual composition environment not only automates a large part of software development, it also eliminates the usual chaos of textual environments by effectively hiding away any information that lies below the current level of abstraction. For more information, see Software Composition in COSA.
Complementarity
One of the greatest impediments to software productivity is the intrinsic messiness of algorithmic software. Although the adoption of structured code and object-oriented programming in the last century was a significant improvement, one could never quite achieve a true sense of order and completeness. There is a secure satisfaction one gets from a finished puzzle in which every element fits perfectly. This sort of order is a natural consequence of what I call the principle of complementarity. Nothing brings order into chaos like complementarity. Fortunately, the synchronous model is an ideal environment for an organizational approach which is strictly based on complementarity. Indeed, complementarity is the most important of the basic principles underlying Project COSA.
Fewer Bugs
The above gains will be due to a marked increase in clarity and comprehensibility. But what will drastically boost productivity is the smaller number of bugs to fix. It is common knowledge that most of the average programmer's development time is spent in testing and debugging. The use of snap-together components (click, drag and drop) will automate a huge part of the development process while preventing and eliminating all sorts of problems associated with incompatible components. In addition, development environments will contain debugging tools that will find, correct and prevent all internal design bugs automatically. A signal-based, synchronous environment will facilitate safe, automated software development and will open up computer programming to the lay public.
Conclusion
Slaying the Werewolf
Unreliable software is the most urgent issue facing the computer industry. Reliable software is critical to the safety, security and prosperity of the modern computerized world. Software has become too much a part of our everyday lives to be entrusted to the vagaries of an archaic and hopelessly flawed paradigm. We need a new approach based on a rock-solid foundation, an approach worthy of the twenty-first century. And we need it desperately! We simply cannot afford to continue doing business as usual. Frederick Brooks is right about one thing: there is indeed no silver bullet that can solve the reliability problem of complex algorithmic systems. But what Brooks and others fail to consider is that his arguments apply only to the complexity of algorithmic software, not to that of behaving systems in general. In other words, the werewolf is not complexity per se but algorithmic software. The bullet should be used to slay the beast once and for all, not to alleviate the symptoms of its incurable illness.
Rotten at the Core
In conclusion, we can solve the software reliability and productivity crisis. To do so, we must acknowledge that there is something rotten at the core of software engineering. We must understand that using the algorithm as the basis of computer programming is the last of the stumbling blocks that are preventing us from achieving an effective and safe componentization of software comparable to what has been done in hardware. It is the reason that current quality control measures will always fail in the end. To solve the crisis, we must adopt a synchronous, signal-based software model. Only then will our software programs be guaranteed free of defects, irrespective of their complexity.
Next: Project COSA
* This is not to say that algorithmic solutions are bad or that they should not be used, but that the algorithm should not be the basis of software construction. A purely algorithmic procedure is one in which communication is restricted to only two elements or statements at a time. In a non-algorithmic system, the number of elements that can communicate simultaneously is only limited by physical factors.
** A synchronous system is one in which all objects are active at the same time. This does not mean that all signals must be generated simultaneously. It means that every object reacts to its related events immediately, i.e., without delay. The end result is that the timing of reactions is deterministic.
©2004-2006 Louis Savain
Copy and distribute freely
Abstract: There is something fundamentally wrong with the way we create software. Contrary to conventional wisdom, unreliability is not an essential characteristic of complex software programs. In this article, I will propose a silver bullet solution to the software reliability and productivity crisis. The solution will require a radical change in the way we program our computers. I will argue that the main reason that software is so unreliable and so hard to develop has to do with a custom that is as old as the computer: the practice of using the algorithm as the basis of software construction (*). I will argue further that moving to a signal-based, synchronous (**) software model will not only result in an improvement of several orders of magnitude in productivity, but also in programs that are guaranteed free of defects, regardless of their complexity.
Software Is Bad and Getting Worse
The 'No Silver Bullet' Syndrome
Not long ago, in an otherwise superb article [pdf] on the software reliability crisis published by MIT Technology Review, the author blamed the problem on everything from bad planning and business decisions to bad programmers. The proposed solution: bring in the lawyers. Not once did the article mention that the computer industry's fundamental approach to software construction might be flawed. The reason for this omission has to do in part with a highly influential paper that was published in 1987 by a now famous computer scientist named Frederick P. Brooks. In the paper, titled "No Silver Bullet--Essence and Accidents of Software Engineering", Dr. Brooks writes:
But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity....
Not only are there no silver bullets now in view, the very nature of software makes it unlikely that there will be any--no inventions that will do for software productivity, reliability, and simplicity what electronics, transistors, and large-scale integration did for computer hardware.
No other paper in the annals of software engineering has had a more detrimental effect on humanity's efforts to find a solution to the software reliability crisis. Almost single-handedly, it succeeded in convincing the entire software development community that there is no hope in trying to find a solution. It is a rather unfortunate chapter in the history of programming. Untold billions of dollars and even human lives have been and will be wasted as a result.
When Brooks wrote his famous paper, he apparently did not realize that his arguments applied only to algorithmic complexity. Most people in the software engineering community wrongly assume that algorithmic software is the only possible type of software. Non-algorithmic or synchronous reactive software is similar to the signal-based model used in electronic circuits. It is, by its very nature, extremely stable and much easier to manage. This is evident in the amazing reliability of integrated circuits. See Targeting the Wrong Complexity below.
Calling in the lawyers and hiring more software experts schooled in an ancient paradigm will not solve the problem. It will only be costlier and, in the end, deadlier. The reason is threefold. First, the complexity and ubiquity of software continue to grow unabated. Second, the threat of lawsuits means that the cost of software development will skyrocket (lawyers, experts and trained engineers do not work for beans). Third, the incremental stop-gap measures offered by the experts are not designed to get to the heart of the problem. They are designed to provide short-term relief at the expense of keeping the experts employed. In the meantime, the crisis continues.
Ancient Paradigm
Why ancient paradigm? Because the root cause of the crisis is as old as Lady Ada Lovelace, who devised the sequential stored program (or table of instructions) for Charles Babbage's analytical engine around 1842. Built out of gears and rotating shafts, the analytical engine was the first true general-purpose numerical computer, the ancestor of the modern electronic computer. But the idea of using a step-by-step procedure in a machine is at least as old as Jacquard's punched cards, which were used to control the first automated loom in 1801. The Persian mathematician Muhammad ibn Mūsā al-Khwārizmī is credited with codifying the algorithm as a problem-solving method around 825 AD. The word algorithm derives from 'al-Khwārizmī.'
Why The Experts Are Wrong
Turing's Baby
Early computer scientists of the twentieth century were all trained mathematicians. They viewed the computer primarily as a tool with which to solve mathematical problems written in an algorithmic format. Indeed, the very name computer implies the ability to perform a calculation and return a result. Soon after the introduction of electronic computers in the 1950s, scientists fell in love with the ideas of famed British computer and artificial intelligence pioneer, Alan Turing. According to Turing, to be computable, a problem has to be executable on an abstract computer called the universal Turing machine (UTM). As everyone knows, a UTM (an infinitely long tape with a movable read/write head) is the quintessential algorithmic computer, a direct descendent of Lovelace's sequential stored program. It did not take long for the Turing computability model (TCM) to become the de facto religion of the entire computer industry.
A Fly in the Ointment
The UTM is a very powerful abstraction because it is perfectly suited to the automation of all sorts of serial tasks for problem solving. Lovelace and Babbage would have been delighted, but Turing's critics could argue that the UTM, being a sequential computer, cannot be used to simulate real-world problems which require multiple simultaneous computations. Turing's advocates could counter that the UTM is an idealized computer and, as such, can be imagined as having infinite read/write speed. The critics could then point out that, idealized or not, an infinitely fast computer introduces all sorts of logical/temporal headaches since all computations are performed simultaneously, making it unsuitable to inherently sequential problems. As the saying goes, you cannot have your cake and eat it too. At the very least, the TCM should have been extended to include both sequential and concurrent processes. However, having an infinite number of tapes and an infinite number of heads that can move from one tape to another would destroy the purity of the UTM ideal.
The Hidden Nature of Computing
The biggest problem with the UTM is not so much that it cannot be adapted to certain real-world parallel applications but that it hides the true nature of computing. Most students of computer science will recognize that a computer program is, in reality, a behaving machine (BM). That is to say, a program is an automaton that detects changes in its environment and effects changes in it. As such, it belongs in the same class of machines as biological nervous systems and integrated circuits. A basic universal behaving machine (UBM) consists, on the one hand, of a couple of elementary behaving entities (a sensor and an effector) or actors and, on the other, of an environment (a variable).
Universal Behaving Machine
Actors: Sensor, Effector
Environment: Variable
More complex UBMs consist of arbitrarily large numbers of actors and environmental variables. This computing model, which I have dubbed the behavioral computing model (BCM), is a radical departure from the TCM. Whereas a UTM is primarily a calculation tool for solving algorithmic problems, a UBM is simply an agent that reacts to one or more environmental stimuli. As seen in the figure below, in order for a UBM to act on and react to its environment, sensors and effectors must be able to communicate with each other.
The main point of this argument is that, even though communication is an essential part of the nature of computing, this is not readily apparent from examining a UTM. Indeed, there are no signaling entities, no signals and no signal pathways on a Turing tape or in computer memory. The reason is that, unlike hardware objects which are directly observable, software entities are virtual and must be logically inferred.
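To make the sensor/effector/variable picture concrete, here is a minimal sketch in Python of a UBM with one environmental variable, one comparison sensor and one effector. The class names and the wiring below are illustrative assumptions made for this example only, not part of any COSA specification.

# Minimal, illustrative UBM: one environment variable, one comparison
# sensor, one effector. Every change to the variable is signaled
# immediately to the sensors that watch it.

class Variable:
    def __init__(self, value=0):
        self.value = value
        self.watchers = []            # sensors that react to changes

    def set(self, new_value):
        if new_value != self.value:
            self.value = new_value
            for sensor in self.watchers:
                sensor.detect(self)   # change is communicated without delay

class Sensor:
    """Detects a change in its environment and signals its effectors."""
    def __init__(self, condition):
        self.condition = condition
        self.targets = []

    def detect(self, variable):
        if self.condition(variable.value):
            for effector in self.targets:
                effector.act()

class Effector:
    """Acts on the environment when signaled."""
    def __init__(self, action):
        self.action = action

    def act(self):
        self.action()

# Wiring: the sensor watches the variable; the effector reacts to the sensor.
temperature = Variable(20)
alarm = Effector(lambda: print("too hot"))
too_hot = Sensor(lambda v: v > 30)
too_hot.targets.append(alarm)
temperature.watchers.append(too_hot)

temperature.set(35)   # prints "too hot"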
Fateful Choice
Unfortunately for the world, it did not occur to early computer scientists that a program is, at its core, a tightly integrated collection of communicating entities interacting with each other and with their environment. As a result, the computer industry had no choice but to embrace a method of software construction that sees the computer simply as a tool for the execution of instruction sequences. The problem with this approach is that it forces the programmer to explicitly identify and resolve a number of critical communication-related issues that, ideally, should have been implicitly and automatically handled at the system level. The TCM is now so ingrained in the collective mind of the software engineering community that most programmers do not even recognize these issues as having anything to do with either communication or behavior. This would not be such a bad thing except that a programmer cannot possibly be relied upon to resolve all the dependencies of a complex software application during a normal development cycle. Worse, given the inherently messy nature of algorithmic software, there is no guarantee that they can be completely resolved. This is true even if one had an unlimited amount of time to work on it. The end result is that software applications become less predictable and less stable as their complexity increases.
Emulation vs. Simulation
It can be convincingly argued that the UBM described above should have been adopted as the proper basis of software engineering from the very beginning of the modern computer era. Note that, whereas a UBM can be used to simulate a UTM, a UTM cannot be used to simulate a UBM. The reason is that a UBM is synchronous (**) by nature, that is to say, more than two of its constituent objects can communicate simultaneously. In a UTM, by contrast, only two objects can communicate at a time: a predecessor and a successor. The question is, given that all modern computers use a von Neumann (UTM-compatible) architecture, can such a computer be used to emulate (as opposed to simulate) a synchronous system? An even more important question is this: Is an emulation of a synchronous system adequate for the purpose of resolving the communication issues mentioned in the previous paragraph? As explained below, the answer to both questions is a resounding yes.
Turing's Monster
It is tempting to speculate that, had it not been for our early infatuation with the sanctity of the TCM, we might not be in the sorry mess that we are in today. Software engineers have had to deal with defective software from the very beginning. Computer time was expensive and, as was the practice in the early days, a programmer had to reserve access to a computer days and sometimes weeks in advance. So programmers found themselves spending countless hours meticulously scrutinizing program listings in search of bugs. By the mid 1970s, as software systems grew in complexity and applicability, people in the business began to talk of a reliability crisis. Innovations such as high-level languages, structured and/or object-oriented programming did little to solve the reliability problem. Turing's baby had quickly grown into a monster.
Vested Interest
Software reliability experts (such as the folks at Cigital) have a vested interest in seeing that the crisis lasts as long as possible. It is their raison d'être. Computer scientists and software engineers love Dr. Brooks' ideas because an insoluble software crisis affords them a well-paying job and a lifetime career as reliability engineers. Not that these folks do not bring worthwhile advances to the table. They do. But looking for a breakthrough solution that will produce Brooks' order-of-magnitude improvement in reliability and productivity is not on their agenda. They adamantly deny that such a breakthrough is even possible. Brooks' paper is their new testament and 'no silver bullet' their mantra. Worst of all, most of them are sincere in their convictions. This attitude (pathological denial) has the unfortunate effect of prolonging the crisis. Most of the burden of ensuring the reliability of software now rests squarely on the programmer's shoulders. An entire reliability industry has sprouted, with countless experts and tool vendors touting various labor-intensive engineering recipes, theories and practices. But more than thirty years after people began to refer to the problem as a crisis, it is worse than ever. As the Technology Review article points out, the cost has been staggering.
There Is a Silver Bullet After All
Reliability is best understood in terms of complexity vs. defects. A program consisting of one thousand lines of code is generally more complex and less reliable than one with a hundred lines of code. Due to its sheer astronomical complexity, the human brain is the most reliable behaving system in the world. Its reliability is many orders of magnitude greater than that of any complex program in existence (see devil's advocate). Any software application with the complexity of the brain would be so riddled with bugs as to be unusable. Conversely, given its relatively low complexity, any software application with the reliability of the brain would almost never fail. Imagine how complex it is to be able to recognize someone's face under all sorts of lighting conditions, velocities and orientations. Just driving a car around town (taxi drivers do it all day long, every day) without getting lost or into an accident is incredibly more complex than anything any software program in existence can accomplish. Sure, brains make mistakes, but the things that they do are so complex, especially the myriad little things that we are oblivious to, that the mistakes pale in comparison to the successes. And when they do make mistakes, it is usually due to physical causes (e.g., sickness, intoxication, injuries, genetic defects, etc.) or to external circumstances beyond their control (e.g., they did not know). Mistakes are rarely the result of defects in the brain's existing software.

The brain is proof that the reliability of a behaving system (which is what a computer program is) does not have to be inversely proportional to its complexity, as is the case with current software systems. In fact, the more complex the brain gets (as it learns), the more reliable it becomes. But the brain is not the only proof that we have of the existence of a silver bullet. We all know of the amazing reliability of integrated circuits. No one can seriously deny that a modern CPU is a very complex device, with some of the high-end chips from Intel, AMD and others sporting hundreds of millions of transistors. Yet, in all the years that I have owned and used computers, only once did a CPU fail on me, and that was because its cooling fan stopped working. This seems to be the norm with integrated circuits in general: when they fail, it is almost always due to a physical fault and almost never to a defect in the logic. Moore's law does not seem to have had a deleterious effect on hardware reliability since, to my knowledge, the reliability of CPUs and other large-scale integrated circuits has not degraded over the years as they have increased in speed and complexity.
Deconstructing Brooks' Complexity Arguments
Frederick Brooks' arguments fall apart in one important area. Although Brooks' conclusion is correct as far as the unreliability of complex algorithmic software is concerned, it is correct for the wrong reason. I argue that software programs are unreliable not because they are complex (Brooks' conclusion), but because they are algorithmic in nature. In his paper, Brooks defines two types of complexity, essential and accidental. He writes:
The complexity of software is an essential property, not an accidental one.
According to Brooks, one can control the accidental complexity of software engineering (with the help of compilers, syntax and buffer overflow checkers, data typing, etc...), but one can do nothing about its essential complexity. Brooks then explains why he thinks this essential complexity leads to unreliability:
From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability.
This immediately begs several questions: Why must the essential complexity of software automatically lead to unreliability? Why is this not also true of the essential complexity of other types of behaving systems? In other words, is the complexity of a brain or an integrated circuit any less essential than that of a software program? Brooks is mum on these questions even though he acknowledges in the same paper that the reliability and productivity problem has already been solved in hardware through large-scale integration.
More importantly, notice the specific claim that Brooks is making. He asserts that the unreliability of a program comes from the difficulty of enumerating and/or understanding all the possible states of the program. This is an often repeated claim in the software engineering community but it is fallacious nonetheless. It overlooks the fact that it is equally difficult to enumerate all the possible states of a complex hardware system. This is especially true if one considers that most such systems consist of many integrated circuits that interact with one another in very complex ways. Yet, in spite of this difficulty, hardware systems are orders of magnitude more robust than software systems (see the COSA Reliability Principle for more on this subject).
Brooks backs up his assertion with neither logic nor evidence. But even more disturbing, nobody in the ensuing years has bothered to challenge the validity of the claim. Rather, Brooks has been elevated to the status of a demigod in the software engineering community and his ideas on the causes of software unreliability are now bandied about as infallible dogma.
Targeting the Wrong Complexity
Obviously, whether essential or accidental, complexity is not, in and of itself, conducive to unreliability. There is something inherent in the nature of our software that makes it prone to failure, something that has nothing to do with complexity per se. Note that, when Brooks speaks of software, he has a particular type of software in mind:
The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions.
By software, Brooks specifically means algorithmic software, the type of software which is coded in every computer in existence. Just like Alan Turing before him, Brooks fails to see past the algorithmic model. He fails to realize that the unreliability of software comes from not understanding the true nature of computing. It has nothing to do with the difficulty of enumerating all the states of a program. In the remainder of this article, I will argue that all the effort in time and money being spent on making software more reliable is being targeted at the wrong complexity, that of algorithmic software. And it is a particularly insidious and intractable form of complexity, one which humanity, fortunately, does not have to live with. Switch to the right complexity and the problem will disappear.
The Billion Dollar Question
The billion (trillion?) dollar question is: What is it about the brain and integrated circuits that makes them so much more reliable in spite of their essential complexity? But even more important, can we emulate it in our software? If the answer is yes, then we have found the silver bullet.
The Silver Bullet
Why Software Is Bad
Algorithmic software is unreliable because of the following reasons:
Brittleness
An algorithm is not unlike a chain. Break a link and the entire chain is broken. As a result, algorithmic programs tend to suffer from catastrophic failures even in situations where the actual defect is minor and globally insignificant.
Temporal Inconsistency
With algorithmic software it is virtually impossible to guarantee the timing of various processes because the execution times of subroutines vary unpredictably. They vary mainly because of a construct called ‘conditional branching’, a necessary decision mechanism used in instruction sequences. But that is not all. While a subroutine is being executed, the calling program goes into a coma. The use of threads and message passing between threads does somewhat alleviate the problem but the multithreading solution is way too coarse and unwieldy to make a difference in highly complex applications. And besides, a thread is just another algorithm. The inherent temporal uncertainty (from the point of view of the programmer) of algorithmic systems leads to program decisions happening at the wrong time, under the wrong conditions.
Unresolved Dependencies
The biggest contributing factor to unreliability in software has to do with unresolved dependencies. In an algorithmic system, the enforcement of relationships among data items (part of what Brooks defines as the essence of software) is solely the responsibility of the programmer. That is to say, every time a property is changed by a statement or a subroutine, it is up to the programmer to remember to update every other part of the program that is potentially affected by the change. The problem is that relationships can be so numerous and complex that programmers often fail to resolve them all.
Why Hardware is Good
Brains and integrated circuits are, by contrast, parallel signal-based systems. Their reliability is due primarily to three reasons:
Strict Enforcement of Signal Timing through Synchronization
Neurons fire at the right time, under the right temporal conditions. Timing is consistent because of the brain's synchronous architecture (**). A similar argument can be made with regard to integrated circuits.
Distributed Concurrent Architecture
Since every element runs independently and synchronously, the localized malfunctions of a few (or even many) elements will not cause the catastrophic failure of the entire system.
Automatic Resolution of Event Dependencies
A signal-based synchronous system makes it possible to automatically resolve event dependencies. That is to say, every change in a system's variable is immediately and automatically communicated to every object that depends on it.
Programs as Communication Systems
Although we are not accustomed to thinking of it as such, a computer program is, in reality, a communication system. During execution, every statement or instruction in an algorithmic procedure essentially sends a signal to the next statement, saying: 'I'm done, now it's your turn.' A statement should be seen as an elementary object having a single input and a single output. It waits for an input signal, does something, and then sends an output signal to the next object. Multiple objects are linked together to form a one-dimensional (single-path) sequential chain. The problem is that, in an algorithm, communication is limited to only two objects at a time, a sender and a receiver. Consequently, even though there may be forks (conditional branches) along the way, a signal may only take one path at a time.
My thesis is that this mechanism is too restrictive and leads to unreliable software. Why? Because there are occasions when a particular event or action must be communicated to several objects simultaneously. This is known as an event dependency. Algorithmic development environments make it hard to attach orthogonal signaling branches to a sequential thread and therein lies the problem. The burden is on the programmer to remember to add code to handle delayed reaction cases: something that occurred previously in the procedure needs to be addressed at the earliest opportunity by another part of the program. Every so often we either forget to add the necessary code (usually, a call to a subroutine) or we fail to spot the dependency.
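As a rough illustration of this restriction, here is a tiny Python sketch of an algorithm modeled as a chain of elementary objects, each handing its result to exactly one successor; the helper names run_chain and branch are assumptions made for this example only.

# Each statement 'signals' the next by handing over its result; at any
# moment only two objects are involved: a sender and a receiver.

def run_chain(statements, data):
    for statement in statements:
        data = statement(data)    # 'I'm done, now it's your turn.'
    return data

path_a = [lambda x: x + 1, lambda x: x * 2]
path_b = [lambda x: x - 1]

# A fork (conditional branch) still sends the signal down only one path.
def branch(data):
    return run_chain(path_a, data) if data > 0 else run_chain(path_b, data)

print(branch(10))   # 22: only path_a ever sees the signal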
Event Dependencies and the Blind Code Problem
The state of a system at any given time is defined by the collection of properties (variables) that comprise the system's data, including the data contained in input/output registers. The relationships or dependencies between properties determine the system's behavior. A dependency simply means that a change in one property (also known as an event) must be followed by a change in one or more related properties. In order to ensure flawless and consistent behavior, it is imperative that all dependencies are resolved during development and are processed in a timely manner during execution. It takes intimate knowledge of an algorithmic program to identify and remember all the dependencies. Due to the large turnover in the software industry, programmers often inherit strange legacy code which aggravates the problem. Still, even good familiarity is not a guarantee that all dependencies will be spotted and correctly handled. Oftentimes, a program is so big and complex that its original authors completely lose sight of old dependencies. Blind code leads to wrong assumptions which often result in unexpected and catastrophic failures. The problem is so pervasive and so hard to fix that most managers in charge of maintaining complex mission-critical software systems will try to find alternative ways around a bug that do not involve modifying the existing code.
The Cure For Blind Code
To cure code blindness, all objects in a program must, in a sense, have eyes in the back of their heads. What this means is that every event (a change in a data variable) occurring anywhere in the program must be detected and promptly communicated to every object that depends on it. The cure consists of three remedies, as follows (a short sketch after the third remedy illustrates them):
Automatic Resolution of Event Dependencies
The problem of unresolved dependencies can be easily solved in a change-driven system through the use of a technique called dynamic pairing whereby change detectors (comparison sensors) are associated with related operators (effectors). This way, the development environment can automatically identify and resolve every dependency between sensors and effectors, leaving nothing to chance.
One-to-many Connectivity
One of the factors contributing to blind code in algorithmic systems is the inability to attach one-to-many orthogonal branches to a thread. This problem is non-existent in a synchronous system because every signal can be channeled through as many pathways as necessary. As a result, every change to a property is immediately broadcast to every object that is affected by the change.
Immediacy
During the processing of any element in an algorithmic sequence, all the other elements in the sequence are disabled. Thus, any change or event that may require the immediate attention of either preceding or succeeding elements in the chain is ignored. Latency is a major problem in conventional programs. By contrast, immediacy is an inherent characteristic of synchronous systems.
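Here is a hedged Python sketch of how a change-driven system might pair each property with its dependent effectors and broadcast every event one-to-many, immediately; the Property class and its method names are illustrative assumptions, not taken from any COSA implementation.

# Illustrative "dynamic pairing": the environment records which effectors
# depend on a property, and each change is broadcast to all of them at once,
# so no dependency is left to the programmer's memory.

class Property:
    def __init__(self, name, value):
        self.name = name
        self.value = value
        self.dependents = []                 # effectors paired with this property

    def pair(self, effector):
        """The development environment, not the programmer, records the dependency."""
        self.dependents.append(effector)

    def set(self, new_value):
        if new_value != self.value:
            self.value = new_value
            # One-to-many, immediate broadcast: every dependent reacts now.
            for effector in self.dependents:
                effector(self.name, new_value)

def log_change(name, value):
    print(name, "changed to", value)

def update_display(name, value):
    print("display refreshed for", name)

altitude = Property("altitude", 10000)
altitude.pair(log_change)
altitude.pair(update_display)

altitude.set(9500)   # both dependents react automatically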
Software Design vs. Hardware Design
All the good things that are implicit and taken for granted in hardware logic design become explicit and a constant headache for the algorithmic software designer. The blindness problem that afflicts conventional software simply does not exist in electronic circuits. The reason is that hardware is inherently synchronous. This makes it easy to add orthogonal branches to a circuit. Signals are thus promptly dispatched to every element or object that depends on them. Furthermore, whereas sensors (comparison operators) in software must be explicitly associated with relevant effectors and invoked at the right time, hardware sensors are self-processing. That is, a hardware sensor works independently of the causes of the phenomenon (change) it is designed to detect. As a result, barring a physical failure, it is impossible for a hardware system to fail to notice an event.
By contrast, in software, sensors must be explicitly processed in order for a change to be detected. The result of a comparison operation is likely to be useless unless the operator is called at the right time, i.e., immediately after or concurrent with the change. As mentioned previously, in a complex software system, programmers often fail to update all relevant sensors after a change in a property. Is it any wonder that logic circuits are so much more reliable than software programs?
As Jiantao Pan points out in his excellent paper on software reliability, "hardware faults are mostly physical faults, while software faults are design faults, which are harder to visualize, classify, detect, and correct." This raises the question: why can't software engineers do what hardware designers do? In other words, why can't software designers design software the same way hardware designers design hardware? (Note that, by hardware design, I mean the design of the hardware's logic.) When hardware fails, it is almost always due to some physical malfunction, and almost never to a problem in the underlying logic. Since software has no physical faults, only design faults, then by adopting the synchronous reactive model of hardware logic design, we can bring software reliability to a level at least on a par with that of hardware. Fortunately for software engineering, all the advantages of hardware can also be made intrinsic to software. And it can be done in a manner that is completely transparent to the programmer.
Thinking of Everything
When it comes to safety-critical applications such as air traffic control or avionics software systems, even a single defect is not an option since it is potentially catastrophic. Unless we can guarantee that our programs are logically consistent and completely free of defects, the reliability problem will not go away. In other words, extremely reliable software is just not good enough. What we need is 100% reliable software. There is no getting around this fact.
Jeff Voas, a leading proponent of the 'there is no silver bullet' movement and a co-founder of Cigital, a software-reliability consulting firm in Dulles, VA, once said that "it's the things that you never thought of that get you every time." It is true that one cannot think of everything, especially when working with algorithmic systems. However, it is also true that a signal-based, synchronous program can be put together in such a way that all internal dependencies and incompatibilities are spotted and resolved automatically, thus relieving the programmer of the responsibility to think of them all. In addition, since all conditions to which the program is designed to react are explicit, they can all be tested automatically before deployment. Guaranteed bug-free software is an essential aspect of the COSA Project and the COSA operating system. Refer to the COSA Reliability Principle for more on this topic.
Addendum (3/5/2006): The COSA software model makes it possible to automatically find design inconsistencies in a complex program based on temporal constraints. There is a simple method that will ensure that a complex software system is free of internal logical contradictions. With this method, it is possible to increase design correctness simply by increasing complexity. The consistency mechanism can find all temporal constraints in a complex program automatically, while the program is running. The application designer is given the final say as to whether or not any discovered constraint is retained.
Normally, logical consistency is inversely proportional to complexity. The COSA software model introduces the rather counterintuitive notion that higher complexity is conducive to greater consistency. The reason is that both complexity and consistency increase with the number of constraints without necessarily adding to the system's functionality. Any new functionality will be forced to be compatible with the existing constraints while adding new constraints of its own, thereby increasing design correctness and application robustness. Consequently, there is no limit to how complex our future software systems will be. Eventually, time permitting, I will add a special page to the site to explain the constraint discovery mechanism, as it is a crucial part of the COSA model.
Plug-Compatible Components
Many have suggested that we should componentize computer programs in the hope of doing for software what integrated circuits did for hardware. Indeed, componentization is a giant step in the right direction but, even though the use of software components (e.g., Microsoft's ActiveX® controls, Java beans, C++ objects, etc.) in the last decade has automated much of the pain out of programming, the reliability problem is still with us. The reason should be obvious: software components are constructed with things that are utterly alien to a hardware IC designer: algorithms. Also, a thoroughly tested algorithmic component may work fine in one application but fail in another. The reason is that its temporal behavior is not consistent; it varies from one environment to another. This problem does not exist in a synchronous model, making it ideal as a platform for components.
Another known reason for bad software has to do with compatibility. In the brain, signal pathways are not connected willy-nilly. Connections are made according to their types. Refer, for example, to the retinotopic mapping of the visual cortex: signals from a retinal ganglion cell ultimately reach a specific neuron in the visual cortex, all the way in the back of the brain. This is accomplished via a biochemical identification mechanism during the brain's early development. It is a way of enforcing compatibility between connected parts of the brain. We should follow nature's example and use a strict typing mechanism in our software in order to ensure compatibility between communicating objects. All message connectors should have unique message types, and all connectors should be unidirectional, i.e., they should be either male (sender) or female (receiver). This will eliminate mix-ups and ensure robust connectivity. The use of libraries of pre-built components will automate over 90% of the software development process and turn everyday users into software developers. These plug-compatible components should snap together automatically: just click, drag and drop. Thus the burden of assuring compatibility is the responsibility of the development system, not the programmer.
Some may say that typed connectors are not new, and they are correct. Objects that communicate via connectors have indeed been tried before, and with very good results. However, as mentioned earlier, in a pure signal-based system, objects do not contain algorithms. Calling a function in a C++ object is not the same as sending a typed signal to a synchronous component. The only native (directly executable) algorithmic code that should exist in the entire system is a small microkernel. No new algorithmic code should be allowed, since the microkernel runs everything. Furthermore, the underlying parallelism and the signaling mechanism should be implemented and enforced at the operating system level in such a way as to be completely transparent to the software designer. (Again, see the COSA Operating System for more details on this topic.)
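A minimal Python sketch of typed, unidirectional (male/female) connectors might look like the following; the class names and the compatibility check are illustrative assumptions rather than any published COSA interface.

# Typed, one-way connectors: a sender (male) can only be plugged into a
# receiver (female) that declares exactly the same message type. The
# development system, not the programmer, enforces compatibility.

class FemaleConnector:
    """Receives messages of exactly one declared type."""
    def __init__(self, message_type, handler):
        self.message_type = message_type
        self.handler = handler

class MaleConnector:
    """Sends messages of exactly one declared type."""
    def __init__(self, message_type):
        self.message_type = message_type
        self.receiver = None

    def plug_into(self, female):
        if female.message_type != self.message_type:
            raise TypeError("incompatible connectors: "
                            + self.message_type + " vs " + female.message_type)
        self.receiver = female

    def send(self, payload):
        if self.receiver is not None:
            self.receiver.handler(payload)

out_port = MaleConnector("TemperatureReading")
in_port = FemaleConnector("TemperatureReading", lambda t: print("received", t))
out_port.plug_into(in_port)   # snaps together because the types match
out_port.send(21.5)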
Event Ordering Is Critical
Consistent timing is vital to reliability but the use of algorithms plays havoc with event ordering. To ensure consistency, the prescribed scheduling of every operation or action in a software application must be maintained throughout the life of the application, regardless of the host environment. Nothing should be allowed to happen before or after its time. In a signal-based, synchronous software development environment, the enforcement of order must be deterministic in the sense that every reaction must be triggered by precise, predetermined and explicit conditions. Luckily, this is not something that developers need to be concerned with because it is a natural consequence of the system's parallelism. Note that the term 'consistent timing' does not mean that operations must be synchronized to a real time clock (although they may). It means that the prescribed logical or relative order of operations must be enforced automatically and maintained throughout the life of the system.
Von Neumann Architecture
The astute reader may point out that the synchronous nature of hardware cannot be truly duplicated in software because the latter is inherently sequential due to the von Neumann architecture of our computers. This is true but, thanks to the high speed of modern processors, we can easily emulate (although not truly simulate) the parallelism of integrated circuits in software. This is not new. We already emulate nature's parallelism in our artificial neural networks, cellular automata, computer spreadsheets, video games and other types of applications consisting of large numbers of entities running concurrently. The technique is simple: Essentially, within any given processing cycle or frame interval, a single fast central processor does the work of many small virtual processors residing in memory.
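As a rough sketch of that technique, the following Python fragment emulates many small virtual processors (here, simple cells) with a single sequential loop, using two buffers per frame so that all cells appear to update at the same time; the cell rule is an arbitrary example chosen for illustration.

# Frame-based emulation of parallelism: within each cycle, one sequential
# processor updates every virtual cell. Reads come from the current buffer
# and writes go to the next buffer, so no cell sees a neighbor's update
# until the following frame (synchronous semantics).

def run_frames(cells, rule, frames):
    for _ in range(frames):
        next_cells = [rule(cells, i) for i in range(len(cells))]
        cells = next_cells
    return cells

def average_with_neighbors(cells, i):
    left = cells[i - 1] if i > 0 else cells[i]
    right = cells[i + 1] if i < len(cells) - 1 else cells[i]
    return (left + cells[i] + right) / 3.0

state = [0.0, 0.0, 9.0, 0.0, 0.0]
print(run_frames(state, average_with_neighbors, frames=3))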
One may further argue that in an emulated parallel system, the algorithms are still there even if they are not visible to the developer, and that therefore, the unreliability of algorithmic software cannot be avoided. This would be true if unreliability were due to the use of a single algorithm or even a handful of them. This is neither what is observed in practice nor what is being claimed in this article. It is certainly possible to create one or more flawless algorithmic procedures. We do it all the time. The unreliability comes from the unbridled proliferation of procedures, the unpredictability of their interactions, and the lack of a surefire method with which to manage and enforce dependencies (see the blind code discussion above).
As mentioned previously, in a synchronous software system, no new algorithmic code is ever allowed. The only pure algorithm in the entire system is a small, highly optimized and thoroughly tested execution kernel which is responsible for emulating the system's parallelism. The strict prohibition against the deployment of new algorithmic code effectively guarantees that the system will remain stable.
Software ICs with a Twist
In a 1995 article titled "What if there's a Silver Bullet..." Dr. Brad Cox wrote the following:
Building applications (rack-level modules) solely with tightly-coupled technologies like subroutine libraries (block-level modules) is logically equivalent to wafer-scale integration, something that hardware engineering can barely accomplish to this day. So seven years ago, Stepstone began to play a role analogous to the silicon chip vendors, providing chip-level software components, or Software-ICs[TM], to the system-building community.
While I agree with the use of modules for software composition, I take issue with Dr. Cox's analogy, primarily because subroutine libraries have no analog in integrated circuit design. The biggest difference between hardware and conventional software is that the former operates in a synchronous, signal-based universe where timing is systematic and consistent, whereas the latter uses algorithmic procedures which result in haphazard timing.
Achieving true logical equivalence between software and hardware necessitates a signal-based, synchronous software model. In other words, software should not be radically different from hardware. Rather, it should serve as an extension to it. It should emulate the functionality of hardware by adding only what is lacking: flexibility and ease of modification. In the future, when we develop technologies for non-von Neumann computers that can sprout new physical signal pathways and new self-processing objects on the fly, the operational distinction between software and hardware will no longer be valid.
As an aside, it is my hope that the major IC manufacturers (Intel, AMD, Motorola, Texas Instruments, Sun Microsystems, etc...) will soon recognize the importance of synchronous software objects and produce highly optimized CPUs designed specifically for this sort of parallelism. This way, the entire execution kernel could be made to reside on the CPU chip. This would not only completely eliminate the need for algorithmic code in program memory but would result in unparalleled performance. See the description of the COSA Operating System Kernel for more on this.
Failure Localization
An algorithmic program is like a chain and, like a chain, it is only as strong as its weakest link. Break any link and the entire chain is broken. This brittleness can be somewhat alleviated by the use of multiple parallel threads. A malfunctioning thread usually does not affect the proper functioning of the other threads. Failure localization is a very effective way to increase a system's fault tolerance. But the sad reality is that, even though threaded operating systems are the norm in the software industry, our systems are still susceptible to catastrophic failures. Why? The answer is that threads do not entirely eliminate algorithmic coding. They merely encapsulate algorithms into concurrent programs running on the same computer. Another, even more serious problem with threads is that they are, by necessity, asynchronous. Synchronous processing (in which all elementary operations have equal durations and are synchronized to a common clock) is a must for reliability. Threads can also carry a heavy price because of the performance overhead associated with context switching. Increasing the number of threads in a system so as to encapsulate and parallelize elementary operations quickly becomes unworkable; the performance hit would be tremendous. Fortunately, there is a simple parallelization technique that does away with threads altogether. It is commonly used in such applications as cellular automata, neural networks, and other simulation-type programs. See the COSA Operating System for more details.
Boosting Productivity
The notion that the computer is merely a machine for the execution of instruction sequences is a conceptual disaster. The computer should be seen as a behaving system, i.e., a collection of synchronously interacting objects. The adoption of a synchronous model will improve productivity by several orders of magnitude for the following reasons:
Visual Software Composition
The synchronous model lends itself superbly to a graphical development environment for software composition. It is much easier to grasp the meaning of a few well-defined icons than to decipher dozens of keywords in a language which may not even be one's own. It takes less mental effort to follow signal activation pathways on a diagram than to unravel someone's obscure algorithmic code spread over multiple files. The application designer can get a better feel for the flow of things because every signal propagates from one object to another along a unidirectional pathway. A drag-and-drop visual composition environment not only automates a large part of software development, it also eliminates the usual chaos of textual environments by effectively hiding away any information that lies below the current level of abstraction. For more information, see Software Composition in COSA.
Complementarity
One of the greatest impediments to software productivity is the intrinsic messiness of algorithmic software. Although the adoption of structured code and object-oriented programming in the last century was a significant improvement, one could never quite achieve a true sense of order and completeness. There is a secure satisfaction one gets from a finished puzzle in which every element fits perfectly. This sort of order is a natural consequence of what I call the principle of complementarity. Nothing brings order into chaos like complementarity. Fortunately, the synchronous model is an ideal environment for an organizational approach which is strictly based on complementarity. Indeed, complementarity is the most important of the basic principles underlying Project COSA.
Fewer Bugs
The above gains will be due to a marked increase in clarity and comprehensibility. But what will drastically boost productivity is the smaller number of bugs to fix. It is common knowledge that the average programmer's development time is spent mostly in testing and debugging. The use of snap-together components (click, drag and drop) will automate a huge part of the development process while preventing and eliminating all sorts of problems associated with incompatible components. In addition, development environments will contain debugging tools that will find, correct and prevent all the internal design bugs automatically. A signal-based, synchronous environment will facilitate safe, automated software development and will open up computer programming to the lay public.
Conclusion
Slaying the Werewolf
Unreliable software is the most urgent issue facing the computer industry. Reliable software is critical to the safety, security and prosperity of the modern computerized world. Software has become too much a part of our everyday lives to be entrusted to the vagaries of an archaic and hopelessly flawed paradigm. We need a new approach based on a rock-solid foundation, an approach worthy of the twenty-first century. And we need it desperately! We simply cannot afford to continue doing business as usual. Frederick Brooks is right about one thing: there is indeed no silver bullet that can solve the reliability problem of complex algorithmic systems. But what Brooks and others fail to consider is that his arguments apply only to the complexity of algorithmic software, not to that of behaving systems in general. In other words, the werewolf is not complexity per se but algorithmic software. The bullet should be used to slay the beast once and for all, not to alleviate the symptoms of its incurable illness.
Rotten at the Core
In conclusion, we can solve the software reliability and productivity crisis. To do so, we must acknowledge that there is something rotten at the core of software engineering. We must understand that using the algorithm as the basis of computer programming is the last of the stumbling blocks that are preventing us from achieving an effective and safe componentization of software comparable to what has been done in hardware. It is the reason that current quality control measures will always fail in the end. To solve the crisis, we must adopt a synchronous, signal-based software model. Only then will our software programs be guaranteed free of defects, irrespective of their complexity.
Next: Project COSA
* This is not to say that algorithmic solutions are bad or that they should not be used, but that the algorithm should not be the basis of software construction. A purely algorithmic procedure is one in which communication is restricted to only two elements or statements at a time. In a non-algorithmic system, the number of elements that can communicate simultaneously is only limited by physical factors.
** A synchronous system is one in which all objects are active at the same time. This does not mean that all signals must be generated simultaneously. It means that every object reacts to its related events immediately, i.e., without delay. The end result is that the timing of reactions is deterministic.
©2004-2006 Louis Savain
Copy and distribute freely
1985 to 1989: No silver bullet
For decades, solving the software crisis was paramount to researchers and companies producing software tools. Seemingly, they trumpeted every new technology and practice from the 1970s to the 1990s as a silver bullet to solve the software crisis. Tools, discipline, formal methods, process, and professionalism were touted as silver bullets:
Tools: Tools were especially emphasized: structured programming, object-oriented programming, CASE tools, Ada, Java, documentation, standards, and the Unified Modeling Language were all touted as silver bullets.
Discipline: Some pundits argued that the software crisis was due to the lack of discipline of programmers.
Formal methods: Some believed that if formal engineering methodologies were applied to software development, then production of software would become as predictable an industry as other branches of engineering. They advocated proving all programs correct.
Process: Many advocated the use of defined processes and methodologies like the Capability Maturity Model.
Professionalism: This led to work on a code of ethics, licenses, and professionalism.
In 1986, Fred Brooks published the No Silver Bullet article, arguing that no individual technology or practice would ever make a 10-fold improvement in productivity within 10 years.
Debate about silver bullets raged over the following decade. Advocates for Ada, components, and processes continued arguing for years that their favorite technology would be a silver bullet. Skeptics disagreed. Eventually, almost everyone accepted that no silver bullet would ever be found. Yet, claims about silver bullets pop up now and again, even today.
Some interpret no silver bullet to mean that software engineering failed. The search for a single key to success never worked. All known technologies and practices have only made incremental improvements to productivity and quality. Yet, there are no silver bullets for any other profession, either. Others interpret no silver bullet as proof that software engineering has finally matured and recognized that projects succeed due to hard work.
However, it could also be said that there are, in fact, a range of silver bullets today, including lightweight methodologies (see "Project management"), spreadsheet calculators, customized browsers, in-site search engines, database report generators, integrated design-test coding-editors with memory/differences/undo, and specialty shops that generate niche software, such as information websites, at a fraction of the cost of totally customized website development. Nevertheless, the field of software engineering appears too complex and diverse for a single "silver bullet" to improve most issues, and each issue accounts for only a small portion of all software problems.
Radar Stealth
http://www.airplanedesign.info/52-radar-stealth.htm
And now we get to what people really mean when they say “stealth”. This is the new technology that makes the latest combat aircraft look really funny. Why do they look that way? What is it that those shapes are trying to do?
To understand that, you need to first understand how radar works. In theory, radar works like visible light: a source shines it onto an object, the “illuminated” object reflects it towards a sensor, and the sensor picks up this reflection and can thus locate and identify the object. There is one big difference, though: with visible light, we are used to having many sources all over the place. Either it’s the sun and the sky and the things they illuminate, or it’s a bunch of light bulbs in at least a few spots in the room. With radar, however, there are no other sources, just the one by the sensor. This is equivalent to wandering around a football field on a moonless night with only a flashlight in your hand. Say I tell you that, somewhere on this field, there is a black pole sticking up a few feet from the ground, with a model airplane at the tip. How do you find it? You’ll probably sweep the flashlight beam slowly around you until you see it illuminate something that is not the ground.
If the model airplane is black, it will be harder for you to see it.
If the model airplane is shiny, it will reflect light from the flashlight… but it may well reflect all of your light away from you, in which case all you will see on it is a reflection of darkness, indistinguishable from the darkness behind the airplane. You will only spot it if it happens to reflect your light back towards you and you catch the glint of your flashlight on the shiny model.
Thus, to be stealthy to radar, an airplane can do two things: Absorb the radar reflection (be “black”), and reflect it away from the sender (be “shiny” and shaped in a way that does not reflect radar in every direction).
There is a third thing, but it’s a little trickier to understand. The airplane surface can reflect the radar signal twice, such that each reflection is half a wavelength out of phase with the other, so the reflections cancel out. That’s the basic idea behind noise-canceling headphones: they hear the sound outside and play the same sounds back to you half a wavelength out of phase, thus canceling much of the ambient noise. They can do this and play music at the same time, or they can just play the inverted external noise if you want some peace and quiet.
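As a rough numeric illustration (the 10 GHz frequency below is an example assumption, not a figure from this article): if the second reflection comes from a surface roughly a quarter of a wavelength behind the first, the extra round trip adds half a wavelength, so the two reflections tend to cancel.

# Quarter-wave depth for cancellation: a round trip through a layer that is
# a quarter wavelength deep adds half a wavelength of path difference.

SPEED_OF_LIGHT = 3.0e8          # meters per second

def quarter_wave_depth(frequency_hz):
    wavelength = SPEED_OF_LIGHT / frequency_hz
    return wavelength / 4.0

print(quarter_wave_depth(10e9))   # ~0.0075 m, about 7.5 mm at 10 GHz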
Of these three goals (reflecting radar away from the sender, absorbing the radar energy, and reflecting the radar waves in a way that cancels them out), reflecting radar away from the sender is by far the most important.
It is the thickness of your airplane’s skin, and the relationship between the skin and the internal components, that will determine whether you’ll reflect radar in such a way that it will cancel itself off.
It is the materials on your airplane skin that will either absorb the radar energy, reflect it, or let it pass through.
It is the shape of your airplane that will determine whether the radar gets reflected back to the sender or away from the sender.
Denys Overholser, a Lockheed mathematician and electrical engineer who had the brilliant idea of using a highly swept, wedge-like, faceted design for stealth (which eventually became the F-117), once famously said that there are four elements that are important in reducing the radar reflection of an airplane: “shape, shape, shape, and materials”. So you can guess which one of these really matters.
Before we go into how the skin shape, materials and thickness ought to be chosen, let me explain just a little bit about how radar reflections are described. In other words, how do you measure how “stealthy” a stealth plane really is?
Well, different shapes reflect radar in different ways when in different orientations. An airplane may reflect a lot of radar at you when seen from the side, but almost none when seen from the back, for example. So a small airplane seen from the side might return about as much radar as a large airplane seen from the back. Radar engineers and operators measure the radar return of a given airplane from a given angle in terms of its Radar Cross Section, or RCS. This is measured as an area, corresponding to the cross-sectional area of a sphere that returns the same amount of radar. So, if a 747 seen from the back has a Radar Cross Section of ten square meters, then it returns as much radar as a sphere with a cross-sectional area of ten square meters. (This sphere would be about 3.5 meters across.) However, from the side, the 747 may have an RCS of 100 square meters (equivalent to a sphere about 11.3 meters across). Seen from below, it might have an RCS of a few hundred square meters. I just made all these numbers up, but you get the idea (and they’re probably in the ballpark for a 747’s RCS).
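For the curious, the sphere sizes above follow from simple geometry: a sphere whose circular cross-section equals the stated RCS has a diameter of twice the square root of the RCS divided by pi. A quick check (illustrative only):

import math

def equivalent_sphere_diameter(rcs_m2):
    # Diameter of a sphere whose cross-sectional area equals the given RCS.
    return 2.0 * math.sqrt(rcs_m2 / math.pi)

print(equivalent_sphere_diameter(10))    # ~3.6 m, the "about 3.5 meters across"
print(equivalent_sphere_diameter(100))   # ~11.3 m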
So the challenge is to reduce the RCS from as many angles as possible, especially from the front. Reducing RCS from the nose-on angle is by far the most important, as that is the angle your enemy will see you from while you are on your way to bomb them.
The problem with reducing RCS is that the detection range varies with the fourth root of the RCS. This means that if a given radar can detect an airplane from 100 miles away, then reducing the RCS by half would mean this airplane can be detected by this radar from 84 miles away. You’d have to reduce the RCS by a factor of sixteen, down to 6.25% of what it was, before this radar could detect you from “only” 50 miles away – not much of an improvement as far as evading the radar goes, given that you had to reduce your RCS by a HUGE amount. You’d have to reduce your RCS to less than one percent of normal before you can fly with relative impunity past most radar systems. Many radar systems have a range of a few hundred miles. But to destroy them, you must get to within a few miles of them. To make this possible, RCS must be reduced by at least a factor of 10,000 – equivalent to taking an airplane the size of a large combat jet and having it return as little radar as a pigeon would.
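The fourth-root relationship is easy to check numerically; the short fragment below reproduces the figures used in this paragraph (the 100-mile baseline is the example radar range above):

# Detection range scales as the fourth root of RCS.

def detection_range(base_range, rcs_fraction):
    """Range for an RCS reduced to rcs_fraction of its original value."""
    return base_range * rcs_fraction ** 0.25

print(detection_range(100, 0.5))      # ~84 miles: halving the RCS barely helps
print(detection_range(100, 1 / 16))   # 50 miles: a 16x reduction halves the range
print(detection_range(100, 1e-4))     # 10 miles: a 10,000x reduction cuts it tenfold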
Like I said, there are three ways of going about reducing RCS: absorbing radar (skin material), reflecting radar waves that cancel themselves out (skin thickness, “depth” of internal components), and reflecting radar away from its origin (aircraft shape). Of these, the last is by far the most important. So let’s start with what is the least important, and today the least commonly used:
Radar-Absorbent Materials (RAM)
The idea of using RAM to evade radar detection dates almost as far back as the first widespread military use of radar, naturally. During World War 2, England developed a wide and effective radar network to protect itself from German ships and air attacks. Towards the end of the war, aircraft (British, German, and American) started carrying radar to find enemy ships and other aircraft. The Nazis figured out that, if a material absorbs radar the same way that black things absorb visible light, then an airplane covered in this material might be able to slip through British radar.
Whether or not a material absorbs radiation of a certain wavelength has to do with the energy levels of the electrons in its atoms, as well as with the masses and structures of the molecules those atoms form. If you find a material whose molecules can vibrate at frequencies similar to those of radar waves, and/or whose electrons can absorb quantities of energy similar to those carried by photons of radar radiation, there is a good chance this material will absorb radar. Carbon products were found to absorb radar well. In addition, the oscillating fields of a passing radar wave set up small magnetic fields in bits of iron, and driving those fields costs the wave energy, so many small bits of iron can soak up most of the radar energy that hits them. It turns out that small round particles coated with carbonyl ferrite (“iron balls”) are the best absorbers. They should be embedded in a dielectric material (usually a plastic like neoprene, possibly other materials) which slows down the radar waves and gives them more time to be absorbed.
The first use of RAM was to coat German submarine periscopes with the stuff. It is unknown how effective it was – the material covered what was already a very small target, one that was surrounded by waves a large fraction of the time, and thus almost impossible to detect with radar to begin with. The Horten brothers, designers of flying wings who were working on a twin-jet flying-wing bomber for the Nazis, suggested the use of this material on their bomber. The builders had developed a glue-like RAM made of adhesive, sawdust, and carbon, and the skin of the flying wing was to have been made of two layers of plywood sheet with the RAM glue sandwiched in-between. Only the first of three prototypes was finished by the time the war ended, and it did not have this RAM skin, although the second and third prototypes would have. These kinds of materials were later tested on Canberra bombers, but the RCS reduction was not substantial, so attempts at RAM use were eventually dropped in Europe.
When the US started investigating the possibility of building aircraft with radar-stealthy shapes in the 1970s, development of RAM was picked up again. The F-117, the first airplane designed with stealth as a primary design objective, was entirely covered with tiles made of a soft dielectric plastic (like neoprene), and these tiles had, embedded in them, round particles covered with carbonyl ferrite (“iron balls”). Any maintenance or damage to the aircraft meant that this soft plastic tile surface had to be repaired (patched) afterwards. Dielectric putties, tapes, and adhesive sheets (with the “iron balls” embedded in them) were developed to make this process easier. Eventually, a dielectric paint was developed that had the “iron balls” suspended in it, and this made the process even easier, although the solvent for this paint is extremely toxic.
More modern stealthy airplanes rely almost entirely on their shape for stealth. As you will see below, the shape of the airplane skin can be designed such that radar is almost always reflected away from the emitter. These more modern aircraft use RAM only in places where radar is necessarily reflected in many directions, such as edges (like the lip of the engine air intakes, the perimeter of the wing, etc), joints/cracks (like where control surfaces, access panels, landing gear doors, the canopy edge, or bomb bay doors, meet their fixed neighboring surfaces), and other non-smooth features. RAM can now be sprayed on, making the maintenance of RAM surfaces even easier.
Lastly, it should be noted that some of the radar energy does penetrate the aircraft skin into its internal structure. Most of it is reflected or absorbed, but not all – the remainder goes bouncing around inside the airplane, and eventually comes back out. Metal does tend to reflect radar very well. Because of this, structures made of composite materials (many of which are carbon-based or can have carbonyl ferrite easily embedded in them) are widely used in stealth aircraft, rather than the more usual titanium, steel, nickel, or aluminum. Metal components – such as the engines and any other necessarily-metal parts – must be surrounded by materials that absorb radar or reflect it away, or must be at the right depth under the skin so as to use wave cancellation to cancel out any outgoing radar reflections. Which brings us to…
Wave Cancellation
Given that some of the radar waves will penetrate the skin of the aircraft and possibly bounce back towards the radar emitter, how do you minimize the impact of these waves? Well, you try to have them bounce off an internal structure (like a second skin inside the first skin) that sits one-fourth of a radar wavelength below the surface of the airplane. When radar bounces off this internal skin and leaves the airplane, it meets up with the radar reflection off the real surface. But the path of the wave that reflected internally is half a wavelength longer than the path of the wave that bounced off the surface (one quarter of a wavelength in, one quarter of a wavelength out), which means the two reflections are now out of phase, and largely cancel each other out. That’s the theory, anyways.
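Here is a little Python sketch of that cancellation, just to make the half-wavelength bookkeeping concrete. I picked a 3-centimeter wavelength (roughly what a fighter’s X-band radar uses) and assumed the two reflections have equal strength, which real ones never quite do, so treat it as a toy model rather than a skin design.

# Toy demonstration of the quarter-wavelength idea. The two reflections are
# modeled as equal-amplitude cosine waves; the inner one travels an extra half
# wavelength (quarter in, quarter out), which flips its sign, so the sum is
# (ideally) zero. Equal amplitudes are an idealization.
import math

wavelength = 0.03            # metres, roughly X-band radar (~10 GHz), assumed
depth = wavelength / 4.0     # inner layer a quarter wavelength below the surface
extra_path = 2.0 * depth     # in and back out again

for x in [0.0, 0.25, 0.5, 0.75]:   # sample points along the wave, in wavelengths
    surface_wave = math.cos(2 * math.pi * x)
    inner_wave = math.cos(2 * math.pi * (x + extra_path / wavelength))
    print(f"x = {x:4.2f} wavelengths: surface {surface_wave:+.2f}, "
          f"inner {inner_wave:+.2f}, sum {surface_wave + inner_wave:+.2f}")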
The first attempt at radar stealth in the US used this approach. When the U-2 spyplanes were being flown over Russia in the late 1950s, it was known that the planes were being detected, and that they were not shot down only because they flew too high for a missile to hit them (a shrinking safety margin, one which eventually disappeared altogether). It was only a matter of time before a U-2 was shot down – Lockheed had known this since before the U-2 was even operational, and the Blackbird was designed to fix this (it took over the U-2’s overflight mission after less than 10 years). But until the Blackbird was finished, the engineers at Lockheed thought they could keep their U-2s from being detected by having them create a secondary radar reflection half a wavelength off the primary one. Under the top secret Project Rainbow, a U-2 was covered by a grid of thin but firm wires, held a quarter of a typical radar wavelength away from the skin of the airplane (and from each other) by non-conductive poles (originally bamboo, eventually fiberglass).
These U-2s came to be known as “dirty birds”. (An airplane in a “dirty” configuration has the landing gear, flaps, bay doors, weapons, or other things extended, adding to the drag). The wires certainly put a huge dent in the U-2’s stellar low-drag aerodynamics. And, it turns out, they did not make the U-2s stealthy: when the Soviet Union protested about the U-2 flights in early 1958, its report contained precise data about the “dirty birds’” flight paths. At that point, the Rainbow apparatus was removed and never used again.
When Lockheed started investigating RCS reduction, they developed a RAM coating made of a dielectric plastic with carbonyl-ferrite-covered particles embedded in it. They figured out that the dielectric plastic slowed down the radar waves. While radar waves have wavelengths ranging from around one inch to a few feet while going through the air, they were compressed inside the material: the longer waves were squeezed into a wavelength of a few inches, while the shorter ones became a small fraction of an inch long. The thin RAM coating would mostly absorb the shorter waves and would reflect them in such a way that they would cancel out, as its thickness is about one quarter the wavelength of the shorter waves. But what about the longer, more penetrating waves? Under the RAM, engineers placed a thick layer of fiberglass honeycomb, which could be made less dense at its surface and progressively denser in its deeper layers. This meant that the medium waves would bounce somewhere along the middle of the honeycomb layer, and come out having traveled the extra half-wavelength necessary for cancellation. The longer, more penetrating waves would travel deeper before bouncing, and thus would also travel approximately the extra (longer) half-wavelength. Engineers likened the RAM-on-honeycomb set-up to a multi-channel stereo system: you have a tweeter, which just takes care of the high-frequency (short wavelength) stuff, and you have a woofer, which is best equipped to handle lower frequencies (longer waves). Individually each of them misses a lot of the waves, but together they can handle a wide variety of wavelengths. Typically, longer-range radars use the lower frequencies (longer, more penetrating waves), while smaller radars on fighter planes use medium waves, and missile radars use the higher-frequency, short-wavelength waves. RAM alone cannot protect from longer-range radar waves, so the ingenious graduated-density fiberglass honeycomb layer absorbed some of that energy and cancelled out what it reflected. It acted as an internal skin, almost always one quarter of a wavelength below the surface. This multi-layered skin is another important feature of stealth aircraft design.
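To put some rough numbers on that squeezing, here is a short Python sketch. The relative permittivity I use is an assumption picked purely for illustration (I have no idea what the real coatings use), and the three radar frequencies are just typical-ish values for the three classes of radar mentioned above, not the parameters of any particular system.

# Rough sketch of wavelength "squeezing" inside a dielectric: the wavelength
# shrinks by roughly the square root of the relative permittivity. The
# permittivity and the three frequencies below are illustrative assumptions.
import math

C = 3.0e8           # speed of light in air, m/s (approximately)
EPSILON_R = 9.0     # assumed relative permittivity, for illustration only

def wavelength_in_material(freq_hz, eps_r=EPSILON_R):
    return C / (freq_hz * math.sqrt(eps_r))

for label, freq in [("long-range search radar (illustrative)", 1.0e9),
                    ("fighter radar (illustrative)", 10.0e9),
                    ("missile seeker (illustrative)", 35.0e9)]:
    lam_air = C / freq
    lam_mat = wavelength_in_material(freq)
    print(f"{label}: {lam_air * 100:.1f} cm in air -> {lam_mat * 100:.2f} cm in "
          f"the material (quarter-wave layer ~{lam_mat / 4 * 1000:.1f} mm thick)")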
One of the main developments in radar technology since the advent of stealth is the ability to detect aircraft using longer and longer wavelengths. These longer wavelengths, made up of lower-energy photons, are not absorbed much by RAM and are not cancelled out by the quarter-wavelength effect because they are so long. This means only the shape of an airplane, not its materials, can guarantee that it is hard to detect by modern radars.
These longer wavelengths start overlapping the range of wavelengths used in radio communications and other applications, so a lot of noise is encountered. Interference with radio transmissions is undesirable for the radar operators (since it means more noise amidst which an aircraft has to be picked out) and for the radio operators and receivers (since the radar pulses become mixed into the signals the radio people are trying to generate and to demodulate / listen to). However, modern computer algorithms do a good job of picking an aircraft out of the noise, so long-wave radars are being used in many countries’ defense networks nowadays.
Reflection
This is the most important challenge of all, the one that really allows stealth aircraft to be stealthy and that causes them to be shaped so strangely.
So far we have been talking about absorbing radar waves or canceling them out. But if you can reflect most radar waves away from the radar antenna to begin with, then only the rest need to be absorbed or cancelled, minimizing the need for complex multi-layer skins and for RAM. (And, like I just mentioned two paragraphs ago, modern long-wavelength radars emit energy that is not easily absorbed or canceled). However, this goal forces your airplane to be shaped in unconventional ways, some of which help aerodynamically, some of which do not.
Let’s get back to our “looking for something on a football field at night using a flashlight” analogy for radar.
Let’s say that the thing you’re looking for is shiny, rather than dull. That means that a narrow beam of light hitting its surface will bounce off at a certain angle, rather than being scattered all over the place. So as you look at this object in the middle of a dark field, what you see is really what it reflects. If some of the surface is at just the right angle, it will reflect the flashlight beam back at you, so you will see the reflection of the flashlight, and you will see the object. However, even if your flashlight beam hits it, if it only reflects the beam AWAY from you, then all you will see on the surface is the reflection of the darkness, and so you would not be able to pick out the object from its dark surroundings.
That’s the basic idea of making a shape that is stealthy to radar. If, when it is hit by a radar beam, it can reflect all the radar “light” away from the source, then the radar can’t see it (since the radar only sees the radar energy that is reflected back the way it came).
Say, for example, that I have a smooth, cylindrical piece of shiny metal. Say I shine a light onto it:
The light will be reflected in every direction, including back at me. In other words, if I shine a light on it, I will always observe that the middle of the cylinder looks very bright (as that is where the light I shine will always be reflected back in my direction).
Now say that I have a smooth, prism-shaped piece of shiny metal. Say I shine a light onto it:
The light will always be reflected away from me:
UNLESS, that is, I happen to be exactly at right-angles to one of the sides:
So, as you can see, something round reflects light in many directions, while something flat usually reflects light in one direction:
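If you would rather see that difference as numbers instead of pictures, here is a toy 2-D Python sketch. It just mirror-reflects a beam off a flat facet at a few tilt angles and off the front of a cylinder; this is plain geometry, not any kind of radar-cross-section calculation.

# Toy 2-D geometry for the round-versus-flat contrast above. A beam traveling
# in +x hits a flat facet at a few tilt angles, and the front of a cylinder.
# Only the facet that faces the beam head-on sends it straight back; the
# cylinder always has one point whose normal faces the beam.
import math

def reflect(direction, normal):
    """Mirror-reflect a 2-D unit vector about a unit surface normal."""
    dot = direction[0] * normal[0] + direction[1] * normal[1]
    return (direction[0] - 2 * dot * normal[0], direction[1] - 2 * dot * normal[1])

beam = (1.0, 0.0)   # beam travels in the +x direction

print("Flat facet, rotated through a few angles:")
for tilt_deg in [0, 15, 45, 90]:
    # facet's outward normal, tilted tilt_deg away from facing the beam head-on
    a = math.radians(180 - tilt_deg)
    normal = (math.cos(a), math.sin(a))
    rx, ry = reflect(beam, normal)
    back_at_source = abs(rx + 1.0) < 1e-9 and abs(ry) < 1e-9
    print(f"  tilted {tilt_deg:3d} deg -> reflection heads ({rx:+.2f}, {ry:+.2f});"
          f" straight back at the radar: {back_at_source}")

# The cylinder: the point at its front always has its normal facing the beam,
# so the reflection from that point is always straight back.
front_normal = (-1.0, 0.0)
print("Cylinder front point -> reflection heads", reflect(beam, front_normal),
      "(straight back)")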
This was realized by the Lockheed team that built the F-117: Faceted surfaces reflect radar only in a few directions, and you’d have to be perfectly perpendicular to one of the facets to get the radar reflected right back at you. Since the airplane is always moving, the angle of each facet is always changing from the point of view of someone on the ground, so even if you ARE perpendicular to one facet, this only lasts a moment.
Now, let’s make things a little more complicated. First, radar does not bounce off a surface just like a laser off a mirror. SOME scattering does go on, similar to a surface that is shiny but not so perfectly shiny that you could adjust your hair by looking at your reflection. So beams bounce off mostly at the same angle they came in, give or take:
This means that you don’t have to be PERFECTLY perpendicular to a facet in order to get energy reflected back at you. You just have to be kinda close to perpendicular:
The best way to ensure that no energy bounces back to where it came from is to try and have your surfaces at some angle relative to the radar source.
This is impossible to do from all directions: a surface that is angled away from one direction is inevitably angled TOWARDS some other direction. So which direction is most important? Well, when an airplane flies towards enemy territory, all the enemies start out in FRONT of the airplane, so most of the radar sent its way comes approximately from the FRONT, at least at first. The more you reduce the reflection of radar energy sent from the FRONT, the longer you maintain the element of surprise. Anyone who has been caught speeding on a highway can appreciate the importance of a low frontal RCS. (This has led the designers of many airplanes that are not very stealthy, like the F/A-18F, Gripen, and Eurofighter Typhoon, to work on reducing the “nose-on” radar reflection, which gives the most bang for your buck as far as implementing stealth technology goes. For example, much of the frontal radar reflection comes from the fan blades at the front of the engine compressor. Having the intake be curved (serpentine, S-shaped) so as to hide the engine compressor blades goes a long way towards reducing frontal RCS.)
So if you can have all the surfaces angled away from the front (that is, have the nose and leading edges look like a thin, pointy pyramid), then this is as good as a faceted design can get. The nose is like a wedge that pushes radar energy only a little bit out of the way:
This “wedge nose” idea guided the design of the Lockheed Have Blue prototype, the first aircraft that is stealthy in all modern senses of the word, and which evolved into the F-117. The Have Blue, however, flew very badly because of its extremely low-aspect-ratio and extremely high-sweep-angle wings. It was very draggy and very unstable. When the design was turned into an operational military aircraft, the angle was made not-quite-so-pointy. This may have increased the radar cross-section by a tiny amount, but it lowered the drag and tremendously improved handling characteristics.
Now, what do you do with the back of your airplane? There might be radar coming from there as well, right? You can’t make it pointy like the front, because you would then have a thin, unstable, diamond-shaped airplane, and it would be hard to add engine exhaust nozzles or a tail into that design, let alone ailerons or flaps. (In fact, Lockheed’s first faceted concept on the way to Have Blue was just such a diamond shape, a design its own engineers nicknamed the “hopeless diamond” because it looked nearly unflyable. Lockheed won that contract, and their Have Blue prototype eventually evolved into the F-117. Interestingly, Northrop later did fly a stealthy “diamond” during the X-47 program, but a revised version of the X-47 (the X-47B) did add wings to the “diamond”, making it look basically like a smaller and pointier B-2).
So you can’t really have a diamond. The wing needs to have a not-too-swept trailing edge, and the fuselage must stick out behind the wing root so that a tail (and some engine exhaust nozzles) could go all the way in the back:
The problem with this design is that radar coming from behind will be exactly perpendicular to one of the rear edges if it arrives from one of four directions. And those four directions are so spread out that radar coming from almost anywhere behind will be dangerously close to perpendicular to ONE of the four edges.
Lockheed’s clever idea was to have the right wing trailing edge swept at the same angle as the left fuselage trailing edge, and the left wing trailing edge swept at the same angle as the right fuselage trailing edge. This means that there are only two distinct directions among the four trailing edges:
This idea, of lining up edges into parallel groups, minimizing the number of directions from which a radar would be perpendicular to an edge, is one of the key concepts of modern stealth design:
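Here is a tiny Python sketch of that bookkeeping. Each straight edge throws its strongest radar “spike” along the line perpendicular to it, and an edge pointing one way and an edge pointing exactly the opposite way share the same spike line, so what you care about is the number of distinct edge directions modulo 180 degrees. The two planforms below are made up just to show the counting.

# Counting radar "spike" directions for a set of planform edges. Edge angles
# are in degrees; two edges that differ by 180 degrees lie along the same line.
# Both edge sets are invented planforms, purely for illustration.
def spike_directions(edge_angles_deg):
    """Distinct edge directions (mod 180 degrees), i.e. distinct radar spike lines."""
    return sorted({round(angle % 180.0, 1) for angle in edge_angles_deg})

# Four trailing edges, all at different sweep angles: four spike directions.
unaligned = [-30.0, -10.0, 10.0, 30.0]
# The same number of edges, but paired up the way described above (left wing
# parallel to right fuselage edge and vice versa): only two spike directions.
aligned = [-30.0, 30.0, 150.0, 210.0]

print("unaligned planform:", spike_directions(unaligned))
print("aligned planform:  ", spike_directions(aligned))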
One other thing to note is that no two edges – and no two surfaces – should be at 90 degrees to each other. Any two edges or surfaces that are at 90 degrees will ALWAYS reflect a radar beam RIGHT BACK to where it came. For example, here is a pair of lines at 90 degrees. Notice how a beam can hit the pair from any angle.
No matter what that angle is, the beam ends up heading straight back: bouncing off the first line flips one component of the beam’s direction of travel, and bouncing off the second, perpendicular line flips the other component, so the beam leaves the corner traveling in exactly the opposite direction it arrived. (This is why, if two mirrors meet at 90 degrees, you will ALWAYS see a reflection of yourself when you look into that corner, no matter where you stand). So it is important that edges and planes on an aircraft not meet at 90 degrees.
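If you would rather see that in numbers, here is a quick Python check. It just applies the two component flips described above to beams arriving from a few different angles; it is toy 2-D geometry, nothing more.

# The 90-degree corner effect, done with two reflections. Bouncing off the
# horizontal wall flips the vertical component of the beam; bouncing off the
# vertical wall flips the horizontal component; together they reverse the beam.
import math

def corner_reflect(direction):
    dx, dy = direction
    dx, dy = dx, -dy      # bounce off the horizontal surface
    dx, dy = -dx, dy      # bounce off the vertical surface
    return (dx, dy)

for angle_deg in [10, 37, 65, 89]:
    a = math.radians(angle_deg)
    incoming = (math.cos(a), -math.sin(a))     # heading into the corner
    outgoing = corner_reflect(incoming)
    print(f"incoming at {angle_deg:2d} deg: ({incoming[0]:+.2f}, {incoming[1]:+.2f})"
          f" -> outgoing ({outgoing[0]:+.2f}, {outgoing[1]:+.2f})  (exactly reversed)")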
This means that, if an airplane has vertical stabilizers and horizontal stabilizers, any radar energy that hits one of them will bounce off it, then off the other, and then right back to where it came from. Same thing for the internal walls of rectangular air intakes. This means that having diagonal stabilizers and non-rectangular intakes drastically reduces the radar cross section of an airplane:
For the same reasons, the edges of landing gear doors and weapons-bay doors are also lined up with the airplane’s edges: If the door edges reflect radar, at least they should only reflect radar in the direction where the wing edges already reflect radar. Most doors and bumps on the F-117 are hexagonal – basically a rectangle aligned with the direction of the airplane, plus a pointy front and a pointy back to line up with edge directions.
But a bay door cannot always be pointy – there might not be enough room for it to stick forward enough to have a pointy front at the right angle. The solution? The front and back edges can be serrated:
Now let’s go back to the “wedge” idea. Of course only the nose can be a wedge in every dimension (that is, a pyramid), but if the aircraft has chines – that is, if the fuselage sides are a sharp ridge rather than a near-vertical wall – then basically the whole airplane is a “wedge”.
One other way of looking at this is: Most radar comes not from straight above or straight below but from a shallow angle, so it hits the airplane near the sides. So it is the vertical sides that reflect back the radar. Get rid of the vertical sides, leaving only the top and bottom, and this shape will only reflect the rare bits of radar energy coming from the top or the bottom. (Besides, if you’re flying right over a radar site or right under a fighter plane, you’re in trouble already…)
This approach was first tried on the Blackbird. The Blackbird was originally going to have an F-104-like fuselage (pointy nose but pretty much round in cross section), plus delta wings – not too different from, say, a B-58. The center of lift was pretty far back, especially at high speeds, so it was difficult to (as stability demands) keep the center of gravity near the front. Canards were considered, and successfully tested in wind tunnels. But then the radar engineers at Lockheed suggested that chines be added to the sides. That way, the sides would not reflect radar back to where it came from. The aerodynamicists at first opposed this – it increases the overall surface area and makes skin friction much higher! But wind-tunnel tests showed that these chines generate vortices over themselves which actually create a lot of lift! All the way at the front, too! So there was no more need for canards, plus the airplane could turn tightly without stalling, and takeoff and landing speeds were reduced as well. (In fact, low-aspect-ratio lifting surfaces like these are now prominently used to increase high-alpha lift in most fighter jets, such as the F-16, F/A-18, MiG-29, and the Sukhois).
The first pictures of the Blackbird to be publicly released were taken in profile (from the side), so that these chines were not apparent. It looked like just another delta-winged plane, kind of a cross between an F-104 and a B-58, albeit with odd (or, should we say, “retro”) engine placement (engines in the middle of the wings) and big spikes at the inlets (smaller spikes were not unusual in fighters at the time).
Modern stealth UAVs make use of these two shape principles to stay stealthy: chines that make for a wedge-like cross-section, and edges that only reflect radar in a few directions:
Even the noses of the JSF and F-22 have a slight “edge” along the sides, rather than a flat and purely vertical side:
The Sukhoi Su-32 and Su-34 “Platypus” have horizontal ridges running back along the sides of the nose and forward fuselage, much like the F-22, F-35, and X-36, and somewhat like the Blackbird and X-45. The Russians claim these ridges reduce the RCS of the Platypus. However, the rectangular engine intakes, the engine compressor blades sitting not very deep inside non-serpentine intakes, plus all the surfaces at right angles (weapon pylons and wings, engine pods and wings, engine pods and belly, horizontal and vertical tail fins), probably make the Platypus’ RCS too great for chines to make any significant dent in it.
Now, you will notice that many of these modern stealth aircraft have curved surfaces, not facets. You may say, “But we just saw how curved surfaces reflect energy in more directions!!!”. This is true, but it’s all right, for two reasons:
One: if the curved surface is at an angle to the radar source, it will reflect the beam away. Remember the “wedge”. Sure, a slightly curved wedge might reflect the beam in more directions than a straight-sided wedge, but those directions will all be AWAY from the radar source anyways, so you’re ok. As long as the aircraft is, overall, fairly flat (that is, made up of a few slightly curved surfaces, ideally just a top and a bottom, that meet at a sharp edge), it will only strongly reflect radar that comes from straight above or straight below (and, as noted earlier, in that case you’re in trouble already).
Now hopefully you can see why flying wings have such low RCS.
The second reason why curves are ok is that, on modern stealth planes, their radius is rarely constant. They never look like circles when seen from any angle, and no part of the surface is spherical or cylindrical: they always look like squashed ellipses blended together with hyperbolae and parabolae. What that means is that, as a curvy stealth airplane flies around and its orientation relative to you changes, the part of the surface that is perpendicular to you RIGHT NOW has a different radius of curvature than the part that was perpendicular to you a second ago. Since each point on the surface has a different local radius of curvature, the spot that is reflecting energy back at you keeps changing its radius of curvature as the airplane moves. Why is this important? Because something with a small radius of curvature (something very curvy) reflects less radar back at you than something with a large radius of curvature (something nearly flat).
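Here is a rough Python illustration of that last point, using the standard geometric-optics rule of thumb that the specular flash from a smooth, doubly curved patch is about pi times the product of its two local radii of curvature. The radii in the example are invented purely for illustration, not taken from any aircraft.

# Geometric-optics rule of thumb: specular return from a doubly curved patch
# is roughly pi * R1 * R2, where R1 and R2 are the local radii of curvature at
# the point facing the radar. The radii below are invented for illustration.
import math

def specular_rcs(r1_m, r2_m):
    return math.pi * r1_m * r2_m

for label, r1, r2 in [("tightly curved patch (assumed)", 0.2, 0.3),
                      ("gently curved patch (assumed)", 2.0, 5.0),
                      ("nearly flat patch (assumed)", 20.0, 50.0)]:
    print(f"{label}: R1 = {r1} m, R2 = {r2} m -> specular return ~ "
          f"{specular_rcs(r1, r2):.2f} m^2")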
This means that the amount of radar energy being returned by the airplane keeps fluctuating. So even if a radar IS perpendicular to the surface (like a fighter plane right above it or a radar station on the ground right below it), it will be hard to get a radar lock, or even to tell the airplane from the random static around it. Non-constant radii of curvature ensure that the radar reflection changes a lot as the airplane moves (flies), making it hard to lock onto.
In summary...
... a stealthy shape will reflect radar back to the receiver only if the receiver is in one of a few directions. You can minimize the number of directions by having your airplane made of flat (or flattened) shapes, and by having groups of parallel edges (all of the B-2’s edges line up along only two directions). A wedge-like shape (be it the pointy front of the F-117 or the sharp chines on the Blackbird or X-47) tends to deflect radar light away from where it came from. If nothing else, hiding particularly radar-reflective parts (like the fan in the front of your jet engine) goes a long way. And use curves with non-constant radii, since this will cause your radar return to fluctuate, making it harder for it to stand out from the background noise and for the enemy to get a radar lock.
The Future
What is the future of radar stealth? Two emerging technologies that may become used more widely are plasma stealth and active signal cancellation.
Remember how I said that radar wave cancellation works like noise-reducing headphones? The idea is that a wave of a certain profile, when mixed with a copy of that wave that is inverted or half a wavelength out of phase, is mostly cancelled out. Noise-reduction headphones work by “listening” to the ambient noise with a microphone, and then playing this noise to you but inverted and/or out-of phase, so that the noise it plays cancels out the real noise. Now, stealth aircraft skins are made of multiple layers, so that the radar waves bouncing off the internal layers are half a wavelength out of phase with (that is, an inverted form of) the waves that bounce off the surface. But what if an aircraft could sense the incoming radar wave and figure out what direction it’s coming from, figure out what the aircraft’s radar reflection wave would look like, and then emit an inverted / out-of-phase version of that wave? The first part – detecting the radar wave and where it’s coming from – is easy and already an important part of most combat aircraft’s defenses. The second part – figuring out what your reflection would look like to that radar and from that direction – is trickier but not impossible. The third part – emitting a radar burst that cancels out the original – is much harder, because this burst must stop as soon as the original burst ends, otherwise your plane is sending out radar waves that are not canceling anything out and so can act as a beacon, signaling enemy radars to your airplane’s presence and location. It is also extremely important that the emitted radar-canceling wave matches the naturally-reflected waves very well, otherwise instead of cancellation, you get MORE radar waves coming from your airplane. Because the direction, phase, and timing of the emitted radar waves must be very precise, this system is significantly more complicated than the noise-reduction headphones it mimics. However, there are unconfirmed rumors of such systems being in use on the Rafale (the latest French fighter jet) and on the B-2.
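Here is a toy Python sketch of why the timing matters so much. It models the natural reflection and the deliberately emitted cancelling wave as two equal-amplitude, single-frequency waves and shows what is left over for a given phase error; equal amplitudes and a single frequency are big simplifications, so take it only as showing the trend.

# Toy model of active cancellation: reflection = cos(wt), cancelling wave =
# -cos(wt + error). Their sum has peak amplitude 2*|sin(error/2)|, so a small
# timing error still cancels well, while a large one radiates MORE energy.
import math

def residual_amplitude(phase_error_deg):
    """Peak amplitude of (reflection + cancelling wave) for a given phase error."""
    err = math.radians(phase_error_deg)
    return 2.0 * abs(math.sin(err / 2.0))

for error in [0, 5, 30, 90, 180]:
    print(f"phase error {error:3d} deg -> residual amplitude "
          f"{residual_amplitude(error):.2f} (the reflection alone would be 1.00)")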
“Plasma stealth” is not as spectacular as it sounds. Basically, it has been found that ionized air does a good job of absorbing radar waves. I have personally worked on ionizing the air flowing around an object, even a wing – in my case, though, this was done for the sake of aerodynamic advantages, not stealth. Still, the fact is, with the right electric field, air flowing over a surface can be ionized easily and with little power. All you need is a high-voltage electric field. (In my case, I used a high-voltage, high-frequency alternating field between two conductive strips, one of them exposed on the surface and one of them hidden just under the surface, no more than a couple of millimeters away). The strong electric field ionizes the air that flows into it, pulling electrons one way and the atoms’ nuclei the other. But the field can be set up so that the ions don’t actually hit the electrodes (if the field’s frequency is high enough, for example, the ions just vibrate and don’t really move very far), so you only need really low currents – not much power at all. And you don’t have to ionize the air right at the surface of the airplane: you could also have some air from the engine intake or exhaust fed through a strong electric field somewhere inside the airplane, which would ionize this air, and then pump it out somewhere near the front of the airplane. From there, it would be blown back, bathing the airplane in a shroud of ionized air. Or some completely different ionization system could be used, I don’t know. But the point is, once you have ionized air around your airplane (something that is quite doable), it might be much harder to pick up on radar. Again, there is no evidence that such an idea is operational or that it is even being tested on real aircraft, but that doesn’t mean it’s not, and it might be in the future.