Software Engineering Stories

Software Engineering Remarks

Tom Van Vleck

Here are some remarks on software over the years that I wanted to save. Some I sent as letters to computing publications, or submitted to online forums, or sent to colleagues.

(If you're a serious programmer, you should be reading the Forum on Risks to the Public in Computers and Related Systems, by the ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator, as well as blogs by other security and software engineering people.)

An old CTSS virus

RISKS 7.8, 13 Jun 1988

This may qualify as one of the oldest viruses: Just before the July 4th holiday in 1966, two undergraduate CTSS users decided to write a RUNCOM (like a shell script) which would invoke itself. They knew that this would create a new SAVED file on each invocation and eventually use all the disk space on the Project MAC CTSS system, but they thought this would just lead to a documented error return. Unfortunately, there was a bug in the system and CTSS crashed. Noel Morris and I spent a long time repairing the CTSS system disk tables by hand. Well, was this a virus? The program launched a new copy of itself, and this proliferation led to the death of the host. (But it didn't spread from one account to another.)
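For readers who never saw a RUNCOM: here is a minimal C sketch of the same self-invoking pattern. The file names and the generation cap are my invention for safety; the 1966 original had no cap, which was the whole problem.

    /* A C analog of the self-invoking RUNCOM.  Each run writes a new
       "saved" file, then chains to a fresh copy of itself.  GEN_LIMIT
       is a safety cap the 1966 original did not have. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define GEN_LIMIT 5

    int main(int argc, char *argv[])
    {
        int gen = (argc > 1) ? atoi(argv[1]) : 0;
        char name[64], genarg[16];

        /* Like the new SAVED file created on each CTSS invocation. */
        snprintf(name, sizeof name, "saved.%d", gen);
        FILE *f = fopen(name, "w");
        if (f == NULL)
            return 1;   /* the "documented error return" they expected */
        fprintf(f, "generation %d\n", gen);
        fclose(f);

        if (gen + 1 >= GEN_LIMIT)
            return 0;
        snprintf(genarg, sizeof genarg, "%d", gen + 1);
        execl(argv[0], argv[0], genarg, (char *)NULL);  /* invoke ourselves */
        return 1;   /* only reached if the exec failed */
    }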

(Note the early fascination with self-reference. The other well-known way to crash CTSS was to issue the XEC * instruction, which said "execute the instruction at the location where this instruction is." The 7094 CPU looped taking I [instruction fetch] cycles only and couldn't be interrupted. Bill Matthews once did this deliberately to stop the system when an unwary system administrator accidentally put the password file in the "message of the day." Once again, at 5PM Friday.)

The most important lesson is "don't get clever at 5PM Friday."

Medical computer crashes

RISKS 19.25, 16 Jul 1997

I visited a hospital emergency room recently late at night. As I was reciting my data to the admitting clerk, a horn sounded, and she said, "Oh darn, the computer's crashed. It crashes every day at midnight, and takes about fifteen minutes to come back." She didn't know what kind of computer it was; the keyboards were labeled IBM.

Buffer overflows and Multics?

RISKS 23.30, 23 Feb 2004

To make a big deal out of providing the 40-year-old feature of marking a region of memory non-executable is kind of sad. Multicians look at each other and make the rubbing-sticks-together gesture.

It seems to me that the marketing guys and the popular press writers don't understand the feature, the need for the feature, or what the feature will and won't accomplish.

It's not magic. It fixes some common problems, leaving other problems untouched. It's not a substitute for defensive coding and proper management of storage; all it means is that if there is a mistake, it is more work for an attacker to exploit it.

As Paul Karger points out, when attackers are frustrated by one measure, they don't abandon their attacks. They keep looking for other holes. A fix like this, applied by itself, will lead to a new equilibrium between attackers and defenders, maybe favoring one or the other, but the game will remain the same.

Closing one open barn door is good, but it needs to be complemented by a systematic approach to enumeration of openings, and a method of closing the openings by architectural design that applies to all openings. So I was taught by my leaders on the Multics project, including Corby, Bob Morris, Jerry Saltzer, Ted Glaser, PGN, and many more.

Wikipedia and Email and Mud Flats

mail message

I once tried to add the story of CTSS MAIL to the Wikipedia page for email. Someone removed it, citing the Berkeley Alumni Magazine, which said Ray Tomlinson was the inventor of email -- so I must be wrong. I gave up on Wikipedia.

When I lived in Emeryville CA, there were mud flats on the Bay where people built strange and wonderful driftwood and garbage sculptures, and that is what Wikipedia is like. Someone would build something awesome, and a few days later someone else would tear it down and use the pieces to build something stupid. You'd see something you liked but you couldn't count on ever seeing it again. The sculptures are all gone now. Maybe Wikipedia will eventually wash away too, and someone will build something with the debris.

The Multics Operating System

comment on Bruce Schneier's blog, 20 Sep 2007

I worked on the security internals of several operating systems, including Multics and Unix-derived systems. Multics was more secure. Not perfect, but better than the systems in common use today. Systems like L4.verified and Aesec seem like an interesting next step.

PL/I is a better language than C or C++ for writing secure code. Again, not the ultimate. Because the compiler and runtime were aware of the declared size of string objects, some kinds of overflows were much less likely.
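Here is my own illustration of the difference, not Multics code: in C the declared size of a buffer is invisible to the routines that fill it, so the bound must be passed by hand; PL/I strings carried their declared length with them.

    /* My illustration, not Multics code.  In C the callee cannot see
       the declared size of buf: */
    #include <stdio.h>
    #include <string.h>

    static void c_style(const char *input)
    {
        char buf[16];
        strcpy(buf, input);   /* overflows if input needs more than 16 bytes */
        printf("%s\n", buf);
    }

    /* PL/I character strings carried their declared length, so the
       runtime could truncate or fault instead of overflowing.  In C
       you must pass the bound by hand to get the same effect: */
    static void pl1_style(const char *input)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", input);  /* bounded by the declaration */
        printf("%s\n", buf);
    }

    int main(void)
    {
        pl1_style("a string longer than sixteen characters");
        (void)c_style;   /* calling it with the same input is undefined behavior */
        return 0;
    }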

Security has three parts. First is the user interface, like ACLs and mandatory controls, capabilities, etc. Second is the reference monitor, the underlying machinery that enforces the controls. The Multics reference monitor was small, simple, and efficient, as a result of design, language support, hardware support, and implementation practices. Third is user behavior: the most secure system in the world will fail if a user executes code from strangers... and this is the most common current problem.

I think I will add some of the incorrect remarks here to myths.html, which discusses a lot of Myths About Multics. Facts:
- Multics had one of the first TCP/IP implementations. It ran in a less privileged ring than the OS kernel.
- My user registration code in Multics would not support "hundreds of thousands of users" on a single site. Maybe 1500 registered users.

Multicians are encouraged to register at and contribute to https://www.multicians.org/.

Fly Bottles

IEEE Computer, Letters to the Editor, October 2008, pp. 6-7, vol. 41

Regarding the article by Bertrand Meyer titled "Seven Principles of Software Testing" (Software Technologies, Aug. 2008, pp. 99-101), I think many of the problems we programmers have are philosophical.

In "The Nightingale of Keats," Borges said, "Coleridge observes that all men are born Aristotelians or Platonists. The latter feel that classes, orders, and genres are realities; the former, that they are generalizations. For the latter, language is nothing but an approximative set of symbols; for the former, it is the map of the universe."

Maeterlinck's bees are Platonists. They are transfixed by the ideal sun. The flies are more Aristotelian -- they believe that random behavior has its place, and eventually escape. Wittgenstein wrote, "What is your aim in philosophy? To show the fly the way out of the fly-bottle."

Programmers are covert Platonists. We imagine a perfect abstract program and write something down that we hope matches our imagination. Then we run one test, and if it works, we say, "Yup, this is the program I imagined." We will continue to encounter disappointments and malfunctions as long as we persist in this approach to reality.

Testing is a way out of the fly-bottle, but it doesn't always work: Some of the flies die before they make it. If the exit is small and the flask large, random flying about might take too long. The flies need a map, a theory of exits -- or they need to avoid entering bottles. Systematic, principle-led testing is like mapping the bottle; avoiding errors is the way to stay out of the bottle in the first place.

My friend Roger once showed me his plan for a software system. He planned so many months to write the code, so many to debug, and so on. I said, "You mean you plan to write code with bugs mixed in, and then strain the bugs out?" He replied, "Sounds kind of dumb when you put it like that."

Letter to the Editor

;login; Spring 2018 Vol. 43, No. 1

Great interview of Peter [Neumann] in the Winter 2017 ;login:. I had the pleasure of knowing and learning from Peter for many years.

Rik asked, "What happened with Multics?" It was a moderate commercial success, until its hardware became obsolete and was not replaced. The operating system design and features, and the people who helped build them, influenced many subsequent systems, including CHERI.

I can amplify Peter's remarks on Multics in a few areas.

Peter said, "The 645 was pretty much frozen early" -- in fact, Multics had a major hardware re-design in 1973 (after Bell Labs left Multics development) when the GE-645 was replaced by the Honeywell 6180. The 6180 architecture extended the Multics hardware-software co-design, providing support for eight rings in hardware (instead of the 645's 64 rings simulated in software), as well as better security. A later hardware I/O controller ran in paged mode and supported Multics device drivers that ran unprivileged in the user ring. (See MDD-012 I/O Interfacer (IOI) for documentation of the Multics I/O Interfacer, ioi_.)

The transition from discrete transistor implementation to integrated circuits gave us 1 MIPS per 6180 CPU rather than the 645's 435 KIPS. The later DPS8/70 was rated at 1.7 MIPS.

Another minor clarification: Peter said, "The buffer overflow problem was solved by making everything outside of the active stack frame not executable, and enforcing that in hardware." Actually, several features worked together to prevent buffer overflows in Multics; see Buffer Overflows for more on this topic.

Another clarification: Peter said,

In the early 1970s there was even an effort that retrofitted multilevel security into Multics, which required a little jiggling of ring 0 and ring 1. I was a distant advisor to that (from SRI), although the heavy lifting was done by Jerry Saltzer, Mike Schroeder, and Rich Feiertag, with help from Roger Schell and Paul Karger.

There were several projects to enhance Multics security so it could be sold to the US Air Force. The MLS controls were added by Project Guardian, led by Earl Boebert. A more ambitious project to restructure the Multics kernel, led by Schell, Saltzer, Schroeder, and Feiertag, was canceled before its results were included in Multics (https://multicians.org/b2.html#guardian).

In the mid-'80s, the NCSC B2 security level was awarded to Multics, after a thorough examination of the OS architecture, implementation, and assurance. The evaluation process found a few implementation bugs; much of the effort in attaining the digraph was documenting the existing product.

There are over 2000 names on the list of Multicians. I am mildly uncomfortable at being the only person mentioned by Peter as "heavily involved" in Multics -- we all were. I did my part, but there were many others who made contributions more important than mine, and some who worked on Multics longer. I look back on those times and those colleagues with affection and awe.

Jeffrey Yost's interview with Roger Schell, a key person in the design of security features and TCSEC ("the Orange Book"), is also fascinating: https://conservancy.umn.edu/handle/11299/133439.

Regards,
Tom Van Vleck
thvv@multicians.org

A Positive Review

By Marcus Ranum: "More Competence Porn".

A thing to worry about: sleep study

RISKS 30.92, 9 Nov 2018

What I have in mind is the paper in the latest CACM, November 2018, Vol. 61 No. 11, Pages 157-165. "LIBS: A Bioelectrical Sensing System from Human Ears for Staging Whole-Night Sleep Study" https://cacm.acm.org/magazines/2018/11/232224-libs/fulltext

Sleep study. Good thing, right? Replace the electrode cap applied by a technician with some foam earplugs: saves money, do it at home, results almost as good, plus you get not only EEG but eye tracking and muscle contractions. They sound very proud.

Then their paper ends with a section on other stuff they could do with this.
- autism onset detection
- meditation training
- eating habit monitoring

Well hmm.
- autonomous audio steering... train a hearing aid to favor amplifying sounds from where the user is looking
- also combine with the EEG signal and micro-expressions to see how pleased the wearers are with the sound they hear
- distraction and drowsiness detection... see if drivers are alert
- child's interest assessment... see what the student is paying attention to in class

OK, but then this could be used to
-- see if Winston Smith is paying attention to the telescreen
-- determine if Winston Smith is pleased by what he hears from Big Brother
-- weed out malcontent and rebellious students
-- detect physiological responses to stimuli ("lie detectors")

oh, not to worry, just don't let anybody stick earplugs with wires on them in your ears. and make sure nobody invents a remote-sensing EEG, and beware of high-quality sensor cameras that might pick up your micro-expressions and other body responses

yup, nobody would ever use this for evil, right.

if Alexa or Siri offers us a useful gadget that promises to make us happy, will we be allowed to decline?

I bet Joe Weizenbaum would be cautious.

iPhone hacks (The Register)

RISKS 31.40, 5 Sep 2019

There has been recent discussion of hacks of the iPhone OS. See the article in *The Register*, which points to the detailed article by Google Project Zero. https://www.theregister.co.uk/2019/08/30/google_iphone_exploit_chain/

The complexity and subtlety of the attacks described in the Project Zero article are amazing. It appears that this is not done by one powerful wizard (like Mark Dowd) but rather by a whole Ministry of Magic.

My guess would be that there are additional, similarly elaborate, exploits not yet described. (QA guy's rule of thumb: for every bug you found, there is one you haven't found yet.)

iPhones are programmed in a C-like language extended with rules, conventions, libraries, and frameworks. It is like making a 737 Max airliner out of trillions of individually glued matchsticks. It might fly... but the technology chosen is too delicate and vulnerable for the purpose intended, and there may be significant systemic weaknesses not addressed by choice of implementation technique.

It seems clear that trying to write secure operating systems in C does not work. Very smart people have tried for 50 years, and the solution to the problem has not been reduced to practice.

I think we need even more powerful tools... and by tools I mean ideas and approaches as well as compilers. Rust, Swift, Scala, Go. Well, maybe. Focusing on the language is not enough. We tried that. seL4, Haskell. Proof methodology. Not yet accepted as standard, the way C replaced assembler. When I look at the Multics B2 and Secure VMS projects, I get the feeling that we are still doing it wrong. Trying to build skyscrapers with two-by-fours and hammers.

I used to say, "the software is crying out to us with the only voice it has, failure reports. We have to listen, and figure out why, and imagine solutions."

I feel like our problem is philosophical. I'd like better clarity about what we require operating systems to do, and what kind of certainty we want about their behavior.

We are still in the pit, and better shovels won't be enough.

Software Engineering Education

Mail to a friend who is a professor of computer science.

I don't think "Software Engineering" should be taught just in Java.

How bout a more realistic case study: ask students to take over a project that's in trouble.

The product would be made by interconnecting various packages from different sources: pasted from StackOverflow, cloned from GitHub, bought from a now-defunct company, supplied in object-only form by a consultant in a distant country, written by a legendary founder who then retired to an ashram, etc. With glue code written in several languages. It would be fun to make up the personalities and comment styles of the supposed creators.

What do the students do first? Maybe we suggest they take an inventory, and then ask them what could go wrong with each component.

You know, as I look at this, I have an idea. This is a Role Playing Game! You start out not knowing what the goal is. Instead of killing the dragon, you have to deal with a grumpy programmer. If they say "READ SPEC" it says "Several specs exist. Some are in English." And so on.

And if they say "PLUGH" it says "there is no magic in software engineering."
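A toy sketch of that command loop, in C for concreteness; every command and response here is invented:

    /* A toy sketch of the game's command loop, not a real assignment. */
    #include <stdio.h>
    #include <string.h>

    struct cmd { const char *verb; const char *reply; };

    static const struct cmd table[] = {
        { "READ SPEC",   "Several specs exist. Some are in English." },
        { "ASK FOUNDER", "The founder is at an ashram and does not answer email." },
        { "RUN TESTS",   "There are no tests. A consultant promises some, for a fee." },
        { "PLUGH",       "There is no magic in software engineering." },
    };

    int main(void)
    {
        char line[128];
        printf("You have taken over a project in trouble.\n> ");
        while (fgets(line, sizeof line, stdin) != NULL) {
            line[strcspn(line, "\n")] = '\0';   /* strip the newline */
            const char *reply = "Nothing happens. Maybe take an inventory first?";
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                if (strcmp(line, table[i].verb) == 0)
                    reply = table[i].reply;
            printf("%s\n> ", reply);
        }
        return 0;
    }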

More on Software Engineering Education

More mail to a friend who is a professor of computer science.

One aspect of the class situation encourages submitting buggy programs: if the student has a choice between getting the program right but late, or turning in buggy code on time, the first choice gets zero credit, and the second choice may get some credit.

What about an exercise where you announce that grading is pass/fail: if there is any bug, then the grade is zero.

Tell 'em "imagine you are working on a Toyota gas pedal, and you have to get it perfect the first time."

War Stories

The previous entry reminds me of a job I had once.

My company was working on a Second System. They had tried to hire the best of the best: people who had already succeeded at doing the thing we were trying to do.

Naturally these folks were proud of what they had done, and wanted to talk about it. These war stories about long-ago solutions to problems we didn't have took up time and derailed many a design session. But they were fun.

I suggested that we designate Tuesday lunch as the War Story Hour. People could regale others with the stupidity of management and their brilliance. And if someone started with a war story in any other meeting, we could say, "Great, let's save that for War Story Hour." And attendance at War Story Hour would be optional...

Fire

I sometimes say that Multicians sit in the back of the room, and look at each other, and make the rubbing-sticks-together gesture... "someday these folks will discover fire and think themselves mighty clever."

We're not making fun of those folks. We were there ourselves, trying to invent. We know how many trials and discoveries lie ahead, and know that their paths will lead to completely new ideas.

We also know that while people are focused on finding fire, there's no sense trying to discuss global warming, or carbon capture, or sustainable forest management, or optimum firehouse placement. That will come later.

Object Oriented Programming

mail message 2008

Some of the best programmers who have worked for me were trained in philosophy.

Object-oriented programming is a philosophical activity. People who are too impatient to think carefully and systematically about the world should not program. Some of the people I worked with should have been in some other profession; and many programmers could move from "good" to "great" if they acquired skill in philosophical reasoning.

What is-a, what has-a; how things are classified and the limitations of classifications; design of experiments: we see amateurs attempting to hastily re-invent philosophy and meet deadlines while typing into the editor.

Language

mail message 2011

Computer languages started out to be much simpler and more direct than natural language. Ambiguity was considered a bad idea and was built out. There is a vast literature on this but much of it is garbage, flawed analogies, and nonsense.

(Reminds me of the game Careers, ever play it? You picked how much Wealth, Happiness, and Fame represented success for you, and then moved your little car around the board collecting points. The guy who got rich on it then went on to invent a "logical language" called Loglan, in which false statements would be ungrammatical. (!) One Christmas in the 70s I bought 25 copies of the Loglan book and gave them to all kinds of people (they were $2 each). I figured maybe those people wouldn't read the book, but they'd pass them on somehow, and eventually the world would become different and more interesting. Still waiting.)

The question whether brains can be reduced to Universal Turing Machines is also the occasion for much drivel. There are functions that a UTM cannot compute, and some folks claim that human minds can solve questions that a computer cannot, and some connect those two assertions. Read Dogg's Hamlet again: much of what passes for communication works by accident, and is that bad?

Re: Infiltrate anybody, one-click easy

RISKS 27.17, 15 Feb 2013

I heartily agree with Steve Summit's posting in RISKS 27.16.

I advise my friends and family "don't click on links in e-mail messages," but I know they do -- because I see the results when they get hacked.

The programs now invoked by e-mail clients to display web pages and attachments trust those items completely. I wish we could introduce some caution and intelligence into this path.

For display of links in messages, I'd like to use a specialized mail-link browser that's passed information like "this obfuscated URL came from a mail message, ostensibly from wellsfargo.com, sent via a mail server in Russia." (I got one of these recently.) The browser could consider multiple factors when deciding how to show the content. It might, for example, display an alert border; disable Flash, Java, and Javascript; and disable or indicate IFRAMEd content.

Similarly, I'd like the option to send file attachments to a sandboxed program that just displayed text contents.
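To make the idea concrete, here is a sketch in C of the provenance record such a browser might receive and the kind of policy decision it might make. All the field names and the one rule are my invention, not any real API:

    /* A sketch of the provenance record a mail-link browser might
       receive -- field names are invented, not any real API. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct link_provenance {
        const char *url;             /* the (possibly obfuscated) target  */
        const char *claimed_sender;  /* domain the From: line claims      */
        const char *relay_country;   /* where the delivering server sat   */
    };

    struct render_policy {
        bool alert_border;     /* draw the warning frame       */
        bool allow_scripts;    /* Javascript, plugins          */
        bool show_iframes;     /* embedded third-party content */
    };

    static struct render_policy decide(const struct link_provenance *p)
    {
        /* Everything from mail starts locked down... */
        struct render_policy pol = { false, false, false };

        /* ...and gets the alert border when the story doesn't hang
           together, e.g. a bank's domain via an unexpected relay. */
        if (strcmp(p->claimed_sender, "wellsfargo.com") == 0 &&
            strcmp(p->relay_country, "US") != 0)
            pol.alert_border = true;
        return pol;
    }

    int main(void)
    {
        struct link_provenance p =
            { "http://example.invalid/x", "wellsfargo.com", "RU" };
        struct render_policy pol = decide(&p);
        printf("alert border: %s\n", pol.alert_border ? "yes" : "no");
        return 0;
    }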

Sentient Opponent

mail message, 2016, in a discussion about how writing secure code is different from avoiding bugs.

I don't think having sentient opponents disqualifies a science. It does mean that such a science has to have a theory of sentience.

Should cybersecurity abandon science? I fear we never had a scientific approach that was accepted beyond a tiny group. For every Roger Schell, there were thousands of IBM salesmen. I personally spent years in the 90s and 00s working for people who wanted to do secure computing on an insecure platform. They really wanted it. They were willing to spend money on it. They didn't want to be told they couldn't have it. And they were willing to torture meaning and logic so they could say we were making progress.

Software Industry

mail message to a friend, 2016.

We could use proof-carrying code, formal methods, etc., but we would rather get Pokémon apps out to market.

Many of our difficulties are self-constructed. I think there should be a one-syllable word for "making up an unsolvable problem and then stressing about how to solve it." Say "flarp." Then we could say "politician X's speech was all flarp," or "religion Y is encrusted with flarp," or "the security evaluation process drowned in flarp."

In particular, users are part of the system, but we do not employ a useful psychological theory predicting their behavior. This is a philosophical problem.

on Cognitive Computing

mail message to a friend, 2016. At the time IBM was marketing "Cognitive Computing."

I think the term "cognitive computing" has been around for a long time. And there is a pony in there somewhere.

(Remember the Multics Operator's Handbook? I wrote it. It started out as a long list of operator console commands, and described what each one did, and all the options. Then we realized this was horribly complex and invented BOS RUNCOMs to do commands in order; and operators still screwed up. So we put in a bunch of code that said "if you tried to boot Multics, you should have added the RPV first, and if you didn't we will do it automatically." But the basic commands were all still there, ready to be given a wrong argument. Or consider the salvager: after a crash it used to output all kinds of info about what was broken that it couldn't decide how to fix, and then Noel and I would get out the disk patcher and cuss for a few hours -- or operations would just do a complete reload. This was JUST WRONG. We built a 747 cockpit. What was needed was an automatic transmission, PRNDL. Fixed this later.)
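That "add the RPV automatically" logic is just filling in a missing precondition. A toy version in C, with invented names; the real BOS code looked nothing like this:

    /* A toy version of "do the prerequisite automatically." */
    #include <stdbool.h>
    #include <stdio.h>

    static bool rpv_added = false;

    static void add_rpv(void)
    {
        rpv_added = true;
        printf("RPV added.\n");
    }

    static void boot(void)
    {
        /* The operator should have added the RPV first;
           if they didn't, do it for them. */
        if (!rpv_added) {
            printf("(adding RPV automatically)\n");
            add_rpv();
        }
        printf("Booting Multics.\n");
    }

    int main(void)
    {
        boot();   /* the operator skipped a step; the system fills it in */
        return 0;
    }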

The need for CC is that there are not enough computer science master's degrees awarded every year for every computer to have one on standby. The computer has to be told "go", "stop", stuff like that, and work out the details. Only -- this is fiendishly complex, and we have no theory of how to do it. If writing the program cost 10, writing its autopilot will cost 10 factorial. Well sheeit, Mr Watson, them computers are good at big numbers, mebbe they could LEARN the right thing to do, and we can sit here on the porch and holler at em. I think this was proposed in the 80s.

(Remember BOTTLENECK? The program I tried to write that would tune a Multics? Look at all the meters and tell you what to do first. The more logic I put into it, the more folks I interviewed, the worse it got. Later I figured out a better approach, with your help. Use Bayes' Rule. There are ~1000 meters but only 8 things you can do. So you reason backwards: for each action, what evidence would justify turning knob X in direction Y? If CPU Idle is high and IO channels are saturated, then we are in IO Overload. Basically this is re-inventing what the AI guys call "back chaining." But, and this is crucial, it only works if there is a THEORY of how the system works. Well, we designed it, we should know. Uh-uh.)
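Here is a sketch of that backwards reasoning in C. The meter names, thresholds, and diagnoses are invented for illustration; the real BOTTLENECK had ~1000 meters, not three:

    /* A sketch of back-chaining: reason from the few possible actions
       to the evidence that would justify each.  Meter names and
       thresholds are invented. */
    #include <stdio.h>

    struct meters {
        double cpu_idle;      /* fraction of time CPU waits     */
        double io_saturation; /* fraction of time channels busy */
        double page_rate;     /* page faults per second         */
    };

    static const char *diagnose(const struct meters *m)
    {
        /* For each knob, ask: what evidence supports turning it? */
        if (m->cpu_idle > 0.4 && m->io_saturation > 0.8)
            return "IO Overload: add channels or spread the load";
        if (m->page_rate > 1000.0)
            return "Thrashing: add memory or lower the multiprogramming level";
        return "No clear diagnosis: gather more evidence";
    }

    int main(void)
    {
        struct meters m = { 0.55, 0.9, 120.0 };
        printf("%s\n", diagnose(&m));
        return 0;
    }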

My primitive understanding of Cognitive Computing is that they want a system that reduces the cost of training people to know the right commands. No choke knob, no spark advance.

Two ways to do this. One is to bolt on a magic box that does it. The other is to cause the redesign and re-architecture of the whole system to eliminate undesired behavior. (The Japanese call this poka-yoke.) [We did this with the Multics reboot. When the system crashes, the operator just types 'start'. The OS decides what needs salvaging and in what order.]

The idea that one could look at millions of examples of system behavior, and build up a network that "learned" the equivalent of the theory we wanted, seems to ignore Turing. Not to mention W. W. Jacobs [Author of "The Monkey's Paw"].

Look at page six of http://caltechcampuspubs.library.caltech.edu/565/1/1960_02_18_61_18.pdf -- "The Chaostron"

Toothpicks

mail message, 2021

(In the late 70s I proposed restructuring an operating system to not panic. Long story. Just as Zero Defect programming is a vague slogan, so the idea of Zero Crash systems might be a useful aspiration. There's a pony in there somewhere.)

Karger's Law is relevant: "If you foil the Bad Guys' attack today, they won't give up; they will be back tomorrow with a worse attack." Paul was working for a Motorola cellphone division. Bad Guys were "tumbling" phones and he devised a mitigation. So the Bad Guys invented "cloning" phones which was much harder to detect and prevent.

Sometimes I feel like I am on the team building the world's most beautiful skyscraper out of toothpicks.

(I see I am repeating myself. What I tell you three times is true.)

08 Nov 2023