Footnote 1. A LAP exists in bound_lisp_compiler_, but it is an afterthought. Only five LAP programs are known ever to have existed on Multics: two of mine, two lost, and one that is part of Macsyma.
Footnote 2. So source-language independent is lisp_cg_utility_ that the aforementioned LAP uses it to produce its object segment, as well.
Footnote 3. Although the (declare (use --)) feature is available if this is needed.
Footnote 4. It can be called explicitly from the compiler's LISP level.
Footnote 5. Example:
(defun print macro (x) (list 'princ (cadr x))) (defun a (b) (print a))
will work, i.e., compile a call to princ in a, but
(defun print macro (x) (list 'princ (cadr x))) (declare (print 'foobar))
will print, not princ, "foobar".
Footnote 6. Gary Drescher (GLD) has produced such a compiler, called "ELSIE" (for "LC," i.e., lisp compiler) in his directory on AI. It is worth comparative study.
Footnote 7. We thank André Bensoussan for introducing the concept of contract.
Footnote 8. This is an extremely important and nontrivial point, to which most of the hair in both compilers is addressed. PL/I, for instance, does not define what values b will receive in the following calls:
n = 3; add1: proc (n); n = n + 1; return (n); end; call b ((n), add1 (n), (n));
but in the corresponding lisp forms:
(setq n 3) (b n (setq n (1+ n)) n)
the language is quite explicit that b gets 3, 4, and 4. Keep this strongly in mind. (PL/I example fixed 16 February 1996, thanks to Paul Green.)
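The LISP side of this contrast can be checked mechanically. Here is a minimal sketch in Python, chosen because, like LISP, it fixes left-to-right evaluation of arguments; the function b here is an invented stand-in:

```python
# Like LISP (and unlike PL/I), Python defines left-to-right evaluation
# of arguments, so a side effect in one argument is seen by later ones.
def b(x, y, z):
    return (x, y, z)

n = 3
# Mirrors (b n (setq n (1+ n)) n): the first argument captures n before
# the embedded assignment, the third after it.
args = b(n, (n := n + 1), n)
print(args)  # (3, 4, 4), just as the LISP definition requires
```

The point is that an implementation has no license to reorder or suppress these evaluations; the language pins the answer down.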
Footnote 9. Modulo certain semantic and code-generator optimizations that attempt to redefine or suppress function calls where the compiler thinks it is cleverer than thou.
Footnote 10. As well as an additional argument in go's and returns, which will be described shortly.
Footnote 11. Note that these lists are bound to nil at the start of p1'ing of their respective forms, not to their previous values: see p1lam, p1prog, p1and, p1cond.
Footnote 12. One hack in this macro-function is worthy of particular comment: the subtle parallel assignment of do, i.e.,
(do ((i 1 (moby-function-that-changes-j)) (j 0 (function-that-changes-all)) (k 0 (extremely-random-fctn)))...
has its stepping code generated by dostepr in lcp_semant_ as:
(setq i (prog2 0 (moby-function-that-changes-j) (setq j (prog2 0 (function-that-changes-all) (setq k (extremely...))))))
which will be seen to do the trick.
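To see why the nesting does the trick, here is a minimal sketch in Python, whose left-to-right argument evaluation mirrors prog2's evaluation of its forms; the two-variable swap is an invented example, not taken from lcp:

```python
# prog2 evaluates all of its forms in order but returns the value of
# the second one; forms after the second run for effect only.
def prog2(*forms):
    return forms[1]

# Parallel stepping a la do: (do ((i 0 j) (j 1 i)) ...), whose stepping
# code in the footnote's style would be (setq i (prog2 0 j (setq j i))).
i, j = 0, 1
i = prog2(0, j, (j := i))
print(i, j)  # 1 0: each step expression saw the pre-step values
```

Each variable's assignment is buried after the point where every later step expression has already been evaluated, so all step expressions see the old values, which is exactly the parallel semantics do requires.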
Footnote 13. Moon has called pass 1 "a random bag of hacks."
Footnote 14. Indirect to segment, not a PDP-10 operating system.
Footnote 15. PDL = push down list, marked because the garbage collector must process (mark) it.
Footnote 16. This is not surprising, for it was designed just for this.
Footnote 17. Actually, by the function entry sequence, but this is not relevant here. It is only relevant that it is loaded by the first line of the compiled function.
Footnote 18. The mechanism is identical to the standard Multics Linkage mechanism, although it is not dynamic.
Footnote 19. The doubleword indirect word in the lisp linkage section to which the tspbp is addressed.
Footnote 20. Again, actually the entry sequence.
Footnote 21. Unless, of course, the function is interpreted (EXPR) code, in which case the interpreter (apply) is invoked this and every other time.
Footnote 22. There exist eight index registers: from the viewpoint of lcp, they are essentially useless, as they can hold only an 18-bit segment-relative offset, which is neither a full address nor a LISP object, God forbid.
Footnote 23. Other than the IBM 650.
Footnote 24. Remember how pass 1 made all multi-form lambdas into progns?
Footnote 25. At least GLD's compiler, the LISP machine's, and the one in the MIT Press LISP anthology do this.
Footnote 26. We will drop the term "marked." "PDL" will mean the marked PDL.
Footnote 27. As the LISP machine does.
Footnote 28. Such as MACLISP on ITS.
Footnote 29. With, of course, pass 1's comments.
Footnote 30. A notable exception is the "delayed-carcdr" hack of NCOMPLR, which is too hirsute to go into at this point.
Footnote 31. Drescher's compiler does just this. It prescans internal forms in function calls for potential side-effects before compiling them, but in order not to lose time, it is very, very liberal in its judgements.
Footnote 32. This does not sound so unreasonable: the compiler does precisely this for conditional forms, and could do it easily, although at considerable overhead, were this approach to be taken for every form.
Footnote 33. This is a very poor use of the term; it means exactly the opposite. See below.
Footnote 34. All functions in the compiler referred to from this point on will be in the source file lcp_cg_.lisp, unless otherwise noted.
Footnote 35. More accurately, when executed, will have had stored. Talking about compiled code in the context of a compiler becomes an exercise in the future pluperfect, which is pretty obscure. I will say "where the compiler stored" where I mean "the place that the compiled code will have had stored" with hopefully a gain in clarity.
Footnote 36. When compiling for value.
Footnote 37. Except in the case of a constant, because nobody is depending on the current value of a constant, because its value is always the same, because that's what "constant" means!
Footnote 38. Other than the NCOMPLR carcdr hack. See Appendix B.
Footnote 39. See preceding footnote on future pluperfect.
Footnote 40. I am under the impression that comp in NCOMPLR can be influenced as to where, given a choice, you would prefer that it leave its result.
Footnote 41. Via means we admittedly haven't explained yet.
Footnote 42. e.g., at function entry.
Footnote 43. I did not make this one up this time. The predicates damned-variable in lcp and dvp through dvp4 in NCOMPLR back up my story. The term "condemned" has been suggested as a synonym here.
Footnote 44. You need not even clobber the register to reuse it; just "renaming" the result, i.e., calling it b instead of a after (setq b a), is enough to make something non-ilocable, and damnability tests must be made.
Footnote 45. Function values would not necessarily be unique (i.e., unshared) either if lcp attempted common-subexpression optimization like many other modern compilers of other languages. I have never heard of a lisp compiler which attempted this, although the NCOMPLR carcdr hack can be considered a special case (see Appendix B). Somebody can probably get a thesis out of that.
Footnote 46. The value tag for a constant is itself. Again, this flows from the definition of constancy and the unsetqability and imperishability of constants.
Footnote 47. Having taken care, of course, by calling storeaq? (from clearaq), to see that whatever was there already was damnable or saved.
Footnote 48. Why savevalue did not simply put the value-tag (x.1) there in the first place, i.e., the motivation behind idups, is not exactly clear, and is rooted in NCOMPLR. Experiment and consultation have shown that indeed this alternative works. Bear in mind that it is not the concept of idups, but marking them specially, which is open to question.
Footnote 49. It is possible to setq a variable without using the AQ, if you are wondering about that. One good example is storing a cons from the BB; another is store-zero to a fixnum variable in the latest lcp.
Footnote 50. This is because Multics instructions cannot address the AQ as storage.
Footnote 51. Snappable means that the first attempt to transfer (for functions, or execute, for arrays) through them causes a trap to an operator that fills in the pointer with the address of the function or array sought ("snapping" it) before proceeding, so that the operator is not invoked in future references.
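As an illustration only, here is a hedged sketch in Python of the snapping idea; the names and mechanism are invented stand-ins, not the actual Multics linker or lisp linkage section:

```python
# A generic sketch of "snapping": the link starts out pointing at a
# trap operator; the first transfer through it resolves the target and
# overwrites the link, so later transfers go straight through.
def resolve(name):
    # Stand-in for the linker's search for the function sought.
    return globals()[name]

class Link:
    def __init__(self, name):
        self.name = name
        self.target = self.trap            # initially aimed at the trap

    def trap(self, *args):
        self.target = resolve(self.name)   # "snap" the link in place
        return self.target(*args)          # then proceed as intended

    def __call__(self, *args):
        return self.target(*args)          # snapped links skip the trap

def car_of_list(x):
    return x[0]

link = Link("car_of_list")
print(link([1, 2, 3]))              # first call traps, snaps, returns 1
print(link.target is car_of_list)   # True: future calls are direct
```

The one-time cost of the trap buys every subsequent reference a direct transfer, which is the whole point of the scheme.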
Footnote 52. Such as code to initialize the type-fields of typed temporaries on the PDL. The existence of these temporaries cannot be known until part-way through the compilation of the function. Having the type-field preinitialized allows untyped non-lisp data to be stored by the machine's native fixnum and flonum storing instructions.
Footnote 53. Remember that PROG tags had their symbols uniquized by Pass 1 so that the symbols associated with them could have properties, etc.
Footnote 54. See "Compiling for Predicate" later on.
Footnote 55. See the previous section for motivation. Instruction-counter relative instructions are also simpler to express in Multics object segments, because of the lack of need for relocation.
Footnote 56. Instruction operand = location-counter of target minus location-counter of instruction.
Footnote 57. This is a rather common optimization in many compilers, known as "tensioning" of transfers.
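A minimal sketch of transfer tensioning, in Python with an invented (op, target) instruction representation, is:

```python
# Tensioning: a transfer whose target is itself an unconditional
# transfer is retargeted to the final destination of the chain.
# The (op, target_index) representation here is invented for the sketch.
def tension(code):
    def final(idx, seen=()):
        # Follow chains of unconditional transfers, guarding against loops.
        while idx not in seen and code[idx][0] == "tra":
            seen += (idx,)
            idx = code[idx][1]
        return idx
    return [(op, final(tgt)) if op in ("tra", "tze") else (op, tgt)
            for op, tgt in code]

code = [("tze", 2), ("lda", 0), ("tra", 4), ("lda", 1), ("tra", 6),
        ("sta", 0), ("ret", 0)]
# Both the tze at 0 and the tra at 2 now transfer straight to 6.
print(tension(code))
```

The optimization never changes what is executed, only how many transfers are taken to get there.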
Footnote 58. The compiler special-cases predicates of 't.
Footnote 59. We will use the term edge of a conditional evaluation-flow instead of the highly ambiguous and usual "branch."
Footnote 60. Recall that pass 1 determined this for PROG tags: for compiler-generated tags, the compiler always knows if this is the case. The compiler never generates backward reference tags (except in one case of MEMQ, where it has the situation well under control).
Footnote 61. Although it is conceivable that we could "order" as opposed to predict. This is the domain of so-called "loop optimizers."
Footnote 62. Even this is only possible because there is no way to "go" to a label in a PROG except from inside it.
Footnote 63. One could argue that runtime checks could be made to determine which is the case, by setting flags and keeping multiple slotlists. This gets into "loop-optimizer" and flow-analysis theory, and anyway, is not as elegant as what you are about to read.
Footnote 64. This is known in loop-optimizer theory as the back-dominator of the latter point.
Footnote 65. Which is the only kind lcp compiles, i.e., don't load it until demand is realized.
Footnote 66. As in ITS and stack-machine compilers.
Footnote 67. The garbage collector ensured, the last time it ran, that what the AP is being moved up to cover, before we store into it, is either 0 (no LISP value) or something that was marked by the garbage collector (i.e., is not garbage) at that time.
Footnote 68. At least for foo or bar. The PROG itself might be an argument to a function whose call is being compiled.
Footnote 69. And return.
Footnote 70. Nonlocal PROG GO's can happen legally due to the map-function optimization of PASS 1, already described.
Footnote 71. See earlier section, "Basic Passes and Recursions."
Footnote 72. The arguments of AND are always compiled for predicate. Whether the AND itself is being compiled for predicate is a function of whether the source code is
(cond ((and .....)))
(setq x (and ...))
(for value), or even
(progn (and ...) (setq ...))
Footnote 73. Were the constant nil in the AQ, it would be represented in the compiler as (quote nil), like any other constant.
Footnote 74. The E is an exponent register. A floating-store instruction must be issued to save the E and AQ as a flonum: a store-AQ will not do.
Footnote 75. The value of uninitialized PROG variables is thus changed in compiled LISP; this is a documented difference between the compiled language and the interpreted, in which they would be nil.
Footnote 76. This is where fixnum and flonum PDLs come in useful. The fact that numeric objects are self-contained avoids the retention problem associated with PDL numbers in PDP10 MACLISP. See AI Memo 421 by Guy Steele for a discussion of this profoundly distressing issue.
Footnote 77. I mention this principally because of the magnificent, unique, and duly famous internal compiler error message which is produced by csqfreez, called at idup-tagging time, if more than one home or ohome of the same variable is found:
KING OF CONFUSION - CSQFREEZ.
Footnote 78. Emphasis added.