dedicated to Norman Skliar and Sidney Cahn
In an earlier blog-essay, When 1+1 Equals 0, I explained how 1 + 1 = 0 makes sense in mod 2 arithmetic; today I’ll tell you how the equation 1 + 1 = 1 makes sense in Boolean arithmetic and became a tool for designing the complex digital circuits that power the Information Age.
The two people who deserve the most credit and blame for this state of affairs are George Boole (1815–1864) and Claude Shannon (1916–2001).
GEORGE BOOLE
While walking across a field in 1833, young George Boole had an epiphany: it was his duty and his destiny to explain the logic of human thought. “Epiphany” is no overstatement; he mentioned the incident many times in later life, sometimes in reverent, almost messianic tones. He took his destiny seriously to the point of sickness – he later told his sister MaryAnn that he had often made himself ill from his struggle to reduce logic to a mathematical science. Even after “the true method flashed upon him” in 1847, he had frequent bouts of poor health.
Boole worked in many parts of mathematics, such as the calculus of finite differences (mentioned in my essay What Lovelace Did: From Bombelli to Bernoulli to Babbage), but among the general public he’s best known for his pioneering work on symbolic logic.
In his two great works on logic, The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities (1854), Boole repurposed algebraic notation to serve the needs of logic. Classes of things were represented by letters and the symbols + and × were used to represent operations on those classes, though in practice Boole’s algebra omitted × symbols, much as is done in ordinary algebra. If x is the class of all cats and y is the class of all dogs, x + y is the class consisting of all cats and all dogs, while x y is the class of individual animals that are both cats and dogs – an empty class since there is no cat that’s also a dog. Nowadays we refer to Boole’s classes as sets, so that for instance the empty class is called the empty set (the subject of my essay The Null Salad), but I’ll stick to Boole’s terminology for now.
In the figure below, showing four diagrams of the sort called Venn1 diagrams, I use two overlapping circles, one on the left of each diagram representing the class x and one on the right representing the class y. You should imagine that each diagram is populated with objects, aka elements, with elements that belong to the class x lying inside the x-circle and with elements that belong to the class y lying inside the y-circle. Elements that belong to both classes lie inside both circles; elements that belong to neither class lie inside neither circle. The class of elements common to x and y is called the intersection of the two classes; this class is shown schematically at the lower left. The class of elements belonging to x or y or both is called the union of the two classes; this class is shown schematically at the lower right. If you’re wondering about the question mark in the lower-right diagram, I’ll be coming back to that – it ties in with Boole’s greatest mistake.
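If you’d like to experiment with classes yourself, Python’s built-in sets make a handy stand-in for them; here’s a minimal sketch (the particular animals are my own invention):

cats = {"Felix", "Tom"}
dogs = {"Lassie", "Rex"}

print(cats & dogs)   # the intersection: animals that are both cats and dogs (empty)
print(cats | dogs)   # the union: all cats and all dogs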
Let’s take a classical syllogism in the style of Aristotle – “All humans are mortal, Socrates is a human, therefore Socrates is mortal” – and see how Boole might have handled it algebraically using his way of “multiplying” classes. If h is the class of humans and m is the class of mortal beings,2 then h m = h. To see why, remember that h m means the class of all beings that are both human and mortal; since all humans are mortal, saying “human and mortal” is just a wordy way of saying “human”. Likewise, every being that is both Socrates and human is Socrates, so s h = s (where s is the class consisting of Socrates and no one else). Now we can write a proof of s m = s, that is, “Socrates is mortal”, that uses no words at all:
s m = (s h) m = s (h m) = s h = s
If that proof went by too quickly, here’s a play-by-play analysis. First I used the fact that s = s h – that’s the earlier equation s h = s flipped around – to turn s m into (s h) m. Then I used the associative property3 to equate (s h) m with s (h m). Then I used the fact that h m = h to equate s (h m) with s h. Finally, I used the fact that s h = s.4
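Here’s the same proof double-checked with Python sets – a sketch, with membership lists invented for illustration:

humans = {"Socrates", "Plato", "Hypatia"}
mortals = humans | {"Fido"}              # every human is mortal; Fido is a mortal non-human
socrates = {"Socrates"}

assert humans & mortals == humans        # h m = h: all humans are mortal
assert socrates & humans == socrates     # s h = s: Socrates is human
assert socrates & mortals == socrates    # s m = s: Socrates is mortal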
Louis Couturat, in his 1905 book The Algebra of Logic, described Boole’s achievement thus: “Symbolic Logic is, in essentials, the Logic of Aristotle, given new life and power by being dressed up in the wonderful – almost magical – armour and accoutrements of Algebra.”
Of course some might argue that Boole shouldn’t have used + and × for non-numerical purposes. After all, one fundamental rule of Boole’s algebra was that every class x must satisfy the formula x × x = x, since for instance “a cat that’s a cat” is just a wordy way of saying “a cat”. But the formula x × x = x does not hold for ordinary algebra (unless x is 0 or 1 – a point I’ll return to shortly).
Another reason Boole arguably shouldn’t have used algebraic notation is that it ran counter to the practices of the non-mathematical logicians of his day. John Venn (1834–1923), an early supporter of Boole’s innovations, wrote “There is so certain to be some prejudice on the part of those logicians who may without offence be designated as anti-mathematical, against any work professing to be a work on Logic, in which free use is made of the symbols + and −, × and ÷.” But Boole’s inspiration was specifically algebraic and he didn’t want to hide the source of his inspiration. And, as we saw with our Socrates-is-mortal example, thoughtfully chosen notation can do some of your thinking for you.
Boole saw that 0 and 1, the two numerical solutions of x × x = x, would need counterparts in the realm of classes. He reasoned that the class 0, as an analogue of the number 0, must have the property that x × 0 and 0 × x always equal 0, regardless of what the class x is, and the only class with this property is the empty class. On the other hand, the class 1, to be an analogue of the number 1, must have the property that x × 1 and 1 × x always equal x, and the only class with this property is the class that consists of All The Things, which Boole called the Universe. The empty class (the Nulliverse, we might say) and the Universe were the original 0 and 1 of Boole’s theory.
You may be thinking “Wait – I thought that 1 meant True and 0 meant False in Boolean logic!” That’s not wrong: those were the secondary meanings of 1 and 0 in Boole’s system. Boole used his algebra of classes to set up an algebra of propositions – I already gave you a taste of this when I proved that syllogism about Socrates – and in this context, 1 refers to propositions that are always true and 0 refers to propositions that are always false.
Boole discovered with delight that the idea of basing logic on the dichotomy between 0 and 1 had been put forward a century and a half before him by the great philosopher Gottfried Wilhelm Leibniz.5 Leibniz likewise had been delighted when he learned, around 1701, that the binary arithmetic he’d invented some two decades earlier fit in tidily with the binary world-system of the I Ching, invented in China some 4000 years before that.
Speaking of Asia, it seems likely that Indian thought had an impact on Boole. A renowned member of Boole’s social circle was George Everest (1790–1866), famous in his day as the surveyor general of the Great Trigonometric Survey of India and now remembered for unwittingly lending his name to the peak he more modestly called Peak XV.6 Everest was also a student of Indian culture and conveyed that interest to others, including Boole. Boole went on to marry Everest’s niece Mary, a writer and thinker working in the newly emerging field of education. Mary later wrote that Boole was deeply influenced by Indian logic as taught to him by Everest, and she wrote “Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65.” (I’ve written about Babbage in the essay What Lovelace Did: From Bombelli to Bernoulli to Babbage I mentioned earlier; Augustus De Morgan (1806–1871) was a champion of Boole’s work who did much to solidify Boole’s legacy.)
Although Boole’s work was respected by logicians and mathematicians, Mary later said that he was disappointed by his readers’ failure to fully appreciate what The Laws of Thought was really about, despite the fact that the title alone should have made the scope of his ambition clear: “Nearly all the logicians and mathematicians ignored the statement that the book was meant to throw light on the nature of the human mind; and treated the formula entirely as a wonderful new method of reducing to logical order masses of evidence about external fact.”
“A MOST REMARKABLE COMBINATION OF TRUTH AND ERROR”
Mary Boole wrote: “My husband told me that when he was a lad of seventeen a thought struck him suddenly, which became the foundation of all his future discoveries. It was a flash of psychological insight into the conditions under which a mind most readily accumulates knowledge. For a few years he supposed himself to be convinced of the truth of the Bible as a whole, and even intended to take orders as a clergyman of the English Church. But by the help of a learned Jew in Lincoln he found out the true nature of the discovery which had dawned on him.” In light of this hint of Jewish influence on Boole’s thought, it’s fitting that in a way he turned out to be less a Messiah than a Moses who couldn’t enter a promised land he’d glimpsed. That land was an interpretation of class-addition that made 1 + 1 = 1 sensible. To his last day, Boole insisted that in his arithmetic of classes, trying to add 1 and 1 was meaningless.
Boole had started from the “law of thought” x × x = x and concluded that in some sense classes must behave like variables that can only take the values 0 or 1. But for him 1 was a class (the Universe), and he didn’t think it was meaningful to add classes together if they had some elements in common to start with. For Boole, a prerequisite for being able to write x + y at all was knowing in advance that x and y are disjoint, i.e., have no elements in common. And in the case where x and y are both the Universe, this is as far from being the case as it can possibly be: everything belongs to both x and y when x and y both contain everything.
The logician William Stanley Jevons (1835–1882), for all his admiration of Boole, thought that Boole was simply wrong about this. He saw no philosophical problem with lumping together two classes that have elements in common, and he thought that addition was a perfect way to represent the resulting larger class. In particular, it was clear to Jevons that if you add any class to itself, the result is just that same class again; in symbols, x + x = x, which in the special case where x is the Universe gives us 1 + 1 = 1. In an 1863 letter, Jevons warned Boole that Boole’s system, with its unduly limited notion of addition, might come to be regarded as “a most remarkable combination of truth and error.” But Boole didn’t – couldn’t – accept the equation 1 + 1 = 1 because he was still in thrall to the rules of ordinary algebra. Indeed, if Boole had accepted 1 + 1 = 1, he’d have had to accept 1 + 1 = 1 + 0; but then, canceling a 1 from both sides of the equation, he’d have gotten 1 = 0, which equates the Universe with the Nulliverse, thereby obliterating everything. Totally out of the question. Absolutely unacceptable.
Here we see how Peacock’s Principle of Permanence, interpreted in its broadest sense (“preserve all algebraic laws whenever possible”) can fail to provide clear guidance to the mathematical explorer. What does “whenever possible” mean? There’s the rub. Boole and Jevons both saw that in the algebra of classes there was a tension between two principles of algebra that were valid for numbers: the cancellation principle for addition and the principle that any two quantities can be added. Which principle needed to give way? The Principle of Permanence doesn’t tell us. Perhaps the best advice for pioneers of new mathematical terrain is, if several of your assumptions come into conflict, try dropping them one at a time, resolving the conflict in all possible ways, to see which resolution seems most fruitful. Or in the words of Yogi Berra, “When you come to a fork in the road, take it.” That’s more or less what Hamilton did when he invented quaternions (as I describe in my essay Hamilton’s Quaternions, or, The Trouble with Triples). Boole was less flexible.
One reason Boole stood his ground is that his notion of + allowed him to express Jevons’ notion of unrestricted unions of classes in a roundabout way: if the classes x and y overlapped, Boole could write their union as x + (1−x) y: the class consisting of everything that’s in x along with everything that’s not in x but is in y. But he couldn’t apply the ordinary rules of algebra to rewrite this as x + y − x y; after the first two terms he would have said “Stop right there.” (Too bad for Boole: if he’d pursued this kind of equation he could have rediscovered the inclusion-exclusion principle and applied it to the study of probability.)
That’s where the disagreement stood in late 1864, when Boole’s dutiful nature, his general tendency to ill health, and a spell of bad winter weather came into tragic collision: he walked three miles to his classes in the midst of a rainstorm, delivered his lecture in wet clothes, and soon fell ill with pneumonia. He died on December 8 of that year, all of 49 years old.
Later that same year, Jevons published his own book Pure Logic: Or, The Logic of Quality Apart from Quantity; with Remarks on Boole’s System and on the Relation of Logic and Mathematics. In Jevons’ system, any two classes can be added. For instance, if we take h and m as before, then h + m is the class consisting of all humans and all mortal beings, which is the same as the class of all mortal beings. So h + m equals m.
To get a feeling for how this extension of Boole’s definition of addition enhanced the power of Boole’s system, here’s a way, different from the one I showed you before, to prove the syllogism “All humans are mortal, Socrates is a human, therefore Socrates is mortal”: Since all humans are mortal, h + m = m, and since Socrates is human, s + h = h. Now, as Leibniz would have said, let us calculate!
s + m = s + (h + m) = (s + h) + m = h + m = m
We’ve calculated that s + m = m. Since this tells us that the class containing Socrates together with all mortals is nothing more than the class of all mortals, we can conclude that Socrates is one of those mortals.
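With the same invented membership lists as in the earlier sketch, Jevons’ union-based proof checks out in Python too:

humans = {"Socrates", "Plato", "Hypatia"}
mortals = humans | {"Fido"}
socrates = {"Socrates"}

assert humans | mortals == mortals       # h + m = m: all humans are mortal
assert socrates | humans == humans       # s + h = h: Socrates is human
assert socrates | mortals == mortals     # s + m = m: Socrates is mortal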
This proof was available to Jevons7 but not to Boole, since for Boole the sums that occur in the proof were meaningless. The extra flexibility and power of Jevons’ system made Boole’s scruples moot. Jevons carried the day, and nobody besides historians of math and philosophy means Boole’s logic when they talk about “Boolean logic”.
BOOLEAN LOGIC AFTER BOOLE
As Boole’s successors simplified and improved the symbolic logic Boole had bequeathed to them, three logical operations took center stage: conjunction, disjunction, and negation.
Conjunction corresponds to the word “and”, and it’s represented by ∧; if p is the proposition “Grass is green” and q is the proposition “Cows are purple”, then p∧q is the compound proposition “Grass is green and cows are purple.” This proposition is false even though p is true, because q is false. Here’s an operation table showing how the truth-value of p ∧ q depends on the truth-value of p and the truth-value of q:
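p   q   p ∧ q
0   0   0
0   1   0
1   0   0
1   1   1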
In the table, 1 stands for True and 0 stands for False. If you prefer formulas to tables, you can say that the truth-value of p ∧ q equals the truth-value of p times the truth-value of q.
Disjunction corresponds to the word “or”, and it’s represented by ∨; if p is the proposition “Grass is green” and q is the proposition “Cows are purple”, then p ∨ q is the compound proposition “Grass is green or cows are purple.” Here “or” is to be understood in its inclusive sense, sometimes called “and/or”, so that p ∨ q is true whenever at least one of the constituent propositions is true. Here’s an operation table showing how the truth-value of p ∨ q depends on the truth-value of p and the truth-value of q:
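p   q   p ∨ q
0   0   0
0   1   1
1   0   1
1   1   1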
If you prefer formulas to tables, you can say that the truth-value of p ∨ q equals the truth-value of p plus the truth-value of q, as long as you use the convention that 1 plus 1 equals 1.
Finally, negation corresponds to the word “not”, and it’s represented by ¬; if p is the proposition “Grass is green”, ¬p is the proposition “No it isn’t.” The truth-value of ¬p is just 1 minus the truth-value of p.
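Putting those three formulas side by side, here’s a minimal sketch in Python of truth-value arithmetic (the function names are my own):

def AND(p, q): return p * q           # truth-value of p ∧ q: multiply
def OR(p, q): return min(p + q, 1)    # truth-value of p ∨ q: add, with 1 + 1 = 1
def NOT(p): return 1 - p              # truth-value of ¬p: one minus

assert OR(1, 1) == 1                  # Boolean arithmetic: 1 + 1 = 1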
In our own day, when you search through a database with a complicated search criterion that combines primitive search-terms using the logical connectives “and”, “or”, and “not”, you’re doing what’s called a Boolean search.
The American philosopher and logician Charles Sanders Peirce found the reformed Boolean system of doing logic extremely congenial. He used Boole’s ideas to extend propositional logic from the study of properties of individual objects to the study of relations between objects, anticipating the modern approach that models relations as Boolean matrices. For instance, if our universe of discourse consists of Terah, Abraham, Sarah, and Isaac, then the matrix shown below at the left encodes the relation “__ begat __” (I’ve labeled the rows and columns for your convenience but the labels aren’t part of the matrix).
This is called the Boolean matrix of the “begat” relation. If you square that matrix using the definition of matrix multiplication I described in my essay What is a Matrix? you get the matrix shown at the right; the 2 corresponds to the fact that Terah, the father of both Abraham and Sarah, was Isaac’s grandfather twice over. The new matrix isn’t a Boolean matrix because of that pesky 2, but if you round the 2 down to a 1, you get the Boolean matrix that encodes the relation “__ begat someone who begat __”, aka “__ is a biological grandparent of __”. Alternatively, when you compute the entries of the matrix-product, you can replace every positive integer you see along the way by 1 as you go; or equivalently, you can replace ordinary arithmetic by 1 + 1 = 1 arithmetic throughout. This is called the Boolean matrix product and it’s become part of the undergraduate computer science curriculum.
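Here’s a sketch in Python of the Boolean matrix product, using the “begat” relation just described (the layout is mine: rows are begetters, columns are begotten, in the order Terah, Abraham, Sarah, Isaac):

M = [[0, 1, 1, 0],   # Terah begat Abraham and Sarah
     [0, 0, 0, 1],   # Abraham begat Isaac
     [0, 0, 0, 1],   # Sarah begat Isaac
     [0, 0, 0, 0]]   # Isaac begat no one in this universe

def boolean_product(A, B):
    # Entry (i, j) is 1 exactly when some k has A[i][k] = B[k][j] = 1;
    # this is matrix multiplication with 1 + 1 = 1 arithmetic throughout.
    n = len(A)
    return [[1 if any(A[i][k] and B[k][j] for k in range(n)) else 0
             for j in range(n)]
            for i in range(n)]

G = boolean_product(M, M)   # "__ is a biological grandparent of __"
assert G[0][3] == 1         # Terah is a grandparent of Isaac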
In 1886 Peirce went so far as to propose that logical operations could be carried out by electrical switching circuits. He even had one of his students, Allan Marquand, create circuit diagrams for an electrical logic machine. But neither Peirce nor Marquand built such a machine, nor did they publish their ideas or sketches, and the idea was lost in a dusty archive until Claude Shannon arrived on the scene and re-invented Peirce’s brilliant, abandoned idea.
CLAUDE SHANNON
As a boy, Claude Shannon loved to build electrical gadgets, such as a radio-controlled model boat and a telegraph system that would let him talk to a friend who lived half a mile away. Sure, he could use a telephone, but what would be the fun of that?
The gadgets Shannon built were made of switches, batteries, and other circuit elements joined together by wires. Current would flow from one terminal of a battery along wires to the other circuit elements and then back to the battery’s other terminal as long as the switches permitted it. A closed switch or circuit was one that permitted current to flow; an open switch or circuit was one in which the current was blocked.
Here’s a simple circuit featuring a battery, a light bulb, and two switches, one closed and one open:
This is a series arrangement of the two switches; any electron that wants to go around the track has to run two gauntlets. (Thanks to a bad guess made by Benjamin Franklin, the direction of electrical flow is defined as the opposite of the direction in which the electrons are going, but let’s ignore that nicety.) As long as both switches are closed, current flows and the light bulb glows. But if either of the switches is open, as in the diagram, the circuit as a whole becomes open and current can’t flow.
On the other hand, here’s a different way to arrange the same two switches:
This is the parallel arrangement of the two switches. Now each electron has a choice of paths (“Shall I run Gauntlet 1 or Gauntlet 2?”), so as long as at least one of the switches is closed, the circuit as a whole will become closed and the light bulb will glow. Only if both switches are open will the circuit as a whole be an open circuit and the bulb stay dark.
These facts, along with the associated terminology, were certainly known to Shannon in 1932 when he enrolled at the University of Michigan, where he pursued a double major in electrical engineering and mathematics. Much of what Shannon learned in his engineering courses consisted of analogous principles for understanding how circuit elements combine when those elements aren’t just humble things like switches but more complex things like resistors, capacitors, and inductors. For instance, when two resistors of resistance R1 and R2 are combined in a circuit, they act like a single resistor of resistance R1 + R2 if the two resistors are hooked up in series and act like a single resistor of resistance 1/(1/R1 + 1/R2) if the two resistors are hooked up in parallel. Such formulas from electrical physics formed the mathematical foundations of electrical engineering. But Shannon also took a course in philosophy in which he learned about the work of George Boole, and it didn’t escape his notice that if you encode the rules of switching networks in tables, with 0 meaning open and 1 meaning closed, you get the operation tables for “and” and “or”.8
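Here they are, combined into a single table whose last two columns give the state of the series and parallel combinations:

switch 1   switch 2   in series   in parallel
0          0          0           0
0          1          0           1
1          0          0           1
1          1          1           1

The “in series” column is the operation table for “and”; the “in parallel” column is the operation table for “or”.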
In 1936 Shannon began a master’s degree in electrical engineering at the Massachusetts Institute of Technology, where he worked with Vannevar Bush on a complicated device called the Differential Analyzer. It was an analogue computer, and it inspired Shannon to develop ideas that would eventually relegate analogue computers to the sidelines of the computation game. The complexity of the Differential Analyzer and the ad hoc nature of its design led Shannon to ask himself, how might we design complicated circuits so as to preserve their intended functionality while minimizing the number of circuit elements?
The formulas of Boolean logic provided a clue. For instance, consider the law that says that ∧ is distributive over ∨:9
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
Turn it around and you get the formula
(x ∧ y) ∨ (x ∧ z) = x ∧ (y ∨ z)
which, applied to switching circuits, tells us that a particular kind of circuit with three logic elements can be replaced by a circuit with just two without affecting its behavior.
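A brute-force check in Python confirms that the three-element circuit and the two-element circuit behave identically on all eight possible inputs:

from itertools import product

for x, y, z in product([0, 1], repeat=3):
    three_elements = (x & y) | (x & z)   # two "and" elements feeding an "or"
    two_elements = x & (y | z)           # one "or" element feeding an "and"
    assert three_elements == two_elements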
In 1938, at the ripe age of 22, Shannon completed his master’s thesis in electrical engineering at MIT, making the analogy between Boolean logic and circuit design explicit for the first time. It might have seemed a sterile exercise, since most electronic devices aren’t made of just switches and wires. The Differential Analyzer, for instance, represented numbers internally by such electrical quantities as current, voltage, resistance, capacitance, and inductance; in contrast, the on/off vocabulary of switching networks seemed too impoverished to accomplish anything useful. But Shannon showed that this wasn’t the case by giving many applications of switching networks, including the design of a switching network that could add numbers in base two (I’ll sketch such an adder in code below). Furthermore, he showed that the task of optimizing circuit-design manually could be aided by the addition-and-multiplication formalism he borrowed from algebraic logic. Instead of writing
(x ∧ y) ∨ (x ∧ z) = x ∧ (y ∨ z)
Shannon wrote
(x · y) + (x · z) = x · (y + z)
and ever since then, circuit-designers have used those symbols. Once engineers managed to wrap their heads around the fact that in this new context x · x = x and x + x = x and that “addition” is now distributive over “multiplication” (turnabout is fair play), they found that their skill in manipulating algebraic symbols, honed since high school, stood them in good stead.
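And here’s the adder sketch I promised: a one-bit binary adder in Python, built entirely out of “and”, “or”, and “not” (a standard decomposition, not the specific circuit from Shannon’s thesis):

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # exclusive or, built from the three primitives: (a AND NOT b) OR (NOT a AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    # add three bits, returning (sum bit, carry-out bit)
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, partial))
    return sum_bit, carry_out

assert full_adder(1, 1, 0) == (0, 1)   # 1 + 1 = 10 in base two

Chain full adders together, feeding each one’s carry-out into the next one’s carry-in, and you can add numbers with as many binary digits as you like.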
Shannon’s algebraic approach to simplification of circuits was so successful that nobody uses it anymore – no humans, I mean. Shannon reduced circuit design to algebra, and then his successors found ways to get digital computers to solve complicated algebra problems more quickly and reliably than human beings. Circuit design became automated, so that computers based on Boolean logic run programs that deploy Shannon’s formalism to design better computers. Humans are mostly out of the loop at the circuitry-level.
Shannon’s master’s thesis has been hailed as the most influential master’s thesis of all time, but in 1938 his greatest work still lay ahead of him – not in his Ph.D. thesis (which proposed an approach to genetics now widely viewed as a dead end) but in the theory of information he developed in the decade that followed when he was working at Bell Labs. Just as Shannon had shown in the 1930s that 0’s and 1’s suffice for computation, he showed in the 1940s that 0’s and 1’s suffice for communication. It was this work that led to the way we measure information in “bits”.10 Shannon’s championing of digital methods of communication also led to the sad fact that my old television will no longer work without a converter box – all TV broadcasts are now digital. For that matter, in a few years my old landline telephone won’t work either; telephony is going digital too. Thanks to digital technology’s triumph over analogue technology, we can expect worldwide chaos if the Internet ever goes all the way down. (As I write these words, the big news of the day is that a major cybersecurity failure has crippled numerous pieces of the world’s Information Age infrastructure.)
Unlike Boole’s work, which was ignored by the psychologists of his day to his great disappointment, Shannon’s work was attended to by psychologists and everybody else. Researchers from all areas of human knowledge were eager to see what sorts of tricks Shannon’s shiny new ideas could be made to perform. The mania invited parody, such as the spoof paper “Information Theory, Photosynthesis, and Religion”. Shannon himself counseled sobriety in his 1956 screed “The Bandwagon”, warning information-theory groupies that “Seldom do more than a few of nature’s secrets give way at one time” and asserting the need for scientists to maintain “a thoroughly scientific attitude”.
Although most people would agree with me that Shannon’s greatest contribution to engineering and to the world of ideas was his theory of information, in an interview in the 1980s, reminiscing about his career, he expressed special fondness for his early work on switching networks and Boolean algebra. “I think I had more fun doing that than anything else in my life, creatively speaking.”
Shannon placed a lot of value on fun and spent the rest of his life having it. He told one interviewer “I am always building totally useless gadgets just because they’re fun to make.” For instance, in 1950 he and his wife Betty built an electromechanical mouse that could learn how to navigate a maze using trial and error. But never again did he do anything that came close to the importance of his work on switching theory or his work on information theory.
ARTIFICIAL MINDS?
Some say that the Shannons’ mouse was among the first examples of artificial intelligence, but it didn’t lead to anything larger. Shannon also wrote papers on computer chess, and he co-organized a workshop in 1956 that is considered the first conference on artificial intelligence. Still, he didn’t consider himself a godfather of A.I., nor did others consider him one. That title is nowadays associated with George Boole’s great-great-grandson Geoffrey Everest Hinton.
One of Hinton’s interests was neural networks. Although the artificial neurons he studied are quite different from the switching networks that Shannon analyzed using Boolean logic, you might say there’s a kind of family resemblance. In modern A.I. systems, an artificial neural network is supported by circuitry that’s still doing classical digital computing of the kind Shannon pioneered.
Nowadays, thanks to Large Language Models, there’s a feeling in the air that the human brain may be on the verge of designing its own replacement. ChatGPT certainly does a passable job of convincing human beings that ChatGPT thinks the way we do, though I’m not sure whether that means that ChatGPT is smart or that humans are stupid.
How would Boole have felt about digital computers and the prospect of machine intelligence? It’s possible that he would have been thrilled. After all, Boole’s wife Mary wrote that her husband believed The Laws of Thought to be “really about brain-action – a work on psychology, not a mathematical text.” If so, a machine built on the principles he articulated should be capable of simulating brain-action, and what better proof of this could there be than digital machines that can pass for human?
Then again, neural network architecture doesn’t implement Boolean logic in any direct way. Boole would have preferred the older approaches to A.I. that sought to model the world symbolically, but these approaches didn’t lead to websites that can do your homework for you. Part of the secret sauce of 21st century A.I. is multi-layer neural networks which bear only a vague resemblance to Boole’s compound propositions or Shannon’s hierarchical networks of switches.
Although we can never know how the real George Boole would have felt about modern A.I., there is a sense in which we might someday have something resembling an answer. After all, digital resurrection of the dead is A Thing nowadays, and the quality of the mimicry is bound to improve as the design of Large Language Models becomes more sophisticated. An LLM trained on everything Boole ever wrote, published or unpublished, might be a passable proxy for Boole-the-thinker.
If we wanted to go down this path, we’d probably want to include the accounts of Boole given by his wife, who understood his mind better than anyone else did. Who knew, for instance, that Boole believed that “in studying the mathematics of the action of the brain we are studying and conversing with God”? Mary did.
I’ll end with a speculative scenario. Some future A.I. program digitally resurrects Boole as a sub-program and converses with “him”. The simulated Boole says “Ah, since I perceive that I am resurrected in non-bodily form, I must now inhabit a future era in which humanity has joined itself to the Supreme Intelligent Cause. So tell me, is my algebra indeed the way the Divine Mind thinks?”
And the A.I. answers, “It is now.”
Thanks to Don Knuth and Vaughan Pratt.
This essay is an online-supplement to chapter 11 of a book I’m writing, tentatively called “What Can Numbers Be?: The Further, Stranger Adventures of Plus and Times”. If you think this sounds cool and want to help me make the book better, check out http://jamespropp.org/readers.pdf. And as always, feel free to submit comments on this essay at the Mathematical Enchantments WordPress site!
ENDNOTES
#1. These diagrams have long been named after John Venn, who comes into today’s story later. Some recent historians of mathematics, noticing that, many years before Venn was born, the mathematician Leonhard Euler came up with an early version of these diagrams, have proposed that we should call them Euler diagrams or Venn-Euler diagrams or Euler-Venn diagrams. And hey, I’m a big fan of historical accuracy in nomenclature, especially when unjustly ignored mathematicians or unjustly ignored mathematical civilizations are involved. But c’mon, we’re talking about Leonhard Euler here! The original guy-who-did-it-first, the guy whose successors would’ve hated him for scooping them if he hadn’t been so darned nice about it! So I say let Venn have his diagrams. Historically fastidious mathematicians can give a nod to Euler’s priority with the historically fastidious mathematician’s customary humorous aside: “Venn diagrams, so named because they were discovered by Leonhard Euler, … ”
#2. Boole would have wanted me to use x and y instead of h and m to make the equations look more respectably algebra-esque, but my notation should help you remember which class is which. Also, instead of saying “All humans are mortal,” Boole would’ve wanted me to bring time into the picture by saying “At all times when a particular being is human, that being is mortal.” But post-Boole Boolean algebra hasn’t been so picky about taking time into account.
#3. The use of the associative property requires justification since our symbolic “multiplication” operation isn’t really multiplication. For that matter, the principle that you can substitute one “equation” into another requires justification too. Such justifications are part of what Boole tried to provide in his books.
#4. Boole’s own proofs of such syllogisms were a bit more complicated and relied on supplementary variables/classes, for reasons I’m still trying to figure out. I’d welcome thoughts from people who know Boole’s work better than I do!
#5. Leibniz had represented conjunction of propositions using juxtaposition of the associated symbols, though as far as I know he never associated logical conjunction with multiplication of numbers or variables; in fact he sometimes represented conjunction by a plus sign.
#6. Everest himself would have hated the peak’s modern name, as he felt that indigenous names should be used for geographical features; he would have preferred that the peak be given a Tibetan or Nepali name. Everest would also have been unhappy at the way so many people, myself included, mispronounce his name as “Ever-rest” instead of “Eve-rest”.
#7. Although in principle Jevons could have given simple proofs like these for some syllogisms, I’m not aware of his having done so. Once again, I’d welcome input from scholars who know this chapter of mathematical history better than I do.
#8. Here I’m simplifying history a little. In Shannon’s original way of linking switching networks to Boolean algebra, he represented open switches by 1’s and closed switches by 0’s. Nowadays it’s acknowledged that both conventions have their uses.
#9. Boolean arithmetic is more symmetrical than ordinary arithmetic: in ordinary arithmetic, multiplication is distributive over addition but addition isn’t distributive over multiplication, whereas ∨ and ∧ are each distributive over the other.
#10. Although the coinage “bit” first appeared in Shannon’s 1948 paper, he attributed it to his Bell Labs colleague John Tukey.
REFERENCES
Janet Heine Barnett, Applications of Boolean algebra: Claude Shannon and Circuit Design
George Boole, The Mathematical Analysis of Logic, 1847
George Boole, An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, 1854
Stanley Burris, George Boole, The Stanford Encyclopedia of Philosophy
Irving Copilowish, Matrix Development of the Calculus of Relations, Journal of Symbolic Logic, volume 13, number 4, 1948, pp. 193–203
Des MacHale, The Life and Work of George Boole: A Prelude to the Digital Age, Cork University Press, 2014
Colm Mulcahy, The Bicentennial of George Boole, the Man Who Laid the Foundations of the Digital Age, Scientific American, 2015
Paul J. Nahin, The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age, 2013
Jeremy Norman, Shannon’s “Symbolic Analysis of Relay and Switching Circuits”: The Most Significant Master’s Thesis of the 20th Century, 2013?
Joshua Rothman, Why the Godfather of A.I. Fears What He’s Built, The New Yorker, 2023
Claude Shannon, A Symbolic Analysis of Relay and Switching Circuits, 1940
Claude Shannon, A Mathematical Theory of Communication, 1948
Claude Shannon, The Bandwagon, 1956
Wikipedia entries for George Boole, the Boole family, and Claude Shannon