
Steven Pinker and Scott Aaronson debate AI scaling!

9 August
Dear Scott Aaronson,
Will future deep learning models with more parameters and trained on more examples avoid the blunders that critics have exposed, or does pointing out each new failure just move the goalposts? Either way, what’s at stake? It depends very much on the question. There’s the cognitive science question of whether humans think and speak the way GPT-3 and other deep-learning neural network models do. And there’s the engineering question of whether the way to develop better, humanlike AI is to upscale deep learning models (as opposed to incorporating different mechanisms, like a knowledge database and propositional reasoning). The questions are, to be sure, related: If a model is incapable of duplicating a human feat like language understanding, it can’t be a good theory of how the human mind works. Conversely, if a model flubs some task that humans can ace, perhaps it’s because it’s missing some mechanism that powers the human mind. Still, they’re not the same question: As with airplanes and other machines, an artificial system can duplicate or exceed a natural one but work in a different way.

Apropos the scientific question, I don’t see the Marcus-Davis challenges as benchmarks or long bets that they have to rest their case on. I see them as scientific probing of an empirical hypothesis, namely whether the human language capacity works like GPT-3. Its failures of common sense are one form of evidence that the answer is “no,” but there are others—for example, that it needs to be trained on half a trillion words, or about 10,000 years of continuous speech, whereas human children get pretty good after 3 years. Conversely, it needs no social and perceptual context to make sense of its training set, whereas children do (hearing children of deaf parents don’t learn spoken language from radio and TV). Another diagnostic is that baby talk is very different from the output of a partially trained GPT. Also, humans can generalize their language skill to express their intentions across a wide range of social and environmental contexts, whereas GPT-3 is fundamentally a text extrapolator (a task, incidentally, which humans aren’t particularly good at). There are surely other empirical probes, limited only by scientific imagination, and it doesn’t make sense in science to set up a single benchmark for an empirical question once and for all. As we learn more about a phenomenon, and as new theories compete to explain it, we need to develop more sensitive instruments and more clever empirical tests. That’s what I see Marcus and Davis as doing.

Regarding the second, engineering question of whether scaling up deep-learning models will “get us to Artificial General Intelligence”: I think the question is probably ill-conceived, because I think the concept of “general intelligence” is meaningless. (I’m not referring to the psychometric variable g, also called “general intelligence,” namely the principal component of correlated variation across IQ subtests. This is a variable that aggregates many contributors to the brain’s efficiency, such as cortical thickness and neural transmission speed, but it is not a mechanism, just as “horsepower” is a meaningful variable that doesn’t explain how cars move.) I find most characterizations of AGI to be either circular (such as “smarter than humans in every way,” begging the question of what “smarter” means) or mystical—a kind of omniscient, omnipotent, and clairvoyant power to solve any problem.
No logician has ever outlined a normative model of what general intelligence would consist of, and even Turing swapped it out for the problem of fooling an observer, which spawned 70 years of unhelpful reminders of how easy it is to fool an observer. If we do try to define “intelligence” in terms of mechanism rather than magic, it seems to me it would be something like “the ability to use information to attain a goal in an environment.” (“Use information” is shorthand for performing computations that embody laws that govern the world, namely logic, cause and effect, and statistical regularities. “Attain a goal” is shorthand for optimizing the attainment of multiple goals, since different goals trade off.) Specifying the goal is critical to any definition of intelligence: a given strategy in basketball will be intelligent if you’re trying to win a game and stupid if you’re trying to throw it. So is the environment: a given strategy can be smart under NBA rules and stupid under college rules. Since a goal itself is neither intelligent nor unintelligent (Hume and all that), but must be exogenously built into a system, and since no physical system has clairvoyance for all the laws of the world it inhabits down to the last butterfly wing-flap, this implies that there are as many intelligences as there are goals and environments. There will be no omnipotent superintelligence or wonder algorithm (or singularity or AGI or existential threat or foom), just better and better gadgets.

In the case of humans, natural selection has built in multiple goals—comfort, pleasure, reputation, curiosity, power, status, the well-being of loved ones—which may trade off, and which are sometimes randomized or inverted in game-theoretically paradoxical tactics. Not only does all this make psychology hard, but it makes human intelligence a dubious benchmark for artificial systems. Why would anyone want to emulate human intelligence in an artificial system (any more than a mechanical engineer would want to duplicate a human body, with all its fragility)? Why not build the best possible autonomous vehicle, or language translator, or dishwasher-emptier, or baby-sitter, or protein-folding predictor? And who cares whether the best autonomous vehicle driver would be, out of the box, a good baby-sitter? Only someone who thinks that intelligence is some all-powerful elixir.

Back to GPT-3, DALL-E, LaMDA, and other deep learning models: It seems to me that the question of whether or not they’re taking us closer to “Artificial General Intelligence” (or, heaven help us, “sentience”) is based not on any analysis of what AGI would consist of but on our being gobsmacked by what they can do. But refuting our intuitions about what a massively trained, massively parameterized network is capable of (and I’ll admit that they refuted mine) should not be confused with a path toward omniscience and omnipotence. GPT-3 is unquestionably awesome at its designed-in goal of extrapolating text. But that is not the main goal of human language competence, namely expressing and perceiving intentions. Indeed, the program is not even set up to input or output intentions, since that would require deep thought about how to represent intentions, which went out of style in AI as the big-data/deep-learning hammer turned every problem into a nail. That’s why no one is using GPT-3 to answer their email or write an article or legal brief (except to show how well the program can spoof one).
So is Scott Alexander right that every scaled-up GPT-n will avoid the blunders that Marcus and Davis show in GPT-(n-1)? Perhaps, though I doubt it, for reasons that Marcus and Davis explain well (in particular, that astronomical training sets at best compensate for the models’ being crippled by the lack of a world model). But even if they do, that would show neither that human language competence is a GPT (given the totality of the relevant evidence) nor that GPT-n is approaching Artificial General Intelligence (whatever that is). (This letter and what follows were originally published in Shtetl-Optimized.)

Steven Pinker

9 August
Dear Steven Pinker,

As usual, I find you crystal-clear and precise—so much so that we can quickly dispense with the many points of agreement. Basically, one side says that, while GPT-3 is of course mind-bogglingly impressive, and while it refuted confident predictions that no such thing would work, in the end it’s just a text-prediction engine that will run with any absurd premise it’s given, and it fails to model the world the way humans do. The other side says that, while GPT-3 is of course just a text-prediction engine that will run with any absurd premise it’s given, and while it fails to model the world the way humans do, in the end it’s mind-bogglingly impressive, and it refuted confident predictions that no such thing would work.

All the same, I do think it’s possible to identify a substantive disagreement between the distinguished baby-boom linguistic thinkers and the gen-X/gen-Y blogging Scott A.’s: namely, whether there’s a coherent concept of “general intelligence.” You write:

“No logician has ever outlined a normative model of what general intelligence would consist of, and even Turing swapped it out for the problem of fooling an observer, which spawned 70 years of unhelpful reminders of how easy it is to fool an observer.”

I freely admit that I have no principled definition of “general intelligence,” let alone of “superintelligence.” To my mind, though, there’s a simple proof-of-principle that there’s something an AI could do that pretty much any of us would call “superintelligent.” Namely, it could say whatever Albert Einstein would say in a given situation, while thinking a thousand times faster. Feed the AI all the information about physics that the historical Einstein had in 1904, for example, and it would discover special relativity in a few hours, followed by general relativity a few days later. Give the AI a year, and it would think … well, whatever thoughts Einstein would’ve thought, if he’d had a millennium in peak mental condition to think them.

If nothing else, this AI could work by simulating Einstein’s brain neuron-by-neuron—provided we believe in the computational theory of mind, as I’m assuming we do. It’s true that we don’t know the detailed structure of Einstein’s brain in order to simulate it (we might have, had the pathologist who took it from the hospital used cold rather than warm formaldehyde). But that’s irrelevant to the argument. It’s also true that the AI won’t experience the same environment that Einstein would have—so, alright, imagine putting it in a very comfortable simulated study, and letting it interact with the world’s flesh-based physicists. A-Einstein can even propose experiments for the human physicists to do—he’ll just have to wait an excruciatingly long subjective time for their answers. But that’s OK: as an AI, he never gets old.

Next let’s throw into the mix AI Von Neumann, AI Ramanujan, AI Jane Austen, even AI Steven Pinker—all, of course, sped up 1,000x compared to their meat versions, even able to interact with thousands of sped-up copies of themselves and other scientists and artists. Do we agree that these entities quickly become the predominant intellectual force on earth—to the point where there’s little for the original humans left to do but understand and implement the AIs’ outputs (and, of course, eat, drink, and enjoy their lives, assuming the AIs can’t or don’t want to prevent that)? If so, then that seems to suffice to call the AIs “superintelligences.” Yes, of course they’re still limited in their ability to manipulate the physical world. Yes, of course they still don’t optimize arbitrary goals. All the same, these AIs have effects on the real world consistent with the sudden appearance of beings able to run intellectual rings around humans—not exactly as we do around chimpanzees, but not exactly unlike it either.

I should clarify that, in practice, I don’t expect AGI to work by slavishly emulating humans—and not only because of the practical difficulties of scanning brains, especially deceased ones. Like with airplanes, like with existing deep learning, I expect future AIs to take some inspiration from the natural world but also to depart from it whenever convenient. The point is that, since there’s something that would plainly count as “superintelligence,” the question of whether it can be achieved is therefore “merely” an engineering question, not a philosophical one.

Obviously I don’t know the answer to the engineering question: no one does! One could consistently hold that, while the thing I described would clearly count as “superintelligence,” it’s just an amusing fantasy, unlikely to be achieved for millennia if ever. One could hold that all the progress in AI so far, including the scaling of language models, has taken us only 0% or perhaps 0.00001% of the way toward superintelligence so defined.

So let me make two comments about the engineering question. The first is that there’s good news here, at least epistemically: unlike with the philosophical questions, we’re virtually guaranteed more clarity over time! Indeed, we’ll know vastly more just by the end of this decade, as the large language models are further scaled and tweaked, and we find out whether they develop effective representations of the outside world and of themselves, the ability to reject absurd premises and avoid self-contradiction, or even the ability to generate original mathematical proofs and scientific hypotheses. Of course, Gary Marcus and Scott Alexander have already placed concrete bets on the table for what sorts of things will be possible by 2030. For all their differences in rhetoric, I was struck that their actual probabilities differed much more modestly.

So then what explains the glaring differences in rhetoric? This brings me to my second comment: whenever there’s a new, rapidly-growing, poorly-understood phenomenon, whether it’s the Internet or AI or COVID, there are two wildly different modes of responding to it, which we might call “February 2020 mode” and “March 2020 mode.” In February 2020 mode, one says: yes, a naïve extrapolation might lead someone to the conclusion that this new thing is going to expand exponentially and conquer the world, dramatically changing almost every other domain—but precisely because that conclusion seems absurd on its face, it’s our responsibility as serious intellectuals to articulate what’s wrong with the arguments that lead to it. In March 2020 mode, one says: holy crap, the naïve extrapolation seems right! Prepare!! Why didn’t we start earlier?

Often, to be sure, February 2020 mode is the better mode, at least for outsiders—as with the Y2K bug, or the many disease outbreaks that fizzle. My point here is simply that February 2020 mode and March 2020 mode differ by only a month. Sometimes hearing a single argument, seeing a single example, is enough to trigger an epistemic cascade, causing all the same facts to be seen in a new light. As a result, reasonable people might find themselves on opposite sides of the chasm even if they started just a few steps from each other.

As for me? Well, I’m currently trying to hold the line around February 26, 2020. Suspending my day job in the humdrum, pedestrian field of quantum computing, I’ve decided to spend a year at OpenAI, thinking about the theoretical foundations of AI safety. But for now, only a year.

Scott Aaronson

9 August
Dear Scott Aaronson,

Thanks, Scott, for your thoughtful and good-natured reply, and for offering me the opportunity to respond in Shtetl-Optimized, one of my favorite blogs. Despite the areas of agreement, I still think that discussions of AI and its role in human affairs—including AI safety—will be muddled as long as the writers treat intelligence as an undefined superpower rather than a mechanism with a makeup that determines what it can and can’t do. We won’t get clarity on AI if we treat the “I” as “whatever fools us,” or “whatever amazes us,” or “whatever IQ tests measure,” or “whatever we have more of than animals do,” or “whatever Einstein has more of than we do”—and then start to worry about a superintelligence that has much, much more of whatever that is.

Take Einstein sped up a thousandfold. To begin with, current AI is not even taking us in that direction. As you note, no one is reverse-engineering his connectome, and current AI does not think the way Einstein thought, namely by visualizing physical scenarios and manipulating mathematical equations. Its current pathway would be to train a neural network with billions of physics problems and their solutions and hope that it would soak up the statistical patterns.

Of course, the reason you pointed to a sped-up Einstein was to put off having to define “superintelligence.” But if intelligence is a collection of mechanisms rather than a quantity that Einstein was blessed with a lot of, it’s not clear that just speeding him up would capture what anyone would call superintelligence. After all, in many areas Einstein was no Einstein. You above all could speak of his not-so-superintelligence in quantum physics, and when it came to world affairs, in the early 1950s he offered the not exactly prescient or practicable prescription, “Only the creation of a world government can prevent the impending self-destruction of mankind.” So it’s not clear that we would call a system that could dispense such pronouncements in seconds rather than years “superintelligent.” Nor would it be any different with other sped-up geniuses, say, an AI Bertrand Russell, who would need just nanoseconds to offer his own solution for world peace: the Soviet Union would be given an ultimatum that unless it immediately submitted to world government, the US (which at the time had a nuclear monopoly) would bomb it with nuclear weapons.

My point isn’t to poke retrospective fun at brilliant men, but to reiterate that brilliance itself is not some uncanny across-the-board power that can be “scaled” by speeding it up or otherwise; it’s an engineered system that does particular things in particular ways. Only with a criterion for intelligence can we say which of these counts as intelligent.

Now, it’s true that raw speed makes new kinds of computation possible, and I feel silly writing this to you of all people, but speeding a process up by a constant factor is of limited use with problems that are exponential, as the space of possible scientific theories, relative to their complexity, must be. Speeding up a search in the space of theories a thousandfold would be a rounding error in the time it took to find a correct one. Scientific progress depends on the search exploring the infinitesimal fraction of the space in which the true theories are likely to lie, and this depends on the quality of the intelligence, not just its raw speed.
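To make the arithmetic concrete, here is a minimal sketch (an editorial illustration with made-up numbers, not anything from the original letter) of how little a constant-factor speed-up buys against an exponentially growing space of candidate theories:

```python
# Illustrative sketch with hypothetical numbers: a 1,000x speed-up against an
# exponentially growing search space buys only log2(1000) ~ 10 extra "bits"
# of theory complexity that can be searched exhaustively.
import math

CANDIDATES_PER_SECOND = 1_000_000          # hypothetical brute-force search rate
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def searchable_bits(speedup: float, years: float = 1.0) -> float:
    """Complexity (in bits) coverable by exhaustive search in the given time."""
    total_candidates = CANDIDATES_PER_SECOND * speedup * SECONDS_PER_YEAR * years
    return math.log2(total_candidates)

print(f"baseline:      ~{searchable_bits(1):.1f} bits")
print(f"1,000x faster: ~{searchable_bits(1000):.1f} bits")
# Roughly 44.8 vs. 54.8 bits: a rounding error if plausible theories
# take hundreds of bits to specify.
```

The point of the sketch is only that a constant-factor speed-up shifts the reachable frontier additively while the space grows exponentially.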

And it depends as well on a phenomenon you note, namely that scientific progress depends on empirical discovery, not deduction from a silicon armchair. The particle accelerators and space probes and wet labs and clinical trials still have to be implemented, with data accumulating at a rate set by the world. Strokes of genius can surely speed up the rate of discovery, but in the absence of omniscience about every particle, the time scale will still be capped by empirical reality. And this in turn directs the search for viable theories: which part of the space one should explore is guided by the current state of scientific knowledge, which depends on the tempo of discovery. Speeding up scientists a thousandfold would not speed up science a thousandfold.

All this is relevant to AI safety. I’m all for safety, but I worry that the dazzling intellectual capital being invested in the topic will not make us any safer if it begins with a woolly conception of intelligence as a kind of wonder stuff that you can have in different amounts. It leads to unhelpful analogies, like “exponential increase in the number of infectious people during a pandemic” ≈ “exponential increase in intelligence in AI systems.” It encourages other questionable extrapolations from the human case, such as imagining that an intelligent tool will develop an alpha-male lust for domination. Worst of all, it may encourage misconceptions of AI risk itself, particularly the standard scenario in which a hypothetical future AGI is given some preposterously generic single goal such as “cure cancer” or “make people happy” and theorists fret about the hilarious collateral damage that would ensue.

If intelligence is a mechanism rather than a superpower, the real dangers of AI come into sharper focus. An AI system designed to replace workers may cause mass unemployment; a system designed to use data to sort people may sort them in ways we find invidious; a system designed to fool people may be exploited to fool them in nefarious ways; and so on, with as many hazards as there are kinds of AI systems. These dangers are not conjectural, and I suspect each will have to be mitigated by a different combination of policies and patches, just like other safety challenges such as falls, fires, and drownings. I’m curious whether, once intelligence is precisely characterized, any abstract theoretical foundations of AI safety will be useful in dealing with the actual AI dangers that will confront us.

Steven Pinker

9 August
Dear Steven Pinker,

A main crux of disagreement has turned out to be whether there’s any coherent concept of “superintelligence.” I give a qualified “yes” (I can’t provide necessary and sufficient conditions for it, nor do I know when AI will achieve it if ever, but there are certainly things an AI could do that would cause me to say it was achieved). You, in contrast, gave a strong “no.”

My friend (and previous Shtetl-Optimized guest blogger) Sarah Constantin then wrote a thoughtful response to you, taking a different tack than I had. Sarah emphasized that you are on record defending the statistical validity of Spearman’s g: the “general factor of human intelligence,” which accounts for a large fraction of the variation in humans’ performance across nearly every intelligence test ever devised, and which is also found to correlate with cortical thickness and other physiological traits. Is it so unreasonable, then, to suppose that g is measuring something of abstract significance, such that it would continue to make sense when extrapolated, not to godlike infinity, but at any rate, well beyond the maximum that happens to have been seen in humans?

(As it happens, the same question was also discussed at length in, e.g., Shane Legg’s 2008 PhD thesis; Legg then went on to cofound DeepMind.)

Scott Aaronson

9 August
Dear Scott Aaronson,

While I defend the existence and utility of IQ and its principal component, general intelligence or g, in the study of individual differences, I think it’s completely irrelevant to AI, AI scaling, and AI safety. It’s a measure of differences among humans within the restricted range they occupy, developed more than a century ago. It’s a statistical construct with no theoretical foundation, and it has tenuous connections to any mechanistic understanding of cognition other than as an omnibus measure of processing efficiency (speed of neural transmission, amount of neural tissue, and so on). It exists as a coherent variable only because performance scores on subtests like vocabulary, digit string memorization, and factual knowledge intercorrelate, yielding a statistical principal component, probably a global measure of neural fitness.

In that regard, it’s like a Consumer Reports global rating of cars, or overall score in the pentathlon. It would not be surprising that a car with a more powerful engine also had a better suspension and sound system, or that better swimmers are also, on average, better fencers and shooters. But this tells us precisely nothing about how engines or human bodies work. And imagining an extrapolation to a supervehicle or a superathlete is an exercise in fantasy but not a means to develop new technologies.

Indeed, if “superintelligence” consists of sky-high IQ scores, it’s been here since the 1970s! A few lines of code could recall digit strings or match digits to symbols orders of magnitude better than any human, and old-fashioned AI programs could also trounce us in multiple-choice vocabulary tests, geometric shape extrapolation (“progressive matrices”), analogies, and other IQ test components. None of this will help drive autonomous vehicles, discover cures for cancer, and so on.
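A toy sketch (an editorial illustration, not from the letter) of what such “a few lines of code” might look like for two classic subtest tasks, digit-span recall and digit-symbol substitution:

```python
# Toy illustration: two IQ-subtest-style tasks at which a trivial program
# vastly exceeds any human while being useless for anything else.

def digit_span(digits: str) -> str:
    """Digit-span recall: humans top out around 7-9 digits; this has no limit."""
    return digits  # perfect verbatim recall of arbitrarily long strings

def digit_symbol(digits: str, key: dict) -> str:
    """Digit-symbol coding: substitute each digit using the key, instantly."""
    return "".join(key[d] for d in digits)

key = {str(i): symbol for i, symbol in enumerate("!@#$%^&*()")}  # arbitrary key
print(digit_span("8675309" * 100))               # 700 digits recalled perfectly
print(digit_symbol("3141592653589793", key))     # instant, error-free coding
```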

As for recent breakthroughs in AI which may or may not surpass humans (the original prompt for this exchange): what is the IQ of GPT-3, or DALL-E, or AlphaGo? The question makes no sense!

So, to answer your question: yes, general intelligence in the psychometrician’s sense is not something that can be usefully extrapolated. And it’s “one-dimensional” only in the sense that a single statistical principal component can always be extracted from a set of intercorrelated variables.
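As a numerical aside (an editorial sketch, not Pinker’s), that last point is easy to demonstrate: give several synthetic “subtests” a shared latent factor plus noise, and a dominant first principal component duly emerges, with no mechanism in sight.

```python
# Editorial sketch: a first principal component always emerges from
# intercorrelated scores, whatever the underlying mechanism.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)                      # shared "efficiency" factor
subtests = np.column_stack([
    0.7 * latent + 0.7 * rng.normal(size=n),     # "vocabulary"
    0.6 * latent + 0.8 * rng.normal(size=n),     # "digit span"
    0.8 * latent + 0.6 * rng.normal(size=n),     # "factual knowledge"
])
z = (subtests - subtests.mean(axis=0)) / subtests.std(axis=0)

eigenvalues = np.linalg.eigvalsh(np.cov(z, rowvar=False))   # ascending order
share = eigenvalues[-1] / eigenvalues.sum()
print(f"top principal component explains ~{share:.0%} of the variance")
```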

One more point relevant to the general drift of the comments. My statement that “superintelligence” is incoherent is not a semantic quibble that the word is meaningless, and it’s not a pre-emptive strategy of Moving the True Scottish Goalposts. Sure, you could define “superintelligence,” just as you can define “miracle” or “perpetual motion machine” or “square circle.” And you could even recognize it if you ever saw it. But that does not make it coherent in the sense of being physically realizable.

If you’ll forgive me one more analogy, I think “superintelligence” is like “superpower.” Anyone can define “superpower” as “flight, superhuman strength, X-ray vision, heat vision, cold breath, super-speed, enhanced hearing, and nigh-invulnerability.” Anyone could imagine it, and recognize it when he or she sees it. But that does not mean that there exists a highly advanced physiology called “superpower” that is possessed by refugees from Krypton! It does not mean that anabolic steroids, because they increase speed and strength, can be “scaled” to yield superpowers. And a skeptic who makes these points is not quibbling over the meaning of the word “superpower,” nor would he or she balk at applying the word upon meeting a real-life Superman. The point is that we almost certainly will never, in fact, meet a real-life Superman. That’s because he’s defined by human imagination, not by an understanding of how things work. We will, of course, encounter machines that are faster than humans, that see X-rays, that fly, and so on, each exploiting the relevant technology, but “superpower” would be an utterly useless way of understanding them.

To bring it back to productive discussions of AI: there’s plenty of room to analyze the capabilities and limitations of particular intelligent algorithms and data structures—search, pattern-matching, error back-propagation, scripts, multilayer perceptrons, structure-mapping, hidden Markov models, and so on. But melting all these mechanisms into a global variable called “intelligence,” understanding it via turn-of-the-20th-century school tests, and mentally extrapolating it with a comic-book prefix, is, in my view, not a productive way of dealing with the challenges of AI.

Steven Pinker

9 August
Dear Steven Pinker,

I wanted to drill down on the following passage:

"Sure, you could define “superintelligence,” just as you can define “miracle” or “perpetual motion machine” or “square circle.” And you could even recognize it if you ever saw it. But that does not make it coherent in the sense of being physically realizable."

The way I use the word “coherent,” it basically means “we could recognize it if we saw it.” Clearly, then, there’s a sharp difference between this and “physically realizable,” although any physically-realizable empirical behavior must be coherent. Thus, “miracle” and “perpetual motion machine” are both coherent but presumably not physically realizable. “Square circle,” by contrast, is not even coherent.

You now seem to be saying that “superintelligence,” like “miracle” or “perpetuum mobile,” is coherent (in the “we could recognize it if we saw it” sense) but not physically realizable. If so, then that’s a big departure from what I understood you to be saying before! I thought you were saying that we couldn’t even recognize it.

If you do agree that there’s a quality that we could recognize as “superintelligence” if we saw it—and I don’t mean mere memory or calculation speed, but, let’s say, “the quality of being to John von Neumann in understanding and insight as von Neumann was to an average person”—and if the debate is merely over the physical realizability of that, then the arena shifts back to human evolution. As you know far better than me, the human brain was limited in scale by the width of the birth canal, the need to be mobile, and severe limitations on energy. And it wasn’t optimized for understanding algebraic number theory or anything else with no survival value in the ancestral environment. So why should we think it’s gotten anywhere near the limits of what’s physically realizable in our world?

Not only does the concept of “superpowers” seem coherent to me, but from the perspective of someone a few centuries ago, we arguably have superpowers—the ability to summon any of several billion people onto a handheld video screen at a moment’s notice, etc. etc. You’d probably reply that AI should be thought of the same way: just more tools that will enhance our capabilities, like airplanes or smartphones, not some terrifying science-fiction fantasy.

What I keep saying is this: we have the luxury of regarding airplanes and smartphones as “mere tools” only because there remain so many clear examples of tasks we can do that our devices can’t. What happens when the devices can do everything important that we can do, much better than we can? Provided we’re physicalists, I don’t see how we reject such a scenario as “not physically realizable.” So then, are you making an empirical prediction that this scenario, although both coherent and physically realizable, won’t come to pass for thousands of years? Are you saying that it might come to pass much sooner, like maybe this century, but even if so we shouldn’t worry, since a tool that can do everything important better than we can do it is still just a tool?

Scott Aaronson

9 August
Dear Scott Aaronson,

I’m not sure that “coherent” can be equated with “recognize it if we saw it,” and I suspect that the Potter Stewart pornography standard is not a good criterion in science, since it combines the subjective with the hypothetical (what’s to prevent me from saying, “Sure, I’d recognize a square circle if I saw one: it would have four equal perpendicular sides and all its points would be equidistant from the center! What’s the problem?”). But I don’t mean to quibble about words, so let me clarify what I meant, namely that “superintelligence” is incoherent in the same sense that “superpowers” (in the Superman sense) are incoherent: it corresponds to no single actual phenomenon in the world, but is a projection of our own unconstrained imagination.

Of course it’s notoriously foolish to stipulate what some technology will never do (controlled fission, moon landings, and so on). But that works both ways: it’s foolish to extrapolate technological advances using our imagination rather than the constraints, costs, and benefits. We still don’t have the nuclear-powered vacuum cleaners prophesied in the 1960s, and almost certainly never will, not because it’s physically impossible, but because it would be formidably difficult and for dubious benefit. In the case of surpassing humans at “everything,” it’s easy to forget how wicked even one of these challenges can be. Take driving—a simple goal, seemingly straightforward physical inputs, and just three degrees of freedom in the output. I myself would have predicted that AI would have surpassed humans years ago, yet it appears to be years if not decades away. This is despite the stupendous incentives in terms of promised safety, convenience, and profit, and massive investment from many companies. And the successes we have seen have come from enormous engineering ingenuity specifically directed at the problem, not from developing an AI with a high IQ, whatever that would mean. Now multiply this by all the tasks you’d include under “everything.” And recall that the conspicuous successes of recent years come from techniques that are useless for many of these challenges (you won’t cure cancer by training a deep learning network on half a trillion diseases and their cures). We have to be cautious in our predictions in both ways: no doubt we’ll continue to be surprised at the moon landings while waiting forever for the nuclear vacuum cleaners.

Also, does “everything” include human goals like achieving notoriety, vindicating pet ideas, maximizing profit, implementing Green or Woke or Christian ideals, and so on? Again, putting aside Pygmalion-Frankenstein-Golem-Pinocchio narratives, what is the engineering or economic incentive for duplicating an entire human in the first place, even if it were feasible? It’s hard enough to build a tool that does one thing well, like driving.

Steven Pinker

9 August
Dear Steven Pinker,

On your first point:

People have of course imagined many things, from nuclear bombs to digital computers, that corresponded to no actual phenomenon in the world at the time they were imagined. Notably, in the case of computers, a 19th-century skeptic could’ve reasonably objected: “wait, you say it’s for talking to friends and playing music and reading books and maintaining a calendar and writing documents and ordering food and doing difficult calculations? What possible benefit could there be to bundling all those wildly different abilities into a single imagined device, as if the same ‘superpower’ would somehow enable all of them?”

This is the whole trouble with reasoning about the future! Sometimes people’s imaginations were too unconstrained; other times, they weren’t nearly unconstrained enough.

As for “square circle,” I concede that one could parse it as a perfectly-coherent concept that can merely, for logical reasons, never correspond to any actual thing in the world. In practice, though, if a boss demanded a square circle on his desk by tomorrow morning, the employee would surely start wondering what the boss wants: “would he be happy with four circular arcs that meet at corners, forming a shape that’s partly circle-like and partly square-like? would he be happy with a circle in 1-norm, which is a square (or more precisely a diamond)?” Until the boss clarified, his request might rightly be called “incoherent.”
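(An editorial gloss on the 1-norm aside, not part of the original letter: the “unit circle” of the 1-norm is the set

$$\{(x,y)\in\mathbb{R}^{2} : \lVert (x,y)\rVert_{1} = |x| + |y| = 1\},$$

whose graph is the square with vertices $(\pm 1, 0)$ and $(0, \pm 1)$, i.e. a diamond.)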

In any case, probably none of this bears directly on the debate, since not only do I not find the concept of “superintelligence” inherently incoherent, I see no reason of physics or logic why it could never correspond to any real thing in the world. I don’t know whether it can or will, but I regard those as contingent questions of technology and biology, not of physics, math, or logic.

On your second and third points:

A lot of this strikes me as just coming down to the timescale. Returning to our imagined 19th-century skeptic of the “general-purpose digital computer,” the skeptic might say: “before you imagine this miracle-device for communicating across the world and reading and archiving and simulating physics and managing finances and playing music and etc. etc., stop to think through how wicked even one of these challenges will be!” The skeptic would’ve been entirely right, if they regarded this as practical advice to computer designers embarking on a century-long journey—but entirely wrong, if they regarded it as a philosophical argument for the incoherence of the imagined device that lay at the journey’s end.

As for fully self-driving cars, my understanding (others should chime in if they disagree!) is that, from the incomplete data we have, the best prototypes (e.g. Waymo) seem very near the point of being as safe as human drivers already. Certainly if one just wanted to use them for a taxi or ridesharing service within a specific city, and could preprogram detailed maps of that city’s every cranny. Right now, a car with enough onboard compute to run the ML model you’d really want would probably be a huge, bulky monstrosity, but rapid improvements there are about as predictable as anything in technology.

Unfortunately, it’s become clear that, even after self-driving cars become safer than humans, regulatory and psychological barriers will slow their adoption or maybe even prevent it entirely. A huge part of the reason is that, even if self-driving cars can soon cause (say) 10x fewer fatalities per mile than human drivers do, when they do cause fatalities, it will be salient and scary and weird—the AI mistaking a pedestrian’s shirt pattern for a green light or whatever.

Hopefully I don’t have to belabor this point to a man who’s written multiple bestselling books explaining how dramatic, newsworthy, tragic events systematically warp people’s understanding of reality, when they should just be looking at statistics!

Scott Aaronson
