GPT-3, PaLM, and look-up tables

[This topic is way outside my expertise. Just thinking out loud.]

Here is Google’s new language model PaLM having a think:

Alex Tabarrok writes

It seems obvious that the computer is reasoning. It certainly isn’t simply remembering. It is reasoning and at a pretty high level! To say that the computer doesn’t “understand” seems little better than a statement of religious faith or speciesism…

It’s true that AI is just a set of electronic neurons none of which “understand” but my neurons don’t understand anything either. It’s the system that understands. The Chinese room understands in any objective evaluation and the fact that it fails on some subjective impression of what it is or isn’t like to be an AI or a person is a failure of imagination not an argument…

These arguments aren’t new but Searle’s thought experiment was first posed at a time when the output from AI looked stilted, limited, mechanical. It was easy to imagine that there was a difference in kind. Now the output from AI looks fluid, general, human. It’s harder to imagine there is a difference in kind.

Tabarrok uses an illustration of Searle’s Chinese room featuring a giant look-up table:

But as Scott Aaronson has emphasized [PDF], a machine that simply maps inputs to outputs by consulting a giant look-up table should not be considered “thinking” (although it could be considered to “know”). First, such a look-up table would be beyond astronomically large for any interesting AI task and hence physically infeasible to implement in the real universe. But more importantly, the fact that something is being looked up rather than computed undermines the idea that the system understands or is reasoning.
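
To put a number on "beyond astronomically large", here is a back-of-the-envelope sketch. The vocabulary size and context length are round illustrative numbers of my own, not the hyperparameters of any particular model.

```python
# Back-of-the-envelope size of a look-up table that maps every possible
# prompt to a response. Vocabulary size and context length are illustrative
# round numbers, not the hyperparameters of any particular model.

import math

vocab_size = 50_000      # assumed number of tokens in the vocabulary
context_length = 2_000   # assumed maximum prompt length in tokens

# log10 of the number of distinct prompts of exactly context_length tokens;
# shorter prompts add a negligible correction on a log scale.
log10_entries = context_length * math.log10(vocab_size)

print(f"table entries      ~ 10^{log10_entries:.0f}")   # ~ 10^9398
print("atoms in universe  ~ 10^80")                      # commonly cited estimate
```

Even with much smaller vocabularies or much shorter prompts, the entry count dwarfs anything physically realizable, which is Aaronson's first point above.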

Of course, GPT-3 and PaLM are not consulting a look-up table, but they are less flexible and arguably much less compressed than a human brain. They may do a large amount of nominal computation, but I suspect that computation is very inefficient, placing them somewhere on the (logarithmically scaled) “spectrum of understanding” between a look-up table and the human brain. If so, I think it’s fair to say they “only partially understand”, or something like that.
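
For a rough sense of how far apart the two ends of that spectrum are, here is a sketch comparing description lengths in bits. The parameter counts are the published ones (175B for GPT-3, 540B for PaLM); the bits per parameter, the synapse count, and the bits per synapse are loose assumptions, and raw size is of course only a crude proxy for position on the spectrum.

```python
# Rough description lengths (in bits) for the endpoints of the "spectrum of
# understanding" and for the models discussed above. Parameter counts are
# published figures; everything else is a loose assumption for illustration.

import math

log10_bits = {
    # ~10^9398 entries (see the earlier sketch) at an assumed 32 bits each
    "giant look-up table": 9398 + math.log10(32),
    # published parameter counts, assuming 16 bits per parameter
    "GPT-3 (175e9 parameters)": math.log10(175e9 * 16),
    "PaLM (540e9 parameters)": math.log10(540e9 * 16),
    # ~1e14 synapses is a commonly cited order of magnitude; bits per
    # synapse is anyone's guess, so take a handful
    "human brain (~1e14 synapses)": math.log10(1e14 * 5),
}

for system, value in sorted(log10_bits.items(), key=lambda kv: kv[1]):
    print(f"{system:30s} ~ 10^{value:.1f} bits")
```

On raw size the models already sit near the brain end of the scale; the point of the paragraph above is that size alone doesn't settle how much of their computation amounts to understanding rather than glorified lookup.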

Alex Tabarrok says that “it’s harder to imagine there is a difference in kind”, but I’d counter with the popular AI aphorism that “sufficiently large quantitative differences are essentially qualitative”. Indeed, I’d say the recent results are still very consistent with (though by no means demonstrate) these closely related claims:

  • There is at least one large “missing piece” to understand about intelligence before AGI can be built with feasible resources.
  • Scaling up current ML methods will become infeasibly costly before they achieve human-level ability on most tasks, even if scaling up current methods could work in principle given unlimited resources (just as in the more extreme case of a look-up table); see the toy power-law sketch after this list.
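
To make the “infeasibly costly” worry concrete, here is a toy power-law extrapolation. The functional form and the exponent are assumptions in the spirit of published neural scaling laws, not measured values for GPT-3 or PaLM.

```python
# Toy illustration of why power-law scaling can become unaffordable.
# The functional form (loss ~ compute**(-alpha)) and the exponent are
# assumptions in the spirit of published neural scaling laws, not
# measured values for GPT-3 or PaLM.

ALPHA = 0.05  # assumed scaling exponent

def compute_multiplier(loss_reduction_factor: float, alpha: float = ALPHA) -> float:
    """Factor by which compute must grow to divide the loss by the given
    factor, if loss = a * compute**(-alpha)."""
    return loss_reduction_factor ** (1 / alpha)

for factor in (1.5, 2, 4):
    print(f"cutting loss by {factor}x needs ~{compute_multiplier(factor):.1e}x more compute")

# With these numbers, halving the loss needs about a million times more
# compute, which is the sense in which "just scale it up" can hit a cost
# wall long before any in-principle ceiling.
```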

But who knows. Gwern is bullish.
