
Can We Understand AI? A Response to Jordan Peterson’s Podcast

Much as snobby teenagers say of themselves, many claim that “nobody understands artificial intelligence (AI).” For example, in a recent interview with Brian Roemmele about ChatGPT, Jordan Peterson claimed that “The system is too complex to model” and that each AI system is not only incomprehensible but unique. He further claimed that “some of these AI systems, they’ve [AI experts] managed to reduce what they do learn to something approximating an algorithm. . . . [but] Generally the system can’t be and isn’t simplified.”

Brian Roemmele concurred: “nobody really understands precisely what it’s doing and what is called the hidden layer. It is so many interconnections of neurons that it essentially is a black box. . . .”

The criticism isn’t confined to these two. The “interpretability problem” is an ongoing topic of research within computer science. On closer examination, however, this criticism of deep learning models is ill-founded and ill-defined, and it leads to more confusion than enlightenment. We understand the inner workings of machine learning models very well, better than we understand any other system of comparable complexity, and they are not a black box.

(For the sake of this argument, I will not address the fact that OpenAI has, ironically, not published its parameters. In that sense, and in that sense alone, ChatGPT is a black box.)

It seems odd to claim that we don’t or can’t “understand” a thing we made. Surely we can open up a model and look at the flow of information. Exactly which numbers are multiplied and added to what, and pushed through which nonlinearities, is precisely defined. There isn’t a single step in the entire process that is “unpredictable” or “undefined” at the outset. Even to the extent that some models “randomly” draw from a distribution, this is both predetermined (all computers are only pseudorandom) and understandable (which is why we can describe it as “drawing from a distribution”).
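
To make this concrete, here is a minimal sketch in Python, using NumPy and toy weights rather than any real model’s parameters. Every step of the forward pass is explicit arithmetic, and even the “random” sampling at the end is seeded and therefore reproducible.

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # the "randomness" is seeded and reproducible

# Toy weights; in a trained model these numbers are simply read from disk.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0])                 # input vector

h = np.maximum(0.0, W1 @ x + b1)               # multiply, add, apply the ReLU nonlinearity
logits = W2 @ h + b2                           # multiply and add again
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: an explicit formula

# Even "drawing from a distribution" is a defined, reproducible step.
sampled = rng.choice(len(probs), p=probs)
print(h, probs, sampled)
```

Nothing in that sequence is mysterious; a trained network differs only in how many of these operations it chains together.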

So what do people mean when they say deep learning “can’t be understood”? The term “interpretability” itself isn’t well-defined; nobody has been able to give it a rigorous definition.

Pseudoscientists like Roemmele prey on people’s misunderstanding of technical language to further their false claims. For example, he claims “nobody really understands precisely what it’s doing and what is called the hidden layer.”

But the reality is that the hidden layers are no different from any other layer. “Hidden layer” is a technical term for any layer that is neither an input nor an output layer. It has nothing to do, despite what Roemmele implies, with any particular mysteriousness. It is no more and no less “understandable” than the input or output layers; it is “hidden” only in the sense that the end user doesn’t interact with it directly. Roemmele’s audience, however, doesn’t catch this sleight of hand. (I doubt Roemmele himself catches it, as he is not a data scientist.)
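
For the curious, here is a small illustrative sketch (toy layer sizes and random weights, not any real model) of what “hidden” means: any layer sitting between the input and the output. Its values can be read out as easily as anything else in the network.

```python
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [3, 5, 5, 2]  # input layer, two hidden layers, output layer (toy sizes)
weights = [rng.normal(size=(m, n)) for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

activation = np.array([1.0, 0.0, -1.0])  # values at the input layer
for i, W in enumerate(weights):
    activation = np.tanh(W @ activation)
    label = "output layer" if i == len(weights) - 1 else f"hidden layer {i + 1}"
    print(label, activation)  # every "hidden" value is right there to inspect
```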

Jordan Peterson must be given more leeway: unlike Roemmele, he doesn’t claim expertise in AI himself but cites his brother-in-law, Jim Keller, as his source of information. It is impossible to know exactly what Keller might have meant, but as filtered through Peterson, the statements on AI are false.

For example, it is nonsensical to claim “the system is too complex to model” when “the system” is the model. One might claim that atoms are too complex to understand. However, would it make any sense to claim that the Bohr model of the atom is too complex to understand? The data is the thing we don’t understand, and a model is a thing we use to understand it. The more accurate the model, the better we understand the underlying phenomena. Deep learning models are the most accurate models and so are the most understandable.

It is also nonsensical to claim that “Generally the system can’t be and isn’t simplified [to something approximating an algorithm].” “Algorithm” has a strict definition, and all AI falls under it: anything that can be carried out by a Turing machine is an algorithm, and that includes all AI. In fact, the vast majority of AI systems don’t even reach the standard of Turing completeness (the theoretical upper bound of what a computer can do) and can be described entirely as pushdown automata (a strict subset of Turing machines).
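
As an illustration, here is a toy stand-in (not GPT; the lookup table is a hypothetical placeholder for a trained model) showing that text generation is an ordinary algorithm: a finite, fully specified sequence of steps.

```python
# Text generation as a plain algorithm: a terminating loop of well-defined steps.
def next_token(context: str) -> str:
    # A trained model replaces this table lookup with the arithmetic shown earlier;
    # either way, it is a deterministic function from context to next token.
    table = {"the": "cat", "cat": "sat", "sat": "down"}
    return table.get(context.split()[-1], "<end>")

def generate(prompt: str, max_steps: int = 10) -> str:
    text = prompt
    for _ in range(max_steps):  # guaranteed to terminate
        token = next_token(text)
        if token == "<end>":
            break
        text += " " + token
    return text

print(generate("the"))  # -> "the cat sat down"
```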

Why might people want to claim deep learning models can’t be understood? For some statisticians, I think it is a last grasp at relevance as deep neural networks slowly drive older statistical models into obsolescence. For others, the “unknowability” of it all is scary and a welcome invitation for more government intervention. We shouldn’t let AI suffer the same fate as nuclear power, needlessly maligned over little to no threat at all. Let’s enjoy the fruits of our labor, including the massive cost reduction that comes from using a very human-comprehensible AI.
