
The Chomsky Theory and AI's Role in Understanding Universal Grammar

Eric Han

Why do variations of "mama" and "dada" almost universally refer to mom and dad, and why do so many similarities exist among human languages? Questions like these led linguists such as Noam Chomsky to propose the idea of universal grammar. According to this school of thought, the capacity to learn language is innate in humans: raised under normal conditions, children will always develop a complex language with features such as nouns and verbs. The concept is backed by the poverty of the stimulus argument, which holds that children are not exposed to enough data to acquire every aspect of their spoken language from experience alone. This clashes with the notion that language is learned solely from experience, and there is evidence supporting the innateness view.

A classic example supporting the poverty of the stimulus argument is the sentence "colorless green ideas sleep furiously," coined by Chomsky himself. Though nonsensical, it is grammatically correct, and native speakers can recognize that even if they have never heard it before. Chomsky argues that this ability showcases an innate grasp of language rules, a claim that has often brought linguistics and neuroscience into conflict. Interestingly, the advent of large language models (LLMs) like ChatGPT has prompted linguists to revisit Chomsky's universal grammar theory. Despite being trained only on finite text, LLMs can generate brand-new sentences, which seemingly counters the poverty of the stimulus argument. Researchers from Meta AI highlight that although only 7% of GPT-3's training dataset is non-English, the model exhibits impressive multilingual abilities. Some, like Stephen Hanson, posit that the capability of LLMs to generate language from pattern recognition and statistical regularities refutes Chomsky's innateness argument, indicating that language learning could occur through exposure alone. Linguists, computer scientists, and educators, among others, might gain insights into early language learning from studying how LLMs behave when trained on different datasets. However, comparing a human child to an algorithm is a considerable leap.
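Before weighing that objection, it helps to make concrete what "learning from pattern recognition and statistical regularities" looks like in its simplest possible form. The sketch below is a deliberately simplified bigram model in Python, not a description of how GPT-style systems actually work internally, and its tiny corpus is invented purely for illustration: the program is given no grammar at all, only example sentences, yet it can recombine the word-order statistics it absorbs into sequences it never saw.

```python
import random
from collections import defaultdict

# A tiny, invented corpus: the model's only exposure to "language."
corpus = [
    "the child hears a new sentence",
    "the child repeats a new sentence",
    "the model predicts the next word",
]

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def generate(start, length=6):
    """Sample a word sequence from the learned bigram statistics."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        # Pick the next word in proportion to how often it followed `word`.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # may produce a sequence that never appears verbatim in the corpus
```

Real LLMs replace these raw counts with neural networks trained on vastly more text, but the underlying idea is the same: structure emerges from exposure alone, which is precisely the claim Chomsky disputes in the passage below.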

In a guest essay published in the New York Times, Chomsky writes:  

“The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.”  

Chomsky’s point is that this difference in “thinking” between LLMs and humans is why advances in algorithmic “learning” remain, in his view, fundamentally unintelligent. As a result, the way LLMs “learn” neither proves nor disproves the theory of universal grammar.

In fact, Chomsky argues it would be inherently unethical to treat ChatGPT and other LLMs as if they were sources of intelligence. Because LLMs cannot reason, they are not bound by correctness; as Chomsky writes, “machine learning systems can learn both that the earth is flat and that the earth is round.” By extension, LLMs are not capable of moral thinking. ChatGPT, for example, is morally indifferent, and Chomsky argues that moral indifference is no different from evil.

The rise and growing popularity of LLMs have inspired visions of two starkly different futures. While some anticipate that advancements in artificial intelligence will ultimately pave the way to utopia, others, including writers and artists, are already grappling with the dystopian realities ushered in by these developments. Nonetheless, within the realm of linguistics, LLMs offer a promising prospect for unraveling some of the field's long-standing enigmas.

