Debate: Do Language Models Need Sensory Grounding for Meaning and Understanding?

YouTube recording.

Yann LeCun (yes)

Yann proposes an architecture for real machine intelligence in the paper A Path Towards Autonomous Machine Intelligence, which is also accompanied by a video lecture.

Emily (no)

Brian (yes)

ChatGPT is an LLM fine-tuned on human-labelled tasks

More details about the process by which ChatGPT was trained can be found in the paper Training language models to follow instructions with human feedback.

What do humans know when they know the meaning of a word?

Take the word umbrella. Knowing the word umbrella is not only about knowing its visual appearance in the world: a stick with a large circular canopy at one end. It is also about mastering the concept it embodies and extrapolating its use to contexts where no visual representation is implied, for example:

- The technology falls under the umbrella of large language models.
- An umbrella organization.
- An American nuclear umbrella over the West.

More on this can be read in the paper: Word meaning in minds and machines

Instruction models can struggle to learn abstract, flexible meanings

A model was trained on the phrases small circle and large circle, but their visual representations were always the same. That is, the small circle in the training data always had the same area, and likewise the large circle. During testing, the model was unable to distinguish between small and large circles whose areas differed from those it was trained on.
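The failure mode described above can be illustrated with a deliberately simplified sketch (hypothetical code, not the actual experiment): a "model" that memorised absolute areas from training instead of the relative concept of small versus large. The training areas (1.0 and 5.0) and the test pair are made-up numbers for illustration.

```python
# Hypothetical toy model that memorised absolute sizes from training,
# rather than the relational concept "smaller / larger than its partner".
TRAIN = {1.0: "small circle", 5.0: "large circle"}

def absolute_model(area):
    # Returns the nearest memorised training area.
    return min(TRAIN, key=lambda a: abs(a - area))

def label_pair(a, b):
    # Labels each circle of a pair independently, by absolute size only.
    return TRAIN[absolute_model(a)], TRAIN[absolute_model(b)]

# In a test pair with areas 5 and 20, the circle of area 5 is the *small*
# one relative to its partner, but the absolute model still calls it large.
print(label_pair(5.0, 20.0))  # -> ('large circle', 'large circle')
```

A model that had learned the flexible, relational meaning would instead compare the two areas and label the smaller member of the pair "small", regardless of the absolute values seen in training.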

Word tasks

The problem above can be mitigated by task training. The concept of a capital city can be learned with an analogy task, and the relation between small and large can be learned in the same way.
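The analogy task can be sketched with vector arithmetic over word embeddings, in the style of word2vec's "Paris - France + Germany ≈ Berlin". The tiny hand-crafted vectors below are hypothetical; a real model would learn them from text.

```python
import numpy as np

# Hypothetical 4-dimensional toy embeddings: dims 0-2 encode the country,
# dim 3 encodes "is a capital city". Real embeddings are learned, not built.
vecs = {
    "france":  np.array([1.0, 0.0, 0.0, 0.0]),
    "paris":   np.array([1.0, 0.0, 0.0, 1.0]),
    "germany": np.array([0.0, 1.0, 0.0, 0.0]),
    "berlin":  np.array([0.0, 1.0, 0.0, 1.0]),
    "spain":   np.array([0.0, 0.0, 1.0, 0.0]),
    "madrid":  np.array([0.0, 0.0, 1.0, 1.0]),
}

def analogy(a, b, c):
    """Solve a : b :: c : ? via the offset b - a + c, ranked by cosine similarity."""
    target = vecs[b] - vecs[a] + vecs[c]
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(
        candidates,
        key=lambda w: float(candidates[w] @ target)
        / (np.linalg.norm(candidates[w]) * np.linalg.norm(target) + 1e-9),
    )

print(analogy("france", "paris", "germany"))  # -> berlin
print(analogy("france", "paris", "spain"))    # -> madrid
```

The same offset trick works for other relations, such as small : smaller :: large : larger, which is why analogy training can teach the relational meanings that the fixed-area circle model failed to acquire.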

Consistency in their beliefs

It was found that a simple change in the wording caused BERT to change its beliefs:

- Sharks are fish, but dolphins are mammals.
- Sharks are fish, and dolphins are birds.

ChatGPT beliefs

The experiment above was tried on ChatGPT, but the authors were unable to mislead it in this particular case. Nonetheless, there are reported cases of inconsistencies in ChatGPT's answers.

Nonetheless, I could mislead ChatGPT:

- Q: How would a dolphin react when it discovers that its offspring's eggs were eaten?
- A: As a highly intelligent and emotional creature, a dolphin would likely experience a range of emotions upon discovering that their offspring eggs had been eaten. At first, the dolphin would likely feel shock and disbelief, unable to comprehend the loss of their precious eggs. They might exhibit signs of distress, such as vocalizing loudly, swimming rapidly, or displaying erratic behavior as they process the devastating news.

David Chalmers

Do humans need sensory grounding for meaning?

Jake Browning

A quotation from John Haugeland:

I certainly do not deny that language is characteristic of people, and centrally important to what we are. But I want to resist what strikes me as an all too prevalent tendency to make of language a magic fulcrum on which every human quality turns - as if language were the whole essence of what distinguishes us from machines on the one hand, and beasts on the other.