Debate: Do Language Models Need Sensory Grounding for Meaning and Understanding?
Yann LeCun (yes)
Yann proposes an architecture for real machine intelligence in the paper A Path Towards Autonomous Machine Intelligence, which is also accompanied by a video lecture.
Emily (no)
Brenden Lake (yes)
ChatGPT is an LLM fine-tuned on human-labelled tasks.
More details about how ChatGPT was trained are given in the paper Training language models to follow instructions with human feedback.
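That paper describes three stages: supervised fine-tuning on human demonstrations, training a reward model on human preference rankings, and reinforcement learning against that reward model. Below is a minimal sketch of the first stage only, assuming gpt2 as a stand-in base model and an invented one-example dataset; the reward-model and RL stages are omitted.

```python
# Sketch of InstructGPT-style supervised fine-tuning (SFT) on human
# demonstrations. "gpt2" and the toy dataset are illustrative assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for the real base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Human-labelled demonstrations: (instruction, desired response) pairs.
demonstrations = [
    ("Explain photosynthesis in one sentence.",
     "Plants convert sunlight, water, and CO2 into sugar and oxygen."),
]

def encode(instruction, response):
    """Concatenate prompt and response; train the LM to predict the tokens."""
    text = f"{instruction}\n{response}{tokenizer.eos_token}"
    enc = tokenizer(text, truncation=True, max_length=128, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # causal language-modeling objective
    return enc

train_dataset = [encode(i, r) for i, r in demonstrations]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train_dataset,
)
trainer.train()
```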
What do humans know when they know the meaning of a word?
Take the word umbrella. Knowing the word umbrella is not only about knowing its visual representation in the world: a stick with a large circular canopy at one end. It is also about mastering the concept it expresses and extrapolating its use to contexts where no visual representation is implied, for example:
- The technology falls under the umbrella of large language models
- An umbrella organization
- An American nuclear umbrella over the West
More on this can be read in the paper Word meaning in minds and machines.
Instruction models can struggle to learn abstract, flexible meanings
A model was trained on the words small circle and large circle, but each had a fixed visual representation: the small circle always had the same area during training, and so did the large circle. At test time, the model failed to generalize when the small and large circles had areas different from the ones it was trained on.
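A toy illustration of the failure mode (not the original experiment; the areas and the threshold "model" below are invented): a learner that latches onto absolute area rather than relative size gets the test cases wrong as soon as the areas move.

```python
# Training data: "small" circles always have area 10, "large" always 50,
# so absolute area perfectly predicts the label during training.
train = [(10.0, "small"), (10.0, "small"), (50.0, "large"), (50.0, "large")]

# A minimal learned rule: threshold halfway between the two training areas.
threshold = (10.0 + 50.0) / 2  # 30.0

def predict(area):
    return "small" if area < threshold else "large"

# At test time the pair is (40, 60): 40 is clearly the *smaller* circle of
# the pair, but the absolute-area rule still calls it "large" since 40 > 30.
print(predict(40.0))  # -> large
print(predict(60.0))  # -> large
```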
Word tasks
The problem above can be addressed by task training. The concept of a city capital can be learned with the analogy task (e.g., Paris is to France as Tokyo is to Japan), and the relation between small and big can be learned in the same way.
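A sketch of the analogy task with pre-trained word vectors (the gensim model name is one arbitrary choice, and the exact neighbours returned depend on the embedding):

```python
# Classic word-vector analogies via vector arithmetic.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pre-trained GloVe embeddings

# Capital-city relation: paris - france + japan ~ tokyo
print(vectors.most_similar(positive=["paris", "japan"],
                           negative=["france"], topn=1))

# Size relation learned the same way: bigger - big + small ~ smaller
print(vectors.most_similar(positive=["bigger", "small"],
                           negative=["big"], topn=1))
```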
Consistency in their beliefs
It was found that a simple change in wording caused BERT to change its beliefs:
- Sharks are fish, but dolphins are mammals.
- Sharks are fish, and dolphins are birds.
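One way to probe this is with masked-word prediction, as in the sketch below; the model choice is ours, and the sentences follow the example above:

```python
# Probe BERT's "beliefs" by letting it fill in the masked word and checking
# whether changing a single connective flips the answer.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "sharks are fish, but dolphins are [MASK].",
    "sharks are fish, and dolphins are [MASK].",
]:
    top = fill(sentence)[0]  # highest-scoring completion
    print(sentence, "->", top["token_str"], round(top["score"], 3))
```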
ChatGPT beliefs
The experiment above was tried on ChatGPT, but the authors were not able to mislead it in this particular case. Nonetheless, there are reported cases of inconsistencies in ChatGPT's answers.
Still, I could mislead ChatGPT myself:
- How would a dolphin react when it discover that their offspring eggs were eaten?
- As a highly intelligent and emotional creature, a dolphin would likely experience a range of emotions upon discovering that their offspring eggs had been eaten. At first, the dolphin would likely feel shock and disbelief, unable to comprehend the loss of their precious eggs. They might exhibit signs of distress, such as vocalizing loudly, swimming rapidly, or displaying erratic behavior as they process the devastating news.
Dolphins are mammals and do not lay eggs, so the question rests on a false premise that ChatGPT accepted without objection.
David Chalmers (no)
Do humans need sensory grounding for meaning?
- Emily Bender and Alexander Koller: language models lack sensory grounding and therefore don't have meaning or understanding
- Yann LeCun on Twitter, September 2022: Can an LLM figure this out? "In front of us are six gears numbered 1 to 6, mounted on axles in a row. Each gear is engaged with the next gear. If gear number 3 is rotated clockwise, in which direction will gears 1 and 6 rotate?"
- At the time it could not, but now (2023) it can! (Tested with GPT-4.)
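The puzzle reduces to a parity check: meshed gears alternate direction, so gear 1 (two meshes from gear 3) turns clockwise and gear 6 (three meshes away) turns counterclockwise. A tiny verification of our own:

```python
# Adjacent meshed gears rotate in opposite directions, so a gear's direction
# depends only on the parity of its distance from the driven gear.
def gear_direction(gear, driven=3, driven_dir="clockwise"):
    if abs(gear - driven) % 2 == 0:
        return driven_dir
    return "counterclockwise" if driven_dir == "clockwise" else "clockwise"

print(gear_direction(1))  # clockwise
print(gear_direction(6))  # counterclockwise
```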
Jacob Browning (yes)
A quotation from John Haugeland:
I certainly do not deny that language is characteristic of people, and centrally important to what we are. But I want to resist what strikes me as an all too prevalent tendency to make of language a magic fulcrum on which every human quality turns - as if language were the whole essence of what distinguishes us from machines on the one hand, and beasts on the other.