Lecture 1 Rabbit Hole
A rabbit hole refers to a research process in which, starting from a single article or reference, many other interesting references appear, resulting in a very long chain of information.
Generative Pre-trained Transformer 3 (GPT-3)
Starting point: Reference to GPT-3 based models in Lecture-1.
Rabbit hole list
- GPT-3 Wikipedia article: https://en.wikipedia.org/wiki/GPT-3
- GPT-3 paper: https://arxiv.org/pdf/2005.14165.pdf
- Transformers paper: https://arxiv.org/abs/1706.03762
This is the deep learning architecture at the origin of generative models.
- Training datasets: Common Crawl, WebText2, Books1, Books2, Wikipedia.
- OpenAI: https://openai.com/
This is the research group that developed GPT-3. They have a publicly available API.
- AI-produced answers based on fictional or real characters: https://www.aiwriter.app/
- An article written by AI for the Guardian: https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
- WIRED article about GPT-3: https://www.wired.com/story/ai-text-generator-gpt-3-learning-language-fitfully/
Example uses: generating programming code; creating spam; summarizing house rental contracts. But GPT-3 often spews contradictions or nonsense, because its statistical word-stringing is not guided by any intent or a coherent understanding of reality. "It doesn't have any internal model of the world, or any world, and so it can't do reasoning that would require such a model."
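To see what "statistical word-stringing" means at its simplest, here is a minimal sketch (my own illustration, not how GPT-3 is actually built): a bigram model that picks each next word purely from how often it followed the previous word in a tiny toy corpus. There is no meaning or world model involved, only conditional frequencies; GPT-3 does the same kind of next-token prediction, just with a vastly larger neural model and corpus.

```python
# Illustration only: a bigram model strings words together purely by
# conditional word-following frequencies, with no model of the world.
import random
from collections import defaultdict, Counter

# Tiny toy corpus (made up for this sketch).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Sample a word chain by repeatedly drawing the next word
    in proportion to how often it followed the current one."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:  # dead end: word never had a successor
            break
        options, counts = zip(*candidates.items())
        words.append(random.choices(options, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
```

The output is locally plausible word-by-word but has no intent behind it, which is the same failure mode the quote above describes at a much larger scale.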
- The deepest problem with Deep Learning: https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695
- Noam Chomsky's thoughts: https://www.youtube.com/watch?v=c6MU5zQwtT4
The video touches on several subjects, many of them related to language.
"It's not a language model. It works just as well for impossible languages as for actual languages. It is therefore refuted, if intended as a language model, by normal scientific criteria. [...] Perhaps it's useful for some purpose, but it seems to tell us nothing about language or cognition generally."