Episode #430 from 2:38:04
AI and memory
For the case of driving, I think it could be quite effective. That's one of the things that's currently missing, even though OpenAI just recently announced adding memory, and I did want to ask you about that: how important is it, and how difficult is it, to add some of the memory mechanisms you've seen in humans to AI systems?

I would say superficially not that hard, but at a deeper level very, very hard, because we don't understand episodic memory. One of the ideas I talk about in the book concerns one of the oldest dilemmas in computational neuroscience, what Steve Grossberg called the stability-plasticity dilemma: when do you decide something is new and overwrite your preexisting knowledge, versus going with what you had before and making only incremental changes? Part of the problem, if you're trying to design an LLM or something like that, is that, especially for English, there are so many exceptions to the rules. If you want to rapidly learn the exceptions, you're going to lose the rules, and if you want to keep the rules, you have a harder time learning the exceptions. David Marr was one of the early pioneers of computational neuroscience, and then Jay McClelland, my colleague Randy O'Reilly, and some other people like Neal Cohen all started to come up with the idea that maybe that's part of what we need.
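The rules-versus-exceptions tradeoff described above can be illustrated numerically. Below is a minimal sketch (not from the conversation, and not any particular model of memory): a single parameter is trained incrementally toward a "rule" value and then encounters one "exception". The learning rate alone decides whether the exception overwrites the rule (plasticity) or is barely registered (stability).

```python
def train(lr, rule_reps=100, exception_reps=1):
    """Incrementally fit one weight to a 'rule' target (+1.0),
    then expose it to a single 'exception' target (-1.0)."""
    w = 0.0
    for _ in range(rule_reps):       # many rule-consistent examples
        w += lr * (1.0 - w)
    for _ in range(exception_reps):  # one contradicting exception
        w += lr * (-1.0 - w)
    return w

# High learning rate: the one exception nearly erases the rule.
fast = train(lr=0.9)
# Low learning rate: the rule survives, but the exception is barely learned.
slow = train(lr=0.05)
print(f"fast learner after exception: {fast:+.2f}")  # near -1.0
print(f"slow learner after exception: {slow:+.2f}")  # still near +1.0
```

No single learning rate resolves the dilemma, which is why the complementary-learning-systems line of work associated with McClelland and O'Reilly proposes separate fast and slow learning systems rather than one compromise rate.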