Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com
Manage episode 410037563 series 3526805
Love Causal Bandits Podcast?
Help us bring more quality content: Support the show
Video version of this episode is available here
Causal Inference with LLMs and Reinforcement Learning Agents?
Do LLMs have a world model?
Can they reason causally?
What's the connection between LLMs, reinforcement learning, and causality?
Andrew Lampinen, PhD (Google DeepMind) shares insights from his research on LLMs, reinforcement learning, causal inference, and generalizable agents.
We also discuss the nature of intelligence and rationality, and how they interact with evolutionary fitness.
Join us on the journey!
Recorded on Dec 1, 2023 in London, UK.
About The Guest
Andrew Lampinen, PhD is a Senior Research Scientist at Google DeepMind. He holds a PhD in Cognitive Psychology from Stanford University. He's interested in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment.
Connect with Andrew:
- Andrew on Twitter/X
- Andrew's web page
About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).
Connect with Alex:
- Alex on the Internet
Links
Papers
- Lampinen et al. (2023) - "Passive learning of active causal strategies in agents and language models" (https://arxiv.org/pdf/2305.16183.pdf)
- Dasgupta, Lampinen, et al. (2022) - "Language models show human-like content effects on reasoning tasks" (https://arxiv.org/abs/2207.07051)
- Santoro, Lampinen, et al. (2021) - "Symbolic behaviour in artificial intelligence" (https://www.researchgate.net/publication/349125191_Symbolic_Behaviour_in_Artificial_Intelligence)
- Webb et al. (2022) - “Emergent Analogical Reasoning in Large Language Models” (https://arxiv.org/abs/2212.09196)
Books
- Tomasello (2019) - "Becoming Human"
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4
Chapters
1. Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com (00:00:00)
2. [Ad] Rumi.ai (00:22:02)
3. (Cont.) Causal Inference & Reinforcement Learning with Andrew Lampinen Ep 13 | CausalBanditsPodcast.com (00:22:51)
28 episodes