
LW - Fluent, Cruxy Predictions by Raemon

21:13
Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pt.player.fm/legal.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fluent, Cruxy Predictions, published by Raemon on July 10, 2024 on LessWrong.

The latest in the Feedback Loop Rationality series.

Periodically, people (including me) try to operationalize predictions, or bets, and... it doesn't seem to help much. I think I recently "got good" at making "actually useful predictions." I currently feel on-the-cusp of unlocking a host of related skills further down the rationality tech tree. This post will attempt to spell out some of the nuances of how I currently go about it, and paint a picture of why I think it's worth investing in.

The takeaway that feels most important to me is: it's way better to be "fluent" at operationalizing predictions than merely "capable at all." Previously, "making predictions" was something I did separately from my planning process. It was a slow, clunky process. Nowadays, reasonably often, I can integrate predictions into the planning process itself, because it feels lightweight. I'm better at quickly feeling around for "what sort of predictions would actually change my plans if they turned out a certain way?", and then quickly checking in on my intuitive sense of "what do I expect to happen?"

Fluency means you can actually use it day-to-day to help with whatever work is most important to you. Day-to-day usage means you can actually get calibrated for predictions in whatever domains you care about. Calibration means that your intuitions will be good, and you'll know they're good.

If I were to summarize the change in how I predict, it's a shift from "observables-first", i.e. looking for things I could easily observe/operationalize that were somehow related to what I cared about, to "cruxy-first", i.e. looking for things that would change my decision-making, even if vague, and learning to either better operationalize those vague things or find a way to get better data. (And then there's a cluster of skills and shortcuts to make that easier.)

Disclaimer: This post is on the awkward edge of "feels full of promise" but "I haven't yet executed on the stuff that'd make it clearly, demonstrably valuable." (And I've tracked the results of enough "feels promising" stuff to know it has a <50% hit rate.) I feel like I can see the work needed to make this technique actually functional. It's a lot of work. I'm not sure if it's worth it. (Alas, I'm inventing this technique because I don't know how to tell if my projects are worth it, and I really want a principled way of forming beliefs about that. Since I haven't finished vetting it yet, it's hard to tell!)

There's a failure mode where rationality schools proliferate without evidence: people get excited about their new technique that sounds good on paper and publish exciting blogposts about it. There's an unfortunate catch-22 here. Naive rationality gurus post excitedly, immediately, as if they already know what they're doing. This is even kinda helpful (because believing in the thing really does help make it more true). More humble and realistic rationality gurus wait to publish until they're confident their thing really works. They feel so in-the-dark, fumbling around. As they should. Because, like, they (we) are.

But I nonetheless feel like the rationality community was overall healthier when it was full of excited people who still believed a bit in their hearts that rationality would give them superpowers, and posted excitedly about it. It also seems intellectually valuable for notes to be posted along the way, so others can track them and experiment. Rationality training doesn't give you superpowers - it's a lot of work, and then "it's a pretty useful skill, among other skills." I expect, a year from now, I'll have a better conception of the ideas in this post. This post is still my best guess about how this skill can work, and I'd like it if other people excitedl...
  continue reading
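The post's "fluency leads to day-to-day use leads to calibration" chain can be made concrete with a small sketch. The code below is not from the post; the log format, the example entries, and the probability-bucketing scheme are all illustrative assumptions. It just records predictions with stated probabilities and outcomes, then compares each probability bucket against its actual hit rate.

```python
# Minimal prediction log with a calibration check.
# Illustrative sketch: the log format, entries, and bucketing scheme
# are assumptions, not something taken from the post itself.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str      # the operationalized, cruxy question
    p: float        # stated probability that it resolves True
    outcome: bool   # how it actually resolved

def calibration_report(log, n_buckets=10):
    """Group predictions by stated probability and print each bucket's hit rate."""
    buckets = defaultdict(list)
    for pred in log:
        idx = min(int(pred.p * n_buckets), n_buckets - 1)  # p=1.0 joins the top bucket
        buckets[idx].append(pred)
    for idx in sorted(buckets):
        group = buckets[idx]
        hit_rate = sum(pred.outcome for pred in group) / len(group)
        lo, hi = idx / n_buckets, (idx + 1) / n_buckets
        print(f"stated {lo:.0%}-{hi:.0%}: {len(group)} predictions, {hit_rate:.0%} came true")

# Hypothetical entries; a real log would accumulate day-to-day predictions.
log = [
    Prediction("Project ships a usable demo by August", 0.7, True),
    Prediction("This experiment changes my plan within a month", 0.3, False),
    Prediction("I still endorse this post in a year", 0.6, True),
]
calibration_report(log)
```

With enough entries per bucket, a well-calibrated forecaster's 70% bucket resolves true roughly 70% of the time; large gaps flag the domains where intuitions need adjusting.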

1801 episodes

