![Obrázek epizody DSB Podcast #18 [CZ] - Reasoning Failures in LLM](https://i.actve.net/youradio_news/tracks/e/b/eb17929c2fb66daa6301a7325824a0c22433e666.png)
DSB Podcast #18 [CZ] - Reasoning Failures in LLM
In this episode we discuss the research paper 'Large Language Model Reasoning Failures,' published in Transactions on Machine Learning Research. We focus on its framework for categorizing the situations in which large language models fail at reasoning, noting that the paper's goal is not to evaluate whether these models actually think, but to understand their limitations compared to human reasoning. We also highlight that these errors can resemble human cognitive biases and discuss possible approaches to overcoming them.
A blog post with a detailed description is available.