442 Episodes

  1. Language Bottleneck Models: A Framework for Interpretable Knowledge Tracing and Beyond

    Published: 3-7-2025
  2. Learning to Explore: An In-Context Learning Approach for Pure Exploration

    Published: 3-7-2025
  3. Human-AI Matching: The Limits of Algorithmic Search

    Published: 25-6-2025
  4. Uncertainty Quantification Needs Reassessment for Large-language Model Agents

    Published: 25-6-2025
  5. Bayesian Meta-Reasoning for Robust LLM Generalization

    Published: 25-6-2025
  6. General Intelligence Requires Reward-based Pretraining

    Published: 25-6-2025
  7. Deep Learning is Not So Mysterious or Different

    Published: 25-6-2025
  8. AI Agents Need Authenticated Delegation

    Published: 25-6-2025
  9. Probabilistic Modelling is Sufficient for Causal Inference

    Published: 25-6-2025
  10. Not All Explanations for Deep Learning Phenomena Are Equally Valuable

    Published: 25-6-2025
  11. e3: Learning to Explore Enables Extrapolation of Test-Time Compute for LLMs

    Published: 17-6-2025
  12. Extrapolation by Association: Length Generalization Transfer in Transformers

    Published: 17-6-2025
  13. Uncovering Causal Hierarchies in Language Model Capabilities

    Published: 17-6-2025
  14. Generalization or Hallucination? Understanding Out-of-Context Reasoning in Transformers

    Published: 17-6-2025
  15. Improving Treatment Effect Estimation with LLM-Based Data Augmentation

    Published: 17-6-2025
  16. LLM Numerical Prediction Without Auto-Regression

    Published: 17-6-2025
  17. Self-Adapting Language Models

    Published: 17-6-2025
  18. Why in-context learning models are good few-shot learners?

    Published: 17-6-2025
  19. Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina∗

    Published: 14-6-2025
  20. The Logic of Machines: The AI Reasoning Debate

    Published: 12-6-2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.