Least-To-Most Prompting Enables Complex Reasoning in Large Language Models

AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact

Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks that require solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. So...
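To make the idea concrete, here is a minimal sketch of the two-stage procedure in Python. It is not the paper's implementation: the call_llm helper and the exemplar strings are assumptions standing in for whatever model API and few-shot prompts one actually uses.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API call (assumption, not a real library)."""
    raise NotImplementedError


def least_to_most(question: str, decompose_exemplars: str, solve_exemplars: str) -> str:
    # Stage 1: decomposition -- ask the model to break the complex
    # problem into a sequence of simpler subproblems.
    decomposition = call_llm(
        f"{decompose_exemplars}\n\nQ: {question}\n"
        "Break this problem into simpler subproblems, one per line:"
    )
    subproblems = [line.strip() for line in decomposition.splitlines() if line.strip()]

    # Stage 2: sequential solving -- answer each subproblem in order,
    # feeding previously solved subproblems and their answers back into the prompt.
    context = solve_exemplars
    answer = ""
    for sub in subproblems:
        prompt = f"{context}\n\nQ: {sub}\nA:"
        answer = call_llm(prompt)
        context = f"{prompt} {answer}"

    # The answer to the final subproblem serves as the answer to the original question.
    return answer
```

The point of the second stage is that each subproblem is easier than the original question and is solved with the previous subproblems' answers already in context, which is what lets the model generalize from easy exemplars to harder test problems.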
