How to use chain-of-thought (CoT) prompting to dramatically improve reasoning in ChatGPT, Claude, and other language models.
Chain-of-thought (CoT) prompting asks a language model to lay out its reasoning step by step before committing to a final answer. This substantially improves performance on math, logic, coding, and other multi-step analysis tasks.
Compare asking for a direct answer with prompting for step-by-step reasoning:
Without CoT:
What is 23 × 47? → 1081 (model might get this wrong)
With CoT:
What is 23 × 47? Think step by step. → 23 × 47 = 23 × 40 + 23 × 7 = 920 + 161 = 1081 ✓
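The decomposition shown in the CoT answer can be checked mechanically. A minimal sketch (plain Python, no model involved):

```python
# Verify the step-by-step decomposition of 23 x 47 shown above.
a, b = 23, 47
partial_tens = a * 40   # 23 * 40 = 920
partial_ones = a * 7    # 23 * 7 = 161
total = partial_tens + partial_ones
assert total == a * b
print(total)  # 1081
```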
The simplest approach, often called zero-shot CoT, is to append a trigger phrase such as "Think step by step" or "Let's work through this":
Analyze this code for security vulnerabilities.
Think step by step, examining each function for potential issues.
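Programmatically, this is just string concatenation. A sketch, where `build_cot_prompt` is a hypothetical helper (not a library API):

```python
# Sketch: appending a zero-shot CoT trigger to any task prompt.
COT_TRIGGER = "Think step by step."

def build_cot_prompt(task: str, trigger: str = COT_TRIGGER) -> str:
    """Return the task text with a reasoning trigger appended."""
    return f"{task.strip()}\n{trigger}"

prompt = build_cot_prompt("Analyze this code for security vulnerabilities.")
print(prompt)
```

The resulting string would then be sent to whatever model client you use.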
Few-shot CoT goes further: provide worked examples that demonstrate the reasoning process, then let the model continue the pattern:
Q: A store has 15 apples. 8 are sold, then 12 more arrive. How many?
A: Start with 15. Subtract 8 sold: 15 - 8 = 7. Add 12 new: 7 + 12 = 19. Answer: 19.
Q: A warehouse has 200 boxes. 45 are shipped, 30 returned, 80 new arrive. How many?
A: [model continues the pattern]
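Assembling such a few-shot prompt is straightforward. A sketch using the examples above:

```python
# Sketch: assembling a few-shot CoT prompt from worked Q/A pairs.
examples = [
    ("A store has 15 apples. 8 are sold, then 12 more arrive. How many?",
     "Start with 15. Subtract 8 sold: 15 - 8 = 7. Add 12 new: 7 + 12 = 19. Answer: 19."),
]
new_question = ("A warehouse has 200 boxes. 45 are shipped, "
                "30 returned, 80 new arrive. How many?")

blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
blocks.append(f"Q: {new_question}\nA:")   # trailing "A:" invites the model to continue
prompt = "\n\n".join(blocks)
print(prompt)
```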
Self-consistency generates multiple reasoning paths and takes the majority answer:
Solve this problem using three different approaches.
For each approach, show your full reasoning.
Then compare your answers and give the most likely correct one.
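The voting step can be done outside the model. A minimal sketch, assuming each answer string came from a separately sampled completion (hard-coded placeholders here):

```python
from collections import Counter

# Sketch: self-consistency voting over several sampled final answers.
sampled_answers = ["265", "265", "255"]  # placeholder samples

def majority_answer(answers):
    """Return the most common final answer across reasoning paths."""
    return Counter(answers).most_common(1)[0][0]

print(majority_answer(sampled_answers))  # 265
```

In practice you would sample the model several times at a nonzero temperature, extract only the final answer from each completion, and vote.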
| Task Type | CoT Benefit |
|---|---|
| Math / arithmetic | High |
| Logic puzzles | High |
| Multi-step reasoning | High |
| Code debugging | Medium-High |
| Creative writing | Low |
| Simple factual Q&A | Low-None |
Tree-of-thought prompting asks the model to explore multiple solution branches:
Consider three possible approaches to solve this.
For each, explore 2-3 steps ahead.
Evaluate which approach is most promising and continue with it.
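The evaluate-and-continue step can be sketched as branch scoring. In a real tree-of-thought setup the score would come from asking the model to rate each partial plan; the heuristic below is an arbitrary placeholder:

```python
# Toy sketch of branch evaluation in tree-of-thought style prompting.
approaches = {
    "brute force": ["enumerate all cases", "check each", "collect results"],
    "divide and conquer": ["split problem", "solve halves"],
}

def score(steps):
    # Placeholder heuristic: prefer plans with fewer steps.
    # A real system would ask the model to rate each branch instead.
    return -len(steps)

best = max(approaches, key=lambda name: score(approaches[name]))
print(best)  # divide and conquer
```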
Self-verification asks the model to check its own work:
After solving, review your answer:
1. Does it satisfy all constraints?
2. Are there edge cases you missed?
3. Is there a simpler solution?
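The same review checklist can be applied mechanically when the constraints are computable. A sketch using the warehouse example (200 − 45 + 30 + 80) as the answer under review:

```python
# Sketch: mechanical self-check of an answer against stated constraints.
answer = 265

checks = {
    "matches recomputation": answer == 200 - 45 + 30 + 80,
    "non-negative box count": answer >= 0,
    "is a whole number": isinstance(answer, int),
}

failed = [name for name, ok in checks.items() if not ok]
print("all checks passed" if not failed else f"failed: {failed}")
```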