Module 01 - Foundations Lesson 6
MODULE 01 - EXERCISE

Exercise: Build Your Mental Model of How Claude Thinks

🕑 15 min read 🎯 Beginner

The Goal

After this exercise, you'll have an accurate mental model of how Claude processes your requests. This mental model is worth more than any prompt template - it lets you predict Claude's behavior and debug issues intuitively.

Step 1: The Input Pipeline

When you send a message to Claude, here's what actually happens:

```
# What you think happens:
You type message -> Claude reads it -> Claude responds

# What actually happens:
1. Your text gets tokenized (roughly 750 words per 1,000 tokens in English)
2. System prompt + full conversation history + your message = total input
3. Claude processes ALL tokens simultaneously (not word by word)
4. Claude generates output tokens ONE AT A TIME (left to right)
5. Each output token is influenced by ALL input tokens
```
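The input-assembly step above can be sketched in a few lines of Python. The words-per-token ratio is only a rough heuristic for English text, and `build_input` is a made-up helper for illustration, not a real API:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~750 English words per 1,000 tokens, i.e. 4/3 tokens per word."""
    return round(len(text.split()) * 4 / 3)

def build_input(system_prompt: str, history: list[str], new_message: str) -> int:
    """Everything counts toward input: system prompt, prior turns, and the new message."""
    full_input = "\n".join([system_prompt, *history, new_message])
    return estimate_tokens(full_input)

total = build_input(
    "You are a helpful assistant.",
    ["Hi!", "Hello! How can I help you today?"],
    "Explain tokenization briefly.",
)
print(total)  # the model processes (and you pay for) all of this, every turn
```

Note that the entire conversation history is re-sent on every turn, which is why long chats get slower and more expensive over time.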
Key insight

Claude reads everything at once but writes one word at a time. This is why it can reference anything from your input, but sometimes "loses track" of its own output mid-generation.
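A toy loop makes the "read everything at once, write one token at a time" shape concrete. Here `next_token` is an invented stand-in for the model; only the loop's structure matters:

```python
# Toy autoregressive loop. next_token() is a fake stand-in for a real
# model; the point is the shape of the generation process.
def next_token(context: list[str]) -> str:
    # A real model predicts a distribution over tokens from the WHOLE context.
    return "ok" if context[-1] != "ok" else "<end>"

def generate(input_tokens: list[str], max_new: int = 5) -> list[str]:
    context = list(input_tokens)   # the full input is visible from step one
    output: list[str] = []
    while len(output) < max_new:
        tok = next_token(context)  # conditioned on input + output so far
        if tok == "<end>":
            break
        output.append(tok)
        context.append(tok)        # each emitted token joins the context
    return output

print(generate(["hello", "world"]))  # ['ok']
```

Because generated tokens are appended to the context but never revised, a mistake early in the output stays in the context for every later step, which is the mechanism behind "losing track" mid-generation.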

Step 2: Test Your Understanding

Try these experiments with Claude (use claude.ai or the API):

  1. Experiment 1: Ask "What's 2+2?" with temperature 0 ten times (you'll need the API for this; claude.ai doesn't expose temperature). You'll get "4" every time. Now ask with temperature 1. Almost certainly still "4", because one answer overwhelmingly dominates. Temperature only matters when multiple plausible answers exist.
  2. Experiment 2: Put a key fact at the start of a very long message vs the middle. Ask Claude about it. Notice the difference in recall quality.
  3. Experiment 3: Give Claude a system prompt saying "You are a pirate." Then ask a technical question. Notice how it balances the persona with being helpful.
  4. Experiment 4: Ask Claude something it clearly doesn't know (a fake company name). It will usually say "I don't have information about that" rather than making something up, though hallucinations are still possible, so verify facts that matter.

Step 3: Build Your Cheat Sheet

Create a personal reference card with these rules:

| Rule | Why |
| --- | --- |
| Put important info at start/end, not middle | Attention is strongest at the edges |
| Be specific, not vague | Vague input = vague output |
| Temperature 0 for facts, higher for creativity | Controls randomness of token selection |
| System prompts shape everything | They're "always on" context |
| Claude writes left-to-right, can't go back | Ask it to plan before executing |
| More context = more cost, not always better quality | Tokens = money, and noise hurts signal |
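One way to bake these rules into your workflow is a small prompt builder. The helper below is hypothetical (the function name and structure are invented for illustration); it applies three rules from the table: key facts at the edges, nothing critical buried in the middle, and an explicit "plan first" instruction to offset left-to-right generation:

```python
# Hypothetical prompt builder applying the cheat-sheet rules.
def build_prompt(key_fact: str, background: str, task: str) -> str:
    return "\n\n".join([
        f"IMPORTANT: {key_fact}",                               # start: strong attention
        background,                                             # middle: weakest recall
        "First outline a plan, then execute it step by step.",  # plan before writing
        f"Task: {task}\nRemember: {key_fact}",                  # end: repeat the key fact
    ])

print(build_prompt(
    "The API returns dates in UTC.",
    "Long project background goes here...",
    "Write a function that localizes timestamps.",
))
```

Repeating the key fact at the end costs a few tokens but places it where recall is strongest, right before generation begins.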
