AI tutor — demo

An expert reader on every page

The tutor knows the chapter you're reading, the cell you just ran, and the error message you just saw. It cites the book directly and rewrites examples in the variables you used — answers no generic chatbot can give. Try the suggested prompts below to see how it responds.

📚 Book Tutor Context: Chapter 3 §3.10 — CAPM regression
Tutor
Hi — I see you're reading Chapter 3, the CAPM regression section. I can explain a passage, recompute an example with different numbers, debug your code, or quiz you on the concepts. What would you like?
You
Why did the regression find β = 2.22 and not 2.33 like the slides showed?
Tutor
Good catch. The simulation in this notebook draws 500 random daily returns with np.random.seed(42), then runs OLS on the first 80% (400 days). The true beta in the data-generating process is 2.33 — that's the value the simulation injects — but the estimated beta from any finite sample is just that point estimate plus sampling noise. With n = 400, the standard error of β̂ is roughly 0.10, so seeing β̂ ≈ 2.22 (about one SE below the true value) is exactly what the model predicts. The 95% confidence interval [2.02, 2.42] still contains the true 2.33.
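The sampling-noise argument can be reproduced with a minimal sketch. The notebook's exact data-generating process isn't shown in this demo, so the market-return parameters below (mean 0.0005, sd 0.01) are assumptions, and the point estimate will differ slightly from the notebook's 2.22:

```python
import numpy as np

# Minimal re-creation of the notebook's setup; the market-return
# parameters are assumptions, not the book's values, and the Generator
# API seeded here will not reproduce np.random.seed(42) draws exactly.
rng = np.random.default_rng(42)
n_total, n_train = 500, 400          # 500 simulated days, OLS on the first 80%
true_beta, noise_sigma = 2.33, 0.02  # beta injected by the data-generating process

market = rng.normal(0.0005, 0.01, n_total)
asset = true_beta * market + rng.normal(0.0, noise_sigma, n_total)

x, y = market[:n_train], asset[:n_train]
beta_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # OLS slope
resid = (y - y.mean()) - beta_hat * (x - x.mean())          # OLS residuals
se = resid.std(ddof=2) / (np.sqrt(n_train - 1) * x.std(ddof=1))
ci = (beta_hat - 1.96 * se, beta_hat + 1.96 * se)
print(f"beta_hat = {beta_hat:.2f}, SE = {se:.3f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Any single run lands the slope estimate within a few standard errors of 2.33, with SE near 0.10 — the same order of sampling noise the tutor describes.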
You
Can you show me what happens if I change the noise to 0.05 instead of 0.02?
Tutor
Sure. Replace noise_sigma = 0.02 with noise_sigma = 0.05 in the Interactive Explorer cell (§3.55) and re-run. The slope estimate stays near 2.22 but the 95% CI widens dramatically — roughly to [1.7, 2.7] — because the additional noise makes any one sample less informative about the true β. Numerically, the standard error of β̂ scales with the noise standard deviation:
SE(β̂) ≈ σ_noise / (√n · σ_x)
so raising the noise from 0.02 to 0.05 (a factor of 2.5) widens the CI by roughly the same factor. This is exactly why real-world betas have wider confidence intervals than the textbook examples suggest.
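The linear scaling of SE with the noise level can be checked directly. The helper below is a sketch: the true slope (2.33) matches the transcript, but σ_x and the seed are illustrative. Holding the seed fixed means only the noise scale changes between runs, so the SE ratio equals the noise ratio exactly:

```python
import numpy as np

def slope_se(noise_sigma, n=400, sigma_x=0.01, seed=0):
    """Standard error of the OLS slope for y = 2.33*x + noise, from one sample."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, n)
    y = 2.33 * x + rng.normal(0.0, noise_sigma, n)
    beta_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    resid = (y - y.mean()) - beta_hat * (x - x.mean())
    return resid.std(ddof=2) / (np.sqrt(n - 1) * x.std(ddof=1))

# Same seed, so the x draws and standardized noise draws are identical;
# the residuals scale linearly with noise_sigma, hence so does the SE.
ratio = slope_se(0.05) / slope_se(0.02)
print(round(ratio, 2))  # 2.5
```

With independent seeds the ratio would only approximate 2.5, but the expected scaling is the same.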

Demo only. Production tutor calls the Claude API with the chapter text and the reader's most recent code/error as system context. Cost is a few cents per session at Anthropic's current pricing — fully covered by the cohort fee.

How the tutor stays grounded

Three properties keep the tutor useful instead of generic. First, every prompt is injected with the chapter text — so it can quote the book rather than paraphrase the internet. Second, it can read the reader's most recent code cell and error message — so its debugging suggestions actually fix the reader's specific problem. Third, it is instructed never to invent functions or libraries that don't exist in the chapter's Pyodide environment — so the code it suggests will actually run.
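A sketch of how that system context might be assembled before the Claude API call. The function, tag names, and wording are hypothetical — the production tutor's actual prompt is not shown in this demo:

```python
def build_tutor_system_prompt(chapter_text: str, code_cell: str = "",
                              error_msg: str = "") -> str:
    """Assemble the tutor's system context (all names here are hypothetical)."""
    parts = [
        "You are a tutor for this book. Quote the chapter text directly when "
        "answering, and never suggest functions or libraries that do not exist "
        "in the chapter's Pyodide environment.",
        f"<chapter>\n{chapter_text}\n</chapter>",
    ]
    if code_cell:  # the reader's most recent code cell, if any
        parts.append(f"<last_code_cell>\n{code_cell}\n</last_code_cell>")
    if error_msg:  # the reader's most recent traceback, if any
        parts.append(f"<last_error>\n{error_msg}\n</last_error>")
    return "\n\n".join(parts)
```

The resulting string would be passed as the `system` parameter of an Anthropic Messages API call, with the reader's question as the user message.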

The tutor refuses to do graded work on the reader's behalf. If a learner asks "give me the answer to Exercise 3.4", the tutor responds with a Socratic hint instead — turning a temptation to cheat into a learning moment.