Lecture Notes
These notes are organized as full lecture tracks rather than isolated posts. The source notebooks live under notebooks/lectures/, and the pages below provide the public navigation for each course.
01. Core Causal Inference
This is the backbone sequence. It starts from causal questions and potential outcomes, then builds through randomized experiments, adjustment-based observational designs, and the quasi-experimental strategies used in product, policy, and industry settings.
02. Causal Machine Learning
This track focuses on causal questions where modern machine learning is useful after the estimand is clear: heterogeneous treatment effects, nuisance modeling, policy targeting, off-policy evaluation, and validation.
03. Industry Applications
This track translates causal designs into the decisions companies actually make: incrementality, promotions, ranking, churn, feature launches, marketplaces, and executive readouts.
04. Advanced Topics
This track collects the advanced topics that separate routine effect estimation from mature causal practice: mechanisms, missingness, measurement, generalization, spillovers, panels, discovery, Bayesian workflows, and AI-system complications.
05. AI for Causal Inference
This course treats AI as an assistant to the causal analyst: translating business questions, drafting estimand cards, critiquing DAGs, retrieving domain knowledge, generating code, creating reports, and stress-testing AI outputs.
06. Causal Inference for AI Systems
This course flips the direction: AI systems become the intervention. It covers estimands, experiments, triggered exposure, RAG and agent evaluation, routing bias, monitoring, fairness, cost-benefit, governance, and lifecycle management.
07. Causal Inference for Generative AI
This course specializes causal evaluation for generative AI systems, where outputs are stochastic, measurement is difficult, and human behavior often changes in response to generated content.
08. Causal Inference for Reinforcement Learning
This course is written for learners who know the earlier causal tracks but may be new to reinforcement learning. It builds RL concepts from the causal viewpoint and then studies logged decisions, policy evaluation, offline RL, RLHF, and agentic systems.
Short Notes
These earlier short notes remain useful as focused references: