05. AI for Causal Inference

Tags: AI, LLMs, Causal Inference, Lecture Notes
How LLMs, RAG, agents, evaluation, and automation can support causal analysts without replacing identification discipline.
Published May 3, 2026

This course treats AI as an assistant to the causal analyst: translating business questions, drafting estimand cards, critiquing DAGs, retrieving domain knowledge, generating code, creating reports, and stress-testing AI outputs.

Notebook links open rendered HTML pages generated from the source notebooks under notebooks/lectures/. Code is visible by default; rendering is configured not to execute live notebook code, so local LLM or GPU-heavy cells are not triggered during website builds.
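The no-execution rendering described above can be expressed in a static-site config. The fragment below is a minimal sketch assuming a Quarto-based site (the build tool is not stated in this page); the `freeze` and `enabled` keys shown are Quarto's execution options, and the filename `_quarto.yml` is Quarto's convention rather than anything confirmed by this course:

```yaml
# _quarto.yml (sketch, assuming a Quarto site build)
execute:
  enabled: false   # never run notebook cells during website builds,
                   # so local-LLM and GPU-heavy cells are not triggered
  freeze: true     # reuse any previously stored cell outputs when rendering
```

With a setup like this, code and saved outputs stay visible in the rendered HTML while the build itself stays model-free; rerunning a notebook locally is what actually executes the cells.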

Notebook Sequence

  • 00. Getting a Local LLM Running
  • 01. AI-Assisted Causal Workflow
  • 02. LLM Basics for Causal Analysts
  • 03. Turning Business Questions into Causal Questions
  • 04. Estimand Cards and Causal Design Documents
  • 05. AI-Assisted DAG Brainstorming
  • 06. DAG Critique, Variable Roles, and Backdoor Paths
  • 07. RAG for Causal Domain Knowledge
  • 08. Literature Synthesis for Causal Assumptions
  • 09. Dataset Profiling with AI
  • 10. Detecting Bad Controls, Post-Treatment Variables, and Leakage
  • 11. Synthetic Data Generation for Causal Teaching
  • 12. Simulation Labs for Assumption Stress Testing
  • 13. AI-Assisted Method Selection
  • 14. AI-Assisted Causal Code Generation
  • 15. Automating Balance, Overlap, and Diagnostic Reports
  • 16. AI for Sensitivity Analysis
  • 17. AI for Experiment Design and Power Planning
  • 18. AI for Quasi-Experiment Design
  • 19. Causal Report Generation with LLMs
  • 20. Causal Analysis Agent
  • 21. Multi-Agent Causal Review Workflow
  • 22. Evaluating AI Outputs in Causal Workflows
  • 23. Hallucination and Failure Modes in AI Causal Analysis
  • 24. Capstone AI-Assisted Causal Project

How To Read This Track

  • Work through the notebooks in order if you want the full course arc.
  • Treat each notebook as a lecture plus lab: read the discussion, inspect the code, and rerun locally when you want to experiment.
  • For AI-heavy notebooks, expect some brittleness once live model calls are enabled; that instability is part of the course material rather than something hidden from the reader.

The .ipynb sources remain in the matching folder under notebooks/lectures/.