Evidence-Led Experiments
Design experiments that connect learning goals to instrumentation, without drowning the team in dashboards.
Tuition (informational): ₩540,000
4 weeks · Mixed · Async-first
Overview
Metrics and experimentation for PMs who want credible learning loops. We steer clear of vanity charts; instead, you wire hypotheses to events, define guardrail metrics, and report outcomes to leadership without hype.
What's inside
- Hypothesis one-pager tied to user tasks
- Event naming cookbook for small teams
- Readout structure that resists cherry-picking
- Experiment design patterns for risky launches
- Quality standards for data hygiene
- Narrative templates for exec summaries
- Office hours on instrumentation trade-offs
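To make the first two items above concrete, here is a minimal sketch of how a small team might encode an event naming convention and a hypothesis one-pager with pre-registered criteria. All names and thresholds are illustrative assumptions, not materials from the course.

```python
import re

# Illustrative convention: events are named object_action in snake_case
# (e.g. "checkout_started"), and every hypothesis pre-registers its
# primary metric, guardrail, and success threshold before shipping.
EVENT_PATTERN = re.compile(r"^[a-z]+(_[a-z]+)+$")

def is_valid_event_name(name: str) -> bool:
    """Check that an event name follows the object_action convention."""
    return bool(EVENT_PATTERN.match(name))

def hypothesis_one_pager(statement: str, primary_metric: str,
                         guardrail: str, success_threshold: float) -> dict:
    """Bundle a hypothesis with pre-registered criteria, rejecting
    event names that break the convention so dashboards stay consistent."""
    for metric in (primary_metric, guardrail):
        if not is_valid_event_name(metric):
            raise ValueError(f"event name breaks convention: {metric!r}")
    return {
        "statement": statement,
        "primary_metric": primary_metric,
        "guardrail": guardrail,
        "success_threshold": success_threshold,
    }

# Hypothetical example hypothesis, written before the experiment ships.
page = hypothesis_one_pager(
    statement="Shorter onboarding raises week-1 activation",
    primary_metric="onboarding_completed",
    guardrail="support_ticket_opened",
    success_threshold=0.05,  # pre-registered minimum lift
)
print(page["primary_metric"])  # onboarding_completed
```

Writing the criteria down as data, rather than prose, is one way to make cherry-picking harder at readout time.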
Outcomes you can show
- Ship one experiment with pre-registered success criteria
- Retire one dashboard tile that misled your squad
- Deliver a learning recap engineers endorse
Facilitator
Hana Sato
Program Director with a background in instrumentation for growth-stage SaaS.
FAQ
Do I need a data scientist?
Helpful but not required. Labs assume you can tag events with engineering support once a week.
What analytics stacks are referenced?
Examples use generic event models. We do not configure Amplitude, Mixpanel, or internal warehouses for you.
Honest limitation?
If your product lacks stable telemetry, several labs stay theoretical until instrumentation lands.
Experience notes
“The hypothesis one-pager stopped our team from shipping twelve simultaneous tweaks. Still building the discipline to pre-register metrics every time.”
