skills-library/skills/design-thinking/loop.md

Skill: Design Thinking - Loop

Description

Connect all phases of design thinking into a continuous improvement cycle, with clear decision points for iterating versus shipping.

Input

  • phase_results: Results from empathize, define, ideate, prototype, test phases (required)
  • iteration_number: The current iteration count (optional, default: 1)
  • time_constraint: Timeline for shipping (optional)
  • quality_bar: Minimum quality requirements (optional)

The Continuous Loop

Full Cycle Flow

EMPATHIZE → DEFINE → IDEATE → PROTOTYPE → TEST
     ↑                                      ↓
     └──────────── LOOP BACK ───────────────┘

Decision Points:
1. After TEST: Ship, Iterate, or Pivot?
2. If Iterate: Which phase to revisit?
3. If Pivot: Back to Empathize or Define?

Loop Decision Framework

Decision Matrix:

Test Results → Next Action

SHIP:
- Success rate: >80%
- User satisfaction: >4/5
- No critical issues
- Meets business goals
→ Action: Deploy to production

ITERATE (Minor):
- Success rate: 60-80%
- 2-3 moderate issues
- Core concept works
- Quick fixes available
→ Action: PROTOTYPE → TEST

ITERATE (Major):
- Success rate: 40-60%
- Multiple issues
- Concept solid but execution off
- Need design changes
→ Action: IDEATE → PROTOTYPE → TEST

PIVOT:
- Success rate: <40%
- Wrong problem solved
- Users prefer current solution
- Fundamental assumption wrong
→ Action: EMPATHIZE or DEFINE
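
The decision matrix above can be sketched as a small function. The thresholds mirror the matrix; the boolean inputs are illustrative simplifications, and the user-satisfaction and business-goal checks are omitted for brevity:

```python
def loop_decision(success_rate, critical_issues, concept_works, wrong_problem):
    """Map test results to the next action per the decision matrix.

    success_rate is a fraction (0.0-1.0); thresholds follow the
    matrix above and should be tuned per project.
    """
    if wrong_problem or success_rate < 0.40:
        return "pivot"           # back to EMPATHIZE or DEFINE
    if success_rate > 0.80 and critical_issues == 0:
        return "ship"            # deploy to production
    if success_rate >= 0.60 and concept_works:
        return "iterate_minor"   # PROTOTYPE → TEST
    return "iterate_major"       # IDEATE → PROTOTYPE → TEST
```

A run with an 85% success rate and no critical issues returns "ship"; the same concept at 50% drops to "iterate_major".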

When to Loop Back to Each Phase

Back to EMPATHIZE if:

  • Users don't have the problem you assumed
  • Your solution addresses the wrong pain point
  • A key user segment is missing
  • Assumptions about users were proven wrong

Back to DEFINE if:

  • Problem statement is too broad or too narrow
  • Success metrics are wrong
  • The HMW (How Might We) question leads to the wrong solutions
  • Persona doesn't match real users

Back to IDEATE if:

  • Solution works but isn't optimal
  • Better ideas emerged during testing
  • Technical constraints changed
  • Simpler approach possible

Back to PROTOTYPE if:

  • Concept good, execution poor
  • Usability issues in UI/UX
  • Need higher/lower fidelity
  • Technical implementation issues

Rerun TEST if:

  • Fixed critical issues
  • Changed user segment
  • Need more data
  • A/B test needed
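
One way to encode these loop-back rules is a symptom-to-phase lookup that returns the earliest implicated phase, since an upstream fix invalidates downstream work. The symptom labels here are illustrative, not part of the skill's contract:

```python
# Hypothetical symptom labels mapped to the phase they implicate.
LOOP_BACK = {
    "wrong_pain_point":    "empathize",
    "missing_segment":     "empathize",
    "problem_too_broad":   "define",
    "wrong_metrics":       "define",
    "better_idea_emerged": "ideate",
    "simpler_approach":    "ideate",
    "usability_issues":    "prototype",
    "wrong_fidelity":      "prototype",
    "need_more_data":      "test",
}

PHASE_ORDER = ["empathize", "define", "ideate", "prototype", "test"]

def earliest_phase(symptoms):
    """Return the earliest phase implicated by the observed symptoms."""
    phases = {LOOP_BACK[s] for s in symptoms if s in LOOP_BACK}
    return min(phases, key=PHASE_ORDER.index) if phases else None
```

If testing surfaces both usability issues and wrong metrics, the loop goes back to DEFINE, not PROTOTYPE.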

Iteration Velocity

Fast Iteration Cycles

Week 1:
- Mon: Empathize (existing data)
- Tue: Define + Ideate
- Wed: Prototype (lo-fi)
- Thu: Test (3-5 users)
- Fri: Loop decision

Week 2:
- Mon-Tue: Iterate prototype
- Wed: Test again
- Thu: Ship or continue

Quality vs Speed Trade-offs

Speed Priority:
- Lo-fi prototypes
- 3-user tests
- Ship at 70% quality
- Fix in production

Quality Priority:
- Hi-fi prototypes
- 10+ user tests
- Ship at 95% quality
- Polish before launch

Balanced:
- Med-fi prototypes
- 5-user tests
- Ship at 80% quality
- Plan iteration post-launch
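
The three trade-off profiles can be captured as configuration. The field names and the deadline heuristic below are assumptions for illustration, not prescribed by this skill:

```python
# Profile values copied from the trade-off table above.
PROFILES = {
    "speed":    {"fidelity": "lo-fi",  "test_users": 3,  "ship_quality": 0.70},
    "quality":  {"fidelity": "hi-fi",  "test_users": 10, "ship_quality": 0.95},
    "balanced": {"fidelity": "med-fi", "test_users": 5,  "ship_quality": 0.80},
}

def pick_profile(days_to_deadline, risk_tolerance):
    """Crude heuristic: tight deadlines push toward speed,
    low risk tolerance toward quality; otherwise stay balanced."""
    if days_to_deadline <= 5 and risk_tolerance == "high":
        return "speed"
    if risk_tolerance == "low":
        return "quality"
    return "balanced"
```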

Metrics That Matter

Leading Indicators (Predict Success)

Empathize:
- Users interviewed per segment
- Pain points validated by 3+ users
- Quote-to-insight ratio

Define:
- HMW clarity score (1-5)
- Stakeholder alignment (%)
- Success metric specificity

Ideate:
- Ideas per session
- Effort/Impact scoring
- Idea diversity

Prototype:
- Build time vs plan
- Component reuse rate
- Fidelity appropriate for stage

Test:
- User completion rate
- Time on task vs target
- SUS score
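
The SUS score above comes from the standard System Usability Scale formula: ten 1-5 Likert responses, odd-numbered items scored as (response - 1), even-numbered items as (5 - response), with the sum scaled by 2.5:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses, using the standard SUS formula."""
    assert len(responses) == 10, "SUS requires exactly ten responses"
    total = 0
    for i, r in enumerate(responses):
        # Index 0 is item 1: odd items score positively, even negatively.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5
```

A neutral respondent (all 3s) scores 50; scores above roughly 68 are generally considered above average.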

Lagging Indicators (Measure Impact)

Post-Launch:
- User adoption rate
- Feature usage frequency
- Support tickets (fewer = better)
- User satisfaction (NPS)
- Business metrics (revenue, retention)

Output Format

{
  "status": "success",
  "iteration": 2,
  "cycle_summary": {
    "empathize": "5 users interviewed, 3 critical pains identified",
    "define": "HMW: Reduce setup time from 2hr to 5min",
    "ideate": "8 ideas generated, selected template-based approach",
    "prototype": "Med-fi prototype, 6hrs build time",
    "test": "80% success rate, SUS score 72"
  },
  "decision": {
    "verdict": "iterate_minor",
    "confidence": "high",
    "reasoning": "Core flow works but 2 UX issues need fixing"
  },
  "loop_back_to": "prototype",
  "changes_needed": [
    "Add template preview on hover",
    "Improve success confirmation message"
  ],
  "estimated_effort": "4 hours",
  "next_test_plan": {
    "users": 3,
    "focus": "Template selection UX",
    "success_criteria": "90% success rate on template task"
  },
  "ship_criteria": {
    "must_have": [
      "90% task completion",
      "SUS score > 75",
      "Zero critical issues"
    ],
    "nice_to_have": [
      "Template customization",
      "Saved templates"
    ]
  },
  "timeline": {
    "this_iteration": "3 days",
    "total_so_far": "10 days",
    "target_ship": "14 days"
  }
}
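
A consumer of this output might sanity-check it against the expected shape. The required-key set below is inferred from the example above and may not match a formal schema:

```python
import json

# Keys inferred from the example output; adjust if the schema changes.
REQUIRED_KEYS = {"status", "iteration", "cycle_summary", "decision",
                 "loop_back_to", "changes_needed", "next_test_plan",
                 "ship_criteria", "timeline"}

VERDICTS = {"ship", "iterate_minor", "iterate_major", "pivot"}

def validate_output(payload):
    """Return True if the loop output has all expected keys
    and a recognized decision verdict."""
    data = json.loads(payload) if isinstance(payload, str) else payload
    missing = REQUIRED_KEYS - data.keys()
    verdict = data.get("decision", {}).get("verdict")
    return not missing and verdict in VERDICTS
```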

Quality Gates

  • Clear decision made (ship/iterate/pivot)
  • Loop target phase identified
  • Specific changes documented
  • Success criteria for next iteration defined
  • Timeline realistic
  • Learning captured for future iterations

Token Budget

  • Max input: 1500 tokens
  • Max output: 2000 tokens

Model

  • Recommended: sonnet (strategic reasoning)

Philosophy

"Done is better than perfect. But learning is better than done." Ship fast, learn faster, improve continuously.

Keep it simple:

  • Small iterations beat big leaps
  • Test assumptions early
  • Fail fast, learn faster
  • Perfect is the enemy of shipped
  • User feedback > internal opinions

Pivot vs Persevere

Persevere if:

  • Core metrics trending up
  • Users like the direction
  • Issues are tactical, not strategic
  • Learning compounds each iteration

Pivot if:

  • 3+ iterations with no improvement
  • Users consistently reject solution
  • Wrong problem being solved
  • Better opportunity identified
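
These criteria can be reduced to a rough heuristic. The three-iteration threshold follows the list above; the boolean inputs are illustrative simplifications of the other signals:

```python
def pivot_or_persevere(no_improvement_streak, metrics_trending_up,
                       issues_tactical):
    """Rough heuristic: three flat iterations, or flat metrics paired
    with strategic (non-tactical) issues, both point to a pivot."""
    if no_improvement_streak >= 3:
        return "pivot"
    if not metrics_trending_up and not issues_tactical:
        return "pivot"
    return "persevere"
```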

Continuous Improvement Post-Launch

Post-Ship Loop:
1. Monitor usage analytics
2. Collect user feedback
3. Identify new pain points
4. Prioritize improvements
5. Small iterations weekly
6. Major updates monthly