Cloudflare Workers Roadmap for Mylder

Overview

Edge computing strategy for Mylder platform using Cloudflare Workers, R2, D1, and KV.

Philosophy: Move compute closer to users, reduce VPS load, improve UX with sub-100ms responses.

Current Architecture vs Edge-First

Current (VPS-Centric)

User → Cloudflare CDN → VPS (149.102.155.84)
                          ├── Next.js (Frontend)
                          ├── Supabase (Auth, DB)
                          ├── n8n (Automation)
                          └── Dokploy (Orchestration)

Future (Edge-First)

User → Cloudflare Workers (Edge Logic)
       ├── Static Assets (CDN)
       ├── Edge Auth (Workers + KV)
       ├── Edge API (Workers + D1)
       └── Origin (VPS for heavy compute)
           ├── Supabase (Primary DB)
           └── n8n (Automation)

Workers Use Cases

1. Edge Authentication (Priority: High)

Problem: Every auth check hits VPS → Supabase (100-300ms).
Solution: Cache JWT validation at the edge.

// workers/auth-edge.ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const token = request.headers.get('Authorization')?.split('Bearer ')[1];
    if (!token) return new Response('Unauthorized', { status: 401 });

    // Check KV cache for user session
    const cached = await env.SESSIONS.get(token);
    if (cached) {
      return new Response(cached, {
        headers: { 'X-Auth-Source': 'edge-cache' }
      });
    }

    // Fallback to Supabase origin
    const response = await fetch('https://supabase.mylder.io/auth/v1/user', {
      headers: { Authorization: `Bearer ${token}` }
    });

    if (response.ok) {
      const user = await response.text();
      await env.SESSIONS.put(token, user, { expirationTtl: 3600 }); // 1 hour
      return new Response(user, {
        headers: { 'X-Auth-Source': 'origin' }
      });
    }

    return response;
  }
};

Benefits:

  • Auth checks: 300ms → 10ms (30x faster)
  • Reduced Supabase load
  • Better UX for logged-in users

Trade-offs:

  • Session invalidation complexity (a logout sketch follows this list)
  • KV storage costs ($0.50/1M reads)
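
One way to contain the invalidation trade-off is to drop the cached session whenever the user logs out. A minimal sketch, assuming a hypothetical logout route in front of the same SESSIONS namespace (the Supabase logout URL is illustrative):

// workers/auth-logout.ts (hypothetical companion to the edge auth worker)
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const token = request.headers.get('Authorization')?.split('Bearer ')[1];
    if (!token) return new Response('Unauthorized', { status: 401 });

    // Remove the cached session so the next auth check falls through to origin
    await env.SESSIONS.delete(token);

    // Revoke the token at origin as well
    return fetch('https://supabase.mylder.io/auth/v1/logout', {
      method: 'POST',
      headers: { Authorization: `Bearer ${token}` }
    });
  }
};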

2. API Rate Limiting (Priority: High)

Problem: DDoS/abuse protection on the VPS is resource-intensive.
Solution: Rate limit at the edge before traffic hits origin.

// workers/rate-limiter.ts
// Uses the native Workers Rate Limiting binding, declared in wrangler.toml:
//   [[unsafe.bindings]]
//   name = "RATE_LIMIT"
//   type = "ratelimit"
//   namespace_id = "1001"
//   simple = { limit = 100, period = 60 }   # 100 requests per 60 seconds

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ip = request.headers.get('CF-Connecting-IP') || 'unknown';

    // One rate-limit counter per client IP
    const { success } = await env.RATE_LIMIT.limit({ key: ip });

    if (!success) {
      return new Response('Rate limit exceeded', {
        status: 429,
        headers: { 'Retry-After': '60' }
      });
    }

    // Forward to origin
    return fetch(request);
  }
};

Benefits:

  • Block abuse before it hits VPS
  • Per-IP, per-user, per-endpoint limits
  • Near-zero added latency (the check runs at the edge, before the origin hop)

Use Cases:

  • /api/* endpoints
  • Login attempts
  • Form submissions
  • Search queries

3. A/B Testing at Edge (Priority: Medium)

Problem: Client-side A/B testing causes layout shift (poor CLS).
Solution: Server-side rendering with edge variants.

// workers/ab-test.ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Check existing variant
    let variant = request.headers.get('Cookie')?.match(/variant=([AB])/)?.[1];

    if (!variant) {
      // Assign 50/50 split
      variant = Math.random() < 0.5 ? 'A' : 'B';
    }

    // Rewrite the origin request with the assigned variant (preserving existing query params)
    url.searchParams.set('variant', variant);
    const response = await fetch(url.toString(), request);

    // Set variant cookie
    const headers = new Headers(response.headers);
    headers.set('Set-Cookie', `variant=${variant}; Path=/; Max-Age=2592000`);

    return new Response(response.body, {
      status: response.status,
      headers
    });
  }
};

Benefits:

  • No layout shift (CLS = 0)
  • Instant variant assignment
  • Analytics at edge

Use Cases:

  • Landing page variants
  • Pricing page tests
  • Feature flags

4. Edge Personalization (Priority: Medium)

Problem: Generic content for all users (low engagement).
Solution: Geo/device-specific content at the edge.

// workers/personalize.ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const country = (request.cf?.country as string) || 'US'; // fall back when geolocation is unavailable
    const device = request.headers.get('User-Agent')?.includes('Mobile')
      ? 'mobile'
      : 'desktop';

    // Fetch base HTML from origin
    const response = await fetch(request);
    const html = await response.text();

    // Inject personalized content
    const personalized = html
      .replace('{{COUNTRY}}', country)
      .replace('{{DEVICE}}', device)
      .replace('{{CURRENCY}}', getCurrency(country));

    return new Response(personalized, {
      headers: {
        'Content-Type': 'text/html',
        'Cache-Control': 'public, max-age=60, s-maxage=3600',
        'Vary': 'CF-IPCountry, User-Agent'
      }
    });
  }
};

function getCurrency(country: string): string {
  const map: Record<string, string> = {
    US: 'USD', GB: 'GBP', DK: 'DKK', DE: 'EUR', FR: 'EUR'
  };
  return map[country] || 'USD';
}

Benefits:

  • Geo-specific content (currency, language)
  • Device-optimized markup
  • Sub-50ms personalization

Use Cases:

  • Pricing display
  • Content localization
  • Feature availability

5. Image Optimization & Resizing (Priority: Low)

Problem: Next.js Image Optimization on the VPS is CPU-intensive.
Solution: Cloudflare Images, or Workers + R2.

// workers/image-optimize.ts
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const imagePath = url.pathname.replace('/img/', '');

    // Parse resize params
    const width = url.searchParams.get('w') || '1920';
    const quality = url.searchParams.get('q') || '80';
    const format = url.searchParams.get('f') || 'webp';

    // Existence check only (head avoids downloading the original here;
    // Image Resizing fetches the bytes from the public URL below)
    const object = await env.IMAGES.head(imagePath);
    if (!object) return new Response('Not Found', { status: 404 });

    // Resize at edge via Image Resizing (requires a paid Workers plan)
    const resized = await fetch(
      `https://mylder.io/cdn-cgi/image/width=${width},quality=${quality},format=${format}/${imagePath}`
    );

    return new Response(resized.body, {
      status: resized.status,
      headers: {
        'Content-Type': `image/${format}`,
        'Cache-Control': 'public, max-age=31536000, immutable'
      }
    });
  }
};

Benefits:

  • Offload VPS CPU
  • Auto WebP/AVIF conversion
  • Responsive images on-demand

Trade-offs:

  • Requires Cloudflare Images ($5/month + $1/100k)
  • Or R2 storage ($0.015/GB/month)

6. Edge API Caching (Priority: Medium)

Problem: Repeated API calls for the same data.
Solution: Smart caching with stale-while-revalidate.

// workers/api-cache.ts
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;
    const cacheKey = new Request(request.url, request);

    // Check cache
    let response = await cache.match(cacheKey);
    if (response) {
      const age = Date.now() - new Date(response.headers.get('Date')!).getTime();

      // If stale (>60s), revalidate in background
      if (age > 60000) {
        ctx.waitUntil(revalidate(request, cache, cacheKey));
      }

      return response;
    }

    // Fetch from origin
    response = await fetch(request);

    // Cache a copy for 5 minutes (clone first so the body can be both stored and returned)
    const cached = new Response(response.clone().body, response);
    cached.headers.set('Cache-Control', 'public, max-age=300, stale-while-revalidate=3600');
    ctx.waitUntil(cache.put(cacheKey, cached));

    return response;
  }
};

async function revalidate(request: Request, cache: Cache, key: Request) {
  const fresh = await fetch(request);
  await cache.put(key, fresh.clone());
}

Benefits:

  • API responses: 200ms → 10ms
  • Stale content served instantly
  • Background revalidation

Use Cases:

  • Public API endpoints
  • User profiles
  • Content feeds

Cloudflare R2 (Object Storage)

Use Cases for R2

What: S3-compatible object storage at edge (no egress fees)

1. User Uploads

// Store user avatars, documents
await env.UPLOADS.put(`users/${userId}/avatar.jpg`, file);
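
A slightly fuller sketch of an upload route, assuming a hypothetical PUT handler sitting behind the auth layer (the X-User-Id header and the UPLOADS binding name are placeholders):

// workers/upload.ts (hypothetical avatar upload handler backed by R2)
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== 'PUT') return new Response('Method Not Allowed', { status: 405 });
    if (!request.body) return new Response('Empty body', { status: 400 });

    const userId = request.headers.get('X-User-Id'); // assumed to be set by the auth layer
    if (!userId) return new Response('Unauthorized', { status: 401 });

    // Stream the request body straight into R2, keeping the declared content type
    await env.UPLOADS.put(`users/${userId}/avatar.jpg`, request.body, {
      httpMetadata: { contentType: request.headers.get('Content-Type') || 'image/jpeg' }
    });

    return new Response(null, { status: 204 });
  }
};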

Benefits:

  • $0.015/GB/month (10x cheaper than S3)
  • No egress fees (free downloads)
  • Global CDN distribution

When to Use:

  • User-generated content (avatars, documents)
  • Static assets (fonts, icons)
  • Media files (videos, audio)

When NOT to Use:

  • Frequent small writes (use D1 instead)
  • Sub-10MB total storage (use KV instead)

2. Static Asset Hosting

// Move _next/static/* to R2
const asset = await env.STATIC_ASSETS.get(path);
if (!asset) return new Response('Not Found', { status: 404 });
return new Response(asset.body, {
  headers: {
    'Content-Type': asset.httpMetadata?.contentType || 'application/octet-stream',
    'Cache-Control': 'public, max-age=31536000, immutable'
  }
});

Benefits:

  • Reduce VPS storage needs
  • Faster global distribution
  • Immutable content = perfect cache

3. Backup Storage

// Daily Supabase backups to R2
await env.BACKUPS.put(`supabase-${date}.sql.gz`, backup);
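
A minimal sketch of the daily job, assuming a cron trigger in wrangler.toml and a hypothetical BACKUP_URL endpoint on the VPS that streams a gzipped dump (BACKUP_TOKEN is a placeholder secret):

// workers/backup-cron.ts (hypothetical scheduled backup worker)
export default {
  async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext): Promise<void> {
    const date = new Date().toISOString().slice(0, 10);

    // Pull the dump from the VPS
    const backup = await fetch(env.BACKUP_URL, {
      headers: { Authorization: `Bearer ${env.BACKUP_TOKEN}` }
    });
    if (!backup.ok || !backup.body) {
      throw new Error(`Backup fetch failed: ${backup.status}`);
    }

    // Stream it straight into R2 (no egress fees on later restores)
    await env.BACKUPS.put(`supabase-${date}.sql.gz`, backup.body);
  }
};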

Cloudflare D1 (Edge Database)

Use Cases for D1

What: SQLite at edge (sub-10ms queries globally)

1. Edge Metadata

// Store user preferences, feature flags
const db = env.DB;
const result = await db.prepare(
  'SELECT theme, locale FROM user_prefs WHERE user_id = ?'
).bind(userId).first();

Benefits:

  • Read latency: 100ms (Supabase) → 5ms (D1)
  • No VPS load for reads
  • SQL interface (familiar)

When to Use:

  • Read-heavy data (user settings, configs)
  • Small datasets (<1GB)
  • Geo-distributed reads

When NOT to Use:

  • Write-heavy workloads (D1 eventual consistency)
  • Complex joins (use Supabase instead)
  • Primary data storage (Supabase is source of truth)

2. Analytics/Metrics

// Store page views, click events
await db.prepare(
  'INSERT INTO analytics (page, user_id, timestamp) VALUES (?, ?, ?)'
).bind(page, userId, Date.now()).run();

Benefits:

  • High-volume writes
  • Real-time aggregations
  • No Supabase quota impact

3. Feature Flags

// Enable/disable features per user
const flags = await db.prepare(
  'SELECT * FROM feature_flags WHERE active = 1'
).all();

D1 vs Supabase Decision Matrix

Feature          D1 (Edge)             Supabase (VPS)
Read Latency     5-10ms                100-300ms
Write Latency    50-100ms              100-300ms
Consistency      Eventual              Strong
Storage Limit    1GB                   8GB+
Cost             Free (5M reads/day)   Included in VPS
Use Case         Cached metadata       Primary data

Strategy: D1 = edge cache, Supabase = source of truth
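
A minimal read-through sketch of that strategy, assuming a hypothetical user_prefs table in D1 and the Supabase REST API as origin (SUPABASE_ANON_KEY is a placeholder binding):

// workers/prefs-read-through.ts (hypothetical D1-in-front-of-Supabase read path)
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const userId = new URL(request.url).searchParams.get('user_id');
    if (!userId) return new Response('Missing user_id', { status: 400 });

    // 1. Try the edge copy first (D1)
    const cached = await env.DB.prepare(
      'SELECT theme, locale FROM user_prefs WHERE user_id = ?'
    ).bind(userId).first();
    if (cached) return Response.json(cached);

    // 2. On a miss, read from the source of truth (Supabase)
    const origin = await fetch(
      `https://supabase.mylder.io/rest/v1/user_prefs?user_id=eq.${userId}&select=theme,locale`,
      { headers: { apikey: env.SUPABASE_ANON_KEY } }
    );
    if (!origin.ok) return origin;
    const rows = await origin.json() as Array<{ theme: string; locale: string }>;
    const prefs = rows[0];
    if (!prefs) return new Response('Not Found', { status: 404 });

    // 3. Populate D1 so the next read is local
    await env.DB.prepare(
      'INSERT OR REPLACE INTO user_prefs (user_id, theme, locale) VALUES (?, ?, ?)'
    ).bind(userId, prefs.theme, prefs.locale).run();

    return Response.json(prefs);
  }
};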

Cloudflare KV (Key-Value Store)

Use Cases for KV

What: Global key-value store (sub-1ms reads)

1. Session Storage

await env.SESSIONS.put(sessionId, userData, { expirationTtl: 3600 });

Benefits:

  • Fastest storage option (<1ms reads)
  • Auto-expiration (TTL)
  • Global replication

Cost: $0.50/1M reads, $5/1M writes (generous free tier)

2. Configuration/Secrets

const apiKey = await env.CONFIG.get('stripe_public_key');

3. Rate Limit Counters

// A simple KV-based counter (KV values are strings; this read-modify-write is best-effort, not atomic)
const count = parseInt((await env.RATE_LIMIT.get(ip)) ?? '0', 10);
await env.RATE_LIMIT.put(ip, String(count + 1), { expirationTtl: 60 });

Implementation Roadmap

Phase 1: Foundation (Month 1)

Goal: Basic edge infrastructure

  • Set up Cloudflare Workers project (binding types sketched after this list)
  • Deploy edge auth validator (Worker + KV)
  • Implement rate limiting for /api/*
  • Migrate static assets to R2
  • Monitor performance impact
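
For reference, a sketch of the bindings the snippets in this document assume. Names are placeholders; the real bindings are declared in wrangler.toml (plus per-worker secrets such as API keys):

// workers/env.d.ts (hypothetical shared binding types for the roadmap snippets)
interface Env {
  SESSIONS: KVNamespace;       // edge auth session cache
  CONFIG: KVNamespace;         // public configuration values
  RATE_LIMIT: { limit(options: { key: string }): Promise<{ success: boolean }> }; // native rate limiting binding (the KV counter example uses a KV namespace instead)
  IMAGES: R2Bucket;            // source images
  UPLOADS: R2Bucket;           // user-generated content
  STATIC_ASSETS: R2Bucket;     // _next/static/* mirror
  BACKUPS: R2Bucket;           // Supabase dumps
  DB: D1Database;              // user preferences, feature flags
  ANALYTICS: D1Database;       // metrics/events
}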

Success Metrics:

  • Auth latency: <20ms (p95)
  • Static asset cache hit rate: >95%
  • VPS CPU load: -20%

Phase 2: Optimization (Month 2-3)

Goal: Performance wins

  • Deploy A/B testing worker
  • Implement edge personalization
  • Set up D1 for user preferences
  • Create stale-while-revalidate API cache
  • Image optimization via R2 + Workers

Success Metrics:

  • LCP: <1.5s (p75)
  • API latency: -50% for cached endpoints
  • Global availability: 99.9%

Phase 3: Advanced (Month 4+)

Goal: Edge-first architecture

  • Move session management to edge (D1 + KV)
  • Implement edge analytics (D1)
  • Deploy feature flags system (D1)
  • Create edge API gateway
  • Multi-region failover

Success Metrics:

  • Edge request ratio: >70%
  • Origin load: -60%
  • Global latency: <100ms (p95)

Cost Analysis

Free Tier Limits

  • Workers: 100k requests/day
  • KV: 100k reads/day, 1k writes/day
  • R2: 10GB storage, 1M reads/month
  • D1: 5M reads/day, 100k writes/day

Paid Costs (Estimated for 1M users/month)

Service    Usage                        Cost/Month
Workers    100M requests                $5 (bundled)
KV         50M reads                    $25
R2         100GB storage + 500M reads   $1.50
D1         Included in Workers          $0
Total                                   ~$30/month

ROI: Potential to downgrade VPS ($30/month savings) → break even

Anti-Patterns to Avoid

1. Over-Caching

Bad: Cache everything at edge for 24 hours.
Good: Cache static assets long, dynamic content short.

2. Edge as Primary DB

Bad: Store all user data in D1.
Good: D1 = cache, Supabase = source of truth.

3. Synchronous Edge-Origin Calls

Bad: Worker → fetch origin → wait → respond.
Good: Serve stale, revalidate in background.

4. Ignoring Cold Starts

Bad: Assume Workers are instant.
Good: Optimize bundle size (<1MB), minimize dependencies.

5. Complex Business Logic at Edge

Bad: Move the entire Next.js app to Workers.
Good: Keep heavy compute on the VPS, edge for lightweight tasks.

Monitoring & Debugging

Workers Analytics

Dashboard: Workers & Pages → Analytics

  • Request volume
  • Success rate (2xx/3xx/4xx/5xx)
  • CPU time (p50, p95, p99)
  • Error logs

Tail Workers (Real-Time Logs)

wrangler tail mylder-auth-edge

Custom Metrics

// workers/metrics.ts
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const start = Date.now();
    const response = await fetch(request);
    const duration = Date.now() - start;

    // Log to D1 analytics
    ctx.waitUntil(
      env.ANALYTICS.prepare(
        'INSERT INTO metrics (endpoint, duration, status) VALUES (?, ?, ?)'
      ).bind(request.url, duration, response.status).run()
    );

    return response;
  }
};

Next Steps

  1. Prototype: Deploy simple edge auth worker (1 day)
  2. Measure: Compare edge vs origin performance (1 week)
  3. Iterate: Expand to rate limiting, A/B testing (1 month)
  4. Optimize: Move 70%+ traffic to edge (3 months)

References