
99% Uptime Guarantee: Building Resilient AI Services with Multi-Model Fallback Architecture

Santiago Arias
CTO

Introduction

When AI models fail (and they will), your users shouldn't notice. Rate limits hit, networks time out, providers deprecate models, and services go down. A single-model dependency creates a Single Point of Failure (SPOF) that can bring down your entire feature.

At CreativeOS, we built a production-grade AI service that automatically falls back across multiple providers, ensuring 99%+ uptime even when individual models experience outages. This article explains our sequential fallback architecture and how it keeps our AI-powered features running reliably.

The Problem with Single-Model Dependencies

Traditional AI integrations look like this:

async function analyzeText(text: string) {
  // Single provider, single model: any failure below becomes a user-facing error
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({ model: 'gpt-4', messages: [{ role: 'user', content: text }] })
  })
  return response.json()
}

What happens when:

  • OpenAI rate limits your account? Feature breaks
  • Network timeout? Feature breaks
  • Model deprecated? Feature breaks
  • Provider outage? Feature breaks

Every failure becomes a user-facing error. Not acceptable for production.

Our Solution: Sequential Fallback Architecture

Instead of one model, we maintain ordered lists of models from different providers. When a request comes in, we try models sequentially until one succeeds:

const OPENROUTER_MODELS = [
  'amazon/nova-lite',
  'qwen/qwen-2.5-turbo',
  'google/gemini-2.0-flash-exp',
  'anthropic/claude-3.5-sonnet',
  'openai/gpt-4o-mini'
]

async function tryModelsSequentially(prompt: string): Promise<string> {
  logger.info(`Trying text models sequentially until first success`)

  const errors: string[] = []

  // Try models one by one until we get a success
  for (const model of OPENROUTER_MODELS) {
    try {
      logger.info(`Trying model: ${model}`)
      const result = await callOpenRouter(model, prompt)
      if (result) {
        logger.info(`✅ Success with model: ${model}`)
        return result
      }
      throw new Error('No result returned')
    } catch (error) {
      const errorMsg = `${model}: ${error instanceof Error ? error.message : 'Unknown error'}`
      errors.push(errorMsg)
      logger.warn(`❌ Model ${model} failed:`, error)
      // Continue to next model
    }
  }

  // If we get here, all models failed
  throw new Error(`All ${OPENROUTER_MODELS.length} models failed: ${errors.join(', ')}`)
}
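The callOpenRouter helper isn't shown above. Here is a minimal sketch of what it might look like, assuming OpenRouter's OpenAI-compatible chat completions endpoint, an API key in an OPENROUTER_API_KEY environment variable, and the 30-second timeout referenced later in the error handling section; treat it as an illustration, not our exact production code:

// Minimal sketch of the callOpenRouter helper (an assumption, not the exact production code).
// Assumes an OPENROUTER_API_KEY environment variable and a 30-second request timeout.
async function callOpenRouter(model: string, prompt: string): Promise<string> {
  const controller = new AbortController()
  // Abort after 30 seconds so a hung connection surfaces as an AbortError
  const timeout = setTimeout(() => controller.abort(), 30_000)

  try {
    const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`
      },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }]
      }),
      signal: controller.signal
    })

    if (!response.ok) {
      const err: any = new Error(`OpenRouter responded with ${response.status} for ${model}`)
      err.status = response.status // lets callers spot rate limits (HTTP 429)
      throw err
    }

    const data = await response.json()
    return data.choices?.[0]?.message?.content ?? ''
  } finally {
    clearTimeout(timeout)
  }
}

Because every failure (non-2xx status, timeout, network error) is thrown rather than swallowed, the fallback loop above can decide whether to move on to the next model.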

Why sequential, not parallel?

  1. Cost optimization: We order models by cost (cheapest first). If the first model succeeds, we don't pay for others.
  2. Simpler error handling: One failure at a time is easier to debug than multiple parallel failures.
  3. Predictable behavior: Results come from a consistent model when that model is available.

Model Selection Strategy

Our model lists are carefully curated:

Text Models (Ordered by Cost)

const OPENROUTER_MODELS = [
  'amazon/nova-lite',        // Cheapest, fast
  'qwen/qwen-2.5-turbo',     // Good balance
  'google/gemini-2.0-flash-exp', // Fast, reliable
  'anthropic/claude-3.5-sonnet', // High quality
  'openai/gpt-4o-mini'       // Backup premium option
]

Vision Models (Multi-Image Support)

const LLMS_WITH_VISION = [
  'google/gemini-2.0-flash-exp',
  'openai/gpt-4o',
  'anthropic/claude-3.5-sonnet'
]

Selection criteria (a config sketch follows this list):

  • Cost: Cheaper models tried first
  • Provider diversity: Different providers to avoid correlated failures
  • Performance: Fast models prioritized for real-time use cases
  • Capabilities: Vision models support multi-image analysis
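Hardcoding two separate arrays makes them hard to tune (see the pitfalls section below), so the ordered lists can instead be derived from one configuration object. A hypothetical sketch mirroring OPENROUTER_MODELS and LLMS_WITH_VISION; the relative-cost ranks and capability flags here are illustrative placeholders, not our pricing data:

interface ModelConfig {
  id: string
  relativeCost: number    // lower = cheaper, drives the fallback order
  supportsVision: boolean
}

// Hypothetical config; cost ranks and flags are illustrative placeholders
const MODEL_CONFIGS: ModelConfig[] = [
  { id: 'amazon/nova-lite', relativeCost: 1, supportsVision: false },
  { id: 'google/gemini-2.0-flash-exp', relativeCost: 2, supportsVision: true },
  { id: 'anthropic/claude-3.5-sonnet', relativeCost: 3, supportsVision: true }
]

// Derive the fallback orders instead of hardcoding two separate arrays
const TEXT_MODELS = [...MODEL_CONFIGS]
  .sort((a, b) => a.relativeCost - b.relativeCost)
  .map(m => m.id)

const VISION_MODELS = MODEL_CONFIGS
  .filter(m => m.supportsVision)
  .sort((a, b) => a.relativeCost - b.relativeCost)
  .map(m => m.id)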

Error Handling Patterns

Not all errors are equal. We categorize failures and handle them appropriately:

async function generateImageTagsBase64(base64: string, mimeType: string): Promise<any> {
  const models = ['google/gemini-2.0-flash-exp', 'openai/gpt-4o', 'anthropic/claude-3.5-sonnet']
  let lastError = ''

  for (const model of models) {
    try {
      const response = await callVisionModel(model, base64, mimeType)
      const parsed = JSON.parse(response)
      return { model, ...parsed }
    } catch (e: any) {
      // Handle specific network errors
      if (e.name === 'AbortError') {
        lastError = `Model ${model} timeout error: Request timed out after 30 seconds`
        logger.warn(`[AI] Model ${model} timeout error`)
      } else if (e.code === 'EPIPE' || e.code === 'ECONNRESET' || e.code === 'ECONNREFUSED') {
        lastError = `Model ${model} network error: ${e.code} - ${e.message}`
        logger.warn(`[AI] Model ${model} network error (${e.code}):`, e.message)
      } else if (e.name === 'TypeError' && e.message.includes('fetch')) {
        lastError = `Model ${model} fetch error: ${e.message}`
        logger.warn(`[AI] Model ${model} fetch error:`, e.message)
      } else {
        lastError = `Model ${model} error: ${e}`
        logger.warn(`[AI] Model ${model} error:`, e)
      }
      continue // Try next model
    }
  }
  
  throw new Error(`All models failed. Last error: ${lastError}`)
}

Error categories (a small classification sketch follows this list):

  • Timeouts: Request exceeded time limit (try next model)
  • Network errors: Connection issues (try next model)
  • Rate limits: Provider throttling (try next model)
  • Parse errors: Invalid JSON response (try next model)
  • All models failed: Only then throw error to user
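The inline handling above covers timeouts and network errors explicitly; rate limits typically surface as HTTP 429 responses, and parse failures as a SyntaxError from JSON.parse. A small classifier keeps the categories in one place. This is a sketch that assumes thrown errors may carry a Node network code or an HTTP status, as in the callOpenRouter sketch earlier:

type FailureCategory = 'timeout' | 'network' | 'rate_limit' | 'parse' | 'other'

// Sketch: classify a failure so logging and metrics can distinguish categories.
// Assumes thrown errors may carry a Node network `code` or an HTTP `status` (429 = rate limited).
function classifyFailure(e: any): FailureCategory {
  if (e?.name === 'AbortError') return 'timeout'
  if (e?.code === 'EPIPE' || e?.code === 'ECONNRESET' || e?.code === 'ECONNREFUSED') return 'network'
  if (e?.status === 429) return 'rate_limit'
  if (e instanceof SyntaxError) return 'parse' // JSON.parse failures
  return 'other'
}

Whichever category comes back, the loop reacts the same way (continue to the next model); the label only matters for logs and the metrics discussed below.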

Real-World Application: Copy Grader

Our Copy Grader tool analyzes marketing copy for readability and persuasion. It uses the fallback system to ensure reliability:

async function gradeCopy(text: string): Promise<ReadabilityScore> {
  const characters = text.length
  const sentences = text.split(/[.!?]+/).filter(s => s.trim().length > 0).length
  const words = text.split(/\s+/).filter(w => w.length > 0).length

  const analysisPrompt = `Analyze this copy for readability and persuasive elements. Return ONLY a JSON object with this exact structure:

{
  "readabilityScore": <number 0-100 based on Flesch Reading Ease>,
  "gradeLevel": "<grade level like '8th - 9th Grade' or 'College'>",
  "description": "<description like 'Plain English' or 'Fairly easy to read'>",
  "persuasionScore": <number 0-100>,
  "elements": {
    "scarcity": <percentage 0-100>,
    "urgency": <percentage 0-100>,
    "socialProof": <percentage 0-100>,
    "benefits": <percentage 0-100>,
    "callToAction": <percentage 0-100>
  }
}

Text to analyze: "${text}"`

  // This will try models sequentially until one succeeds
  const llmResponse = await tryModelsSequentially(analysisPrompt)
  const cleanResponse = llmResponse.replace(/```json|```/g, '').trim()
  const analysisData = JSON.parse(cleanResponse)

  return {
    score: analysisData.readabilityScore,
    gradeLevel: analysisData.gradeLevel,
    description: analysisData.description,
    persuasionScore: analysisData.persuasionScore,
    elements: analysisData.elements,
    stats: { characters, sentences, words }
  }
}
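Calling it is an ordinary async call; the fallback logic stays hidden behind tryModelsSequentially. The input text below is purely illustrative:

// Example usage inside an async handler (input text is illustrative)
const grade = await gradeCopy('Only 3 spots left. Join 10,000 marketers today!')
console.log(grade.score, grade.gradeLevel, grade.persuasionScore)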

Benefits:

  • Users get results even if primary model fails
  • Cost-optimized (uses cheapest available model)
  • Reliable (99%+ uptime in production)

Multi-Image Vision Analysis

For complex analysis requiring multiple images (like comparing ad creative to landing page), we use vision-capable models:

async function tryVisionModelsWithMultipleImages(
  prompt: string, 
  image1Url: string, 
  image2Url: string
): Promise<string> {
  logger.info(`Trying vision models with multiple images sequentially until first success`)

  const errors: string[] = []

  for (const model of LLMS_WITH_VISION) {
    try {
      logger.info(`Trying vision model with multiple images: ${model}`)
      const result = await callVisionModelWithMultipleImages(model, prompt, image1Url, image2Url)
      if (result) {
        logger.info(`✅ Success with vision model: ${model}`)
        return result
      }
      throw new Error('No result returned')
    } catch (error) {
      const errorMsg = `${model}: ${error instanceof Error ? error.message : 'Unknown error'}`
      errors.push(errorMsg)
      logger.warn(`❌ Vision model ${model} failed:`, error)
      // Continue to next model
    }
  }

  throw new Error(`All ${LLMS_WITH_VISION.length} vision models failed: ${errors.join(', ')}`)
}

This enables features like our Funnel Congruence Analyzer, which compares ad creatives to landing pages to identify conversion-killing mismatches.
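As with callOpenRouter, the per-model helper callVisionModelWithMultipleImages isn't shown above. A minimal sketch, assuming OpenRouter's OpenAI-compatible multi-part message format for images and the same OPENROUTER_API_KEY assumption as earlier:

// Sketch of a multi-image vision call (an assumption, not the exact production code)
async function callVisionModelWithMultipleImages(
  model: string,
  prompt: string,
  image1Url: string,
  image2Url: string
): Promise<string> {
  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`
    },
    body: JSON.stringify({
      model,
      messages: [
        {
          role: 'user',
          // One message can carry text plus multiple images in the OpenAI-compatible format
          content: [
            { type: 'text', text: prompt },
            { type: 'image_url', image_url: { url: image1Url } },
            { type: 'image_url', image_url: { url: image2Url } }
          ]
        }
      ]
    })
  })

  if (!response.ok) {
    throw new Error(`OpenRouter responded with ${response.status} for ${model}`)
  }

  const data = await response.json()
  return data.choices?.[0]?.message?.content ?? ''
}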

Monitoring & Observability

To ensure the system works as expected, we log extensively:

logger.info(`✅ Success with model: ${model}`)
logger.warn(`❌ Model ${model} failed:`, error)

Key metrics to track (a minimal counter sketch follows this list):

  • Success rate per model: Which models succeed most often?
  • Failure reasons: Timeouts vs. rate limits vs. network errors
  • Cost per request: Track which model tier served each request
  • Latency per model: Performance characteristics
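A full metrics pipeline is beyond this post, but even a minimal in-memory counter answers the first two questions. This is a sketch; in production you would export these numbers to a real metrics backend rather than keep them in a Map:

// Minimal in-memory per-model stats (sketch only; production would export to a metrics backend)
const modelStats = new Map<string, { successes: number; failures: number; totalLatencyMs: number }>()

function recordAttempt(model: string, ok: boolean, latencyMs: number) {
  const stats = modelStats.get(model) ?? { successes: 0, failures: 0, totalLatencyMs: 0 }
  if (ok) stats.successes++
  else stats.failures++
  stats.totalLatencyMs += latencyMs
  modelStats.set(model, stats)
}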

Production insights:

  • Primary model (nova-lite) succeeds ~95% of the time
  • Fallback to second model adds ~200ms latency (acceptable)
  • All models failing happens <0.1% of requests (true outages)
  • Cost savings: ~80% of requests use cheapest model

JSON Schema Validation

To ensure consistent outputs, we validate JSON responses:

const cleanResponse = llmResponse.replace(/```json|```/g, '').trim()
const analysisData = JSON.parse(cleanResponse)

// Validate required fields
if (typeof analysisData.readabilityScore !== 'number' || typeof analysisData.persuasionScore !== 'number') {
  throw new Error('Invalid response structure')
}

If parsing fails, we try the next model. This handles cases where a model returns invalid JSON.
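The inline check only covers the top-level scores. A sketch of extending it to the nested elements object (a hypothetical helper, not our exact production validator):

// Hypothetical helper extending validation to the nested `elements` object
function validateElements(data: any): void {
  const keys = ['scarcity', 'urgency', 'socialProof', 'benefits', 'callToAction']
  if (!data.elements || keys.some(k => typeof data.elements[k] !== 'number')) {
    throw new Error('Invalid response structure: incomplete "elements" object')
  }
}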

Best Practices

  1. Order by cost: Try cheaper models first to optimize spend
  2. Provider diversity: Use different providers to avoid correlated failures
  3. Comprehensive error handling: Categorize errors and handle appropriately
  4. Structured logging: Log which model succeeded for debugging
  5. Graceful degradation: Only fail after all models exhausted
  6. Timeout configuration: Set reasonable timeouts (30s for most cases)

Common Pitfalls to Avoid

Don't use parallel requests: Wastes money, complicates error handling

Don't hardcode fallbacks: Use configurable model lists

Don't ignore errors: Log everything for debugging

Don't fail fast: Try all models before giving up

Don't skip validation: Always validate JSON responses

Conclusion

Multi-model fallback architecture is essential for production AI services. By implementing sequential fallback with comprehensive error handling, we achieved:

  • 99%+ uptime: Users rarely see failures
  • Cost optimization: 80% of requests use cheapest model
  • Reliability: Handles provider outages gracefully
  • Observability: Full logging for debugging

The patterns we've shared here are production-tested and handle thousands of AI requests daily. Whether you're building copy analysis, image tagging, or any AI-powered feature, sequential fallback ensures your users get results even when individual models fail.

The key insight: Treat AI models as unreliable infrastructure. Build resilience into your architecture, not just your error handling.