Module 11 · Section 11.4

Cost-Performance Optimization at Scale

Modeling total cost of ownership, navigating the Pareto frontier, and engineering systems that deliver maximum quality per dollar
★ Big Picture

LLM quality is easy; LLM economics is hard. Any team can build a prototype that calls GPT-4 on every request and achieves impressive results. The real engineering challenge is maintaining that quality while reducing cost by 10x or 100x. This section covers the full cost optimization toolkit: TCO modeling to understand where money actually goes, Pareto analysis to find the best cost/quality tradeoff, token-level optimization strategies (caching, batching, compression), model routing to match task complexity with model capability, and monitoring dashboards to keep spending visible and predictable.

1. Total Cost of Ownership (TCO) Analysis

API token costs are the most visible expense, but they are often less than half the total cost of running an LLM system in production. A complete TCO model must account for infrastructure, engineering labor, quality assurance, and operational overhead. Teams that optimize only for token cost often find themselves surprised by the true bill.

1.1 TCO Components

| Category | Components | Typical Share |
|---|---|---|
| API / Inference | Input tokens, output tokens, fine-tuning runs | 25-40% |
| Infrastructure | Vector DBs, caches, queues, logging, storage | 15-25% |
| Engineering | Prompt development, evaluation, pipeline code, maintenance | 25-35% |
| Quality / Eval | Human labelers, LLM-as-judge runs, A/B testing infra | 10-15% |
| Operational | Monitoring, alerting, incident response, compliance | 5-10% |
from dataclasses import dataclass

@dataclass
class TCOModel:
    """Total Cost of Ownership calculator for LLM systems."""

    # API costs (per month)
    avg_queries_per_day: int = 10_000
    avg_input_tokens: int = 500
    avg_output_tokens: int = 200
    input_price_per_1k: float = 0.0025    # e.g., GPT-4o input ($2.50 / 1M)
    output_price_per_1k: float = 0.01     # e.g., GPT-4o output ($10.00 / 1M)

    # Infrastructure (monthly)
    vector_db_cost: float = 200.0         # Pinecone / Qdrant Cloud
    cache_cost: float = 50.0              # Redis / Upstash
    logging_cost: float = 100.0           # LangSmith / Datadog
    storage_cost: float = 30.0            # S3, embeddings storage

    # Engineering (monthly, amortized)
    eng_hours_per_month: int = 80
    eng_hourly_rate: float = 100.0

    # Quality (monthly)
    human_eval_cost: float = 500.0
    llm_judge_cost: float = 200.0

    def monthly_api_cost(self) -> float:
        daily_input = self.avg_queries_per_day * self.avg_input_tokens
        daily_output = self.avg_queries_per_day * self.avg_output_tokens
        daily_cost = (
            (daily_input / 1000) * self.input_price_per_1k +
            (daily_output / 1000) * self.output_price_per_1k
        )
        return daily_cost * 30

    def monthly_infra_cost(self) -> float:
        return (self.vector_db_cost + self.cache_cost +
                self.logging_cost + self.storage_cost)

    def monthly_eng_cost(self) -> float:
        return self.eng_hours_per_month * self.eng_hourly_rate

    def monthly_quality_cost(self) -> float:
        return self.human_eval_cost + self.llm_judge_cost

    def total_monthly(self) -> float:
        return (self.monthly_api_cost() + self.monthly_infra_cost() +
                self.monthly_eng_cost() + self.monthly_quality_cost())

    def cost_per_query(self) -> float:
        return self.total_monthly() / (self.avg_queries_per_day * 30)

    def report(self) -> str:
        api = self.monthly_api_cost()
        infra = self.monthly_infra_cost()
        eng = self.monthly_eng_cost()
        qual = self.monthly_quality_cost()
        total = self.total_monthly()

        lines = [
            "TCO Breakdown (Monthly)",
            "=" * 45,
            f"  API / Inference:    ${api:>10,.2f}  ({api/total*100:.0f}%)",
            f"  Infrastructure:     ${infra:>10,.2f}  ({infra/total*100:.0f}%)",
            f"  Engineering:        ${eng:>10,.2f}  ({eng/total*100:.0f}%)",
            f"  Quality / Eval:     ${qual:>10,.2f}  ({qual/total*100:.0f}%)",
            "-" * 45,
            f"  TOTAL:              ${total:>10,.2f}",
            f"  Cost per query:     ${self.cost_per_query():.5f}",
            f"  Queries per month:  {self.avg_queries_per_day * 30:,}",
        ]
        return "\n".join(lines)

# Scenario: mid-size SaaS product
tco = TCOModel(avg_queries_per_day=10_000)
print(tco.report())
TCO Breakdown (Monthly)
=============================================
  API / Inference:    $    975.00  (10%)
  Infrastructure:     $    380.00  (4%)
  Engineering:        $  8,000.00  (80%)
  Quality / Eval:     $    700.00  (7%)
---------------------------------------------
  TOTAL:              $ 10,055.00
  Cost per query:     $0.03352
  Queries per month:  300,000
★ Key Insight

At moderate volume (10k queries/day), API costs are often under 10% of TCO. Engineering labor dominates. This means that optimizations that reduce engineering effort (better tooling, simpler prompts, fewer failure modes) can deliver larger savings than shaving tokens. At very high volume (1M+ queries/day), this ratio flips and API costs become the dominant factor.
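A quick back-of-envelope check of where that flip happens, using the default per-query token counts and prices from the TCOModel above (scenario numbers, not a universal constant):

```python
# At what daily volume does API spend overtake the $8,000/month
# engineering cost from the scenario above? (500 input + 200 output
# tokens per query at $0.0025 / $0.01 per 1K tokens.)
api_cost_per_query = (500 / 1000) * 0.0025 + (200 / 1000) * 0.01  # $0.00325

monthly_eng_cost = 8_000.0
crossover_daily = monthly_eng_cost / (api_cost_per_query * 30)

print(f"API cost per query: ${api_cost_per_query:.5f}")
print(f"Crossover volume:   ~{crossover_daily:,.0f} queries/day")
# Below ~82K queries/day engineering dominates; above it, tokens do.
```

At roughly 8x the scenario's 10K queries/day, the two cost lines cross; this is why the "ratio flips" only at high volume.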

2. The Pareto Frontier: Cost vs. Quality vs. Latency

Intuition: Imagine shopping for a laptop. You want both high performance and low price. Some laptops give you great performance for a fair price; those are on the "frontier." Others are more expensive and slower; those are "dominated" because a better option exists on every dimension. A configuration is Pareto optimal if no other configuration is at least as good on every dimension (cost, quality, latency) and strictly better on at least one. Every other configuration is dominated: a strictly better alternative exists.

The Pareto frontier represents the set of model configurations where you cannot improve one dimension (quality, cost, or latency) without worsening another. Every configuration below the frontier is suboptimal because a different configuration achieves better quality at the same cost, or the same quality at lower cost. The goal of cost-performance optimization is to find the frontier and then select the point that matches your business requirements.

[Figure: scatter plot of quality (accuracy %) vs. cost per query ($0 to $0.02). Plotted configurations: TF-IDF + LR ($0.00001, 72%), fine-tuned BERT ($0.0005, 85%), hybrid router ($0.002, 93%), GPT-4o-mini ($0.003, 91%), GPT-4o ($0.01, 95%), Claude Opus ($0.02, 97%), plus a dominated unoptimized-prompt point. The Pareto frontier separates these from the suboptimal region, where better configurations exist.]
Figure 11.8: The Pareto frontier for a classification task. The hybrid router (orange) achieves near-GPT-4o quality at a fraction of the cost by routing easy queries to cheaper models.

2.1 Mapping Your Frontier

from dataclasses import dataclass

@dataclass
class ModelConfig:
    name: str
    accuracy: float
    cost_per_query: float
    latency_ms: float
    is_pareto: bool = False

def find_pareto_frontier(configs: list[ModelConfig]) -> list[ModelConfig]:
    """Identify Pareto-optimal configurations (maximize accuracy, minimize cost)."""
    # Sort by cost ascending
    sorted_configs = sorted(configs, key=lambda c: c.cost_per_query)

    pareto = []
    best_accuracy = -1.0

    for config in sorted_configs:
        if config.accuracy > best_accuracy:
            config.is_pareto = True
            pareto.append(config)
            best_accuracy = config.accuracy

    return pareto

# Benchmark results from a classification task
configs = [
    ModelConfig("TF-IDF + LogReg",       0.72, 0.00001,   1),
    ModelConfig("Fine-tuned DistilBERT",  0.84, 0.00030,  15),
    ModelConfig("Fine-tuned BERT-base",   0.87, 0.00050,  25),
    ModelConfig("GPT-4o-mini (zero-shot)",0.88, 0.00300, 400),
    ModelConfig("GPT-4o-mini (few-shot)", 0.91, 0.00400, 450),
    ModelConfig("GPT-4o (zero-shot)",     0.93, 0.01000, 600),
    ModelConfig("GPT-4o (few-shot)",      0.95, 0.01200, 700),
    ModelConfig("Claude Opus (few-shot)", 0.97, 0.02000, 900),
    ModelConfig("Hybrid (BERT + GPT-4o)", 0.93, 0.00200, 120),
    # Suboptimal: same cost as GPT-4o-mini but worse accuracy
    ModelConfig("Bad prompt (GPT-4o-mini)", 0.78, 0.00350, 500),
]

pareto = find_pareto_frontier(configs)

print("All Configurations (Pareto-optimal marked with *)")
print("=" * 72)
print(f"  {'Model':<28} {'Acc':>5} {'Cost':>10} {'Latency':>8} {'Pareto':>7}")
print("-" * 72)
for c in sorted(configs, key=lambda x: x.cost_per_query):
    marker = "  *" if c.is_pareto else ""
    print(f"  {c.name:<28} {c.accuracy:>5.0%} "
          f"${c.cost_per_query:>8.5f} {c.latency_ms:>6.0f}ms{marker}")

print(f"\nPareto-optimal configs: {len(pareto)} / {len(configs)}")
All Configurations (Pareto-optimal marked with *)
========================================================================
  Model                          Acc       Cost  Latency  Pareto
------------------------------------------------------------------------
  TF-IDF + LogReg                 72% $ 0.00001      1ms  *
  Fine-tuned DistilBERT           84% $ 0.00030     15ms  *
  Fine-tuned BERT-base            87% $ 0.00050     25ms  *
  Hybrid (BERT + GPT-4o)          93% $ 0.00200    120ms  *
  GPT-4o-mini (zero-shot)         88% $ 0.00300    400ms
  Bad prompt (GPT-4o-mini)        78% $ 0.00350    500ms
  GPT-4o-mini (few-shot)          91% $ 0.00400    450ms
  GPT-4o (zero-shot)              93% $ 0.01000    600ms
  GPT-4o (few-shot)               95% $ 0.01200    700ms  *
  Claude Opus (few-shot)          97% $ 0.02000    900ms  *

Pareto-optimal configs: 6 / 10
📚 Note

Notice that the hybrid router (BERT + GPT-4o) appears on the Pareto frontier at $0.002/query with 93% accuracy. It achieves the same accuracy as GPT-4o zero-shot at one-fifth the cost. This is precisely the kind of efficiency gain that hybrid architectures deliver (see Section 11.3). Also notice that "Bad prompt (GPT-4o-mini)" is dominated: it costs more than DistilBERT but achieves lower accuracy.

3. Token Optimization Strategies

Token costs scale linearly with usage, so every token saved across millions of queries adds up fast. The three main strategies are: reduce the tokens sent per request (prompt compression), avoid sending duplicate requests (caching), and amortize overhead across multiple items (batching).

3.1 Prompt Compression

Many prompts carry redundant context, verbose instructions, or formatting that inflates token count without improving output quality. Systematic compression can cut input tokens by 30-60% with minimal quality loss.

import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

# Original verbose prompt
verbose_prompt = """You are a highly skilled and experienced customer service
classification agent. Your task is to carefully and thoroughly analyze the
following customer message and determine which category it belongs to.

The possible categories are:
- "billing": Any issues related to charges, payments, refunds, invoices,
  subscription fees, or financial transactions of any kind
- "technical": Any issues related to bugs, errors, crashes, performance
  problems, feature malfunctions, or technical difficulties
- "account": Any issues related to login, password, profile settings,
  account management, or user preferences
- "shipping": Any issues related to delivery, tracking, packages, shipping
  addresses, or logistics
- "general": Any other inquiries that don't fit the above categories

Please analyze the message below and respond with ONLY the category name.
Do not include any additional text, explanation, or formatting.

Customer message: {message}

Your classification:"""

# Compressed prompt (same behavior, fewer tokens)
compressed_prompt = """Classify into: billing, technical, account, shipping, general.
Reply with the category name only.

Message: {message}"""

# Compare
message = "I was charged twice for my subscription last month"
v_tokens = count_tokens(verbose_prompt.format(message=message))
c_tokens = count_tokens(compressed_prompt.format(message=message))

print(f"Verbose prompt:    {v_tokens} tokens")
print(f"Compressed prompt: {c_tokens} tokens")
print(f"Savings:           {v_tokens - c_tokens} tokens ({(1 - c_tokens/v_tokens)*100:.0f}%)")
print(f"\nAt $0.0025/1K input tokens, 10K queries/day:")
print(f"  Verbose:    ${v_tokens * 10000 * 30 / 1000 * 0.0025:,.2f}/month")
print(f"  Compressed: ${c_tokens * 10000 * 30 / 1000 * 0.0025:,.2f}/month")
print(f"  Savings:    ${(v_tokens - c_tokens) * 10000 * 30 / 1000 * 0.0025:,.2f}/month")
Verbose prompt:    178 tokens
Compressed prompt: 32 tokens
Savings:           146 tokens (82%)

At $0.0025/1K input tokens, 10K queries/day:
  Verbose:    $133.50/month
  Compressed: $24.00/month
  Savings:    $109.50/month

3.2 Semantic Caching

Many production systems see significant query repetition. Exact-match caching catches identical queries, but semantic caching goes further by recognizing that "What is your return policy?" and "How do I return an item?" should return the same cached response. This can eliminate 30-60% of LLM calls entirely. We built a complete SemanticCache implementation in Section 9.3 with both exact-match and cosine-similarity lookup. Here we focus on the cost optimization angle: tuning the cache for maximum savings.

The critical parameter is the similarity threshold. A higher threshold (0.95+) minimizes false cache hits (returning an incorrect cached answer) but misses more semantically similar queries. A lower threshold (0.85) catches more paraphrases but risks returning wrong answers for queries that are similar in phrasing but different in intent ("How do I return an item?" vs. "How do I return to the home page?"). Use a validation set of 100+ query pairs labeled as "same intent" or "different intent" to calibrate your threshold.
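One way to run that calibration, sketched here with a handful of hypothetical labeled pairs (real calibration needs 100+ pairs drawn from your own logs; the similarity scores below are invented for illustration):

```python
# Hypothetical labeled pairs: (cosine similarity, same_intent?).
pairs = [
    (0.97, True), (0.96, True), (0.94, True), (0.91, True), (0.90, True),
    (0.93, False), (0.87, False), (0.85, False), (0.89, True), (0.84, False),
]

def cache_rates(threshold):
    """Hit rate and false-hit rate if we serve a cache hit at sim >= threshold."""
    hits = [same for sim, same in pairs if sim >= threshold]
    hit_rate = len(hits) / len(pairs)
    false_hit_rate = (hits.count(False) / len(hits)) if hits else 0.0
    return hit_rate, false_hit_rate

# Pick the highest hit rate subject to a false-hit cap of 5%:
candidates = [0.85, 0.90, 0.92, 0.95, 0.98]
viable = [(t, *cache_rates(t)) for t in candidates if cache_rates(t)[1] <= 0.05]
best = max(viable, key=lambda x: x[1])
print(f"Chosen threshold: {best[0]} (hit rate {best[1]:.0%}, false hits {best[2]:.1%})")
```

With this toy data the 5% cap forces the sweep up to 0.95; the point is the procedure, not the number, and a larger validation set will give you a smoother tradeoff curve.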

# Cost impact analysis for semantic caching
# Uses the SemanticCache class from Section 9.3

thresholds = [0.85, 0.90, 0.92, 0.95, 0.98]
daily_queries = 10_000
cost_per_query = 0.003  # $0.003 average LLM cost per query

# Simulated hit rates at different thresholds
# (from production logs of a customer support system)
hit_rates = {0.85: 0.62, 0.90: 0.55, 0.92: 0.50, 0.95: 0.42, 0.98: 0.28}
false_hit_rates = {0.85: 0.08, 0.90: 0.03, 0.92: 0.01, 0.95: 0.002, 0.98: 0.0}

print("Semantic Cache Threshold Analysis")
print("=" * 70)
print(f"{'Threshold':>10} {'Hit Rate':>10} {'False Hits':>12} {'Monthly Savings':>16} {'Risk Level':>12}")
print("-" * 70)
for t in thresholds:
    hr = hit_rates[t]
    fhr = false_hit_rates[t]
    monthly_savings = daily_queries * hr * cost_per_query * 30
    risk = "HIGH" if fhr > 0.05 else "MEDIUM" if fhr > 0.01 else "LOW"
    print(f"{t:>10.2f} {hr:>9.0%} {fhr:>11.1%} ${monthly_savings:>14,.0f} {risk:>12}")

print(f"\nRecommended: 0.92 threshold balances savings and accuracy")
print(f"Always validate on YOUR data before deploying to production")
Semantic Cache Threshold Analysis
======================================================================
 Threshold   Hit Rate   False Hits  Monthly Savings   Risk Level
----------------------------------------------------------------------
      0.85        62%         8.0% $           558         HIGH
      0.90        55%         3.0% $           495       MEDIUM
      0.92        50%         1.0% $           450          LOW
      0.95        42%         0.2% $           378          LOW
      0.98        28%         0.0% $           252          LOW

Recommended: 0.92 threshold balances savings and accuracy
Always validate on YOUR data before deploying to production
ⓘ Cross-Reference

The full SemanticCache implementation (exact-match + cosine similarity lookup, TTL expiration, hit/miss statistics) is in Section 9.3: API Engineering Best Practices. Here we focus on the cost optimization perspective: how threshold tuning affects your monthly spend and the tradeoff between hit rate and false positive risk.

3.3 Batch Processing

When results are not needed in real time, batching multiple items into a single API call reduces per-item overhead. Many APIs offer batch endpoints at 50% discount (OpenAI Batch API), and even without explicit batch pricing, combining items in a single prompt amortizes system prompt tokens across all items.

⚠ Batch Sizing Tradeoffs

Larger batches reduce per-item cost but increase latency and blast radius: if one item causes an error, you may lose the entire batch. In practice, batches of 5 to 20 items balance cost savings with reliability. Always implement retry logic at the individual-item level for failed batches.
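A minimal sketch of that item-level fallback, with hypothetical `batch_classify` / `classify_one` callables standing in for real API calls:

```python
def chunked(items, size):
    """Split items into consecutive batches of at most `size`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def classify_with_fallback(items, batch_classify, classify_one, batch_size=10):
    """Try each batch in one call; on failure, retry its items one by one."""
    results = {}
    for batch in chunked(items, batch_size):
        try:
            results.update(batch_classify(batch))   # one call, many items
        except Exception:
            # Blast radius shrinks to a single item, not the whole batch.
            for item in batch:
                try:
                    results[item] = classify_one(item)
                except Exception:
                    results[item] = None            # surface for later retry
    return results
```

The per-item fallback doubles the worst-case call count for a failed batch, so in production you would also cap retries and log which items poisoned their batch.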

4. Model Selection by Task Complexity

Not every query needs a frontier model. A well-designed model router analyzes each incoming request and sends it to the cheapest model capable of handling it correctly. This is the single highest-impact optimization for most production systems, often reducing costs by 60-80% with less than 2% quality degradation.

[Figure: Complexity-based model router. A DistilBERT complexity classifier (~5ms) routes each incoming query to one of three tiers: Simple (60%) → GPT-4o-mini ($0.60 / 1M tokens, ~200ms), Medium (30%) → GPT-4o ($2.50 / 1M tokens, ~500ms), Complex (10%) → Claude Opus ($15.00 / 1M tokens, ~800ms). Blended cost per 1M tokens: 0.60 × $0.60 + 0.30 × $2.50 + 0.10 × $15.00 = $2.61, vs. $15.00 all-Opus: 83% savings.]
Figure 11.9: A complexity-based model router sends 60% of queries to GPT-4o-mini, 30% to GPT-4o, and only 10% to Claude Opus, achieving 83% cost savings compared to routing everything to the most capable model.
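The blended-cost arithmetic from Figure 11.9, as a quick check (traffic shares and per-1M-token prices taken from the figure):

```python
# (traffic share, price in $ per 1M tokens) for each routing tier
routes = {
    "gpt-4o-mini": (0.60, 0.60),
    "gpt-4o":      (0.30, 2.50),
    "claude-opus": (0.10, 15.00),
}

blended = sum(share * price for share, price in routes.values())
savings = 1 - blended / 15.00  # vs. sending everything to the priciest model

print(f"Blended cost: ${blended:.2f} per 1M tokens")   # $2.61
print(f"Savings vs. all-Opus: {savings:.0%}")          # 83%
```

Shifting even 10% of traffic from the cheapest to the priciest tier adds $1.44 per 1M tokens, so the router's classification accuracy on "complex vs. simple" directly drives the savings.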

4.1 Build vs. Buy: Self-Hosted vs. API Breakeven

At high query volumes, self-hosting open-source models (Llama 3, Mistral, Qwen) on your own GPUs can be cheaper than API calls. The breakeven depends on your volume, model size, and GPU costs. Below a certain volume threshold, APIs win because you avoid the fixed cost of GPU infrastructure. Above that threshold, self-hosting wins because marginal inference cost approaches zero.

| Factor | API Provider | Self-Hosted |
|---|---|---|
| Fixed cost | $0 / month | $2,000+ / month (GPU) |
| Marginal cost | $2-15 / 1M tokens | ~$0.10-0.50 / 1M tokens |
| Breakeven | Typically 500K-2M queries/month (crossover point) | |
| Latency control | Limited | Full control |
| Ops burden | None | Significant (updates, monitoring, scaling) |
| Model access | Latest frontier models | Open-source only (often 3-12 months behind) |
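A breakeven sketch under assumed numbers (a $2,000/month GPU, $0.30 vs. $3.00 per 1M tokens, 700 tokens per query; substitute your actual prices):

```python
gpu_fixed_monthly = 2_000.0
self_hosted_per_token = 0.30 / 1e6   # assumed marginal cost, $/token
api_per_token = 3.00 / 1e6           # assumed API price, $/token
tokens_per_query = 700

# At the breakeven, API cost equals GPU fixed cost plus self-hosted marginal:
#   q * tpq * api = fixed + q * tpq * self
#   q = fixed / (tpq * (api - self))
breakeven = gpu_fixed_monthly / (
    tokens_per_query * (api_per_token - self_hosted_per_token)
)
print(f"Breakeven: ~{breakeven:,.0f} queries/month")
# Lands near 1.06M queries/month, inside the 500K-2M range above.
```

Note that this ignores the ops burden row: adding even 20 engineering hours/month at $100/hour to the fixed cost doubles the breakeven volume.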

5. Cost Monitoring and Alerting

Without monitoring, LLM costs drift upward silently. A single prompt regression (e.g., a new version that accidentally includes verbose context) can double your monthly bill. Every production system needs a cost dashboard that tracks spend by model, endpoint, and feature, along with alerts that fire when costs exceed expected thresholds.

from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class UsageRecord:
    timestamp: datetime
    model: str
    feature: str
    input_tokens: int
    output_tokens: int
    cost: float
    latency_ms: float

class CostMonitor:
    """Track LLM usage and alert on anomalies."""

    def __init__(self, daily_budget: float = 100.0):
        self.records: list[UsageRecord] = []
        self.daily_budget = daily_budget
        self.alerts: list[str] = []

    def record(self, model: str, feature: str,
               input_tokens: int, output_tokens: int,
               cost: float, latency_ms: float):
        self.records.append(UsageRecord(
            datetime.now(), model, feature,
            input_tokens, output_tokens, cost, latency_ms
        ))
        self._check_alerts()

    def _check_alerts(self):
        today = datetime.now().date()
        today_cost = sum(
            r.cost for r in self.records
            if r.timestamp.date() == today
        )
        if today_cost > self.daily_budget * 0.8:
            self.alerts.append(
                f"WARNING: Daily spend ${today_cost:.2f} "
                f"exceeds 80% of budget ${self.daily_budget:.2f}"
            )

    def dashboard(self) -> str:
        by_model = defaultdict(lambda: {"cost": 0, "calls": 0, "tokens": 0})
        by_feature = defaultdict(lambda: {"cost": 0, "calls": 0})

        for r in self.records:
            by_model[r.model]["cost"] += r.cost
            by_model[r.model]["calls"] += 1
            by_model[r.model]["tokens"] += r.input_tokens + r.output_tokens
            by_feature[r.feature]["cost"] += r.cost
            by_feature[r.feature]["calls"] += 1

        total_cost = sum(m["cost"] for m in by_model.values())
        total_calls = sum(m["calls"] for m in by_model.values())

        lines = [
            "LLM Cost Dashboard",
            "=" * 55,
            f"Total cost:  ${total_cost:.4f}  |  Total calls: {total_calls}",
            "",
            "By Model:",
        ]
        for model, stats in sorted(by_model.items(),
                                    key=lambda x: x[1]["cost"], reverse=True):
            pct = stats["cost"] / total_cost * 100 if total_cost > 0 else 0
            lines.append(
                f"  {model:<22} ${stats['cost']:>8.4f} "
                f"({pct:4.1f}%)  {stats['calls']:>4} calls"
            )

        lines.append("\nBy Feature:")
        for feature, stats in sorted(by_feature.items(),
                                      key=lambda x: x[1]["cost"], reverse=True):
            lines.append(
                f"  {feature:<22} ${stats['cost']:>8.4f}  "
                f"{stats['calls']:>4} calls"
            )

        if self.alerts:
            lines.append(f"\nAlerts ({len(self.alerts)}):")
            for alert in self.alerts[-3:]:
                lines.append(f"  {alert}")

        return "\n".join(lines)

# Simulate a day of usage
monitor = CostMonitor(daily_budget=50.0)

import random
random.seed(42)

features = ["classification", "summarization", "extraction", "chat"]
models = [
    ("gpt-4o-mini", 0.003),
    ("gpt-4o", 0.012),
    ("claude-opus", 0.025),
]

for _ in range(200):
    model, base_cost = random.choice(models)
    feature = random.choice(features)
    cost = base_cost * (0.5 + random.random())
    monitor.record(
        model=model, feature=feature,
        input_tokens=random.randint(100, 2000),
        output_tokens=random.randint(50, 500),
        cost=cost,
        latency_ms=random.uniform(50, 1000),
    )

print(monitor.dashboard())
LLM Cost Dashboard
=======================================================
Total cost:  $2.6148  |  Total calls: 200

By Model:
  claude-opus            $  0.9930 (38.0%)    63 calls
  gpt-4o                 $  0.8697 (33.3%)    69 calls
  gpt-4o-mini            $  0.7521 (28.8%)    68 calls

By Feature:
  extraction             $  0.7435    52 calls
  chat                   $  0.6937    49 calls
  classification         $  0.6289    52 calls
  summarization          $  0.5487    47 calls
★ Key Insight

The most common cost surprises come from: (1) prompt regressions where a code change silently adds verbose context, (2) retry storms where error handling loops repeatedly call the API, and (3) eval sprawl where automated evaluation suites run more frequently than intended. Set per-model and per-feature budget alerts at 80% of expected daily spend so that anomalies surface before they become expensive.
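One guard against the retry-storm failure mode: exponential backoff with a hard retry cap. A generic sketch, not tied to any particular SDK (the injectable `sleep` parameter is an assumption for testability):

```python
import time

def call_with_backoff(fn, max_retries=4, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on failure wait base_delay * 2^attempt, give up after max_retries.

    Caps total extra API calls at max_retries instead of looping forever.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # surface the error rather than spend more tokens
            sleep(base_delay * (2 ** attempt))  # waits 1s, 2s, 4s, 8s, ...
```

In production you would also add jitter to the delays and retry only on transient errors (rate limits, timeouts), not on malformed-request failures that will never succeed.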

6. Putting It All Together: Optimization Checklist

  1. Measure first. Build a TCO model before optimizing. Know where the money actually goes.
  2. Map the Pareto frontier. Benchmark 4-6 model configurations on your actual task. Identify which points are dominated.
  3. Implement model routing. Route simple queries to cheap models. This typically delivers the largest single cost reduction.
  4. Compress prompts. Remove verbosity. Test that compressed prompts maintain quality on your evaluation set.
  5. Add caching layers. Exact-match first, then semantic caching for common query patterns.
  6. Batch when possible. Use batch APIs for offline workloads. Combine items in single prompts for related tasks.
  7. Monitor continuously. Track cost by model, feature, and time. Alert on anomalies before they become expensive.

Knowledge Check

1. At moderate query volume (10K/day), what is typically the largest component of LLM system TCO?
Show Answer
Engineering labor, not API costs. At 10K queries/day, API spend is often under 10% of total cost, while engineering time for prompt development, evaluation pipelines, maintenance, and debugging typically accounts for 60-70% of TCO. This ratio changes at very high volumes (1M+ queries/day) where API costs become dominant.
2. What does it mean for a model configuration to be "Pareto-dominated"?
Show Answer
A configuration is Pareto-dominated when another configuration exists that is better on at least one dimension (quality, cost, or latency) without being worse on any other dimension. For example, if Model A achieves 85% accuracy at $0.003/query and Model B achieves 87% accuracy at $0.002/query, then Model A is dominated by Model B. Dominated configurations should be eliminated from your production options.
3. How does semantic caching differ from exact-match caching, and when does each apply?
Show Answer
Exact-match caching returns a cached result only when the query is identical (after normalization). Semantic caching uses embedding similarity to match queries that are phrased differently but have the same intent. Exact-match is faster (hash lookup) and should be the first cache layer. Semantic caching requires computing embeddings and similarity search, adding 5-10ms of overhead, but catches a much larger share of repeated intent. Together, they can eliminate 30-60% of LLM calls.
4. What is the typical breakeven point for self-hosting vs. using an API provider?
Show Answer
Self-hosting typically becomes cheaper at 500K to 2M queries per month, depending on model size and GPU costs. Below this threshold, the fixed cost of GPU infrastructure ($2,000+/month) makes APIs cheaper. Above it, the near-zero marginal inference cost of self-hosted models wins. However, self-hosting also incurs operational burden (monitoring, updates, scaling) that must be factored into the TCO calculation.
5. What are the three most common sources of unexpected LLM cost increases?
Show Answer
The three most common sources are: (1) prompt regressions, where a code change silently adds verbose context to prompts, inflating token count; (2) retry storms, where error handling loops call the API repeatedly without backoff, multiplying costs during outages; and (3) evaluation sprawl, where automated eval suites or logging pipelines run more frequently than intended. Per-feature budget alerts at 80% of expected daily spend help catch these anomalies early.

Key Takeaways