
I Tried Running LLMs Locally So You Don't Have To

2025-07-10 · Catalypt AI Team · ai-first

"We need to run everything locally for security." Famous last words. I spent $4,000 on hardware, burned through three weekends, and ended up with a system that makes GPT-3.5 look like Einstein. Here's my journey into local LLM hell.

The Hardware Arms Race

Started confident. "I have a decent gaming rig!" Then I read the requirements:

  • LLaMA 70B: 140GB of RAM minimum
  • Mixtral 8x7B: 90GB RAM
  • Even "small" 13B models: 26GB minimum

My 16GB gaming rig: "Am I a joke to you?"
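
Those requirements aren't arbitrary: memory is roughly parameters × bytes per weight, plus overhead for the KV cache and runtime. A back-of-the-envelope sketch (the 20% overhead factor is my own guess, not a spec):

// Back-of-the-envelope memory estimate: parameters × bytes per weight, plus overhead
// (the 20% overhead factor is a guess; KV cache and runtime usage vary widely)
function estimateMemoryGB(paramsBillions, bytesPerWeight = 2 /* FP16 */) {
  const weightsGB = paramsBillions * bytesPerWeight;  // 1B params at FP16 ≈ 2GB
  return Math.round(weightsGB * 1.2);
}

console.log(estimateMemoryGB(70));  // ~168GB -- the "140GB minimum" is the weights alone
console.log(estimateMemoryGB(13));  // ~31GB  -- my 16GB rig never had a chance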

The shopping list grew:

// My local LLM hardware journey
const hardware_costs = {
  "NVIDIA RTX 4090": 1599,  // "Just get the best GPU"
  "64GB DDR5 RAM": 400,     // "You'll need more RAM"
  "2TB NVMe SSD": 200,      // "Models are huge"
  "Better PSU": 200,        // "4090 is hungry"
  "New case": 150,          // "Better cooling needed"
  "Therapy": Infinity       // After it still didn't work well
};

// Total: $2,549 + my sanity

Then came the setup saga:

  • Day 1: "Just run ollama pull llama2"
  • Day 2: Compiling llama.cpp from source because the binary doesn't work
  • Day 3: Learning about quantization formats (Q4_K_M vs Q5_K_S, anyone?)
  • Day 4: Why is Python using system RAM when I have VRAM?
  • Day 5: Discovering my GPU's compute capability is 0.1 of a version too old
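
To be fair, once everything was installed, actually calling the model was the easy part. A minimal sketch, assuming Ollama's default REST endpoint on localhost:11434:

// Minimal request to a local model through Ollama's REST API
// (assumes `ollama serve` is running and the model has already been pulled)
async function askLocal(prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mistral", prompt, stream: false })
  });
  const data = await res.json();
  return data.response;  // the generated text
}

askLocal("Write a function to validate email addresses").then(console.log);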

Finally got Mistral 7B running locally. The benchmarks:

// Performance comparison (tokens per second)
const benchmark_results = {
  "GPT-4-Turbo": {
    speed: 60,
    quality: "Actually understands context",
    cost: "$0.01/1K tokens"
  },
  "Local Mistral 7B": {
    speed: 8,
    quality: "Forgot the question mid-answer",
    cost: "My electricity bill + hardware"
  },
  "Local LLaMA 70B (Q4)": {
    speed: 2,
    quality: "Pretty good when it works",
    cost: "Can't run other programs simultaneously"
  }
};
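
The local numbers aren't scientific. I pulled them from Ollama's response metadata, which reports how many tokens were generated and how long generation took; a sketch, assuming those fields are present:

// Rough tokens/sec from Ollama's response metadata
// (eval_count = generated tokens, eval_duration = generation time in nanoseconds)
async function measureTokensPerSecond(model, prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false })
  });
  const data = await res.json();
  return data.eval_count / (data.eval_duration / 1e9);
}

measureTokensPerSecond("mistral", "Explain quantization in one paragraph")
  .then(tps => console.log(`${tps.toFixed(1)} tokens/sec`));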

But wait, it gets better. The responses:

Me: "Write a function to validate email addresses"

Local LLM: "Here"s a function to validate email addresses: def validate_email(email):   return '@' in email

This checks if the email contains an @ symbol, which all valid emails must have."

Technically correct. Practically useless.
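
For contrast, here's roughly what a minimally useful answer looks like (hand-written, not model output):

// What a minimally useful answer looks like (hand-written, not model output)
function validateEmail(email) {
  // Something before the @, something after, a dot in the domain, no whitespace
  return typeof email === "string" && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

console.log(validateEmail("john@example.com"));  // true
console.log(validateEmail("john@"));             // false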

"Just quantize the model!" they said. Here's what they don't tell you:

  • Q8: Barely fits in RAM, 2 tokens/second
  • Q5: Fits better, 5 tokens/second, noticeably dumber
  • Q4: Fast! 15 tokens/second! Also can't count past 3
  • Q2: Lightning fast! Outputs random words!

It's like compressing a JPEG until it's fast to load but you can't tell if it's a cat or a toaster.

// Quantization quality degradation
const quantization_tradeoffs = {
  "FP16 (Original)": {
    size: "14GB",
    speed: "0.5 tokens/sec",
    quality: "100%",
    practical: "Won't fit in consumer GPU"
  },
  "Q8": {
    size: "7.5GB",
    speed: "2 tokens/sec",
    quality: "98%",
    practical: "Still too slow"
  },
  "Q4_K_M": {
    size: "4.5GB",
    speed: "15 tokens/sec",
    quality: "85%",
    practical: "Forgets context, math is creative"
  },
  "Q2_K": {
    size: "2.5GB",
    speed: "40 tokens/sec",
    quality: "Potato",
    practical: "Outputs look like autocorrect gone wild"
  }
};
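
The sizes in that table are just bits-per-weight arithmetic. A rough estimator for a 7B model (the bits-per-weight values are approximate averages, and real GGUF files add metadata on top):

// Approximate model file size: parameters × bits per weight ÷ 8
// (bits-per-weight values are rough averages; real GGUF files add metadata on top)
const bitsPerWeight = { FP16: 16, Q8_0: 8.5, Q5_K_S: 5.7, Q4_K_M: 4.8, Q2_K: 2.6 };

function modelSizeGB(paramsBillions, format) {
  return (paramsBillions * bitsPerWeight[format]) / 8;
}

for (const format of Object.keys(bitsPerWeight)) {
  console.log(`${format}: ~${modelSizeGB(7, format).toFixed(1)}GB`);
}
// Prints roughly 14.0, 7.4, 5.0, 4.2, 2.3 GB -- in line with the table above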

After all that pain, I found exactly three use cases where local worked:

Use case #1: Simple, deterministic tasks where even a dumb model works:

// The ONLY things local LLMs did okay at
const local_llm_wins = [
  {
    task: "JSON formatting",
    prompt: "Format this as JSON: name John age 30",
    result: '{"name": "John", "age": 30}',  // Even Q4 models can do this
    usefulness: "Just use JSON.stringify"
  },
  {
    task: "Simple regex",
    prompt: "Regex for email validation",
    result: "/^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$/",  // Basic but works
    usefulness: "Stack Overflow is faster"
  },
  {
    task: "Variable naming",
    prompt: "Variable name for user age",
    result: "userAge",  // Wow, revolutionary
    usefulness: "My IDE already does this"
  }
];

Use case #2: Basic autocomplete, when you just need variable name suggestions or simple completions. Copilot is still 100x better, though.

Use case #3: Data that genuinely can't leave your network, when you absolutely cannot send anything to external APIs. But then you realize the local model is too dumb to handle the sensitive data properly anyway.

Even when it works, the hidden costs pile up:

  • Electricity: Running a 4090 at full tilt? That's 450W. For hours. (Napkin math after this list.)
  • Cooling: My office became a sauna. The AC bill doubled.
  • Noise: GPU fans at 100% sound like a jet engine.
  • Maintenance: Models need updates. Frameworks break. Dependencies conflict.
  • Opportunity cost: Time spent making it work vs. actually building things.
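
To put the electricity line in numbers, a napkin-math sketch assuming roughly $0.15/kWh (your rate will vary):

// Napkin math: monthly electricity for a GPU running near full tilt
// ($0.15/kWh is an assumption; plug in your own rate)
function monthlyGpuCost(watts, hoursPerDay, dollarsPerKWh = 0.15) {
  const kWhPerMonth = (watts / 1000) * hoursPerDay * 30;
  return kWhPerMonth * dollarsPerKWh;
}

console.log(monthlyGpuCost(450, 8).toFixed(2));  // ~16.20 dollars/month for the GPU alone
// ...before the AC starts working overtime to pump that heat back out of the room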

"But we need it for privacy!" Okay, let's think about this:

  • You downloaded the model from... the internet
  • You're running code from... random GitHub repos
  • The model was trained on... the entire internet
  • Your "private" code is probably on GitHub anyway

Plus, local models are so bad you'll end up using ChatGPT anyway when you need real work done.

// What actually happened in practice
const reality_check = {
  monday: {
    plan: "Use local LLM for everything!",
    reality: "Spent 2 hours getting wrong answers",
    outcome: "Used ChatGPT"
  },
  tuesday: {
    plan: "Just use it for simple tasks",
    reality: "Even simple tasks were painful",
    outcome: "Used ChatGPT"
  },
  wednesday: {
    plan: "Maybe for offline coding?",
    reality: "Suggestions worse than no suggestions",
    outcome: "Just coded without AI"
  },
  thursday: {
    plan: "Privacy-sensitive work only",
    reality: "Anonymized data and used GPT-4",
    outcome: "10x faster and actually worked"
  },
  friday: {
    plan: "Selling GPU on eBay",
    reality: "Keeping it for gaming",
    outcome: "At least Cyberpunk looks good"
  }
};
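
Thursday's "anonymize and use GPT-4" step was nothing sophisticated, by the way. A minimal sketch of the idea (the patterns and placeholders are illustrative, not a real PII scrubber):

// Crude anonymization before sending a prompt to a cloud API
// (illustrative only -- a real PII scrubber needs much more than two regexes)
function anonymize(text) {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "<EMAIL>")            // mask email addresses
    .replace(/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "<PHONE>");  // mask US-style phone numbers
}

const ticket = "Customer jane.doe@example.com (555-123-4567) reports a login bug";
console.log(anonymize(ticket));
// "Customer <EMAIL> (<PHONE>) reports a login bug"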

After burning money and time, here's my setup:

  • 99% of tasks: GPT-4 / Claude API
  • Sensitive data: Anonymize first, then use cloud APIs
  • Offline work: I just... write code myself
  • Local models: Gathering dust

So when does running locally actually make sense? Only if:

  1. You have actual regulatory requirements (not just paranoia)
  2. You're processing data that legally cannot leave your network
  3. You have a specific, simple task that a small model can handle
  4. You enjoy pain and have money to burn
  5. You're doing AI research

That's it. That's the list.

Running LLMs locally right now is like self-hosting your own email: technically possible, practically painful, and usually not worth it unless you have very specific requirements.

The gap between local and cloud models isn't closing - it's widening. While you're struggling to run last year's model on your $5k rig, OpenAI just released something 10x better that runs instantly for pennies.
