Local AI Setup

Run large language models locally on your own machine: no rate limits, no cloud dependency. Everything stays on your hardware.

This guide covers setting up a complete local AI development environment on Windows with WSL2 and an NVIDIA GPU.

Why Local AI?

  • Privacy: Your code and conversations never leave your machine
  • No costs: No per-token charges or API fees
  • No rate limits: Use it as much as you want
  • Offline: Works without an internet connection

Contents

  1. Ollama Setup - Run local LLMs
  2. WSL2 + NVIDIA Setup - Enable GPU acceleration
  3. Opencode Setup - AI-powered CLI assistance
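
Once you have worked through the sections above, you can sanity-check the core pieces from a WSL2 shell. This is a minimal sketch, assuming Ollama is installed via its official install script and that GPU passthrough exposes `nvidia-smi` inside WSL2; the exact output depends on your driver and model list.

```shell
#!/bin/sh
# Verify GPU passthrough: nvidia-smi is provided to WSL2 by the Windows host driver.
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "GPU visible:"
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found: GPU passthrough is not set up yet"
fi

# Verify Ollama is installed and list any models you have pulled so far.
if command -v ollama >/dev/null 2>&1; then
  ollama list
else
  echo "ollama not found: run the Ollama setup section first"
fi
```

If both checks pass, pulling and running a model (for example `ollama run llama3`, assuming that model name is available in the Ollama library) confirms the full stack end to end.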