Local-first coding help for VS Code

LocalPilot

A private, Ollama-powered coding assistant for developers who want autocomplete, chat, explanations, fixes, tests, and safe code edits, all without sending source code to cloud AI services.

localpilot.ts
import { Ollama } from "ollama";

// Bounded, filtered context prepared by the extension.
interface CompletionContext {
  safePrompt: string;
}

function completeLocally(context: CompletionContext) {
  // All traffic goes to the configured local host, nowhere else.
  const client = new Ollama({ host: "http://localhost:11434" });

  return client.generate({
    model: "qwen2.5-coder:1.5b",
    prompt: context.safePrompt,
  });
}
LocalPilot suggestion: keep the request local.

What it does

Practical AI assistance with local boundaries

LocalPilot is built around a simple contract: VS Code prepares bounded, filtered context and sends it only to the configured Ollama REST API host.
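
A minimal sketch of that contract, assuming Ollama's standard /api/generate REST endpoint; buildBoundedContext and the 2048-character budget are illustrative placeholders, not LocalPilot's real internals.

// Illustrative only: trim context to a budget, then talk to the one
// configured host. The budget and helper name are assumptions.
const CONTEXT_BUDGET = 2048;

function buildBoundedContext(fileText: string): string {
  // Keep only the tail of the file so prompts stay small.
  return fileText.slice(-CONTEXT_BUDGET);
}

async function requestCompletion(host: string, fileText: string): Promise<string> {
  // Ollama's REST API: POST /api/generate, non-streaming.
  const res = await fetch(`${host}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5-coder:1.5b",
      prompt: buildBoundedContext(fileText),
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response; // the generated text
}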

Local Ollama autocomplete

Copilot-style ghost text runs through your configured Ollama host, with full-block or one-line inline completion modes.
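
As a sketch of where ghost text hooks in, assuming the completeLocally helper above: VS Code's inline completion API does the rendering, and the first-line trim stands in for the one-line mode.

import * as vscode from "vscode";

// Sketch: ghost text backed by the local model via VS Code's
// inline completion API. The one-line trim is an assumed behavior.
vscode.languages.registerInlineCompletionItemProvider(
  { pattern: "**" },
  {
    async provideInlineCompletionItems(document, position) {
      const prefix = document.getText(
        new vscode.Range(new vscode.Position(0, 0), position)
      );
      const { response } = await completeLocally({ safePrompt: prefix });
      const oneLine = response.split("\n")[0];
      return [new vscode.InlineCompletionItem(oneLine)];
    },
  }
);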

Editor-aware chat

The chat panel can answer freeform questions about the active file or selected code and includes quick actions for common tasks.
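
A hedged sketch of the editor-aware part, using the ollama npm client's chat API; the system prompt, model choice, and function name are placeholders.

import * as vscode from "vscode";
import { Ollama } from "ollama";

// Sketch: answer a question about the active file or selection.
async function askAboutActiveFile(question: string): Promise<string> {
  const editor = vscode.window.activeTextEditor;
  if (!editor) return "Open a file first.";
  // Use the selection if there is one, otherwise the whole file.
  const code = editor.selection.isEmpty
    ? editor.document.getText()
    : editor.document.getText(editor.selection);
  const client = new Ollama({ host: "http://localhost:11434" });
  const reply = await client.chat({
    model: "qwen2.5-coder:1.5b",
    messages: [
      { role: "system", content: "You answer questions about the user's code." },
      { role: "user", content: `${question}\n\n${code}` },
    ],
  });
  return reply.message.content;
}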

Selected-code commands

Explain code, explain errors, add comments, fix code, generate tests, and solve coding problems directly from VS Code.
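
One of these, sketched as a VS Code command over the current selection; the command ID localpilot.explainCode is assumed, and completeLocally is the helper from the top of the page.

import * as vscode from "vscode";

// Sketch: an "explain code" command over the current selection.
// Register it in activate() and push it to context.subscriptions.
const explainCode = vscode.commands.registerCommand(
  "localpilot.explainCode",
  async () => {
    const editor = vscode.window.activeTextEditor;
    if (!editor || editor.selection.isEmpty) return;
    const selected = editor.document.getText(editor.selection);
    const { response } = await completeLocally({
      safePrompt: `Explain this code:\n\n${selected}`,
    });
    vscode.window.showInformationMessage(response);
  }
);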

Safe apply flow

Fixes and comment generation open a diff preview first. You choose whether to apply, copy, or cancel the proposed replacement.
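
A sketch of that flow using VS Code's built-in vscode.diff command; the quick-pick wording and function name are illustrative.

import * as vscode from "vscode";

// Sketch: show a diff first, then let the user decide.
async function previewAndApply(editor: vscode.TextEditor, proposed: string) {
  const current = editor.document.getText(editor.selection);
  const left = await vscode.workspace.openTextDocument({ content: current });
  const right = await vscode.workspace.openTextDocument({ content: proposed });
  await vscode.commands.executeCommand(
    "vscode.diff", left.uri, right.uri, "LocalPilot: proposed change"
  );
  const choice = await vscode.window.showQuickPick(["Apply", "Copy", "Cancel"]);
  if (choice === "Apply") {
    await editor.edit((edit) => edit.replace(editor.selection, proposed));
  } else if (choice === "Copy") {
    await vscode.env.clipboard.writeText(proposed);
  }
  // Cancel (or dismissing the picker) leaves the buffer untouched.
}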

Low-resource modes

Micro, lite, standard, custom, and auto modes tune context and output budgets so local models stay responsive.
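
Roughly, assuming the budgets map onto Ollama's num_ctx and num_predict generation options; the numbers below are illustrative, and the custom and auto modes would derive them from settings or hardware rather than a fixed table.

// Sketch: illustrative per-mode budgets, not LocalPilot's real values.
const MODES = {
  micro:    { num_ctx: 1024, num_predict: 64 },
  lite:     { num_ctx: 2048, num_predict: 128 },
  standard: { num_ctx: 4096, num_predict: 256 },
} as const;

// Passed through as Ollama generation options, e.g.:
// client.generate({ model, prompt, options: MODES["lite"] })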

Private by design

LocalPilot has no telemetry, avoids sensitive files, and redacts secret-like strings before prompt text reaches Ollama.
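
A sketch of the redaction step; these patterns are illustrative stand-ins for whatever rule set LocalPilot actually uses.

// Sketch: scrub secret-like strings before prompt text leaves VS Code.
const SECRET_PATTERNS: RegExp[] = [
  /(api[_-]?key|token|secret|password)\s*[:=]\s*["']?[^\s"']+/gi,
  /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
];

function redactSecrets(text: string): string {
  return SECRET_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, "[REDACTED]"),
    text
  );
}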

Guides

Learn local AI workflows for VS Code and Ollama

Practical articles for setting up private autocomplete, choosing local coding models, and keeping VS Code responsive with Ollama.

Quick setup

Bring your own Ollama model

Install Ollama, start the local server, pull a coding model, then run LocalPilot setup inside VS Code. The extension asks before any model download.

ollama serve
ollama pull qwen2.5-coder:1.5b
code --install-extension localpilot-0.0.1.vsix
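
Since the extension asks before any model download, a setup check might look like this sketch; /api/tags is Ollama's standard model-list endpoint, while the function name and flow are assumptions.

// Sketch: detect whether the model is already pulled before offering
// a download prompt. hasModel is a hypothetical helper name.
async function hasModel(host: string, model: string): Promise<boolean> {
  const res = await fetch(`${host}/api/tags`);
  if (!res.ok) return false; // server not running or unreachable
  const data = await res.json();
  return data.models.some((m: { name: string }) => m.name === model);
}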