Local-first coding help for VS Code
A private Ollama-powered coding assistant for developers who want autocomplete, chat, explanations, fixes, tests, and safe code edits without sending source code to cloud AI services.
function completeLocally(context) {
  const host = "http://localhost:11434";
  const model = "qwen2.5-coder:1.5b";
  return ollama.generate({
    host,
    model,
    prompt: context.safePrompt
  });
}

What it does
LocalPilot is built around a simple contract: VS Code prepares bounded, filtered context and sends it only to the configured Ollama REST API host.
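Under that contract, the request sent to Ollama's REST API can be sketched as follows. This is an illustrative helper, not LocalPilot's actual code; only the `/api/generate` endpoint and its `model`, `prompt`, and `stream` fields come from Ollama's documented API.

```javascript
// Sketch: build a request for Ollama's /api/generate endpoint.
// buildGenerateRequest is a hypothetical helper, not part of LocalPilot.
function buildGenerateRequest(host, model, safePrompt) {
  return {
    url: `${host}/api/generate`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // stream: false asks Ollama for a single JSON response
      // instead of a stream of partial chunks.
      body: JSON.stringify({ model, prompt: safePrompt, stream: false })
    }
  };
}

// Usage (Node 18+ or any runtime with fetch):
// const { url, options } = buildGenerateRequest(
//   "http://localhost:11434", "qwen2.5-coder:1.5b", "function add(a, b) {");
// const res = await fetch(url, options);
```

Because the prompt never leaves the configured host, swapping models or hosts is a configuration change rather than a code change.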
Copilot-style ghost text runs through your configured Ollama host, with full or one-line inline completion modes.
The chat panel can answer freeform questions about the active file or selected code and includes quick actions for common tasks.
Explain code, explain errors, add comments, fix code, generate tests, and solve coding problems directly from VS Code.
Fixes and comment generation open a diff preview first. You choose whether to apply, copy, or cancel the proposed replacement.
Micro, lite, standard, custom, and auto modes tune context and output budgets so local models stay responsive.
LocalPilot has no telemetry, avoids sensitive files, and redacts secret-like strings before prompt text reaches Ollama.
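A redaction pass like the one described can be sketched as a set of secret-shaped patterns applied before prompt text leaves the editor. The patterns and the `[REDACTED]` token below are illustrative assumptions, not LocalPilot's actual redaction rules.

```javascript
// Sketch: redact secret-like strings before they reach a prompt.
// These patterns are examples only, not LocalPilot's real rule set.
const SECRET_PATTERNS = [
  // key = "value" style assignments for common secret names
  /(?:api[_-]?key|token|secret|password)\s*[:=]\s*["']?[\w\-./+]{8,}["']?/gi,
  // AWS access key id shape
  /\bAKIA[0-9A-Z]{16}\b/g,
  // GitHub token shapes (ghp_, gho_, ghu_, ghs_, ghr_)
  /\bgh[pousr]_[A-Za-z0-9]{20,}\b/g
];

function redactSecrets(text) {
  return SECRET_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, "[REDACTED]"),
    text
  );
}
```

Pattern-based redaction is a best-effort filter: it catches common secret shapes but cannot guarantee that every sensitive string is removed.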
Guides
Practical articles for setting up private autocomplete, choosing local coding models, and keeping VS Code responsive with Ollama.
Set up private VS Code autocomplete with Ollama and LocalPilot, choose a coding model, and keep source code on your machine.
Learn when a local Ollama-powered VS Code assistant is a good fit and what tradeoffs to expect compared with cloud coding tools.
Compare practical Ollama model choices for LocalPilot autocomplete, chat, code fixes, and low-resource development machines.
Quick setup
Install Ollama, start the local server, pull a coding model, then run LocalPilot setup inside VS Code. The extension asks before any model download.
ollama serve
ollama pull qwen2.5-coder:1.5b
code --install-extension localpilot-0.0.1.vsix
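After the steps above, you can confirm the server is reachable before relying on completions. Ollama's `/api/tags` endpoint lists the models you have pulled; the helper below is a hypothetical health check, not part of the extension.

```javascript
// Sketch: check that the local Ollama server is reachable.
// /api/tags is Ollama's documented endpoint for listing pulled models.
function tagsUrl(host) {
  // Normalize a trailing slash so the path joins cleanly.
  return `${host.replace(/\/$/, "")}/api/tags`;
}

async function ollamaIsUp(host = "http://localhost:11434") {
  try {
    const res = await fetch(tagsUrl(host));
    return res.ok;
  } catch {
    // Connection refused means the server is not running.
    return false;
  }
}
```

If `ollamaIsUp()` resolves to false, re-run `ollama serve` and check that nothing else is bound to port 11434.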