Troubleshooting

Fix common LocalPilot setup and runtime issues

Most issues come down to Ollama reachability, missing local models, model size, or safety filters doing their job.

Ollama is disconnected

Start Ollama and make sure the host it listens on is reachable, then run LocalPilot: Check Ollama Status to confirm the connection.

ollama serve
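
If the status check still fails, verify that the server is reachable at all. By default Ollama listens on http://localhost:11434, and its /api/tags endpoint returns the locally installed models as JSON:

curl http://localhost:11434/api/tags

If Ollama runs on a different host or port, make sure LocalPilot points at the same address.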

No models are found

Run LocalPilot: Run Setup, or pull a model manually and then select it inside VS Code with LocalPilot: Select Local Model.

ollama pull qwen2.5-coder:1.5b
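
To confirm the pull succeeded, list the models Ollama has installed:

ollama list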

Inline suggestions feel slow

Switch inline completion mode to line, use micro or lite mode, lower context/output limits, or choose a smaller model.

localpilot.inlineCompletionMode = line
localpilot.mode = lite
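
If suggestions are still slow after tuning, the biggest single win is usually a smaller model; qwen2.5-coder also ships in a 0.5b variant that trades some quality for noticeably lower latency:

ollama pull qwen2.5-coder:0.5b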

Output quality is weak

Try a larger coding model and select it for chat or inline suggestions with LocalPilot: Select Local Model.

ollama pull qwen2.5-coder:7b
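
A larger model only helps if it fits in memory. To check which models Ollama currently has loaded and whether they run on GPU or CPU, use:

ollama ps

A model that spills over to CPU will respond slowly no matter how LocalPilot is configured.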

Files are skipped

LocalPilot intentionally skips files that look like secrets, along with generated folders, dependency folders, lock files, minified files, and files above the configured size limit.

localpilot.maxFileSizeKb = 500
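
Raising the limit lets bigger files through, but very large files slow prompt building and crowd out more relevant context, so increase it only as far as your real source files require.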