## Troubleshooting

Most issues come down to Ollama reachability, missing local models, model size, or safety filters doing their job.
### Ollama is disconnected

Start Ollama, keep the host reachable, then run LocalPilot: Check Ollama Status.

```
ollama serve
```
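If the status check keeps failing, you can confirm reachability from a terminal by querying Ollama's HTTP endpoint directly. A minimal check, assuming the default host and port of `localhost:11434` (adjust if you point LocalPilot at a remote host):

```bash
# Prints "Ollama is running" when the server is up
curl http://localhost:11434

# Lists the models the server exposes; LocalPilot can only select from these
curl http://localhost:11434/api/tags
```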
### No local model is available

Run LocalPilot: Run Setup or pull a model manually, then select it inside VS Code.

```
ollama pull qwen2.5-coder:1.5b
```
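Before selecting the model in VS Code, it can help to confirm the pull actually landed. This uses the standard Ollama CLI, nothing LocalPilot-specific:

```bash
# The model name shown here must match the one you select in VS Code
ollama list
```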
### Inline completions are slow

Switch inline completion mode to line, use micro or lite mode, lower context/output limits, or choose a smaller model.

```
localpilot.inlineCompletionMode = line
localpilot.mode = lite
```
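The same keys can be set in your VS Code settings.json. A sketch using only the two settings named above; the exact keys for the context and output limits aren't shown here, so check the extension's settings UI for their names:

```jsonc
// .vscode/settings.json (workspace) or user settings
{
  "localpilot.inlineCompletionMode": "line",
  "localpilot.mode": "lite"
}
```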
### Suggestions are low quality

Try a larger coding model and select it for chat or inline suggestions with LocalPilot: Select Local Model.

```
ollama pull qwen2.5-coder:7b
```
### Files are skipped or blocked

LocalPilot intentionally blocks secrets, generated folders, dependency folders, lock files, minified files, and oversized files.

```
localpilot.maxFileSizeKb = 500
```
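If a legitimate file is skipped only because of its size, the size cap is the one knob exposed above. A settings.json sketch that raises it; the 1000 here is just an example value, not a recommended setting:

```jsonc
{
  // Files larger than this (in KB) are excluded from context
  "localpilot.maxFileSizeKb": 1000
}
```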