Code explanations are most useful near the code

A good explanation starts from the exact selection you are looking at. LocalPilot adds VS Code commands for explaining selected code and selected errors, so you can get an explanation in context without leaving the editor.

Because the model call goes through Ollama, the default request stays local to your machine. That is useful when the code is private, experimental, or not ready to paste into a hosted chat service.
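To make the local flow concrete, here is a minimal sketch of how such a request could be assembled. This is illustrative only: the `/api/generate` endpoint and default port 11434 are Ollama's, but the model name, settings, and helper are hypothetical, not LocalPilot's actual internals.

```typescript
// Hypothetical sketch: assemble the body for a request to a local
// Ollama host. Unless the host setting is changed, the URL points at
// the loopback address, so the prompt never leaves the machine.
interface OllamaRequest {
  model: string;
  prompt: string;
  stream: boolean;
}

function buildOllamaRequest(
  prompt: string,
  model = "qwen2.5-coder", // illustrative model name
  host = "http://localhost:11434" // Ollama's default local address
): { url: string; body: OllamaRequest } {
  return {
    url: `${host}/api/generate`,
    body: { model, prompt, stream: false },
  };
}
```

Changing the configured host is the only way the request leaves the machine, which is why the host setting is worth auditing in shared workspace configurations.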

What LocalPilot sends

The extension builds a bounded prompt from the active selection and nearby context. It avoids common sensitive paths, dependency folders, generated folders, minified bundles, lock files, and oversized files. It also redacts secret-like strings before sending prompt text to the configured Ollama host.

  • Private key blocks are redacted.
  • API key, token, password, and secret assignments are redacted.
  • Large generated folders are skipped.
  • The configured Ollama host controls where requests go.
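The two safeguards above can be sketched as a path filter plus a redaction pass. The patterns and size limit below are illustrative assumptions, not LocalPilot's actual rules.

```typescript
// Hypothetical sketch of pre-send filtering and redaction.

// 1. Skip files that should never be sent: dependency and generated
//    folders, minified bundles, lock files, env files, oversized files.
const SKIP_PATTERNS = [
  /(^|\/)node_modules\//,
  /(^|\/)dist\//,
  /\.min\.js$/,
  /package-lock\.json$/,
  /\.env$/,
];
const MAX_FILE_BYTES = 200_000; // illustrative size cap

function shouldSkip(path: string, sizeBytes: number): boolean {
  return sizeBytes > MAX_FILE_BYTES || SKIP_PATTERNS.some((p) => p.test(path));
}

// 2. Redact secret-like strings before the prompt text is built.
function redactSecrets(text: string): string {
  return text
    // PEM-style private key blocks
    .replace(
      /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
      "[REDACTED PRIVATE KEY]"
    )
    // api_key / token / password / secret assignments
    .replace(
      /\b(api[_-]?key|token|password|secret)\b(\s*[:=]\s*)\S+/gi,
      "$1$2[REDACTED]"
    );
}
```

Running the filter before the redactor keeps obviously unsafe files out entirely, so the regex pass only has to catch secrets embedded in otherwise shareable code.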

A useful explanation workflow

Select a function, run LocalPilot: Explain Selected Code, then ask follow-up questions in chat about naming, edge cases, or tests. For errors, select the stack trace or pasted terminal output and run LocalPilot: Explain Error.

This works best when you keep the selection focused. Smaller selections give local models less to sift through, which produces clearer explanations and keeps prompts small.
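A bounded prompt of this kind can be sketched as a simple truncation step. The character limits and prompt wording here are assumptions for illustration, not LocalPilot's actual values.

```typescript
// Hypothetical sketch: cap the selection and its surrounding context
// so the prompt sent to the local model stays small.
const MAX_SELECTION_CHARS = 4_000; // illustrative limit
const MAX_CONTEXT_CHARS = 1_000;   // illustrative limit

function buildBoundedPrompt(selection: string, surrounding: string): string {
  const sel = selection.slice(0, MAX_SELECTION_CHARS);
  const ctx = surrounding.slice(0, MAX_CONTEXT_CHARS);
  return [
    "Explain the following code.",
    "Nearby context:",
    ctx,
    "Selection:",
    sel,
  ].join("\n");
}
```

Truncating rather than rejecting oversized selections keeps the command usable on large functions, at the cost of the model seeing only the first part of the selection.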

Explanation is not review

Treat local AI explanations as a fast reading aid. They can help you understand intent, identify likely failure points, and find test ideas, but they do not replace code review, static analysis, or running the test suite.