# Project Context
The AI Assistant is not a generic chatbot — it understands your specific project. This page explains how context is collected, injected, and managed.
## How the Assistant Knows Your Project
When you open an AI Chat session, xyva automatically gathers context from three sources:
| Source | What it provides | When it updates |
|---|---|---|
| File tree scan | Directory structure, file types, framework detection (React, Vue, Angular, etc.) | On project open and file changes |
| Test results | Latest Playwright/Jest/Vitest run outcomes, pass/fail counts, failure messages | After each test run |
| Architecture vault | Component diagrams, dependency graphs, module relationships from the Architecture page | On vault refresh |
This context is prepended to your conversation as a structured system message so the AI can reference real file paths, actual test failures, and known architectural decisions.
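The shape of that structured system message is internal to xyva, but a minimal sketch of the idea looks like this. All names here (`ProjectContext`, `buildSystemMessage`, the section headers) are illustrative assumptions, not xyva's actual API:

```typescript
// Hypothetical sketch of serializing gathered context into one system
// message. Field and function names are illustrative, not xyva's API.
interface TestResult {
  spec: string;
  status: "passed" | "failed";
  message?: string;
}

interface ProjectContext {
  fileTree: string[];        // indexed paths, e.g. "src/App.tsx"
  testResults: TestResult[]; // outcomes from the latest run
  architecture: string;      // serialized module graph from the vault
}

function buildSystemMessage(ctx: ProjectContext): string {
  const failures = ctx.testResults
    .filter((r) => r.status === "failed")
    .map((r) => `- ${r.spec}: ${r.message ?? "no message"}`)
    .join("\n");
  return [
    "## Project files",
    ctx.fileTree.join("\n"),
    "## Failing tests",
    failures || "(none)",
    "## Architecture",
    ctx.architecture,
  ].join("\n\n");
}
```

Because real paths and failure messages appear verbatim in the prompt, the model can quote them back accurately instead of inventing plausible-looking ones.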
## Context Injection Flow
```
Project opens
  → File tree scanner indexes src/, tests/, playwright/
  → Test result cache loads latest XML/JSON reports
  → Architecture vault provides module graph
  → Combined context is serialized as system prompt
  → Sent with every chat request to the LLM provider
```

**INFO:** Context injection happens transparently. You do not need to manually attach files or paste code — the assistant already knows what is in your project.
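The first step in the flow, file-tree scanning, is also where framework detection (React, Vue, Angular) happens. A plausible sketch of that check, based on `package.json` dependencies — the function name and the exact heuristics are assumptions, not xyva's implementation:

```typescript
// Illustrative framework detection from a project's dependency map.
// A real scanner would also inspect config files and file extensions.
function detectFramework(deps: Record<string, string>): string {
  if (deps["react"]) return "React";
  if (deps["vue"]) return "Vue";
  if (deps["@angular/core"]) return "Angular";
  return "unknown";
}

// Typically fed from package.json:
//   const pkg = JSON.parse(readFileSync("package.json", "utf8"));
//   detectFramework({ ...pkg.dependencies, ...pkg.devDependencies });
```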
## Context Window Management
LLM providers impose token limits. The assistant manages this automatically:
- Priority ranking — test failures and recent file changes rank higher than static structure.
- Progressive summarization — when context exceeds 60% of the window, older conversation turns are summarized.
- Selective inclusion — only files relevant to the current conversation topic are included in full. The rest are referenced by path.
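The summarization policy can be sketched roughly as follows. The 60% threshold comes from the behavior described above; everything else — the crude 4-characters-per-token estimate and the placeholder summary text — is an illustrative assumption, not xyva's actual tokenizer or summarizer:

```typescript
// Hedged sketch of a context-budget policy: when estimated tokens exceed
// 60% of the window, replace the oldest turns with short summaries.
interface Turn {
  role: "user" | "assistant";
  text: string;
}

// Crude approximation (~4 chars per token); real providers expose tokenizers.
const estimateTokens = (s: string): number => Math.ceil(s.length / 4);

function fitToWindow(turns: Turn[], windowTokens: number): Turn[] {
  const budget = windowTokens * 0.6; // summarize once 60% is exceeded
  let total = turns.reduce((n, t) => n + estimateTokens(t.text), 0);
  const result = [...turns];
  // Summarize oldest turns first; always keep the latest turn intact.
  for (let i = 0; i < result.length - 1 && total > budget; i++) {
    const saved = estimateTokens(result[i].text);
    result[i] = { ...result[i], text: `[summary] ${result[i].text.slice(0, 40)}…` };
    total += estimateTokens(result[i].text) - saved;
  }
  return result;
}
```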
**WARNING:** Very large monorepos (50k+ files) may exceed scanning limits. Use the `.xyvaignore` file to exclude directories like `node_modules`, `dist`, or `vendor` from context ingestion.
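A typical `.xyvaignore` might look like this — assuming it follows `.gitignore`-style patterns, which the docs above do not specify:

```
# .xyvaignore — directories and files excluded from context ingestion
node_modules/
dist/
vendor/
coverage/
*.min.js
```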
## What the Assistant Can Reference
Once context is loaded, you can ask questions like:
- "Which components have no test coverage?"
- "Why did the login spec fail in the last run?"
- "Show me the dependency chain from App.tsx to the API layer."
- "What framework version are we running?"
The assistant answers using actual data from your workspace, not generic assumptions.
## Privacy
All context is assembled locally in the Electron main process. The only data sent externally is the prompt payload to your configured LLM provider. If you use Ollama, nothing leaves your machine.
