
AI Proxy

xyva.ai can connect to a self-hosted AI gateway that exposes an OpenAI-compatible endpoint and routes requests through a single controlled access point.

WARNING

AI Proxy is a private deployment option available on request. It is intended for teams that need controlled provider routing, VPN-only access, or stricter infrastructure boundaries.

Architecture

text
xyva.ai (Electron) -- WireGuard VPN --> VPS AI Proxy (10.8.0.1:3000)
                                     |- Gemini  (OAuth)
                                     |- OpenAI  (API key)
                                     `- Claude  (API key)

The proxy listens on http://10.8.0.1:3000 and is reachable only through the WireGuard VPN tunnel; it has no public internet exposure.
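For reference, a client-side WireGuard configuration for reaching the proxy might look like the sketch below. The keys, endpoint address, and interface IP are placeholders; an actual deployment supplies its own values.

```text
[Interface]
# Placeholder private key and tunnel address -- provided by your deployment
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
# Placeholder server public key and endpoint -- provided by your deployment
PublicKey = <server-public-key>
Endpoint = <vps-public-ip>:51820
# Route only the proxy subnet through the tunnel
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
```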

OpenAI-Compatible API

The proxy implements the OpenAI chat completions contract, so any client that works with the OpenAI SDK can target it without changes:

bash
curl http://10.8.0.1:3000/v1/chat/completions \
  -H "Authorization: Bearer $PROXY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Configuration

In xyva, go to Settings > AI Provider and set:

Field           Value
-----           -----
Endpoint        http://10.8.0.1:3000
API Key         Your proxy bearer token
Default Model   Model identifier (for example gemini-2.5-pro)
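To confirm the tunnel is up before saving these settings, a quick TCP reachability check against the proxy port can help. The helper below is our own sketch, not part of xyva:

```python
import socket


def proxy_reachable(host: str = "10.8.0.1", port: int = 3000,
                    timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the proxy endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    if proxy_reachable():
        print("Proxy reachable -- WireGuard tunnel looks up.")
    else:
        print("Cannot reach 10.8.0.1:3000 -- check your VPN connection.")
```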

Availability

AI Proxy is not part of the standard public Solo or Team self-serve checkout. If you need this setup, contact the team about a custom or private deployment.

  • Ops Portal - infrastructure dashboard showing proxy health
  • Swarm QA - AI-powered scanning that can use the proxy
