Server Configuration
Chat Completions API
LLM Bridge provides an OpenAI-compatible API server with support for multiple LLM providers.
API
LLM Bridge exposes a POST /chat/completions API endpoint.
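Because the endpoint is OpenAI-compatible, any OpenAI-style HTTP client can call it. A minimal sketch with curl, assuming the bridge listens on localhost:9000 and the configured provider serves gpt-4.1 (both are placeholder values, not defaults from this document):

```sh
# Send an OpenAI-style chat completion request to the bridge.
# Replace localhost:9000 and the model name with your own configuration.
curl -X POST http://localhost:9000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```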
LLM Bridge Configuration
yomo.yaml
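A minimal sketch of the bridge section of yomo.yaml, combining the fields documented below; the address, API key, and model values are placeholders:

```yaml
bridge:
  ai:
    server:
      addr: localhost:9000   # address the API server listens on (placeholder)
      provider: openai       # which configured provider handles requests
    providers:
      openai:
        api_key: <your-openai-api-key>
        model: gpt-4.1
      ollama:
        api_endpoint: http://localhost:11434
```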
Configuration Fields
API Server Configuration (bridge.ai.server)
addr: The address the API server listens on.
provider: The LLM provider that serves requests. Supported providers include openai, ollama, vllm, gemini, anthropic, vertexai, etc.
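For example, a server block might look like this (the address and provider are placeholder values):

```yaml
bridge:
  ai:
    server:
      addr: localhost:9000   # placeholder listen address
      provider: openai       # must match one of the configured providers
```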
Providers Configuration (bridge.ai.providers)
OpenAI Provider
api_key: The API key for OpenAI. You can grab it from the OpenAI Dashboard.
model: The model to use. All chat completion models are supported, such as gpt-4.1, gpt-4o, etc.
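A sketch of the OpenAI provider block (the API key is a placeholder):

```yaml
bridge:
  ai:
    providers:
      openai:
        api_key: <your-openai-api-key>
        model: gpt-4.1
```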
Ollama Provider
api_endpoint: The endpoint of the Ollama API server. Default is http://localhost:11434.
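A sketch of the Ollama provider block, using the documented default endpoint:

```yaml
bridge:
  ai:
    providers:
      ollama:
        api_endpoint: http://localhost:11434
```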
Google Gemini Provider
api_key: The API key for Google Gemini. You can grab it from the Google Cloud Console.
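A sketch of the Gemini provider block (the API key is a placeholder):

```yaml
bridge:
  ai:
    providers:
      gemini:
        api_key: <your-gemini-api-key>
```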
vLLM Provider
api_endpoint: The endpoint of the vLLM API server. Default is http://localhost:9999/v1.
api_key: The API key for the vLLM server.
model: The model to use. All chat completion models are supported, such as deepseek-ai/DeepSeek-R1, llama3, etc.
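A sketch of the vLLM provider block (the API key is a placeholder; the endpoint is the documented default):

```yaml
bridge:
  ai:
    providers:
      vllm:
        api_endpoint: http://localhost:9999/v1
        api_key: <your-vllm-api-key>
        model: deepseek-ai/DeepSeek-R1
```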
Anthropic Provider
api_key: The API key for Anthropic. You can grab it from the Anthropic Dashboard.
model: The model to use. All chat completion models are supported, such as claude-3.7-sonnet, claude-3, etc.
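A sketch of the Anthropic provider block (the API key is a placeholder):

```yaml
bridge:
  ai:
    providers:
      anthropic:
        api_key: <your-anthropic-api-key>
        model: claude-3.7-sonnet
```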