# AI

Chat with LLMs. The AI module lets you query language models directly from zlaunch.
## Supported Providers
| Provider | Environment Variables |
|---|---|
| Ollama (local) | `OLLAMA_URL`, `OLLAMA_MODEL` |
| Google Gemini | `GEMINI_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| OpenRouter | `OPENROUTER_API_KEY`, `OPENROUTER_MODEL` |
zlaunch checks providers in the order listed above and uses the first one whose environment variables are set.
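As a rough sketch of that fallback (assuming the check follows the table order, and treating a set variable as "configured"), the selection is equivalent to:

```sh
# Hypothetical sketch of zlaunch's provider selection, assuming table order.
if   [ -n "$OLLAMA_URL" ];         then echo "using Ollama"
elif [ -n "$GEMINI_API_KEY" ];     then echo "using Google Gemini"
elif [ -n "$OPENAI_API_KEY" ];     then echo "using OpenAI"
elif [ -n "$OPENROUTER_API_KEY" ]; then echo "using OpenRouter"
else echo "no AI provider configured" >&2
fi
```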
## Setup
### Ollama (local)
- Install and start Ollama
- Pull a model:
  ```sh
  ollama pull llama2
  ```
- Set environment variables:
  ```sh
  export OLLAMA_URL="http://localhost:11434"
  export OLLAMA_MODEL="llama2"
  ```
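To confirm the local Ollama server is reachable before you launch zlaunch, you can query Ollama's `/api/tags` endpoint, which lists the models you have pulled:

```sh
curl http://localhost:11434/api/tags
```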
### Google Gemini

```sh
export GEMINI_API_KEY="your-api-key"
```

### OpenAI

```sh
export OPENAI_API_KEY="your-api-key"
```

### OpenRouter

```sh
export OPENROUTER_API_KEY="your-api-key"
export OPENROUTER_MODEL="openai/gpt-4"  # optional
```

Add these to your shell profile (~/.bashrc, ~/.zshrc) or compositor config.
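For example, if your compositor is Hyprland, the same variables can be set with its `env` keyword (shown here for the Ollama setup; adapt the names and values to your provider):

```
env = OLLAMA_URL,http://localhost:11434
env = OLLAMA_MODEL,llama2
```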
## Usage
1. **Open AI mode:**

   ```sh
   zlaunch show --modes ai
   ```

2. **Type your prompt.** Enter your question or request.

3. **View the response.** It streams in real time with markdown rendering.
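If you open AI mode often, you can bind the command to a shortcut in your compositor. A hypothetical Hyprland bind (the key choice is arbitrary) might look like:

```
bind = SUPER, A, exec, zlaunch show --modes ai
```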
## Features
- Streaming responses — See output as it's generated
- Markdown rendering — Code blocks, lists, formatting
- Conversation context — Follow-up questions work
Tips:
- Ollama runs locally with no API costs
- Cloud APIs typically respond faster than a local model on modest hardware
- Responses can be copied to the clipboard