
AI

Chat with LLMs

The AI module lets you query language models directly from zlaunch.

[Screenshot: AI module showing a streamed response]

Supported Providers

Provider         Environment Variables
Ollama (local)   OLLAMA_URL, OLLAMA_MODEL
Google Gemini    GEMINI_API_KEY
OpenAI           OPENAI_API_KEY
OpenRouter       OPENROUTER_API_KEY, OPENROUTER_MODEL

zlaunch checks the providers in the order listed above and uses the first one whose environment variables are set.
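
For example, to force a specific provider, make sure only its variables are set. A minimal sketch, assuming the check order matches the table above:

unset OLLAMA_URL OLLAMA_MODEL           # Ollama is checked first; skip it
export GEMINI_API_KEY="your-api-key"    # Gemini is now the first configured provider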

Setup

Local LLM with Ollama:

  1. Install and start Ollama
  2. Pull a model: ollama pull llama2
  3. Set the environment variables:

export OLLAMA_URL="http://localhost:11434"
export OLLAMA_MODEL="llama2"

Google Gemini:

export GEMINI_API_KEY="your-api-key"

OpenAI:

export OPENAI_API_KEY="your-api-key"

OpenRouter:

export OPENROUTER_API_KEY="your-api-key"
export OPENROUTER_MODEL="openai/gpt-4"  # optional

Add these to your shell profile (~/.bashrc, ~/.zshrc) or compositor config.
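
Before launching zlaunch, you can verify an Ollama setup from the shell. A quick sanity check, assuming the default Ollama HTTP API on port 11434:

curl http://localhost:11434/api/tags        # lists pulled models; llama2 should appear
echo "$OLLAMA_URL $OLLAMA_MODEL"            # confirm the variables are set in this shell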

Usage

  1. Open AI mode: zlaunch show --modes ai
  2. Type your prompt: enter your question or request.
  3. View the response: it streams in real time with markdown rendering.
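
If you open AI mode often, a shell alias saves typing. A small convenience sketch (the alias name is an example, not part of zlaunch):

alias ai='zlaunch show --modes ai'          # add to ~/.bashrc or ~/.zshrc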

Features

  • Streaming responses — See output as it's generated
  • Markdown rendering — Code blocks, lists, formatting
  • Conversation context — Follow-up questions build on the earlier exchange

Tips:

  • Ollama runs locally with no API costs
  • Cloud APIs typically respond faster than a local model on modest hardware
  • Responses can be copied to clipboard
