Quick Start Guide
Get Agent.md running in 5 minutes—from installation to your first working agent.
1. Installation
Prerequisites
- Python 3.13+
- API key for one LLM provider (or skip with Ollama)
Option A: One-line install (recommended)
Linux/macOS:
Windows (PowerShell):
This installs uv (if needed), agentmd, and runs the interactive setup wizard that configures your workspace, provider, and API key.
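If the one-line script is unavailable in your environment, an equivalent manual route, assuming agentmd is published on PyPI (an assumption worth checking; otherwise use the developer setup below), is:

```bash
# Assumes the agentmd package is on PyPI; otherwise see Option B
uv tool install agentmd
agentmd setup
```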
Option B: Developer setup
```bash
git clone https://github.com/z-fab/agentmd.git
cd agentmd

# Install with uv (recommended)
uv sync

# Install provider dependencies
uv pip install -e ".[all]"          # All providers
# OR choose one:
# uv pip install -e ".[openai]"     # OpenAI
# uv pip install -e ".[anthropic]"  # Anthropic
# uv pip install -e ".[ollama]"     # Ollama
```
Then run the setup wizard:
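The wizard is the `setup` subcommand listed in the CLI reference below:

```bash
agentmd setup
```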
Verify Installation
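A quick sanity check is to print the active configuration and list agents, both documented in the CLI reference below:

```bash
agentmd config   # show current configuration
agentmd list     # list all agents
```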
2. Configuration
Agent.md is zero-config by default — ~/.config/agentmd/config.yaml is auto-created on first run with sensible defaults. The setup wizard can also create it for you.
~/.config/agentmd/config.yaml — Application settings
~/agentmd/.env — API keys (secrets only)
Get a free API key:
- Google Gemini (free tier available)
- OpenAI (requires credit card)
- Anthropic (requires credit card)
- Ollama (fully local, no key needed)
Security
Never commit .env to git. It's in .gitignore by default.
Using secrets in prompts
Use ${VAR_NAME} in your agent's prompt body to inject .env values at runtime — no need to hardcode secrets. See Environment Variable Substitution.
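For example (the `WEATHER_API_KEY` name below is illustrative, not a key Agent.md requires):

```bash
# ~/agentmd/.env
WEATHER_API_KEY=abc123
```

Then reference it in the prompt body:

```markdown
Fetch the forecast using the API key ${WEATHER_API_KEY}.
```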
3. Create Your First Agent
The fastest way is with agentmd new:
If you have a provider configured, it will ask what the agent should do and generate the file using AI. Otherwise (or with --template), it walks you through an interactive questionnaire:
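For example:

```bash
agentmd new hello              # AI-assisted (asks what the agent should do)
agentmd new hello --template   # interactive questionnaire instead
```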
You can also create agents manually — just add an .md file to agents/:
```markdown
---
name: hello
description: First agent
---
You are a friendly assistant. Create a warm greeting message
and save it to a file called 'hello-output.txt'. Keep it
under 3 sentences and mention the date if possible.
```
What this means:
- YAML frontmatter (between ---) = agent configuration
- Markdown body = system prompt (what the agent does)
- No model needed — uses the default from config.yaml
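Conceptually, the loader separates the two parts at the `---` markers. Here is a minimal illustrative sketch, handling flat `key: value` pairs only; this is not Agent.md's actual parser, which reads full YAML (including the nested `model` section):

```python
def split_agent_file(text: str) -> tuple[dict, str]:
    """Split markdown-with-frontmatter into (config, prompt body).

    Illustrative only: real frontmatter parsing should use a YAML library.
    """
    # text looks like: "---\n<frontmatter>\n---\n<body>"
    _, frontmatter, body = text.split("---", 2)
    config = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config, body.strip()

agent = """---
name: hello
description: First agent
---
You are a friendly assistant."""

config, prompt = split_agent_file(agent)
print(config)  # {'name': 'hello', 'description': 'First agent'}
print(prompt)  # You are a friendly assistant.
```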
Override the default model
To use a specific model for an agent, add a model section:
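For example, reusing one of the provider/model pairs shown under Next Steps:

```yaml
---
name: hello
description: First agent
model:
  provider: anthropic
  name: claude-sonnet-4-5-20250929
---
```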
4. Run Your Agent
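Execute the agent once with `run` (an interactive picker appears if you omit the name):

```bash
agentmd run hello
```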
You'll see live output:
```text
▶ Running hello
google / gemini-2.5-flash
11:32:04 🤖 I'll create a warm greeting for you...
11:32:05 🔧 file_write → {'file_path': 'hello-output.txt', ...}
11:32:05 📎 file_write ← File written successfully
11:32:05 ✅ I've created a warm greeting and saved it to hello-output.txt!
✓ hello completed in 523ms
Tokens: 28 in / 87 out / 115 total
Execution #1
```
5. Or Chat with Your Agent
Want a conversation instead of a one-shot run? Use chat:
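Start a session by agent name:

```bash
agentmd chat hello
```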
This opens an interactive session where you type messages and the agent responds with full context of the conversation:
```text
Chat with hello
google / gemini-2.5-flash
Type /exit or Ctrl+C to end session

> Write a greeting in French
11:33:01 🤖 Bonjour! Que votre journée soit...
11:33:02 ✅ Done!

> Now save it to greeting-fr.txt
11:33:10 🔧 file_write → greeting-fr.txt
11:33:10 ✅ Saved!

> /exit
Session ended: 2 turns, 280 tokens (84 in / 196 out), 12.3s
Execution #2
```
6. Check the Output
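Print the file the agent created:

```bash
cat hello-output.txt
```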
Example:
```text
Greetings! Today is March 11, 2026, and I'm delighted to connect with you.
May your day be filled with curiosity and purpose!
```
7. View Execution History
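Use `logs` (add `-f` to follow new entries):

```bash
agentmd logs hello
```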
This shows a table with each run's execution ID, status, duration, tokens used, and more.
Next Steps
Try Different Providers
Override the default model per agent:
```yaml
# Use Claude
model:
  provider: anthropic
  name: claude-sonnet-4-5-20250929

# Use GPT-4
model:
  provider: openai
  name: gpt-4o-mini

# Use local Ollama
model:
  provider: ollama
  name: llama3
```
Add Scheduling
Make the agent run automatically:
Then start the runtime:
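```bash
agentmd start      # run the scheduler in the foreground
agentmd start -d   # run as a background daemon
```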
Use HTTP Requests
Fetch data from APIs:
```markdown
---
name: quote-fetcher
---
Fetch a random quote from https://zenquotes.io/api/random.
Parse the JSON and save the best quote to 'daily-quote.txt'.
```
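For reference, the parse-and-save step the agent performs is roughly equivalent to this stdlib sketch. The payload shape is an assumption (a JSON array of objects with `q` for quote and `a` for author); check the live API before relying on it:

```python
import json

# Sample payload shaped like an assumed zenquotes.io response.
payload = '[{"q": "Be yourself; everyone else is taken.", "a": "Oscar Wilde"}]'

data = json.loads(payload)
quote = f'"{data[0]["q"]}" - {data[0]["a"]}'

# Save the formatted quote, as the agent prompt asks.
with open("daily-quote.txt", "w", encoding="utf-8") as f:
    f.write(quote)

print(quote)
```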
Explore Ready-Made Agents
Copy and customize agents from our library of code examples.
Learn Core Concepts
CLI Commands Reference
- `agentmd new <name>` — Scaffold a new agent (AI-assisted or interactive)
- `agentmd run [agent]` — Execute single agent (interactive picker if omitted)
- `agentmd chat [agent]` — Interactive multi-turn chat session
- `agentmd start` — Start runtime with scheduler (`-d` for daemon mode)
- `agentmd list` — List all agents
- `agentmd logs <agent>` — View execution history (`-f` to follow)
- `agentmd validate [agent]` — Validate agent configuration
- `agentmd status` — Check if runtime is running
- `agentmd stop` — Stop background runtime
- `agentmd config` — Show current configuration
- `agentmd setup` — Interactive setup wizard
- `agentmd update` — Update to latest version
Troubleshooting
"API key not found"
- Check .env exists in your workspace
- Verify key name matches provider (e.g., GOOGLE_API_KEY)
- Run agentmd config to see which files are being loaded
"No agents found in workspace"
- Run agentmd config to check your workspace path
- Verify the agents/ directory exists and contains .md files
"Provider requires langchain-..."
- Install the provider extras: `pip install "agentmd[openai]"` (the quotes prevent shell glob expansion)
You've got this! You now understand:
- How to install Agent.md
- How to configure with config.yaml and .env
- How to create an agent file (with optional model override)
- How to execute agents and view results
Start building! 🚀