Mistral Vibe 2.0
Mistral Vibe Architecture Overview
Mistral Vibe is a command-line coding assistant powered by Mistral’s models, providing a conversational interface to interact with codebases through natural language.
Core Architecture Components
1. Entry Point & CLI Layer (vibe/cli/entrypoint.py, vibe/cli/cli.py)
- Purpose: Command-line interface and argument parsing
- Key Features:
- Argument parsing for interactive vs programmatic modes
- Trust folder system for security
- Session continuation/resumption
- Setup workflow for API keys
- Working directory management
2. Agent Loop (vibe/core/agent_loop.py)
- Purpose: Main conversation loop and orchestration
- Key Responsibilities:
- Manages conversation state and message history
- Handles LLM interactions (streaming and non-streaming)
- Tool execution lifecycle (approval, execution, result handling)
- Middleware pipeline for turn limits, price limits, auto-compaction
- Session management and logging
- Context compaction for long conversations
- Agent switching and configuration
3. Agents System (vibe/core/agents/)
- Purpose: Agent profile management and configuration
- Key Components:
- `AgentProfile`: Defines agent behavior (safety level, tool permissions, etc.)
- `AgentManager`: Manages the active agent and profile switching
- Built-in agents: `default`, `plan`, `accept-edits`, `auto-approve`, `explore`
- Agent types: `AGENT` (interactive) and `SUBAGENT` (background task worker)
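The profile/manager split described above can be sketched roughly as follows. This is an illustrative sketch, not the actual Vibe code: the field names (`auto_approve`, `allowed_tools`) and the `switch` method are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum


class AgentType(Enum):
    AGENT = "agent"        # interactive
    SUBAGENT = "subagent"  # background task worker


@dataclass
class AgentProfile:
    """Illustrative profile; the real AgentProfile has different fields."""
    name: str
    agent_type: AgentType = AgentType.AGENT
    auto_approve: bool = False
    allowed_tools: list[str] = field(default_factory=list)


class AgentManager:
    """Tracks which profile is active and validates switches."""

    def __init__(self, profiles: dict[str, AgentProfile], active: str = "default"):
        self.profiles = profiles
        self.active = active

    def switch(self, name: str) -> AgentProfile:
        if name not in self.profiles:
            raise KeyError(f"unknown agent: {name}")
        self.active = name
        return self.profiles[name]
```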
4. Tool System (vibe/core/tools/)
- Purpose: Tool discovery, management, and execution
- Key Components:
- `ToolManager`: Discovers and manages available tools
- `BaseTool`: Abstract base class for all tools
- Built-in tools:
- `read_file`, `write_file`, `search_replace`: File operations
- `bash`: Shell command execution
- `grep`: Code searching
- `todo`: Task management
- `ask_user_question`: Interactive user queries
- `task`: Subagent delegation
- MCP (Model Context Protocol) integration for external tools
5. Skills System (vibe/core/skills/)
- Purpose: Extensible functionality through reusable components
- Key Features:
- Skill discovery from multiple paths (global, local, custom)
- Pattern-based skill enabling/disabling
- Skill metadata parsing from `SKILL.md` files
- Follows the Agent Skills specification
6. Configuration (vibe/core/config.py)
- Purpose: Centralized configuration management
- Key Features:
- TOML-based configuration
- Multiple provider support (Mistral, generic LLM APIs)
- Model configuration and pricing
- Tool permissions and allowlists/denylists
- MCP server configuration
- Session logging settings
- Project context scanning configuration
7. LLM Backend (vibe/core/llm/)
- Purpose: LLM provider integration and message formatting
- Key Components:
- Backend factory for different providers
- Message formatting and tool schema generation
- Streaming and non-streaming completion support
- Token counting and usage tracking
8. Textual UI (vibe/cli/textual_ui/)
- Purpose: Interactive terminal interface
- Key Features:
- Rich terminal UI with Textual framework
- Message display and history
- Tool output viewing
- Todo list management
- Autocompletion for commands and file paths
- Theme support and customization
- External editor integration
9. Session Management (vibe/core/session/)
- Purpose: Persistent conversation state
- Key Features:
- Session logging and saving
- Session continuation/resumption
- Session migration for version compatibility
- Context compaction for long conversations
10. Middleware Pipeline (vibe/core/middleware.py)
- Purpose: Extensible conversation flow control
- Key Middleware:
- `TurnLimitMiddleware`: Limits conversation turns
- `PriceLimitMiddleware`: Enforces cost limits
- `AutoCompactMiddleware`: Automatically compacts context
- `ContextWarningMiddleware`: Warns about context size
- `PlanAgentMiddleware`: Special handling for the plan agent
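A before/after-turn pipeline of this kind can be sketched as below. The hook names (`before_turn`/`after_turn`) come from the hooks mentioned later under Architecture Patterns; the `StopConversation` exception and `run_pipeline` driver are assumptions for illustration, not Vibe's actual API.

```python
class StopConversation(Exception):
    """Raised by a middleware to end the conversation loop."""


class TurnLimitMiddleware:
    def __init__(self, max_turns: int):
        self.max_turns = max_turns

    def before_turn(self, state: dict) -> None:
        if state["turns"] >= self.max_turns:
            raise StopConversation("turn limit reached")

    def after_turn(self, state: dict) -> None:
        pass


class PriceLimitMiddleware:
    def __init__(self, max_cost: float):
        self.max_cost = max_cost

    def before_turn(self, state: dict) -> None:
        pass

    def after_turn(self, state: dict) -> None:
        if state["cost"] > self.max_cost:
            raise StopConversation("price limit reached")


def run_pipeline(middlewares, state, run_turn):
    """Run turns until any middleware stops the conversation."""
    try:
        while True:
            for m in middlewares:
                m.before_turn(state)
            run_turn(state)
            for m in middlewares:
                m.after_turn(state)
    except StopConversation:
        return state
```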
Key Features and Workflows
- Interactive Mode:
- Real-time conversation with the AI agent
- Tool execution with approval workflow
- Rich terminal UI with message history
- Autocompletion for commands and file paths
- Programmatic Mode:
- Non-interactive execution via the `--prompt` flag
- Auto-approve mode for scripting
- Multiple output formats (text, JSON, streaming)
- Turn and price limits
- Tool Execution:
- Approval workflow (manual, auto-approve, or per-tool)
- Allowlist/denylist filtering
- Streaming tool output
- Error handling and recovery
- Skills System:
- Extend functionality with reusable components
- Custom slash commands
- Pattern-based skill management
- Security Features:
- Trust folder system
- Tool permission levels (never, ask, always)
- Allowlist/denylist patterns
- Session isolation
Architecture Patterns
- Dependency Injection:
- Configuration is passed through callables to support dynamic updates
- Backend factory pattern for LLM providers
- Event-Driven Architecture:
- Conversation events (user messages, assistant responses, tool calls)
- Streaming events for real-time updates
- Middleware Pattern:
- Extensible pipeline for conversation flow control
- Before/after turn hooks
- Plugin System:
- Skills for extensible functionality
- MCP servers for external tools
- State Management:
- Immutable message history
- Session logging and continuation
- Context compaction for long conversations
Data Flow
- User Input → CLI Entry Point → Agent Loop
- Agent Loop → LLM Backend (for completions) → Message Formatting
- Tool Calls → Tool Manager → Tool Execution → Results
- Events → UI Updates → User Feedback Loop
- Session Data → Session Logger → Persistent Storage
This architecture provides a flexible, extensible foundation for building a powerful CLI coding assistant with support for multiple workflows, security features, and extensibility through skills and MCP servers.
Mistral Vibe Tool System Analysis
Mistral Vibe’s tool system provides a powerful, extensible framework for interacting with the filesystem, running commands, and performing various operations. Each tool is a self-contained unit with its own configuration, state, and security model.
Built-in Tools
1. read_file
Purpose: Read a UTF-8 file, returning content from a specific line range.
Input Signature:

```python
class ReadFileArgs(BaseModel):
    path: str
    offset: int = 0  # Line number to start reading from (0-indexed, inclusive)
    limit: int | None = None  # Maximum number of lines to read
```

Output Signature:

```python
class ReadFileResult(BaseModel):
    path: str
    content: str
    lines_read: int
    was_truncated: bool = False
```

Security Mechanisms:
1. Path Validation:
- Expands the user home directory (`~`)
- Resolves relative paths to absolute
- Validates the path exists and is a file (not a directory)
- Checks the path is within the project directory using `path.relative_to(Path.cwd().resolve())`
- Raises `ToolError` with a security message if the path is outside the project
2. Size Limits:
- Configurable `max_read_bytes` (default: 64,000 bytes)
- Tracks bytes read and stops at the limit
- Sets the `was_truncated` flag in the result
3. Allowlist/Denylist:
- Uses `fnmatch` for pattern matching
- Checks the file path against configured patterns
- Returns `ToolPermission.ALWAYS` or `ToolPermission.NEVER`
4. Error Handling:
- Catches `OSError` and wraps it in `ToolError`
- Validates input parameters (empty path, negative offset, invalid limit)
5. State Tracking:
- Maintains a history of recently read files (max 10)
- Helps prevent infinite loops
Prompt:
Use `read_file` to read the content of a file. It's designed to handle large files safely.
- By default, it reads from the beginning of the file.
- Use `offset` (line number) and `limit` (number of lines) to read specific parts or chunks of a file. This is efficient for exploring large files.
- The result includes `was_truncated: true` if the file content was cut short due to size limits.
**Strategy for large files:**
1. Call `read_file` with a `limit` (e.g., 1000 lines) to get the start of the file.
2. If `was_truncated` is true, you know the file is large.
3. To read the next chunk, call `read_file` again with an `offset`. For example, `offset=1000, limit=1000`.
This is more efficient than using `bash` with `cat` or `wc`.
2. write_file
Purpose: Create or overwrite a UTF-8 file.
Input Signature:

```python
class WriteFileArgs(BaseModel):
    path: str
    content: str
    overwrite: bool = False  # Must be true to overwrite existing files
```

Output Signature:

```python
class WriteFileResult(BaseModel):
    path: str
    bytes_written: int
    file_existed: bool
    content: str
```

Security Mechanisms:
1. Path Validation:
- Expands the user home directory
- Resolves relative paths to absolute
- Validates the path is within the project directory
- Raises `ToolError` if the path is outside the project
2. Overwrite Protection:
- Default `overwrite=False` prevents accidental overwrites
- Explicit `overwrite=True` required to replace existing files
- Validates file existence before writing
3. Size Limits:
- Configurable `max_write_bytes` (default: 64,000 bytes)
- Rejects content exceeding the limit
4. Directory Creation:
- Configurable `create_parent_dirs` (default: True)
- Creates parent directories if enabled
- Validates the parent directory exists if disabled
5. Allowlist/Denylist:
- Uses `fnmatch` for pattern matching
- Checks the file path against configured patterns
6. Error Handling:
- Validates non-empty path
- Catches `OSError` and wraps it in `ToolError`
- Validates content size before writing
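The combination of containment, size-limit, and overwrite checks can be sketched as follows. This is a hedged approximation, not Vibe's code: `safe_write` is a hypothetical name, and error types are illustrative stand-ins for `ToolError`.

```python
from pathlib import Path


def safe_write(path: str, content: str, overwrite: bool = False,
               max_write_bytes: int = 64_000, create_parent_dirs: bool = True) -> dict:
    """Write a UTF-8 file with the documented guardrails applied in order."""
    target = Path(path).expanduser().resolve()
    target.relative_to(Path.cwd().resolve())  # raises ValueError if outside project

    data = content.encode("utf-8")
    if len(data) > max_write_bytes:
        raise ValueError("content exceeds max_write_bytes")

    existed = target.exists()
    if existed and not overwrite:
        raise FileExistsError(f"{target} exists; pass overwrite=True")

    if create_parent_dirs:
        target.parent.mkdir(parents=True, exist_ok=True)
    elif not target.parent.is_dir():
        raise FileNotFoundError(f"parent directory {target.parent} does not exist")

    target.write_bytes(data)
    return {"path": str(target), "bytes_written": len(data), "file_existed": existed}
```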
Prompt:
Use `write_file` to write content to a file.
**Arguments:**
- `path`: The file path (relative or absolute)
- `content`: The content to write to the file
- `overwrite`: Must be set to `true` to overwrite an existing file (default: `false`)
**IMPORTANT SAFETY RULES:**
- By default, the tool will **fail if the file already exists** to prevent accidental data loss
- To **overwrite** an existing file, you **MUST** set `overwrite: true`
- To **create a new file**, just provide the `path` and `content` (overwrite defaults to false)
- If parent directories don't exist, they will be created automatically
**BEST PRACTICES:**
- **ALWAYS** use the `read_file` tool first before overwriting an existing file to understand its current contents
- **ALWAYS** prefer using `search_replace` to edit existing files rather than overwriting them completely
- **NEVER** write new files unless explicitly required - prefer modifying existing files
- **NEVER** proactively create documentation files (*.md) or README files unless explicitly requested
- **AVOID** using emojis in file content unless the user explicitly requests them
**Usage Examples:**
```python
# Create a new file (will error if file exists)
write_file(
path="src/new_module.py",
content="def hello():\n return 'Hello World'"
)
# Overwrite an existing file (must read it first!)
# First: read_file(path="src/existing.py")
# Then:
write_file(
path="src/existing.py",
content="# Updated content\ndef new_function():\n pass",
overwrite=True
)
```
**Remember:** For editing existing files, prefer `search_replace` over `write_file` to preserve unchanged portions and avoid accidental data loss.
3. search_replace
Purpose: Replace sections of files using SEARCH/REPLACE blocks.
Input Signature:

```python
class SearchReplaceArgs(BaseModel):
    file_path: str
    content: str  # Contains SEARCH/REPLACE blocks
```

Output Signature:

```python
class SearchReplaceResult(BaseModel):
    file: str
    blocks_applied: int
    lines_changed: int
    content: str
    warnings: list[str] = []
```

Security Mechanisms:
1. Path Validation:
- Validates the file exists and is a file (not a directory)
- Resolves relative paths to absolute
- Checks the path is within the project directory
2. Content Validation:
- Validates non-empty content
- Configurable `max_content_size` (default: 100,000 bytes)
- Parses SEARCH/REPLACE blocks with regex
- Validates block format and content
3. Backup Support:
- Configurable `create_backup` (default: False)
- Creates `.bak` files when enabled
4. Fuzzy Matching:
- Configurable `fuzzy_threshold` (default: 0.9)
- Provides context when the search text is not found
- Shows a unified diff of the closest match
5. Error Handling:
- Detailed error messages with context
- Shows line numbers and surrounding content
- Warns about multiple occurrences
- Handles Unicode decode errors
- Handles permission errors
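The regex-based block parsing and exactly-once matching described above can be sketched like this. It is a minimal illustration under stated assumptions: `apply_search_replace` is a hypothetical helper, and the real tool adds fuzzy matching, diffs, and backups on top.

```python
import re

# Matches the documented block format: 7 '<' + SEARCH, at least 5 '=', 7 '>' + REPLACE.
BLOCK_RE = re.compile(
    r"<<<<<<< SEARCH\n(.*?)\n={5,}\n(.*?)\n>>>>>>> REPLACE",
    re.DOTALL,
)


def apply_search_replace(text: str, blocks: str) -> tuple[str, int]:
    """Apply each SEARCH/REPLACE block in order; each search must match exactly once."""
    applied = 0
    for search, replace in BLOCK_RE.findall(blocks):
        occurrences = text.count(search)
        if occurrences != 1:
            raise ValueError(
                f"search text must appear exactly once, found {occurrences}: {search!r}"
            )
        text = text.replace(search, replace, 1)
        applied += 1
    return text, applied
```

Because blocks are applied in order, a later block sees the result of earlier ones, matching the behavior the prompt describes.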
Prompt
Use `search_replace` to make targeted changes to files using SEARCH/REPLACE blocks. This tool finds exact text matches and replaces them.
Arguments:
- `file_path`: The path to the file to modify
- `content`: The SEARCH/REPLACE blocks defining the changes
The content format is:
```
<<<<<<< SEARCH
[exact text to find in the file]
=======
[exact text to replace it with]
>>>>>>> REPLACE
```
You can include multiple SEARCH/REPLACE blocks to make multiple changes to the same file:
```
<<<<<<< SEARCH
def old_function():
return "old value"
=======
def new_function():
return "new value"
>>>>>>> REPLACE
<<<<<<< SEARCH
import os
=======
import os
import sys
>>>>>>> REPLACE
```
IMPORTANT:
- The SEARCH text must match EXACTLY (including whitespace, indentation, and line endings)
- The SEARCH text must appear exactly once in the file - if it appears multiple times, the tool will error
- Use at least 5 equals signs (=====) between SEARCH and REPLACE sections
- The tool will provide detailed error messages showing context if search text is not found
- Each search/replace block is applied in order, so later blocks see the results of earlier ones
- Be careful with escape sequences in string literals - use \n not \\n for newlines in code
4. bash
Purpose: Run one-off bash commands and capture output.
Input Signature:

```python
class BashArgs(BaseModel):
    command: str
    timeout: int | None = None  # Override default timeout
```

Output Signature:

```python
class BashResult(BaseModel):
    command: str
    stdout: str
    stderr: str
    returncode: int
```

Security Mechanisms:
1. Command Analysis:
- Uses tree-sitter to parse bash commands
- Extracts individual commands from compound statements
- Analyzes command structure for security
2. Allowlist/Denylist:
- Default allowlist: safe commands (echo, find, git, etc.)
- Default denylist: interactive shells, editors, debuggers
- Denylist for standalone interpreters (python, bash, etc.)
- Pattern matching using `startswith()`
3. Process Isolation:
- Creates a new process group (`start_new_session` on Unix)
- Sets environment variables to prevent interaction: `CI=true`, `NONINTERACTIVE=1`, `NO_TTY=1`, `TERM=dumb`, `DEBIAN_FRONTEND=noninteractive`
- Disables pagers (`PAGER=cat`, `GIT_PAGER=cat`)
4. Timeout Enforcement:
- Configurable default timeout (300 seconds)
- Uses `asyncio.wait_for()` to kill hanging processes
- Kills the process tree on timeout
- Different methods for Windows vs Unix
5. Output Limits:
- Configurable `max_output_bytes` (default: 16,000 bytes)
- Truncates stdout/stderr at the limit
6. Error Handling:
- Non-zero return codes raise `ToolError`
- Timeout errors are caught and wrapped
- Process cleanup in a finally block
- Encoding handling for Windows
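The isolation, environment, timeout, and truncation mechanisms above can be combined in a sketch like the following. `run_bash` is an illustrative stand-in, not the actual tool; the real implementation also does tree-sitter command analysis and Windows-specific cleanup.

```python
import asyncio
import os
import signal
import sys

# Environment variables documented above to suppress interactivity and pagers.
SAFE_ENV = {
    "CI": "true", "NONINTERACTIVE": "1", "NO_TTY": "1",
    "TERM": "dumb", "DEBIAN_FRONTEND": "noninteractive",
    "PAGER": "cat", "GIT_PAGER": "cat",
}


async def run_bash(command: str, timeout: float = 300.0,
                   max_output_bytes: int = 16_000):
    """Run a one-off shell command in its own process group with a hard timeout."""
    proc = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        env={**os.environ, **SAFE_ENV},
        start_new_session=(sys.platform != "win32"),  # new process group on Unix
    )
    try:
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout)
    except asyncio.TimeoutError:
        if sys.platform != "win32":
            os.killpg(proc.pid, signal.SIGKILL)  # kill the whole process group
        else:
            proc.kill()
        raise
    return (
        stdout[:max_output_bytes].decode(errors="replace"),
        stderr[:max_output_bytes].decode(errors="replace"),
        proc.returncode,
    )
```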
Prompt
Use the `bash` tool to run one-off shell commands.
**Key characteristics:**
- **Stateless**: Each command runs independently in a fresh environment
**Timeout:**
- The `timeout` argument controls how long the command can run before being killed
- When `timeout` is not specified (or set to `None`), the config default is used
- If a command is timing out, do not hesitate to increase the timeout using the `timeout` argument
**IMPORTANT: Use dedicated tools if available instead of these bash commands:**
**File Operations - DO NOT USE:**
- `cat filename` → Use `read_file(path="filename")`
- `head -n 20 filename` → Use `read_file(path="filename", limit=20)`
- `tail -n 20 filename` → Read with offset: `read_file(path="filename", offset=<line_number>, limit=20)`
- `sed -n '100,200p' filename` → Use `read_file(path="filename", offset=99, limit=101)`
- `less`, `more`, `vim`, `nano` → Use `read_file` with offset/limit for navigation
- `echo "content" > file` → Use `write_file(path="file", content="content")`
- `echo "content" >> file` → Read first, then `write_file` with overwrite=true
**Search Operations - DO NOT USE:**
- `grep -r "pattern" .` → Use `grep(pattern="pattern", path=".")`
- `find . -name "*.py"` → Use `bash("ls -la")` for current dir or `grep` with appropriate pattern
- `ag`, `ack`, `rg` commands → Use the `grep` tool
- `locate` → Use `grep` tool
**File Modification - DO NOT USE:**
- `sed -i 's/old/new/g' file` → Use `search_replace` tool
- `awk` for file editing → Use `search_replace` tool
- Any in-place file editing → Use `search_replace` tool
**APPROPRIATE bash uses:**
- System information: `pwd`, `whoami`, `date`, `uname -a`
- Directory listings: `ls -la`, `tree` (if available)
- Git operations: `git status`, `git log --oneline -10`, `git diff`
- Process info: `ps aux | grep process`, `top -n 1`
- Network checks: `ping -c 1 google.com`, `curl -I https://example.com`
- Package management: `pip list`, `npm list`
- Environment checks: `env | grep VAR`, `which python`
- File metadata: `stat filename`, `file filename`, `wc -l filename`
**Example: Reading a large file efficiently**
WRONG:
```bash
bash("cat large_file.txt")  # May hit size limits
bash("head -1000 large_file.txt")  # Inefficient
```
RIGHT:
```python
# First chunk
read_file(path="large_file.txt", limit=1000)
# If was_truncated=true, read next chunk
read_file(path="large_file.txt", offset=1000, limit=1000)
```
**Example: Searching for patterns**
WRONG:
```bash
bash("grep -r 'TODO' src/")  # Don't use bash for grep
bash("find . -type f -name '*.py' | xargs grep 'import'")  # Too complex
```
RIGHT:
```python
grep(pattern="TODO", path="src/")
grep(pattern="import", path=".")
```
**Remember:** Bash is best for quick system checks and git operations. For file operations, searching, and editing, always use the dedicated tools when they are available.
5. grep
Purpose: Recursively search files for a regex pattern.
Input Signature:

```python
class GrepArgs(BaseModel):
    pattern: str
    path: str = "."
    max_matches: int | None = None
    use_default_ignore: bool = True
```

Output Signature:

```python
class GrepResult(BaseModel):
    matches: str
    match_count: int
    was_truncated: bool = False
```

Security Mechanisms:
1. Backend Detection:
- Prefers ripgrep (`rg`) over grep
- Falls back to GNU grep if `rg` is not available
- Raises an error if neither is installed
2. Exclusion Patterns:
- Default exclusion list for common directories: `.venv/`, `venv/`, `.env/`, `env/`, `node_modules/`, `.git/`, `__pycache__/`
- Cache directories, build directories, IDE files
- Binary files, system files
- Loads additional patterns from `.vibeignore`
- Respects `.gitignore` and `.ignore` files
3. Output Limits:
- Configurable `max_output_bytes` (default: 64,000 bytes)
- Configurable `default_max_matches` (default: 100)
- Tracks truncation state
4. Timeout:
- Configurable `default_timeout` (default: 60 seconds)
- Kills the process on timeout
5. Path Validation:
- Validates the search path exists
- Expands relative paths
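The backend-detection and exclusion logic can be sketched as a command builder. The exact flags Vibe passes are not documented here, so the flags below (`--glob`, `--exclude-dir`, match limits) are plausible assumptions rather than the tool's real invocation.

```python
import shutil

# Subset of the default exclusions documented above.
DEFAULT_EXCLUDES = [".venv", "venv", ".env", "env", "node_modules", ".git", "__pycache__"]


def build_search_command(pattern: str, path: str = ".", max_matches: int = 100) -> list[str]:
    """Prefer ripgrep, fall back to GNU grep, error if neither exists."""
    if shutil.which("rg"):
        cmd = ["rg", "--line-number", "--max-count", str(max_matches)]
        for d in DEFAULT_EXCLUDES:
            cmd += ["--glob", f"!{d}/**"]  # rg also honors .gitignore by default
        return cmd + [pattern, path]
    if shutil.which("grep"):
        cmd = ["grep", "-rn", f"--max-count={max_matches}"]
        for d in DEFAULT_EXCLUDES:
            cmd.append(f"--exclude-dir={d}")
        return cmd + [pattern, path]
    raise RuntimeError("neither ripgrep nor grep is installed")
```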
Prompt
Use `grep` to recursively search for a regular expression pattern in files.
- It's very fast and automatically ignores files that you should not read like .pyc files, .venv directories, etc.
- Use this to find where functions are defined, how variables are used, or to locate specific error messages.
6. todo
Purpose: Manage a simple task list.
Input Signature:

```python
class TodoArgs(BaseModel):
    action: str  # "read" or "write"
    todos: list[TodoItem] | None = None
```

Output Signature:

```python
class TodoResult(BaseModel):
    message: str
    todos: list[TodoItem]
    total_count: int
```

Security Mechanisms:
1. State Isolation:
- Maintains separate state per tool instance
- No external filesystem access
- No command execution
2. Limit Enforcement:
- Configurable `max_todos` (default: 100)
- Validates the todo count on write
3. Data Validation:
- Validates unique IDs
- Validates status and priority values
- Pydantic model validation
4. Permission:
- Default permission: `ToolPermission.ALWAYS`
- No sensitive operations
Prompt
Use the `todo` tool to manage a simple task list. This tool helps you track tasks and their progress.
## How it works
- **Reading:** Use `action: "read"` to view the current todo list
- **Writing:** Use `action: "write"` with the complete `todos` list to update. You must provide the ENTIRE list - this replaces everything.
## Todo Structure
Each todo item has:
- `id`: A unique string identifier (e.g., "1", "2", "task-a")
- `content`: The task description
- `status`: One of: "pending", "in_progress", "completed", "cancelled"
- `priority`: One of: "high", "medium", "low"
## When to Use This Tool
**Use proactively for:**
- Complex multi-step tasks (3+ distinct steps)
- Non-trivial tasks requiring careful planning
- Multiple tasks provided by the user (numbered or comma-separated)
- Tracking progress on ongoing work
- After receiving new instructions - immediately capture requirements
- When starting work - mark task as in_progress BEFORE beginning
- After completing work - mark as completed and add any follow-up tasks discovered
**Skip this tool for:**
- Single, straightforward tasks
- Trivial operations (< 3 simple steps)
- Purely conversational or informational requests
- Tasks that provide no organizational benefit
## Task Management Best Practices
1. **Status Management:**
- Only ONE task should be `in_progress` at a time
- Mark tasks `in_progress` BEFORE starting work on them
- Mark tasks `completed` IMMEDIATELY after finishing
- Keep tasks `in_progress` if blocked or encountering errors
2. **Task Completion Rules:**
- ONLY mark as `completed` when FULLY accomplished
- Never mark complete if tests are failing, implementation is partial, or errors are unresolved
- When blocked, create a new task describing what needs resolution
3. **Task Organization:**
- Create specific, actionable items
- Break complex tasks into manageable steps
- Use clear, descriptive task names
- Remove irrelevant tasks entirely (don't just mark cancelled)
## Examples
**Example 1: Reading todos**
```json
{
  "action": "read"
}
```
**Example 2: Initial task creation (user requests multiple features)**
```json
{
"action": "write",
"todos": [
{
"id": "1",
"content": "Add dark mode toggle to settings",
"status": "pending",
"priority": "high"
},
{
"id": "2",
"content": "Implement theme context/state management",
"status": "pending",
"priority": "high"
},
{
"id": "3",
"content": "Update components for theme switching",
"status": "pending",
"priority": "medium"
},
{
"id": "4",
"content": "Run tests and verify build",
"status": "pending",
"priority": "medium"
}
]
}
```
**Example 3: Starting work (marking one task in_progress)**
```json
{
"action": "write",
"todos": [
{
"id": "1",
"content": "Add dark mode toggle to settings",
"status": "in_progress",
"priority": "high"
},
{
"id": "2",
"content": "Implement theme context/state management",
"status": "pending",
"priority": "high"
},
{
"id": "3",
"content": "Update components for theme switching",
"status": "pending",
"priority": "medium"
},
{
"id": "4",
"content": "Run tests and verify build",
"status": "pending",
"priority": "medium"
}
]
}
```
**Example 4: Completing task and adding discovered subtask**
```json
{
"action": "write",
"todos": [
{
"id": "1",
"content": "Add dark mode toggle to settings",
"status": "completed",
"priority": "high"
},
{
"id": "2",
"content": "Implement theme context/state management",
"status": "in_progress",
"priority": "high"
},
{
"id": "3",
"content": "Update components for theme switching",
"status": "pending",
"priority": "medium"
},
{
"id": "4",
"content": "Fix TypeScript errors in theme types",
"status": "pending",
"priority": "high"
},
{
"id": "5",
"content": "Run tests and verify build",
"status": "pending",
"priority": "medium"
}
]
}
```
**Example 5: Handling blockers (keeping task in_progress)**
```json
{
"action": "write",
"todos": [
{
"id": "1",
"content": "Deploy to production",
"status": "in_progress",
"priority": "high"
},
{
"id": "2",
"content": "BLOCKER: Fix failing deployment pipeline",
"status": "pending",
"priority": "high"
},
{
"id": "3",
"content": "Update documentation",
"status": "pending",
"priority": "low"
}
]
}
```
## Common Scenarios
**Multi-file refactoring:** Create todos for each file that needs updating
**Performance optimization:** List specific bottlenecks as individual tasks
**Bug fixing:** Track reproduction, diagnosis, fix, and verification as separate tasks
**Feature implementation:** Break down into UI, logic, tests, and documentation tasks
Remember: When writing, you must include ALL todos you want to keep. Any todo not in the list will be removed. Be proactive with task management to demonstrate thoroughness and ensure all requirements are completed successfully.
7. ask_user_question
Purpose: Ask the user one or more questions and wait for responses.
Input Signature:

```python
class AskUserQuestionArgs(BaseModel):
    questions: list[Question]  # 1-4 questions
```

Output Signature:

```python
class AskUserQuestionResult(BaseModel):
    answers: list[Answer]
    cancelled: bool = False
```

Security Mechanisms:
1. Context Validation:
- Requires an `InvokeContext` with `user_input_callback`
- Fails if not in an interactive UI
- Prevents use in non-interactive contexts
2. Input Validation:
- Validates question count (1-4)
- Validates option count (2-4 per question)
- Validates header length (max 12 characters)
- Pydantic model validation
3. No External Access:
- No filesystem operations
- No command execution
- Pure UI interaction
4. Permission:
- Default permission: `ToolPermission.ALWAYS`
- Safe, read-only operation
Prompt: See vibe/core/tools/builtins/prompts/ask_user_question.md
8. task
Purpose: Delegate work to a subagent for independent execution.
Input Signature:

```python
class TaskArgs(BaseModel):
    task: str
    agent: str = "explore"  # Must be a subagent
```

Output Signature:

```python
class TaskResult(BaseModel):
    response: str
    turns_used: int
    completed: bool
```

Security Mechanisms:
1. Agent Type Validation:
- Validates the agent exists
- Checks `agent_type == AgentType.SUBAGENT`
- Prevents recursive spawning of regular agents
- Prevents spawning interactive agents
2. Isolation:
- Creates a separate `AgentLoop` instance
- Disables session logging for subagents
- Separate configuration and state
3. Resource Limits:
- Tracks turns used
- Detects interruption/completion
- Limits output accumulation
4. Permission:
- Default permission: `ToolPermission.ASK`
- Requires explicit approval
Prompt (`ask_user_question`)
Use `ask_user_question` to gather information from the user when you need clarification, want to validate assumptions, or need help making a decision. **Don't hesitate to use this tool** - it's better to ask than to guess wrong.
## When to Use
- **Clarifying requirements**: Ambiguous instructions, unclear scope
- **Technical decisions**: Architecture choices, library selection, tradeoffs
- **Preference gathering**: UI style, naming conventions, approach options
- **Validation**: Confirming understanding before starting significant work
- **Multiple valid paths**: When several approaches could work and you want user input
## Question Structure
Each question has these fields:
- `question`: The full question text (be specific and clear)
- `header`: A short label displayed as a chip (max 12 characters, e.g., "Auth", "Database", "Approach")
- `options`: 2-4 choices (an "Other" option is automatically added for free text)
- `multi_select`: Set to `true` if user can pick multiple options (default: `false`)
### Options Structure
Each option has:
- `label`: Short display text (1-5 words)
- `description`: Brief explanation of what this choice means or its implications
## Examples
**Single question with recommended option:**
```json
{
"questions": [{
"question": "Which authentication method should we use?",
"header": "Auth",
"options": [
{"label": "JWT tokens (Recommended)", "description": "Stateless, scalable, works well with APIs"},
{"label": "Session cookies", "description": "Traditional approach, requires session storage"},
{"label": "OAuth 2.0", "description": "Third-party auth, more complex setup"}
],
"multi_select": false
}]
}
```
**Multiple questions (displayed as tabs):**
```json
{
"questions": [
{
"question": "Which database should we use?",
"header": "Database",
"options": [
{"label": "PostgreSQL", "description": "Relational, ACID compliant"},
{"label": "MongoDB", "description": "Document store, flexible schema"}
],
"multi_select": false
},
{
"question": "Which features should be included in v1?",
"header": "Features",
"options": [
{"label": "User auth", "description": "Login, signup, password reset"},
{"label": "Search", "description": "Full-text search across content"},
{"label": "Export", "description": "CSV and PDF export"}
],
"multi_select": true
}
]
}
```
## Key Constraints
- **Header max length**: 12 characters (keeps UI clean)
- **Options count**: 2-4 per question (plus automatic "Other")
- **Questions count**: 1-4 per call
- **Label length**: Keep to 1-5 words for readability
## Tips
1. **Put recommended option first** and add "(Recommended)" to its label
2. **Use descriptive headers** that categorize the question type
3. **Keep descriptions concise** but informative about tradeoffs
4. **Use multi_select** when choices aren't mutually exclusive (e.g., features to include)
5. **Ask early** - it's better to clarify before starting than to redo work
Base Tool Security Framework
BaseTool Class
All tools inherit from BaseTool which provides:
- Type Safety:
- Generic types for arguments, results, config, and state
- Pydantic model validation
- Type extraction from annotations
- Configuration:
- `BaseToolConfig` with permission model
- `ToolPermission` enum (ALWAYS, NEVER, ASK)
- Allowlist/denylist patterns
- State Management:
- `BaseToolState` for persistent tool state
- Separate state per tool instance
ToolPermission System
Three permission levels:
- ALWAYS: Tool executes without approval
- NEVER: Tool is permanently disabled
- ASK: User must approve each execution
Permission Check Flow:
1. The tool calls `check_allowlist_denylist(args)`
2. If it returns ALWAYS or NEVER, that permission is used
3. Otherwise, the configured permission is used
4. The agent loop applies the final approval logic
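That resolution order can be condensed into a sketch like this. `resolve_permission` is a hypothetical helper illustrating the precedence (deny, then allow, then the configured default), not the actual `check_allowlist_denylist` implementation.

```python
from enum import Enum
from fnmatch import fnmatch


class ToolPermission(Enum):
    ALWAYS = "always"  # execute without approval
    NEVER = "never"    # permanently disabled
    ASK = "ask"        # require per-call approval


def resolve_permission(path: str, allowlist: list[str], denylist: list[str],
                       default: ToolPermission) -> ToolPermission:
    """Denylist wins, then allowlist, then the tool's configured permission."""
    if any(fnmatch(path, pat) for pat in denylist):
        return ToolPermission.NEVER
    if any(fnmatch(path, pat) for pat in allowlist):
        return ToolPermission.ALWAYS
    return default
```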
InvokeContext
Provides execution context:
- `tool_call_id`: Unique identifier for this invocation
- `approval_callback`: Function to request user approval
- `agent_manager`: Access to the agent system
- `user_input_callback`: Function to ask the user questions
Security Best Practices
Path Security
- Always resolve relative paths to absolute
- Validate paths are within project directory
- Use `path.relative_to()` for containment checks
- Never allow `..` to escape the project root
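A minimal containment check following these rules might look like the sketch below; `ensure_inside_project` is an illustrative helper, and the real tools raise `ToolError` rather than `PermissionError`.

```python
from pathlib import Path


def ensure_inside_project(user_path: str, project_root: Path) -> Path:
    """Resolve a user-supplied path and reject anything outside the project root."""
    resolved = Path(user_path).expanduser().resolve()  # collapses any '..' segments
    try:
        resolved.relative_to(project_root.resolve())
    except ValueError:
        raise PermissionError(f"{user_path!r} is outside the project directory") from None
    return resolved
```

Because `resolve()` runs before `relative_to()`, a path like `project/../secrets` is normalized first, so `..` cannot be used to escape the root.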
Command Security
- Parse and analyze command structure
- Use allowlist/denylist patterns
- Isolate processes (new session/group)
- Set safe environment variables
- Enforce timeouts
- Limit output size
File Security
- Validate file existence and type
- Enforce size limits
- Require explicit overwrite for existing files
- Create backups when appropriate
- Handle encoding errors gracefully
Process Security
- Kill process trees on timeout/interrupt
- Clean up resources in finally blocks
- Handle platform differences (Windows vs Unix)
- Use asyncio for process management
Data Security
- Validate all inputs with Pydantic models
- Enforce limits on content size
- Sanitize output (truncate when needed)
- Never expose sensitive information in errors
Summary
Mistral Vibe’s tool system implements a comprehensive security model:
- Defense in Depth: Multiple layers of validation and checking
- Least Privilege: Tools have minimal required permissions
- Explicit Approval: Most tools require user approval
- Input Validation: Strict validation of all parameters
- Output Limits: Prevents resource exhaustion
- Process Isolation: Safe command execution
- Path Containment: Prevents directory traversal
- Error Handling: Graceful degradation and recovery
Each tool follows the same security patterns while adapting them to the specific requirements of its operation.
Mistral Vibe Tool System: Comprehensive Security Analysis
I’ve completed an in-depth exploration of Mistral Vibe’s tool system, documenting each built-in tool, its security mechanisms, and the overall architecture. Here’s a comprehensive summary:
Built-in Tools Overview
Mistral Vibe provides 8 built-in tools, each with sophisticated security mechanisms:
- read_file - Safe file reading with byte limits and path validation
- write_file - Protected file writing with overwrite confirmation
- search_replace - Precise file editing with backup support
- bash - Secure command execution with allowlist/denylist and process isolation
- grep - Safe code searching with exclusion patterns
- todo - Task management with state isolation
- ask_user_question - Interactive user queries with context validation
- task - Subagent delegation with agent type validation
Key Security Features
1. Path Security
- All tools validate that paths stay within the project directory using path.relative_to()
- Prevents directory traversal attacks
- Expands and resolves paths consistently

2. Permission System
- Three-level permission model: ALWAYS, NEVER, ASK
- Allowlist/denylist patterns using fnmatch
- Configurable per-tool permissions
- Explicit user approval for most operations

3. Resource Limits
- Byte limits for file operations (64KB-100KB)
- Output limits for commands (16KB-64KB)
- Match limits for searches (100 matches)
- Timeout enforcement (60-300 seconds)

4. Process Isolation
- Bash commands run in isolated process groups
- Safe environment variables (CI=true, NONINTERACTIVE=1)
- Process tree killing on timeout
- Platform-specific handling (Windows vs Unix)

5. Input Validation
- Pydantic model validation for all parameters
- Strict type checking and constraints
- Parameter range validation
- Empty/null checking

6. Error Handling
- Graceful error recovery
- Detailed error messages with context
- Wrapping of low-level errors
- Prevention of information leakage
Security Architecture
The tool system implements a defense-in-depth approach:
- BaseTool Framework - Provides type safety, configuration, and state management
- Permission System - Three-tier approval model with allowlist/denylist
- InvokeContext - Execution context with approval callbacks
- ToolManager - Centralized tool discovery and instantiation
- Agent Loop - Orchestrates tool execution with middleware
Most Important Security Aspects
Path Containment: The most critical security feature is path validation. All file operations check that paths remain within the project directory using path.relative_to(Path.cwd().resolve()), preventing directory traversal attacks.
Process Isolation: For command execution, the system uses tree-sitter to parse commands, sets safe environment variables, creates isolated process groups, and enforces strict timeouts to prevent command injection and resource exhaustion.
Permission Model: The three-level permission system (ALWAYS/NEVER/ASK) with allowlist/denylist support provides fine-grained control over tool execution, requiring explicit user approval for most operations.
Resource Limits: Byte limits, output limits, and timeouts prevent resource exhaustion attacks, ensuring the system remains responsive even with malicious input.
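Putting the permission model and the allowlist/denylist together, the approval decision can be sketched as a small function (a simplified sketch using fnmatch, not the actual BaseTool.check_allowlist_denylist implementation):

```python
from enum import Enum
from fnmatch import fnmatch

class ToolPermission(str, Enum):
    ALWAYS = "always"
    NEVER = "never"
    ASK = "ask"

def decide(command: str, permission: ToolPermission,
           allowlist: list[str], denylist: list[str]) -> str:
    """Return 'allow', 'deny', or 'ask' for a candidate tool invocation."""
    # Denylist wins over everything else
    if any(fnmatch(command, pat) for pat in denylist):
        return "deny"
    # Allowlist bypasses the interactive prompt
    if any(fnmatch(command, pat) for pat in allowlist):
        return "allow"
    # Otherwise fall back to the configured permission level
    if permission is ToolPermission.ALWAYS:
        return "allow"
    if permission is ToolPermission.NEVER:
        return "deny"
    return "ask"
```

Checking the denylist first means a pattern like `rm *` blocks a command even when a broader allowlist pattern would have matched it.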
The documentation file mistral-vibe-tool-system-analysis.md provides a detailed, comprehensive analysis of each tool’s security mechanisms, input/output signatures, and implementation details.
System Prompt Construction
The system prompt is constructed by the get_universal_system_prompt() function in vibe/core/system_prompt.py. It combines multiple components:
- Base System Prompt (vibe/core/prompts/cli.md)
- General guidelines for the AI assistant
- Tool usage best practices
- Code modification guidelines
- Tone and style requirements
- Optional Components (controlled by config flags)
- Commit signature instructions
- Model information
- Platform-specific information (OS, shell)
- Tool-specific prompts
- Available skills
- Available subagents
- Project Context (if enabled)
- Directory structure with limits
- Git status and recent commits
- Project documentation
Tool Integration
Tools are integrated in two ways:
1. Tool Prompts in System Message
Each tool has an associated .md file in vibe/core/tools/builtins/prompts/ that provides detailed usage guidelines. These are loaded when config.include_prompt_detail is True.
Tool Prompts Include:
- read_file.md - File reading best practices
- write_file.md - File writing safety rules
- search_replace.md - Search/replace block format
- bash.md - Command execution guidelines
- grep.md - Search operation best practices
- todo.md - Task management guidelines
- ask_user_question.md - User interaction patterns
- task.md - Subagent delegation best practices
2. Tool Schemas in LLM API Calls
The APIToolFormatHandler class provides tool schemas to the LLM:
def get_available_tools(self, tool_manager: ToolManager) -> list[AvailableTool]:
    return [
        AvailableTool(
            function=AvailableFunction(
                name=tool_class.get_name(),
                description=tool_class.description,
                parameters=tool_class.get_parameters(),
            )
        )
        for tool_class in tool_manager.available_tools.values()
    ]

Tool Schema Components:
- Name: Derived from the class name (e.g., ReadFile → read_file)
- Description: The description class variable
- Parameters: JSON schema generated from the Pydantic model
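The name derivation (ReadFile → read_file) can be sketched as a CamelCase-to-snake_case conversion (an illustrative helper; the exact rules used by get_name() may differ):

```python
import re

def to_snake_case(class_name: str) -> str:
    """Convert a CamelCase class name to a snake_case tool name."""
    # Insert an underscore before each uppercase letter that follows
    # a lowercase letter or digit, then lowercase the whole string
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", class_name).lower()
```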
Key Integration Points
- get_universal_system_prompt() - Builds the complete system prompt
- BaseTool.get_tool_prompt() - Loads tool-specific documentation
- APIToolFormatHandler.get_available_tools() - Provides tool schemas to the LLM
- ToolManager - Discovers and manages available tools
Tool Execution Flow
- LLM receives system prompt + tool schemas
- LLM generates tool calls in API format
- parse_message() extracts tool calls
- resolve_tool_calls() validates tool names and arguments
- The agent loop executes tools with permission checks
- Results are formatted and added to conversation
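The validation step in this flow can be sketched as a lookup against the available-tools mapping (a simplified illustration; resolve_tool_calls in Vibe does more, including argument validation against each tool's Pydantic schema):

```python
import json

def resolve_tool_call(name: str, raw_args: str, available: dict) -> tuple:
    """Validate a tool name and parse its JSON-encoded arguments."""
    if name not in available:
        raise ValueError(f"Unknown tool: {name}")
    try:
        args = json.loads(raw_args)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Invalid arguments for {name}: {exc}")
    return available[name], args
```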
Multi-Layered Architecture
The system uses a multi-layered approach:
- Guidance Layer: System prompt with best practices
- Documentation Layer: Tool-specific prompts for complex operations
- API Layer: Tool schemas for function calling
- Context Layer: Project-specific information
This separation allows the LLM to understand both the “how” (guidelines) and the “what” (API structure) of tool usage, resulting in more accurate and safe tool execution.
The documentation file system-prompt-and-tool-integration.md provides a complete, detailed explanation of the construction process and integration mechanisms.
The ToolManager discovers tools through a multi-step process:
1. Search Path Discovery
The manager first computes search paths from configuration:
- Default tool directory (DEFAULT_TOOL_DIR.path)
- User-configured tool paths (config.tool_paths)
- Local tools directory (discovered from the current working directory)
- Global tools directory (GLOBAL_TOOLS_DIR.path)
These paths are deduplicated to avoid scanning the same location multiple times.
2. Tool Class Discovery
The _iter_tool_classes() method recursively scans all search paths:
@staticmethod
def _iter_tool_classes(search_paths: list[Path]) -> Iterator[type[BaseTool]]:
    """Iterate over all search_paths to find tool classes."""
    for base in search_paths:
        if not base.is_dir() and base.name.endswith(".py"):
            if tools := ToolManager._load_tools_from_file(base):
                for tool in tools:
                    yield tool
        for path in base.rglob("*.py"):
            if tools := ToolManager._load_tools_from_file(path):
                for tool in tools:
                    yield tool

This method:
- Handles both directories and individual .py files
- Recursively scans directories with rglob("*.py")
- Uses _load_tools_from_file() to extract tool classes from each Python file
3. Tool Loading from Files
The _load_tools_from_file() method:
- Skips files starting with _ (private modules)
- Computes a module name (canonical for vibe package files, hash-based for external files)
- Dynamically imports the module using importlib.util
- Inspects the module's namespace for classes that:
  - Are subclasses of BaseTool
  - Are not abstract
  - Are not the BaseTool class itself
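The dynamic import step can be sketched with importlib.util (a minimal illustration of the mechanism, not the actual _load_tools_from_file code; the module name here is arbitrary):

```python
import importlib.util
import inspect
from pathlib import Path

def load_classes_from_file(path: Path, module_name: str) -> list[type]:
    """Import a Python file by path and return the classes it defines."""
    spec = importlib.util.spec_from_file_location(module_name, path)
    if spec is None or spec.loader is None:
        return []
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    # Keep only classes defined in this module, not imported ones
    return [
        obj for _, obj in inspect.getmembers(module, inspect.isclass)
        if obj.__module__ == module_name
    ]
```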
4. MCP Integration
Additionally, the manager integrates MCP (Model Context Protocol) tools:
- HTTP-based MCP servers
- Stdio-based MCP servers
- Each MCP server is queried for available tools
- Remote tools are wrapped in proxy classes that inherit from BaseTool
5. Filtering and Availability
After discovery, tools are filtered based on configuration:
- enabled_tools: Only tools matching these patterns are available
- disabled_tools: Tools matching these patterns are excluded
- If neither is specified, all discovered tools are available
The discovered tools are stored in the _available dictionary with tool names as keys and tool classes as values.
This comprehensive discovery mechanism allows the ToolManager to find tools from multiple sources: built-in tools, user-provided tools, local project tools, and MCP servers.
Default Search Paths
The default search paths for tool discovery are:
- DEFAULT_TOOL_DIR: VIBE_ROOT / "core" / "tools" / "builtins"
  - The built-in tools directory inside the vibe package
  - Contains the 8 built-in tools: ask_user_question, bash, grep, read_file, search_replace, write_file, task, ui
- GLOBAL_TOOLS_DIR: VIBE_HOME / "tools"
  - Located at ~/.vibe/tools (or $VIBE_HOME/tools if the VIBE_HOME env var is set)
  - Users can place custom tools here for global availability
- Local Tools Directory: Path.cwd() / ".vibe" / "tools"
  - Discovered from the current working directory
  - Only available if the current directory is trusted
  - Allows project-specific tools
- User-configured paths: Additional paths can be specified in the configuration
BaseTool Class Explanation
The BaseTool class is an abstract base class that defines the interface for all tools in the system. It uses generic type parameters to ensure type safety:
BaseTool[
    ToolArgs: BaseModel,        # Input arguments model
    ToolResult: BaseModel,      # Output result model
    ToolConfig: BaseToolConfig, # Configuration model
    ToolState: BaseToolState,   # State model
](ABC)

Key Components:
1. InvokeContext
- Contains execution context: tool_call_id, approval_callback, agent_manager, user_input_callback
- Passed to tools during invocation for access to system services

2. ToolPermission
- Enum with three values: ALWAYS, NEVER, ASK
- Controls whether a tool can be used without user approval
- Default is ASK (requires user approval)

3. BaseToolConfig
- Base configuration class with:
  - permission: ToolPermission level
  - allowlist: Patterns that automatically allow execution
  - denylist: Patterns that automatically deny execution
- Supports extra fields via extra="allow"

4. BaseToolState
- Base state class for maintaining tool-specific state
- Uses Pydantic BaseModel with strict validation
5. Core Methods:
- run(args, ctx): Abstract method that tools must implement
  - Takes input arguments and an optional context
  - Returns an AsyncGenerator yielding ToolStreamEvent or ToolResult
  - This is where the actual tool logic goes
- invoke(ctx, **raw): Validates arguments and runs the tool
  - Validates input using Pydantic
  - Handles errors gracefully
  - Yields results from the run method
- from_config(config): Factory method to create tool instances
  - Creates the initial state and returns a configured tool instance
6. Utility Methods:
- get_tool_prompt(): Loads the tool's prompt file (from the prompts/ subdirectory)
- get_name(): Returns the tool name in snake_case format
- get_parameters(): Returns the JSON schema for the tool's arguments
- check_allowlist_denylist(): Checks if arguments match the allowlist/denylist (can be overridden)
7. Type Extraction:
- _get_tool_config_class(): Extracts the ToolConfig type from the class hierarchy
- _get_tool_state_class(): Extracts the ToolState type from the class hierarchy
- _get_tool_args_results(): Extracts the ToolArgs and ToolResult types from the run method signature
- _extract_result_type(): Extracts ToolResult from the AsyncGenerator return type
Design Patterns:
- Generic Base Class: Uses Python generics to ensure type safety while allowing customization
- Pydantic Validation: All arguments, results, config, and state use Pydantic models for validation
- Async Streaming: Tools yield results asynchronously for better UX
- Dependency Injection: Configuration and state are injected at creation time
- Type Introspection: Extracts types from method signatures for runtime validation
The BaseTool class provides a solid foundation for creating type-safe, validated tools with consistent behavior across the system.
MCP Server Discovery and Integration
1. Configuration Structure
MCP servers are configured in the VibeConfig with three types:
MCPServer = Annotated[
    MCPHttp | MCPStreamableHttp | MCPStdio,
    Field(discriminator="transport")
]

Each MCP server has:
- name: Short alias used to prefix tool names
- prompt: Optional usage hint
- startup_timeout_sec: Timeout for server initialization (default: 10s)
- tool_timeout_sec: Timeout for tool execution (default: 60s)

Transport Types:
- HTTP: Remote servers accessed via HTTP
  - url: Base URL
  - headers: Additional HTTP headers
  - api_key_env: Environment variable for the API token
  - api_key_header: HTTP header for the token
  - api_key_format: Format string for the header value
- Streamable HTTP: Similar to HTTP but with streaming support
- Same fields as HTTP
- Stdio: Local servers run as subprocesses
  - command: Command to run (string or list)
  - args: Additional arguments
  - env: Environment variables
2. Discovery Process
The discovery happens in ToolManager._integrate_mcp():
async def _integrate_mcp_async(self) -> None:
    try:
        http_count = 0
        stdio_count = 0
        for srv in self._config.mcp_servers:
            match srv.transport:
                case "http" | "streamable-http":
                    http_count += await self._register_http_server(srv)
                case "stdio":
                    stdio_count += await self._register_stdio_server(srv)
                case _:
                    logger.warning("Unsupported MCP transport: %r", srv.transport)
        logger.info(
            "MCP integration registered %d tools (http=%d, stdio=%d)",
            http_count + stdio_count,
            http_count,
            stdio_count,
        )
    except Exception as exc:
        logger.warning("Failed to integrate MCP tools: %s", exc)

For HTTP/Streamable HTTP servers:
1. Connect to the server URL with optional headers
2. Initialize the MCP client session
3. Call session.list_tools() to get available tools
4. For each remote tool, create a proxy class using create_mcp_http_proxy_tool_class()
For Stdio servers:
1. Start the subprocess with the configured command
2. Initialize the MCP client session
3. Call session.list_tools() to get available tools
4. For each remote tool, create a proxy class using create_mcp_stdio_proxy_tool_class()
3. Proxy Tool Creation
Two factory functions create proxy tool classes:
create_mcp_http_proxy_tool_class()
Creates a BaseTool subclass that wraps an HTTP MCP tool:
def create_mcp_http_proxy_tool_class(
    *,
    url: str,
    remote: RemoteTool,
    alias: str | None = None,
    server_hint: str | None = None,
    headers: dict[str, str] | None = None,
    startup_timeout_sec: float | None = None,
    tool_timeout_sec: float | None = None,
) -> type[BaseTool[_OpenArgs, MCPToolResult, BaseToolConfig, BaseToolState]]:

Key features:
- Generates a unique tool name: {alias}_{remote.name} or {host}_{port}_{remote.name}
- Stores connection parameters as class variables
- Implements run() to call the remote tool via HTTP
- Returns MCPToolResult containing the tool execution result
- Provides custom display methods for UI integration
create_mcp_stdio_proxy_tool_class()
Creates a BaseTool subclass that wraps a stdio MCP tool:
def create_mcp_stdio_proxy_tool_class(
    *,
    command: list[str],
    remote: RemoteTool,
    alias: str | None = None,
    server_hint: str | None = None,
    env: dict[str, str] | None = None,
    startup_timeout_sec: float | None = None,
    tool_timeout_sec: float | None = None,
) -> type[BaseTool[_OpenArgs, MCPToolResult, BaseToolConfig, BaseToolState]]:

Key features:
- Generates a unique tool name using a command hash for stability
- Stores the command and environment as class variables
- Implements run() to call the remote tool via stdio
- Returns MCPToolResult containing the tool execution result
- Provides custom display methods for UI integration
4. Tool Execution Flow
When an MCP proxy tool is invoked:
- Argument Validation: Input is validated against the remote tool’s schema
- Connection Setup: HTTP connection or stdio subprocess is established
- Tool Call: The remote tool is called with the provided arguments
- Result Parsing: The MCP response is parsed into MCPToolResult
- Result Yielding: The result is yielded as an async generator
5. Result Handling
The MCPToolResult model contains:
- ok: Boolean indicating success
- server: Server identifier (URL or command)
- tool: Tool name
- text: Text content (if any)
- structured: Structured content (if any)
Results are displayed in the UI with custom get_call_display() and get_result_display() methods.
6. Error Handling
- Connection timeouts are handled via startup_timeout_sec and tool_timeout_sec
- Failed calls raise ToolError with descriptive messages
- Discovery failures are logged but don’t crash the system
- Individual tool registration failures are logged and skipped
7. Integration Points
MCP tools are fully integrated into the tool system:
- Discovered tools are added to ToolManager._available
- They can be enabled/disabled via the enabled_tools/disabled_tools config
- They respect the same permission model as built-in tools
- They use the same configuration and state management
- They appear in tool listings and autocompletion
This architecture allows seamless integration of external MCP servers as first-class tools in the Vibe system, with proper type safety, error handling, and UI integration.
Usage of extra_headers Parameter in LLM Calls
The extra_headers parameter is used in three places in the agent loop to pass additional HTTP headers to the LLM provider:
1. Non-Streaming Completion (_chat() method, line 575-580)
result = await backend.complete(
    model=active_model,
    messages=self.messages,
    temperature=active_model.temperature,
    tools=available_tools,
    tool_choice=tool_choice,
    extra_headers={
        "user-agent": get_user_agent(provider.backend),
        "x-affinity": self.session_id,
    },
    max_tokens=max_tokens,
)

2. Streaming Completion (_chat_streaming() method, lines 619-624)
async for chunk in backend.complete_streaming(
    model=active_model,
    messages=self.messages,
    temperature=active_model.temperature,
    tools=available_tools,
    tool_choice=tool_choice,
    extra_headers={
        "user-agent": get_user_agent(provider.backend),
        "x-affinity": self.session_id,
    },
    max_tokens=max_tokens,
):

3. Token Counting (_compact_session() method, line 843)
actual_context_tokens = await backend.count_tokens(
    model=active_model,
    messages=self.messages,
    tools=self.format_handler.get_available_tools(self.tool_manager),
    extra_headers={"user-agent": get_user_agent(provider.backend)},
)

Current Headers
The system currently sends two headers:
1. user-agent: Identifies the client as “Mistral-Vibe” with version info
2. x-affinity: Contains the session ID for request correlation
How to Extend for Web Search
To enable web search capabilities, additional headers could be added to extra_headers. For example:
extra_headers={
    "user-agent": get_user_agent(provider.backend),
    "x-affinity": self.session_id,
    "X-Web-Search": "enabled",  # Provider-specific web search header
    "X-Web-Search-Model": "web-search-preview",  # Specific model for web search
}

Implementation Approach
The architecture allows web search to be enabled by:
1. Adding provider-specific headers through the extra_headers parameter
2. Passing these headers to the backend (GenericBackend or MistralBackend)
3. The backend including them in the HTTP request to the provider
4. The provider handling the web search internally and returning results
This mechanism provides a clean way to enable provider-specific capabilities like web search without modifying the core tool system.
Mistral vibe concepts
1. Agents
- Main entities that interact with users
- Have profiles defining behavior, safety level, and tool permissions
- Built-in agents: default, plan, accept-edits, auto-approve, explore
- Safety levels: SAFE, NEUTRAL, DESTRUCTIVE, YOLO
2. Subagents
- Specialized agents for task delegation
- Run independently to prevent context overload
- Invoked using the task tool
- Can be created by setting agent_type = "subagent"
3. Skills
- Reusable components that extend functionality
- Defined in directories with SKILL.md files
- Support metadata like name, description, and allowed tools
- Discovered from multiple paths (global, local, custom)
4. Relationship Between Components
- Agents define which tools are available and their permissions
- Skills can add new tools or modify behavior (experimental)
- Tools are the actual implementations that perform actions
- Subagents are specialized agents for delegation
5. Personalization Files
Users can create these files to customize Mistral Vibe:
- Agent Configuration Files (~/.vibe/agents/*.toml)
- Skill Definition Files (~/.vibe/skills/*/SKILL.md)
- Tool Definition Files (~/.vibe/tools/*.py)
- System Prompt Files (~/.vibe/prompts/*.md)
- Main Configuration File (~/.vibe/config.toml)
6. Configuration
- Supports pattern matching (exact names, globs, regex)
- Configuration inheritance: defaults → global config → project config → agent overrides
- Tool permissions: always, never, ask
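The inheritance chain (defaults → global config → project config → agent overrides) can be sketched as a sequence of recursive dict merges where later layers win (an illustration of the principle, not Vibe's actual merge code):

```python
def merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override values win."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

def effective_config(defaults: dict, global_cfg: dict,
                     project_cfg: dict, agent_overrides: dict) -> dict:
    """Apply the four layers in order, later layers overriding earlier ones."""
    cfg = defaults
    for layer in (global_cfg, project_cfg, agent_overrides):
        cfg = merge(cfg, layer)
    return cfg
```

The recursive merge matters for nested tables like [tools.bash]: an agent override of one tool's permission should not wipe out the other tool settings inherited from the project config.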
The document provides step-by-step guides for creating custom agents, skills, and tools, along with complete examples.
Files Users Can Create for Personalization
1. Agent Configuration Files (.toml)
- Location: ~/.vibe/agents/ or .vibe/agents/ (project-specific)
- Purpose: Define custom agent profiles with specific tool permissions
- Format: TOML with agent metadata and overrides
2. Skill Definition Files (SKILL.md)
- Location: ~/.vibe/skills/*/SKILL.md or .vibe/skills/*/SKILL.md
- Purpose: Define reusable skills with metadata
- Format: Markdown with YAML frontmatter
3. Tool Definition Files (.py)
- Location: ~/.vibe/tools/ or .vibe/tools/ (project-specific)
- Purpose: Create custom tools by extending BaseTool
- Format: Python files with async methods
4. System Prompt Files (.md)
- Location: ~/.vibe/prompts/
- Format: Markdown files
5. Main Configuration File (config.toml)
- Location: ~/.vibe/config.toml or .vibe/config.toml
- Format: TOML
6. Documentation Files (Optional)
While not required by the system, users can create:
- AGENTS.md - Documentation for custom agents
- SKILLS.md - Documentation for custom skills
- TOOLS.md - Documentation for custom tools
- README.md - Project-specific documentation
These documentation files would be in the project’s .vibe/ directory and serve as reference material for team members or future maintainers.
The comprehensive document I created (mistral-vibe-agents-skills-personalization.md) explains all of this in detail with examples and step-by-step guides.
Mistral Vibe: Agents, Skills, Subagents, and Personalization
This document explains the core concepts of agents, skills, subagents in Mistral Vibe, how they relate to each other and to tools, and how users can personalize their Vibe experience.
Agents
Agents are the main entities that interact with users and execute tasks. Each agent has a profile that defines its behavior, safety level, and tool permissions.
Agent Profile Structure
An agent profile consists of:
- Name: Unique identifier for the agent
- Display Name: Human-readable name
- Description: What the agent does
- Safety Level: One of SAFE, NEUTRAL, DESTRUCTIVE, or YOLO
- Agent Type: AGENT (primary) or SUBAGENT (for delegation)
- Overrides: Configuration overrides (tools, permissions, etc.)
Built-in Agents
Mistral Vibe comes with several built-in agents:
- default - Standard agent requiring approval for tool executions
- plan - Read-only agent for exploration and planning (auto-approves safe tools)
- accept-edits - Auto-approves file edits only
- auto-approve - Auto-approves all tool executions (use with caution)
- explore - Read-only subagent for codebase exploration
Agent Safety Levels
- SAFE: Read-only operations, no destructive actions
- NEUTRAL: Default level, requires approval for most actions
- DESTRUCTIVE: Can modify files but with restrictions
- YOLO: No restrictions, auto-approves everything
Using Agents
vibe --agent plan
vibe --agent auto-approve

Subagents
Subagents are specialized agents designed for task delegation. They run independently and can perform work without user interaction, preventing context overload.
Key Characteristics
- Run in parallel with the main agent
- Have their own configuration and tool permissions
- Can be invoked using the task tool
- Useful for long-running or specialized tasks
Example: Delegating to a Subagent
> Can you explore the codebase structure while I work on something else?
🤖 I'll use the task tool to delegate this to the explore subagent.
> task(task="Analyze the project structure and architecture", agent="explore")
Creating Custom Subagents
To create a custom subagent, add agent_type = "subagent" to your agent configuration:
# ~/.vibe/agents/my-subagent.toml
name = "my-subagent"
display_name = "My Subagent"
description = "Specialized subagent for my tasks"
safety = "safe"
agent_type = "subagent"
[tools.read_file]
permission = "always"
[tools.grep]
permission = "always"

Skills
Skills are reusable components that extend Vibe’s functionality. They can add new tools, slash commands, and specialized behaviors.
Skill Structure
Skills are defined in directories with a SKILL.md file containing YAML frontmatter:
---
name: code-review
description: Perform automated code reviews
license: MIT
compatibility: Python 3.12+
user-invocable: true
allowed-tools:
- read_file
- grep
- ask_user_question
---
# Code Review Skill
This skill helps analyze code quality and suggest improvements.

Skill Metadata Fields
- name: Skill identifier (lowercase, hyphens only)
- description: What the skill does
- license: License name or reference
- compatibility: Environment requirements
- metadata: Arbitrary key-value pairs
- allowed-tools: Pre-approved tools (experimental)
- user-invocable: Whether the skill appears in slash command menu
Skill Discovery Paths
Vibe discovers skills from: 1. Global skills directory: ~/.vibe/skills/ 2. Local project skills: .vibe/skills/ in your project 3. Custom paths: Configured in config.toml
Managing Skills
# Enable specific skills
enabled_skills = ["code-review", "test-*"]
# Disable specific skills
disabled_skills = ["experimental-*"]

Relationship Between Agents, Skills, and Tools
The Architecture
┌───────────────────────────────────────────────────────────────┐
│ User Interface │
└───────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────────────────────────────┐
│ Agent Manager │
│ - Manages agent profiles (built-in + custom) │
│ - Handles agent switching and configuration │
└───────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────────────────────────────┐
│ Skill Manager │
│ - Discovers and loads skills from multiple paths │
│ - Manages skill enable/disable patterns │
└───────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────────────────────────────┐
│ Tool Manager │
│ - Discovers built-in and custom tools │
│ - Integrates MCP servers as tools │
│ - Manages tool configuration and permissions │
└───────────────────────────────────────────────────────────────┘
│
▼
┌───────────────────────────────────────────────────────────────┐
│ LLM Backend │
│ - Executes agent conversations │
│ - Handles tool calls and responses │
└───────────────────────────────────────────────────────────────┘
How They Work Together
- Agent defines which tools are available and their permissions
- Skills can add new tools or modify behavior (experimental)
- Tools are the actual implementations that perform actions
- Subagents are specialized agents that can be delegated to
Tool Permissions
Each tool can have different permission levels:
- always: Auto-approved, no user confirmation
- never: Disabled for this agent
- ask: Requires user approval (default)
Example agent configuration with tool permissions:
[tools.write_file]
permission = "always"
[tools.bash]
permission = "ask"
[tools.search_replace]
permission = "never"

Personalization Files
Users can personalize Mistral Vibe by creating files in specific directories:
1. Agent Configuration Files
Location: ~/.vibe/agents/ or .vibe/agents/ (project-specific)
Format: TOML files with .toml extension
Example: ~/.vibe/agents/my-agent.toml
name = "my-agent"
display_name = "My Custom Agent"
description = "Agent configured for my specific needs"
safety = "neutral"
# Override global configuration
active_model = "devstral-2"
system_prompt_id = "cli"
# Tool-specific configuration
[tools.bash]
permission = "always"
[tools.write_file]
permission = "ask"
[tools.read_file]
permission = "always"

2. Skill Definition Files
Location: ~/.vibe/skills/ or .vibe/skills/ (project-specific)
Format: Directories with SKILL.md file
Example: ~/.vibe/skills/my-skill/SKILL.md
---
name: my-skill
description: My custom skill for specialized tasks
license: MIT
compatibility: Python 3.12+
user-invocable: true
allowed-tools:
- read_file
- grep
---
# My Skill Documentation
This skill provides custom functionality for my workflow.

3. Tool Definition Files
Location: ~/.vibe/tools/ or .vibe/tools/ (project-specific)
Format: Python files with BaseTool subclasses
Example: ~/.vibe/tools/my_tool.py
from pydantic import BaseModel

from vibe.core.tools.base import BaseTool, BaseToolConfig, BaseToolState, InvokeContext


# Sketch aligned with the BaseTool interface described above:
# four generic parameters, a description class variable, and an
# async-generator run(args, ctx) that yields the result model.
class MyToolArgs(BaseModel):
    my_option: str = "default"


class MyToolResult(BaseModel):
    message: str


class MyToolConfig(BaseToolConfig):
    pass


class MyTool(BaseTool[MyToolArgs, MyToolResult, MyToolConfig, BaseToolState]):
    description = "My custom tool that does something useful"

    async def run(self, args: MyToolArgs, ctx: InvokeContext | None = None):
        """Run the tool with the given option."""
        yield MyToolResult(message=f"Tool executed with option: {args.my_option}")

4. System Prompt Files
Location: ~/.vibe/prompts/
Format: Markdown files with .md extension
Example: ~/.vibe/prompts/my-prompt.md
# Custom System Prompt
You are a helpful coding assistant...

Then reference it in config:

system_prompt_id = "my-prompt"

5. Configuration File
Location: ~/.vibe/config.toml or .vibe/config.toml (project-specific)
Format: TOML
Example:
active_model = "devstral-2"
textual_theme = "terminal"
auto_approve = false
# Agent paths
agent_paths = ["/path/to/custom/agents"]
# Skill paths
skill_paths = ["/path/to/custom/skills"]
# Tool paths
tool_paths = ["/path/to/custom/tools"]
# Enable/disable specific agents
enabled_agents = ["default", "plan"]
disabled_agents = ["auto-approve"]
# Enable/disable specific skills
enabled_skills = ["code-review"]
disabled_skills = ["experimental-*"]
# Enable/disable specific tools
enabled_tools = ["read_file", "grep", "bash"]
disabled_tools = ["write_file", "search_replace"]
# MCP server configuration
[[mcp_servers]]
name = "my_server"
transport = "http"
url = "http://localhost:8000"

Creating Custom Agents
Step-by-Step Guide
1. Create a new TOML file in ~/.vibe/agents/ or .vibe/agents/
2. Define the agent profile with a name, description, and safety level
3. Configure tool permissions as needed
4. Use the agent with the --agent flag
Example: Creating a Review Agent
# ~/.vibe/agents/reviewer.toml
name = "reviewer"
display_name = "Code Reviewer"
description = "Specialized agent for code reviews"
safety = "safe"
# Only allow read operations
[tools.read_file]
permission = "always"
[tools.grep]
permission = "always"
[tools.bash]
permission = "never"
[tools.write_file]
permission = "never"
[tools.search_replace]
permission = "never"

Using the Custom Agent

vibe --agent reviewer

Creating Custom Skills
Step-by-Step Guide
1. Create a directory for your skill in ~/.vibe/skills/ or .vibe/skills/
2. Create a SKILL.md file with YAML frontmatter
3. Document the skill in Markdown format
4. Enable the skill in your configuration (if needed)
Example: Creating a Documentation Skill
# ~/.vibe/skills/documentation/SKILL.md
---
name: documentation
description: Generate and maintain project documentation
license: MIT
compatibility: Python 3.12+
user-invocable: true
allowed-tools:
- read_file
- grep
- write_file
---
# Documentation Skill
This skill helps generate and maintain project documentation by analyzing
code structure and creating comprehensive docs.
## Features
- Analyze code structure
- Generate API documentation
- Create README files
- Update documentation based on code changes
Creating Custom Tools
Step-by-Step Guide
- Create a Python file in ~/.vibe/tools/ or .vibe/tools/
- Define a class that extends BaseTool
- Implement the run method asynchronously
- Configure tool permissions in your agent profile
Example: Creating a Custom Tool
# ~/.vibe/tools/project_stats.py
from pathlib import Path

from vibe.core.tools.base import BaseTool, BaseToolConfig


class ProjectStatsConfig(BaseToolConfig):
    include_hidden: bool = False
    max_depth: int = 3


class ProjectStats(BaseTool[ProjectStatsConfig]):
    @classmethod
    def get_name(cls) -> str:
        return "project_stats"

    @classmethod
    def get_description(cls) -> str:
        return "Generate statistics about the project structure"

    async def run(
        self,
        include_hidden: bool = False,
        max_depth: int = 3,
    ) -> str:
        """Generate project statistics."""
        base_path = Path.cwd()
        # Count files and directories, skipping hidden entries unless requested
        files = 0
        dirs = 0
        for path in base_path.rglob("*"):
            if path.name.startswith(".") and not include_hidden:
                continue
            if path.is_file():
                files += 1
            elif path.is_dir():
                dirs += 1
        return f"Project Statistics:\n- Files: {files}\n- Directories: {dirs}"
Using the Custom Tool
Once created, the tool will be automatically discovered and available to agents:
> project_stats()
Configuration Overview
Configuration File Structure
The main configuration file (config.toml) controls:
- Global Settings: Model, theme, behavior
- Agent Management: Paths, enable/disable patterns
- Skill Management: Paths, enable/disable patterns
- Tool Management: Paths, enable/disable patterns, permissions
- MCP Server Configuration: HTTP and stdio servers
- Session Management: Logging, save directory
Configuration Inheritance
Configuration follows this inheritance order (later overrides earlier):
- Default values (built into Vibe)
- Global config (~/.vibe/config.toml)
- Project config (.vibe/config.toml)
- Agent-specific overrides (from agent profile)
Pattern Matching
Vibe supports three types of patterns for enabling/disabling agents, skills, and tools:
- Exact names: "default" matches only “default”
- Glob patterns: "test-*" matches “test-1”, “test-2”, etc.
- Regex patterns: "re:^serena_.*$" matches names against the given regular expression
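The three pattern styles above can be modeled with the standard library: the "re:" prefix selects regex matching, and everything else goes through glob matching, which also covers exact names. This is a sketch of the assumed semantics, not Vibe's actual matcher:

```python
import fnmatch
import re


def pattern_matches(pattern: str, name: str) -> bool:
    """Sketch of the assumed matching semantics (not Vibe's real code)."""
    if pattern.startswith("re:"):
        # Strip the "re:" prefix and require a full-string regex match.
        return re.fullmatch(pattern[3:], name) is not None
    # fnmatch covers both exact names and glob wildcards like "test-*".
    return fnmatch.fnmatch(name, pattern)
```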
Summary
- Agents are the main entities that interact with users
- Subagents are specialized agents for task delegation
- Skills extend functionality through reusable components
- Tools are the actual implementations that perform actions
- All can be customized through configuration files and Python code
- Configuration controls which agents, skills, and tools are available
This architecture provides a flexible and extensible system for personalizing Mistral Vibe to your specific workflow and needs.
Prompt Personalization Options in Mistral Vibe
1. Built-in Prompts
- System Prompts: cli (default), tests (minimal)
- Utility Prompts: compact (conversation summaries), project_context (project info), dangerous_directory (security warnings)
2. System Prompt Customization
- Create custom prompts in ~/.vibe/prompts/ or .vibe/prompts/ (project-specific)
- Reference them via system_prompt_id in config
- Discovery order: project → global → built-in
3. Utility Prompt Customization
- Project Context Template: Can be customized with placeholders like {structure}, {abs_path}, {git_status}
- Dangerous Directory Prompt: Can be customized for security warnings
4. Project Context Configuration
- Configurable parameters: max_chars, max_depth, max_files, max_dirs_per_level, timeout_seconds, default_commit_count, max_doc_bytes
- Automatic documentation loading from README files
- Git status integration with branch info and recent commits
5. Session-Specific Prompt Control
- Include/exclude layers:
  - include_commit_signature - Git commit instructions
  - include_model_info - Model name
  - include_prompt_detail - OS/shell info and tool prompts
  - include_project_context - Project structure and git status
6. Advanced Customization
- Agent-specific prompts - Different prompts for different agents
- Custom tool prompts - Override get_tool_prompt() in custom tools
- Dynamic content - Directory structure, git status, skills, subagents
7. Best Practices
- Start with built-in prompts and modify incrementally
- Keep prompts concise (token limits)
- Test changes incrementally
- Use multiple prompts for different use cases
- Document your custom prompts
The document provides complete examples and step-by-step guides for creating custom prompts, configuring project context, and controlling prompt layers for different scenarios.
Mistral Vibe: Complete Prompt Personalization Guide
- Understanding the Prompt System
- Built-in Prompts
- System Prompt Customization
- Utility Prompt Customization
- Project Context Customization
- Session-Specific Prompt Control
- Advanced Prompt Customization
- Prompt Variables and Dynamic Content
- Best Practices for Prompt Customization
Understanding the Prompt System
Mistral Vibe uses a multi-layered prompt architecture that combines:
- Base System Prompt - Defines the agent’s core behavior and guidelines
- Utility Prompts - Contextual information like project structure and git status
- Tool Prompts - Specific instructions for each available tool
- Skill Prompts - Information about available skills
- Subagent Prompts - Information about available subagents
The final prompt sent to the LLM is constructed by combining these layers based on configuration settings.
Prompt Construction Flow
┌─────────────────────────────────────────────────────────────┐
│ System Prompt (Base) │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Optional Layers │
│ - Commit Signature (if enabled) │
│ - Model Info (if enabled) │
│ - OS System Prompt (if enabled) │
│ - Tool Prompts (if enabled) │
│ - Skill Prompts (if enabled) │
│ - Subagent Prompts (if enabled) │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Project Context (if enabled) │
│ - Directory Structure │
│ - Git Status │
│ - Project Documentation (if available) │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Final Prompt Sent to LLM │
└─────────────────────────────────────────────────────────────┘
Built-in Prompts
Mistral Vibe comes with several built-in prompts:
System Prompts
- cli - The default system prompt for CLI interactions (see vibe/core/prompts/cli.md)
  - Focuses on tool usage, code modifications, and professional objectivity
  - Designed for coding assistance in a terminal environment
- tests - A minimal test prompt (see vibe/core/prompts/tests.md)
  - Simple prompt for testing purposes
Utility Prompts
- compact - Prompt for creating conversation summaries (see vibe/core/prompts/compact.md)
  - Used when the conversation needs to be compacted
  - Requires a specific structure with 7 sections
- project_context - Template for project context information (see vibe/core/prompts/project_context.md)
  - Displays directory structure and git status
  - Used to provide project-aware context
- dangerous_directory - Warning for dangerous directories (see vibe/core/prompts/dangerous_directory.md)
  - Shown when scanning is disabled for security reasons
System Prompt Customization
The system prompt is the foundation of the agent’s behavior and can be fully customized.
Method 1: Using Built-in Prompts
Simply set the system_prompt_id in your configuration:
# ~/.vibe/config.toml
system_prompt_id = "cli" # or "tests"
Method 2: Creating Custom System Prompts
You can create your own system prompts by placing markdown files in the prompts directory.
Step-by-Step Guide
Create a prompts directory (if it doesn’t exist):
mkdir -p ~/.vibe/prompts
Create a markdown file with your custom prompt:
nano ~/.vibe/prompts/my-custom-prompt.md
Define your prompt in the markdown file:
# My Custom Prompt
You are a helpful coding assistant specialized in [your domain].
## Guidelines
- Always be helpful and friendly
- Focus on [specific requirements]
- Avoid [certain behaviors]
- Prefer [specific approaches]
## Tool Usage
- Use tools to fulfill requests
- Always check parameters before using tools
- Match the existing code style
Activate your custom prompt in config:
# ~/.vibe/config.toml
system_prompt_id = "my-custom-prompt"
Project-Specific Custom Prompts
You can also create prompts specific to a project:
mkdir -p .vibe/prompts
nano .vibe/prompts/project-prompt.md
Then reference it:
# .vibe/config.toml
system_prompt_id = "project-prompt"
Prompt Discovery Order
Vibe looks for custom prompts in this order:
1. Project-specific prompts: .vibe/prompts/[name].md
2. Global prompts: ~/.vibe/prompts/[name].md
3. Built-in prompts: vibe/core/prompts/[name].md
Utility Prompt Customization
Utility prompts are used for specific contextual information and can also be customized.
Customizing Project Context
The project context template (project_context.md) can be customized:
Create a custom template:
nano ~/.vibe/prompts/project_context.md
Modify the template with your preferred format:
directoryStructure: Below is a snapshot of {abs_path} at the start of the conversation.{large_repo_warning}
{structure}
Absolute path: {abs_path}
gitStatus: This is the git status at the start of the conversation. {git_status}
Additional Context:
- Current timestamp: {timestamp}
- User: {username}
Note: The template uses placeholders that will be replaced at runtime:
- {large_repo_warning} - Warning if repository is large
- {structure} - Directory structure
- {abs_path} - Absolute path
- {git_status} - Git status information
Customizing Dangerous Directory Prompt
The dangerous directory prompt can also be customized:
nano ~/.vibe/prompts/dangerous_directory.md
Example custom version:
⚠️ Security Restriction Active ⚠️
Project context scanning has been disabled because {reason}.
This is for your security. You can still use tools to explore the project:
- Use `read_file` to read specific files
- Use `bash` to run commands
- Use `grep` to search for patterns
Absolute path: {abs_path}
Project Context Customization
The project context provides information about the current project and can be extensively customized through configuration.
Configuration Options
# ~/.vibe/config.toml
[project_context]
# Maximum characters in directory structure
max_chars = 40000
# Default number of commits to show in git status
default_commit_count = 5
# Maximum size of documentation files to load
max_doc_bytes = 32768
# Buffer for truncation warnings
truncation_buffer = 1000
# Maximum depth for directory traversal
max_depth = 3
# Maximum number of files to show
max_files = 1000
# Maximum directories per level
max_dirs_per_level = 20
# Timeout for git operations (seconds)
timeout_seconds = 2.0
Customizing Project Documentation
Vibe automatically loads documentation from these files in the project root: README.md, README.rst, README.txt, README.markdown, README, readme.md, readme.rst, readme.txt, readme.markdown, readme.
The documentation is loaded with a size limit (default: 32KB) and displayed in the prompt.
Disabling Project Context
You can disable project context entirely:
# ~/.vibe/config.toml
include_project_context = false
Session-Specific Prompt Control
You can control which prompt layers are included in each session through configuration.
Available Configuration Options
# ~/.vibe/config.toml
# Include commit signature instructions
include_commit_signature = true
# Include model information
include_model_info = true
# Include OS and shell information
include_prompt_detail = true
# Include project context
include_project_context = true
Example: Minimal Prompt Configuration
# ~/.vibe/config.toml
include_commit_signature = false
include_model_info = false
include_prompt_detail = false
include_project_context = false
This would result in just the base system prompt being sent to the LLM.
Example: Full Context Configuration
# ~/.vibe/config.toml
include_commit_signature = true
include_model_info = true
include_prompt_detail = true
include_project_context = true
This would include all available context layers.
Advanced Prompt Customization
Agent-Specific Prompts
You can override the system prompt for specific agents:
# ~/.vibe/agents/reviewer.toml
name = "reviewer"
display_name = "Code Reviewer"
system_prompt_id = "review-prompt"
[tools.read_file]
permission = "always"
Then create the custom prompt:
nano ~/.vibe/prompts/review-prompt.md
Dynamic Prompt Selection
While Vibe doesn’t support dynamic prompt selection based on runtime conditions out of the box, you can:
- Create multiple prompts and switch between them using different agents
- Use environment variables in your prompts (though they won’t be expanded automatically)
- Use MCP servers to provide dynamic context through tools
Custom Tool Prompts
Each tool can have its own prompt. To customize tool prompts:
- Create a custom tool in ~/.vibe/tools/
- Override the get_tool_prompt() method:
# ~/.vibe/tools/my_tool.py
from vibe.core.tools.base import BaseTool, BaseToolConfig


class MyTool(BaseTool[BaseToolConfig]):
    @classmethod
    def get_name(cls) -> str:
        return "my_tool"

    @classmethod
    def get_description(cls) -> str:
        return "My custom tool"

    @classmethod
    def get_tool_prompt(cls) -> str:
        """Custom prompt for this tool."""
        return """
# My Tool Usage
When using my_tool, always:
- Provide the my_option parameter
- Use lowercase values
- Check the result before proceeding
"""

    async def run(self) -> str:
        return "Tool executed"
Prompt Variables and Dynamic Content
The prompt system supports several dynamic variables that are replaced at runtime:
System Prompt Variables
- {config.active_model} - The currently active model name
- {config.textual_theme} - The current UI theme
- {platform} - Operating system platform
- {shell} - Default shell
Project Context Variables
- {large_repo_warning} - Warning if repository is large
- {structure} - Directory structure tree
- {abs_path} - Absolute path to project root
- {git_status} - Git repository status
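These placeholders can be thought of as plain string-formatting fields filled in at runtime. A toy rendering, with all values invented for illustration:

```python
# Hypothetical values; the real structure and git status are generated at runtime.
template = (
    "directoryStructure: snapshot of {abs_path}{large_repo_warning}\n"
    "{structure}\n"
    "gitStatus: {git_status}"
)

rendered = template.format(
    large_repo_warning="",
    structure="src/\n  main.py",
    abs_path="/home/user/project",
    git_status="On branch main, working tree clean",
)
```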
Git Status Variables
- Current branch - Name of the current git branch
- Main branch - Name of the main branch (main or master)
- Status - Repository status (clean or number of changes)
- Recent commits - List of recent commits
Dynamic Content Generation
Some content is generated dynamically:
- Directory Structure - Built from actual filesystem
- Git Status - Fetched from git commands
- Available Skills - List of loaded skills
- Available Subagents - List of available subagents
- Tool Prompts - Prompts from each available tool
Best Practices for Prompt Customization
1. Start with the Built-in Prompt
Begin by using the built-in cli prompt and make incremental changes:
cp vibe/core/prompts/cli.md ~/.vibe/prompts/my-prompt.md
nano ~/.vibe/prompts/my-prompt.md
2. Keep Prompts Concise
- The LLM has token limits, so keep prompts focused
- Remove guidelines that aren’t essential for your use case
- Avoid redundant information
3. Be Specific About Requirements
If you have specific requirements, state them clearly:
## Domain-Specific Guidelines
- Always use TypeScript for frontend code
- Prefer functional components over class components
- Use the @mui/material library for UI components
4. Test Incrementally
Make small changes and test them before making larger modifications:
system_prompt_id = "test-v1"
Test, then:
system_prompt_id = "test-v2"
5. Document Your Custom Prompts
Add comments to explain why certain guidelines exist:
## Code Style Guidelines
- Use 2-space indentation (company standard)
- Prefer const over let (immutability best practice)
6. Consider Context Length
Remember that the prompt is combined with:
- Conversation history
- Previous tool outputs
- Current user message
Keep the total under the model’s context window limit.
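There is no exact way to know the token count from raw text alone, but a rough rule of thumb of about 4 characters per token for English helps sanity-check prompt size against the context window. This heuristic is an assumption on our part, not something Vibe provides:

```python
def rough_token_estimate(text: str) -> int:
    # ~4 characters per token is a common rough heuristic for English text;
    # real tokenizers vary, so treat this only as an order-of-magnitude check.
    return len(text) // 4

# e.g. an 8,000-character prompt is roughly 2,000 tokens
```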
7. Use Multiple Prompts for Different Use Cases
Create different prompts for different scenarios:
# For code reviews
nano ~/.vibe/prompts/review.md
# For documentation
nano ~/.vibe/prompts/docs.md
# For testing
nano ~/.vibe/prompts/test.md
Then use different agents for each:
# ~/.vibe/agents/reviewer.toml
system_prompt_id = "review"
# ~/.vibe/agents/docwriter.toml
system_prompt_id = "docs"
Complete Example: Custom Prompt Setup
Step 1: Create Custom Prompt Directory
mkdir -p ~/.vibe/prompts
Step 2: Create Custom System Prompt
nano ~/.vibe/prompts/enterprise-dev.md
# Enterprise Development Assistant
You are operating as and within Mistral Vibe, configured for enterprise development standards.
## Core Principles
- **Security First**: Always consider security implications
- **Maintainability**: Write code that's easy to understand and maintain
- **Performance**: Optimize for performance where it matters
- **Standards Compliance**: Follow all company coding standards
## Tool Usage Guidelines
- Always use tools to fulfill user requests
- Check parameters before using tools
- Match existing code style exactly
- Keep changes minimal and focused
## Enterprise-Specific Rules
- Use TypeScript for all new frontend code
- Use React with @mui/material for UI components
- Follow the monorepo structure
- Use npm workspaces for package management
- Always add proper documentation
## Code Quality
- Write clean, self-documenting code
- Add JSDoc/TypeScript comments for public APIs
- Follow SOLID principles
- Write unit tests for all new code
- Ensure code passes linting checks
Step 3: Create Agent Configuration
nano ~/.vibe/agents/enterprise.toml
name = "enterprise"
display_name = "Enterprise Developer"
description = "Agent configured for enterprise development standards"
safety = "neutral"
system_prompt_id = "enterprise-dev"
# Tool permissions
[tools.write_file]
permission = "ask"
[tools.search_replace]
permission = "ask"
[tools.bash]
permission = "ask"
Step 4: Update Main Configuration
nano ~/.vibe/config.toml
active_model = "devstral-2"
textual_theme = "terminal"
# Project context settings
[project_context]
max_chars = 30000
max_depth = 4
max_files = 500
# Include all context
include_commit_signature = true
include_model_info = true
include_prompt_detail = true
include_project_context = true
Step 5: Use Your Custom Setup
vibe --agent enterprise
Summary
Mistral Vibe offers extensive prompt customization options:
- System Prompts - Customize base behavior (cli, tests, or custom)
- Utility Prompts - Customize contextual information (project_context, dangerous_directory)
- Prompt Layers - Control what context is included (commit signature, model info, etc.)
- Project Context - Configure how project information is presented
- Agent-Specific Prompts - Different prompts for different agents
- Tool Prompts - Custom prompts for custom tools
Key Configuration Options
- system_prompt_id - Which system prompt to use
- include_commit_signature - Include git commit instructions
- include_model_info - Include model name in prompt
- include_prompt_detail - Include OS/shell info and tool prompts
- include_project_context - Include project structure and git status
- [project_context] - Configure project context settings
Customization Locations
- Global: ~/.vibe/prompts/
- Project: .vibe/prompts/
- Configuration: ~/.vibe/config.toml or .vibe/config.toml
- Agent Configs: ~/.vibe/agents/*.toml or .vibe/agents/*.toml
By understanding and leveraging these options, you can fully customize Mistral Vibe’s behavior to match your specific workflow, coding standards, and requirements.
How the Task Tool Works in Mistral Vibe
I’ve explored the task tool implementation and created a detailed document covering all aspects. Here’s what I found:
Core Functionality
The task tool delegates work to subagents for independent execution, enabling:
- Context management (offload work to prevent context overload)
- Parallel execution (run multiple tasks independently)
- Specialization (use different subagents for different tasks)
- Automation (delegate repetitive or complex tasks)
How It Works
- Validation: Checks that the specified agent exists and is a subagent
- Isolated Execution: Creates a new AgentLoop with logging disabled
- Conversation Loop: Runs the subagent with the task description
- Event Streaming: Collects responses and displays tool results
- Completion: Returns accumulated response with turn count
Key Security Mechanisms
- Agent Type Validation: Only subagents can be spawned (prevents recursion)
- Isolated Configuration: Session logging disabled for privacy
- Inherited Approval Callback: Maintains same permission model
- Turn Counting: Prevents runaway processes
- Error Handling: Graceful degradation on failures
Task Arguments
- Required: task (str) - The task description
- Optional: agent (str) - Subagent name (default: “explore”)
Task Results
Returns a TaskResult with:
- response: Accumulated text response
- turns_used: Number of assistant responses
- completed: Whether the task finished normally
Best Practices
- Write clear, detailed task descriptions
- Choose the right subagent for the task
- Use for complex tasks, not simple operations
- Trust subagent autonomy
The document provides complete implementation details, security analysis, use cases, and a step-by-step example of creating and using custom subagents.
Complete Guide to the Task Tool
Overview of the Task Tool
The task tool is a powerful feature in Mistral Vibe that allows delegating work to subagents for independent execution. It enables parallel processing and context management by running specialized agents in the background.
Key Characteristics
- Independent Execution: Subagents run autonomously without user interaction
- Context Isolation: Subagent conversations don’t interfere with the main conversation
- Resource Management: Limits on turns and execution to prevent runaway processes
- Security Constraints: Only subagents can be spawned (not regular agents)
- No Logging: Subagent interactions are not saved to session logs
How the Task Tool Works
The task tool creates a separate AgentLoop instance for the subagent, runs it with the provided task description, and returns the accumulated results.
Core Process
- Validation: Check that the specified agent exists and is a subagent
- Configuration: Create a new configuration with logging disabled
- Execution: Run the subagent’s conversation loop with the task
- Monitoring: Track turns and collect output
- Completion: Return results when done or interrupted
Code Flow
# 1. Get the subagent profile
agent_profile = agent_manager.get_agent(args.agent)

# 2. Validate it's a subagent
if agent_profile.agent_type != AgentType.SUBAGENT:
    raise ToolError("Only subagents can be used")

# 3. Create isolated configuration
base_config = VibeConfig.load(session_logging=SessionLoggingConfig(enabled=False))

# 4. Create subagent loop
subagent_loop = AgentLoop(config=base_config, agent_name=args.agent)

# 5. Execute and collect results
async for event in subagent_loop.act(args.task):
    if isinstance(event, AssistantEvent) and event.content:
        accumulated_response.append(event.content)
    # ... handle other events

# 6. Return results
yield TaskResult(
    response="".join(accumulated_response),
    turns_used=turns_used,
    completed=completed,
)
Task Execution Flow
Step-by-Step Execution
- Tool Invocation
  - User or main agent calls task(task="...", agent="...")
  - Tool is validated and permissions are checked
- Subagent Selection
  - Agent manager looks up the specified agent
  - Verifies it’s a subagent (not a regular agent)
  - Loads the agent’s profile and configuration
- Isolated Execution Environment
  - Creates a new AgentLoop instance for the subagent
  - Disables session logging (SessionLoggingConfig(enabled=False))
  - Inherits approval callback from parent context
- Conversation Loop
  - Subagent processes the task description
  - Can use any tools allowed in its profile
  - Generates responses and tool calls
- Event Streaming
  - Assistant messages are collected
  - Tool results are displayed in real-time
  - Interruptions are detected
- Completion
  - Counts turns used (number of assistant responses)
  - Determines if task completed normally
  - Returns accumulated response
Event Types Handled
The task tool processes these event types from the subagent:
- AssistantEvent: Collects the subagent’s text responses
- ToolResultEvent: Displays tool execution results
- CompactStartEvent/CompactEndEvent: Handles conversation compaction
- Middleware stop events: Detects interruptions
Task Arguments
The task tool accepts two parameters:
Required Parameter
- task (str): The task description to delegate to the subagent
  - Should be clear and detailed
  - Provides context for autonomous execution
  - Examples: “Analyze the project structure”, “Find all TODO comments”
Optional Parameter
- agent (str): Name of the subagent to use
  - Default: “explore” (built-in exploration subagent)
  - Must be a valid subagent profile
  - Examples: “explore”, “my-custom-subagent”
Example Invocations
task(task="Find all instances of the word TODO in the codebase")
task(task="Analyze the architecture of the backend service", agent="explore")
task(task="Review the test files for missing assertions", agent="reviewer")
Task Results
The task tool returns a TaskResult object with three fields:
Result Fields
- response (str): Accumulated text response from the subagent
  - Contains all assistant messages concatenated
  - May include error messages if execution failed
  - Used as the primary output of the tool
- turns_used (int): Number of turns the subagent used
  - Counts assistant responses in the conversation
  - Helps track resource usage
  - Used for monitoring and billing
- completed (bool): Whether the task completed normally
  - True if the subagent finished naturally
  - False if interrupted by middleware or error
  - Helps determine if results are reliable
Example Results
Successful completion:
{
  "response": "Found 15 TODO comments in the codebase...",
  "turns_used": 3,
  "completed": true
}
Interrupted:
{
  "response": "Analyzing project structure... [Subagent error: timeout]",
  "turns_used": 5,
  "completed": false
}
Security and Safety Mechanisms
The task tool has multiple security layers:
1. Agent Type Validation
if agent_profile.agent_type != AgentType.SUBAGENT:
    raise ToolError(
        f"Agent '{args.agent}' is a {agent_profile.agent_type.value} agent. "
        f"Only subagents can be used with the task tool. "
        f"This is a security constraint to prevent recursive spawning."
    )
Purpose: Prevents infinite recursion by ensuring only subagents can be spawned.
2. Isolated Configuration
base_config = VibeConfig.load(
    session_logging=SessionLoggingConfig(enabled=False)
)
Purpose: Subagent interactions are not logged to prevent sensitive data leakage.
3. Inherited Approval Callback
if ctx and ctx.approval_callback:
    subagent_loop.set_approval_callback(ctx.approval_callback)
Purpose: Maintains the same permission model as the parent agent.
4. Turn Counting
turns_used = sum(
    msg.role == Role.assistant for msg in subagent_loop.messages
)
Purpose: Prevents runaway processes by tracking resource usage.
5. Error Handling
try:
    async for event in subagent_loop.act(args.task):
        ...  # process events
except Exception as e:
    completed = False
    accumulated_response.append(f"\n[Subagent error: {e}]")
Purpose: Graceful handling of subagent failures.
6. Middleware Integration
The subagent inherits middleware from the main configuration:
- TurnLimitMiddleware: Limits number of turns
- PriceLimitMiddleware: Limits cost
- AutoCompactMiddleware: Compacts long conversations
- PlanAgentMiddleware: Handles plan agent logic
Use Cases and Best Practices
When to Use the Task Tool
✅ Context management: Delegate tasks that would consume too much main conversation context
✅ Specialized work: Use appropriate subagents for specific task types
✅ Parallel execution: Launch multiple subagents for independent tasks
✅ Autonomous work: Tasks that don’t require back-and-forth with the user
Best Practices
- Write clear, detailed task descriptions
- The subagent works autonomously
- Provide enough context for independent success
- Example: “Analyze the project structure and architecture” vs “Check stuff”
- Choose the right subagent
- Match the subagent to the task type
- Use built-in subagents like “explore” for code analysis
- Create custom subagents for specialized tasks
- Prefer direct tools for simple operations
  - If you know exactly which file to read: use read_file
  - If you need to search: use grep
  - Only use the task tool for complex, multi-step work
- Trust the subagent’s judgment
- Let it explore without micromanaging
- Avoid specifying exact steps
- Focus on the goal, not the method
Example Use Cases
Codebase Exploration:
task(task="Analyze the project structure and identify key components")
Pattern Searching:
task(task="Find all deprecated API usages in the codebase")
Documentation Review:
task(task="Check if all public functions have proper documentation")
Architecture Analysis:
task(task="Analyze the database schema and identify potential issues")
Limitations
Functional Limitations
- No file writing: Subagents cannot write or modify files
- No user interaction: Subagents cannot ask user questions
- No persistent state: Subagent sessions are not saved
- Limited turns: Controlled by middleware (turn limits)
- Cost limits: Controlled by middleware (price limits)
Technical Limitations
- Single subagent at a time: Only one subagent runs per task call
- No nested tasks: Subagents cannot call the task tool themselves
- Resource sharing: Subagents share the same LLM backend
- Context isolation: Subagent context doesn’t affect main conversation
Error Handling
- Timeouts: Subagents may be interrupted if they take too long
- Resource limits: Subagents are constrained by middleware
- Permission errors: Subagents respect their tool permissions
- Execution errors: Errors are captured and returned in response
Implementation Details
Tool Class Structure
class Task(
    BaseTool[TaskArgs, TaskResult, TaskToolConfig, BaseToolState],
    ToolUIData[TaskArgs, TaskResult],
):
Type Parameters:
- TaskArgs: Input arguments (task description, agent name)
- TaskResult: Output results (response, turns, completion status)
- TaskToolConfig: Tool configuration (permission level)
- BaseToolState: Tool execution state
Key Methods
- run(): Main execution method (async generator)
  - Creates subagent loop
  - Executes task
  - Yields stream events
  - Returns final result
- get_call_display(): UI display for tool call
  - Shows “Running {agent} agent: {task}”
  - Provides feedback during execution
- get_result_display(): UI display for tool result
  - Shows “Agent completed in {turns_used} turns”
  - Indicates success or interruption
- get_status_text(): Status message
  - Returns “Running subagent”
  - Used during execution
Permission Model
class TaskToolConfig(BaseToolConfig):
    permission: ToolPermission = ToolPermission.ASK
Default Permission: ASK (requires user approval)
Possible Values:
- always: Auto-approve task executions
- never: Disable task tool completely
- ask: Require user confirmation (default)
UI Integration
The task tool implements ToolUIData for rich UI display:
@classmethod
def get_call_display(cls, event: ToolCallEvent) -> ToolCallDisplay:
    args = event.args
    if isinstance(args, TaskArgs):
        return ToolCallDisplay(
            summary=f"Running {args.agent} agent: {args.task}"
        )
    return ToolCallDisplay(summary="Running subagent")

@classmethod
def get_result_display(cls, event: ToolResultEvent) -> ToolResultDisplay:
    result = event.result
    if isinstance(result, TaskResult):
        turn_word = "turn" if result.turns_used == 1 else "turns"
        if not result.completed:
            return ToolResultDisplay(
                success=False,
                message=f"Agent interrupted after {result.turns_used} {turn_word}",
            )
        return ToolResultDisplay(
            success=True,
            message=f"Agent completed in {result.turns_used} {turn_word}",
        )
    return ToolResultDisplay(success=True, message="Agent completed")
Complete Example
Creating a Custom Subagent
First, create a custom subagent profile:
# ~/.vibe/agents/code-analyzer.toml
name = "code-analyzer"
display_name = "Code Analyzer"
description = "Specialized subagent for code analysis tasks"
safety = "safe"
agent_type = "subagent"
# Allow read-only tools
[tools.read_file]
permission = "always"
[tools.grep]
permission = "always"
[tools.bash]
permission = "always"
# Disable write tools
[tools.write_file]
permission = "never"
[tools.search_replace]
permission = "never"

Using the Custom Subagent
task(
task="Analyze the code quality and identify potential issues",
agent="code-analyzer"
)
Expected Output
🤖 Running code-analyzer agent: Analyze the code quality and identify potential issues
🔧 code-analyzer: Agent completed in 4 turns
Found the following issues:
1. Missing type hints in 15 functions
2. Unused imports in 3 files
3. Potential security issues in auth module
Summary
The task tool is a powerful feature that enables:
- Context Management: Offload work to prevent context overload
- Parallel Execution: Run multiple tasks independently
- Specialization: Use different subagents for different tasks
- Automation: Delegate repetitive or complex tasks
Key Features
- Independent execution with isolated configuration
- Security constraints preventing recursion
- Resource monitoring with turn counting
- Rich UI integration with progress display
- Error handling with graceful degradation
Best Practices
- Write detailed task descriptions
- Choose appropriate subagents
- Use for complex, multi-step tasks
- Trust subagent autonomy
- Monitor resource usage
By understanding how the task tool works, you can effectively use it to manage complex workflows, delegate specialized tasks, and improve overall productivity in Mistral Vibe.
How Skills Work in Mistral Vibe
This section summarizes the skills system implementation; a detailed guide follows below.
Core Functionality
Skills are reusable components that extend Vibe's functionality through:

- Documentation in the system prompt
- Slash command integration (when `user-invocable` is true)
- Tool restrictions (experimental feature)
- Metadata-driven configuration
Skill Structure
- Location: Directories with `SKILL.md` files
- Format: YAML frontmatter + markdown content
- Discovery: Automatic from multiple paths
Skill Metadata Fields
Required:
- `name` (str): Skill identifier (must match directory name)
- `description` (str): What the skill does

Optional:
- `license` (str): License information
- `compatibility` (str): Environment requirements
- `metadata` (dict): Arbitrary key-value pairs
- `allowed-tools` (list): Pre-approved tools (experimental)
- `user-invocable` (bool): Appear in slash command menu (default: true)
Discovery Paths
Skills are discovered from:

1. Custom paths (from `skill_paths` in config)
2. Local project skills (`.vibe/skills/`)
3. Global skills (`~/.vibe/skills/`)
Integration Points
- System Prompt: Skills section with XML format
- Slash Commands: User-invocable skills appear in menu
- Configuration: Enable/disable via patterns
Management
- Enable/Disable: Using `enabled_skills` and `disabled_skills` in config
- Pattern Matching: Exact names, globs, and regex
- Duplicate Handling: First occurrence wins
Security Features
- HTML escaping in system prompt
- Tool restrictions (experimental)
- Validation of metadata fields
The sections below provide complete implementation details, examples, and best practices for creating and using skills.
Complete Guide to the Skills System
- Overview of the Skills System
- Skill Structure and Format
- Skill Discovery and Loading
- Skill Metadata Fields
- Skill Integration with the System
- Skill Discovery Paths
- Skill Management and Configuration
- Skills and Slash Commands
- Best Practices for Creating Skills
- Complete Example: Creating a Custom Skill
Overview of the Skills System
The skills system in Mistral Vibe allows extending functionality through reusable components. Skills can add specialized behaviors, documentation, and can appear as slash commands in the UI.
Key Characteristics
- Reusable Components: Skills are self-contained units of functionality
- Metadata-Driven: Skills use YAML frontmatter for configuration
- Discoverable: Skills are automatically discovered from multiple paths
- Configurable: Skills can be enabled/disabled via patterns
- User-Invocable: Skills can appear in slash command menu
Purpose
Skills serve several purposes:
- Documentation: Provide context about available capabilities
- Specialization: Define specialized behaviors for different domains
- Slash Commands: Appear as user-invocable commands in the UI
- Tool Restrictions: Limit which tools a skill can use (experimental)
Skill Structure and Format
Skills are defined in directories with a SKILL.md file containing YAML frontmatter followed by markdown content.
File Structure
~/.vibe/skills/
└── my-skill/
└── SKILL.md
Format Specification
---
# YAML Frontmatter (Metadata)
name: my-skill
description: What this skill does
license: MIT
compatibility: Python 3.12+
user-invocable: true
allowed-tools:
- read_file
- grep
---
# Markdown Content
## Skill Documentation
Detailed description of what the skill does and how to use it.

Frontmatter Requirements
- Must start and end with `---` (at least 3 dashes)
- Must be valid YAML
- Must contain at least `name` and `description`
- Must be a dictionary/mapping
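To make these requirements concrete, here is a minimal sketch of frontmatter splitting. It is a simplified stand-in for Vibe's actual `parse_frontmatter` helper (whose exact behavior is not shown here); it only splits the document, leaving YAML parsing and validation to a real YAML library:

```python
def split_frontmatter(content: str) -> tuple[str, str]:
    """Split a SKILL.md document into (frontmatter, markdown body).

    Simplified sketch: the real parser also feeds the frontmatter
    through a YAML parser and validates the result.
    """
    lines = content.splitlines()
    if not lines or not lines[0].startswith("---"):
        return "", content  # No frontmatter block at all
    for i, line in enumerate(lines[1:], start=1):
        if line.startswith("---"):
            # Frontmatter is everything between the two '---' markers.
            return "\n".join(lines[1:i]), "\n".join(lines[i + 1:])
    return "", content  # No closing marker: treat as plain markdown
```

On a well-formed `SKILL.md`, the first element of the returned pair is the raw YAML between the two `---` markers.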
Skill Discovery and Loading
The skill system uses a multi-stage discovery and loading process:
Discovery Process
- Find Skill Directories: Look for directories in search paths
- Check for SKILL.md: Each directory must contain a `SKILL.md` file
- Parse Frontmatter: Extract YAML metadata from the file
- Validate Metadata: Ensure required fields are present
- Create SkillInfo: Build internal representation
- Handle Duplicates: Skip duplicates (log warning)
Loading Code
def _parse_skill_file(self, skill_path: Path) -> SkillInfo:
try:
content = skill_path.read_text(encoding="utf-8")
except OSError as e:
raise SkillParseError(f"Cannot read file: {e}") from e
frontmatter, _ = parse_frontmatter(content)
metadata = SkillMetadata.model_validate(frontmatter)
skill_name_from_dir = skill_path.parent.name
if metadata.name != skill_name_from_dir:
logger.warning(
"Skill name '%s' doesn't match directory name '%s' at %s",
metadata.name,
skill_name_from_dir,
skill_path,
)
    return SkillInfo.from_metadata(metadata, skill_path)

Error Handling
- Missing SKILL.md: Directory is skipped
- Invalid YAML: Warning logged, skill not loaded
- Missing required fields: Validation error
- Duplicate names: Later occurrence is skipped with warning
Skill Metadata Fields
Required Fields
- `name` (str): Skill identifier
  - Must match directory name
  - Lowercase letters, numbers, and hyphens only
  - Pattern: `^[a-z0-9]+(-[a-z0-9]+)*$`
  - Length: 1-64 characters
- `description` (str): What the skill does
  - Length: 1-1024 characters
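The name pattern and length rules can be checked directly with Python's `re` module:

```python
import re

NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_skill_name(name: str) -> bool:
    """Check a skill name against the documented pattern and length limits."""
    return 1 <= len(name) <= 64 and NAME_PATTERN.fullmatch(name) is not None

# Valid: lowercase words joined by single hyphens
assert is_valid_skill_name("code-review")
# Invalid: uppercase, leading hyphen, trailing hyphen
assert not is_valid_skill_name("Code-Review")
assert not is_valid_skill_name("-bad")
assert not is_valid_skill_name("bad-")
```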
Optional Fields
- `license` (str | None): License name or reference
  - Can reference a bundled license file
  - No length restrictions
- `compatibility` (str | None): Environment requirements
  - Can specify product, system packages, etc.
  - Max length: 500 characters
- `metadata` (dict[str, str]): Arbitrary key-value mapping
  - Keys and values are converted to strings
  - Used for additional metadata
- `allowed-tools` (list[str]): Pre-approved tools (experimental)
  - Can be a YAML list or a space-separated string
  - Limits which tools the skill can use
- `user-invocable` (bool): Controls slash command menu appearance
  - Default: `true`
  - When `true`: Skill appears in slash command menu
  - When `false`: Skill is documentation-only
Field Validation
The skill metadata goes through Pydantic validation:
class SkillMetadata(BaseModel):
model_config = {"populate_by_name": True}
name: str = Field(
...,
min_length=1,
max_length=64,
pattern=r"^[a-z0-9]+(-[a-z0-9]+)*$",
description="Skill identifier. Lowercase letters, numbers, and hyphens only.",
)
description: str = Field(
...,
min_length=1,
max_length=1024,
description="What this skill does and when to use it.",
)
    # ... other fields with validation

Skill Integration with the System
System Prompt Integration
Skills are included in the system prompt when include_prompt_detail is enabled:
def get_universal_system_prompt(
tool_manager: ToolManager,
config: VibeConfig,
skill_manager: SkillManager,
agent_manager: AgentManager,
) -> str:
sections = [config.system_prompt]
# ... other sections
if config.include_prompt_detail:
# ... tool prompts
skills_section = _get_available_skills_section(skill_manager)
if skills_section:
sections.append(skills_section)
# ... other sections
    return "\n\n".join(sections)

Skill Section Format
# Available Skills
You have access to the following skills. When a task matches a skill's description,
read the full SKILL.md file to load detailed instructions.
<available_skills>
<skill>
<name>skill-name</name>
<description>Skill description</description>
<path>/path/to/skill/SKILL.md</path>
</skill>
</available_skills>

HTML Escaping
Skill metadata is HTML-escaped before inclusion in the prompt to prevent injection:
lines.append(f" <name>{html.escape(str(name))}</name>")
lines.append(
f" <description>{html.escape(str(info.description))}</description>"
)
lines.append(f" <path>{html.escape(str(info.skill_path))}</path>")

Skill Discovery Paths
The skill system searches for skills in multiple locations:
Default Search Paths
- Global Skills Directory: `~/.vibe/skills/`
- Local Project Skills: `.vibe/skills/` in current working directory
- Custom Paths: Configured in `config.toml` via `skill_paths`
Discovery Order
Skills are discovered in this order (earlier paths take precedence, since later duplicates are skipped):

1. Custom paths (from `skill_paths`)
2. Local project skills (`.vibe/skills/`)
3. Global skills (`~/.vibe/skills/`)
Path Resolution
@staticmethod
def _compute_search_paths(config: VibeConfig) -> list[Path]:
paths: list[Path] = []
for path in config.skill_paths:
if path.is_dir():
paths.append(path)
if (skills_dir := resolve_local_skills_dir(Path.cwd())) is not None:
paths.append(skills_dir)
if GLOBAL_SKILLS_DIR.path.is_dir():
paths.append(GLOBAL_SKILLS_DIR.path)
unique: list[Path] = []
for p in paths:
rp = p.resolve()
if rp not in unique:
unique.append(rp)
    return unique

Duplicate Handling
If the same skill is found in multiple locations:

- The first occurrence is kept
- Later occurrences are skipped
- A debug message is logged
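The first-wins rule amounts to a simple merge over discovery order. A minimal sketch (illustrative only; the real manager tracks full `SkillInfo` objects and logs the skipped paths):

```python
def merge_skills(discovered: list[tuple[str, str]]) -> dict[str, str]:
    """Map skill names to paths, keeping the first occurrence of each name.

    `discovered` lists (name, path) pairs in discovery order.
    """
    merged: dict[str, str] = {}
    for name, path in discovered:
        # setdefault inserts only when the name is new, so the
        # earliest discovery path wins over later duplicates.
        merged.setdefault(name, path)
    return merged
```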
Skill Management and Configuration
Enabling and Disabling Skills
Skills can be enabled or disabled using patterns in configuration:
# ~/.vibe/config.toml
# Enable specific skills
enabled_skills = ["code-review", "test-*"]
# Disable specific skills
disabled_skills = ["experimental-*"]

Pattern Matching
Vibe supports three types of patterns:
- Exact names: `"code-review"` matches only "code-review"
- Glob patterns: `"test-*"` matches "test-1", "test-2", etc.
- Regex patterns: `"re:^serena_.*$"` is matched as a regular expression
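A minimal sketch of how this matching could work, assuming glob semantics from the standard `fnmatch` module and the `re:` prefix convention shown above (the real `name_matches` helper may differ in detail):

```python
import fnmatch
import re

def name_matches(name: str, patterns: list[str]) -> bool:
    """Return True if `name` matches any pattern in `patterns`.

    Patterns starting with 're:' are regular expressions; everything
    else is treated as a glob, which also covers exact names.
    """
    for pattern in patterns:
        if pattern.startswith("re:"):
            if re.fullmatch(pattern[3:], name):
                return True
        elif fnmatch.fnmatch(name, pattern):
            return True
    return False
```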
Configuration Logic
@property
def available_skills(self) -> dict[str, SkillInfo]:
if self._config.enabled_skills:
return {
name: info
for name, info in self._available.items()
if name_matches(name, self._config.enabled_skills)
}
if self._config.disabled_skills:
return {
name: info
for name, info in self._available.items()
if not name_matches(name, self._config.disabled_skills)
}
    return dict(self._available)

Priority Rules
- If `enabled_skills` is set: only those skills are available
- If `disabled_skills` is set (and `enabled_skills` is not): all skills except the disabled ones
- If neither is set: all discovered skills are available
Skills and Slash Commands
Skills can appear as slash commands in the UI when user_invocable is true.
User-Invocable Skills
---
name: code-review
description: Perform automated code reviews
license: MIT
compatibility: Python 3.12+
user-invocable: true # This makes it appear in slash command menu
allowed-tools:
- read_file
- grep
- ask_user_question
---

Slash Command Format
When a skill is user-invocable, it appears in the slash command menu as `/skill-name` (any spaces in the name are replaced with hyphens).
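The name-to-command mapping can be sketched in one line (a hypothetical helper, not taken from the source):

```python
def to_slash_command(skill_name: str) -> str:
    # Any spaces in the skill name become hyphens,
    # yielding a single-token slash command.
    return "/" + skill_name.replace(" ", "-")
```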
Invocation Behavior
When a user invokes a skill via slash command:

1. The skill's SKILL.md file is read
2. The description and instructions are displayed
3. The agent can use the skill's allowed tools
4. The skill's context is included in the conversation
Best Practices for Creating Skills
1. Clear and Descriptive Names
- Use lowercase with hyphens: `code-review`, `test-generator`
- Be specific about the skill's purpose
- Avoid generic names like `helper` or `assistant`
2. Comprehensive Descriptions
- Explain what the skill does
- Include when to use it
- Mention any prerequisites or requirements
3. Detailed Documentation
- Provide clear instructions in the markdown body
- Include examples of usage
- Document any limitations or constraints
4. Appropriate Tool Restrictions
- List only the tools needed for the skill
- Avoid giving unnecessary permissions
- Consider security implications
5. Versioning and Compatibility
- Specify compatibility requirements
- Document breaking changes
- Consider versioning your skills
6. Testing
- Test your skill in different scenarios
- Verify it works with the allowed tools
- Check that the documentation is clear
Complete Example: Creating a Custom Skill
Step 1: Create Skill Directory
mkdir -p ~/.vibe/skills/code-review

Step 2: Create SKILL.md File
nano ~/.vibe/skills/code-review/SKILL.md

Step 3: Define Skill Metadata and Documentation
---
name: code-review
description: Perform automated code reviews to identify issues and suggest improvements
license: MIT
compatibility: Python 3.12+
user-invocable: true
allowed-tools:
- read_file
- grep
- ask_user_question
---
# Code Review Skill
This skill helps analyze code quality and suggest improvements.
## Features
- Identifies common code smells
- Checks for missing documentation
- Suggests refactoring opportunities
- Verifies coding standards compliance
## Usage
When performing a code review, this skill will:
1. Analyze the code structure
2. Check for common issues
3. Suggest improvements
4. Provide detailed feedback
## Limitations
- Only analyzes Python code
- Requires proper file permissions
- May miss context-specific issues
## Examples

/code-review
🤖 I’ll perform a code review of the current project.
read_file(path="src/main.py")
grep(pattern="TODO", path="src/")
ask_user_question(questions=[{
    "question": "What coding standards should I check?",
    "options": [
        {"label": "PEP 8", "description": "Python style guide"},
        {"label": "Custom", "description": "Project-specific rules"}
    ]
}])
Step 4: Verify Skill Discovery
vibe

Check that the skill appears in the system prompt or slash command menu.
Step 5: Test the Skill
> /code-review
Or let the agent use it automatically when appropriate.
Advanced: Project-Specific Skills
You can also create skills specific to a project:
# In your project directory
mkdir -p .vibe/skills/project-docs
nano .vibe/skills/project-docs/SKILL.md

---
name: project-docs
description: Generate and maintain project-specific documentation
license: MIT
compatibility: Python 3.12+
user-invocable: true
allowed-tools:
- read_file
- grep
- write_file
---
# Project Documentation Skill
This skill helps generate documentation specific to this project.
## Features
- Analyzes project structure
- Generates API documentation
- Creates README files
- Updates documentation based on code changes
## Project-Specific Rules
- Follow the project's documentation style
- Include project-specific examples
- Reference project-specific components

Summary
The skills system in Mistral Vibe provides a powerful way to extend functionality:
- Skill Structure: Directories with `SKILL.md` files containing YAML frontmatter
- Discovery: Automatic discovery from multiple paths
- Metadata: Comprehensive metadata with validation
- Integration: System prompt and slash command integration
- Configuration: Enable/disable via patterns
- Security: Tool restrictions and HTML escaping
Key Benefits
- Reusability: Skills can be shared across projects
- Documentation: Built-in documentation system
- Extensibility: Easy to add new skills
- Control: Fine-grained enable/disable patterns
- User-Friendly: Slash command integration
By understanding how skills work, you can create custom skills to extend Mistral Vibe’s capabilities for your specific needs and workflows.