- Make sure you have Python 3.9+ installed
- Install the LLM CLI tool if you haven't already:
  ```bash
  pip install llm
  ```
- Clone or download this repository
- Navigate to the project directory
- Install in development mode:
  ```bash
  pip install -e .
  ```
- Alternatively, install from PyPI:
  ```bash
  pip install llm-chutes
  ```
- Get your API key from Chutes AI
- Set it as an environment variable:
  ```bash
  export CHUTES_API_KEY="your-api-key-here"
  ```
- Or add it to your shell profile (`.bashrc`, `.zshrc`, etc.):
  ```bash
  echo 'export CHUTES_API_KEY="your-api-key-here"' >> ~/.zshrc
  source ~/.zshrc
  ```
- Alternatively, you can use the LLM keys system:
  ```bash
  llm keys set chutes
  ```
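Either configuration route works because the key can be looked up at request time. A hedged sketch of a typical lookup order (environment variable first, then the saved LLM keys store; the function name and exact precedence here are illustrative, not the plugin's actual code):

```python
import os


def resolve_chutes_key(stored_keys):
    """Illustrative key lookup: prefer the CHUTES_API_KEY environment
    variable, fall back to a key saved via `llm keys set chutes`.

    `stored_keys` is a dict standing in for LLM's on-disk keys store.
    Returns None if neither source has a key.
    """
    return os.environ.get("CHUTES_API_KEY") or stored_keys.get("chutes")
```

With this order, an exported environment variable temporarily overrides a stored key, which is handy for switching accounts in a single shell session.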
- Test the installation:
  ```bash
  python3 test_basic.py
  ```
- List available models:
  ```bash
  llm chutes models
  ```
- Try a simple prompt:
  ```bash
  llm -m chutes/deepseek-ai/DeepSeek-R1 "Hello, how are you?"
  ```
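Model IDs follow a `chutes/<org>/<model>` pattern: the `chutes/` prefix routes the request to this plugin, and the remainder names the upstream model. A small sketch of that split (the helper name is hypothetical, not part of the plugin's API):

```python
def split_model_id(model_id):
    """Split a fully qualified model ID into (provider prefix, upstream model).

    E.g. "chutes/deepseek-ai/DeepSeek-R1" splits into
    ("chutes", "deepseek-ai/DeepSeek-R1").
    """
    prefix, _, upstream = model_id.partition("/")
    if not upstream:
        raise ValueError(f"expected '<provider>/<model>', got {model_id!r}")
    return prefix, upstream
```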
- Make sure your `CHUTES_API_KEY` is set correctly
- Try refreshing models:
  ```bash
  llm chutes refresh
  ```
- Check your internet connection
- Make sure you installed the plugin in the same Python environment as LLM
- Try reinstalling:
  ```bash
  pip uninstall llm-chutes && pip install -e .
  ```
- Verify your API key is valid
- Check that the Chutes AI service is accessible
- Try the test script:
  ```bash
  python3 test_basic.py
  ```
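The contents of `test_basic.py` aren't shown here, but a minimal pre-flight check along these lines can rule out the local problems in the list above before blaming the network (the checks are illustrative; the real script may differ):

```python
import importlib.util
import os


def preflight():
    """Collect basic local-environment problems as a list of messages."""
    problems = []
    if not os.environ.get("CHUTES_API_KEY"):
        problems.append("CHUTES_API_KEY is not set")
    if importlib.util.find_spec("llm") is None:
        problems.append("the `llm` package is not importable from this Python")
    return problems
```

An empty list from `preflight()` means the remaining suspects are the API key's validity and the Chutes AI service itself.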
```bash
# List all models
llm chutes models

# List models in JSON format
llm chutes models --json

# Refresh model cache
llm chutes refresh

# Use a model with caching
llm -m chutes/deepseek-ai/DeepSeek-R1 -o cache 1 "Explain quantum computing"

# Set up an alias for easier use
llm aliases set deepseek chutes/deepseek-ai/DeepSeek-R1
llm -m deepseek "What is machine learning?"
```
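The `--json` output is convenient for scripting. Assuming it is a JSON array of model records with an `id` field (the exact schema isn't documented here, and the second model ID below is a made-up stand-in), filtering it from Python might look like:

```python
import json

# Stand-in for the output of `llm chutes models --json`;
# the real schema and model list may differ.
raw = (
    '[{"id": "chutes/deepseek-ai/DeepSeek-R1"},'
    ' {"id": "chutes/example-org/Example-Model"}]'
)

models = json.loads(raw)
# Keep only model IDs containing "deepseek", case-insensitively.
deepseek_models = [m["id"] for m in models if "deepseek" in m["id"].lower()]
print(deepseek_models)
```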