LLM plugin for models hosted by Together AI
First, install the LLM command-line utility.
Now install this plugin in the same environment as LLM:

```bash
llm install llm-togetherai
```

You will need an API key from Together AI. You can obtain one here.
You can set that as an environment variable called TOGETHER_API_KEY, or add it to the llm set of saved keys using:
```bash
llm keys set together
```
```
Enter key: <paste key here>
```
To list available models, run:
```bash
llm models list
```

You should see a list that looks something like this:

```
together: together/meta-llama/Llama-2-7b-chat-hf
together: together/meta-llama/Llama-2-13b-chat-hf
together: together/meta-llama/Llama-2-70b-chat-hf
together: together/mistralai/Mistral-7B-Instruct-v0.1
together: together/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
...
```
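Each line of that output has the form `together: <model id>`, so it is easy to script against. A minimal sketch (the two-model listing here is just illustrative):

```python
# Collect model IDs from `llm models list` output, where each
# line looks like "together: <model id>".
output = """\
together: together/meta-llama/Llama-2-7b-chat-hf
together: together/meta-llama/Llama-2-70b-chat-hf"""

model_ids = [line.split(": ", 1)[1] for line in output.splitlines()]
print(model_ids[0])  # together/meta-llama/Llama-2-7b-chat-hf
```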
To run a prompt against a model, pass its full model ID to the -m option, like this:
```bash
llm -m together/meta-llama/Llama-2-7b-chat-hf "Five creative names for a pet robot"
```

You can set a shorter alias for a model using the llm aliases command like so:

```bash
llm aliases set llama2-7b together/meta-llama/Llama-2-7b-chat-hf
```

Now you can prompt the model using:

```bash
cat llm_togetherai.py | llm -m llama2-7b -s 'write some pytest tests for this'
```

Some Together AI models can accept image attachments. Run this command:
```bash
llm models --options -q together
```

And look for models that list these attachment types:

```
Attachment types:
  image/gif, image/jpeg, image/png, image/webp
```
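Before attaching a file you can check whether its MIME type is one of these four. A minimal sketch using Python's standard library (the file names are hypothetical):

```python
import mimetypes

# Attachment types accepted by the vision models listed above.
SUPPORTED = {"image/gif", "image/jpeg", "image/png", "image/webp"}

def is_supported(path):
    """Guess a file's MIME type from its extension and check it
    against the supported set."""
    mime, _encoding = mimetypes.guess_type(path)
    return mime in SUPPORTED

print(is_supported("pelicans.jpg"))  # True
print(is_supported("notes.txt"))     # False
```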
You can feed these models images as URLs or file paths, for example:
```bash
curl https://static.simonwillison.net/static/2024/pelicans.jpg | llm \
  -m together/meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo 'describe this image' -a -
```

The llm models -q together command will display all available models, or you can use this command to see more detailed information:
```bash
llm together models
```

Output starts like this:

```
- id: meta-llama/Llama-2-7b-chat-hf
  name: Llama-2-7b-chat-hf
  context_length: 4,096
  type: chat
  organization: Together
  pricing: input $0.2/M, output $0.2/M
- id: meta-llama/Llama-2-13b-chat-hf
  name: Llama-2-13b-chat-hf
  context_length: 4,096
  type: chat
  organization: Together
  pricing: input $0.3/M, output $0.3/M
```

Add --json to get back JSON instead:
```bash
llm together models --json
```

The plugin caches the list of available models for 1 hour. To refresh this cache manually:
```bash
llm together refresh
```

This will fetch the latest models from the Together AI API and update the local cache.
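The pricing lines in the model listing above are dollars per million tokens. As a rough sanity check, a small hypothetical helper (not part of the plugin) can turn them into a per-request estimate:

```python
def estimate_cost(input_tokens, output_tokens, input_price=0.2, output_price=0.2):
    """Cost in dollars, with prices given in $ per million tokens.
    Defaults are the Llama-2-7b rates shown in the listing above."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

print(estimate_cost(1000, 500))  # 0.0003
```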
This plugin uses the Together AI API endpoint:

```
https://api.together.xyz/v1/models
```
The models are cached locally in your LLM user directory to improve performance and reduce API calls.
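The one-hour cache behavior described above can be sketched as a small helper. This is illustrative only, not the plugin's actual code, and `cached_models` is a hypothetical name:

```python
import json
import time
from pathlib import Path

CACHE_TTL = 60 * 60  # one hour, matching the plugin's cache lifetime

def cached_models(cache_path, fetch):
    """Return the cached model list if the cache file is fresh,
    otherwise call fetch() and rewrite the cache."""
    path = Path(cache_path)
    if path.exists() and time.time() - path.stat().st_mtime < CACHE_TTL:
        return json.loads(path.read_text())
    models = fetch()
    path.write_text(json.dumps(models))
    return models
```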
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-togetherai
python3 -m venv venv
source venv/bin/activate
```

Now install the dependencies and test dependencies:

```bash
llm install -e '.[test]'
```

To run the tests:

```bash
pytest
```

Apache 2.0