llm-togetherai


LLM plugin for models hosted by Together AI

Installation

First, install the LLM command-line utility.

Now install this plugin in the same environment as LLM.

llm install llm-togetherai

Configuration

You will need an API key from Together AI. You can obtain one from your Together AI account settings.

You can set that as an environment variable called TOGETHER_API_KEY, or add it to the llm set of saved keys using:

llm keys set together
Enter key: <paste key here>

Usage

To list available models, run:

llm models list

You should see a list that looks something like this:

together: together/meta-llama/Llama-2-7b-chat-hf
together: together/meta-llama/Llama-2-13b-chat-hf
together: together/meta-llama/Llama-2-70b-chat-hf
together: together/mistralai/Mistral-7B-Instruct-v0.1
together: together/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
...

To run a prompt against a model, pass its full model ID to the -m option, like this:

llm -m together/meta-llama/Llama-2-7b-chat-hf "Five creative names for a pet robot"

You can set a shorter alias for a model using the llm aliases command like so:

llm aliases set llama2-7b together/meta-llama/Llama-2-7b-chat-hf

Now you can prompt the model using:

cat llm_togetherai.py | llm -m llama2-7b -s 'write some pytest tests for this'

Vision models

Some Together AI models can accept image attachments. Run this command:

llm models --options -q together

And look for models that list these attachment types:

  Attachment types:
    image/gif, image/jpeg, image/png, image/webp

You can feed these models images as URLs or file paths, for example:

curl https://static.simonwillison.net/static/2024/pelicans.jpg | llm \
    -m together/meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo 'describe this image' -a -

Listing models

The llm models -q together command will display all available models, or you can use this command to see more detailed information:

llm together models

Output starts like this:

- id: meta-llama/Llama-2-7b-chat-hf
  name: Llama-2-7b-chat-hf
  context_length: 4,096
  type: chat
  organization: Together
  pricing: input $0.2/M, output $0.2/M

- id: meta-llama/Llama-2-13b-chat-hf
  name: Llama-2-13b-chat-hf
  context_length: 4,096
  type: chat
  organization: Together
  pricing: input $0.3/M, output $0.3/M
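The per-million-token rates in that output can be used to estimate the cost of a request. A minimal sketch (this helper is not part of the plugin; the rates plugged in below are the Llama-2-7b figures shown above):

```python
def estimate_cost(input_tokens, output_tokens, input_per_m, output_per_m):
    """Estimate the dollar cost of a request from per-million-token rates."""
    return (input_tokens / 1_000_000) * input_per_m \
        + (output_tokens / 1_000_000) * output_per_m

# Llama-2-7b-chat-hf is listed at $0.2/M for both input and output
cost = estimate_cost(50_000, 10_000, 0.2, 0.2)
print(f"${cost:.4f}")  # $0.0120
```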

Add --json to get back JSON instead:

llm together models --json

Refreshing the model cache

The plugin caches the list of available models for 1 hour. To refresh this cache manually:

llm together refresh

This will fetch the latest models from the Together AI API and update the local cache.
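The caching behavior described above amounts to a simple time-to-live check. A hypothetical sketch of that logic (the function name, cache path, and JSON layout are assumptions for illustration, not the plugin's actual implementation):

```python
import json
import time
from pathlib import Path

CACHE_TTL_SECONDS = 60 * 60  # matches the documented 1-hour cache lifetime

def load_models(cache_path: Path, fetch_models):
    """Return the cached model list if fresh, otherwise re-fetch and cache it.

    fetch_models is a callable returning the model list, e.g. by querying
    the Together AI models endpoint.
    """
    if cache_path.exists():
        cached = json.loads(cache_path.read_text())
        if time.time() - cached["fetched_at"] < CACHE_TTL_SECONDS:
            return cached["models"]
    models = fetch_models()
    cache_path.write_text(
        json.dumps({"fetched_at": time.time(), "models": models})
    )
    return models
```

Under this model, `llm together refresh` corresponds to rewriting the cache file immediately so the next lookup sees fresh data.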

API Endpoint

This plugin uses the Together AI API endpoint:

https://api.together.xyz/v1/models

The models are cached locally in your LLM user directory to improve performance and reduce API calls.

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-togetherai
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest

License

Apache 2.0
