Add example scripts to show how to run the model #108

Merged: 5 commits, Aug 14, 2024
31 changes: 30 additions & 1 deletion models/llama3_1/README.md
@@ -34,7 +34,36 @@
Pre-requisites: Ensure you have `wget` installed. Then run the script: `./downlo

Remember that the links expire after 24 hours and a certain amount of downloads. You can always re-request a link if you start seeing errors such as `403: Forbidden`.

### Access to Hugging Face
## Running the models
> **Contributor (Author):** I added this blurb here to the README. cc @karpathy


You need to install the following dependencies (in addition to the `requirements.txt` in the root directory of this repository) to run the models:

```bash
pip install torch fairscale fire blobfile
```

After installing the dependencies, you can run the example scripts as follows:

```bash
#!/bin/bash

PYTHONPATH=$(git rev-parse --show-toplevel) torchrun scripts/example_chat_completion.py <CHECKPOINT_DIR> <TOKENIZER_PATH>
```
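As a side note, the `PYTHONPATH=$(git rev-parse --show-toplevel)` prefix works by putting the repository root on the interpreter's module search path, so the example scripts can import the repo's packages without an install step. A minimal sketch of the effect (the `/tmp/fake_repo` path is a hypothetical stand-in for the real repo root):

```shell
# Hypothetical stand-in for $(git rev-parse --show-toplevel); the real
# command resolves to the repository root.
export PYTHONPATH=/tmp/fake_repo

# Every PYTHONPATH entry is prepended to the interpreter's sys.path,
# which is what lets the example scripts import the repo's packages.
python3 -c "import sys; print('/tmp/fake_repo' in sys.path)"  # prints True
```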

The above command should be used with an Instruct (Chat) model. To run larger models with tensor parallelism, modify the command as follows:

```bash
#!/bin/bash

NGPUS=8
PYTHONPATH=$(git rev-parse --show-toplevel) torchrun \
--nproc_per_node=$NGPUS \
scripts/example_chat_completion.py <CHECKPOINT_DIR> <TOKENIZER_PATH> \
--model_parallel_size $NGPUS
```


## Access to Hugging Face

We also provide downloads on [Hugging Face](https://huggingface.co/meta-llama), in both transformers and native `llama3` formats. To download the weights from Hugging Face, please follow these steps:
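As an illustrative sketch only (the full steps follow in the README; the model id and directory names below are assumptions, not part of this repository), the native-format weights can be fetched with the `huggingface-cli` tool after authenticating with a token that has been granted access to the gated repo:

```shell
# Authenticate with a Hugging Face token approved for the gated repo.
huggingface-cli login

# Hypothetical model id; substitute the Llama variant you were approved for.
# --include limits the download to the native llama3-format weights.
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct \
  --include "original/*" \
  --local-dir Meta-Llama-3.1-8B-Instruct
```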

4 changes: 2 additions & 2 deletions models/llama3_1/api/datatypes.py
@@ -11,8 +11,8 @@
from pydantic import BaseModel, Field, validator

from typing_extensions import Annotated
-from llama_models.datatypes import *  # noqa
-from llama_models.schema_utils import json_schema_type
+from ...datatypes import *  # noqa
+from ...schema_utils import json_schema_type


@json_schema_type