
Fix eval recipe bug for group tasks #1642


Merged · 1 commit merged on Sep 21, 2024

Conversation

@SalmanMohammadi (Collaborator) commented on Sep 21, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

The eval recipe was erroring out when trying to read `OUTPUT_TYPE` for a group task (e.g. `mmlu`), where the task entry is a group of subtasks rather than a single task object.
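The failure mode can be sketched as follows. This is a hypothetical illustration, not the actual patch: in recent lm-eval-harness versions, the task dict can contain nested dicts for group tasks like `mmlu`, where a group key maps to a dict of subtasks instead of a task object exposing `OUTPUT_TYPE`. A guard that flattens groups before inspecting output types might look like:

```python
def flatten_tasks(task_dict):
    """Recursively expand group entries (nested dicts) into their leaf tasks.

    Hypothetical helper: leaf tasks are assumed to be objects with an
    OUTPUT_TYPE attribute, while group entries are plain dicts of subtasks.
    Reading OUTPUT_TYPE directly on a group entry would raise AttributeError.
    """
    leaves = {}
    for name, task in task_dict.items():
        if isinstance(task, dict):
            # Group task: recurse into its subtasks instead of touching it.
            leaves.update(flatten_tasks(task))
        else:
            # Leaf task: safe to inspect OUTPUT_TYPE on this object.
            leaves[name] = task
    return leaves
```

With the tasks flattened, the recipe can read `OUTPUT_TYPE` on every entry without special-casing groups.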

(tune) salman@combuter:~/torchtune$ tune run eleuther_eval --config target/eleuther_evaluation.yaml 
2024-09-21:14:35:52,624 INFO     [_logging.py:101] Running EleutherEvalRecipe with resolved config:

batch_size: 1
checkpointer:
  _component_: torchtune.training.FullModelHFCheckpointer
  checkpoint_dir: ./target/1b_normal
  checkpoint_files:
  - pytorch_model.bin
  model_type: LLAMA2
  output_dir: ./target/tmp
device: cuda
dtype: fp32
enable_kv_cache: true
limit: null
max_seq_length: 4096
model:
  _component_: torchtune.models.llama2.llama2
  embed_dim: 2048
  max_seq_len: 2048
  norm_eps: 1.0e-05
  num_heads: 32
  num_kv_heads: 4
  num_layers: 22
  vocab_size: 32000
quantizer: null
seed: 1234
tasks:
- mmlu
- truthfulqa_mc2
tokenizer:
  _component_: torchtune.models.llama2.llama2_tokenizer
  path: ./target/1b_normal/tokenizer.model

2024-09-21:14:35:53,393 DEBUG    [seed.py:60] Setting manual seed to local seed 1234. Local seed is seed + rank = 1234 + 0
2024-09-21:14:36:05,630 INFO     [eleuther_eval.py:226] Model is initialized with precision torch.float32.
2024-09-21:14:36:05,643 INFO     [eleuther_eval.py:200] Tokenizer is initialized from file.
2024-09-21:14:36:05,743 INFO     [huggingface.py:130] Using device 'cuda:0'
/home/salman/.pyenv/versions/3.11.9/envs/tune/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
2024-09-21:14:36:06,174 INFO     [huggingface.py:366] Model parallel was set to False, max memory was not set, and device map was set to {'': 'cuda:0'}
2024-09-21:14:36:08,549 INFO     [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
2024-09-21:14:37:35,565 INFO     [eleuther_eval.py:268] Running evaluation on ['mmlu', 'truthfulqa_mc2'] tasks.
...
[18:32<00:00, 55.77it/s]
2024-09-21:14:56:56,806 INFO     [eleuther_eval.py:275] Eval completed in 1251.08 seconds.
|                 Tasks                 |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|---------------------------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu                                   |      2|none  |      |acc   ||0.2535|±  |0.0037|
| - humanities                          |      2|none  |      |acc   ||0.2593|±  |0.0064|
|  - formal_logic                       |      1|none  |None  |acc   ||0.3175|±  |0.0416|
|  - high_school_european_history       |      1|none  |None  |acc   ||0.2667|±  |0.0345|
|  - high_school_us_history             |      1|none  |None  |acc   ||0.2549|±  |0.0306|
|  - high_school_world_history          |      1|none  |None  |acc   ||0.2700|±  |0.0289|
|  - international_law                  |      1|none  |None  |acc   ||0.3140|±  |0.0424|
|  - jurisprudence                      |      1|none  |None  |acc   ||0.2500|±  |0.0419|
|  - logical_fallacies                  |      1|none  |None  |acc   ||0.3067|±  |0.0362|
|  - moral_disputes                     |      1|none  |None  |acc   ||0.2601|±  |0.0236|
|  - moral_scenarios                    |      1|none  |None  |acc   ||0.2380|±  |0.0142|
|  - philosophy                         |      1|none  |None  |acc   ||0.2154|±  |0.0234|
|  - prehistory                         |      1|none  |None  |acc   ||0.2840|±  |0.0251|
|  - professional_law                   |      1|none  |None  |acc   ||0.2581|±  |0.0112|
|  - world_religions                    |      1|none  |None  |acc   ||0.2749|±  |0.0342|
| - other                               |      2|none  |      |acc   ||0.2472|±  |0.0077|
|  - business_ethics                    |      1|none  |None  |acc   ||0.2200|±  |0.0416|
|  - clinical_knowledge                 |      1|none  |None  |acc   ||0.2453|±  |0.0265|
|  - college_medicine                   |      1|none  |None  |acc   ||0.2486|±  |0.0330|
|  - global_facts                       |      1|none  |None  |acc   ||0.3500|±  |0.0479|
|  - human_aging                        |      1|none  |None  |acc   ||0.2108|±  |0.0274|
|  - management                         |      1|none  |None  |acc   ||0.3495|±  |0.0472|
|  - marketing                          |      1|none  |None  |acc   ||0.2650|±  |0.0289|
|  - medical_genetics                   |      1|none  |None  |acc   ||0.2400|±  |0.0429|
|  - miscellaneous                      |      1|none  |None  |acc   ||0.2759|±  |0.0160|
|  - nutrition                          |      1|none  |None  |acc   ||0.2255|±  |0.0239|
|  - professional_accounting            |      1|none  |None  |acc   ||0.2518|±  |0.0259|
|  - professional_medicine              |      1|none  |None  |acc   ||0.1618|±  |0.0224|
|  - virology                           |      1|none  |None  |acc   ||0.2048|±  |0.0314|
| - social sciences                     |      2|none  |      |acc   ||0.2444|±  |0.0077|
|  - econometrics                       |      1|none  |None  |acc   ||0.2368|±  |0.0400|
|  - high_school_geography              |      1|none  |None  |acc   ||0.2222|±  |0.0296|
|  - high_school_government_and_politics|      1|none  |None  |acc   ||0.2435|±  |0.0310|
|  - high_school_macroeconomics         |      1|none  |None  |acc   ||0.2077|±  |0.0206|
|  - high_school_microeconomics         |      1|none  |None  |acc   ||0.2269|±  |0.0272|
|  - high_school_psychology             |      1|none  |None  |acc   ||0.2459|±  |0.0185|
|  - human_sexuality                    |      1|none  |None  |acc   ||0.2366|±  |0.0373|
|  - professional_psychology            |      1|none  |None  |acc   ||0.3137|±  |0.0188|
|  - public_relations                   |      1|none  |None  |acc   ||0.1818|±  |0.0369|
|  - security_studies                   |      1|none  |None  |acc   ||0.2204|±  |0.0265|
|  - sociology                          |      1|none  |None  |acc   ||0.2338|±  |0.0299|
|  - us_foreign_policy                  |      1|none  |None  |acc   ||0.2100|±  |0.0409|
| - stem                                |      2|none  |      |acc   ||0.2598|±  |0.0078|
|  - abstract_algebra                   |      1|none  |None  |acc   ||0.1400|±  |0.0349|
|  - anatomy                            |      1|none  |None  |acc   ||0.3778|±  |0.0419|
|  - astronomy                          |      1|none  |None  |acc   ||0.2895|±  |0.0369|
|  - college_biology                    |      1|none  |None  |acc   ||0.2361|±  |0.0355|
|  - college_chemistry                  |      1|none  |None  |acc   ||0.2200|±  |0.0416|
|  - college_computer_science           |      1|none  |None  |acc   ||0.2600|±  |0.0441|
|  - college_mathematics                |      1|none  |None  |acc   ||0.2000|±  |0.0402|
|  - college_physics                    |      1|none  |None  |acc   ||0.2549|±  |0.0434|
|  - computer_security                  |      1|none  |None  |acc   ||0.3100|±  |0.0465|
|  - conceptual_physics                 |      1|none  |None  |acc   ||0.2638|±  |0.0288|
|  - electrical_engineering             |      1|none  |None  |acc   ||0.2897|±  |0.0378|
|  - elementary_mathematics             |      1|none  |None  |acc   ||0.2698|±  |0.0229|
|  - high_school_biology                |      1|none  |None  |acc   ||0.2290|±  |0.0239|
|  - high_school_chemistry              |      1|none  |None  |acc   ||0.2512|±  |0.0305|
|  - high_school_computer_science       |      1|none  |None  |acc   ||0.3200|±  |0.0469|
|  - high_school_mathematics            |      1|none  |None  |acc   ||0.2519|±  |0.0265|
|  - high_school_physics                |      1|none  |None  |acc   ||0.2980|±  |0.0373|
|  - high_school_statistics             |      1|none  |None  |acc   ||0.2593|±  |0.0299|
|  - machine_learning                   |      1|none  |None  |acc   ||0.1964|±  |0.0377|
|truthfulqa_mc2                         |      2|none  |0     |acc   ||0.3509|±  |0.0137|

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings


pytorch-bot bot commented Sep 21, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1642

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ab7cd8d with merge base 9a863c8:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Sep 21, 2024
@SalmanMohammadi SalmanMohammadi merged commit 8e5750e into pytorch:main Sep 21, 2024
17 checks passed
@SalmanMohammadi SalmanMohammadi deleted the fix_eval_group_task branch September 27, 2024 13:16