[Draft] Token-weighted datasets: Control up/down-sampling of multiple datasets #2794
base: main
Conversation
- Add weight and weight_strategy fields to all dataset schemas
- Implement token-based dataset merging with validation
- Support upsample and downsample strategies
- Validate weights are 0.0-1.0 and sum to 1.0
- Maintain backward compatibility when no weights specified
- Add comprehensive tests for token weighting functionality

Co-Authored-By: Casper Hansen <[email protected]>
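For illustration, a minimal sketch of what a weighted dataset config could look like with these fields. The `weight`/`weight_strategy` keys and the upsample/downsample values come from this PR; the dataset paths, types, and the `DictDefault` import path are assumptions.

```python
# Hypothetical config sketch; only `weight` and `weight_strategy`
# are the fields introduced by this PR, the rest follows the
# existing dataset schema as I understand it.
from axolotl.utils.dict import DictDefault

cfg = DictDefault(
    {
        "datasets": [
            # Should end up as ~70% of the merged tokens.
            {"path": "org/code_sft", "type": "alpaca", "weight": 0.7, "weight_strategy": "upsample"},
            # Should end up as ~30% of the merged tokens.
            {"path": "org/chat_sft", "type": "alpaca", "weight": 0.3, "weight_strategy": "downsample"},
        ],
    }
)
```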
- Apply multi-line dictionary formatting for long DictDefault calls
- Add trailing commas to list items
- Remove trailing whitespace
- Wrap long pytest.raises calls properly
- Follow project's code style guidelines to fix pre-commit failures

Co-Authored-By: Casper Hansen <[email protected]>

- Add missing trailing commas to DictDefault list items
- Fix blank line spacing between test sections
- Wrap long pytest.raises calls properly
- Address all remaining pre-commit formatting violations

Co-Authored-By: Casper Hansen <[email protected]>

- Remove extra blank lines in shared.py
- Combine multi-line ValueError message
- Put function parameters on single line
- Reformat imports in test file to multi-line format
- Reformat Dataset.from_list calls to multi-line format
- Simplify DictDefault calls to single line format

Co-Authored-By: Casper Hansen <[email protected]>
Remove unnecessary parentheses around comparison in _validate_weights function to resolve pre-commit pylint error C0325

Co-Authored-By: Casper Hansen <[email protected]>
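For reference, a rough reconstruction of what `_validate_weights` presumably checks, based on the rules listed in the first commit (each weight in 0.0-1.0, all weights summing to 1.0). The signature, tolerance, and error messages are assumptions, not the PR's actual code.

```python
import math

def _validate_weights(weights: list[float]) -> None:
    """Hypothetical sketch: reject out-of-range or non-normalized weights."""
    for w in weights:
        if w < 0.0 or w > 1.0:
            raise ValueError(f"dataset weight {w} must be between 0.0 and 1.0")
    # Use a tolerance so float rounding (e.g. 0.7 + 0.3) still passes.
    if not math.isclose(sum(weights), 1.0, abs_tol=1e-6):
        raise ValueError(f"dataset weights must sum to 1.0, got {sum(weights)}")
```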
- Log original token counts for each dataset
- Show before/after token counts with weight and strategy
- Display final merged dataset statistics
- Include total token count change ratio

Co-Authored-By: Casper Hansen <[email protected]>

- Show token counts for each weighted part before merging
- Display total tokens across all weighted parts
- Helps users understand the exact breakdown of weighted datasets

Co-Authored-By: Casper Hansen <[email protected]>
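A hedged sketch of the per-part breakdown logging these two commits describe; the function name and log format here are illustrative, not the PR's actual code.

```python
import logging

LOG = logging.getLogger(__name__)

def log_token_breakdown(parts: dict[str, int]) -> None:
    """Log each weighted part's token count and the overall total before merging."""
    total = sum(parts.values())
    for name, tok_cnt in parts.items():
        LOG.info("dataset %s: %d tokens (%.1f%% of total)", name, tok_cnt, 100 * tok_cnt / total)
    LOG.info("total tokens across all weighted parts: %d", total)
```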
…ting

Implement token-weighting parameter for datasets
- Change target_tok calculation from tok_cnt * weight to weight * total_original_tokens
- Weights now correctly represent relative proportions of final merged dataset
- Fix upsampling strategy to handle both increase and decrease scenarios
- Resolves issue where weights 0.7/0.3 were incorrectly decreasing token counts

Co-Authored-By: Casper Hansen <[email protected]>
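A small worked example of the fix; the names `target_tok`, `tok_cnt`, and `total_original_tokens` follow the commit message, everything else is illustrative.

```python
# Two datasets with 400k and 600k tokens, weighted 0.7 / 0.3.
tok_cnts = {"code": 400_000, "chat": 600_000}
weights = {"code": 0.7, "chat": 0.3}
total_original_tokens = sum(tok_cnts.values())  # 1_000_000

for name, tok_cnt in tok_cnts.items():
    # Old (buggy): target_tok = tok_cnt * weight, which shrinks
    # every dataset regardless of its weight (280k / 180k here).
    # New: each weight is a proportion of the merged total.
    target_tok = weights[name] * total_original_tokens
    # The effective operation follows from the comparison, not the
    # configured strategy, which is what the next commit logs.
    op = "upsampling" if target_tok > tok_cnt else "downsampling"
    print(f"{name}: {tok_cnt:,} -> {target_tok:,.0f} tokens ({op})")
```

With the old formula both datasets would shrink; with the new one, the 0.7-weight dataset grows to 700k tokens while the 0.3-weight dataset shrinks to 300k, matching the intended proportions.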
…stead of configured strategy

- Replace misleading 'strategy=upsample' with effective operation
- Now correctly shows 'downsampling' when dataset tokens are reduced
- Addresses user feedback about contradictory logging

Co-Authored-By: Casper Hansen <[email protected]>
- Shows requested vs achieved weights for each dataset
- Provides transparency into token weighting results
- Helps users verify weighting was applied correctly

Co-Authored-By: Casper Hansen <[email protected]>
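A sketch of how requested vs achieved weights might be compared after merging; the function name and output format are assumptions.

```python
def report_weight_summary(requested: dict[str, float], merged_tok_cnts: dict[str, int]) -> None:
    """Compare configured weights against each dataset's actual token share after merging."""
    total = sum(merged_tok_cnts.values())
    for name, tok_cnt in merged_tok_cnts.items():
        achieved = tok_cnt / total
        print(f"{name}: requested={requested[name]:.2f} achieved={achieved:.2f}")
```

The achieved weight can drift slightly from the requested one because up/down-sampling happens at example granularity, not token granularity, which is presumably why this summary is worth logging.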
Description
Preprocessing is often a hurdle that requires a lot of care, especially when it happens before axolotl touches the datasets. Weighting datasets is a feature I consider essential for fine-tuning a single model on multiple tasks: to do that well, you want control over the token distribution across datasets, which is what this PR adds.
This implementation is co-authored by me and Devin. I made the initial prototype of the weighting and asked Devin to do the integration work and testing.
Closes #1508
How has this been tested?
A good number of tests have been implemented to verify this implementation, but further manual testing is required to ensure the behavior is correct and as expected, especially around when to upsample and when to downsample. One assumption I made during the implementation is that the weights should sum to 1, as it's otherwise hard to control exactly what happens.
Outputs from one of the tests:
My Nits