Closed
Following how we do it in the LoRA recipes, we should add the ability to use an LR scheduler to the full finetune recipes.
Here's everything that would need to be changed:
- Add the appropriate functions to recipes/full_finetune_distributed.py (follow recipes/lora_finetune_distributed.py as an example)
- Add the appropriate functions to recipes/full_finetune_single_device.py (follow recipes/lora_finetune_single_device.py as an example)
- Do an example run with Llama3.1 8B and log to Weights & Biases. Post a screenshot in your PR confirming this works
I don't think we need to update all our configs yet; we just need to make sure the functionality exists.
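To illustrate the shape of the change, here is a minimal sketch of the pattern the LoRA recipes use: build a scheduler right after the optimizer, then step it once per optimizer step in the training loop. The `setup_lr_scheduler` helper and the linear warmup/decay schedule below are assumptions for illustration, not torchtune's actual API — the real recipe would instantiate the scheduler from the config.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR


def setup_lr_scheduler(optimizer, num_warmup_steps, num_training_steps):
    """Linear warmup then linear decay; a stand-in for whatever
    scheduler the recipe config would instantiate."""

    def lr_lambda(step):
        if step < num_warmup_steps:
            return step / max(1, num_warmup_steps)
        return max(
            0.0,
            (num_training_steps - step)
            / max(1, num_training_steps - num_warmup_steps),
        )

    return LambdaLR(optimizer, lr_lambda)


# Toy model standing in for the full finetune setup.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = setup_lr_scheduler(optimizer, num_warmup_steps=10, num_training_steps=100)

lrs = []
for _ in range(100):
    optimizer.zero_grad()
    loss = model(torch.randn(2, 4)).sum()
    loss.backward()
    optimizer.step()
    scheduler.step()  # one scheduler step per optimizer step
    lrs.append(optimizer.param_groups[0]["lr"])
```

The key design point mirrored from the LoRA recipes is that the scheduler is stepped per optimizer step (not per epoch), so the warmup and decay track the configured total number of training steps.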