Add ABBA: Highly Expressive Hadamard Product Adaptation for LLMs as a Fine-Tuning Method #2785

Open
@RaghavSinghal10

Description

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Ideas in Discussions and didn't find any similar feature requests.
  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

We just released ABBA, a new architecture for Parameter-Efficient Fine-Tuning (PEFT) that significantly outperforms LoRA and its major variants (e.g., HiRA, DoRA, LoRA-Pro) under the same parameter budget.

Unlike LoRA, which adds a low-rank delta to frozen weights, ABBA models the update as the Hadamard product of two independently learned low-rank matrices. Because the Hadamard product of two rank-r matrices can have rank as high as r², the update is substantially more expressive than a single low-rank factor while remaining parameter-efficient.
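
For concreteness, here is a minimal PyTorch sketch of what an ABBA-style layer could look like. It materializes ΔW = scale · (B1 A1) ⊙ (B2 A2) naively for clarity; the ranks, initialization, and scaling below are illustrative assumptions rather than the paper's exact choices, and the repository linked above should be treated as the reference implementation (which also computes this more efficiently than a naive materialization).

```python
# Illustrative ABBA-style layer. Assumptions: the rank split, init scales,
# and scaling factor are placeholders, not the paper's exact settings.
import torch
import torch.nn as nn


class ABBALinear(nn.Module):
    """ABBA-style adapter around a frozen nn.Linear (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r1: int = 8, r2: int = 8, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen, as in LoRA
        out_f, in_f = base.weight.shape
        # Two independently learned low-rank pairs: (B1, A1) and (B2, A2).
        self.A1 = nn.Parameter(torch.randn(r1, in_f) * 0.02)
        self.B1 = nn.Parameter(torch.zeros(out_f, r1))  # zero init => delta starts at 0
        self.A2 = nn.Parameter(torch.randn(r2, in_f) * 0.02)
        self.B2 = nn.Parameter(torch.randn(out_f, r2) * 0.02)
        self.scale = scale

    def delta_w(self) -> torch.Tensor:
        # ABBA update: Hadamard (element-wise) product of two low-rank
        # matrices, materialized naively here. Its rank can reach r1 * r2.
        return self.scale * (self.B1 @ self.A1) * (self.B2 @ self.A2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.delta_w().T
```

Since B1 starts at zero, the wrapped layer is initially identical to the frozen base layer, and only the four small factors receive gradients.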

ABBA consistently beats state-of-the-art LoRA variants on commonsense and arithmetic reasoning across four open-source LLMs (Mistral-7B, Gemma-2 9B, LLaMA-3.2 1B/3B), and in some cases it even outperforms full fine-tuning.

Paper: https://arxiv.org/abs/2505.14238
Code: https://github.com/CERT-Lab/abba

Would love to get this integrated into Axolotl. Happy to help with this as well!

✔️ Solution

We would love to add ABBA as a fine-tuning method in Axolotl; a rough module-level sketch of what the integration could look like follows.
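
On the integration side, a first cut could mirror how LoRA adapters are applied: walk the model and wrap the targeted projection layers. The helper below is a hypothetical sketch; the function name and target-module defaults are ours (the module names follow LLaMA-style attention layers), it reuses the illustrative `ABBALinear` class from the snippet above, and the real integration would presumably hook into Axolotl's existing adapter configuration instead.

```python
# Hypothetical wiring helper, not Axolotl's actual API. Assumes the
# ABBALinear class from the previous sketch is in scope.
import torch.nn as nn


def apply_abba(model: nn.Module, targets=("q_proj", "v_proj"),
               r1: int = 8, r2: int = 8) -> nn.Module:
    """Wrap matching nn.Linear children with ABBALinear."""
    # Snapshot modules first so we don't mutate the tree while iterating.
    for parent in list(model.modules()):
        for child_name, child in list(parent.named_children()):
            if isinstance(child, nn.Linear) and child_name in targets:
                setattr(parent, child_name, ABBALinear(child, r1=r1, r2=r2))
    return model
```

After wrapping, the only parameters with `requires_grad=True` are the low-rank factors, so the usual trainable-parameter filtering in the optimizer setup would apply unchanged.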

❓ Alternatives

No response

📝 Additional Context

No response

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

Metadata

Assignees: no one assigned
Labels: enhancement (new feature or request)
