
How do we finetune the model with new data? #466

Closed

@ekolawole

Description

Can we have a finetune.cpp or finetune.exe file to incorporate new data into the model? The use case would be to design an AI model that can do more than just general chat: it could become very knowledgeable in the specific topics it is finetuned on. Also, after creating finetune.exe, please ensure no GPU is required for the entire process, because that is what makes this repo awesome in the first place.

Activity

@Green-Sky (Collaborator) commented on Mar 24, 2023

Sounds cool. But this is not on the short-term Roadmap.

@ekolawole (Author) commented on Mar 24, 2023

The goal of these integrations is to enable academia to adapt to the new era of AI and to simplify the intricacies involved. Users should be able to finetune their models to suit their data needs. I was running the 30B model this morning and the AI does not have important data about LangChain and other recent use cases from 2021 until now; I believe the data used to build the models is old. My team is looking for a no-GPU deployment like this one that can also support finetuning. What can be done to move this request ahead on the roadmap?

@rupakhetibinit commented on Mar 24, 2023

What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible on CPU alone, because you'd be training for literal months instead of days; you need a GPU to actually finetune this. This repository is only for inference/running the model.

@PriNova commented on Mar 24, 2023

> What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible on CPU alone, because you'd be training for literal months instead of days; you need a GPU to actually finetune this. This repository is only for inference/running the model.

I think it depends on the approach to fine-tuning.
If the LoRA approach is used (only the k, q, v attention projections, as far as I understand it), then it could be done on CPU, and we could transfer and share the LoRA adapters.
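
To make that concrete, here is a minimal PyTorch sketch of the LoRA idea (purely illustrative, not llama.cpp code; the class and parameter names are made up for this example): the pretrained projection weight stays frozen, and only a small low-rank update is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update: W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weight is frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: wrap one of the q/k/v projections of an attention block;
# everything else in the model stays frozen.
q_proj = LoRALinear(nn.Linear(4096, 4096))
```

Only `A` and `B` need gradients, so the optimizer state and the shareable adapter file are a tiny fraction of the full model, which is what makes CPU-only training at least conceivable.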

@Green-Sky (Collaborator) commented on Mar 24, 2023

Loading only the LoRA part IS on the short-term roadmap: #457

@leszekhanusz commented on Mar 27, 2023

There is the lxe/simple-llama-finetuner repo available for finetuning, but you need a GPU with at least 16 GB of VRAM to finetune the 7B model.

@Free-Radical commented on Apr 17, 2023

Is there a way to fine-tune these models for reading my documents, etc., utilizing cloud hardware but with no OpenAI, Pinecone, or other non-free third-party dependencies? Code examples would be awesome (I've seen LangChain's docs, but they are not detailed enough, at least not for me). @leszekhanusz @Green-Sky @PriNova @rupakhetibinit @ekolawole

@ch3rn0v commented on Apr 17, 2023

@Free-Radical, try a vector store such as Weaviate. Your query string can contain natural-language text, and the response is based on vector similarity between that string and the documents in the store. I also tried Vespa, but it didn't work at all; the reason is a design choice that I find questionable, see vespa-engine/pyvespa#499 for details. There are other open-source vector storage solutions too.
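
For anyone new to the approach, here is a tiny self-contained sketch of the retrieval idea behind stores like Weaviate; the `embed()` below is a toy bag-of-words stand-in for a real sentence-embedding model, and a real deployment would let the vector store handle storage and indexing:

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash tokens into a unit-norm bag-of-words vector."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

docs = [
    "LangChain chains LLM calls together with tools and memory",
    "LoRA trains small low-rank adapters on top of frozen weights",
    "llama.cpp runs LLaMA inference on CPU with quantized weights",
]
doc_vecs = np.stack([embed(d) for d in docs])

def search(query: str, k: int = 2):
    # Unit-norm vectors make the dot product equal to cosine similarity.
    scores = doc_vecs @ embed(query)
    return [(docs[i], float(scores[i])) for i in np.argsort(-scores)[:k]]

print(search("run LLaMA on CPU"))
```

Typically the top-scoring documents are then included in the model's prompt, which sidesteps fine-tuning entirely.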

@Free-Radical commented on Apr 17, 2023

@ch3rn0v Thanks man, Weaviate looks good, better than going "raw" with FAISS. Will check out Vespa too.

@Green-Sky (Collaborator) commented on Apr 18, 2023

@Free-Radical you can look at https://github.com/tloen/alpaca-lora
Loading LoRA adapter support was also merged today (#820), so I suggest you stay on the LoRA side (lower quality, but way, way faster to train).
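
For reference, the adapter loading from #820 is a load-time flag on the main example, something like `./main -m models/7B/ggml-model-q4_0.bin --lora lora-adapter.bin` (file names here are illustrative); there is also a `--lora-base` option for pointing at a higher-precision base model for the layers the adapter modifies.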

@Green-Sky (Collaborator) commented on May 15, 2023

Since @xaedes contributed the backward versions of the necessary tensor operations, this could now be within reach: #1360

(afaik this is the tracking issue for finetuning)

@Sovenok-Hacker commented on May 20, 2023

> The goal of these integrations is to enable academia to adapt to the new era of AI and to simplify the intricacies involved. Users should be able to finetune their models to suit their data needs. I was running the 30B model this morning and the AI does not have important data about LangChain and other recent use cases from 2021 until now; I believe the data used to build the models is old. My team is looking for a no-GPU deployment like this one that can also support finetuning. What can be done to move this request ahead on the roadmap?

I agree. It would be helpful to be able to fine-tune LLaMA models on CPU using only llama.cpp.

@Sovenok-Hacker commented on May 20, 2023

> What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible on CPU alone, because you'd be training for literal months instead of days; you need a GPU to actually finetune this. This repository is only for inference/running the model.

I disagree. What if we only need to add a little data? That could be done in hours; why not add a small fine-tuning utility?

[8 remaining items not shown]
