Closed
Description
Can we have a finetune.cpp or finetune.exe to incorporate new data into a model? The use case is designing an AI model that can do more than general chat: it can become very knowledgeable in the specific topics it is fine-tuned on. Also, after creating finetune.exe, please ensure no GPU is required for the entire process, because that is what makes this repo awesome in the first place.
Green-Sky commented on Mar 24, 2023
Sounds cool. But this is not on the short-term Roadmap.
ekolawole commented on Mar 24, 2023
The goal of these integrations is to enable academia to adapt to the new era of AI and to simplify the intricacies involved, so users should be able to fine-tune their models to suit their data needs. I was running the 30B this morning and the AI does not have important data about LangChain and other recent use cases from 2021 until now; I believe the data used to build the models is old. My team is looking for a no-GPU deployment like this one that can also support fine-tuning. What can be done to move this request ahead on the roadmap?
rupakhetibinit commented on Mar 24, 2023
What you're talking about is training/fine-tuning, which is theoretically possible on CPU but practically infeasible on CPU alone, because you'd be training for literal months instead of days. You need a GPU to actually fine-tune this. This repository is only for inference/running the model.
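A rough back-of-envelope estimate makes the gap concrete. Every constant below is an illustrative assumption, not a measurement (and it assumes full fine-tuning, not LoRA):

```python
# Back-of-envelope estimate of full fine-tuning time, CPU vs. GPU.
# All constants are rough assumptions for illustration only.

params = 7e9                  # LLaMA 7B parameters
tokens = 1e7                  # a small fine-tuning corpus (~10M tokens)
flops = 6 * params * tokens   # common ~6*N*D rule of thumb for training FLOPs

cpu_flops_per_s = 2e11        # ~200 GFLOP/s sustained on a desktop CPU (optimistic)
gpu_flops_per_s = 1e14        # ~100 TFLOP/s sustained on a modern datacenter GPU

print(f"CPU: {flops / cpu_flops_per_s / 86400:.0f} days")  # ~24 days
print(f"GPU: {flops / gpu_flops_per_s / 3600:.1f} hours")  # ~1.2 hours
```

Even with generous CPU numbers, full fine-tuning of a 7B model lands in weeks-to-months territory, which is the point above.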
PriNova commented on Mar 24, 2023
I think it depends on the approach to fine-tuning.
If the LoRA approach is used (only the k, q, v attention projections, as far as I understand it), then it could be done on CPU, and we could transfer and share the LoRA adapters.
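For context, the core of LoRA is just a low-rank update to a frozen weight matrix. A minimal sketch (the dimensions, rank, and the `(alpha / r) * B @ A` scaling follow the LoRA paper; everything else is illustrative):

```python
import numpy as np

# LoRA: instead of updating a full d_out x d_in weight W, train two small
# matrices B (d_out x r) and A (r x d_in) with r << d; the effective weight
# is W + (alpha / r) * B @ A. Only A and B need gradients, which is why
# adapters are small enough to train and share cheaply.

d_in, d_out, r, alpha = 4096, 4096, 8, 16  # 4096 = LLaMA 7B hidden size
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01      # trainable, small random init
B = np.zeros((d_out, r))                       # trainable, zero-init so the
                                               # update starts as a no-op

def lora_forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
print(lora_forward(x).shape)  # (4096,)
```

Per 4096x4096 projection this trains ~65K parameters instead of ~16.8M, which is why adapters are tiny files, and why CPU-only training of just the adapters is far less far-fetched than full fine-tuning.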
Green-Sky commented on Mar 24, 2023
Loading only the LoRA part IS on the short-term roadmap: #457
leszekhanusz commented on Mar 27, 2023
There is the lxe/simple-llama-finetuner repo available for fine-tuning, but you need a GPU with at least 16 GB of VRAM to fine-tune the 7B model.
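For anyone curious what such a fine-tuner boils down to: repos like that are, to my understanding, essentially thin wrappers around Hugging Face PEFT. A minimal sketch, assuming the usual transformers + peft stack (the checkpoint name, hyperparameters, and toy dataset are all placeholders):

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# Checkpoint, target modules, and hyperparameters are illustrative assumptions.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"  # example LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the total

# Toy dataset: in practice you'd tokenize your own instruction/response pairs.
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without one
enc = tokenizer(["Q: What is LoRA?\nA: A low-rank adapter method.",
                 "Q: Does it need a GPU?\nA: Practically, yes."],
                padding=True, return_tensors="pt")
train_dataset = [{"input_ids": ids, "attention_mask": mask, "labels": ids}
                 for ids, mask in zip(enc["input_ids"], enc["attention_mask"])]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=4,
                           num_train_epochs=3, learning_rate=3e-4),
    train_dataset=train_dataset,
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

The ~16 GB VRAM requirement comes from holding the frozen 7B base model in memory during training, not from the adapter itself.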
Free-Radical commented on Apr 17, 2023
Is there a way to fine-tune these models for reading my documents etc., using cloud hardware but with no OpenAI, Pinecone, or other non-free third-party dependencies? Code examples would be awesome (I've seen LangChain's docs, but they are not detailed enough, at least not for me). @leszekhanusz @Green-Sky @PriNova @rupakhetibinit @ekolawole
ch3rn0v commented on Apr 17, 2023
@Free-Radical, try a vector store, such as Weaviate. Your query string can contain natural-language text, and the response is based on vector similarity between that string and the documents in the store. I also tried Vespa, but it didn't work at all, due to a design choice that I find questionable; see vespa-engine/pyvespa#499 for details. There are other open-source vector storage solutions too.
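Weaviate handles the similarity search server-side, but the underlying principle fits in a few lines. A sketch assuming sentence-transformers for embeddings (the model name is just a common choice, not a requirement):

```python
# Minimal sketch of embedding-based retrieval, the principle behind vector
# stores like Weaviate: embed documents once, embed the query, rank by
# cosine similarity. A real store adds indexing (e.g. HNSW), filters, scale.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

docs = [
    "LangChain chains LLM calls together with tools and memory.",
    "LoRA fine-tunes a model by training low-rank adapter matrices.",
    "Weaviate is an open-source vector database.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["How do I adapt a model to new data?"],
                         normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec        # cosine similarity (vectors normalized)
print(docs[int(np.argmax(scores))])  # best-matching document
```

The retrieved documents are then pasted into the LLM's prompt, which is how you get recent or private knowledge into the model without any fine-tuning at all.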
Free-Radical commented on Apr 17, 2023
@ch3rn0v Thanks man, Weaviate looks good, better than going "raw" with FAISS. Will check out Vespa too.
Green-Sky commented on Apr 18, 2023
@Free-Radical you can look at https://github.com/tloen/alpaca-lora
LoRA adapter loading support was also merged today (#820), so I suggest you stay on the LoRA side (lower quality, but way, way faster to train).
Green-Sky commented on May 15, 2023
Since @xaedes contributed the backward versions of the necessary tensor operations, this could now be within reach. #1360
(AFAIK that is the tracking issue for fine-tuning.)
Sovenok-Hacker commented on May 20, 2023
I agree. It would be helpful to be able to fine-tune LLaMA models on CPU using only llama.cpp.
Sovenok-Hacker commented on May 20, 2023
I disagree. What if we only need to add a little data? That could be done in hours, so why not add a small fine-tuning utility?