After finishing ch07 about fine-tuning, I wanted to be able to ask questions about a curated set of data.
For example, I have a compilation of data in PDF format that I have gathered over years of work, about 700 files, but it is a pain to find answers by hand. How can I create a cache over these files so that I can ask the GPT-2 LLM that we just built in ch06/ch07 to find answers in them? I want the answers to come only from this specific set of files.

Replies: 1 comment

I think in this case you are probably interested in using a RAG (retrieval-augmented generation) setup. I have a quick example here: https://github.com/rasbt/RAGs/blob/main/notebooks/mistral-llamaindex-hf-offline.ipynb
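At a high level, the idea is: extract text from the PDFs, split it into chunks, embed the chunks once (this is the "cache" the question asks about), then at question time retrieve the chunks most similar to the question and pass them to the LLM as context. Below is a minimal sketch of that idea, not the linked notebook's exact code; it assumes `pypdf`, `sentence-transformers`, and `transformers` are installed, and the folder path, chunk size, and model names are placeholder choices.

```python
# Minimal RAG sketch: (1) extract text from PDFs, (2) chunk it,
# (3) embed the chunks once and save them (the "cache"),
# (4) retrieve the chunks most similar to a question,
# (5) pass them as context to a local LLM.
from pathlib import Path

import numpy as np
from pypdf import PdfReader                            # pip install pypdf
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from transformers import pipeline                      # pip install transformers

PDF_DIR = Path("my_pdfs")   # placeholder: your folder of ~700 PDFs
CHUNK_SIZE = 500            # characters per chunk; tune as needed


def load_chunks(pdf_dir: Path, chunk_size: int) -> list[str]:
    """Extract the text of every PDF and split it into fixed-size chunks."""
    chunks = []
    for pdf_path in pdf_dir.glob("*.pdf"):
        text = "".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
        chunks += [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]
    return chunks


chunks = load_chunks(PDF_DIR, CHUNK_SIZE)

# Embed all chunks once and save the result so it can be reused across sessions.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_embeddings = embedder.encode(chunks, normalize_embeddings=True)
np.save("chunk_embeddings.npy", chunk_embeddings)

# gpt2 is used here only because the question mentions it; see the caveat below.
generator = pipeline("text-generation", model="gpt2")


def answer(question: str, top_k: int = 3) -> str:
    """Retrieve the top_k most similar chunks and ask the LLM about them."""
    q_emb = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_embeddings @ q_emb  # cosine similarity (vectors are normalized)
    context = "\n\n".join(chunks[i] for i in np.argsort(scores)[-top_k:])
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generator(prompt, max_new_tokens=100)[0]["generated_text"]


print(answer("What does my data say about topic X?"))
```

One caveat: GPT-2's 1,024-token context window leaves little room for retrieved chunks, which is one reason the linked notebook uses a larger model (Mistral) instead. The retrieval step guarantees the context comes only from your files, though a small base model can still paraphrase loosely or stray from it; instruction-tuned models tend to stick to the provided context more reliably.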