A curated list of retrieval-augmented generation (RAG) in large language models
Updated Feb 14, 2025
Explore Redis capabilities for vector similarity search, hybrid search (vector similarity combined with metadata filtering), semantic caching, and a RAG pipeline integrated with an LLM chatbot, showcasing Redis as a vector database.
Learn how to build agents that can reason over their own documents
Learn Retrieval-Augmented Generation (RAG) from scratch using LLMs from Hugging Face, with LangChain or plain Python.
MariHacks winning project. Linky uses RAG and a vector database to tokenize and index the URLs you provide, letting you interact with Linky as a living version of your URL's content.
End-to-end solution that draws on your documents to provide insightful answers and valuable knowledge for users' queries.
Retrieval-augmented generation (RAG) is an innovative approach in natural language processing (NLP) that combines the strengths of retrieval-based and generation-based models to enhance the quality of generated text.
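The retrieve-then-generate flow described above can be sketched in a few lines of pure Python. This is a toy illustration, not any of the listed projects: the bag-of-words "embedding" and template "generator" are stand-ins for a neural encoder and an LLM.

```python
# Minimal RAG sketch: retrieval picks the most relevant document,
# then a stand-in "generator" composes an answer from it.
# All names here are illustrative; a real system would use a vector
# database for retrieval and an LLM for generation.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (a real system uses a neural encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Retrieval step: return the document most similar to the query."""
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

def generate(query: str, context: str) -> str:
    """Generation stand-in: an LLM would condition on the retrieved context."""
    return f"Q: {query}\nContext: {context}"

docs = [
    "Redis supports vector similarity search over embeddings.",
    "Hydrogen storage research benefits from language models.",
]
query = "How does Redis search vectors?"
print(generate(query, retrieve(query, docs)))
```

Swapping `embed` for a sentence-embedding model and `generate` for an LLM call turns this skeleton into the architecture the projects above implement.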
The LARGE LANGUAGE MODEL FOR HYDROGEN STORAGE project uses advanced natural language processing to improve research efficiency.