This project presents a medical question–answering language model built by fine-tuning Google Gemma-2-2B-IT using LoRA (Low-Rank Adaptation) 🧠⚕️. The primary objective is to adapt a general-purpose large language model to the healthcare domain in a parameter-efficient, reproducible, and resource-aware manner.
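LoRA makes this parameter-efficient by freezing the pretrained weights and learning only a low-rank update. A minimal NumPy sketch of the idea (illustrative only, not the project's training code; all names here are made up for the example):

```python
import numpy as np

# LoRA sketch: instead of updating a full weight matrix W (d_out x d_in),
# learn two small matrices B (d_out x r) and A (r x d_in) with rank r << d.
# The effective weight becomes W + (alpha / r) * (B @ A).

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init -> no change at start

def lora_forward(x, W, A, B, alpha, r):
    """Base output x @ W.T plus the scaled low-rank delta."""
    return x @ W.T + (alpha / r) * (x @ (B @ A).T)

x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B, alpha, r)

# With B initialized to zero, the adapter contributes nothing yet,
# so the fine-tuned model starts out identical to the base model:
assert np.allclose(y, x @ W.T)

# Parameter savings: only r*(d_in + d_out) values are trained
# instead of the full d_out*d_in matrix.
full, lora = d_out * d_in, r * (d_in + d_out)
print(f"full: {full} params, LoRA: {lora} params ({lora / full:.1%} of full)")
```

This is why LoRA is resource-aware: only the small `A` and `B` matrices receive gradients, while `W` stays frozen.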
This is a sample notebook for exploring LLM fine-tuning with the Unsloth library. The LLM used for experimentation is Google Gemma.
The Gemma-2b-it LLM has been fine-tuned on a dataset of Python code, enabling it to learn Python syntax and assist with debugging tasks, offering practical guidance to programmers.
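Gemma's instruction-tuned checkpoints expect each turn wrapped in `<start_of_turn>`/`<end_of_turn>` markers, with generation continuing after a final `<start_of_turn>model` header. A minimal sketch of formatting a debugging query for the fine-tuned model (the helper name is illustrative, not from this repo; in practice `tokenizer.apply_chat_template` handles this):

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt using Gemma's chat markers.

    Follows the gemma-2b-it chat template for one user turn; the
    model's reply is generated after '<start_of_turn>model'.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

buggy_code = "for i in range(10) print(i)"  # missing colon after range(10)
prompt = build_gemma_prompt(
    f"Why does this Python snippet raise a SyntaxError?\n\n{buggy_code}"
)
print(prompt)
```

Passing a prompt built this way to the fine-tuned checkpoint lets it respond in the role it was trained for: explaining the syntax error and suggesting the fix.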
🧪 70-test, 6-model AI benchmark: Gemma 4 vs Gemini Pro vs Flash vs Qwen. 420 verified runs across 13 categories. All prompts, rubrics, runner code & raw results included. Code executed, constraints verified, prompt injection confirmed on Vertex AI Studio.