Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
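Several projects on this page follow this PEFT/LoRA pattern. As a rough, project-agnostic sketch (the checkpoint and hyperparameters below are placeholder assumptions, not any listed repository's actual configuration):

```python
# Minimal LoRA fine-tuning setup with Hugging Face peft.
# Model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Small stand-in checkpoint so the sketch runs on modest hardware.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank update matrices
    lora_alpha=32,    # scaling applied to the LoRA update
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights require grad
```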
Firefly (流萤): a Chinese conversational large language model (full-parameter fine-tuning + QLoRA), supporting fine-tuning of Mixtral-8x7B, Zephyr, Mistral, Aquila2, Baichuan2, CodeLlama, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya, Bloom, and other large models
33B Chinese LLM; trained with DPO and QLoRA; 100K context length; AirLLM 70B inference on a single 4 GB GPU
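For orientation, DPO optimizes a preference loss over chosen/rejected completion pairs against a frozen reference model. A bare-bones PyTorch rendering of that loss (tensor names and the beta value are illustrative, not taken from the repository above):

```python
# Direct Preference Optimization (DPO) loss, sketched in plain PyTorch.
# Inputs are per-example summed log-probabilities of full completions.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratio of the policy vs. the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected completions.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```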
ChatGLM-6B fine-tuning and Alpaca fine-tuning
🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials
Repository that contains LLM fine-tuning and deployment scripts along with our research findings.
The LLM training/inference toolbox of the ModelScope community, supporting models such as LLaMA, Qwen, ChatGLM, and Baichuan, and training methods such as LoRA, ResTuning, and NEFTune
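Of the training methods named above, NEFTune is simple enough to sketch inline: it adds scaled uniform noise to token embeddings during training. The hook below is an illustrative re-implementation following the NEFTune paper, not the toolbox's own code:

```python
# NEFTune sketch: perturb token embeddings with scaled uniform noise
# during training only. Alpha and the hook wiring are assumptions.
import torch

def neftune_hook(module, inputs, output, alpha: float = 5.0):
    if module.training:
        seq_len, dim = output.shape[-2], output.shape[-1]
        scale = alpha / (seq_len * dim) ** 0.5
        # Uniform noise in [-scale, scale], per the NEFTune paper.
        output = output + torch.zeros_like(output).uniform_(-scale, scale)
    return output

# Usage: register on the model's input-embedding layer before training.
# embed = model.get_input_embeddings()
# embed.register_forward_hook(neftune_hook)
```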
🐋 MindChat (漫谈): a Chinese mental-health large language model
Fine-tuning of the Falcon-7B LLM using QLoRA on a mental health conversational dataset
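Most entries on this page share the same underlying QLoRA recipe: load the base model with 4-bit NF4 quantization, then train LoRA adapters on top. A hedged sketch of that pattern with transformers, bitsandbytes, and peft (checkpoint and settings are assumptions, not this project's exact configuration):

```python
# QLoRA pattern: 4-bit NF4 base model + trainable LoRA adapters.
# Checkpoint name and hyperparameters are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,        # quantize the quantization constants
)
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # casts + gradient checkpointing

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],    # Falcon attention projection (assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```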
ChatGLM2-6B fine-tuning and Alpaca fine-tuning
🌿 Sunsimiao (孙思邈): a safe, reliable, and accessible Chinese medical large language model
Small (7B and below) fine-tuned LLMs for a diverse set of useful tasks
LongQLoRA: Extend the Context Length of LLMs Efficiently
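LongQLoRA-style context extension pairs RoPE position interpolation with QLoRA training. Independently of that repository's code, LLaMA-family configs in transformers expose a rope_scaling option for the interpolation step; a minimal sketch (model name and scaling factor are assumed):

```python
# Sketch: stretch a LLaMA-family model's positional range via RoPE
# position interpolation, the first half of a LongQLoRA-style recipe.
# Model name and factor are illustrative assumptions.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    # Linear interpolation by 4x maps a 4K training context toward ~16K
    # positions; fine-tuning (e.g., with QLoRA) is still needed afterward.
    rope_scaling={"type": "linear", "factor": 4.0},
)
```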
Fine-tune any model on Hugging Face in less than 30 seconds