🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
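As a quick illustration of the library's high-level API, here is a minimal sketch using the `pipeline` helper (the `gpt2` checkpoint is just an example choice):

```python
from transformers import pipeline

# The pipeline downloads the model and tokenizer from the Hub,
# then runs text generation locally.
generator = pipeline("text-generation", model="gpt2")
print(generator("Language models are", max_new_tokens=20)[0]["generated_text"])
```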
The official gpt4free repository | a collection of various powerful language models
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Enable everyone to develop, optimize, and deploy AI models natively on their own devices.
🔍 LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
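As a sketch of the pipeline idea, here is a minimal extractive-QA setup assuming the Haystack 1.x API (the document store, retriever, and reader class names are from that release line):

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Index a toy document, then wire a retriever and a reader into a QA pipeline.
store = InMemoryDocumentStore(use_bm25=True)
store.write_documents([{"content": "Haystack connects components into LLM pipelines."}])

pipe = ExtractiveQAPipeline(
    reader=FARMReader("deepset/roberta-base-squad2"),
    retriever=BM25Retriever(document_store=store),
)
result = pipe.run(query="What does Haystack connect?")
print(result["answers"][0].answer)
```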
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
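The core of RWKV's time mixing is a linear recurrence, which is why it can run as an RNN at inference time yet be trained in parallel like a GPT. A simplified per-channel sketch of the RWKV-4 WKV recurrence (ignoring the numerical-stability tricks of the real CUDA kernel):

```python
import numpy as np

def wkv(w, u, k, v):
    """Simplified RWKV-4 WKV recurrence for one channel.

    w: positive per-channel decay, u: bonus for the current token,
    k, v: key/value sequences of length T.
    """
    out = np.empty(len(k))
    a = b = 0.0                      # running weighted sum / normalizer
    for t in range(len(k)):
        e = np.exp(u + k[t])         # the current token gets the u bonus
        out[t] = (a + e * v[t]) / (b + e)
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]   # decay the past, add the present
        b = np.exp(-w) * b + np.exp(k[t])
    return out
```

Because the recurrent state is just `(a, b)`, per-token inference cost and memory are constant, which is where the VRAM savings and the "infinite" context length come from.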
Large Scale Chinese Corpus for NLP (大规模中文自然语言处理语料)
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
NeMo: a toolkit for conversational AI
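A minimal usage sketch, assuming NeMo's ASR collection and a pretrained checkpoint name from the NGC catalog (both the model name and the audio path are assumptions; NeMo also covers NLP and TTS):

```python
import nemo.collections.asr as nemo_asr

# Download a pretrained CTC speech-recognition model and transcribe a file.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="QuartzNet15x5Base-En"
)
print(asr_model.transcribe(paths2audio_files=["sample.wav"]))
```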
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
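A minimal sketch of loading a pretrained tokenizer from the Hub and encoding text (the checkpoint name is just an example):

```python
from tokenizers import Tokenizer

# The Rust backend makes both training and encoding fast.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer.encode("Fast state-of-the-art tokenization")
print(encoding.tokens)  # subword pieces
print(encoding.ids)     # vocabulary ids
```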
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Bringing large language models and chat to web browsers. Everything runs inside the browser; no server support is required.
A PyTorch-based Speech Toolkit
Open Source Neural Machine Translation and (Large) Language Models in PyTorch
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
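A minimal sketch of the loralib pattern: replace a dense layer with its LoRA counterpart, then freeze everything except the low-rank matrices (the layer sizes here are arbitrary):

```python
import torch.nn as nn
import loralib as lora

# lora.Linear adds a trainable low-rank update B @ A (rank r) on top of
# a frozen weight matrix W.
model = nn.Sequential(
    lora.Linear(768, 768, r=8),
    nn.ReLU(),
    nn.Linear(768, 10),
)

# Freeze all parameters except the LoRA matrices A and B
# (note: the plain nn.Linear head is frozen too in this sketch).
lora.mark_only_lora_as_trainable(model)

# Only the small LoRA parameters need to be checkpointed.
lora_weights = lora.lora_state_dict(model)
```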
An open-source implementation of CLIP.
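A zero-shot classification sketch following OpenCLIP's documented usage (the model name, pretrained tag, and image path are assumptions):

```python
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then softmax over cosine-similarity logits.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```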
An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.