A cloud-native vector database, storage for next-generation AI applications (Go, updated Apr 21, 2023)
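At its core, a vector database serves nearest-neighbor queries over stored embeddings. A minimal pure-Python sketch of that idea (the `VectorIndex` class and its brute-force cosine scoring are illustrative only, not the database's actual API, which shards and indexes at scale):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorIndex:
    """Toy in-memory vector store: insert (key, embedding), query top-k by similarity."""
    def __init__(self):
        self.items = {}

    def insert(self, key, vec):
        self.items[key] = vec

    def search(self, query, k=1):
        # Brute-force scan; real vector databases use ANN indexes (IVF, HNSW, ...)
        ranked = sorted(self.items.items(),
                        key=lambda kv: cosine(query, kv[1]),
                        reverse=True)
        return [key for key, _ in ranked[:k]]

idx = VectorIndex()
idx.insert("cat", [1.0, 0.0, 0.1])
idx.insert("dog", [0.9, 0.1, 0.0])
idx.insert("car", [0.0, 1.0, 0.9])
print(idx.search([1.0, 0.0, 0.0], k=1))  # → ['cat']
```

Production systems replace the linear scan with approximate-nearest-neighbor indexes so queries stay fast over billions of vectors.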
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Integrate cutting-edge LLM technology quickly and easily into your apps
Chinese LLaMA & Alpaca large language models, with local CPU/GPU deployment
Data Lake for Deep Learning. Multi-modal Vector Database for LLMs/LangChain. Store, query, version, & visualize datasets. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
Your own virtual developer. e2b lets you build and deploy specialized AI agents that build software from your instructions.
CodeGen is an open-source model for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
langchain-ChatGLM: local-knowledge-based question answering with ChatGLM and LangChain
Training and serving large-scale neural networks
GPTCache is a library for building a semantic cache that stores responses to LLM queries.
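A semantic cache differs from an ordinary cache in that a query hits if it is merely *similar enough* to a previously cached prompt, not byte-identical. A toy sketch of the idea (the `SemanticCache` class, the bag-of-words `embed`, and the `threshold` value are placeholder assumptions, not GPTCache's actual interface):

```python
import math

def embed(text):
    """Placeholder embedding: a bag-of-words count vector as a dict."""
    vec = {}
    for w in text.lower().split():
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Cache LLM answers; a lookup hits when the prompt is close to a cached one."""
    def __init__(self, embed_fn, threshold=0.8):
        self.embed = embed_fn       # text -> vector
        self.threshold = threshold  # minimum similarity to count as a hit
        self.entries = []           # list of (vector, response)

    def put(self, prompt, response):
        self.entries.append((self.embed(prompt), response))

    def get(self, prompt):
        qv = self.embed(prompt)
        best, best_sim = None, 0.0
        for vec, resp in self.entries:
            sim = cosine(qv, vec)
            if sim > best_sim:
                best, best_sim = resp, sim
        return best if best_sim >= self.threshold else None

cache = SemanticCache(embed, threshold=0.8)
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france ?"))  # near-duplicate → cache hit
print(cache.get("how do i bake bread"))              # unrelated → miss (None)
```

In practice the embedding comes from a learned model rather than word counts, which is what lets paraphrased questions reuse an earlier, expensive LLM response.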
Integrate human supervision into your platform, for all training data types: image, video, 3D, text, geo, audio, compound, grid, LLM, GPT, conversational, and more.
Easily build, customize and control your own LLMs