karpathy / llama2.c
Inference Llama 2 in one file of pure C
Run Llama 2 locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Supports Llama-2-7B/13B/70B with 8-bit and 4-bit quantization, GPU inference (6 GB VRAM), and CPU inference.
Chat with your documents on your local device using GPT models. No data leaves your device; 100% private.
12 weeks, 26 lessons, 52 quizzes, classic Machine Learning for all
TypeChat is a library that makes it easy to build natural language interfaces using types.
Official Pytorch Implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" presenting "TokenFlow"
👩🏿💻👨🏾💻👩🏼💻👨🏽💻👩🏻💻 A list of projects by independent developers in China -- sharing what everyone is building
InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
A Colab Gradio web UI for running Large Language Models
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Meta-Transformer for Unified Multimodal Learning
kfd, short for kernel file descriptor, is a project to read and write kernel memory on Apple devices.
A bot for r/place that doesn't use the API
The first downloadable and runnable Chinese LLaMA2 model in the open-source community!
Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
Llama Chinese community: the best Chinese Llama large model, fully open source and commercially usable
Master programming by recreating your favorite technologies from scratch.