Issues: lm-sys/FastChat
#1657 · support Vicuna finetune with qLoRA [good first issue] · opened Jun 11, 2023 by ehartford
#1655 · How to enable batch evaluation? I got RuntimeError: CUDA error: device-side assert triggered · opened Jun 10, 2023 by pkumc
#1652 · FastChat with API using only one processor core on CPU for output generation [bug] · opened Jun 10, 2023 by ahlinus
#1641 · Error reported while executing `python -m fastchat.serve.openai_api_server --host localhost --port 8000` · opened Jun 9, 2023 by lplzyp
#1630 · ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported · opened Jun 8, 2023 by mzdsk2
#1627 · AttributeError: module 'torch.cuda' has no attribute 'OutOfMemoryError' · opened Jun 8, 2023 by zhanhang123
#1626 · fastchat.serve.model_worker --device cpu only uses one CPU thread for token generation · opened Jun 7, 2023 by j0schi
#1615 · Support logprob in OpenAI API [good first issue] · opened Jun 6, 2023 by wymanCV
#1607 · Language distribution of ShareGPT 70K conversation dataset for FastChat-T5 · opened Jun 5, 2023 by Mihir2
#1601 · Converting the weights into Hugging Face Transformers format creates a tmp folder · opened Jun 4, 2023 by kyawthetkt