A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
Updated Feb 23, 2023 · Jupyter Notebook
End-to-End recipes for pre-training and fine-tuning BERT using Azure Machine Learning Service
Guide: Fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
A curated list of resources for molecular pre-trained models or chemical language models
Fine-tune Facebook's DETR (DEtection TRansformer) on Colaboratory.
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
GPT-2 fine-tuning with Transformers
Fight Detection From Surveillance Cameras by fine-tuning a PyTorch Pretrained Model
Implementation of our ACL 2020 paper: Structured Tuning for Semantic Role Labeling
An Obsidian plugin to help create personal datasets for text-generation models.
Fine-tune a pixel-art diffusion model with an isometric dataset.
Code for our paper "Transfer Learning for Sequence Generation: from Single-source to Multi-source" in ACL 2021.
This repository contains tutorials about fine-tuning pretrained PyTorch models
Anime Character Segmentation
Source code for EMNLP2022 long paper: Parameter-Efficient Tuning Makes a Good Classification Head
Training and serving XLM-RoBERTa for named entity recognition on custom dataset with PyTorch.
Comparing Selective Masking Methods for Depression Detection in Social Media
Adaptively fine-tuning transformer-based models for multiple domains and multiple tasks
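The common thread across these projects is fine-tuning a pretrained model on a new task. A minimal sketch of one standard pattern (freezing the pretrained backbone and training only a new task head) is shown below in plain PyTorch; the tiny backbone, head, and synthetic data here are stand-ins, not code from any of the repositories above.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical tiny "pretrained" backbone; in practice this would be
# loaded from torchvision or Hugging Face Transformers.
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
head = nn.Linear(16, 2)  # new task-specific classification head

# Freeze the backbone so only the head is updated
# (feature-extraction-style fine-tuning).
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Tiny synthetic dataset standing in for real task data.
x = torch.randn(32, 8)
y = torch.randint(0, 2, (32,))

for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Full fine-tuning (updating all weights, as in the DeepSpeed and BERT recipes above) follows the same loop but passes `model.parameters()` to the optimizer instead and typically uses a much smaller learning rate.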