Repositories
- open-covid-19-data: Open-source aggregation pipeline for public COVID-19 data, including hospitalization/ICU/ventilator numbers for many countries.
- dex-lang: Research language for array processing in the Haskell/ML family.
- tiny-differentiable-simulator: A header-only C++ physics library with zero dependencies.
- text-to-text-transfer-transformer: Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
- tensor2robot: Distributed machine learning infrastructure for large-scale robotics research.
- federated: A collection of Google research projects related to Federated Learning and Federated Analytics.
- rigl: End-to-end training of sparse deep neural networks with little-to-no performance loss.
- batch_rl: Offline Reinforcement Learning (a.k.a. Batch Reinforcement Learning) on Atari 2600 games.
- sputnik: A library of GPU kernels for sparse matrix operations.
- graph-attribution: Codebase for "Evaluating Attribution for Graph Neural Networks".
- soft-dtw-divergences: An implementation of soft-DTW divergences.
- reverse-engineering-neural-networks: A collection of tools for reverse engineering neural networks.
- tapas: End-to-end neural table-text understanding models.
- noisystudent: Code for Noisy Student Training (https://arxiv.org/abs/1911.04252).
- bleurt: BLEURT is a metric for Natural Language Generation based on transfer learning.
- electra: "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators".
- motion_imitation: Code accompanying the paper "Learning Agile Robotic Locomotion Skills by Imitating Animals".
- pisac: TensorFlow source code for the PI-SAC agent from "Predictive Information Accelerates Learning in RL" (NeurIPS 2020).
- big_transfer: Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper.