
reinforcement-learning

Here are 8,067 public repositories matching this topic...

ericl commented May 3, 2022

Description

Per https://discuss.ray.io/t/how-do-i-sample-from-a-ray-datasets/5308, we should add a random_sample(N) API that returns N random records from a Dataset. This could be implemented via a map_batches() followed by a take().

cc @simon-mo @clarkzinzow

Use case

Random sampling is useful in a variety of scenarios, including creating training batches and downsampling the dataset for

good first issue enhancement P2 datasets
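The map_batches()-then-take() approach described above can be sketched in pure Python. Note this uses toy stand-ins for the Ray Dataset operations, not the Ray API itself; the batch layout, oversampling factor, and helper names are illustrative assumptions.

```python
import random

def map_batches(batches, fn):
    """Toy stand-in for Dataset.map_batches(): apply fn per batch, flatten results."""
    out = []
    for batch in batches:
        out.extend(fn(batch))
    return out

def take(records, n):
    """Toy stand-in for Dataset.take(): return at most the first n records."""
    return records[:n]

def random_sample(batches, n, total):
    """Sketch of random_sample(N): filter each batch at roughly twice the
    target rate (to make it likely that at least n records survive), then
    take exactly n of the survivors."""
    frac = min(1.0, 2 * n / total)
    candidates = map_batches(
        batches, lambda b: [r for r in b if random.random() < frac]
    )
    return take(candidates, n)

random.seed(0)
# 10 batches of 100 records each, mimicking a blocked dataset of 1000 rows.
batches = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
sample = random_sample(batches, 10, 1000)
```

Because map_batches() preserves record order, the result is an order-preserving subsample rather than a uniform shuffle; a real implementation would also need to handle the (unlikely) case where fewer than n records survive the filter.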
annotated_deep_learning_paper_implementations

🧑‍🏫 50! Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

  • Updated May 6, 2022
  • Jupyter Notebook
stable-baselines
calerc commented Nov 23, 2020

The following applies to DDPG and TD3, and possibly to other models. These libraries were installed in a virtual environment:

numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0

Episode rewards do not seem to be updated in model.learn() before callback.on_step(). Depending on which callback.locals variable is used, this means that:

  • episode rewards may n
good first issue question
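The ordering problem described above can be illustrated with a toy training loop. This is a sketch of the suspected behavior, not the stable-baselines internals: a callback fired before the reward accumulation step observes a value that lags one step behind.

```python
class RecordingCallback:
    """Records the episode_reward value visible in locals at each on_step call."""
    def __init__(self):
        self.seen = []

    def on_step(self, locals_):
        self.seen.append(locals_["episode_reward"])

def learn(rewards, callback):
    """Toy training loop where the callback fires BEFORE the reward update,
    mirroring the reported ordering in model.learn()."""
    episode_reward = 0.0
    for r in rewards:
        callback.on_step({"episode_reward": episode_reward})  # sees stale value
        episode_reward += r  # reward accumulated only after the callback
    return episode_reward

cb = RecordingCallback()
total = learn([1.0, 2.0, 3.0], cb)
# cb.seen is [0.0, 1.0, 3.0]: each on_step observes the pre-update reward.
```

Moving the accumulation above the callback invocation (or reading the per-step reward directly from locals) would give the callback an up-to-date value.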
