reinforcement-learning
Here are 7,606 public repositories matching this topic...
Bidirectional RNN
Is there a way to train a bidirectional RNN (such as an LSTM or GRU) in trax nowadays?
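I'm not aware of a documented ready-made bidirectional wrapper in trax, but the underlying computation is easy to sketch. The following is a minimal plain-NumPy illustration (not trax code; all function names here are made up for the example) of what a bidirectional RNN computes: one pass reads the sequence left-to-right, a second reads it right-to-left, and their hidden states are concatenated per time step.

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, b):
    """Simple tanh RNN over a sequence; returns one hidden state per step."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return np.stack(states)                        # shape: (seq_len, hidden)

def bidirectional_rnn(xs, fwd_params, bwd_params):
    fwd = rnn_pass(xs, *fwd_params)                # left-to-right pass
    bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]    # right-to-left, re-aligned
    return np.concatenate([fwd, bwd], axis=-1)     # (seq_len, 2 * hidden)

rng = np.random.default_rng(0)
seq_len, d_in, d_h = 5, 3, 4
xs = rng.normal(size=(seq_len, d_in))
make_params = lambda: (rng.normal(size=(d_h, d_in)),
                       rng.normal(size=(d_h, d_h)),
                       np.zeros(d_h))
out = bidirectional_rnn(xs, make_params(), make_params())
print(out.shape)  # (5, 8)
```

In a combinator-style library the same structure would be two RNN layers fed the original and reversed inputs, with a concatenation on top; an LSTM or GRU cell slots into `rnn_pass` without changing the wiring.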
The following applies to DDPG and TD3, and possibly other models. These libraries were installed in a virtual environment:
numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0
Episode rewards do not seem to be updated in model.learn() before callback.on_step() is called. Depending on which callback.locals variable is used, this means that:
- episode rewards may n
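I can't reproduce the stable-baselines internals here, but the behavior described above is a classic call-ordering problem. A minimal, library-free sketch (all names hypothetical, standing in for `model.learn()` and `callback.locals`) of why a callback can observe stale episode rewards when the loop invokes it before appending the just-finished episode's return:

```python
seen_by_callback = []

def callback(locals_):
    # Stands in for reading callback.locals["episode_rewards"].
    seen_by_callback.append(list(locals_["episode_rewards"]))

def train(episode_returns):
    """Toy training loop: callback fires BEFORE the reward list is updated."""
    episode_rewards = []
    for ret in episode_returns:
        # ... environment steps for this episode would happen here ...
        callback({"episode_rewards": episode_rewards})  # called first
        episode_rewards.append(ret)                     # updated afterwards
    return episode_rewards

finished = train([1.0, 2.0, 3.0])
print(seen_by_callback)  # [[], [1.0], [1.0, 2.0]] -- always one episode behind
print(finished)          # [1.0, 2.0, 3.0]
```

If the real loop has this shape, any `callback.locals` variable derived from the reward list lags by one episode, which matches the report.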
According to FastAPI's docs, `response_model` can accept type annotations that are not pydantic models. However, the code referenced below is checking for the `__fields__` attribute, which won't be on type annotations such as `list[float]`, for example. https://github.com/ray-project/ray/blob/e60a5f52eb93c851b186cb78fa1f70d
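To illustrate the mismatch without pulling in the actual libraries: pydantic model classes expose a `__fields__` mapping, while plain annotations like `List[float]` do not, so a `hasattr` check on that attribute misclassifies them. A self-contained sketch (`FakeModel` and `looks_like_pydantic` are made up here to stand in for a pydantic `BaseModel` and the referenced check):

```python
from typing import List

class FakeModel:
    # Pydantic model classes expose a mapping like this.
    __fields__ = {"x": float}

def looks_like_pydantic(annotation):
    """The kind of check the referenced code performs."""
    return hasattr(annotation, "__fields__")

print(looks_like_pydantic(FakeModel))    # True
print(looks_like_pydantic(List[float]))  # False -- plain type annotation
```

So any code path that unconditionally reads `annotation.__fields__` breaks for exactly the non-model annotations FastAPI says are allowed.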