diff --git a/com.unity.ml-agents/CHANGELOG.md b/com.unity.ml-agents/CHANGELOG.md
index 04f6579c46..7eaa4ee1d6 100755
--- a/com.unity.ml-agents/CHANGELOG.md
+++ b/com.unity.ml-agents/CHANGELOG.md
@@ -16,6 +16,7 @@ and this project adheres to
 #### com.unity.ml-agents (C#)
 - `RayPerceptionSensor.Perceive()` now additionally store the GameObject that was hit by the ray. (#4111)
 #### ml-agents / ml-agents-envs / gym-unity (Python)
+- Added new Google Colab notebooks to show how to use `UnityEnvironment`. (#4117)
 
 ### Bug Fixes
 #### com.unity.ml-agents (C#)
diff --git a/docs/Readme.md b/docs/Readme.md
index 10456d7d41..ced395f6c9 100644
--- a/docs/Readme.md
+++ b/docs/Readme.md
@@ -34,6 +34,12 @@
 - [Creating Custom Side Channels](Custom-SideChannels.md)
 - [Creating Custom Samplers for Environment Parameter Randomization](Training-ML-Agents.md#defining-a-new-sampler-type)
+
+## Python Tutorial with Google Colab
+
+- [Using a UnityEnvironment](https://colab.research.google.com/drive/1Qg6E5kmf9n4G8rc5lXHIM_cQzMUFGH-g#forceEdit=true&sandboxMode=true)
+- [Q-Learning with a UnityEnvironment](https://colab.research.google.com/drive/1nkOztXzU91MHEbuQ1T9GnynYdL_LRsHG#forceEdit=true&sandboxMode=true)
+- [Using Side Channels on a UnityEnvironment](https://colab.research.google.com/drive/1-g7CwEpk9nJ7SgWXfoCUf8pSLUW1b48i#scrollTo=pbVXrmEsLXDt&forceEdit=true&sandboxMode=true)
 
 ## Help
 
 - [Migrating from earlier versions of ML-Agents](Migrating.md)