1 change: 1 addition & 0 deletions com.unity.ml-agents/CHANGELOG.md
@@ -19,6 +19,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
 - The offset logic was removed from DecisionRequester.
 - The signature of `Agent.Heuristic()` was changed to take a `float[]` as a parameter, instead of returning the array. This was done to prevent a common source of error where users would return arrays of the wrong size.
 - The communication API version has been bumped up to 1.0.0 and will use [Semantic Versioning](https://semver.org/) to do compatibility checks for communication between Unity and the Python process.
+- The obsolete `Agent` methods `GiveModel`, `Done`, `InitializeAgent`, `AgentAction` and `AgentReset` have been removed.
 
 ### Minor Changes
 - Format of console output has changed slightly and now matches the name of the model/summary directory. (#3630, #3616)
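To illustrate the renamed entry points in the changelog line added above, a minimal call-site sketch follows. It is not part of the diff: `agent`, `model`, and the behavior name are hypothetical, and the `using` directives are assumptions that vary by release.

```csharp
using MLAgents;   // assumption: the C# namespace at this release (later releases use Unity.MLAgents)
using Barracuda;  // assumption: NNModel's namespace; some releases use Unity.Barracuda

public static class CallSiteMigration
{
    // Illustrative only: the renamed entry points map one-for-one.
    public static void Migrate(Agent agent, NNModel model)
    {
        // Before (removed by this PR):
        //   agent.GiveModel("MyBehavior", model);
        //   agent.Done();

        // After:
        agent.SetModel("MyBehavior", model);  // same arguments GiveModel() took
        agent.EndEpisode();                   // marks the episode done, as Done() did
    }
}
```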
51 changes: 3 additions & 48 deletions com.unity.ml-agents/Runtime/Agent.cs
@@ -353,15 +353,6 @@ void NotifyAgentDone(DoneReason doneReason)
             m_RequestDecision = false;
         }
 
-        [Obsolete("GiveModel() has been deprecated, use SetModel() instead.")]
-        public void GiveModel(
-            string behaviorName,
-            NNModel model,
-            InferenceDevice inferenceDevice = InferenceDevice.CPU)
-        {
-            SetModel(behaviorName, model, inferenceDevice);
-        }
-
         /// <summary>
         /// Updates the Model for the agent. Any model currently assigned to the
         /// agent will be replaced with the provided one. If the arguments are
@@ -470,12 +461,6 @@ void UpdateRewardStats()
             TimerStack.Instance.SetGauge(gaugeName, GetCumulativeReward());
         }
 
-        [Obsolete("Done() has been deprecated, use EndEpisode() instead.")]
-        public void Done()
-        {
-            EndEpisode();
-        }
-
         /// <summary>
         /// Sets the done flag to true.
         /// </summary>
@@ -519,11 +504,6 @@ void ResetData()
             }
         }
 
-        [Obsolete("InitializeAgent() has been deprecated, use Initialize() instead.")]
-        public virtual void InitializeAgent()
-        {
-        }
-
         /// <summary>
         /// Initializes the agent, called once when the agent is enabled. Can be
         /// left empty if there is no special, unique set-up behavior for the
@@ -533,12 +513,7 @@ public virtual void InitializeAgent()
         /// One sample use is to store local references to other objects in the
         /// scene which would facilitate computing this agents observation.
         /// </remarks>
-        public virtual void Initialize()
-        {
-#pragma warning disable 0618
-            InitializeAgent();
-#pragma warning restore 0618
-        }
+        public virtual void Initialize(){}
 
         /// <summary>
         /// When the Agent uses Heuristics, it will call this method every time it
@@ -719,11 +694,6 @@ public virtual void CollectDiscreteActionMasks(DiscreteActionMasker actionMasker)
         {
         }
 
-        [Obsolete("AgentAction() has been deprecated, use OnActionReceived() instead.")]
-        public virtual void AgentAction(float[] vectorAction)
-        {
-        }
-
         /// <summary>
         /// Specifies the agent behavior at every step based on the provided
         /// action.
@@ -732,29 +702,14 @@ public virtual void AgentAction(float[] vectorAction)
         /// Vector action. Note that for discrete actions, the provided array
         /// will be of length 1.
         /// </param>
-        public virtual void OnActionReceived(float[] vectorAction)
-        {
-#pragma warning disable 0618
-            AgentAction(m_Action.vectorActions);
-#pragma warning restore 0618
-        }
-
-        [Obsolete("AgentReset() has been deprecated, use OnEpisodeBegin() instead.")]
-        public virtual void AgentReset()
-        {
-        }
+        public virtual void OnActionReceived(float[] vectorAction){}
 
         /// <summary>
         /// Specifies the agent behavior when being reset, which can be due to
         /// the agent or Academy being done (i.e. completion of local or global
         /// episode).
         /// </summary>
-        public virtual void OnEpisodeBegin()
-        {
-#pragma warning disable 0618
-            AgentReset();
-#pragma warning restore 0618
-        }
+        public virtual void OnEpisodeBegin(){}
 
         /// <summary>
         /// Returns the last action that was decided on by the Agent
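Since the forwarding shims deleted above are removed along with the obsolete virtuals, subclasses that still override the old names will no longer compile. A minimal sketch of a migrated subclass, assuming a hypothetical `MyAgent` with placeholder reward and reset logic (namespace assumption as in the earlier sketch):

```csharp
using MLAgents;  // assumption: namespace at this release

// Hypothetical subclass showing the override renames.
public class MyAgent : Agent
{
    // Was: public override void InitializeAgent()
    public override void Initialize()
    {
        // One-time setup, e.g. caching references to scene objects.
    }

    // Was: public override void AgentAction(float[] vectorAction)
    public override void OnActionReceived(float[] vectorAction)
    {
        // Apply the action and assign rewards exactly as before.
        AddReward(-0.001f);
    }

    // Was: public override void AgentReset()
    public override void OnEpisodeBegin()
    {
        // Reset per-episode state.
    }
}
```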
1 change: 1 addition & 0 deletions docs/Migrating.md
@@ -15,6 +15,7 @@ The versions can be found in
 * The `play_against_current_self_ratio` self-play trainer hyperparameter has been renamed to `play_against_latest_model_ratio`
 * Removed the multi-agent gym option from the gym wrapper. For multi-agent scenarios, use the [Low Level Python API](Python-API.md).
 * The low level Python API has changed. You can look at the document [Low Level Python API documentation](Python-API.md) for more information. If you use `mlagents-learn` for training, this should be a transparent change.
+* The obsolete `Agent` methods `GiveModel`, `Done`, `InitializeAgent`, `AgentAction` and `AgentReset` have been removed.
 * The signature of `Agent.Heuristic()` was changed to take a `float[]` as a parameter, instead of returning the array. This was done to prevent a common source of error where users would return arrays of the wrong size.
 
 ### Steps to Migrate
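The `Heuristic()` signature change noted in both files is the other edit most users will hit while migrating. A hedged before/after sketch, assuming a hypothetical agent with two continuous actions and placeholder input-axis names:

```csharp
using MLAgents;     // assumption: namespace at this release
using UnityEngine;

// Hypothetical 2D-control agent; axis names and action size are placeholders.
public class HeuristicExampleAgent : Agent
{
    // Before this change, the override allocated and returned the array:
    //   public override float[] Heuristic()
    //   {
    //       return new[] { Input.GetAxis("Horizontal"), Input.GetAxis("Vertical") };
    //   }
    // A wrong array size only surfaced at runtime.

    // Now the caller passes in a buffer already sized to the action spec,
    // and the override simply fills it.
    public override void Heuristic(float[] actionsOut)
    {
        actionsOut[0] = Input.GetAxis("Horizontal");
        actionsOut[1] = Input.GetAxis("Vertical");
    }
}
```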