Modularity and extensibility of RProp implementation #4

@gnitr

Description

The architecture of Encog is really well designed, with a clean separation between the different parts of the networks and the training algorithms. It appears to be built to be very extensible and flexible.

There is a limitation with the RProp implementation, however. Due to the strong encapsulation of the parameters (see issue #2), but also because the updateWeightXXX() functions are not very modular, the only way to modify the behaviour of the algorithm is to change the code directly in TrainFlatNetworkResilient.java and apply that same modification to all four updateWeightXXX() functions.

Some improvements to RProp have been proposed (e.g. SARProp, or RProp with a nonextensive schedule) which only require a slight modification to the calculation of the delta when the sign of the gradient changes.
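For concreteness, the sign-change rule in question looks roughly like this. This is a minimal standalone sketch with assumed constant names and values, not Encog's actual code:

```java
// Sketch of the per-weight step-size (delta) update used by RProp.
// Constant names and values are illustrative, not taken from Encog.
public class RpropDeltaSketch {
    static final double ETA_PLUS = 1.2;
    static final double ETA_MINUS = 0.5;
    static final double DELTA_MAX = 50.0;
    static final double DELTA_MIN = 1e-6;

    /** Standard RProp update of a single step size. */
    static double updateDelta(double prevGradient, double gradient, double delta) {
        double sign = prevGradient * gradient;
        if (sign > 0) {
            // Gradient kept its sign: accelerate, capped at DELTA_MAX.
            return Math.min(delta * ETA_PLUS, DELTA_MAX);
        } else if (sign < 0) {
            // Sign change: this single branch is the only place where
            // SARProp-style variants differ (they perturb or anneal the
            // shrunken delta instead of just scaling it down).
            return Math.max(delta * ETA_MINUS, DELTA_MIN);
        }
        return delta; // one of the gradients was zero: leave delta alone
    }

    public static void main(String[] args) {
        System.out.println(updateDelta(0.3, 0.2, 0.1));  // same sign: grows
        System.out.println(updateDelta(0.3, -0.2, 0.1)); // sign change: shrinks
    }
}
```

Since the variants only touch the `sign < 0` branch, the rest of the training loop could in principle stay shared.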

I don't think those changes could be implemented with a Strategy class, as the modification sits inside the algorithm itself.

If the fields and methods in TrainFlatNetworkResilient were at least declared protected rather than private, subclassing would be feasible, although the subclass would still have to duplicate nearly all of the parent class's code, which is not ideal.
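One possible shape for such a refactoring, sketched here with hypothetical names (this is not Encog's API): pull the sign-change branch out into a protected hook, so that a SARProp-style subclass overrides only that one method instead of copying the bodies of the four updateWeightXXX() functions:

```java
import java.util.Random;

// Hypothetical template-method sketch, not Encog's actual class layout.
class ResilientPropagationSketch {
    protected double etaPlus = 1.2, etaMinus = 0.5;
    protected double deltaMax = 50.0, deltaMin = 1e-6;

    /** Shared step-size update; each updateWeightXXX() variant could call this. */
    public double updateDelta(double prevGradient, double gradient,
                              double delta, int epoch) {
        double sign = prevGradient * gradient;
        if (sign > 0) return Math.min(delta * etaPlus, deltaMax);
        if (sign < 0) return onSignChange(delta, epoch);
        return delta;
    }

    /** Overridable hook: plain RProp simply shrinks the step. */
    protected double onSignChange(double delta, int epoch) {
        return Math.max(delta * etaMinus, deltaMin);
    }
}

/**
 * SARProp-style variant: only the sign-change rule differs. The noise scale
 * and annealing schedule below are illustrative assumptions, not the
 * published SARProp constants.
 */
class SarpropSketch extends ResilientPropagationSketch {
    private final Random rnd = new Random(42);
    private final double noiseScale = 0.1; // assumed value

    @Override
    protected double onSignChange(double delta, int epoch) {
        double temperature = Math.pow(2.0, -epoch / 100.0); // assumed schedule
        double perturbed = delta * etaMinus + noiseScale * rnd.nextDouble() * temperature;
        return Math.max(perturbed, deltaMin);
    }
}
```

With a hook like this, the duplication problem disappears: the subclass carries only the few lines that actually differ, and the four updateWeightXXX() variants in the parent class stay untouched.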

I'm not sure there is a clean solution to this problem, but I mention it in case you have an idea.
