Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Description
The initialization and update of step_size based on the learning rate policy seem to be broken in MLP.
zero_indexed_iteration = current_iteration - 1
if learning_rate_policy == "exp":
    step_size = step_size_init * gamma**zero_indexed_iteration
elif learning_rate_policy == "inv":
    step_size = step_size_init * (current_iteration)**(-power)
elif learning_rate_policy == "step":
    step_size = step_size_init * gamma**(
        math.floor(zero_indexed_iteration / iterations_per_step))
The variable current_iteration in the above code snippet (from mlp_igd.py_in) does not appear to be updated anywhere in MLP, so step_size stays at its initial value instead of decaying according to the chosen policy.
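A minimal sketch of the expected behavior, assuming the schedule logic quoted above (the function name update_step_size and the default hyperparameter values here are hypothetical, for illustration only): when current_iteration is advanced on each pass, the "exp" policy decays step_size geometrically; if it is never updated, every call returns the same value, which is the bug reported here.

```python
import math

def update_step_size(step_size_init, current_iteration, learning_rate_policy,
                     gamma=0.5, power=0.5, iterations_per_step=2):
    # Hypothetical standalone version of the schedule from the snippet;
    # the real code lives in mlp_igd.py_in.
    zero_indexed_iteration = current_iteration - 1
    if learning_rate_policy == "exp":
        return step_size_init * gamma**zero_indexed_iteration
    elif learning_rate_policy == "inv":
        return step_size_init * current_iteration**(-power)
    elif learning_rate_policy == "step":
        return step_size_init * gamma**(
            math.floor(zero_indexed_iteration / iterations_per_step))
    return step_size_init  # constant policy

# With current_iteration incremented each iteration, "exp" decays as intended:
sizes = [update_step_size(1.0, it, "exp") for it in range(1, 4)]
# sizes == [1.0, 0.5, 0.25]

# With current_iteration stuck at 1 (the reported bug), step_size never changes:
stuck = [update_step_size(1.0, 1, "exp") for _ in range(3)]
# stuck == [1.0, 1.0, 1.0]
```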