In `brax/brax/training/gradients.py`, line 63 (commit `12c29bd`):

```python
params_update, optimizer_state = optimizer.update(grads, optimizer_state)
```
If I use AdamW, `optimizer.update` needs the current parameters as well, so this call fails:
File ".../brax/training/agents/crossq/train.py", line 299, in sgd_step (critic_loss, critic_updated_vars), q_params_from_grad, q_optimizer_state = critic_update( File ".../brax/training/gradients.py", line 63, in f params_update, optimizer_state = optimizer.update(grads, optimizer_state) File ".../optax/transforms/_combining.py", line 89, in update_fn updates, new_s = fn(updates, s, params, **extra_args) File ".../optax/_src/base.py", line 335, in update return tx.update(updates, state, params) File ".../optax/transforms/_adding.py", line 49, in update_fn raise ValueError(base.NO_PARAMS_MSG) ValueError: You are using a transformation that requires the current value of parameters, but you are not passing params when calling update.It is simple to fix this by pass arg[0] into the optimizer.update
The proposed change to `gradient_update_fn`:

```python
def gradient_update_fn(
    loss_fn: Callable[..., float],
    optimizer: optax.GradientTransformation,
    pmap_axis_name: Optional[str],
    has_aux: bool = False,
):
  """Wrapper of the loss function that applies gradient updates.

  Args:
    loss_fn: The loss function.
    optimizer: The optimizer to apply gradients.
    pmap_axis_name: If relevant, the name of the pmap axis to synchronize
      gradients.
    has_aux: Whether the loss_fn has auxiliary data.

  Returns:
    A function that takes the same arguments as the loss function plus the
    optimizer state. The output of this function is the loss, the new
    parameters, and the new optimizer state.
  """
  loss_and_pgrad_fn = loss_and_pgrad(
      loss_fn, pmap_axis_name=pmap_axis_name, has_aux=has_aux
  )

  def f(*args, optimizer_state):
    value, grads = loss_and_pgrad_fn(*args)
    # Pass args[0] (the current params) so transforms that need them,
    # such as AdamW's weight decay, can read them.
    params_update, optimizer_state = optimizer.update(
        grads, optimizer_state, args[0]
    )
    params = optax.apply_updates(args[0], params_update)
    return value, params, optimizer_state

  return f
```
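For illustration, a hypothetical call site under the patched signature (the `loss_fn(params, batch)` signature and the variable names are assumptions, not brax code). The convention that `args[0]` is the params pytree is already implied by the existing `optax.apply_updates(args[0], params_update)` line:

```python
# Sketch only: assumes loss_fn(params, batch) -> loss, and that the
# first positional argument is the params pytree being optimized.
optimizer = optax.adamw(3e-4)
update_fn = gradient_update_fn(loss_fn, optimizer, pmap_axis_name=None)

optimizer_state = optimizer.init(params)
loss, params, optimizer_state = update_fn(
    params, batch, optimizer_state=optimizer_state
)
```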
This has no side effect on the Adam optimizer, because in
https://github.com/google-deepmind/optax/blob/daecb91b3f0e5de6d3e763c67362e7ac2737bb24/optax/_src/transform.py#L284
the `params` argument is immediately discarded (`del params`); it is only used by the weight decay transform in
https://github.com/google-deepmind/optax/blob/daecb91b3f0e5de6d3e763c67362e7ac2737bb24/optax/_src/alias.py#L725
However, I have no idea whether it will affect other optimizers. More tests are needed.
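As a first sanity check, here is a sketch (not an exhaustive test) verifying that passing `params` to `optax.adam`'s `update` leaves the computed updates unchanged:

```python
import jax
import jax.numpy as jnp
import optax

params = {"w": jnp.arange(4.0)}
grads = {"w": jnp.full(4, 0.5)}

tx = optax.adam(1e-3)
state = tx.init(params)

# scale_by_adam discards params, so both calls should match exactly.
u_without, _ = tx.update(grads, state)
u_with, _ = tx.update(grads, state, params)

assert jax.tree_util.tree_all(
    jax.tree_util.tree_map(
        lambda a, b: bool(jnp.array_equal(a, b)), u_without, u_with
    )
)
```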