optimizer#

class pointrix.optimizer.optimizer.BaseOptimizer(cfg: dict | DictConfig | None = None, *args, **kwargs)#

Bases: BaseObject

Base class for all optimizers.

class Config(backward: bool = False)#

Bases: object
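As a rough usage sketch, the config can be passed as a plain dict or an omegaconf DictConfig; backward is the only field documented in Config above, and the wrapped torch optimizer shown below (and the keyword it is passed through) is an assumption, not part of the documented signature.

    import torch
    from omegaconf import DictConfig

    from pointrix.optimizer.optimizer import BaseOptimizer

    # Hypothetical underlying torch optimizer; how it is handed to
    # BaseOptimizer (here via a keyword argument) is an assumption.
    params = [torch.nn.Parameter(torch.zeros(10, 3))]
    torch_opt = torch.optim.Adam([{"params": params, "name": "point_cloud", "lr": 1e-3}])

    cfg = DictConfig({"backward": False})  # only documented Config field
    optimizer = BaseOptimizer(cfg, optimizer=torch_opt)  # keyword name is hypothetical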

get_lr() → Dict[str, List[float]]#

Get learning rate of the optimizer.

Returns:

The learning rate of the optimizer.

Return type:

Dict[str, List[float]]

get_momentum() → Dict[str, List[float]]#

Get momentum of the optimizer.

Returns:

The momentum of the optimizer.

Return type:

Dict[str, List[float]]
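Continuing the sketch above, both getters return a mapping from parameter-group name to a list of values, which is convenient for logging:

    # Inspect the current per-group learning rates and momenta.
    lrs = optimizer.get_lr()         # mapping: group name -> list of learning rates
    moms = optimizer.get_momentum()  # mapping: group name -> list of momentum values

    for name, values in lrs.items():
        print(f"group {name}: lr={values}, momentum={moms.get(name)}")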

load_state_dict(state_dict: dict) → None#

A wrapper of Optimizer.load_state_dict.

Parameters:

state_dict (dict) – The state dictionary of the optimizer.

property param_groups: List[dict]#

Get the parameter groups of the optimizer.

Returns:

The parameter groups of the optimizer.

Return type:

List[dict]

state_dict() → dict#

A wrapper of Optimizer.state_dict.

Returns:

The state dictionary of the optimizer.

Return type:

dict
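A minimal checkpointing sketch using the two wrappers; the file path and the surrounding torch.save / torch.load calls are only illustrative:

    import torch

    # Save the optimizer state alongside any other checkpoint data.
    torch.save({"optimizer": optimizer.state_dict()}, "checkpoint.pth")

    # Later, restore it into a freshly constructed optimizer.
    ckpt = torch.load("checkpoint.pth")
    optimizer.load_state_dict(ckpt["optimizer"])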

update_model(**kwargs) → None#

Update the model with the loss. Call backward on the loss first, then call this function to apply the update to the model.

Parameters:

loss (torch.Tensor) – The loss tensor.
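For example, a single training step then looks as follows; model, batch and loss_fn are placeholders, and only the loss= keyword comes from the documentation above:

    # One training iteration: compute the loss, backpropagate,
    # then let the optimizer wrapper apply the parameter update.
    loss = loss_fn(model(batch), batch)  # placeholders
    loss.backward()                      # backward first, as required
    optimizer.update_model(loss=loss)    # then update the model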

class pointrix.optimizer.optimizer.OptimizerList(optimizer_dict: dict)#

Bases: object

A wrapper for multiple optimizers.
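A construction sketch; the dictionary keys and the second optimizer are purely illustrative, and each value is expected to expose the BaseOptimizer-style interface described above:

    from pointrix.optimizer.optimizer import OptimizerList

    optimizer_list = OptimizerList({
        "optimizer_1": optimizer,        # e.g. the BaseOptimizer from above
        "camera_optimizer": camera_opt,  # hypothetical second optimizer
    })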

load_state_dict(state_dict: dict) → None#

A wrapper of Optimizer.load_state_dict.

Parameters:

state_dict (dict) – The state dictionary of the optimizer.

property param_groups#

Get the parameter groups of the optimizers.

Returns:

The parameter groups of the optimizers.

Return type:

list

state_dict() → dict#

A wrapper of Optimizer.state_dict.

Returns:

The state dictionary of the optimizer.

Return type:

dict
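The checkpointing pattern is the same as for a single optimizer (paths illustrative):

    import torch

    # Persist and restore the state of every wrapped optimizer at once.
    torch.save({"optimizers": optimizer_list.state_dict()}, "checkpoint.pth")
    state = torch.load("checkpoint.pth")
    optimizer_list.load_state_dict(state["optimizers"])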

update_model(**kwargs) → None#

Update the model with the loss for each optimizer in the list.

Parameters:
  • loss (torch.Tensor) – The loss tensor.

  • kwargs (dict) – The keyword arguments.
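Used in a training step, the call mirrors the single-optimizer case (placeholders as before):

    # The wrapper forwards the update to every optimizer it holds.
    loss = loss_fn(model(batch), batch)  # placeholders
    loss.backward()
    optimizer_list.update_model(loss=loss)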

class pointrix.optimizer.scheduler.ExponLRScheduler(config: dict, lr_scale=1.0)#

Bases: object

A learning rate scheduler using exponential decay.

Parameters:
  • config (dict) – The configuration dictionary.

  • lr_scale (float, optional) – The learning rate scale, by default 1.0.
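A rough construction sketch; the layout of the config dictionary (which parameters to decay and their initial/final rates) is hypothetical and not taken from the library:

    from pointrix.optimizer.scheduler import ExponLRScheduler

    # Hypothetical config layout; the real keys depend on the library's schema.
    config = {
        "position": {"init": 1.6e-4, "final": 1.6e-6, "max_steps": 30_000},
    }
    scheduler = ExponLRScheduler(config, lr_scale=1.0)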

get_exponential_lr(init_lr: float, final_lr: float, max_steps: int = 1000000) → callable#

Generates a function to compute the exponential learning rate based on the current step.

Parameters:
  • init_lr (float) – The initial learning rate.

  • final_lr (float) – The final learning rate.

  • max_steps (int, optional) – The maximum number of steps (default is 1000000).

Returns:

A function that takes the current step as input and returns the learning rate for that step.

Return type:

callable
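A usage sketch of the returned callable; a common choice for such schedulers is a log-linear interpolation between init_lr and final_lr over max_steps, but the exact formula here is internal to the implementation:

    lr_fn = scheduler.get_exponential_lr(init_lr=1.6e-4, final_lr=1.6e-6, max_steps=30_000)

    # Query the learning rate at a few points in training; it decays
    # from init_lr towards final_lr as the step increases.
    for step in (0, 10_000, 30_000):
        print(step, lr_fn(step))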

step(global_step: int, optimizer_list: OptimizerList) → None#

Update the learning rates of the optimizers in the list.

Parameters:
  • global_step (int) – The global step in training.

  • optimizer_list (OptimizerList) – The list of all optimizers that need to be updated.
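Putting the pieces together, a minimal training loop sketch (everything except the documented API calls is a placeholder):

    for global_step in range(30_000):          # total step count is a placeholder
        loss = loss_fn(model(batch), batch)    # placeholders
        loss.backward()
        optimizer_list.update_model(loss=loss)
        # Decay the learning rates of all wrapped optimizers for this step.
        scheduler.step(global_step, optimizer_list)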