curobo.opt.newton.newton_base module¶
- class LineSearchType(value)¶
Bases:
Enum
An enumeration of line search strategies available to the Newton-style solvers.
- GREEDY = 'greedy'¶
- ARMIJO = 'armijo'¶
- WOLFE = 'wolfe'¶
- STRONG_WOLFE = 'strong_wolfe'¶
- APPROX_WOLFE = 'approx_wolfe'¶
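The enum can be constructed from its string value, e.g. when reading a solver configuration; a brief sketch:

```python
from curobo.opt.newton.newton_base import LineSearchType

ls = LineSearchType("approx_wolfe")  # lookup by value
assert ls is LineSearchType.APPROX_WOLFE
```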
- class NewtonOptConfig(
- d_action: 'int',
- action_lows: 'List[float]',
- action_highs: 'List[float]',
- action_horizon: 'int',
- horizon: 'int',
- n_iters: 'int',
- cold_start_n_iters: 'Union[int, None]',
- rollout_fn: 'RolloutBase',
- tensor_args: 'TensorDeviceType',
- use_cuda_graph: 'bool',
- store_debug: 'bool',
- debug_info: 'Any',
- n_problems: 'int',
- num_particles: 'Union[int, None]',
- sync_cuda_time: 'bool',
- use_coo_sparse: 'bool',
- line_search_scale: List[int],
- cost_convergence: float,
- cost_delta_threshold: float,
- fixed_iters: bool,
- inner_iters: int,
- line_search_type: curobo.opt.newton.newton_base.LineSearchType,
- use_cuda_line_search_kernel: bool,
- use_cuda_update_best_kernel: bool,
- min_iters: int,
- step_scale: float,
- last_best: float = 0,
- use_temporal_smooth: bool = False,
- cost_relative_threshold: float = 0.999,
- fix_terminal_action: bool = False,
Bases:
OptimizerConfig
- line_search_type: LineSearchType¶
- static create_data_dict(
- data_dict: Dict,
- rollout_fn: RolloutBase,
- tensor_args: TensorDeviceType = TensorDeviceType(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- child_dict: Dict | None = None,
Helper function to create a dictionary from optimizer parameters and a rollout class.
- Parameters:
data_dict – optimizer parameters dictionary.
rollout_fn – rollout function.
tensor_args – tensor device and precision to use.
child_dict – new dictionary where parameters will be stored.
- Returns:
Dictionary with parameters to create an
OptimizerConfig
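A minimal usage sketch, assuming `params` is a parameter dictionary loaded from a solver configuration file and `rollout` is an already-constructed RolloutBase instance (both hypothetical here):

```python
from curobo.opt.newton.newton_base import NewtonOptConfig

# `params` and `rollout` are assumed to exist; see note above.
data = NewtonOptConfig.create_data_dict(params, rollout)
# `data` now holds the merged parameters needed to build the config.
```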
- cold_start_n_iters: int | None¶
Number of iterations to run the optimization during the first call. Setting this to None will use n_iters. This parameter is useful in MPC-like settings where many iterations are needed during initialization (cold start) and only a few during subsequent calls (warm start).
- rollout_fn: RolloutBase¶
Rollout function to use for computing cost, given optimization variables.
- tensor_args: TensorDeviceType¶
Tensor device to use for optimization.
- use_cuda_graph: bool¶
Capture the optimization iteration in a CUDA graph and replay the graph instead of using eager execution. Enabling this can make optimization up to 10x faster, but changing control flow, tensor shapes, or problem type is then not allowed.
- store_debug: bool¶
Record debugging data such as optimization variables and cost at every iteration. Enabling this will disable CUDA graph capture.
- debug_info: Any¶
Use this to record additional attributes from rollouts.
- num_particles: int | None¶
Number of particles to use per problem. Common optimization solvers use many particles to optimize a single problem. E.g., MPPI rolls out many parallel samples and computes a weighted mean. In cuRobo, Quasi-Newton solvers use particles to run many line search magnitudes. Total optimization batch size = n_problems * num_particles.
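To make the batching concrete, a small illustration with hypothetical sizes:

```python
n_problems = 4       # independent optimization problems
num_particles = 8    # e.g., parallel line search magnitudes per problem
total_batch = n_problems * num_particles  # rollouts evaluated per iteration: 32
```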
- class NewtonOptBase(
- config: NewtonOptConfig | None = None,
Bases:
Optimizer
,NewtonOptConfig
Base class for Newton-style optimization solvers.
- Parameters:
config – Initialized with parameters from a dataclass.
- reset_cuda_graph()¶
Reset CUDA graphs. This is currently not functional; as a workaround, create a new instance instead.
- _get_step_direction(
- cost,
- q,
- grad_q,
Reimplement this function in a derived class. The base implementation performs gradient descent.
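A minimal sketch of overriding this hook in a derived class; the hypothetical subclass below simply reproduces gradient descent to show where a quasi-Newton direction would be computed:

```python
from curobo.opt.newton.newton_base import NewtonOptBase

class MyGradientSolver(NewtonOptBase):
    def _get_step_direction(self, cost, q, grad_q):
        # Gradient descent: step along the negative gradient. A quasi-Newton
        # solver would instead return a direction scaled by curvature estimates.
        return -1.0 * grad_q
```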
- _shift(shift_steps=1)¶
Shift the variables in the solver to warm-start the next timestep.
- Parameters:
shift_steps – Number of timesteps to shift.
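Conceptually, shifting drops the first shift_steps timesteps and pads the horizon at the end. A hedged sketch of the idea (not cuRobo's exact implementation):

```python
import torch

q = torch.randn(4, 30, 7)  # [n_problems, action_horizon, d_action]
shift_steps = 1
# Drop the first timestep and repeat the last one to keep the horizon length.
q_shifted = torch.cat(
    [q[:, shift_steps:], q[:, -1:].repeat(1, shift_steps, 1)], dim=1
)
```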
- _optimize(
- q: Tensor,
- shift_steps=0,
- n_iters=None,
Implement this function in a derived class containing the solver.
- Parameters:
q – Initial value of optimization variables. Shape: [n_problems, action_horizon, d_action]
shift_steps – Shift variables along action_horizon. Useful in MPC warm-start settings.
n_iters – Override number of iterations to run optimization.
- Returns:
Optimized variables in tensor shape [n_problems, action_horizon, d_action].
- reset()¶
Reset optimizer.
- _opt_iters(
- q,
- grad_q,
- shift_steps=0,
- _opt_step(q, grad_q)¶
- clip_bounds(x)¶
- scale_step_direction(dx)¶
- project_bounds(x)¶
- _compute_cost_gradient(x)¶
- _wolfe_line_search(
- x,
- step_direction,
- _greedy_line_search(
- x,
- step_direction,
- _armijo_line_search(
- x,
- step_direction,
- _approx_line_search(
- x,
- step_direction,
- check_convergence(cost)¶
- _update_best(
- q,
- grad_q,
- cost,
- update_nproblems(
- n_problems,
Update the number of problems that need to be optimized.
- Parameters:
n_problems – number of problems.
- _initialize_opt_iters_graph(
- q,
- grad_q,
- shift_steps,
- _create_box_line_search(
- line_search_scale,
- Parameters:
line_search_scale – list of n step-size scales used to construct the box line search.
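Illustrative values only (not taken from a shipped cuRobo config): the list provides n candidate step magnitudes that the box line search evaluates in parallel:

```python
# Hypothetical n = 4 step-size scales tried per Newton iteration.
line_search_scale = [0.01, 0.3, 0.7, 1.0]
```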
- _call_opt_iters_graph(
- q,
- grad_q,
- _create_opt_iters_graph(
- q,
- grad_q,
- shift_steps,
- _update_problem_kernel(n_problems, num_particles)¶
Update matrix used to map problem index to number of particles.
- Parameters:
n_problems – Number of optimization problems.
num_particles – Number of particles per problem.
- static create_data_dict(
- data_dict: Dict,
- rollout_fn: RolloutBase,
- tensor_args: TensorDeviceType = TensorDeviceType(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- child_dict: Dict | None = None,
Helper function to create a dictionary from optimizer parameters and a rollout class.
- Parameters:
data_dict – optimizer parameters dictionary.
rollout_fn – rollout function.
tensor_args – tensor device and precision to use.
child_dict – new dictionary where parameters will be stored.
- Returns:
Dictionary with parameters to create an
OptimizerConfig
- get_all_rollout_instances() List[RolloutBase] ¶
Get all instances of the Rollout class in the optimizer.
- get_nproblem_tensor(x)¶
This function takes an input tensor of shape (n_problems, ...) and converts it into (n_particles, ...).
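A conceptual sketch of this expansion (the contiguous per-problem particle layout is an assumption, not confirmed by these docs):

```python
import torch

n_problems, num_particles = 3, 4
x = torch.randn(n_problems, 7)  # one row per problem
# Repeat each problem's row once per particle of that problem.
x_particles = x.repeat_interleave(num_particles, dim=0)
print(x_particles.shape)  # torch.Size([12, 7]) == (n_problems * num_particles, 7)
```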
- optimize(
- opt_tensor: Tensor,
- shift_steps=0,
- n_iters=None,
Find a solution through optimization given the initial values for variables.
- Parameters:
opt_tensor – Initial value of optimization variables. Shape: [n_problems, action_horizon, d_action]
shift_steps – Shift variables along action_horizon. Useful in MPC warm-start settings.
n_iters – Override number of iterations to run optimization.
- Returns:
Optimized values returned as a tensor of shape [n_problems, action_horizon, d_action].
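A hedged usage sketch, assuming `solver` is an already-initialized NewtonOptBase-derived instance (construction plumbing omitted); shapes and attribute names follow the documentation above:

```python
import torch

# `solver` is assumed to exist; n_problems, action_horizon, d_action, and
# tensor_args are fields inherited from the config dataclass.
init = torch.zeros(
    solver.n_problems, solver.action_horizon, solver.d_action,
    device=solver.tensor_args.device, dtype=solver.tensor_args.dtype,
)
best = solver.optimize(init)  # returns [n_problems, action_horizon, d_action]
```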
- reset_seed()¶
Reset seeds.
- reset_shape()¶
Reset any flags in the rollout class. Useful for reinitializing tensors for a new shape.
- update_params(
- goal: Goal,
Update parameters in the curobo.rollout.rollout_base.RolloutBase instance.
- Parameters:
goal – parameters to update the rollout instance.
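In an MPC-style loop, update_params and optimize with shift_steps compose naturally. A hedged sketch, assuming `solver`, `goal` (a curobo.rollout.rollout_base.Goal), and an initial tensor `init` already exist:

```python
solver.update_params(goal)                   # point the rollout at the new goal
act = solver.optimize(init, shift_steps=0)   # cold start
for _ in range(100):
    # Warm start: shift variables by one timestep and re-optimize.
    act = solver.optimize(act, shift_steps=1)
```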
- line_search_type: LineSearchType¶
- cold_start_n_iters: int | None¶
Number of iterations to run the optimization during the first call. Setting this to None will use n_iters. This parameter is useful in MPC-like settings where many iterations are needed during initialization (cold start) and only a few during subsequent calls (warm start).
- rollout_fn: RolloutBase¶
Rollout function to use for computing cost, given optimization variables.
- tensor_args: TensorDeviceType¶
Tensor device to use for optimization.
- use_cuda_graph: bool¶
Capture the optimization iteration in a CUDA graph and replay the graph instead of using eager execution. Enabling this can make optimization up to 10x faster, but changing control flow, tensor shapes, or problem type is then not allowed.
- store_debug: bool¶
Record debugging data such as optimization variables and cost at every iteration. Enabling this will disable CUDA graph capture.
- debug_info: Any¶
Use this to record additional attributes from rollouts.
- num_particles: int | None¶
Number of particles to use per problem. Common optimization solvers use many particles to optimize a single problem. E.g., MPPI rolls out many parallel samples and computes a weighted mean. In cuRobo, Quasi-Newton solvers use particles to run many line search magnitudes. Total optimization batch size = n_problems * num_particles.
- get_x_set_jit(
- step_vec,
- x,
- alpha_list,
- action_lows,
- action_highs,
- _armijo_line_search_tail_jit(
- c,
- g_x,
- step_direction,
- c_1,
- alpha_list,
- c_idx,
- x_set,
- d_opt,
- scale_action_old(dx, action_step_max)¶
- scale_action(dx, action_step_max)¶