curobo.opt.newton.newton_base module

class curobo.opt.newton.newton_base.LineSearchType(value)

Bases: Enum

An enumeration of line-search strategies used by the Newton-based solvers.

GREEDY = 'greedy'
ARMIJO = 'armijo'
WOLFE = 'wolfe'
STRONG_WOLFE = 'strong_wolfe'
APPROX_WOLFE = 'approx_wolfe'
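
Since this is a string-valued Enum, a member can be constructed from its value, which is convenient when the strategy name comes from a config file. A minimal standalone sketch of the same pattern (a local mirror of the class, not the curobo import):

```python
from enum import Enum

# Standalone mirror of the LineSearchType pattern; member values are
# copied from the listing above.
class LineSearchType(Enum):
    GREEDY = "greedy"
    ARMIJO = "armijo"
    WOLFE = "wolfe"
    STRONG_WOLFE = "strong_wolfe"
    APPROX_WOLFE = "approx_wolfe"

# Look up a member from its string value:
selected = LineSearchType("armijo")
print(selected)  # LineSearchType.ARMIJO
```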
class curobo.opt.newton.newton_base.NewtonOptConfig(d_action: 'int', action_lows: 'List[float]', action_highs: 'List[float]', action_horizon: 'int', horizon: 'int', n_iters: 'int', cold_start_n_iters: 'Union[int, None]', rollout_fn: 'RolloutBase', tensor_args: 'TensorDeviceType', use_cuda_graph: 'bool', store_debug: 'bool', debug_info: 'Any', n_problems: 'int', num_particles: 'Union[int, None]', sync_cuda_time: 'bool', use_coo_sparse: 'bool', line_search_scale: List[int], cost_convergence: float, cost_delta_threshold: float, fixed_iters: bool, inner_iters: int, line_search_type: curobo.opt.newton.newton_base.LineSearchType, use_cuda_line_search_kernel: bool, use_cuda_update_best_kernel: bool, min_iters: int, step_scale: float, last_best: float = 0, use_temporal_smooth: bool = False, cost_relative_threshold: float = 0.999)

Bases: OptimizerConfig

Parameters:
  • d_action (int) –

  • action_lows (List[float]) –

  • action_highs (List[float]) –

  • action_horizon (int) –

  • horizon (int) –

  • n_iters (int) –

  • cold_start_n_iters (int | None) –

  • rollout_fn (RolloutBase) –

  • tensor_args (TensorDeviceType) –

  • use_cuda_graph (bool) –

  • store_debug (bool) –

  • debug_info (Any) –

  • n_problems (int) –

  • num_particles (int | None) –

  • sync_cuda_time (bool) –

  • use_coo_sparse (bool) –

  • line_search_scale (List[int]) –

  • cost_convergence (float) –

  • cost_delta_threshold (float) –

  • fixed_iters (bool) –

  • inner_iters (int) –

  • line_search_type (LineSearchType) –

  • use_cuda_line_search_kernel (bool) –

  • use_cuda_update_best_kernel (bool) –

  • min_iters (int) –

  • step_scale (float) –

  • last_best (float) –

  • use_temporal_smooth (bool) –

  • cost_relative_threshold (float) –

line_search_scale: List[int]
cost_convergence: float
cost_delta_threshold: float
fixed_iters: bool
inner_iters: int
line_search_type: LineSearchType
use_cuda_line_search_kernel: bool
use_cuda_update_best_kernel: bool
min_iters: int
step_scale: float
last_best: float = 0
use_temporal_smooth: bool = False
cost_relative_threshold: float = 0.999
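
The line_search_type field selects how a candidate step length is accepted during each iteration. As a reference for the ARMIJO option, the standard sufficient-decrease test can be sketched as follows; this is a textbook scalar version, not cuRobo's batched CUDA kernel, and the idea that the candidate step scales play the role of line_search_scale is an assumption here:

```python
def armijo_accept(f0, f_alpha, alpha, slope, c1=1e-4):
    """Armijo sufficient-decrease test.

    f0:      cost at the current point
    f_alpha: cost after stepping alpha along the search direction
    slope:   directional derivative at the current point
             (negative for a descent direction)
    """
    return f_alpha <= f0 + c1 * alpha * slope

def best_scale(f0, slope, cost_at, scales):
    """Greedy pick: largest candidate scale passing the Armijo test."""
    accepted = [a for a in scales if armijo_accept(f0, cost_at(a), a, slope)]
    return max(accepted) if accepted else min(scales)

# Minimize f(x) = x**2 starting at x = 1 with direction d = -1,
# so slope = f'(1) * d = -2 and the cost after a step of size a is (1-a)**2.
cost_at = lambda a: (1.0 - a) ** 2
print(best_scale(1.0, -2.0, cost_at, [0.1, 0.5, 1.0]))  # 1.0
```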
class curobo.opt.newton.newton_base.NewtonOptBase(config=None)

Bases: Optimizer, NewtonOptConfig

Base class for Newton-style optimization solvers.

Parameters:

config (NewtonOptConfig | None) – Initialized with parameters from a dataclass.

reset_cuda_graph()

Reset CUDA graphs. This currently does not work; as a workaround, create a new instance instead.

_get_step_direction(cost, q, grad_q)

Reimplement this function in a derived class; the base implementation performs gradient descent.
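
The gradient-descent fallback steps opposite the gradient. A minimal standalone sketch of that idea, using a plain Python list in place of the batched CUDA tensors the real method operates on (how step_scale from NewtonOptConfig enters the update is an assumption here):

```python
def get_step_direction(grad_q, step_scale=1.0):
    """Gradient-descent fallback: step opposite the gradient.

    grad_q:     flat list of gradient entries for one problem.
    step_scale: scalar step size (named after NewtonOptConfig.step_scale;
                its exact use inside cuRobo is an assumption).
    """
    return [-step_scale * g for g in grad_q]

print(get_step_direction([2.0, -4.0], step_scale=0.5))  # [-1.0, 2.0]
```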

_shift(shift_steps=1)

Shift the variables in the solver to warm-start the next timestep.

Parameters:

shift_steps – Number of timesteps to shift.
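
A warm-start shift for MPC typically drops the first shift_steps actions along the horizon and repeats the last action to keep the horizon length constant. A minimal sketch of that idea on a plain list (the actual tensor layout and padding policy in cuRobo are assumptions here):

```python
def shift(actions, shift_steps=1):
    """Shift a horizon of actions forward to warm-start the next timestep.

    actions: list of per-timestep actions (length = action_horizon).
    The first shift_steps entries are dropped and the final action is
    repeated so the horizon length stays the same.
    """
    if shift_steps <= 0:
        return list(actions)
    return actions[shift_steps:] + [actions[-1]] * shift_steps

print(shift([1, 2, 3, 4], shift_steps=1))  # [2, 3, 4, 4]
```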

_optimize(q, shift_steps=0, n_iters=None)

Implement this function in a derived class containing the solver.

Parameters:
  • q (Tensor) – Initial value of the optimization variables. Shape: [n_problems, action_horizon, d_action]

  • shift_steps – Shift variables along action_horizon. Useful for warm-starting in an MPC setting.

  • n_iters – Override the number of iterations to run.

Returns:

Optimized variables in tensor shape [action_horizon, d_action].

reset()

Reset optimizer.

_opt_iters(q, grad_q, shift_steps=0)
_opt_step(q, grad_q)
clip_bounds(x)
scale_step_direction(dx)
project_bounds(x)
_compute_cost_gradient(x)
check_convergence(cost)
_update_best(q, grad_q, cost)
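
check_convergence compares successive costs against the configured thresholds. A plausible form of such a test, using the cost_delta_threshold and cost_relative_threshold names from NewtonOptConfig (their exact semantics inside cuRobo are an assumption here):

```python
def has_converged(prev_cost, cost, cost_delta_threshold=1e-4,
                  cost_relative_threshold=0.999):
    """Stop when the cost improvement is small in absolute or relative terms."""
    delta = prev_cost - cost
    if delta < cost_delta_threshold:  # absolute improvement too small
        return True
    if cost > cost_relative_threshold * prev_cost:  # relative improvement too small
        return True
    return False

print(has_converged(1.0, 0.99995))  # True: improvement below delta threshold
print(has_converged(1.0, 0.5))      # False: cost still dropping fast
```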
update_nproblems(n_problems)

Update the number of problems that need to be optimized.

Parameters:

n_problems – Number of optimization problems to solve in parallel.

_initialize_opt_iters_graph(q, grad_q, shift_steps)

Initialize the CUDA graph used for the optimization iterations.

_call_opt_iters_graph(q, grad_q)
_create_opt_iters_graph(q, grad_q, shift_steps)