curobo.opt.particle.parallel_mppi module

class BaseActionType(value)

Bases: Enum

Type of base action used to extend the mean when the distribution is shifted to the next timestep: REPEAT repeats the last action, NULL appends a zero action, and RANDOM appends a random action.

REPEAT = 0
NULL = 1
RANDOM = 2
class CovType(value)

Bases: Enum

Covariance structure used when sampling actions: SIGMA_I is an isotropic covariance (scaled identity), DIAG_A is diagonal over action dimensions, FULL_A is a full covariance over action dimensions, and FULL_HA is a full covariance over both horizon and action dimensions.

SIGMA_I = 0
DIAG_A = 1
FULL_A = 2
FULL_HA = 3
class ParallelMPPIConfig(
d_action: 'int',
action_lows: 'List[float]',
action_highs: 'List[float]',
action_horizon: 'int',
horizon: 'int',
n_iters: 'int',
cold_start_n_iters: 'Union[int, None]',
rollout_fn: 'RolloutBase',
tensor_args: 'TensorDeviceType',
use_cuda_graph: 'bool',
store_debug: 'bool',
debug_info: 'Any',
n_problems: 'int',
num_particles: 'Union[int, None]',
sync_cuda_time: 'bool',
use_coo_sparse: 'bool',
gamma: float,
sample_mode: curobo.opt.particle.particle_opt_base.SampleMode,
seed: int,
calculate_value: bool,
store_rollouts: bool,
init_mean: float,
init_cov: float,
base_action: curobo.opt.particle.parallel_mppi.BaseActionType,
step_size_mean: float,
step_size_cov: float,
null_act_frac: float,
squash_fn: curobo.opt.particle.particle_opt_utils.SquashType,
cov_type: curobo.opt.particle.parallel_mppi.CovType,
sample_params: curobo.util.sample_lib.SampleConfig,
update_cov: bool,
random_mean: bool,
beta: float,
alpha: float,
kappa: float,
sample_per_problem: bool,
)

Bases: ParticleOptConfig

init_mean: float
init_cov: float
base_action: BaseActionType
step_size_mean: float
step_size_cov: float
null_act_frac: float
squash_fn: SquashType
cov_type: CovType
sample_params: SampleConfig
update_cov: bool
random_mean: bool
beta: float
alpha: float
gamma: float
kappa: float
sample_per_problem: bool
static create_data_dict(
data_dict: Dict,
rollout_fn: RolloutBase,
tensor_args: TensorDeviceType = TensorDeviceType(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
child_dict: Dict | None = None,
)

Helper function to create a dictionary from optimizer parameters and the rollout class.

Parameters:
  • data_dict – optimizer parameters dictionary.

  • rollout_fn – rollout function.

  • tensor_args – tensor device to use.

  • child_dict – new dictionary where parameters will be stored.

Returns:

Dictionary with parameters to create an OptimizerConfig.
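
A minimal usage sketch, assuming an already-initialized RolloutBase instance named rollout_fn; the parameter values are illustrative, and in practice data_dict must supply every field of ParallelMPPIConfig that is not derived from the rollout:

    from curobo.opt.particle.parallel_mppi import ParallelMPPIConfig
    from curobo.types.base import TensorDeviceType

    # Hypothetical optimizer parameters; keys mirror ParallelMPPIConfig fields.
    mppi_params = {"n_iters": 5, "num_particles": 400, "gamma": 0.99}

    # Merge user parameters with rollout-derived values into one dictionary,
    # then construct the config dataclass from it.
    config_dict = ParallelMPPIConfig.create_data_dict(
        mppi_params, rollout_fn, tensor_args=TensorDeviceType()
    )
    config = ParallelMPPIConfig(**config_dict)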

sample_mode: SampleMode
seed: int
calculate_value: bool
store_rollouts: bool
d_action: int

Number of optimization variables per timestep.

action_lows: List[float]

Lower bound for optimization variables.

action_highs: List[float]

Upper bound for optimization variables.

action_horizon: int
horizon: int

Number of timesteps in the optimization state; total variables = d_action * horizon.

n_iters: int

Number of iterations to run optimization

cold_start_n_iters: int | None

Number of iterations to run on the first optimization call. Setting this to None will use n_iters. This parameter is useful in MPC-like settings, where many iterations are needed during initialization (cold start) and only a few afterwards (warm start).

rollout_fn: RolloutBase

Rollout function to use for computing cost, given optimization variables.

tensor_args: TensorDeviceType

Tensor device to use for optimization.

use_cuda_graph: bool

Capture each optimization iteration in a CUDA graph and replay the graph instead of using eager execution. Enabling this can make optimization up to 10x faster, but changing control flow, tensor shapes, or problem type is then not allowed.
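
For background, the capture-and-replay pattern in plain PyTorch looks roughly like the sketch below (this is not cuRobo's internal code); it also shows why shapes must stay fixed, since replay reuses the memory captured for the static tensors:

    import torch

    static_x = torch.randn(64, 32, device="cuda")
    weight = torch.randn(32, 32, device="cuda")

    # Warm up on a side stream so kernels and workspaces are initialized
    # before capture (standard PyTorch practice).
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_y = static_x @ weight
    torch.cuda.current_stream().wait_stream(s)

    # Capture one iteration into a graph.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_y = static_x @ weight

    # Replay: copy new data into the captured input buffer (same shape) and rerun.
    static_x.copy_(torch.randn(64, 32, device="cuda"))
    g.replay()  # static_y now holds the result for the new input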

store_debug: bool

Record debugging data such as optimization variables and cost at every iteration. Enabling this will disable CUDA graph capture.

debug_info: Any

Use this to record additional attributes from rollouts.

n_problems: int

Number of parallel problems to optimize.

num_particles: int | None

Number of particles to use per problem. Common optimization solvers use many particles to optimize a single problem. E.g., MPPI rolls out many parallel samples and computes a weighted mean. In cuRobo, Quasi-Newton solvers use particles to run many line search magnitudes. Total optimization batch size = n_problems * num_particles.

sync_cuda_time: bool

Synchronize device before computing solver time.

use_coo_sparse: bool

A matmul with a sparse COO tensor is used to create particles for each problem index, saving memory and compute. Some versions of PyTorch do not support sparse COO tensors, specifically during torch profiler runs. Set this to False to use a standard dense tensor.
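
A rough sketch of that mapping with illustrative names (not cuRobo's internal variables): a (n_particles x n_problems) selection matrix, sparse or dense, repeats each problem's data for all of its particles:

    import torch

    n_problems, particles_per_problem = 3, 4
    n_particles = n_problems * particles_per_problem

    # Selection matrix: row i has a single 1 in the column of the problem
    # that particle i belongs to.
    rows = torch.arange(n_particles)
    cols = rows // particles_per_problem
    kernel = torch.sparse_coo_tensor(
        torch.stack([rows, cols]), torch.ones(n_particles), (n_particles, n_problems)
    )

    per_problem = torch.randn(n_problems, 7)             # e.g., one start state per problem
    per_particle = torch.sparse.mm(kernel, per_problem)  # (n_particles, 7)

    # Dense fallback, matching what use_coo_sparse=False would conceptually do:
    per_particle_dense = kernel.to_dense() @ per_problem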

class ParallelMPPI(
config: ParallelMPPIConfig | None = None,
)

Bases: ParticleOptBase, ParallelMPPIConfig

Parallel MPPI optimization solver.

Parameters:

config – dataclass holding the solver parameters.

get_rollouts()
reset_distribution()

Reset control distribution

_compute_total_cost(costs)

Calculate weights using exponential utility

_exp_util(total_costs)
_exp_util_from_costs(costs)
_compute_mean(w, actions)
_compute_mean_covariance(
costs,
actions,
)
_compute_covariance(
w,
actions,
)
_update_cov_scale(
new_cov=None,
)
_update_distribution(
trajectories: Trajectory,
)

Update current control distribution using rollout trajectories

Parameters:

trajectories (dict) – Rollout trajectories, containing the following fields:

  • observations (torch.Tensor) – observations along rollouts.

  • actions (torch.Tensor) – actions sampled from the control distribution along rollouts.

  • costs (torch.Tensor) – step costs along rollouts.
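
The core of this update, sketched below for the diagonal-covariance case under the standard MPPI formulation; names mirror the config fields above, while cuRobo's fused implementations (see the jit_* helpers at the end of this page) may differ in detail:

    import torch

    def mppi_update(costs, actions, mean, cov, gamma, beta,
                    step_size_mean, step_size_cov, kappa):
        # costs: (n_particles, horizon), actions: (n_particles, horizon, d_action)
        horizon = costs.shape[-1]
        gamma_seq = gamma ** torch.arange(horizon, dtype=costs.dtype, device=costs.device)
        total_cost = (costs * gamma_seq).sum(dim=-1)        # (n_particles,)

        # Exponential utility: w_i proportional to exp(-total_cost_i / beta).
        w = torch.softmax(-total_cost / beta, dim=-1)

        # Weighted mean of the sampled actions, blended with the old mean.
        new_mean = torch.einsum("p,phd->hd", w, actions)
        mean = (1.0 - step_size_mean) * mean + step_size_mean * new_mean

        # Weighted diagonal covariance of deviations, blended and regularized by kappa.
        delta = actions - mean
        new_cov = torch.einsum("p,phd->hd", w, delta * delta)
        cov = (1.0 - step_size_cov) * cov + step_size_cov * new_cov + kappa
        return mean, cov

The step sizes blend new estimates into the previous distribution for stability, and kappa keeps the covariance from collapsing to zero.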

sample_actions(init_act)

Sample actions from current control distribution

update_seed(init_act)
update_init_mean(init_mean)
reset_mean()
reset_covariance()
_get_action_seq(
mode: SampleMode,
)

Get the action sequence to execute on the system based on the current control distribution.

Parameters:

mode – how to choose the action sequence: 'mean' plays the mean action and 'sample' samples from the distribution.

generate_noise(
shape,
base_seed=None,
)

Generate correlated noise samples using an autoregressive process.
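
As an illustration of the idea, a first-order autoregressive sketch; the actual filter and its coefficients live in curobo.util.sample_lib, so beta_corr below is an assumed stand-in:

    import torch

    def ar1_noise(n_samples, horizon, d_action, beta_corr=0.7, generator=None):
        # eps_t = beta_corr * eps_{t-1} + sqrt(1 - beta_corr^2) * white_t,
        # which correlates noise across adjacent timesteps.
        white = torch.randn(n_samples, horizon, d_action, generator=generator)
        noise = torch.empty_like(white)
        noise[:, 0] = white[:, 0]
        for t in range(1, horizon):
            noise[:, t] = beta_corr * noise[:, t - 1] + (1 - beta_corr ** 2) ** 0.5 * white[:, t]
        return noise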

_calc_val(
trajectories: Trajectory,
)

Calculate the value of a state given rollouts from a policy.

reset()

Reset the optimizer

property squashed_mean
property full_cov
property full_inv_cov
property full_scale_tril
property entropy
reset_seed()

Reset seeds.

update_samples()
generate_rollouts(
init_act=None,
)

Samples a batch of actions, rolls out trajectories for each particle, and returns the resulting observations, costs, and actions.

Parameters:

init_act – optional initial action used to seed the control distribution; if None, sampling uses the current distribution.

_call_cuda_opt_iters(
init_act: Tensor,
)
_initialize_cuda_graph(
init_act: Tensor,
shift_steps=0,
)
_optimize(
init_act: Tensor,
shift_steps=0,
n_iters=None,
)

Optimize for the best action at the current state.

Parameters:
  • init_act (torch.Tensor) – initial value of the optimization variables.

  • shift_steps – shift variables along action_horizon; useful in an MPC warm-start setting.

  • n_iters – override the number of iterations to run.

Returns:

Optimized action sequence as a tensor of shape [n_problems, action_horizon, d_action].

_run_opt_iters(
init_act: Tensor,
shift_steps=0,
n_iters=None,
)
abstract _shift(shift_steps=0)

Shift the variables in the solver to hot-start the next timestep.

Parameters:

shift_steps – Number of timesteps to shift.

_update_problem_kernel(
n_problems: int,
num_particles: int,
)

Update matrix used to map problem index to number of particles.

Parameters:
  • n_problems – Number of optimization problems.

  • num_particles – Number of particles per problem.

check_convergence()

Checks if the controller has converged. Returns False by default.

static create_data_dict(
data_dict: Dict,
rollout_fn: RolloutBase,
tensor_args: TensorDeviceType = TensorDeviceType(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
child_dict: Dict | None = None,
)

Helper function to create a dictionary from optimizer parameters and the rollout class.

Parameters:
  • data_dict – optimizer parameters dictionary.

  • rollout_fn – rollout function.

  • tensor_args – tensor device to use.

  • child_dict – new dictionary where parameters will be stored.

Returns:

Dictionary with parameters to create an OptimizerConfig.

get_all_rollout_instances() List[RolloutBase]

Get all instances of Rollout class in the optimizer.

get_nproblem_tensor(x)

Takes an input tensor of shape (n_problems, ...) and converts it into shape (n_particles, ...).
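
Assuming those semantics, a dense equivalent is a repeat along the problem dimension:

    import torch

    per_problem = torch.randn(3, 7)                     # (n_problems, ...)
    per_particle = per_problem.repeat_interleave(4, 0)  # (n_particles, ...) with 4 particles per problem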

optimize(
opt_tensor: Tensor,
shift_steps=0,
n_iters=None,
) Tensor

Find a solution through optimization given the initial values for variables.

Parameters:
  • opt_tensor – Initial value of optimization variables. Shape: [n_problems, action_horizon, d_action]

  • shift_steps – Shift variables along action_horizon. Useful in an MPC warm-start setting.

  • n_iters – Override number of iterations to run optimization.

Returns:

Optimized values returned as a tensor of shape [n_problems, action_horizon, d_action].
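
A usage sketch, assuming a config built as in the create_data_dict example above:

    import torch
    from curobo.opt.particle.parallel_mppi import ParallelMPPI

    solver = ParallelMPPI(config)  # config: ParallelMPPIConfig from the earlier sketch

    # Zero seed; shapes follow the documented contract.
    init = torch.zeros(
        config.n_problems, config.action_horizon, config.d_action,
        device=config.tensor_args.device, dtype=config.tensor_args.dtype,
    )
    best_actions = solver.optimize(init, shift_steps=0)  # [n_problems, action_horizon, d_action]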

reset_cuda_graph()

Reset CUDA graphs. This currently does not work; as a workaround, create a new instance.

reset_shape()

Reset any flags in the rollout class. Useful for reinitializing tensors for a new shape.

update_nproblems(
n_problems,
)

Update the number of problems that need to be optimized.

Parameters:

n_problems – number of problems.

update_num_particles_per_problem(
num_particles_per_problem,
)
update_params(
goal: Goal,
)

Update parameters in the curobo.rollout.rollout_base.RolloutBase instance.

Parameters:

goal – parameters to update rollout instance.

d_action: int

Number of optimization variables per timestep.

action_lows: List[float]

Lower bound for optimization variables.

action_highs: List[float]

Upper bound for optimization variables.

action_horizon: int
horizon: int

Number of timesteps in the optimization state; total variables = d_action * horizon.

n_iters: int

Number of iterations to run optimization

cold_start_n_iters: int | None

Number of iterations to run on the first optimization call. Setting this to None will use n_iters. This parameter is useful in MPC-like settings, where many iterations are needed during initialization (cold start) and only a few afterwards (warm start).

rollout_fn: RolloutBase

Rollout function to use for computing cost, given optimization variables.

tensor_args: TensorDeviceType

Tensor device to use for optimization.

use_cuda_graph: bool

Capture each optimization iteration in a CUDA graph and replay the graph instead of using eager execution. Enabling this can make optimization up to 10x faster, but changing control flow, tensor shapes, or problem type is then not allowed.

store_debug: bool

Record debugging data such as optimization variables and cost at every iteration. Enabling this will disable CUDA graph capture.

debug_info: Any

Use this to record additional attributes from rollouts.

n_problems: int

Number of parallel problems to optimize.

num_particles: int | None

Number of particles to use per problem. Common optimization solvers use many particles to optimize a single problem. E.g., MPPI rolls out many parallel samples and computes a weighted mean. In cuRobo, Quasi-Newton solvers use particles to run many line search magnitudes. Total optimization batch size = n_problems * num_particles.

sync_cuda_time: bool

Synchronize device before computing solver time.

use_coo_sparse: bool

A matmul with a sparse COO tensor is used to create particles for each problem index, saving memory and compute. Some versions of PyTorch do not support sparse COO tensors, specifically during torch profiler runs. Set this to False to use a standard dense tensor.

init_mean: float
init_cov: float
base_action: BaseActionType
step_size_mean: float
step_size_cov: float
null_act_frac: float
squash_fn: SquashType
cov_type: CovType
sample_params: SampleConfig
update_cov: bool
random_mean: bool
beta: float
alpha: float
gamma: float
kappa: float
sample_per_problem: bool
sample_mode: SampleMode
seed: int
calculate_value: bool
store_rollouts: bool
jit_calculate_exp_util(
beta: float,
total_costs,
)
jit_calculate_exp_util_from_costs(
costs,
gamma_seq,
beta: float,
)
jit_compute_total_cost(gamma_seq, costs)
jit_diag_a_cov_update(
w,
actions,
mean_action,
)
jit_blend_cov(
cov_action,
cov_update,
step_size_cov: float,
kappa: float,
)
jit_blend_mean(
mean_action,
new_mean,
step_size_mean: float,
)
jit_mean_cov_diag_a(
costs,
actions,
gamma_seq,
mean_action,
cov_action,
step_size_mean: float,
step_size_cov: float,
kappa: float,
beta: float,
)