curobo.types.math module¶
- class Pose(
- position: Tensor | None = None,
- quaternion: Tensor | None = None,
- rotation: Tensor | None = None,
- batch: int = 1,
- n_goalset: int = 1,
- name: str = 'ee_link',
- normalize_rotation: bool = True,
Bases:
Sequence
Pose representation used in CuRobo. You can initialize a pose by calling pose = Pose(position, quaternion).
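Example (a minimal construction sketch; the batch x 3 position shape, wxyz quaternion order, and the TensorDeviceType import path are assumptions rather than details stated on this page)::

import torch
from curobo.types.base import TensorDeviceType
from curobo.types.math import Pose

tensor_args = TensorDeviceType()
# identity orientation at the origin, batch size 1
position = torch.zeros((1, 3), device=tensor_args.device, dtype=tensor_args.dtype)
quaternion = torch.tensor([[1.0, 0.0, 0.0, 0.0]], device=tensor_args.device, dtype=tensor_args.dtype)
pose = Pose(position=position, quaternion=quaternion)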
- n_goalset: int = 1¶
A goalset will be initialized from the input when the position tensor has shape batch x n_goalset x 3.
- normalize_rotation: bool = True¶
Quaternion input will be normalized when this flag is enabled. This is recommended when a pose comes from an external source, as some programs do not send normalized quaternions.
- get_rotation()¶
- unsqueeze(dim=-1)¶
- apply_kernel(kernel_mat)¶
- classmethod from_list(
- pose: List[float],
- tensor_args: TensorDeviceType = TensorDeviceType(device=device(type='cuda', index=0), dtype=torch.float32, collision_geometry_dtype=torch.float32, collision_gradient_dtype=torch.float32, collision_distance_dtype=torch.float32),
- q_xyzw=False,
- classmethod from_batch_list(
- pose: List[List[float]],
- tensor_args: TensorDeviceType,
- q_xyzw=False,
- to_list(q_xyzw=False)¶
- tolist(q_xyzw=False)¶
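A hedged round-trip sketch for the list converters (the 7-element [x, y, z, qw, qx, qy, qz] layout is inferred from the q_xyzw=False default and is an assumption; pass q_xyzw=True if your list stores the quaternion as xyzw)::

pose = Pose.from_list([0.5, 0.0, 0.3, 1.0, 0.0, 0.0, 0.0], tensor_args)
values = pose.to_list()  # back to a 7-element [x, y, z, qw, qx, qy, qz] list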
- clone()¶
- to(
- tensor_args: TensorDeviceType | None = None,
- device: device | None = None,
- get_numpy_matrix()¶
- get_pose_vector()¶
- copy_(
- pose: Pose,
Copies pose data from another memory buffer. This will create a new instance if the buffers are not the same shape.
- Parameters:
pose (Pose) – pose to copy data from.
- angular_distance( )¶
This function computes the angular distance phi_3.
See Huynh, Du Q. “Metrics for 3D rotations: Comparison and analysis.” Journal of Mathematical Imaging and Vision 35 (2009): 155-164.
- Parameters:
goal_quat – goal quaternion.
current_quat – current quaternion.
- Returns:
Angular distance in range [0,1]
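For reference, Huynh (2009) defines this metric on unit quaternions as shown below; the [0, 1] range documented above suggests the implementation rescales the raw arccos value, so treat the exact normalization as an assumption:

\phi_3(q_1, q_2) = \arccos\left( \left| \langle q_1, q_2 \rangle \right| \right), \qquad q_1, q_2 \in S^3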
- transform_point(
- points: Tensor,
- out_buffer: Tensor | None = None,
- gp_out: Tensor | None = None,
- gq_out: Tensor | None = None,
- gpt_out: Tensor | None = None,
- transform_points(
- points: Tensor,
- out_buffer: Tensor | None = None,
- gp_out: Tensor | None = None,
- gq_out: Tensor | None = None,
- gpt_out: Tensor | None = None,
- batch_transform_points(
- points: Tensor,
- out_buffer: Tensor | None = None,
- gp_out: Tensor | None = None,
- gq_out: Tensor | None = None,
- gpt_out: Tensor | None = None,
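A hedged usage sketch for the point-transform helpers (the (n, 3) point layout and the role of out_buffer as an optional preallocated output are assumptions)::

points = torch.rand((100, 3), device=tensor_args.device, dtype=tensor_args.dtype)
transformed = pose.transform_points(points)  # expected to match the (100, 3) input shape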
- property shape¶
- _abc_impl = <_abc._abc_data object>¶
- _is_protocol = False¶
- count(
- value,
- index(
- value[,
- start[,
- stop]])
Raises ValueError if the value is not present.
Supporting start and stop arguments is optional, but recommended.
- quat_multiply(q1, q2, q_res)¶
- angular_distance_phi3(
- goal_quat,
- current_quat,
This function computes the angular distance phi_3.
See Huynh, Du Q. “Metrics for 3D rotations: Comparison and analysis.” Journal of Mathematical Imaging and Vision 35 (2009): 155-164.
- Parameters:
goal_quat – goal quaternion.
current_quat – current quaternion.
- Returns:
Angular distance in range [0,1]
- class OrientationError(*args, **kwargs)¶
Bases:
Function
- static geodesic_distance(
- goal_quat,
- current_quat,
- quat_res,
- static forward(
- ctx,
- goal_quat,
- current_quat,
- quat_res,
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See Combined or separate forward() and setup_context() for more details
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass

The forward no longer accepts a ctx argument. Instead, you must also override the torch.autograd.Function.setup_context staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward. See Extending torch.autograd for more details.
The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward if they are intended to be used in jvp.
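As an illustration of Usage 1 (a self-contained sketch, not part of curobo), a custom Function that stores its input on ctx in forward and consumes it in backward::

import torch
from torch.autograd import Function

class Square(Function):
    @staticmethod
    def forward(ctx, x):
        # Usage 1: forward receives ctx and sets it up itself.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2.0 * x * grad_out

x = torch.tensor(3.0, requires_grad=True)
Square.apply(x).backward()  # x.grad == 6.0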
- static backward(ctx, grad_out)¶
Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as the forward returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward. Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward will have ctx.needs_input_grad[0] = True if the first input to forward needs gradient computed w.r.t. the output.
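A small illustrative backward (hypothetical, not curobo's OrientationError) that consults ctx.needs_input_grad and returns None for gradients that are not required::

import torch
from torch.autograd import Function

class Scale(Function):
    @staticmethod
    def forward(ctx, x, w):
        ctx.save_for_backward(x, w)
        return x * w

    @staticmethod
    def backward(ctx, grad_out):
        x, w = ctx.saved_tensors
        grad_x = grad_w = None
        if ctx.needs_input_grad[0]:  # gradient w.r.t. x only if required
            grad_x = grad_out * w
        if ctx.needs_input_grad[1]:  # gradient w.r.t. w only if required
            grad_w = grad_out * x
        return grad_x, grad_w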
- _backward_cls¶
alias of
OrientationErrorBackward
- _get_compiled_autograd_symints()¶
- _is_compiled_autograd_tracing()¶
- _materialize_non_diff_grads¶
- _raw_saved_tensors¶
- static _register_hook(
- backward_hooks,
- hook,
- _register_hook_dict()¶
- _sequence_nr()¶
- classmethod apply(*args, **kwargs)¶
- dirty_tensors¶
- generate_vmap_rule = False¶
- is_traceable = False¶
Bool that specifies if PyTorch should attempt to autogenerate torch.vmap support for this autograd.Function. You may set this to True only if this autograd.Function's forward, backward, and jvp (if they exist) are written using PyTorch operations; otherwise, please override torch.autograd.Function.vmap to add support for torch.vmap.
Please see Extending torch.func with autograd.Function for more details.
- static jvp(ctx, *grad_inputs) → Any¶
Defines a formula for differentiating the operation with forward mode automatic differentiation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many inputs as the forward got (None will be passed in for non-tensor inputs of the forward function), and it should return as many tensors as there were outputs to forward. Each argument is the gradient w.r.t. the given input, and each returned value should be the gradient w.r.t. the corresponding output. If an output is not a Tensor or the function is not differentiable with respect to that output, you can just pass None as a gradient for that input.
You can use the ctx object to pass any value from the forward to this function.
- mark_dirty(
- *args: Tensor,
Marks given tensors as modified in an in-place operation.
This should be called at most once, only from inside the forward method, and all arguments should be inputs.
Every tensor that's been modified in-place in a call to forward should be given to this function, to ensure correctness of our checks. It doesn't matter whether the function is called before or after modification.
- Examples::
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_AUTOGRAD)
>>> class Inplace(Function):
>>>     @staticmethod
>>>     def forward(ctx, x):
>>>         x_npy = x.numpy()  # x_npy shares storage with x
>>>         x_npy += 1
>>>         ctx.mark_dirty(x)
>>>         return x
>>>
>>>     @staticmethod
>>>     @once_differentiable
>>>     def backward(ctx, grad_output):
>>>         return grad_output
>>>
>>> a = torch.tensor(1., requires_grad=True, dtype=torch.double).clone()
>>> b = a * a
>>> Inplace.apply(a)  # This would lead to wrong gradients!
>>> # but the engine would not know unless we mark_dirty
>>> # xdoctest: +SKIP
>>> b.backward()  # RuntimeError: one of the variables needed for gradient
>>> # computation has been modified by an inplace operation
- mark_non_differentiable(
- *args: Tensor,
Marks outputs as non-differentiable.
This should be called at most once, only from inside the forward method, and all arguments should be tensor outputs.
This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a gradient for each output in backward, but it's always going to be a zero tensor with the same shape as the shape of a corresponding output.
- This is used e.g. for indices returned from a sort. See example::
>>> class Func(Function):
>>>     @staticmethod
>>>     def forward(ctx, x):
>>>         sorted, idx = x.sort()
>>>         ctx.mark_non_differentiable(idx)
>>>         ctx.save_for_backward(x, idx)
>>>         return sorted, idx
>>>
>>>     @staticmethod
>>>     @once_differentiable
>>>     def backward(ctx, g1, g2):  # still need to accept g2
>>>         x, idx = ctx.saved_tensors
>>>         grad_input = torch.zeros_like(x)
>>>         grad_input.index_add_(0, idx, g1)
>>>         return grad_input
- mark_shared_storage(
- *pairs,
- materialize_grads¶
- maybe_clear_saved_tensors()¶
- metadata¶
- name()¶
- needs_input_grad¶
- next_functions¶
- non_differentiable¶
- register_hook()¶
- register_prehook()¶
- requires_grad¶
- save_for_backward(
- *tensors: Tensor,
Saves given tensors for a future call to backward.
save_for_backward should be called at most once, only from inside the forward method, and only with tensors.
All tensors intended to be used in the backward pass should be saved with save_for_backward (as opposed to directly on ctx) to prevent incorrect gradients and memory leaks, and enable the application of saved tensor hooks. See torch.autograd.graph.saved_tensors_hooks.
Note that if intermediary tensors, tensors that are neither inputs nor outputs of forward, are saved for backward, your custom Function may not support double backward. Custom Functions that do not support double backward should decorate their backward method with @once_differentiable so that performing double backward raises an error. If you'd like to support double backward, you can either recompute intermediaries based on the inputs during backward or return the intermediaries as the outputs of the custom Function. See the double backward tutorial for more details.
In backward, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren't used in any in-place operation that modified their content.
Arguments can also be None. This is a no-op.
See Extending torch.autograd for more details on how to use this method.
- Example::
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_AUTOGRAD)
>>> class Func(Function):
>>>     @staticmethod
>>>     def forward(ctx, x: torch.Tensor, y: torch.Tensor, z: int):
>>>         w = x * z
>>>         out = x * y + y * z + w * y
>>>         ctx.save_for_backward(x, y, w, out)
>>>         ctx.z = z  # z is not a tensor
>>>         return out
>>>
>>>     @staticmethod
>>>     @once_differentiable
>>>     def backward(ctx, grad_out):
>>>         x, y, w, out = ctx.saved_tensors
>>>         z = ctx.z
>>>         gx = grad_out * (y + y * z)
>>>         gy = grad_out * (x + z + w)
>>>         gz = None
>>>         return gx, gy, gz
>>>
>>> a = torch.tensor(1., requires_grad=True, dtype=torch.double)
>>> b = torch.tensor(2., requires_grad=True, dtype=torch.double)
>>> c = 4
>>> d = Func.apply(a, b, c)
- save_for_forward(
- *tensors: Tensor,
Saves given tensors for a future call to jvp.
save_for_forward should be called only once, from inside the forward method, and only with tensors.
In jvp, saved objects can be accessed through the saved_tensors attribute.
Arguments can also be None. This is a no-op.
See Extending torch.autograd for more details on how to use this method.
- Example::
>>> # xdoctest: +SKIP
>>> class Func(torch.autograd.Function):
>>>     @staticmethod
>>>     def forward(ctx, x: torch.Tensor, y: torch.Tensor, z: int):
>>>         ctx.save_for_backward(x, y)
>>>         ctx.save_for_forward(x, y)
>>>         ctx.z = z
>>>         return x * y * z
>>>
>>>     @staticmethod
>>>     def jvp(ctx, x_t, y_t, _):
>>>         x, y = ctx.saved_tensors
>>>         z = ctx.z
>>>         return z * (y * x_t + x * y_t)
>>>
>>>     @staticmethod
>>>     def vjp(ctx, grad_out):
>>>         x, y = ctx.saved_tensors
>>>         z = ctx.z
>>>         return z * grad_out * y, z * grad_out * x, None
>>>
>>> a = torch.tensor(1., requires_grad=True, dtype=torch.double)
>>> t = torch.tensor(1., dtype=torch.double)
>>> b = torch.tensor(2., requires_grad=True, dtype=torch.double)
>>> c = 4
>>>
>>> with fwAD.dual_level():
>>>     a_dual = fwAD.make_dual(a, t)
>>>     d = Func.apply(a_dual, b, c)
- saved_for_forward¶
- saved_tensors¶
- saved_variables¶
- set_materialize_grads(
- value: bool,
Sets whether to materialize grad tensors. Default is True.
This should be called only from inside the forward method.
If True, undefined grad tensors will be expanded to tensors full of zeros prior to calling the backward and jvp methods.
- Example::
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_AUTOGRAD)
>>> class SimpleFunc(Function):
>>>     @staticmethod
>>>     def forward(ctx, x):
>>>         return x.clone(), x.clone()
>>>
>>>     @staticmethod
>>>     @once_differentiable
>>>     def backward(ctx, g1, g2):
>>>         return g1 + g2  # No check for None necessary
>>>
>>> # We modify SimpleFunc to handle non-materialized grad outputs
>>> class Func(Function):
>>>     @staticmethod
>>>     def forward(ctx, x):
>>>         ctx.set_materialize_grads(False)
>>>         ctx.save_for_backward(x)
>>>         return x.clone(), x.clone()
>>>
>>>     @staticmethod
>>>     @once_differentiable
>>>     def backward(ctx, g1, g2):
>>>         x, = ctx.saved_tensors
>>>         grad_input = torch.zeros_like(x)
>>>         if g1 is not None:  # We must check for None now
>>>             grad_input += g1
>>>         if g2 is not None:
>>>             grad_input += g2
>>>         return grad_input
>>>
>>> a = torch.tensor(1., requires_grad=True)
>>> b, _ = Func.apply(a)  # induces g2 to be undefined
- static setup_context(ctx, inputs, output) → Any¶
There are two ways to define the forward pass of an autograd.Function.
Either:
1. Override forward with the signature forward(ctx, *args, **kwargs). setup_context is not overridden. Setting up the ctx for backward happens inside the forward.
2. Override forward with the signature forward(*args, **kwargs) and override setup_context. Setting up the ctx for backward happens inside setup_context (as opposed to inside the forward).
See torch.autograd.Function.forward and Extending torch.autograd for more details.
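A minimal sketch of the second style (illustrative only, not part of curobo): forward takes no ctx, and setup_context receives the inputs and output and populates the ctx::

import torch
from torch.autograd import Function

class Mul(Function):
    @staticmethod
    def forward(x, y):  # no ctx in this style
        return x * y

    @staticmethod
    def setup_context(ctx, inputs, output):
        x, y = inputs
        ctx.save_for_backward(x, y)

    @staticmethod
    def backward(ctx, grad_out):
        x, y = ctx.saved_tensors
        return grad_out * y, grad_out * x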
- to_save¶
- static vjp(ctx, *grad_outputs) → Any¶
Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as the forward returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward. Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward will have ctx.needs_input_grad[0] = True if the first input to forward needs gradient computed w.r.t. the output.
- static vmap(
- info,
- in_dims,
- *args,
Defines a rule for the behavior of this autograd.Function underneath torch.vmap. For a torch.autograd.Function to support torch.vmap, you must either override this staticmethod, or set generate_vmap_rule to True (you may not do both).
If you choose to override this staticmethod, it must accept:
- an info object as the first argument. info.batch_size specifies the size of the dimension being vmapped over, while info.randomness is the randomness option passed to torch.vmap.
- an in_dims tuple as the second argument. For each arg in args, in_dims has a corresponding Optional[int]. It is None if the arg is not a Tensor or if the arg is not being vmapped over; otherwise, it is an integer specifying what dimension of the Tensor is being vmapped over.
- *args, which is the same as the args to forward.
The return of the vmap staticmethod is a tuple of (output, out_dims). Similar to in_dims, out_dims should be of the same structure as output and contain one out_dim per output that specifies if the output has the vmapped dimension and what index it is in.
Please see Extending torch.func with autograd.Function for more details.
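A hedged sketch of an explicit vmap rule (illustrative only; it uses the separate setup_context style, which the torch.func transforms expect): an elementwise op can pass the batched tensor straight through and report the unchanged batch dimension::

import torch
from torch.autograd import Function

class Sin(Function):
    @staticmethod
    def forward(x):
        return torch.sin(x)

    @staticmethod
    def setup_context(ctx, inputs, output):
        (x,) = inputs
        ctx.save_for_backward(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * torch.cos(x)

    @staticmethod
    def vmap(info, in_dims, x):
        # sin is elementwise, so the batched tensor can be used directly;
        # the output keeps the same batched dimension index as the input.
        return torch.sin(x), in_dims[0]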