Using in Python

We will go over Python snippets that access some of the core functionalities in cuRobo.

Forward Kinematics

Forward kinematics maps a robot’s joint configuration to the Cartesian poses of the robot’s links and the positions of its collision spheres. To compute forward kinematics from a robot’s URDF file, use the snippet below:

# Third Party
import torch

# cuRobo
from curobo.cuda_robot_model.cuda_robot_model import CudaRobotModel, CudaRobotModelConfig
from curobo.types.base import TensorDeviceType
from curobo.types.robot import RobotConfig
from curobo.util_file import get_robot_path, join_path, load_yaml

# convenience function to store tensor type and device
tensor_args = TensorDeviceType()

# this example loads the URDF path from a configuration file; you can also pass a path directly
# load the URDF, the base frame, and the end-effector frame:
config_file = load_yaml(join_path(get_robot_path(), "franka.yml"))

urdf_file = config_file["robot_cfg"]["kinematics"][
    "urdf_path"
]  # send a global path starting with "/"
base_link = config_file["robot_cfg"]["kinematics"]["base_link"]
ee_link = config_file["robot_cfg"]["kinematics"]["ee_link"]

# Generate a robot configuration from the URDF path, base frame, and end-effector frame

robot_cfg = RobotConfig.from_basic(urdf_file, base_link, ee_link, tensor_args)

kin_model = CudaRobotModel(robot_cfg.kinematics)

# compute forward kinematics:
# note: torch.rand samples in [0, 1), which may fall outside the joint limits
q = torch.rand((10, kin_model.get_dof()), **(tensor_args.as_torch_dict()))
out = kin_model.get_state(q)

A robot loaded directly from a URDF file has no collision-sphere information, so collision checking is not possible. To add collision spheres, convert your robot’s URDF to a cuRobo robot configuration file by following Configuring a New Robot.

You can load forward kinematics from a robot configuration file as shown below:

# Third Party
import torch

# cuRobo
from curobo.cuda_robot_model.cuda_robot_model import CudaRobotModel, CudaRobotModelConfig
from curobo.types.base import TensorDeviceType
from curobo.types.robot import RobotConfig
from curobo.util_file import get_robot_path, join_path, load_yaml


tensor_args = TensorDeviceType()

config_file = load_yaml(join_path(get_robot_path(), "franka.yml"))["robot_cfg"]
robot_cfg = RobotConfig.from_dict(config_file, tensor_args)

kin_model = CudaRobotModel(robot_cfg.kinematics)

# compute forward kinematics:
# note: torch.rand samples in [0, 1), which may fall outside the joint limits
q = torch.rand((10, kin_model.get_dof()), **(tensor_args.as_torch_dict()))
out = kin_model.get_state(q)
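
The returned state bundles the end-effector pose, link poses, and collision-sphere positions. Below is a minimal sketch of reading it; the attribute names are assumptions based on curobo.cuda_robot_model.types.CudaRobotModelState, so verify them against your installed version.

# end-effector pose for each of the 10 sampled configurations:
ee_position = out.ee_position  # (10, 3), in meters
ee_quaternion = out.ee_quaternion  # (10, 4), quaternion as qw, qx, qy, qz

# link poses and collision-sphere positions (assumed attribute names);
# spheres are populated here since the robot was loaded from a cuRobo config:
links_position = out.links_position  # (10, n_links, 3)
spheres = out.link_spheres_tensor  # (10, n_spheres, 4): x, y, z, radius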

Collision Checking

cuRobo provides the convenience class curobo.wrap.model.robot_world.RobotWorld for computing the signed distance between a robot and the world. It can also query signed distance for arbitrary spheres, robot self-collision distance, and robot kinematics. We explain the components of this class, and how to use it in a learning pipeline, on the Using in a Neural Network page. Collision World Representation covers building the world model in more detail, including tensorized world models for querying signed distance for robots working in different worlds, and the available collision checkers. Some quick collision-checking examples follow.

Create an instance of curobo.wrap.model.robot_world.RobotWorld with a Franka Panda robot and a world built from a dictionary:

# Third Party
import torch
# cuRobo
from curobo.types.base import TensorDeviceType
from curobo.wrap.model.robot_world import RobotWorld, RobotWorldConfig

robot_file = "franka.yml"

# create a world from a dictionary of objects
# cuboid: {} # dictionary of objects that are cuboids
# mesh: {} # dictionary of objects that are meshes
world_config = {
    "cuboid": {
        "table": {"dims": [2, 2, 0.2], "pose": [0.4, 0.0, -0.1, 1, 0, 0, 0]},
        "cube_1": {"dims": [0.1, 0.1, 0.2], "pose": [0.4, 0.0, 0.5, 1, 0, 0, 0]},
    },
    "mesh": {
        "scene": {
            "pose": [1.5, 0.080, 1.6, 0.043, -0.471, 0.284, 0.834],
            "file_path": "scene/nvblox/srl_ur10_bins.obj",
        }
    },
}
tensor_args = TensorDeviceType()
config = RobotWorldConfig.load_from_config(
    robot_file, world_config, collision_activation_distance=0.0
)
curobo_fn = RobotWorld(config)

Query the signed distance for arbitrary spheres:

# create query spheres with shape (batch, horizon, n_spheres, 4),
# where the last dimension is [x, y, z, radius]
q_sph = torch.randn((10, 1, 1, 4), device=tensor_args.device, dtype=tensor_args.dtype)
q_sph[..., 3] = 0.2  # radius of spheres

d = curobo_fn.get_collision_distance(q_sph)

Compute the signed distance between the robot and the world:

q_s = curobo_fn.sample(5, mask_valid=False)
d_world, d_self = curobo_fn.get_world_self_collision_distance_from_joints(q_s)

This RobotWorld instance also provides kinematics queries, returning a curobo.cuda_robot_model.types.CudaRobotModelState:

state = curobo_fn.get_kinematics(q_s)

Note

cuRobo computes signed distance as penetration depth, so a positive distance means there is a collision. cuRobo currently does not report distance outside of collision; however, if you want a safety buffer, set collision_activation_distance greater than zero, which returns distances up to the specified value outside of collision.
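
For example, to report distances up to 5 cm outside of collision, pass a nonzero activation distance when building the configuration (the 0.05 m buffer below is a hypothetical value, reusing robot_file and world_config from the example above):

config = RobotWorldConfig.load_from_config(
    robot_file, world_config, collision_activation_distance=0.05
)
curobo_fn = RobotWorld(config)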

Inverse Kinematics

To run inverse kinematics given a URDF:

# Third Party
import torch

# cuRobo
from curobo.types.base import TensorDeviceType
from curobo.types.math import Pose
from curobo.types.robot import RobotConfig
from curobo.util_file import get_robot_configs_path, join_path, load_yaml
from curobo.wrap.reacher.ik_solver import IKSolver, IKSolverConfig


tensor_args = TensorDeviceType()

config_file = load_yaml(join_path(get_robot_configs_path(), "franka.yml"))
urdf_file = config_file["robot_cfg"]["kinematics"][
    "urdf_path"
]  # send a global path starting with "/"
base_link = config_file["robot_cfg"]["kinematics"]["base_link"]
ee_link = config_file["robot_cfg"]["kinematics"]["ee_link"]
robot_cfg = RobotConfig.from_basic(urdf_file, base_link, ee_link, tensor_args)

ik_config = IKSolverConfig.load_from_robot_config(
    robot_cfg,
    None,
    rotation_threshold=0.05,
    position_threshold=0.005,
    num_seeds=20,
    self_collision_check=False,
    self_collision_opt=False,
    tensor_args=tensor_args,
    use_cuda_graph=True,
)
ik_solver = IKSolver(ik_config)

q_sample = ik_solver.sample_configs(5000)
kin_state = ik_solver.fk(q_sample)
goal = Pose(kin_state.ee_position, kin_state.ee_quaternion)
result = ik_solver.solve_batch(goal)
q_solution = result.solution[result.success]

To obtain collision-free inverse kinematics:

# Third Party
import torch

# cuRobo
from curobo.geom.types import WorldConfig
from curobo.types.base import TensorDeviceType
from curobo.types.math import Pose
from curobo.types.robot import RobotConfig
from curobo.util_file import (
    get_robot_configs_path,
    get_world_configs_path,
    join_path,
    load_yaml,
)
from curobo.wrap.reacher.ik_solver import IKSolver, IKSolverConfig

tensor_args = TensorDeviceType()
world_file = "collision_cage.yml"

robot_file = "franka.yml"
robot_cfg = RobotConfig.from_dict(
    load_yaml(join_path(get_robot_configs_path(), robot_file))["robot_cfg"]
)
world_cfg = WorldConfig.from_dict(load_yaml(join_path(get_world_configs_path(), world_file)))
ik_config = IKSolverConfig.load_from_robot_config(
    robot_cfg,
    world_cfg,
    rotation_threshold=0.05,
    position_threshold=0.005,
    num_seeds=20,
    self_collision_check=True,
    self_collision_opt=True,
    tensor_args=tensor_args,
    use_cuda_graph=True,
)
ik_solver = IKSolver(ik_config)

q_sample = ik_solver.sample_configs(5000)
kin_state = ik_solver.fk(q_sample)
goal = Pose(kin_state.ee_position, kin_state.ee_quaternion)
result = ik_solver.solve_batch(goal)
q_solution = result.solution[result.success]

cuRobo’s curobo.wrap.reacher.ik_solver.IKSolver returns a curobo.wrap.reacher.ik_solver.IKResult, which also provides error and success metrics.
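
A sketch of inspecting a result; position_error and rotation_error are assumed field names on IKResult, so check them against your installed version.

# success is a boolean tensor over the batch of goals:
print("solved:", result.success.sum().item(), "out of", result.success.shape[0])
# per-goal errors for the returned solutions (assumed field names):
print("position error:", result.position_error)
print("rotation error:", result.rotation_error)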

Note

If you want to run IK with different batch sizes, set use_cuda_graph=False when initializing IKSolverConfig. This runs slower and can take up to 10x more time. Enabling CUDA graphs captures all CUDA kernel calls in a CUDAGraph and replays this recorded graph to run the IKSolver on each new problem set. However, the shape of the tensors cannot change once the graph is recorded, so CUDAGraph support is only available for fixed-size queries in cuRobo.
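
A sketch of the flexible (slower) setup, reusing robot_cfg, world_cfg, and tensor_args from above; goal_a and goal_b stand in for hypothetical Pose batches of different sizes.

ik_config = IKSolverConfig.load_from_robot_config(
    robot_cfg,
    world_cfg,
    num_seeds=20,
    tensor_args=tensor_args,
    use_cuda_graph=False,  # allows the batch size to change between calls
)
ik_solver = IKSolver(ik_config)

# batch sizes may now differ between queries:
# result_a = ik_solver.solve_batch(goal_a)  # e.g., 10 poses
# result_b = ik_solver.solve_batch(goal_b)  # e.g., 200 poses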

Motion Generation

To obtain collision-free trajectories, use MotionGen. An example is below:

# Third Party
import torch

# cuRobo
from curobo.types.math import Pose
from curobo.types.robot import JointState
from curobo.wrap.reacher.motion_gen import MotionGen, MotionGenConfig, MotionGenPlanConfig

world_config = {
    "mesh": {
        "base_scene": {
            "pose": [10.5, 0.080, 1.6, 0.043, -0.471, 0.284, 0.834],
            "file_path": "scene/nvblox/srl_ur10_bins.obj",
        },
    },
    "cuboid": {
        "table": {
            "dims": [5.0, 5.0, 0.2],  # x, y, z
            "pose": [0.0, 0.0, -0.1, 1, 0, 0, 0.0],  # x, y, z, qw, qx, qy, qz
        },
    },
}

motion_gen_config = MotionGenConfig.load_from_robot_config(
    "ur5e.yml",
    world_config,
    interpolation_dt=0.01,
)
motion_gen = MotionGen(motion_gen_config)
motion_gen.warmup()

goal_pose = Pose.from_list([-0.4, 0.0, 0.4, 1.0, 0.0, 0.0, 0.0])  # x, y, z, qw, qx, qy, qz
start_state = JointState.from_position(
    torch.zeros(1, 6).cuda(),
    joint_names=[
        "shoulder_pan_joint",
        "shoulder_lift_joint",
        "elbow_joint",
        "wrist_1_joint",
        "wrist_2_joint",
        "wrist_3_joint",
    ],
)

result = motion_gen.plan_single(start_state, goal_pose, MotionGenPlanConfig(max_attempts=1))
traj = result.get_interpolated_plan()  # result.interpolation_dt has the dt between timesteps
print("Trajectory Generated: ", result.success)

Note

In the examples above, the world was either built from a dictionary or loaded from a yaml file. cuRobo consumes the world representation as an instance of curobo.geom.types.WorldConfig. You can store your world as a dictionary in a yaml file and load it from disk, or build the world programmatically. See Collision World Representation for more details.
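
A minimal sketch of building a world programmatically, assuming the Cuboid type from curobo.geom.types:

from curobo.geom.types import Cuboid, WorldConfig

# the same table obstacle as in the dictionary examples above, built in code
table = Cuboid(
    name="table",
    dims=[2.0, 2.0, 0.2],  # x, y, z in meters
    pose=[0.4, 0.0, -0.1, 1, 0, 0, 0],  # x, y, z, qw, qx, qy, qz
)
world_model = WorldConfig(cuboid=[table])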

Model Predictive Control (MPC)

cuRobo also implements a model predictive control (MPC) framework similar to STORM, with kinematics, collision checking, and cost computations coming from cuRobo’s custom kernels. This framework also enables the use of CUDAGraphs for running a full step of MPPI. Leveraging these advancements, cuRobo’s MPPI can reach a 500 Hz control rate on an NVIDIA RTX 4090, compared to the 125 Hz reached by STORM.

Use MpcSolver to implement MPC. An example is below:

# cuRobo
from curobo.rollout.rollout_base import Goal
from curobo.types.state import JointState
from curobo.wrap.reacher.mpc import MpcSolver, MpcSolverConfig

# create a world representation
world_config = {
    "cuboid": {
        "table": {"dims": [2, 2, 0.2], "pose": [0.4, 0.0, -0.1, 1, 0, 0, 0]},
        "cube_1": {"dims": [0.1, 0.1, 0.2], "pose": [0.4, 0.0, 0.5, 1, 0, 0, 0]},
    },
    "mesh": {
        "scene": {
            "pose": [1.5, 0.080, 1.6, 0.043, -0.471, 0.284, 0.834],
            "file_path": "scene/nvblox/srl_ur10_bins.obj",
        }
    },
}

mpc_config = MpcSolverConfig.load_from_robot_config(
    "ur5e.yml",
    world_config,
    store_rollouts=True,
    step_dt=0.03,
)
mpc = MpcSolver(mpc_config)


# retrieve the robot's retract (resting) configuration
retract_cfg = mpc.rollout_fn.dynamics_model.retract_config.clone().unsqueeze(0)

# build a goal pose by offsetting the retract configuration and computing FK
kinematics_state = mpc.rollout_fn.compute_kinematics(
    JointState.from_position(retract_cfg + 0.5, joint_names=mpc.joint_names)
)
goal_pose = kinematics_state.ee_pose.clone()


# read start state from robot here:
start_state = JointState.from_position(retract_cfg, joint_names=mpc.joint_names)

goal = Goal(
    current_state=start_state,
    goal_state=JointState.from_position(retract_cfg, joint_names=mpc.joint_names),
    goal_pose=goal_pose,
)

# create a goal buffer for a single problem and send it to the solver
goal_buffer = mpc.setup_solve_single(goal, 1)
mpc.update_goal(goal_buffer)

# use a counter to stop this example loop after 100 iterations
i = 0

while i < 100:  # control loop

    # To change goal pose:
    # goal_buffer.goal_pose.copy_(new_pose) # for Cartesian Pose
    # mpc.enable_pose_cost(enable=True)
    # mpc.enable_cspace_cost(enable=False)
    #
    # To change goal state:
    # goal_buffer.goal_state.copy_(new_goal_state) # for joint configuration
    # mpc.enable_pose_cost(enable=False)
    # mpc.enable_cspace_cost(enable=True)


    # after changing the goal, update the solver:
    # mpc.update_goal(goal_buffer)


    # read current state from robot
    # current_state = your_robot_api.current_state() # convert to JointState

    current_state = start_state

    # run MPC:
    result = mpc.step(current_state)

    # send action to robot:
    command = result.action
    # your_robot_api.send_command(command)
    print(result.metrics.pose_error.item())

    # setting new current state from last command for this example:
    # comment this out and instead read state from your robot.
    start_state = command

    i += 1

Warning

cuRobo’s MPC framework is experimental and does not provide safety guarantees: the MPC optimizes constraints as cost terms with large weights, which can put the robot in a collision state if the costs are not tuned. We advise testing MPC in simulation on your task before deploying it on a real robot.

Note

Motions from MPC will be lower in quality than motions from Motion Generation, as MPC optimizes with MPPI over a few sampled accelerations, while motion generation leverages trajectory optimization to obtain higher-quality trajectories.