Calibration and System Integration

Overview

Calibration is the foundation of robot perception and manipulation accuracy. System integration connects sensors, controllers, and actuators through a unified coordinate framework and communication architecture into a complete system. This article covers camera calibration, hand-eye calibration, multi-sensor extrinsic calibration, and system integration under ROS2.

Accuracy Chain

Final manipulation accuracy = Camera intrinsic accuracy × Hand-eye calibration accuracy × Kinematic calibration accuracy × Control accuracy. Errors at any link accumulate at the end-effector.


I. Camera Intrinsic Calibration

1.1 Pinhole Camera Model

A camera projects 3D world points \(P_w = [X, Y, Z]^T\) to 2D pixels \(p = [u, v]^T\):

\[ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K [R | t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \]

where the intrinsic matrix:

\[ K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \]
  • \(f_x, f_y\): focal length (pixel units)
  • \(c_x, c_y\): principal point coordinates
  • Distortion parameters: \(k_1, k_2, p_1, p_2, k_3\) (radial + tangential)
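
A minimal NumPy sketch of this projection model (the intrinsics, pose, and point below are illustrative values, not from a real calibration):

import numpy as np

# Hypothetical intrinsics for a 640x480 camera
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # world-to-camera rotation (identity for simplicity)
t = np.array([0.0, 0.0, 0.5])       # world-to-camera translation, metres

P_w = np.array([0.1, -0.05, 1.0])   # 3D world point [X, Y, Z]
p_cam = R @ P_w + t                 # point expressed in the camera frame
s_uv = K @ p_cam                    # scaled homogeneous pixel coordinates s*[u, v, 1]
u, v = s_uv[:2] / s_uv[2]           # perspective divide (distortion omitted here)
print(f"pixel: ({u:.1f}, {v:.1f})")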

1.2 Zhang's Calibration Method

Zhang's method uses a planar checkerboard and is the most widely used camera calibration method:

Algorithm Flow:

  1. Capture multiple checkerboard images (different angles, 15-25 recommended)
  2. Detect corners: Sub-pixel accuracy corner extraction
  3. Compute homography matrices: One homography matrix \(H_i\) per image
  4. Solve intrinsics: Using homography matrix constraints
  5. Solve extrinsics: Decompose each \(H_i\) using intrinsics
  6. Nonlinear optimization: Minimize reprojection error
\[ \min_{K, k_1, k_2, R_i, t_i} \sum_{i=1}^{n} \sum_{j=1}^{m} \| p_{ij} - \hat{p}(K, k_1, k_2, R_i, t_i, P_j) \|^2 \]
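
A minimal OpenCV sketch of this pipeline; corner detection refinement is explicit, while the homography estimation and the nonlinear refinement above happen inside cv2.calibrateCamera. The board dimensions, square size, and image path are assumptions:

import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row/column (hypothetical board)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares

obj_pts, img_pts = [], []
for fname in glob.glob('calib/*.png'):  # hypothetical image directory
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    # Step 2: refine corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)

# Steps 3-6: solve intrinsics and extrinsics, then refine
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print(f'RMS reprojection error: {rms:.3f} px')  # target: < 0.5 px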

Calibration Tips

  • Checkerboard should cover all image regions
  • Angle variation should be as large as possible (tilt ±45°)
  • Reprojection error should be < 0.5 pixel (good) or < 0.3 pixel (excellent)
  • Calibration board flatness is critical (printing on an aluminum or glass substrate is recommended)

II. Hand-Eye Calibration

2.1 Problem Definition

Hand-eye calibration solves for the fixed transform between the camera and the robot end-effector.

Eye-in-Hand configuration (camera mounted on the end-effector):

\[ A_i X = X B_i \]

where:
  • \(A_i = T_{end}^{(i+1)^{-1}} T_{end}^{(i)}\): end-effector transform between two motions
  • \(B_i = T_{cam}^{(i+1)} T_{cam}^{(i)^{-1}}\): camera transform between two motions
  • \(X = T_{end}^{cam}\): the hand-eye transform to solve for

2.2 Tsai-Lenz Solution

Tsai-Lenz (1989) is a classic two-step solution:

Step 1: Solve rotation using axis-angle parameterization with half-angle formula:

\[ (\text{skew}(n_{A_i} + n_{B_i})) n_X' = n_{B_i} - n_{A_i} \]

Step 2: Solve translation:

\[ (R_{A_i} - I) t_X = R_X t_{B_i} - t_{A_i} \]
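
OpenCV ships an implementation of this solver; a sketch, assuming the pose lists were collected as described in Section 2.1 (end-effector poses from forward kinematics, target poses from e.g. solvePnP):

import cv2

def solve_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve A X = X B for the eye-in-hand transform with Tsai-Lenz."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    return R_cam2gripper, t_cam2gripper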

2.3 Other Hand-Eye Calibration Methods

Method                         Year   Features                                Accuracy
Tsai-Lenz                      1989   Separate rotation and translation       Medium
Park-Martin                    1994   Exploits Lie group structure            Higher
Daniilidis (dual quaternion)   1999   Simultaneous rotation and translation   High
Shah (iterative optimization)  2013   Nonlinear optimization                  Highest

Practical Recommendations

  • Collect 15-25 groups of data at different poses
  • Motions should include variation in all 6 DOF
  • Avoid small-angle motions (<10°); use larger angle changes
  • Cross-validate using multiple methods
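
Following the last recommendation, cross-validation is a short loop, since OpenCV exposes several of the solvers from the table above (a sketch reusing the pose lists from Section 2.2):

import cv2

METHODS = {
    'Tsai-Lenz':   cv2.CALIB_HAND_EYE_TSAI,
    'Park-Martin': cv2.CALIB_HAND_EYE_PARK,
    'Daniilidis':  cv2.CALIB_HAND_EYE_DANIILIDIS,
}

def cross_validate(R_g2b, t_g2b, R_t2c, t_t2c):
    for name, method in METHODS.items():
        R, t = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c, method=method)
        print(name, t.ravel())  # translations should agree closely across methods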

III. Multi-Sensor Extrinsic Calibration

3.1 LiDAR-Camera Extrinsic Calibration

Solve the transform \(T_{lidar}^{cam}\) from LiDAR frame to camera frame.

Target-based methods:
  1. Use targets with known geometric features (checkerboard, circular targets)
  2. Detect the target separately in the camera image and the LiDAR point cloud
  3. Establish correspondences, minimize registration error

Targetless methods:
  • Edge alignment: project LiDAR points into the image and align edges
  • Mutual information: maximize statistical correlation between the two modalities
  • Learning-based: CalibNet, LCCNet, etc.
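
Both families rely on projecting LiDAR points into the image with the current extrinsic estimate; a NumPy sketch, where T_lidar_to_cam is a hypothetical 4x4 homogeneous transform:

import numpy as np

def project_lidar_to_image(points_lidar, T_lidar_to_cam, K):
    """Project Nx3 LiDAR points to pixel coordinates."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]          # perspective divide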

3.2 IMU-Camera Extrinsic Calibration

Kalibr is the de facto standard tool:
  • Uses AprilGrid targets
  • Collects continuous motion data (the motion must sufficiently excite all axes)
  • Jointly optimizes the camera-IMU extrinsics, time offset, and IMU intrinsics

\[ \min_{T_{imu}^{cam}, t_d, b_a, b_g} \sum_k \left[ \|e_{cam}^k\|^2 + \|e_{imu}^k\|^2 \right] \]
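
A typical invocation (file names are placeholders; the camera chain and IMU YAMLs come from the earlier intrinsic calibration steps):

kalibr_calibrate_imu_camera --target aprilgrid.yaml --cam camchain.yaml --imu imu.yaml --bag imu_cam_dataset.bag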

3.3 Calibration Pipeline

graph TD
    A[Camera Intrinsic Calibration] --> B[Hand-Eye Calibration]
    A --> C[LiDAR-Camera Calibration]
    D[IMU Intrinsic Calibration] --> E[IMU-Camera Calibration]
    B --> F[Unified Coordinate Frame]
    C --> F
    E --> F
    F --> G[tf2 Transform Tree Publishing]
    G --> H[System Integration Verification]
    H -->|Insufficient Accuracy| A
    H -->|Pass| I[Deployment Ready]

IV. System Integration Under ROS2

4.1 tf2 Transform Tree

ROS2 uses tf2 to manage transform relationships between all coordinate frames:

world
  +-- base_link
       +-- base_footprint
       +-- joint_1
       |    +-- link_1
       |         +-- joint_2
       |              +-- link_2
       |                   +-- ... 
       |                        +-- end_effector
       |                             +-- camera_link
       |                                  +-- camera_color_optical_frame
       |                                  +-- camera_depth_optical_frame
       +-- lidar_link
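
With this tree published, any node can query the composed transform between two frames without knowing the intermediate links; a minimal sketch, assuming the tree above is being broadcast:

import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import Buffer, TransformListener

class TfQuery(Node):
    def __init__(self):
        super().__init__('tf_query')
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.create_timer(1.0, self.lookup)

    def lookup(self):
        try:
            # Latest available camera pose in the robot base frame
            tf = self.buffer.lookup_transform(
                'base_link', 'camera_color_optical_frame', Time())
            self.get_logger().info(str(tf.transform.translation))
        except Exception as e:  # frame not yet available, extrapolation, etc.
            self.get_logger().warn(str(e))

def main():
    rclpy.init()
    rclpy.spin(TfQuery())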

4.2 Static and Dynamic Transforms

Static transforms (calibration results, do not change over time):

# Publish hand-eye calibration result as a static transform
from math import pi

from geometry_msgs.msg import TransformStamped
from tf2_ros import StaticTransformBroadcaster
from tf_transformations import quaternion_from_euler  # returns (x, y, z, w)

static_broadcaster = StaticTransformBroadcaster(node)  # node: an existing rclpy Node
t = TransformStamped()
t.header.stamp = node.get_clock().now().to_msg()
t.header.frame_id = 'end_effector'
t.child_frame_id = 'camera_link'
t.transform.translation.x = 0.05  # From calibration
t.transform.translation.y = 0.0
t.transform.translation.z = 0.03
qx, qy, qz, qw = quaternion_from_euler(0.0, -pi / 2, 0.0)
t.transform.rotation.x = qx
t.transform.rotation.y = qy
t.transform.rotation.z = qz
t.transform.rotation.w = qw
static_broadcaster.sendTransform(t)
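
StaticTransformBroadcaster publishes on /tf_static with transient-local durability, so nodes that start later still receive the calibration transform without it being re-sent periodically.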

4.3 System Integration Architecture

graph LR
    subgraph Sensor_Layer["Sensor Layer"]
        CAM[RGB-D Camera]
        LID[LiDAR]
        FT[Force-Torque Sensor]
        ENC[Encoder]
    end

    subgraph Driver_Layer["Driver Layer"]
        CD[camera_driver]
        LD[lidar_driver]
        RD[robot_driver]
    end

    subgraph Middleware_Layer["Middleware Layer"]
        TF[tf2 Transform Tree]
        PC[Point Cloud Fusion]
        ST[State Estimation]
    end

    subgraph Application_Layer["Application Layer"]
        PER[Perception Module]
        PLA[Planning Module]
        CTR[Control Module]
    end

    CAM --> CD --> TF
    LID --> LD --> TF
    FT --> RD --> TF
    ENC --> RD
    TF --> PC --> PER
    TF --> ST --> PLA
    PER --> PLA --> CTR --> RD
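
One way to wire these layers together is a single launch file that starts the drivers and injects the calibration results into tf2. A sketch with hypothetical package and executable names (only tf2_ros/static_transform_publisher is a stock executable; its positional argument order x y z yaw pitch roll parent child is the legacy form):

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Hand-eye result from Section 4.2 as a static transform
        Node(package='tf2_ros', executable='static_transform_publisher',
             arguments=['0.05', '0.0', '0.03', '0.0', '-1.5708', '0.0',
                        'end_effector', 'camera_link']),
        # Hypothetical driver nodes matching the architecture above
        Node(package='camera_driver', executable='camera_node'),
        Node(package='lidar_driver', executable='lidar_node'),
        Node(package='robot_driver', executable='robot_node'),
    ])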

V. Calibration Verification and Maintenance

5.1 Verification Methods

Verification                  Method                                         Pass Criteria
Reprojection error            Compare calibration board corner projections   < 0.5 pixel
Point cloud-image alignment   Project LiDAR points onto the image            Edge deviation < 3 pixels
Grasping accuracy             Grasp an object at a known position            Position error < 2 mm
Hand-eye consistency          Localize a target from multiple poses          Position std dev < 1 mm
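
The reprojection check in the first row can reuse the outputs of cv2.calibrateCamera (variable names follow the Zhang sketch in Section 1.2):

import cv2
import numpy as np

def mean_reprojection_error(obj_pts, img_pts, rvecs, tvecs, K, dist):
    total, count = 0.0, 0
    for objp, imgp, rvec, tvec in zip(obj_pts, img_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diff = imgp.reshape(-1, 2) - proj.reshape(-1, 2)
        total += np.linalg.norm(diff, axis=1).sum()
        count += len(proj)
    return total / count  # pass: < 0.5 px, excellent: < 0.3 px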

5.2 Common Troubleshooting

Symptom                                      Possible Cause                             Solution
Fixed-direction grasp offset                 Hand-eye calibration translation error     Re-collect data and re-calibrate
Position-dependent grasp offset              Inaccurate camera distortion calibration   Increase number of calibration images
Point cloud-image misalignment               Extrinsic calibration drift                Check mounting, re-calibrate
Large error at distance, accurate close up   Depth calibration issue                    Multi-distance depth correction

Further Reading

  • Sensors - Sensor principles and selection
  • ROS2 System - ROS2 framework and communication
  • Zhang, Z. "A Flexible New Technique for Camera Calibration." IEEE TPAMI, 2000.
  • Tsai, R.Y., Lenz, R.K. "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration." IEEE TRA, 1989.
  • Kalibr: https://github.com/ethz-asl/kalibr
