
Robot Control Theory

Overview

The goal of robot control is to make the robot precisely execute desired motions or apply desired forces. The controller computes driving torques based on the dynamics model to achieve accurate tracking of position, velocity, and force.

```mermaid
graph TB
    subgraph "Control Architecture"
        A["Trajectory Planner"] --> B["Controller"]
        B --> C["Robot"]
        C -->|"Joint Encoders"| D["State Estimation"]
        C -->|"Force/Torque Sensors"| E["Force Feedback"]
        D --> B
        E --> B
    end
    style A fill:#e8f4fd,stroke:#2196F3
    style B fill:#fff3e0,stroke:#FF9800
    style C fill:#f3e5f5,stroke:#9C27B0
```

1 PID Control

1.1 Basic Form

PID is the most fundamental and widely used control method. For a single joint:

\[ u(t) = K_p e(t) + K_i \int_0^t e(\tau) \, d\tau + K_d \dot{e}(t) \]

where the error \(e(t) = q_d(t) - q(t)\).

Role of each term:

| Term | Effect | Tuning Impact |
|------|--------|---------------|
| P (Proportional) | \(K_p e\) | Reduces steady-state error; too large causes oscillation |
| I (Integral) | \(K_i \int e \, dt\) | Eliminates steady-state error; too large causes overshoot and oscillation |
| D (Derivative) | \(K_d \dot{e}\) | Adds damping, suppresses oscillation; sensitive to noise |
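The PID law above can be sketched in discrete time. This is a minimal illustration; the finite-difference derivative and the gains in the usage example are arbitrary assumptions, not tuned for any real joint.

```python
class PID:
    """Minimal discrete-time PID for a single joint."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, q_desired, q_measured):
        e = q_desired - q_measured
        self.integral += e * self.dt              # I term: accumulate error
        e_dot = (e - self.prev_error) / self.dt   # D term: finite difference
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * e_dot
```

In practice the derivative term is usually computed from a filtered velocity signal rather than a raw finite difference, precisely because of the noise sensitivity noted in the table.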

1.2 Independent Joint PID

Treat each joint as an independent single-input system and design a PID loop for each separately:

\[ \tau_i = K_{p,i} e_i + K_{d,i} \dot{e}_i + K_{i,i} \int e_i \, dt \]

Limitations:

  • Ignores dynamic coupling between joints
  • Ignores gravity compensation
  • Tracking accuracy degrades at high speeds

1.3 PD + Gravity Compensation

One of the most commonly used simple control laws in practice:

\[ \tau = K_p e + K_d \dot{e} + g(q) \]

where \(g(q)\) is the gravity compensation term. With exact gravity compensation, this law achieves zero steady-state error for set-point regulation and remains accurate at low speeds; tracking degrades as velocities and accelerations grow.

Asymptotic Stability Proof

Choosing the Lyapunov function \(V = \frac{1}{2}e^T K_p e + \frac{1}{2}\dot{q}^T M(q)\dot{q}\), one finds \(\dot{V} = -\dot{q}^T K_d \dot{q} \le 0\) along closed-loop trajectories (using the skew-symmetry of \(\dot{M} - 2C\)); LaSalle's invariance principle then yields asymptotic stability of the set point under PD + gravity compensation.
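As a concrete sketch, here is \(\tau = K_p e + K_d \dot{e} + g(q)\) for set-point regulation (\(\dot{q}_d = 0\)) of a planar 2R arm. The link masses, lengths, and the closed-form \(g(q)\) are illustrative assumptions for a 2R arm with point masses at the link ends.

```python
import numpy as np

M1, M2, L1, L2, G = 1.0, 1.0, 0.5, 0.5, 9.81   # hypothetical arm parameters

def gravity(q):
    """g(q) of a planar 2R arm with point masses at the link ends."""
    g1 = (M1 + M2) * G * L1 * np.cos(q[0]) + M2 * G * L2 * np.cos(q[0] + q[1])
    g2 = M2 * G * L2 * np.cos(q[0] + q[1])
    return np.array([g1, g2])

def pd_gravity(q, dq, q_d, Kp, Kd):
    """tau = Kp e + Kd e_dot + g(q); for a fixed set point, e_dot = -dq."""
    e = q_d - q
    return Kp @ e - Kd @ dq + gravity(q)
```

At the set point (\(e = 0\), \(\dot{q} = 0\)) the commanded torque reduces to exactly \(g(q)\), which is what holds the arm still.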


2 Computed Torque Control

2.1 Basic Idea

Utilize the complete dynamics model for nonlinear feedback linearization, transforming the nonlinear system into a linear one.

Control Law:

\[ \tau = M(q)\,a_q + C(q, \dot{q})\dot{q} + g(q), \qquad a_q = \ddot{q}_d + K_p e + K_d \dot{e} \]

Written out in full:

\[ \tau = M(q)[\ddot{q}_d + K_p e + K_d \dot{e}] + C(q, \dot{q})\dot{q} + g(q) \]

Note that the PD feedback is multiplied by \(M(q)\): it enters as a commanded acceleration, not directly as a torque.

2.2 Closed-Loop Analysis

Substituting the control law into the equations of motion \(M\ddot{q} + C\dot{q} + g = \tau\):

\[ M(q)[\ddot{q}_d + K_p e + K_d \dot{e} - \ddot{q}] = 0 \]

Since \(M(q)\) is positive definite, the error dynamics become:

\[ \ddot{e} + K_d \dot{e} + K_p e = 0 \]

This is a linear second-order system! Choosing \(K_p, K_d > 0\) guarantees exponential stability.
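The control law can be sketched as follows. The 1-DOF models for \(M\), \(C\), and \(g\) are trivial placeholders standing in for a real inverse dynamics model:

```python
import numpy as np

def M(q):      return np.array([[2.0]])                # placeholder inertia
def C(q, dq):  return np.array([[0.1 * dq[0]]])        # placeholder Coriolis
def g(q):      return np.array([9.81 * np.sin(q[0])])  # placeholder gravity

def computed_torque(q, dq, q_d, dq_d, ddq_d, Kp, Kd):
    e, de = q_d - q, dq_d - dq
    a = ddq_d + Kp @ e + Kd @ de        # "virtual" acceleration command
    return M(q) @ a + C(q, dq) @ dq + g(q)
```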

2.3 Practical Considerations

| Issue | Description | Solution |
|-------|-------------|----------|
| Model uncertainty | \(M, C, g\) deviate from true values | Robust control, adaptive control |
| Computational delay | Real-time inverse dynamics computation | Newton-Euler \(O(n)\) recursion |
| Joint friction | Unmodeled friction forces | Friction compensation terms |
| Sensor noise | Inaccurate velocity estimates | Filtering, observers |

3 Impedance Control

3.1 Motivation

During environmental interaction (e.g., assembly, wiping), pure position control is too "rigid." Impedance control adjusts the dynamic behavior of the end-effector to achieve compliant interaction.

3.2 Target Impedance

The desired end-effector dynamic behavior:

\[ M_d \ddot{e} + D_d \dot{e} + K_d e = F_{\text{ext}} \]

where:

| Parameter | Meaning | Effect |
|-----------|---------|--------|
| \(M_d\) | Desired inertia | Controls acceleration response |
| \(D_d\) | Desired damping | Controls velocity response |
| \(K_d\) | Desired stiffness | Controls displacement response |
| \(e = x_d - x\) | End-effector position error | — |
| \(F_{\text{ext}}\) | External contact force | — |

3.3 Implementation

Joint-Space Impedance Control:

\[ \tau = M(q)\ddot{q}_r + C(q,\dot{q})\dot{q}_r + g(q) - J^T(q) F_{\text{ext}} \]

where the reference acceleration \(\ddot{q}_r\) is determined by the impedance relationship.

Simplified Implementation (without force sensor):

\[ \tau = J^T(q)[K_d(x_d - x) + D_d(\dot{x}_d - \dot{x})] + g(q) \]

Here the impedance is determined by \(K_d\) and \(D_d\), requiring no force sensor.
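The sensorless law translates directly into code. \(J\), \(g(q)\), and the stiffness/damping values in the usage example are illustrative placeholders:

```python
import numpy as np

def impedance_torque(J, x, dx, x_d, dx_d, Kd, Dd, g_q):
    """tau = J^T [Kd (x_d - x) + Dd (dx_d - dx)] + g(q)."""
    f_virtual = Kd @ (x_d - x) + Dd @ (dx_d - dx)  # virtual spring-damper wrench
    return J.T @ f_virtual + g_q
```

Low \(K_d\) makes the end-effector soft (large displacement per unit force); high \(K_d\) approaches stiff position control.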

3.4 Admittance Control

The dual form of impedance control:

  • Impedance Control: Input motion deviation, output force \(\rightarrow\) suitable for force-controlled actuators
  • Admittance Control: Input force deviation, output motion \(\rightarrow\) suitable for position-controlled actuators
The commanded motion is obtained by integrating the target impedance driven by the measured force,

\[ \ddot{x}_c = M_d^{-1}(F_{\text{ext}} - D_d \dot{x}_c - K_d x_c) \]

and the resulting \(x_c\) is tracked by an inner position loop.
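A minimal 1-DOF admittance loop, integrating the target impedance with explicit Euler (all numerical values are illustrative):

```python
def admittance_rollout(f_ext, Md, Dd, Kd, dt, steps):
    """Integrate Md*ddx_c + Dd*dx_c + Kd*x_c = f_ext to get the commanded x_c."""
    x_c, dx_c = 0.0, 0.0
    for _ in range(steps):
        ddx_c = (f_ext - Dd * dx_c - Kd * x_c) / Md  # target impedance
        dx_c += ddx_c * dt                            # explicit Euler step
        x_c += dx_c * dt
    return x_c
```

Under a constant force the commanded position settles at \(f_{\text{ext}}/K_d\), i.e., the virtual spring deflection.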

4 Force Control

4.1 Hybrid Force/Position Control

Raibert-Craig method (1981): Control force and position separately in different directions.

Define a selection matrix \(S\) (diagonal, elements are 0 or 1):

\[ \tau = J^T [S \cdot F_{\text{force\_ctrl}} + (I - S) \cdot F_{\text{pos\_ctrl}}] \]

  • \(S_{ii} = 1\): Force control in the \(i\)-th direction
  • \(S_{ii} = 0\): Position control in the \(i\)-th direction

Example: Force control perpendicular to the workpiece surface, position control parallel to it.
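For the surface example, the selection-matrix combination looks like this; the wrench values stand in for the outputs of hypothetical force and position loops:

```python
import numpy as np

S = np.diag([0.0, 0.0, 1.0])           # force control along z (surface normal)
F_force = np.array([0.0, 0.0, 5.0])    # from a force controller (illustrative)
F_pos = np.array([2.0, -1.0, 0.0])     # from a position controller (illustrative)

F_cmd = S @ F_force + (np.eye(3) - S) @ F_pos   # combined task-space wrench
```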

4.2 Force Controller Design

PI Force Control:

\[ F_{\text{force\_ctrl}} = K_{fp}(F_d - F_{\text{meas}}) + K_{fi} \int (F_d - F_{\text{meas}}) \, dt \]

Challenges of Force Control

  • Discontinuity at contact/non-contact state transitions
  • Stability is difficult to guarantee with unknown environment stiffness
  • Force sensor noise and delays

5 Operational Space Control

5.1 Basic Idea

Design control laws directly in task space (Cartesian space) and map to joint space via the Jacobian.

Operational Space Dynamics (Khatib, 1987):

\[ \Lambda(x) \ddot{x} + \mu(x, \dot{x}) + p(x) = F \]

where:

\[ \Lambda(x) = (J M^{-1} J^T)^{-1} \]
\[ \mu = \Lambda J M^{-1} C \dot{q} - \Lambda \dot{J} \dot{q} \]
\[ p = \Lambda J M^{-1} g \]
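These quantities are cheap to form numerically. A sketch of \(\Lambda\) for illustrative 2-DOF values of \(M\) and \(J\) (not from a real robot model):

```python
import numpy as np

M = np.array([[2.0, 0.3],
              [0.3, 1.0]])     # joint-space inertia (illustrative)
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])     # task Jacobian (illustrative)

Lambda = np.linalg.inv(J @ np.linalg.inv(M) @ J.T)   # operational-space inertia
```

Like \(M\), \(\Lambda\) is symmetric positive definite away from singularities; near a singularity \(J M^{-1} J^T\) loses rank and the inverse blows up.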

5.2 Control Law

\[ F = \Lambda(x)[\ddot{x}_d + K_p e + K_d \dot{e}] + \mu(x, \dot{x}) + p(x) \]

Mapped to joint torques:

\[ \tau = J^T F \]

Equivalently, when \(J\) is square and nonsingular:

\[ F = J^{-T} \tau \]

5.3 Null-Space Control for Redundant Robots

For redundant robots (more joints than task dimensions, e.g. \(n > 6\) for a full spatial task), secondary tasks can be accomplished in the null space while the primary task is satisfied:

\[ \tau = J^T F_{\text{task}} + (I - J^T \bar{J}^T) \tau_0 \]

Here \(\bar{J} = M^{-1} J^T \Lambda\) is the dynamically consistent generalized inverse, which guarantees that the null-space torque produces no task-space acceleration.

where \(\tau_0\) is the null-space torque, which can be used for:

  • Joint limit avoidance
  • Singularity avoidance
  • Self-collision avoidance
  • Posture optimization
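The projection can be checked numerically. The sketch below uses Khatib's dynamically consistent inverse \(\bar{J} = M^{-1} J^T \Lambda\) with illustrative values for a 3-DOF arm and a 2-D task:

```python
import numpy as np

M = np.diag([2.0, 1.5, 1.0])           # joint inertia (illustrative)
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])        # 2-D task Jacobian (illustrative)

Lambda = np.linalg.inv(J @ np.linalg.inv(M) @ J.T)
Jbar = np.linalg.inv(M) @ J.T @ Lambda        # dynamically consistent inverse
N = np.eye(3) - J.T @ Jbar.T                  # torque-level null-space projector
```

Any \(\tau_0\) pushed through \(N\) produces zero task-space force (\(\bar{J}^T N = 0\)), so secondary objectives cannot disturb the primary task.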

6 Adaptive Control

6.1 Motivation

When dynamics parameters (mass, inertia, friction coefficients, etc.) are unknown or changing, online estimation and compensation are needed.

6.2 Adaptive Computed Torque

Leveraging the linear parameterization property of the dynamics equations, with \(\pi\) the vector of unknown dynamic parameters:

\[ M(q)\ddot{q} + C(q, \dot{q})\dot{q} + g(q) = Y(q, \dot{q}, \ddot{q}) \, \pi \]

Control law:

\[ \tau = Y(q, \dot{q}, \dot{q}_r, \ddot{q}_r) \hat{\pi} + K_d s \]

where \(s = \dot{e} + \Lambda e\) is the sliding variable and \(\hat{\pi}\) is the parameter estimate.

Parameter Update Law (with the sign conventions \(e = q_d - q\), \(s = \dot{e} + \Lambda e\) used here):

\[ \dot{\hat{\pi}} = \Gamma Y^T s \]

where \(\Gamma > 0\) is the learning rate matrix.

Stability Guarantee

Through Lyapunov analysis, it can be proven that \(s \to 0\) (tracking error converges), but \(\hat{\pi} \to \pi\) is not guaranteed (parameters do not necessarily converge to true values unless the persistent excitation condition is satisfied).


7 Robust Control

7.1 Sliding Mode Control

Design a sliding surface \(s = \dot{e} + \lambda e\); the control law ensures \(s\) reaches zero quickly and stays there:

\[ \tau = \hat{M}(q)[\ddot{q}_d + \lambda\dot{e}] + \hat{C}\dot{q} + \hat{g} + K \, \text{sgn}(s) \]

where \(K\) is large enough to overcome model uncertainty.

Chattering Problem

The sign function \(\text{sgn}(s)\) causes high-frequency chattering near \(s=0\). In practice, a saturation function \(\text{sat}(s/\phi)\) is used instead, where \(\phi\) is the boundary layer thickness.
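A one-line sketch of the boundary-layer substitution:

```python
import numpy as np

def sat(s, phi):
    """Boundary-layer saturation: linear inside |s| < phi, +/-1 outside."""
    return np.clip(s / phi, -1.0, 1.0)
```

Inside the layer the controller behaves like high-gain linear feedback with slope \(K/\phi\), trading a small tracking band for smooth torques.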

7.2 \(H_\infty\) Control

Designed in the frequency domain, minimizing the \(\infty\)-norm of the transfer function from disturbance to performance output:

\[ \min_K \|T_{zw}(s)\|_\infty \]

Suitable for handling unstructured uncertainty and external disturbances.


8 Learning-Based Control

8.1 Iterative Learning Control (ILC)

For repetitive tasks, use the error from the previous execution to improve the next control input:

\[ u_{k+1}(t) = u_k(t) + L \cdot e_k(t) \]
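The update converges when the learning operator is a contraction. A sketch on a hypothetical static plant \(y = 0.5u\), where the per-trial error shrinks by the factor \(|1 - 0.5L|\):

```python
import numpy as np

def ilc_trials(y_d, L_gain, n_trials):
    """Run ILC trials on the toy plant y = 0.5 * u; return final input and error."""
    u = np.zeros_like(y_d)
    for _ in range(n_trials):
        y = 0.5 * u              # execute the trial on the (toy) plant
        e = y_d - y              # record the trial error
        u = u + L_gain * e       # ILC update: u_{k+1} = u_k + L * e_k
    return u, y_d - 0.5 * u
```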

8.2 Reinforcement Learning Control

Directly learn the control policy without relying on an accurate model:

  • Model-Free: PPO, SAC directly optimize the policy
  • Model-Based: Learn a dynamics model + MPC
  • Sim-to-Real: Train in simulation, transfer to the real robot

8.3 Real-Time System Requirements

Robot control has strict real-time requirements (see Real-Time Systems):

| Control Layer | Frequency | Latency Requirement |
|---------------|-----------|---------------------|
| Joint Servo | 1–10 kHz | < 1 ms |
| Force Control | 500 Hz – 1 kHz | < 2 ms |
| Motion Control | 100–500 Hz | < 10 ms |
| Planning Layer | 1–50 Hz | < 100 ms |

9 Control Architecture Summary

```mermaid
graph TB
    A["Task Planning"] --> B["Trajectory Generation"]
    B --> C{"Control Strategy Selection"}
    C -->|"Known Model"| D["Computed Torque Control"]
    C -->|"Model Uncertain"| E["Adaptive Control"]
    C -->|"Disturbances"| F["Robust Control"]
    C -->|"Interaction Needed"| G["Impedance/Force Control"]
    C -->|"No Model"| H["Learning-Based Control"]
    D --> I["Joint Servo"]
    E --> I
    F --> I
    G --> I
    H --> I
    I --> J["Robot"]
    style C fill:#fff3e0,stroke:#FF9800
    style I fill:#e8f4fd,stroke:#2196F3
```

10 Common Tools and Frameworks

| Tool | Purpose |
|------|---------|
| Simulink / MATLAB | Classical control design and simulation |
| Drake | Model predictive control, trajectory optimization |
| ros2_control | ROS 2 control framework |
| MuJoCo | Dynamics simulation (including contacts) |
| IsaacGym / IsaacSim | GPU-parallel simulation + RL training |

References

  1. Siciliano, B. et al. (2009). Robotics: Modelling, Planning and Control. Springer.
  2. Slotine, J.-J. E. & Li, W. (1991). Applied Nonlinear Control. Prentice-Hall.
  3. Khatib, O. (1987). A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal on Robotics and Automation, 3(1), 43–53.
  4. Craig, J. J. (2005). Introduction to Robotics: Mechanics and Control. Pearson.
  5. Spong, M. W. et al. (2006). Robot Modeling and Control. Wiley.
