Exploration and Reward

This section covers the core topics of exploration strategy design and reward engineering in reinforcement learning.

Overview

Exploration Strategies

The exploration-exploitation tradeoff, including classic methods (ε-greedy, Boltzmann, UCB) and modern curiosity-driven approaches (ICM, RND, NovelD, Go-Explore).
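The three classic strategies named above can be sketched for a simple bandit-style Q-table. This is a minimal illustration, not a full agent; the function names and defaults are our own choices:

```python
import math
import random

def epsilon_greedy(q, eps=0.1):
    """With probability eps pick a uniformly random action, else the greedy one."""
    if random.random() < eps:
        return random.randrange(len(q))
    return max(range(len(q)), key=q.__getitem__)

def boltzmann(q, temperature=1.0):
    """Sample an action with probability proportional to exp(Q / temperature)."""
    prefs = [math.exp(v / temperature) for v in q]
    total = sum(prefs)
    return random.choices(range(len(q)), weights=[p / total for p in prefs])[0]

def ucb(q, counts, t, c=2.0):
    """Upper Confidence Bound: try untested actions first, then add an
    optimism bonus sqrt(c * ln t / n_a) that shrinks as an action is tried."""
    for a, n in enumerate(counts):
        if n == 0:
            return a
    return max(range(len(q)),
               key=lambda a: q[a] + math.sqrt(c * math.log(t) / counts[a]))
```

Note the different exploration mechanisms: ε-greedy explores uniformly at random, Boltzmann weights exploration toward higher-value actions, and UCB explores deterministically via an uncertainty bonus.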

Reward Engineering

Reward shaping, reward curricula, sparse vs. dense rewards, multi-objective rewards, reinforcement learning from human feedback (RLHF), reward hacking, and mitigation strategies.
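A key result in reward shaping is potential-based shaping (Ng, Harada & Russell): adding F(s, s') = γ·Φ(s') − Φ(s) to the reward leaves the optimal policy unchanged for any potential function Φ. A minimal sketch, with a hypothetical distance-to-goal potential on a 1-D chain:

```python
GOAL = 10  # hypothetical goal state on a 1-D chain

def potential(s):
    # Hypothetical potential: negative distance to the goal,
    # so progress toward the goal yields a positive shaping bonus.
    return -abs(GOAL - s)

def shaped_reward(r, s, s_next, gamma=0.99):
    # Potential-based shaping term F = gamma * Phi(s') - Phi(s).
    return r + gamma * potential(s_next) - potential(s)
```

Because the shaping terms telescope along a trajectory, with γ = 1 the total shaped return differs from the true return only by Φ(s_T) − Φ(s_0), which is why the optimal policy is preserved.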

Inverse Reinforcement Learning

Recovering reward functions from expert demonstrations, including Maximum Entropy IRL, Generative Adversarial Imitation Learning (GAIL), AIRL, and connections to imitation learning.
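To make the idea of recovering rewards from demonstrations concrete, here is a minimal Maximum Entropy IRL sketch on a hypothetical 3-state chain MDP with one-hot state features. The environment, demonstrations, and hyperparameters are all illustrative assumptions; real MaxEnt IRL operates on larger MDPs with richer features:

```python
import numpy as np

n_states, n_actions, gamma, horizon = 3, 2, 0.9, 10

# Deterministic chain transitions: action 0 = left, action 1 = right.
P = np.zeros((n_states, n_actions), dtype=int)
for s in range(n_states):
    P[s, 0] = max(s - 1, 0)
    P[s, 1] = min(s + 1, n_states - 1)

phi = np.eye(n_states)  # one-hot state features; reward is r(s) = theta . phi(s)

def soft_value_iteration(theta, iters=100):
    """Soft (MaxEnt) value iteration; returns the stochastic policy pi(a|s)."""
    r = phi @ theta
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r[:, None] + gamma * V[P]          # (S, A)
        V = np.log(np.exp(Q).sum(axis=1))      # soft max over actions
    return np.exp(Q - V[:, None])

def state_visitation(pi):
    """Expected state visitation frequencies over the horizon, starting at s=0."""
    d = np.zeros(n_states); d[0] = 1.0
    mu = np.zeros(n_states)
    for _ in range(horizon):
        mu += d
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                d_next[P[s, a]] += d[s] * pi[s, a]
        d = d_next
    return mu / horizon

# Hypothetical expert demos: always move right toward state 2 and stay there.
expert_trajs = [[0, 1, 2, 2, 2, 2, 2, 2, 2, 2]] * 5
mu_expert = np.mean([phi[traj].mean(axis=0) for traj in expert_trajs], axis=0)

# MaxEnt IRL gradient: expert feature expectations minus learner's.
theta = np.zeros(n_states)
for _ in range(200):
    pi = soft_value_iteration(theta)
    theta += 0.1 * (mu_expert - state_visitation(pi) @ phi)
```

After training, the recovered reward weight should be highest at the expert's goal state, since the gradient raises the reward wherever the expert visits more often than the current soft-optimal policy does.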

Core Ideas

Exploration and reward are two fundamental problems in reinforcement learning:

  • Exploration: How can an agent efficiently gather information in an unknown environment?
  • Reward Design: How do we define the correct optimization objective?
  • Inverse RL: How can objectives be inferred from observed behavior?

These three problems are deeply interconnected: curiosity-driven exploration relies on intrinsic rewards, which is itself a reward-design problem, while inverse RL provides a pathway to deriving reward functions automatically from demonstrations.

