BDI Model
Overview
The BDI (Belief-Desire-Intention) model is one of the most influential theoretical frameworks in agent research. Originating from Michael Bratman's philosophy of practical reasoning, it was formalized as a computational model by Rao and Georgeff and remains a core reference for understanding and designing rational agents.
1. Philosophical Foundations
1.1 Bratman's Practical Reasoning Theory
Bratman (1987) argued that rational human behavior is driven not only by beliefs and desires but also requires intentions as a commitment mechanism:
- Belief: Information the agent has about the world state, which may be incomplete or incorrect
- Desire: States the agent wishes to achieve, which may be mutually contradictory
- Intention: Action plans the agent is committed to executing, which are persistent
The Key Role of Intentions
Intentions are not merely "selected desires." They possess:
- Commitment: they are not easily abandoned
- Constraining power: they constrain future decisions
- Reasoning triggering: they initiate means-end reasoning about how to achieve them
1.2 Distinction from Rational Choice Theory
Classical rational choice theory (e.g., expected utility maximization) assumes full rationality: at every step the agent re-evaluates all options and selects \(a^* = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s)\).
However, Bratman pointed out that humans are boundedly rational. The intention mechanism allows agents to:
- Reduce computational burden: No need to recompute all options at every step
- Maintain behavioral consistency: Committed intentions constrain the feasible action space
- Support coordination: Expectations about others' intentions make collaboration possible
2. Formal Model
2.1 Modal Logic Representation
Rao and Georgeff (1991) used branching-time modal logic (CTL*) to formalize BDI:
Beliefs (information about world states): \(\mathrm{BEL}(\varphi)\) — the agent believes that \(\varphi\) holds.
Desires (desired states): \(\mathrm{DES}(\varphi)\) (written \(\mathrm{GOAL}\) in Rao and Georgeff's formulation) — the agent desires that \(\varphi\) hold.
Intentions (committed plans): \(\mathrm{INTEND}(\varphi)\) — the agent is committed to bringing about \(\varphi\).
2.2 BDI Axioms
The BDI modal logic satisfies the following key axioms:
Consistency of beliefs: \(\mathrm{BEL}(\varphi) \rightarrow \neg\mathrm{BEL}(\neg\varphi)\)
Intention implies belief: \(\mathrm{INTEND}(\varphi) \rightarrow \mathrm{BEL}(\Diamond\varphi)\)
That is, an agent will not intend to do something it believes to be impossible.
Intention implies desire: \(\mathrm{INTEND}(\varphi) \rightarrow \mathrm{DES}(\varphi)\)
That is, intentions are a subset of desires (but not vice versa).
Persistence of intentions: \(\mathrm{INTEND}(\varphi) \rightarrow \mathrm{INTEND}(\varphi)\ \mathcal{U}\ (\mathrm{BEL}(\varphi) \lor \mathrm{BEL}(\neg\Diamond\varphi))\)
Intentions persist until (1) the agent believes the goal has been achieved, or (2) the agent believes it is impossible to achieve; \(\mathcal{U}\) is the "Until" operator of temporal logic.
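The axioms can be illustrated with a propositional simplification (a sketch only: modal and temporal operators are collapsed to plain set membership, with a `~` prefix standing in for negation):

```python
def neg(p: str) -> str:
    """Propositional negation: neg('p') == '~p', neg('~p') == 'p'."""
    return p[1:] if p.startswith("~") else "~" + p

def check_axioms(B: set, D: set, I: set) -> dict:
    """Check propositional simplifications of the three BDI axioms."""
    return {
        # Consistency of beliefs: BEL(phi) -> not BEL(not phi)
        "belief_consistent": all(neg(p) not in B for p in B),
        # Intention implies belief: don't intend what is believed impossible
        "intention_believed_possible": all(neg(p) not in B for p in I),
        # Intention implies desire: I is a subset of D
        "intention_implies_desire": I <= D,
    }
```

For example, intending `door_open` while believing `~door_open` can never hold is flagged, as is intending something that is not among the desires.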
2.3 BDI Agent World Model
```mermaid
graph TD
    subgraph "Agent Internal State"
        B[Belief Set B<br/>Information about the world]
        D[Desire Set D<br/>Desired goal states]
        I[Intention Set I<br/>Committed action plans]
        PL[Plan Library<br/>Available action recipes]
    end
    ENV[Environment] -->|Perception| B
    B --> DR[Deliberation]
    D --> DR
    DR --> I
    I --> MR[Means-End Reasoning]
    PL --> MR
    MR --> ACT[Action Execution]
    ACT --> ENV
```
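The internal state in the diagram can be represented by a small data structure (a minimal sketch; the `~` prefix for negated facts and the naive belief update are conventions of this example, not part of the formal model):

```python
from dataclasses import dataclass, field

@dataclass
class BDIState:
    beliefs: set = field(default_factory=set)        # B: information about the world
    desires: set = field(default_factory=set)        # D: desired goal states
    intentions: list = field(default_factory=list)   # I: committed plans (ordered)
    plan_library: dict = field(default_factory=dict) # goal -> list of action recipes

    def perceive(self, percept: set) -> None:
        """Naive belief update: add each percept, drop its negation."""
        for p in percept:
            self.beliefs.discard("~" + p)
            self.beliefs.add(p)
```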
3. BDI Reasoning Cycle
The core operational loop of a BDI agent is as follows:
3.1 Algorithm Description
```
Algorithm: BDI Reasoning Cycle
Input: initial beliefs B₀, initial desires D₀, plan library PlanLib

B ← B₀
D ← D₀
I ← ∅
while true do
    p ← PERCEIVE(environment)           // perceive environment
    B ← BRF(B, p)                       // belief revision function
    D ← OPTION-GENERATOR(B, D, I)       // generate options (desire update)
    I ← DELIBERATE(B, D, I)             // deliberate: select intentions
    π ← PLAN(B, I, PlanLib)             // means-end reasoning: generate plan
    while not (EMPTY(π) or SUCCEEDED(I, B) or IMPOSSIBLE(I, B)) do
        α ← HEAD(π)                     // take first action of the plan
        EXECUTE(α)                      // execute action
        π ← TAIL(π)                     // advance past the executed action
        p ← PERCEIVE(environment)       // re-perceive
        B ← BRF(B, p)                   // update beliefs
        if RECONSIDER?(I, B) then       // should we reconsider?
            D ← OPTION-GENERATOR(B, D, I)
            I ← DELIBERATE(B, D, I)
        end if
        if not SOUND(π, I, B) then      // is the plan still valid?
            π ← PLAN(B, I, PlanLib)     // re-plan
        end if
    end while
end while
```
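The cycle can be condensed into runnable Python. This is a deliberately simplified sketch: deliberation just commits to the first unachieved desire, plans are fixed action lists whose effects are encoded as `add:<fact>` strings, and the RECONSIDER/SOUND checks are omitted:

```python
class Environment:
    """Toy environment: the state is a set of facts; actions add facts."""
    def __init__(self):
        self.state = set()

    def percept(self) -> set:
        return set(self.state)

    def execute(self, action: str) -> None:
        if action.startswith("add:"):   # effect encoding used by this sketch
            self.state.add(action[4:])

def bdi_cycle(env, B: set, D: set, plan_lib: dict, max_steps: int = 20) -> set:
    """Simplified BDI loop: perceive, deliberate, plan, execute."""
    for _ in range(max_steps):
        B |= env.percept()                            # BRF: naive belief revision
        options = sorted(d for d in D if d not in B)  # OPTION-GENERATOR
        if not options:
            return B                                  # all desires achieved
        goal = options[0]                             # DELIBERATE: commit to one
        plan = list(plan_lib.get(goal, []))           # PLAN: means-end reasoning
        while plan and goal not in B:
            action = plan.pop(0)                      # HEAD(pi), then advance
            env.execute(action)
            B |= env.percept()                        # re-perceive, update beliefs
    return B
```

Running it with a two-step plan for `coffee_made` drives the environment until the agent's belief base records the goal as achieved.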
3.2 Key Decision Points
When to Reconsider?
This is the most subtle design decision in BDI systems:
- Too frequent: Wastes computational resources, incoherent behavior ("monkey mind")
- Too infrequent: Cannot adapt to environmental changes ("stubbornness")
- Best strategy: Reconsider when the environment changes significantly
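The event-triggered middle ground can be sketched as a simple policy (the per-intention relevance map is an assumption of this sketch, not part of the classic model):

```python
def should_reconsider(changed_beliefs: set, relevance: dict, intentions: set) -> bool:
    """Reconsider only if a changed fact is relevant to a current intention:
    'bold' about unrelated changes, 'cautious' about relevant ones."""
    relevant = set()
    for i in intentions:
        relevant |= relevance.get(i, set())   # facts this intention depends on
    return bool(changed_beliefs & relevant)
```

An agent intending to commute by bus reconsiders when `bus_running` changes, but ignores an unrelated change such as a stock price.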
4. PRS: Procedural Reasoning System
PRS (Procedural Reasoning System), developed by Georgeff and Lansky (1987), was the first complete implementation of the BDI model.
4.1 PRS Architecture
| Component | Function |
|---|---|
| Belief Database | Stores current information about the world |
| Goal Stack | Maintains currently active goals |
| Knowledge Areas (KA) | Library of available plans/methods |
| Intention Structure | Plan tree currently being executed |
| Meta-level KA | Rules controlling the reasoning process itself |
4.2 PRS Applications
PRS was originally applied to a Space Shuttle fault diagnosis system (built by SRI International for NASA), and was subsequently used widely in:
- Air traffic management
- Business process management
- Network management
- Military simulation
5. AgentSpeak(L) Language
Rao (1996) proposed AgentSpeak(L), a BDI-based agent programming language.
5.1 Syntax
```
// Initial beliefs
location(agent, office).
time(morning).

// Initial goal
!start_work.

// Plan rules
+!start_work : location(agent, office) & time(morning)
    <- open_computer;
       check_email;
       !plan_day.

+!start_work : not location(agent, office)
    <- !go_to(office);
       !start_work.

+!plan_day : has_meetings(today)
    <- review_calendar;
       prepare_materials.

+!plan_day : not has_meetings(today)
    <- !focus_work.
```
5.2 AgentSpeak(L) Semantics
- `+!g`: adopt new goal `g`
- `-!g`: drop goal `g`
- `+b`: add belief `b`
- `-b`: remove belief `b`
- `context`: plan precondition (a query against the belief set)
- `body`: plan body (a sequence of actions and sub-goals)
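The plan-selection step of these semantics can be sketched with a minimal interpreter (a sketch only: no logical variables or unification; a `~` prefix encodes `not` via negation as failure, and the atoms are flattened versions of the AgentSpeak literals above):

```python
# A plan is (trigger, context, body); the context is a set of literals
# that must all hold in the belief base.
PLANS = [
    ("+!start_work", {"location_office", "time_morning"},
     ["open_computer", "check_email", "!plan_day"]),
    ("+!start_work", {"~location_office"},
     ["!go_to_office", "!start_work"]),
]

def holds(literal: str, beliefs: set) -> bool:
    if literal.startswith("~"):
        return literal[1:] not in beliefs   # negation as failure
    return literal in beliefs

def select_plan(event: str, beliefs: set, plans=PLANS):
    """Return the body of the first applicable plan, or None."""
    for trigger, context, body in plans:
        if trigger == event and all(holds(l, beliefs) for l in context):
            return body
    return None
```

With the agent at the office in the morning, the first `+!start_work` plan fires; with an empty belief base, negation as failure makes the second plan applicable instead.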
5.3 Jason Implementation
Jason (Bordini & Hubner, 2007) is the most mature interpreter for AgentSpeak(L), providing:
- Complete BDI reasoning cycle implementation
- Multi-agent communication infrastructure
- Java interoperability
- Customizable selection functions
6. BDI and LLM Agents
6.1 Mapping Relationships
Modern LLM agents can be understood from a BDI perspective:
| BDI Concept | LLM Agent Counterpart |
|---|---|
| Belief | Information in context window + external memory retrieval |
| Desire | User instructions + goals in system prompt |
| Intention | Plan steps currently being executed |
| Plan Library | Tool definitions + few-shot examples + built-in knowledge |
| Belief Revision | Updating context after observing tool execution results |
| Deliberation | CoT reasoning process |
| Means-End Reasoning | Tool selection and parameter generation |
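The mapping can be made concrete with a toy loop (the `llm` function here is a canned stub standing in for a real model call; the tool names and the `USE_TOOL` reply protocol are assumptions of this sketch):

```python
def llm(prompt: str) -> str:
    """Stand-in for a real model call; canned replies keep the sketch runnable."""
    if "raining" in prompt:
        return "USE_TOOL get_umbrella"
    return "DONE"

def llm_bdi_step(context: list, goal: str, tools: dict) -> list:
    """One reasoning step using the table's mapping:
    beliefs = context window, desire = goal, deliberation = model call,
    means-end reasoning = tool selection, belief revision = appending the result."""
    prompt = f"Goal: {goal}\nObservations: {'; '.join(context)}"
    decision = llm(prompt)                  # deliberation (CoT happens inside)
    if decision.startswith("USE_TOOL"):
        tool = decision.split()[1]          # means-end reasoning: pick a tool
        observation = tools[tool]()         # act in the environment
        context.append(observation)         # belief revision: new observation
    return context
```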
6.2 LLM as a "Soft" BDI Implementation
Traditional BDI "hard" logical reasoning derives intentions by rule, for example:

\(\mathrm{BEL}(\mathit{raining}) \land \mathrm{DES}(\mathit{stay\_dry}) \rightarrow \mathrm{INTEND}(\mathit{bring\_umbrella})\)

LLM "soft" reasoning:

```
System: You are an assistant. Current weather: raining. User goal: stay dry.
LLM: I suggest bringing an umbrella. [Select tool: check_weather → confirm rain]
     Let me help you find the nearest convenience store to purchase rain gear.
```
LLMs provide a "fuzzy" but more flexible implementation of BDI:
- Beliefs are not precise logical formulae but probabilistic linguistic statements
- Reasoning is not strict logical deduction but analogical reasoning based on pattern matching
- Plans are not predefined programs but natural language steps generated online
6.3 Strengths and Weaknesses
| Dimension | Classic BDI | LLM-BDI |
|---|---|---|
| Reasoning rigor | High (logical guarantees) | Low (may "hallucinate") |
| Flexibility | Low (limited by plan library) | High (generated online) |
| Knowledge coverage | Narrow (manually coded) | Broad (pre-training corpora) |
| Interpretability | High (rules are traceable) | Medium (CoT partially interpretable) |
| Robustness | Low (brittle failure) | Medium (graceful degradation) |
Cross-Reference
For the cognitive architecture framework of the LLM era, see LLM Cognitive Architecture. For a detailed discussion of planning and reasoning, see Planning and Reasoning Survey.
7. Limitations and Criticisms of BDI
- Logical omniscience problem: The formalization assumes agents know all logical consequences of their beliefs
- Frame problem: Difficulty in efficiently reasoning about which beliefs are unaffected by actions
- Emotional and social factors: The original model does not include emotions, social norms, etc.
- Lack of learning: Classic BDI has no built-in learning mechanism
- Plan repair: The overhead of re-planning when plans fail can be substantial
Summary
The BDI model provides an elegant theoretical framework for rational agent behavior. Despite being over 30 years old, its core ideas -- beliefs driving reasoning, desires driving goals, intentions driving commitment -- remain highly relevant for guiding agent design in the LLM era.
References
- Bratman, M.E. (1987). Intention, Plans, and Practical Reason. Harvard University Press.
- Rao, A.S. & Georgeff, M.P. (1991). Modeling Rational Agents within a BDI-Architecture. KR 1991.
- Rao, A.S. (1996). AgentSpeak(L): BDI Agents Speak Out in a Logical Computable Language. MAAMAW 1996.
- Georgeff, M.P. & Lansky, A.L. (1987). Reactive Reasoning and Planning. AAAI 1987.
- Bordini, R.H. & Hubner, J.F. (2007). Programming Multi-Agent Systems in AgentSpeak using Jason. Wiley.
- Wooldridge, M. (2000). Reasoning about Rational Agents. MIT Press.