ACT-R and SOAR
Overview
ACT-R and SOAR are the two most influential cognitive architectures in cognitive science. ACT-R emphasizes memory activation mechanisms and production rule matching, while SOAR emphasizes problem-space search and automatic compilation of experience. Both attempt to build a "unified theory of cognition" but take fundamentally different paths.
1. ACT-R (Adaptive Control of Thought -- Rational)
1.1 Basic Architecture
ACT-R has been developed by John Anderson (Carnegie Mellon University) since 1976, with the current version being ACT-R 7.x.
```mermaid
graph TD
subgraph ACT-R Architecture
PM[Perceptual Modules<br/>Visual, Aural] --> BUF[Buffers]
BUF --> PS[Production System<br/>Pattern Matching]
DM[Declarative Memory] --> BUF
PS --> DM
PS --> MM[Motor Modules<br/>Manual, Vocal]
PS --> GM[Goal Module<br/>Goal Buffer]
GM --> PS
end
```
1.2 Core Components
Declarative Memory
Stores factual knowledge in basic units called chunks:
```
chunk: addition-fact-3-4
  isa: addition-fact
  addend1: 3
  addend2: 4
  sum: 7
```
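The chunk above can be encoded as a plain mapping, with retrieval as partial matching against declarative memory. This encoding and the `retrieve` helper are illustrative assumptions, not ACT-R's actual Lisp syntax:

```python
# Chunks as plain Python mappings; a retrieval request is a partial chunk
# whose filled slots must all match. This is a sketch, not ACT-R syntax.

declarative_memory = [
    {"isa": "addition-fact", "addend1": 3, "addend2": 4, "sum": 7},
    {"isa": "addition-fact", "addend1": 3, "addend2": 5, "sum": 8},
]

def retrieve(request):
    """Return the first chunk matching every slot in the request, else None."""
    for chunk in declarative_memory:
        if all(chunk.get(slot) == value for slot, value in request.items()):
            return chunk
    return None  # retrieval failure
```

In the real architecture the winner among matching chunks is the one with the highest activation; the activation equations below supply that ranking.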
Each chunk has an activation value \(A_i\) that determines retrieval speed and probability:

\[
A_i = B_i + \sum_j W_j \cdot S_{ji} + \epsilon_i
\]

where:
- \(B_i\): Base-Level Activation -- reflects usage frequency and recency
- \(W_j \cdot S_{ji}\): Spreading Activation -- associative strength from current context
- \(\epsilon_i\): Noise term (follows a Logistic distribution)
Base-Level Activation Computation:

\[
B_i = \ln\left( \sum_{j=1}^{n} t_j^{-d} \right)
\]

where \(t_j\) is the time interval from the \(j\)-th use to the present, and \(d\) is the decay parameter (typically \(d \approx 0.5\)).
Power-Law Forgetting
The base-level activation formula embodies the power-law decay of human memory -- memories that are older and less frequently used are harder to retrieve, which closely matches psychological experimental data.
Retrieval Probability:

\[
P_i = \frac{1}{1 + e^{-(A_i - \tau)/s}}
\]

where \(\tau\) is the retrieval threshold and \(s\) is the noise parameter.
Retrieval Latency:

\[
T_i = F e^{-A_i}
\]

where \(F\) is the latency factor. The higher the activation value, the faster the retrieval.
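The retrieval quantities described above can be exercised numerically. A minimal sketch, with parameter values (decay \(d\), threshold \(\tau\), noise \(s\), latency factor \(F\)) chosen purely for illustration:

```python
import math

# Numeric sketch of base-level activation, retrieval probability, and
# retrieval latency. Parameter values (d, tau, s, F) are assumptions.

def base_level(lags, d=0.5):
    """B = ln(sum_j t_j^(-d)); lags are seconds since each past use."""
    return math.log(sum(t ** -d for t in lags))

def retrieval_probability(A, tau=0.0, s=0.25):
    """P = 1 / (1 + exp(-(A - tau) / s))."""
    return 1.0 / (1.0 + math.exp(-(A - tau) / s))

def retrieval_latency(A, F=1.0):
    """T = F * exp(-A): higher activation, faster retrieval."""
    return F * math.exp(-A)

recent = base_level([10.0, 100.0])   # used twice, recently
old = base_level([10000.0])          # used once, long ago -> lower activation
```

With these values `recent > old`, reproducing the power-law pattern: the frequently and recently used chunk is both more likely to be retrieved and retrieved faster.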
Procedural Memory
Stores skill-based knowledge as production rules:
```
Production: retrieve-addition
IF   Goal buffer contains (isa: add, num1: =x, num2: =y, result: nil)
THEN Retrieve from declarative memory (isa: addition-fact, addend1: =x, addend2: =y)
     Place result in retrieval buffer
```
Each production rule also has a utility value, updated via reinforcement learning:

\[
U_i(n) = U_i(n-1) + \alpha \left[ R_i(n) - U_i(n-1) \right]
\]

where \(R_i(n)\) is the reward received and \(\alpha\) is the learning rate.
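The utility update is a standard delta rule, pulling the utility toward the observed reward. A minimal sketch (the \(\alpha\) value is an illustrative assumption):

```python
# Delta-rule utility update; alpha = 0.2 is an illustrative assumption.
def update_utility(U, reward, alpha=0.2):
    """U(n) = U(n-1) + alpha * (R(n) - U(n-1))."""
    return U + alpha * (reward - U)

U = 0.0
for _ in range(30):        # repeated reward of 10 drives U toward 10
    U = update_utility(U, 10.0)
```

A production that keeps earning reward thus rises in utility and wins conflict resolution more often; one that stops paying off decays toward the new reward level.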
Buffers
Buffers serve as the communication interface between modules. Each buffer can hold only one chunk at a time (extremely limited capacity), corresponding to the capacity limitations of human working memory.
| Buffer | Corresponding Module | Function |
|---|---|---|
| Goal | Goal module | Current goal and task context |
| Retrieval | Declarative memory | Memory retrieval results |
| Visual | Visual module | Visual attention focus |
| Manual | Motor module | Motor control commands |
| Imaginal | Imaginal module | Temporary manipulation of problem representations |
1.3 ACT-R Cognitive Cycle
Each cycle (~50ms):
- Pattern Matching: Check which production rules' conditions are satisfied by current buffer contents
- Conflict Resolution: Select the best production based on utility values
- Execution: Fire the selected production's action part (request retrieval, set buffers, issue motor commands, etc.)
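The three steps can be sketched as one loop iteration over toy buffer and production structures. The encodings and the sample production are illustrative assumptions, not ACT-R syntax:

```python
# One match -> conflict-resolution -> execute iteration (~50 ms in ACT-R).
# Buffer/production encodings here are illustrative assumptions.

def run_cycle(buffers, productions):
    """Fire the highest-utility matching production; False if none match."""
    matched = [p for p in productions if p["match"](buffers)]  # pattern matching
    if not matched:
        return False
    best = max(matched, key=lambda p: p["utility"])            # conflict resolution
    best["act"](buffers)                                       # execution
    return True

def add_result(buffers):
    goal = buffers["goal"]
    goal["result"] = goal["num1"] + goal["num2"]

productions = [{"match": lambda b: b["goal"]["result"] is None,
                "utility": 5.0,
                "act": add_result}]
buffers = {"goal": {"num1": 3, "num2": 4, "result": None}}
run_cycle(buffers, productions)   # fires once and fills in the result slot
```

On the next cycle the production's condition no longer matches (the result slot is filled), so nothing fires: the cycle is driven entirely by buffer contents.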
2. SOAR (State, Operator And Result)
2.1 Basic Architecture
SOAR has been developed by Allen Newell, John Laird, and Paul Rosenbloom since 1983, with the current version being Soar 9.x.
```mermaid
graph TD
subgraph SOAR Architecture
WM[Working Memory] --> DM2[Decision Procedure<br/>Decision Cycle]
LTM[Long-Term Memory] --> WM
LTM --> PM2[Procedural<br/>Production Rules]
LTM --> SM[Semantic]
LTM --> EM[Episodic]
DM2 --> OP[Operator Application]
OP --> WM
DM2 -->|Impasse| IMP[Subgoal Creation<br/>Impasse & Subgoaling]
IMP -->|Resolution| CH[Chunking<br/>Experience Compilation]
CH --> PM2
end
```
2.2 Core Concepts
Problem Space
SOAR models all cognitive activity as search in a problem space:
- \(S\): Set of states
- \(O\): Set of operators (state transition functions)
- \(I\): Initial state
- \(G\): Goal state (or goal test function)
Decision Cycle
Each SOAR decision cycle consists of five phases:
- Input: New information from perception enters working memory
- Propose: Production rules fire in parallel, proposing candidate operators and creating preferences that compare them
- Decide: The decision procedure selects the best operator from the preferences
- Apply: Rules implementing the selected operator fire, modifying working memory
- Output: Motor commands are passed to the environment
Impasse and Subgoaling
When the decision process cannot continue (e.g., no available operators, multiple indistinguishable operators), SOAR automatically creates a subgoal to resolve the impasse:
| Impasse Type | Description | Example |
|---|---|---|
| Tie | Multiple operators cannot be distinguished | Two approaches look equally good |
| Conflict | Multiple operators contradict each other | One rule suggests forward, another suggests backward |
| No-change | No available operators | Don't know what to do |
| Constraint-failure | Selected operator cannot be applied | Preconditions not met |
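The decision cycle and impasse detection can be sketched together over toy operators. Operator names and the preference encoding are illustrative assumptions:

```python
# One propose -> evaluate -> decide step, raising impasses when the
# decision cannot be made. Operators and preferences are assumptions.

def decision_cycle(state, proposers, evaluators):
    """Return the selected operator, or raise when an impasse arises."""
    candidates = [op for propose in proposers for op in propose(state)]
    if not candidates:
        raise RuntimeError("impasse: no-change (no operators proposed)")
    scores = {op: sum(ev(state, op) for ev in evaluators) for op in candidates}
    best = max(scores.values())
    winners = [op for op, score in scores.items() if score == best]
    if len(winners) > 1:
        raise RuntimeError("impasse: tie (operators indistinguishable)")
    return winners[0]  # in full Soar, operator-application rules now fire

proposers = [lambda state: ["recall-sum", "count-up"]]
evaluators = [lambda state, op: 2.0 if op == "recall-sum" else 1.0]
selected = decision_cycle({}, proposers, evaluators)
```

Where this sketch simply raises an exception, SOAR instead pushes a subgoal whose purpose is to produce the missing knowledge, as described next.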
2.3 Chunking: Automatic Compilation of Experience
Chunking is SOAR's most fundamental learning mechanism. When a subgoal is resolved, SOAR automatically compiles the resolution process into a new production rule:
Process:
- Reasoning within the subgoal resolves the impasse
- SOAR traces back to analyze: which working memory elements led to this result?
- These elements become conditions, and the result becomes the action, creating a new production rule
- Next time the same situation is encountered, the new rule matches directly without creating a subgoal
Analogy: Like a human who must work through a math problem step by step the first time, but after repeated practice can directly "see" the answer.
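The process can be sketched as a memoizing solver: the first call hits an "impasse", resolves it in a subgoal by counting up, and compiles the dependency into a rule that answers directly next time. All names here are illustrative assumptions:

```python
# Chunking as memoization: the result of subgoal reasoning, keyed by the
# working-memory elements it depended on, becomes a new rule.

learned_rules = {}

def solve_in_subgoal(num1, num2):
    result = num1
    for _ in range(num2):        # count-up strategy inside the subgoal
        result += 1
    # backtrace: these two inputs determined the result -> new rule
    learned_rules[(num1, num2)] = result
    return result

def solve(num1, num2):
    if (num1, num2) in learned_rules:
        return learned_rules[(num1, num2)]   # chunk matches directly
    return solve_in_subgoal(num1, num2)      # impasse -> subgoal
```

Real chunking is richer (it generalizes conditions rather than storing literal values), but the control flow, slow subgoal first and one-step rule thereafter, is the same.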
2.4 Long-Term Memory Types (Soar 9+)
| Memory Type | Contents | Access Method |
|---|---|---|
| Procedural | Production rules | Automatic matching (parallel) |
| Semantic | Facts and concepts (graph structure) | Query retrieval |
| Episodic | Snapshots of past experiences | Timeline or content query |
Cross-Reference
For a detailed discussion of memory systems, see Episodic and Semantic Memory.
3. ACT-R vs. SOAR Comparison
3.1 Core Differences
| Dimension | ACT-R | SOAR |
|---|---|---|
| Theoretical goal | Cognitive model (simulating humans) | General intelligence (AGI) |
| Core mechanism | Activation + Utility | Search + Chunking |
| Memory | Activation-driven retrieval | Production matching + Semantic/Episodic |
| Learning | Activation adjustment + Utility learning | Chunking + RL |
| Parallelism | Buffer-level parallelism | Production-level parallelism |
| Temporal modeling | Precise time predictions (ms-level) | No emphasis on temporal prediction |
| Application domains | Psychology experiment simulation | Game AI, robotics, military |
| Programming language | Lisp | C++/Java/Python |
3.2 Comparison on the Same Problem
Task: Compute 3 + 4
ACT-R Approach:
1. Goal buffer: (add num1:3 num2:4 result:nil)
2. Production match: retrieve-addition fires
3. Declarative memory retrieval: addition-fact-3-4 (highest activation)
4. Retrieval success: sum = 7
5. Production match: store-result fires
6. Goal buffer: (add num1:3 num2:4 result:7)
SOAR Approach:
1. State: (add ^num1 3 ^num2 4 ^result nil)
2. Propose operators: recall-sum, count-up, count-down
3. Evaluate: recall-sum has highest priority
4. Apply recall-sum: match known fact 3+4=7
5. State update: (add ^num1 3 ^num2 4 ^result 7)
If no direct memory exists (triggering an impasse), SOAR creates a subgoal, solves it via a counting strategy, then learns via Chunking:
```
Subgoal: Solve via count-up strategy
Chunk:   IF (add ^num1 3 ^num2 4) THEN (^result 7)
```
4. Connections to Modern LLM Agents
4.1 Concept Mapping
| Classic Concept | LLM Agent Counterpart |
|---|---|
| ACT-R Declarative Memory | Vector database + RAG |
| ACT-R Activation Values | Embedding similarity + Temporal decay |
| ACT-R Production Rules | Tool definitions + System prompts |
| SOAR Problem-Space Search | Tree of Thoughts |
| SOAR Impasse → Subgoal | "I'm not sure, let me break down this problem" |
| SOAR Chunking | Experience summaries written to memory (Reflexion) |
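The first two rows suggest a simple retrieval score for an agent's memory store: embedding similarity plays the role of spreading activation, and a power-law recency term plays the role of base-level activation. The weights and decay exponent below are assumptions, not part of any particular framework:

```python
# ACT-R-inspired memory score for an LLM agent's retrieval step.
# w_sim, w_rec, and d are illustrative assumptions.

def memory_score(similarity, seconds_since_use, w_sim=1.0, w_rec=1.0, d=0.5):
    """Rank memories by similarity to the query plus power-law recency."""
    recency = seconds_since_use ** -d        # power-law decay, as in ACT-R
    return w_sim * similarity + w_rec * recency
```

Ranking candidate memories by this score, rather than by similarity alone, gives recently used memories an edge, mirroring ACT-R's prediction that frequency and recency govern retrieval.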
4.2 Insights
- ACT-R's activation mechanism: Can be used to design memory retrieval priorities for LLM agents
- SOAR's Chunking: Analogous to compiling experience into verbalized rules in Reflexion
- ACT-R's temporal predictions: Can be used to model response latency of LLM agents
- SOAR's subgoaling: Analogous to task decomposition and recursive planning in LLM agents
5. Other Important Cognitive Architectures
| Architecture | Core Feature |
|---|---|
| CLARION | Dual-process theory (explicit + implicit knowledge) |
| Icarus | Hierarchical concepts and skills |
| LIDA | Global Workspace Theory |
| Sigma | Graphical models unifying cognitive functions |
| OpenCog | Open-source AGI cognitive architecture |
Summary
ACT-R and SOAR represent two major traditions in cognitive architecture research: ACT-R focuses more on precise modeling of human cognition, while SOAR focuses more on general problem-solving capability. The core ideas of both -- activation mechanisms, production-based reasoning, problem-space search, and experience compilation -- continue in new forms in the LLM era.
References
- Anderson, J.R. (2007). How Can the Human Mind Occur in the Physical Universe? Oxford University Press.
- Anderson, J.R. et al. (2004). An Integrated Theory of the Mind. Psychological Review, 111(4), 1036-1060.
- Laird, J.E. (2012). The Soar Cognitive Architecture. MIT Press.
- Newell, A. (1990). Unified Theories of Cognition. Harvard University Press.
- Laird, J.E., Newell, A. & Rosenbloom, P.S. (1987). SOAR: An Architecture for General Intelligence. Artificial Intelligence, 33(1), 1-64.