
The Master Algorithm

This notebook is based on Pedro Domingos's The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2015; paperback 2018). The book divides the field of machine learning into five tribes and poses a central question: does there exist an ultimate algorithm capable of learning anything from any data?

Scope note: this notebook focuses on three tribes that have not yet been systematically organized elsewhere on the site — Bayesians, Evolutionaries, and Analogizers. The Symbolists and Connectionists are already covered thoroughly in 02_Symbolic_AI and 1_DeepLearning respectively, so this entry simply links to them rather than rewriting the material.


1. What the book is about

Pedro Domingos is a professor of computer science at the University of Washington and one of the originators of Markov Logic Networks (MLNs), which unify probability and symbolic logic. He wrote this 2015 book for a general audience, mapping the internal factions of ML for non-specialists and putting forward a thesis that interests engineers and philosophers alike:

The Master Algorithm hypothesis: there exists a universal learning algorithm that, given enough data, can learn anything that is learnable. It must necessarily fuse the strengths of all five existing tribes.

Whether this hypothesis holds is debatable (see the No Free Lunch theorem), but it provides a clear map of the relationships between the five tribes, which makes it extremely useful for learners.
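
For context, the result of Wolpert and Macready (1997) says, roughly, that when performance is averaged over all possible objective functions, any two search algorithms perform identically. In their notation (a sketch of the idea, not the exact statement):

$$
\sum_{f} P\left(d_m^y \mid f, m, a_1\right) = \sum_{f} P\left(d_m^y \mid f, m, a_2\right)
$$

where $f$ ranges over all objective functions, $d_m^y$ is the sequence of the first $m$ objective values an algorithm samples, and $a_1$, $a_2$ are any two algorithms. A master algorithm can therefore only dominate on the structured distribution of problems the real world actually poses, not on all problems uniformly.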


2. The five tribes at a glance

```mermaid
graph TD
    ML[Machine Learning]
    ML --> S[Symbolists]
    ML --> C[Connectionists]
    ML --> E[Evolutionaries]
    ML --> B[Bayesians]
    ML --> A[Analogizers]

    S --> S1[Inverse Deduction]
    C --> C1[Backpropagation]
    E --> E1[Genetic Search]
    B --> B1[Bayes' Theorem]
    A --> A1[Similarity-based]

    style B fill:#ffe4b5,stroke:#d4a017
    style E fill:#c8e6c9,stroke:#388e3c
    style A fill:#bbdefb,stroke:#1976d2
    style S fill:#f5f5f5,stroke:#999
    style C fill:#f5f5f5,stroke:#999
```

The three highlighted tribes — Bayesians, Evolutionaries, and Analogizers — are the focus of this notebook. Symbolists and Connectionists are covered in other sections of the site.

| Tribe | Core belief | Master algorithm | Representative methods | In-site deep dive |
|---|---|---|---|---|
| Symbolists | Intelligence = symbol manipulation | Inverse deduction | Decision trees, rule induction, ILP, knowledge graphs | 02_Symbolic_AI/ |
| Connectionists | Intelligence emerges from neural connections | Backprop + gradient descent | MLP, CNN, Transformer, LLM | 1_DeepLearning/ |
| Evolutionaries | Intelligence is a product of evolution | Genetic search | GA, GP, ES, CMA-ES, NEAT | 进化派.md ← this notebook |
| Bayesians | Learning = probabilistic inference | Bayes' theorem | Naive Bayes, Bayes Net, HMM, LDA, MCMC | 贝叶斯派.md ← this notebook |
| Analogizers | Extrapolate from similar cases | Similarity metric | k-NN, SVM/kernels, CBR, recommender systems, contrastive learning | 类比派.md ← this notebook |
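
To make the three highlighted "master algorithms" concrete, here is a minimal, self-contained Python sketch of each tribe's core primitive: a Bayesian posterior update, one generation of genetic search, and nearest-neighbor prediction. The toy data and function names are illustrative assumptions, not code from the book:

```python
import math
import random

# --- Bayesians: Bayes' theorem as a learning update ---
# Posterior ∝ likelihood × prior, over a toy grid of coin-bias hypotheses.
hypotheses = [0.1, 0.5, 0.9]                      # candidate values of P(heads)
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def bayes_update(prior, heads):
    """Return the posterior over hypotheses after one observed coin flip."""
    likelihood = {h: h if heads else 1 - h for h in prior}
    evidence = sum(likelihood[h] * prior[h] for h in prior)
    return {h: likelihood[h] * prior[h] / evidence for h in prior}

posterior = bayes_update(prior, heads=True)

# --- Evolutionaries: one generation of genetic search ---
# Maximize f(x) = -(x - 3)^2: select the fittest half, then mutate.
def fitness(x):
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
parents = sorted(population, key=fitness, reverse=True)[:10]                 # selection
population = [p + random.gauss(0, 0.5) for p in parents for _ in range(2)]  # mutation

# --- Analogizers: 1-nearest-neighbor via a distance metric ---
train = [((1.0, 1.0), "A"), ((5.0, 5.0), "B")]    # stored (point, label) cases

def predict(x):
    """Label a query point with its closest stored case (Euclidean distance)."""
    return min(train, key=lambda case: math.dist(x, case[0]))[1]

print(posterior)                     # mass shifts toward higher P(heads)
print(max(population, key=fitness))  # drifts toward the optimum x = 3
print(predict((1.2, 0.8)))           # -> "A"
```

Each snippet corresponds to one row of the table: Bayes' theorem turns a prior into a posterior, selection plus mutation improves a population, and a similarity metric carries labels from stored cases to new ones.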

3. The five tribes vs. the site's three-tribe taxonomy

In 人工智能综述 and AI研究范式, ai-notes adopts the classical three-tribe taxonomy (symbolic / connectionist / behaviorist) — the traditional perspective of AI philosophy. Domingos's five tribes offer a finer-grained internal view of ML, and the two are orthogonal and complementary:

```mermaid
graph LR
    subgraph "Three tribes (AI philosophy)"
        T1[Symbolic]
        T2[Connectionist]
        T3[Behaviorist]
    end

    subgraph "Five tribes (within ML)"
        F1[Symbolists]
        F2[Connectionists]
        F3[Bayesians]
        F4[Analogizers]
        F5[Evolutionaries]
    end

    T1 --- F1
    T1 -.- F3
    T2 --- F2
    T2 -.- F4
    T3 --- F5

    style T1 fill:#f9f
    style T2 fill:#bbf
    style T3 fill:#bfb
```

  • Symbolic ⊇ Symbolists; partially overlaps with Bayesians (graphical-model semantics can be read symbolically)
  • Connectionist ⊇ Connectionists; partially overlaps with Analogizers (deep contrastive learning lives in embedding space)
  • Behaviorist ⊇ Evolutionaries (an extreme form of trial-and-error search); also overlaps with reinforcement learning

The three-tribe view cuts along "where does intelligence come from"; the five-tribe view cuts along "what mathematical tools does the learner use". Both perspectives are useful.


4. Navigating this notebook

```
06_The_Master_Algorithm/
├── index.md       # you are here
├── 贝叶斯派.md    # tribe philosophy + graphical models/HMM/LDA + probabilistic programming + Bayesian deep learning (~1300 lines)
├── 进化派.md      # tribe philosophy + GA/GP/Schema theorem + ES/CMA-ES/NEAT + swarm intelligence/AutoML (~1300 lines)
├── 类比派.md      # tribe philosophy + k-NN/kernels/SVM + metric & contrastive learning + recommenders & modern retrieval (~1800 lines)
└── 终极算法.md    # Five-tribe fusion hypothesis + MLN + Neuro-Symbolic + No Free Lunch + 2026 perspective
```

Each tribe page is a single long document with four major sections: an overview of the tribe plus three deep-dive topics. Internal organization uses H2/H3 headings; each tribe page is self-contained and can be read on its own.

Suggested reading order:

  1. Pick a tribe page that interests you and read its §1 (tribe philosophy + algorithmic map) front-to-back.
  2. Then dive into §2/§3/§4 — the three deep-dive topics — as needed.
  3. Finally read 终极算法 to return to Domingos's unification hypothesis.

5. Division of labor with existing site pages

This notebook does not rewrite existing material; it only supplies the tribe-level umbrella overviews and fills gaps elsewhere on the site:

| Existing page | Role | Counterpart in this notebook |
|---|---|---|
| 贝叶斯学习.md | Mathematical derivations (Bayes/MAP/MCMC/VI/GP) | 贝叶斯派.md: tribe perspective + graphical models/HMM/LDA + probabilistic programming + Bayesian DL |
| probabilistic_models.md | Foundations of graphical models | 贝叶斯派.md §Graphical Models & HMMs |
| kernel_methods.md | Algorithmic details of kernel methods | 类比派.md: tribe-level umbrella + k-NN/SVM/metric learning/recommenders |
| supervised.md §k-NN | The k-NN algorithm | 类比派.md §Nearest Neighbors & Kernel Methods |
| meta_learning.md | Siamese / Prototypical networks | 类比派.md §Metric Learning & Contrastive Learning |
| 视觉自监督.md | SimCLR / MoCo | Same as above: theoretical framework of metric learning |
| vision_foundation.md | CLIP | Same as above: metric-based reading of CLIP |
| 长期记忆与向量数据库.md | FAISS / HNSW engineering | 类比派.md §Recommender Systems & Modern Retrieval |
| 1_search.md §进化算法 | Brief intro to GA | 进化派.md §Genetic Algorithms & Genetic Programming |

6. Key references

  • Pedro Domingos. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books, 2015.
  • Pedro Domingos. "A Few Useful Things to Know About Machine Learning." Communications of the ACM, 55(10), 2012.
  • Matthew Richardson and Pedro Domingos. "Markov Logic Networks." Machine Learning, 62(1-2), 2006.
  • David H. Wolpert and William G. Macready. "No Free Lunch Theorems for Optimization." IEEE Transactions on Evolutionary Computation, 1(1), 1997.

Next step: pick the tribe page that interests you most among 贝叶斯派 / 进化派 / 类比派 / 终极算法.

