Contemporary AI faces important limitations that remain unresolved despite significant successes. Chief among these are the inability to acquire new knowledge without destroying old knowledge, the incomprehensible and non-engineerable nature of internal representations, and the difficulty of integrating learned models with symbolic reasoning. As a result, essential capabilities (continual open-world adaptation, recursive improvability, and transparent, human-controllable autonomy) remain out of reach. This thesis argues that overcoming these challenges requires rethinking the foundations of artificial learning. Toward that end, it proposes the foundations of a new learning paradigm for AI grounded in principles of adaptation revealed by evolutionary developmental biology.
The first part lays the conceptual groundwork by drawing parallels between AI and evolutionary theory. It critiques the foundations of current mainstream AI approaches (fixed architectures, distributed representations, statistical optimization), showing how the aforementioned limitations follow directly from them. The discussion then turns to the Modern Synthesis of evolution, which, though powerful, cannot on its own explain accelerating evolutionary change or the structural properties of biological organization. These gaps have been addressed in the "extended evolutionary synthesis," notably through insights from developmental biology. By aligning the explanatory limitations of the Modern Synthesis with the capability limitations of AI, this part argues that principles discovered by evolutionary developmental biology offer a promising foundation for reimagining how learning systems are built.
The second part operationalizes one such principle, conditional regulation, within the current learning paradigm. It introduces the Directed Adaptation Network (DIRAN), a method that incrementally constructs a continuously parameterized structure that learns via gradient signals, without resorting to heavy overparameterization. Instead of tuning a fixed architecture, DIRAN generates regulatory connections through a developmental process driven by conflicting learning pressures, and it guarantees convergence to good solutions with minimal structural complexity.
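The growth dynamic described above can be caricatured in a few lines. The sketch below is an illustrative assumption of this summary, not DIRAN itself: it trains a toy additive model by gradient descent and adds a new unit whenever the per-sample gradient directions on the current structure conflict strongly (the function name, the conflict heuristic, and all thresholds are hypothetical).

```python
# Hypothetical sketch of growth driven by conflicting learning pressures.
# NOT the thesis's algorithm: the conflict measure and growth rule are
# invented here purely to illustrate the idea of structure emerging from
# learning signals rather than being fixed in advance.

import random

def fit_incremental(data, lr=0.05, epochs=200, conflict_thresh=0.5, max_units=8):
    units = [[random.uniform(-1, 1), 0.0]]   # each unit: [weight, bias]
    for _ in range(epochs):
        grad_signs = []
        for x, y in data:
            pred = sum(w * x + b for w, b in units)
            err = pred - y
            for u in units:                  # gradient step on every unit
                u[0] -= lr * err * x
                u[1] -= lr * err
            grad_signs.append(1 if err * x > 0 else -1)
        # "conflict pressure": per-sample gradient directions disagree
        conflict = 1 - abs(sum(grad_signs)) / len(grad_signs)
        if conflict > conflict_thresh and len(units) < max_units:
            units.append([random.uniform(-0.1, 0.1), 0.0])  # grow structure
    return units
```

On linear data the model converges with whatever structure the conflict heuristic ends up producing; the point is only that growth is triggered by the learning signal itself.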
The final part presents the thesis's main technical contribution: the foundations and early design of a new class of learning systems called \textit{variation and selection (varsel) networks}. In addition to incorporating the principle of regulability, these systems reconceive learning as the generation of structured explanations through local, component-level variation and selection rather than iterative optimization. As a non-gradient-based method, this approach yields weakly linked, topologically organized representations that support continual learning without constraining assumptions, decomposability and interpretability, and natural integration with symbolic processes. Demonstrative experiments on those capabilities, including modeling
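To make the contrast with gradient-based optimization concrete, the toy sketch below learns by component-level variation and selection alone: one component is perturbed at random, and the change is kept only if it does not worsen the fit. This is a generic (1+1)-style hill climb written for illustration; the varsel network design in the thesis is richer, so all names and structural choices here are assumptions.

```python
# Minimal illustration of learning by variation and selection instead of
# gradient descent. Each component holds a local (weight, bias) fragment;
# learning proceeds by mutating one component and selecting the result.

import random

def varsel_fit(data, n_components=4, steps=2000, sigma=0.1):
    comps = [[0.0, 0.0] for _ in range(n_components)]

    def loss(cs):
        return sum((sum(w * x + b for w, b in cs) - y) ** 2 for x, y in data)

    best = loss(comps)
    for _ in range(steps):
        i = random.randrange(n_components)    # vary: perturb one component
        old = list(comps[i])
        comps[i][0] += random.gauss(0, sigma)
        comps[i][1] += random.gauss(0, sigma)
        new = loss(comps)
        if new <= best:                       # select: keep improvements
            best = new
        else:
            comps[i] = old                    # reject the variation
    return comps, best
```

Because only local, accepted variations ever change a component, each component's state is individually inspectable, which is a small-scale analogue of the decomposability the thesis attributes to varsel networks.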