This, IMO, is the biggest insight into where we're at and where we're going:
> Because both evaluation and self-modification are coding tasks, gains in coding ability can translate into gains in self-improvement ability.
There's a thing I noticed early on with LLMs: once they unlock one capability, you can use that capability to compose things and improve other capabilities, related or not. For example, "reflexion" goes into coding: hey, this didn't work, let me try ... Then "tools". Then "reflexion" + "tools". And so on.
You can get workflows whose individual parts aren't so precise to become better by composing them and letting one component influence another. For example, e2e coding gets better by checking with "gof" tools (linters, compilers, etc.). Then it gets even better by adding a code review stage. Then it gets even better by adding a static analysis phase.
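That staged composition can be sketched as a simple feedback loop. This is an illustrative toy, not any real framework: the stage names and the "generator" are made up, and each checker is a stub standing in for a real linter, reviewer, or analyzer.

```python
# Toy sketch: imprecise stages composed into a better whole, with each
# checker's findings fed back into the next generation attempt.

def generate(prompt, feedback):
    # stand-in for an LLM coding call; here it just marks flagged issues as fixed
    code = f"solution for: {prompt}"
    return code if not feedback else code + " (revised: " + "; ".join(feedback) + ")"

def lint(code):             # "gof" tool stage (linters, compilers)
    return ["missing type hints"] if "revised" not in code else []

def review(code):           # code review stage
    return ["unclear naming"] if "revised" not in code else []

def static_analysis(code):  # static analysis stage (no findings in this toy run)
    return []

def pipeline(prompt, max_rounds=3):
    feedback = []
    for _ in range(max_rounds):
        code = generate(prompt, feedback)
        feedback = lint(code) + review(code) + static_analysis(code)
        if not feedback:    # every checker is satisfied
            return code
    return code

print(pipeline("parse a CSV file"))
```

The point of the sketch is the wiring, not the stubs: no single stage is reliable on its own, but routing each stage's output into the next attempt makes the loop converge.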
Now we're seeing this all converge on "self improving" by combining "improving" components. And so on. This is really cool.
Agree. It's code all the way down. The key is to give agents a substrate where they can code up new capabilities and then compose them meaningfully and safely.
Larger compositions, though, start to run into typical software design problems: dependency graphs, shared state, how to upgrade, etc.
I disagree that evaluation is always a coding task. Evaluation is scrutiny by the person who wants the thing. It's subjective. So, unless you're evaluating something purely objective, such as an algorithm, I don't see how a self-contained, self-"improving" agent satisfies the subjectivity constraint, since by design you are leaving out the subject.
In science there are ways to turn subjectivity (which cannot be counted) into observable, quantized phenomena. Take opinion polls, for instance: "approval" of a political figure can mean many things and is subjective, but experts in the field turn "approval" into a number through scientific methods. These methods are just approximations with many ifs, and they're not perfect (for presidential campaign analysis in particular they've been failing, for reasons I won't get into here), but they're useful nonetheless.
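As a concrete illustration of that quantization step, here is the standard survey arithmetic: count "approve" responses, report a proportion, and attach a 95% margin of error via the normal approximation. The sample data is invented; this is textbook polling math, not any specific pollster's methodology.

```python
import math

# Toy poll: turn the subjective notion "approval" into a number with an
# uncertainty estimate. 520 of 1000 respondents approve.
responses = ["approve"] * 520 + ["disapprove"] * 480
n = len(responses)
p = responses.count("approve") / n
# 95% margin of error under the normal approximation to the binomial
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"approval: {p:.0%} +/- {moe:.1%}")  # prints "approval: 52% +/- 3.1%"
```

The many "ifs" the parent mentions live outside this formula: question wording, sampling bias, and nonresponse are exactly the parts the arithmetic can't see.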
Another thing that gets quantized is video preferences, to maximize engagement.
No matter how far we go, we end up with generation / discrimination architecture.
It is the core of any and all learning/excellence: exposure to chaotic perturbations allows selection of solutions, which are then generalized to further, ever more straining problems, producing increasingly applicable solutions.
This is the core of evolution, and is actually derivable from just a single rule.
Abstract:
Self-improving AI systems aim to reduce reliance on human engineering by learning to improve their own learning and problem-solving processes. Existing approaches to self-improvement rely on fixed, handcrafted meta-level mechanisms, fundamentally limiting how fast such systems can improve. The Darwin Gödel Machine (DGM) demonstrates open-ended self-improvement in coding by repeatedly generating and evaluating self-modified variants. Because both evaluation and self-modification are coding tasks, gains in coding ability can translate into gains in self-improvement ability. However, this alignment does not generally hold beyond coding domains. We introduce *hyperagents*, self-referential agents that integrate a task agent (which solves the target task) and a meta agent (which modifies itself and the task agent) into a single editable program. Crucially, the meta-level modification procedure is itself editable, enabling metacognitive self-modification, improving not only the task-solving behavior, but also the mechanism that generates future improvements. We instantiate this framework by extending DGM to create DGM-Hyperagents (DGM-H), eliminating the assumption of domain-specific alignment between task performance and self-modification skill to potentially support self-accelerating progress on any computable task. Across diverse domains, the DGM-H improves performance over time and outperforms baselines without self-improvement or open-ended exploration, as well as prior self-improving systems. Furthermore, the DGM-H improves the process by which it generates new agents (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs. DGM-Hyperagents offer a glimpse of open-ended AI systems that do not merely search for better solutions, but continually improve their search for how to improve.
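One toy reading of the task-agent / meta-agent split described in the abstract: a single editable program holds both a task policy and the procedure that rewrites it, and that procedure can also rewrite itself. Every name and number here is invented for illustration; this is not the DGM-H implementation.

```python
# Toy "hyperagent": the meta step edits the task agent AND its own
# modification procedure (here, just a step size it can enlarge).

class Hyperagent:
    def __init__(self):
        self.task_params = 0.0   # stands in for the task agent
        self.meta_step = 0.1     # stands in for the editable meta procedure

    def solve(self, target):
        return -abs(target - self.task_params)   # higher score = better

    def meta_modify(self, target):
        # meta level edits the task agent...
        before = self.solve(target)
        self.task_params += self.meta_step if self.task_params < target else -self.meta_step
        # ...and, crucially, also edits its own modification procedure,
        # accelerating future improvements when an edit helped
        if self.solve(target) > before:
            self.meta_step *= 1.5

agent = Hyperagent()
for _ in range(10):
    agent.meta_modify(target=5.0)
print(round(agent.task_params, 2), round(agent.meta_step, 3))
```

The interesting property, even in this caricature, is that progress compounds: improvements to `meta_step` change how fast all later improvements to `task_params` arrive, which is the "improving the search for how to improve" claim in miniature.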
This 'self vs non-self' logic is very similar to how plants prevent self-pollination. They have a biological 'discrimination' system to recognize and reject their own genetic code.
I think even code bases will have self-improving agents. Software is moving from just the product code to agent code that maintains the product. Engineering teams/companies that move in this direction will vastly outproduce others.
I've had to really shift how I think about building code bases; a lot of logic can go into Claude skills and sub-agents. It requires essentially relearning software engineering.
I've been working on this front for over two years now too: https://github.com/smartcomputer-ai/agent-os/
Or maybe some kind of really simple task like manufacturing paperclips
I've always felt that the most important part of engineering was feedback loops.
Maybe nature is the greatest engineer ever?
Here is a breakdown - https://vectree.io/c/plant-self-incompatibility-logic
https://github.com/NousResearch/hermes-agent
But this idea of having a task agent & meta agent maybe has wings. Neat submission.