The dominant theme in software engineering for the past two decades has been “continuous”: CI transformed integration, CD transformed delivery — each wave of “continuousification” turned a discrete, human-dependent step into an automated flow.
But one step has remained untouched: generation.
Code is written by hand. Documentation is written by hand. Test reports are assembled by hand. Config files are filled in line by line with the docs open in a split screen. Every engineering artifact is, fundamentally, handcrafted. The arrival of AI agents lets us ask for the first time: if all these artifacts can be generated from intent, should “handcrafted” still be the default?
Ida (Intent-Driven Agent) is a methodology built around that question. Its scope is engineering output with clear technical specifications that can be objectively verified — code, documentation, tests, configuration, reports. Organizational dynamics and process politics are out of scope.
Five Core Principles
1. Intent is the source; artifacts are derived
Humans provide goals and constraints. All engineering artifacts — code, docs, tests, reports, images, configs — are projections of those goals, not independently maintained first-class citizens. When upstream intent changes, downstream artifacts are regenerated.
This means our understanding of “source code” needs to shift up one layer: the true “source” is no longer the code file — it’s the intent in the human’s mind. Code is just one compilation target of that intent.
2. Goals must be well-formed
Not every sentence qualifies as input for Ida. A well-formed goal must satisfy four properties:
- Comprehensible — the agent can decompose it into concrete actions
- Achievable — it can be completed within current capabilities
- Verifiable — completion can be objectively judged as correct or incorrect
- Decomposable — it can be broken into small steps with visible progress at each stage
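The four properties read naturally as a checklist. A minimal sketch of what that check might look like — the `Goal` type, its fields, and the check itself are hypothetical illustrations, not part of any Ida tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A candidate goal plus the evidence that it is well-formed (hypothetical model)."""
    statement: str
    actions: list = field(default_factory=list)  # decomposition into concrete steps
    acceptance_check: str = ""                   # objective pass/fail criterion
    within_capabilities: bool = False            # human/agent capability assessment

    def missing_properties(self) -> list:
        """Return the well-formedness properties this goal still fails."""
        failures = []
        if not self.actions:
            failures.append("comprehensible/decomposable: no concrete actions")
        if not self.within_capabilities:
            failures.append("achievable: outside current capabilities")
        if not self.acceptance_check:
            failures.append("verifiable: no objective acceptance check")
        return failures

vague = Goal("make the kernel better")
print(vague.missing_properties())  # fails all three checks
```

Refining a requirement, in this framing, is the work of driving that failure list to empty before handing the goal to an agent.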
Refining a vague requirement into a well-formed goal is the human’s first job. This doesn’t lower the bar — quite the opposite. It demands that humans think more clearly and express themselves more precisely.
3. Generation is an ongoing conversation
The generation process is not a rocket you can only watch after launch. Humans can correct goals, add constraints, and give feedback on intermediate artifacts at any stage. Agents should proactively surface state and request confirmation at key checkpoints.
Intent itself is iterative. Good intent often takes shape gradually through multiple rounds of human-agent interaction.
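The checkpoint pattern can be sketched as a loop where the human may correct each intermediate artifact before the agent moves on — the function names and step labels here are illustrative assumptions, not a prescribed API:

```python
def generate_with_checkpoints(steps, plan_step, confirm):
    """Run generation step by step, surfacing each draft at a checkpoint.

    confirm(step, draft) returns either the draft unchanged or a
    human-corrected version — the 'ongoing conversation' in miniature.
    """
    artifacts = []
    for step in steps:
        draft = plan_step(step)                 # agent produces an intermediate artifact
        artifacts.append(confirm(step, draft))  # human may correct it mid-flight
    return artifacts

# Simulated run: the human tightens the config draft, accepts the rest as-is.
out = generate_with_checkpoints(
    ["config", "image", "report"],
    lambda step: f"draft {step}",
    lambda step, d: d + " (constraint added by human)" if step == "config" else d,
)
```

The point of the sketch is the shape, not the code: feedback enters between steps, not only after launch.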
4. Humans and agents each own their domain
Humans own the quality of intent. Agents own the quality of output.
The human’s job: define goals, set constraints, review output, make architectural decisions. The agent’s job: understand intent, plan execution paths, generate artifacts, maintain consistency.
This is not an “AI replaces humans” narrative — it’s a division of labor. Compilers didn’t replace programmers; they let programmers work at a higher level of abstraction.
5. Constraints are guardrails
Constraints define the corridor in which an agent may act — the envelope of permitted paths through the problem space. An agent without constraints is dangerous, like a highway without guardrails.
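One concrete way to picture the corridor is a set of named predicates checked before any proposed action executes — the constraint names and action fields below are invented for illustration:

```python
# Each constraint is a (name, predicate) pair over a proposed agent action.
constraints = [
    ("no writes outside workspace",
     lambda a: a.get("path", "").startswith("/workspace/")),
    ("no force push",
     lambda a: "--force" not in a.get("args", [])),
]

def violated(action: dict) -> list:
    """Return the names of constraints the proposed action would break."""
    return [name for name, ok in constraints if not ok(action)]

# The agent executes an action only when the corridor check comes back empty.
proposal = {"path": "/etc/passwd", "args": []}
print(violated(proposal))  # ['no writes outside workspace']
```

An action that violates any constraint is rejected before execution — the guardrail, not the driver's skill, is what keeps the agent on the road.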
Practical Scenarios
Abstract principles are only as good as their application. Ida outlines three typical scenarios.
NPI: New Platform Introduction
Traditional approach: engineers manually write baseline configs, build images, run validation, write reports — a serial process heavily dependent on individual experience.
The Ida way: the human defines intent — “This platform needs to support bookworm, kernel BSK 6.1, and must pass boot validation and hardware compatibility tests.” The agent generates configs, builds images, runs validation, and produces reports. Validation fails? The agent adjusts configs based on failure signals and regenerates. The human reviews the final artifacts and makes the accept/reject decision.
Configs, images, test reports — all derived from the same intent. Any upstream change triggers downstream regeneration.
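The "derived from the same intent" relationship can be made mechanical: each artifact records a fingerprint of the intent it was generated from, and a mismatch triggers regeneration. A minimal sketch (the `DerivedArtifact` class and generator are hypothetical):

```python
import hashlib

def fingerprint(intent: str) -> str:
    """Stable short hash of the intent text."""
    return hashlib.sha256(intent.encode()).hexdigest()[:12]

calls = 0
def generate_config(intent: str) -> str:
    """Stand-in for the agent's generation step; counts invocations."""
    global calls
    calls += 1
    return f"# baseline config generated for: {intent}"

class DerivedArtifact:
    """An artifact that remembers which intent it was derived from."""
    def __init__(self, generator):
        self.generator = generator
        self.source_fp = None
        self.content = None

    def ensure(self, intent: str) -> str:
        fp = fingerprint(intent)
        if fp != self.source_fp:        # upstream intent changed -> regenerate
            self.content = self.generator(intent)
            self.source_fp = fp
        return self.content

config = DerivedArtifact(generate_config)
c1 = config.ensure("bookworm + kernel 6.1 + boot validation")
c2 = config.ensure("bookworm + kernel 6.1 + boot validation")  # unchanged intent: cached
c3 = config.ensure("bookworm + kernel 6.6 + boot validation")  # changed intent: regenerated
```

The same pattern extends to images and reports: each downstream artifact keys off the intent, so an upstream edit invalidates exactly the artifacts that depend on it.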
Kernel Patch Backport
An upstream kernel has a critical fix that needs to be backported to multiple internal kernel branches. Traditional approach: cherry-pick each branch manually, resolve conflicts, compile, verify, submit for review. More branches, more repetitive work.
The Ida way: the human defines intent — “Backport upstream commit abc123 to the 5.15, 6.1, and 6.6 branches. Ensure compilation passes and relevant tests don’t regress.” The agent attempts the backport on each branch, resolves conflicts, runs compilation and tests. When a semantic conflict can’t be resolved automatically, it surfaces the decision to the human. All three branches progress in parallel, each independently verifiable.
Human judgment is spent where it matters — reviewing conflict resolutions, not grinding through mechanical cherry-picks.
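The branch loop above can be sketched without any real git plumbing — `try_apply` below is a stand-in for the agent's cherry-pick/build/test attempt, and the "simulated conflict" on one branch is an invented example:

```python
def backport(commit: str, branches: list, try_apply) -> dict:
    """Attempt the backport on every branch independently.

    try_apply(branch, commit) returns "applied" or "conflict".
    A conflict on one branch is surfaced to the human instead of
    blocking progress on the others.
    """
    results = {}
    for branch in branches:
        status = try_apply(branch, commit)
        results[branch] = status if status == "applied" else "needs human review"
    return results

# Simulated apply step: the 6.1 branch has drifted enough to conflict.
outcome = backport(
    "abc123",
    ["5.15", "6.1", "6.6"],
    lambda branch, commit: "conflict" if branch == "6.1" else "applied",
)
print(outcome)
```

Each branch carries its own verifiable outcome, so the human's queue contains only the one conflict that actually needs judgment.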
Oncall Diagnosis
3 AM. An alert fires: machines across a cluster are kernel-panicking in batches.
The Ida way: the human defines intent — “These machines are panicking. Find the root cause and scope of impact.” The agent automatically collects crash logs, matches against known issue patterns, queries for commonalities among affected machines (same hardware batch? same kernel version? same config change?), and generates a diagnostic report.
The oncall engineer’s job is judgment and decision-making — confirming root cause, deciding on a fix, approving execution — not typing commands half-asleep to gather data.
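The "query for commonalities" step is essentially a frequency analysis over the affected fleet. A small sketch — the machine attributes and sample data are fabricated for illustration:

```python
from collections import Counter

def common_factors(machines: list) -> dict:
    """For each attribute, report the dominant value and its share
    across affected machines; a share near 1.0 is a strong common factor."""
    report = {}
    for key in machines[0]:
        counts = Counter(m[key] for m in machines)
        value, n = counts.most_common(1)[0]
        report[key] = (value, n / len(machines))
    return report

# Fabricated crash inventory for three affected machines.
affected = [
    {"hw_batch": "B7", "kernel": "6.1.55", "config_change": "none"},
    {"hw_batch": "B7", "kernel": "6.1.55", "config_change": "sysctl-x"},
    {"hw_batch": "B7", "kernel": "6.1.38", "config_change": "none"},
]
print(common_factors(affected))
# hw_batch "B7" covers 100% of affected machines — the strongest lead
```

The diagnostic report the agent produces is, at heart, this table plus matched known-issue patterns; the engineer's contribution is deciding what the 100% column actually means.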
Key Corollaries
- Artifacts are renewable resources — regeneration cost approaches zero; don’t treat every line of code as precious
- Human value lies in judgment — not in repetitive production labor
- Feedback loops matter more than generation itself — test results, CI signals, user feedback, monitoring alerts are all signals that drive regeneration
- Acknowledge boundaries — goals that aren’t verifiable or decomposable shouldn’t be forced onto an agent
My Take
Ida is not a product or framework — it’s an engineering philosophy. Its core insight is plain: when generation cost approaches zero, engineers should spend their time where it has the most leverage — figuring out what they actually want.
This follows the same logic as “compilers let programmers work at higher abstraction levels.” In the assembly era, programmers managed registers and jump instructions. When high-level languages arrived, programmers described logic and compilers handled translation. Ida pushes this one layer further: programmers describe intent, agents generate artifacts.
What I find particularly interesting is the “goals must be well-formed” principle. It rejects the fantasy that “AI can do anything” — agents are not omnipotent, and humans need to refine vague requirements into goals that are comprehensible, achievable, verifiable, and decomposable. That refinement process is itself a demonstration of engineering skill.
Tools will evolve and models will improve, but this insight won’t expire: the most irreplaceable human capability is defining intent.