When people hear "AI builds 70% of your software," they picture a chatbot spitting out rough code that someone has to clean up. That is not what we do. The 70% AI handles is high-quality, structured, tested implementation — functions, components, API routes, database queries. The 30% our engineers own is the work that determines whether any of it makes it to production.
What the 70% looks like
Our AI agents work from a detailed technical specification that our engineers write. Given a well-scoped spec, the agent produces component code, unit tests, API handlers, and type definitions. It does this faster than any human can type, and it does not make the copy-paste errors humans do.
- React components — full implementations with props, state, and event handlers
- API routes — endpoint logic, validation, and error responses
- Database queries — typed queries with parameterisation and pagination
- Unit tests — coverage for the happy path and common error cases
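To make the shape of that output concrete, here is a minimal sketch of the kind of code the agent produces from a spec: a typed, bounded pagination validator for an API route, the sort of thing that sits in front of a database query. The function name `parsePagination` and the bounds (1-100) are illustrative, not our real API.

```typescript
// Illustrative agent output: validate raw query-string input into safe,
// bounded pagination parameters before it reaches a database query.

type PageParams = { limit: number; offset: number };
type ParseResult =
  | { ok: true; params: PageParams }
  | { ok: false; error: string };

function parsePagination(
  query: Record<string, string | undefined>
): ParseResult {
  const limit = Number(query.limit ?? "20");
  const offset = Number(query.offset ?? "0");
  if (!Number.isInteger(limit) || limit < 1 || limit > 100) {
    return { ok: false, error: "limit must be an integer between 1 and 100" };
  }
  if (!Number.isInteger(offset) || offset < 0) {
    return { ok: false, error: "offset must be a non-negative integer" };
  }
  return { ok: true, params: { limit, offset } };
}
```

Alongside a function like this, the agent emits the unit tests for the happy path (defaults, valid values) and the error cases (out-of-range limit, negative offset).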
What the 30% looks like
The 30% that engineers own is not editing AI code. It is the set of decisions the AI cannot make correctly without human judgment:
- Architecture review — Does this component model hold under real user load? Is this the right data structure for a schema that will grow?
- Security audit — AI misses injection vectors, insecure direct object references, and authentication edge cases. Every endpoint gets a manual review.
- Database migrations — Changing a live schema requires judgment about data distribution, rollback strategy, and downtime tolerance. AI does not have this context.
- Deployment configuration — Environment variables, CDN rules, edge function placement, serverless timeouts. These are infrastructure decisions with real consequences.
- Monitoring setup — What gets alerted on. What error rates are acceptable. How the on-call rotation works.
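One class of issue that the security audit catches is the insecure direct object reference: a generated handler fetches a record by id without checking who is asking. A minimal sketch of the fix, with hypothetical names (`Doc`, `fetchDoc`, an in-memory store standing in for a real database):

```typescript
// Hypothetical sketch of an IDOR fix a reviewer would require.
// Doc, fetchDoc, and the in-memory store are illustrative only.

type Doc = { id: string; ownerId: string; body: string };

const store: Doc[] = [{ id: "d1", ownerId: "alice", body: "quarterly notes" }];

// Generated code often finds by id and returns the record. The review
// scopes the lookup to the requester, and returns the same "not found"
// result for missing and forbidden docs so valid ids do not leak.
function fetchDoc(docId: string, requesterId: string): Doc | null {
  const doc = store.find((d) => d.id === docId);
  if (doc === undefined || doc.ownerId !== requesterId) return null;
  return doc;
}
```

The design point is that the ownership check lives inside the data access path, not in a wrapper a future caller can forget.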
Why this is actually 5x faster
A traditional agency spends roughly 60-70% of its engineering time on implementation — writing functions, components, and queries. Move that work to AI without losing quality and the schedule immediately drops to the remaining 30-40%; the rest of the gain comes from engineers doing architecture, review, and deployment as focused work rather than interleaving it with typing code. Projects that used to take 12 weeks take 3.
The catch is that this only works if the specification is precise and the human review is rigorous. Skipping the review is how you get a fast prototype that fails in production. We have seen enough of those to know exactly where the failures happen.
That is why our process is not "AI first, humans later." It is "humans spec, AI builds, humans verify." The ratio is 70/30. The sequence matters as much as the split.