1- The first few thousand lines determine everything
When I start a new project, I obsess over getting the process, guidelines, and guardrails right from the start. Whenever something is being done for the first time, I make sure it's done clean. Those early patterns are what the agent replicates across the next 100,000+ lines. Get it wrong early and the whole project turns to garbage.
2- Parallel agents, zero chaos
I set up the process and guardrails well enough that I unlock a superpower: running multiple agents in parallel while everything stays on track. This is only possible because I nail point 1.
3- AI is a force multiplier in whatever direction you're already going
If your codebase is clean, AI makes it cleaner and faster. If it's a mess, AI makes it messier faster. The temporary dopamine hit from shipping with AI agents makes you blind. You think you're going fast, but zoom out and you're actually going slower because of constant refactors from technical debt ignored early.
4- The 1-shot prompt test
One of my signals for project health: when I want to do something, I should be able to do it in 1 shot. If I can't, either the code is becoming a mess, I don't understand some part of the system well enough to craft a good prompt, or the problem is too big to tackle all at once and needs breaking down.
5- Technical vs non-technical AI coding
There's a big difference between technical and non-technical people using AI to build production apps. Engineers who built projects before AI know what to watch out for and can detect when things go sideways. Non-technical people can't. Architecture, system design, security, and infra decisions will bite them later.
6- AI didn't speed up all steps equally
Most people think AI accelerated every part of programming the same way. It didn't. For example, choosing the right framework, dependencies, or database schema (the foundation everything else is built on) can't be done by giving your agent a one-liner prompt. These decisions deserve more time than adding a feature.
7- Complex agent setups suck
Fancy agents with multiple roles and a ton of .md files? Doesn't work well in practice. Simplicity always wins.
8- Agent experience is a priority
Treat the agent workflow itself as something worth investing in. Monitor how the agent is using your codebase. Optimize the process iteratively over time.
9- Own your prompts, own your workflow
I don't like to copy-paste some skill/command or install a plugin and use it as a black box. I always adapt it based on my workflow and the things I notice while building.
10- Process alignment becomes critical in teams
Doing this as part of a team is harder than doing it yourself. It becomes critical that every member follows the same process and that updates to the process are shared with the whole team.
11- AI code is not optimized by default
AI-generated code is not optimized for security, performance, or scalability by default. You have to explicitly ask for it and verify it yourself.
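A hypothetical sketch of what I mean (the users table and helper names are made up, not from any real project): the first version an agent hands you often "works" in every demo while interpolating user input straight into SQL, and you only get the safe version after asking for it and checking it yourself.

```
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, email: str):
    # Typical first-pass agent output: runs fine in testing,
    # but user input is interpolated straight into the SQL string,
    # leaving it open to injection.
    return conn.execute(
        f"SELECT id, name FROM users WHERE email = '{email}'"
    ).fetchone()

def find_user_safe(conn: sqlite3.Connection, email: str):
    # What you get only after explicitly asking and verifying:
    # a parameterized query.
    return conn.execute(
        "SELECT id, name FROM users WHERE email = ?", (email,)
    ).fetchone()
```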
12- Check git diff for critical logic
When you can't afford a mistake, or when the app is hard to test and has long test cycles, review the git diff. For example, the agent might use created_at as a fallback for birth_date. You won't catch that just by testing whether it works.
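A made-up illustration of that exact failure mode (the function and field names are hypothetical): nothing crashes, every user gets some date back, and only a diff review makes the substitution obvious.

```
from datetime import date

def get_birth_date(user: dict) -> date:
    # The kind of line a diff review catches: the app "works" because
    # every user gets some date back, but created_at is a signup
    # timestamp, not a birth date. Black-box testing won't flag it.
    return user.get("birth_date") or user["created_at"]

def get_birth_date_reviewed(user: dict) -> date | None:
    # After review: missing data stays missing instead of being
    # silently replaced by an unrelated field.
    return user.get("birth_date")
```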
13- You don't need an LLM call to calculate 1+1
It amazes me how people default to LLM calls when the same thing can be done with a simple, free, deterministic function. But then we wouldn't be "AI-driven", right?
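A deliberately silly sketch of the pattern (call_llm is a stand-in, not a real SDK): one version costs tokens, adds latency, and returns free text you still have to parse; the other is one line of arithmetic.

```
def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM SDK call: network round trip, token cost,
    # latency, and a free-text answer you still have to parse.
    raise NotImplementedError("imagine an API call here")

def add_with_llm(a: float, b: float) -> float:
    # The "AI-driven" way to compute a sum.
    return float(call_llm(f"What is {a} + {b}? Reply with only the number."))

def add(a: float, b: float) -> float:
    # The boring way: deterministic, free, instant, and trivially testable.
    return a + b

print(add(1, 1))  # 2
```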