Slop — this is the term we hear more and more often when talking about AI products. Something low-quality, difficult to control, created non-stop in alarming quantities with a single goal: capture attention, kill time, generate hype. Add to this the reports about a bubble in the industry, the scale of which, along with the consequences of its potential burst, remains to be seen.
All of this once again fuels the debate about the real effectiveness and dangers of AI. Software development is no exception. Research convincingly shows that mastering AI tools isn't easy, and without proper skills, the effect of using them can even be negative.
Nevertheless, AI can be extremely useful in software development. All it takes is rigor, consistency, discipline. Or following a checklist.
Software development hasn't fundamentally changed in decades. We've refined methodologies, reinvented programming languages, delivery methods, observability, QA, and so on. But we've always written code line by line, as text, maintaining control over what we write as we go.
That era is over. Whether we like it or not, code will soon be generated primarily by language models. This is a massive shift. We need to change many of our habits — literally change our profession. That takes serious motivation. For me, the most compelling reasons to use AI tools in development every day are:
It's becoming a job requirement. A good example is this hiring standard from Zapier:
Checking whether you have the right motivation is simple - just notice your emotions when new LLM versions and AI products for software development come out. The right emotion is joy about getting a new tool that's often cheaper or more reliable.
If, instead of joy and curiosity, you feel anxiety, it might mean you haven't adapted enough to the new tools. But anxiety can also be productive - use it to push yourself to learn the new tools. The relief you feel when that anxiety dissolves becomes another powerful reason to keep going.
From this point on, all checklist items will be somehow connected to context-engineering. The approach here is simple - the more concise, specific, and detailed the instructions the LLM receives, the better its output quality.
Context specificity is achieved through proper codebase organization - breaking things down into independent, isolated modules with a limited scope of responsibility.
The easiest way to organize such modules is within monorepos. Today, it doesn't take much to set up a comprehensive multi-language repository for the whole project. Start by identifying the core entities - deployable applications and shared "packages" with utility code.
The rules are simple - applications should depend on packages, and packages can depend on each other, but packages cannot depend on applications.
Here is an example structure you can use; it fits most cases:
```
root/
├── applications/
│   ├── website/
│   ├── public-api/
│   ├── admin-dashboard/
│   └── ...
│
├── packages/
│   ├── js/
│   │   ├── ui/
│   │   ├── client/
│   │   ├── validation/
│   │   └── ...
│   │
│   ├── py/
│   │   ├── common-lib/
│   │   ├── models/
│   │   ├── auth/
│   │   └── ...
│   │
│   └── go/
│       ├── common/
│       ├── clients/
│       └── ...
└── Makefile
```
The Makefile acts as the single entry point for everything, enabling technology-agnostic script orchestration.
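For instance, a root Makefile might start out like this (a minimal sketch; the target names, paths, and tool choices are my assumptions, not requirements of the structure above):

```makefile
# Technology-agnostic entry point at the repository root.
# Targets, paths, and tools below are illustrative assumptions.

.PHONY: lint test test-api-client

lint:
	pnpm -r lint                 # every JS package that defines a "lint" script
	ruff check packages/py       # Python linting
	cd packages/go && golangci-lint run ./...

test:
	pnpm -r test                 # every JS package that defines a "test" script
	pytest packages/py
	cd packages/go && go test ./...

# A fine-grained target an AI agent can be told to run after its changes,
# assuming a JS workspace package named "api-client" exists.
test-api-client:
	pnpm --filter api-client test
```

Fine-grained targets like the last one are what you can later reference in prompts, so the agent runs exactly the checks relevant to its change.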
Python and Go support monorepos practically out of the box. For JS, pnpm has proven to be the best choice — today I'd pick it over npm and yarn.
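Wiring up a pnpm workspace takes a single file at the repository root; the globs below assume the layout shown earlier:

```yaml
# pnpm-workspace.yaml
packages:
  - "applications/*"
  - "packages/js/*"
```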
This kind of monorepo is a very convenient, flexible, and extensible structure. And it works really well with AI agents. Developing a package is straightforward since the code is strictly isolated and serves a specific function.
When working with code in applications, you can:
The quality of AI-generated code dramatically improves when the AI agent can run tests, linters, and use the output from these tools to adjust the result during the work process.
In an ideal world, the best solution would be giving AI access to a code execution environment — for web projects, that means browsers. Development of such technologies is moving fast, but right now there's still no ready-made solution.
That said, even basic linters and unit tests are excellent tools for quality control and reducing hallucinations. Especially since setting up static code checks in a monorepo is pretty cheap. And you can always generate unit tests based on code that's already written.
While some AI tools (like Cursor) run linters automatically, it still makes sense to explicitly instruct the LLM to run checks.
For example, a prompt might look like this:
```
Update packages/js/api-client:

Add getUserProfile(userId) method:
- GET /users/{userId}/profile
- Include tests for success and 404 error

Run `make test-api-client` and fix any failures.
```
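The change the agent produces might look roughly like this (a hypothetical sketch based on the prompt above; the module path, types, and error handling are assumptions, not real output):

```typescript
// packages/js/api-client/src/getUserProfile.ts (illustrative)
export class NotFoundError extends Error {}

export interface UserProfile {
  id: string;
  name: string;
  email: string;
}

export async function getUserProfile(
  baseUrl: string,
  userId: string,
): Promise<UserProfile> {
  const response = await fetch(`${baseUrl}/users/${userId}/profile`);
  if (response.status === 404) {
    // Matches the 404 case the prompt asks to cover with a test.
    throw new NotFoundError(`No profile found for user ${userId}`);
  }
  if (!response.ok) {
    throw new Error(`Unexpected response status: ${response.status}`);
  }
  return (await response.json()) as UserProfile;
}
```

The point isn't the code itself but the loop around it: the agent runs `make test-api-client`, reads the failures, and keeps adjusting until the suite is green.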
Memory-bank is a framework that describes how to store and manage two types of entities: Rules and Documentation.
Documentation is pretty straightforward - it's your product described from different perspectives: product, architecture, and component levels. There are plenty of articles online about writing good technical documentation. What matters for us is that the more granular the documentation is and the closer it sits to the code, the easier it is for an agent to access it.
The standard entry point for AI agents is the AGENTS.md file. This file should outline the project's overall structure and purpose, and provide links to other documentation files. A monorepo setup makes it convenient to organize documentation within specific modules while also maintaining a root-level memory bank.
Here's what a memory-bank file structure might look like:
```
monorepo/
├── memory-bank/                      # Shared across entire repo
│   ├── AGENTS.md                     # Root entry point
│   ├── monorepo-conventions.md       # Overall structure & rules
│   ├── shared-tech-stack.md          # Common technologies used
│   ├── development-workflow.md       # Git, CI/CD, deployment
│   └── architecture-overview.md      # High-level system design
│
├── applications/
│   ├── website/
│   │   ├── memory-bank/
│   │   │   ├── AGENTS.md             # References root + specific context
│   │   │   ├── product-overview.md   # What this app does
│   │   │   ├── user-journeys.md      # Key user flows
│   │   │   ├── tech-stack.md         # Specific technologies
│   │   │   ├── api-integration.md    # How it connects to APIs
│   │   │   └── tasks.md              # Current work items
│   │   └── src/
```
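A root AGENTS.md can stay short and mostly point elsewhere. Here is a sketch of what it might contain (the wording is mine, not a prescribed template):

```
# AGENTS.md

Monorepo with deployable apps in applications/ and shared code in packages/.
Applications depend on packages; packages never depend on applications.

Before changing anything, read:
- memory-bank/monorepo-conventions.md
- memory-bank/architecture-overview.md
- the memory-bank/ folder of the application or package you are touching

Run `make lint` and `make test` before finishing a task.
```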
Rules are a relatively new concept and, in my opinion, one of the most productive ideas for reusing prompts. Rules are coding guidelines written in natural language — essentially a combination of contribution.md and .eslintrc. The Cursor editor's approach to Rules seems especially powerful, as it lets you attach rules to specific files using glob patterns. This automatically "imports" the relevant rules into context when working with different parts of your repository. You can define rules for TypeScript, Supabase, or GitHub Actions once, then reuse them across projects. This is particularly handy when following naming conventions and repository structure standards.
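As an illustration, a glob-scoped rule might look something like this (a sketch; the exact file location and frontmatter fields depend on your editor and its version):

```
---
description: TypeScript conventions for packages/js
globs: packages/js/**/*.ts
alwaysApply: false
---

- Use named exports; avoid default exports.
- Validate all external input at package boundaries.
- Co-locate unit tests as *.test.ts next to the source files.
```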
Rules are always global and universal. The best practice is to store them in the repository root — that's what code editors expect.
The "blank page" problem for both Documentation and Rules is easy to avoid. Documentation can be AI-generated with a simple prompt and then edited. And rules for specific technologies are easy to find online: one, two, three.
So every session with an AI agent starts by reading the Memory-bank and ends with updating it. This takes discipline and attention, but it pays off quickly since it saves time when starting the next session.
The Plan → Act approach is probably the biggest shift in how we work with AI assistants, and it's really the only reliable way to tackle complex, large-scale tasks.
Many AI coding assistants have this built in out of the box. Cursor, for example, automatically plans task execution without requiring extra instructions. But for best results, especially on bigger tasks, you're better off running Plan → Act manually.
The approach is based on the 90/10 principle: be ready to spend 90% of your time planning and only 10% on implementation. Planning produces specific artifacts — tasks. The best practice is to store these tasks in a memory bank at the module level, in a .md file.
The more detailed, thorough, and consistent your tasks are, the better your results will be.
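A task entry in such a file might look something like this (an illustrative sketch, not a prescribed format):

```
## Task: expose user profiles in public-api

Status: planned

Plan:
1. Add getUserProfile to packages/js/api-client (tests for success and 404).
2. Add the GET /users/{userId}/profile handler in applications/public-api.
3. Update applications/public-api/memory-bank/api-integration.md.

Out of scope: caching, rate limiting.
```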
The typical Plan → Act workflow looks like this:
Two important tips for using this pipeline:
Mastering the Plan → Act pipeline isn't just about AI literacy. It's an important, even critical, engineering skill. Basically, a good Plan is a ready-made design doc — you can discuss it with your team lead or team, align on it before implementation in a PR, and use it as a foundation for documentation and your testing layer.
There are so many different tools on the market, and I didn't want to focus on any specific one. But let's at least break down the two main types: local agents (like Claude Code, Cursor, and Codex CLI) and cloud-based ones (like Codex Cloud, Jules, or the newly released Claude Code on the web).
As a developer, you'll usually reach for tools based on local agents. Whether it's an AI-powered code editor or a CLI tool, most daily work tasks are simply easier and more efficient to handle that way.
But cloud-based tools have their own sweet spot. Their defining trait is that all interaction happens through a browser — you don't need a dev environment or a locally deployed project. The second important quality is that they're built for parallel work and can produce multiple results at the same time.
That's why they're perfect for these use cases:
The perfect task is one that solves itself, or better yet, produces a process for solving similar tasks. A cloud coding agent can become one of those processes.
Perhaps the toughest part of AI coding is teamwork. How do you share best practices? Should you standardize workflows between teammates, and does everyone need to use the same set of tools?
And those are just the most abstract questions. AI assistants raise plenty of very specific ones too: how do you manage the memory bank, and who should be responsible for keeping it up to date? How do you work on PRs? How do you share AI rules with one another?
AI providers are trying to address this need one way or another, but there are no ready-made solutions. As happens so often in software engineering, we'll have to invent the solution ourselves.
From my experience, the best way to spread knowledge is through open horizontal forums. If you already have regular developer meetings for sharing practices, discussing news, and passing on experience, make AI adoption an important topic at these meetings. If you don't have such a forum yet, maybe it makes sense to create one, or advocate for AI adoption on whatever platform is available.