Vibe Coding

A new way to build software has emerged where the productive bottleneck is no longer typing speed. It is judgment. A solo developer with the right tools and the right instincts can ship a working application in the time it once took to scaffold the project, and the difference between a good outcome and a bad one has almost nothing to do with how fast that person can write code by hand.

That shift has a name now. People started calling it vibe coding in early 2025 -- Andrej Karpathy's offhand coinage in February of that year is usually credited -- and the label stuck. The phrase is breezy and slightly self-mocking, which is part of why it caught on, but the underlying practice is serious. It describes a real change in how software gets made when a capable AI agent is doing the typing and a human is doing the directing.

This page defines the discipline, traces where it came from, names what changes and what does not, and draws the lines around where the approach actually works. Other topics in this curriculum go deep on the specifics -- prompt engineering, agent instruction files, context engineering, tool comparisons. Here, we set the frame.

200K -- tokens of context a frontier model can hold today, versus roughly 4K when ChatGPT first launched in late 2022
~50x -- productivity multiplier reported by experienced practitioners on greenfield work, measured in features shipped per week
1 -- people required to ship a production-grade SaaS in 2025-2026 with the right toolchain and discipline

What Vibe Coding Actually Is

Vibe coding is the practice of building software by directing an AI agent to write the code, rather than writing the code yourself. The human stays in the loop on every meaningful decision. The AI handles the keystrokes, the syntax, the boilerplate, most of the file edits, and a large share of the debugging traversal. The human owns the product thinking, the architecture, the review gate, and the choice of what to build next.

That definition is narrower than it sounds, and it is broader than it sounds, depending on which misconception you arrived with. It is narrower because it does not include drag-and-drop no-code tools like Webflow or Bubble, which solve a different problem for a different audience. It does not include single-shot prompt-only generation -- the "type one prompt, copy the output, hope it works" pattern that produces toy demos and breaks at the first edge case. Vibe coding involves iteration, review, refactoring, and judgment over many turns.

It is broader because the discipline shows up across the full software lifecycle. You can vibe-code a database schema, an API surface, a UI component library, a deployment pipeline, a test suite, and a marketing landing page in the same week, with the same agent, in the same project. The label is not about a specific kind of code. It is about a specific kind of relationship between the human and the typing.

The discipline is composed of a handful of recognizable activities. Product thinking -- deciding what the software should do and for whom. Architecture decisions -- choosing the framework, the database, the deployment target, the auth model, the data shape. Prompt engineering -- writing instructions to the agent that produce the code you actually want. Code review -- reading the diffs, catching the bugs, refusing the lazy solutions. Debugging -- noticing when something is wrong, narrowing it down, directing the agent toward the fix. Deployment -- getting the code in front of real users and watching what happens. The same five-or-six-thing list a senior engineer has always owned, with one major difference: the typing is delegated.

What it is not

Three things the label gets confused with, worth clearing up early. Vibe coding is not no-code. No-code tools assemble pre-built components through visual interfaces and ship a constrained product. They are useful for the audience they serve. They are not the same activity, and they do not produce arbitrary software. Vibe coding produces arbitrary software. If a real engineer could write it by hand in a normal language using a normal framework, you can vibe-code it.

Vibe coding is not single-prompt generation. The phrase "type a prompt, get an app" describes a fantasy that has been heavily marketed and rarely delivered. Real vibe coding is a session, often hours long, with dozens of turns, course corrections, refactors, and explicit teach-the-model moments. The output of one prompt is rarely the final output of the session. The expectation that a single prompt does the job is a tell that someone has not actually built anything serious this way.

Vibe coding is not lazy coding. This one matters most because it is the misconception that makes serious engineers dismiss the practice without trying it. The cognitive load of vibe coding is real. It is just shifted. You spend less time on syntax and more time on decisions. Less time on copy-pasting from documentation and more time on architecture trade-offs. Less time fighting your editor and more time reading diffs the agent just produced and asking whether they belong in your codebase. Anyone who has done this for forty hours straight will tell you it is not less tiring than writing code by hand. It is a different kind of tiring.

Why Now -- The LLM Context Inflection

This is not a discipline that could have existed in 2020. The hardware was there, the language models were not, and the gap mattered. The arrival of vibe coding as a viable practice tracks a specific technical curve, and the curve is worth naming exactly because the dates explain why people who tried this in 2023 came away unimpressed and people who try it in 2026 cannot stop talking about it.

The relevant milestones, with the actual dates:

1. November 2022 -- ChatGPT launches with GPT-3.5

Context window of roughly 4,000 tokens, which is about three pages of code. Useful for snippets and explanations. Useless for understanding a real codebase. Productive use cases were limited to autocomplete-style tasks and quick one-off scripts. The first generation of AI coding hype came from this moment, and most of it was wrong because the technology was not yet capable of what people were claiming.

2. March 2023 -- GPT-4 arrives at 8K-32K tokens

An order of magnitude more context, and noticeably better reasoning. You could now hold a single file in mind. The phrase "AI pair programmer" started to mean something. Tools like GitHub Copilot started to feel less like fancy autocomplete. But the workflow was still snippet-by-snippet -- you had to manually feed the model the right slice of your codebase, and you got back a slice in return.

3. Mid-2024 -- Claude 3.5 Sonnet ships with 200K context

This is the threshold that mattered. Two hundred thousand tokens is not just "more pages." It is roughly the size of a small-to-medium application -- the source files, the schema, the config, the tests. For the first time, an AI agent could hold an entire project in working memory and reason about it as a coherent system. This is where vibe coding stopped being aspirational and started shipping real software.
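A rough back-of-envelope check on that claim, assuming the common rule of thumb of about four characters per token and an assumed average of sixty characters per line of source (both are approximations, not measurements):

```python
CHARS_PER_TOKEN = 4    # rough rule of thumb for English text and code
CHARS_PER_LINE = 60    # assumed average line length in a typical codebase

context_tokens = 200_000
approx_chars = context_tokens * CHARS_PER_TOKEN   # ~800,000 characters
approx_lines = approx_chars // CHARS_PER_LINE

print(approx_lines)  # on these assumptions, roughly 13,000 lines of code
```

A codebase in the low tens of thousands of lines is exactly the small-to-medium application described above, which is why this threshold, and not some earlier one, is where whole-project reasoning became possible.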

4. 2025-2026 -- Production agents go mainstream

Claude Code, Cursor in agent mode, Copilot's agent features, Codex CLI. Tools that do not just suggest code but actually open files, run commands, edit across the codebase, run tests, read the output, and iterate. The interface stops being a chat box and starts being a session in your terminal or editor. Multi-file changes happen routinely. The agent reads its own mistakes and fixes them. This is where the productivity numbers stop being marketing and start being measurable.

The arc from 4K to 200K is not just a number getting bigger. It is the difference between an assistant that can answer a question about your code and an assistant that can build a feature in your code. The first is interesting. The second changes who can ship software.

50x -- context window growth, 2022 to 2026
~60% -- time a vibe coder spends reviewing AI output versus writing prompts
~95% -- share of typing now done by the agent in a typical session

One thing the timeline obscures: the change felt slow at first and then suddenly fast. People who used GPT-4 in 2023 and concluded "AI cannot write real software" were not wrong about 2023. They were wrong to extrapolate. The capability curve is not linear, and the threshold of "useful for snippets" to "useful for entire applications" was crossed in roughly eighteen months. If your mental model of AI coding came from 2023, it is now stale. Update it.

What Changes in the Human Role

The human role does not disappear. It moves. Specifically, it moves up the stack toward decisions and away from execution, and that move has consequences for which skills suddenly matter more and which suddenly matter less. The framing here is opinionated and worth disagreeing with if you think it is wrong, but the pattern is consistent across the practitioners actually shipping software this way.

Traditional Software Development

One person plays implementer, debugger, designer, reviewer, and deployer. The bottleneck is typing speed, syntax recall, framework knowledge, and willingness to grind through edge cases. You build what you can build because you have to type every line. The shape of the work is constrained by the shape of one human's typing capacity over an eight-hour day.

Vibe Coding

One person plays architect, reviewer, decision-maker, prompt-engineer, and debug-director. The bottleneck is judgment -- knowing what to build, recognizing when the build is wrong, choosing the right scope, and refusing the lazy answer. You build what you can decide on, because the typing is delegated. The shape of the work is constrained by the shape of one human's judgment over an eight-hour day.

Skills that move up in importance

Product thinking. The single biggest skill that gets revalued. When you can build anything in roughly an order of magnitude less time, the question of what to build becomes the dominant constraint. Vibe coders who do not know what users want produce a lot of well-built software no one wants. The bottleneck moves from execution to taste-and-judgment, and product thinking is the largest component of that bundle.

System architecture. AI agents have opinions about architecture, but the opinions are statistically median. Left to its own defaults, an AI will pick the most popular framework, the most common database, the most generic auth pattern. That is sometimes correct and sometimes catastrophically wrong for your use case. Architectural judgment -- choosing the right framework for the job, drawing the right interface boundaries, knowing when to use a queue and when to use a cron -- is the human's job and the agent will not do it for you.

Code review. This one surprises people. The skill of reading a diff and asking "is this actually good?" used to be a senior-engineer thing that junior engineers grew into over years. In vibe coding, every developer is a reviewer all day every day. The agent produces code; you decide whether it ships. If you cannot tell the difference between code that works and code that works but is a mess, the agent's output will silently degrade your codebase. Review skill is the floor for being any good at this.

Debugging mindset. The agent will get stuck. When it gets stuck, the human has to recognize the kind of stuck, narrow down the actual problem, and direct the agent toward the fix. This is not the same as letting the agent debug for you -- the agent does not always know it is wrong. The human's debugging skill is the difference between a fifteen-minute fix and a three-hour rabbit hole.

Taste. The hardest one to teach and the most important one to have. Taste is the ability to look at something and know it is bad before you can articulate why. In vibe coding, taste is what stops you from accepting the first thing the agent produces. The agent will give you something plausible. Plausible is not the same as good. Taste is what separates the vibe coders shipping memorable products from the ones shipping forgettable ones.

Skills that move down in importance

Typing speed. Once a top-tier signal of senior engineering, now mostly irrelevant. The agent types faster than you do. There is no scenario where your typing speed is the constraint.

Framework memorization. Knowing the exact name of every Next.js hook, every Tailwind class, every Postgres function -- this used to take years to accumulate and was a meaningful productivity edge. The agent has all of it memorized. Your memorization adds nothing on top.

Syntax recall. Whether you can write a closure in JavaScript without looking it up, whether you remember the precise async/await syntax in Python, whether you know the difference between map and forEach off the top of your head. Useful for code review still, but no longer a productive edge in writing code.

Stack-overflow proficiency. The ability to find the right answer to a specific error message used to be a meaningful skill. The agent does this faster and the answer is delivered as code rather than as a snippet you have to adapt. The skill has not vanished, but it has been mostly absorbed.

None of these skills become useless. You will still benefit from knowing your tools. But the curve of "how much value does each additional hour of skill development produce" has flattened on the execution side and steepened on the judgment side. Spend your hours accordingly.

What Stays the Same

Here is the place to be opinionated, because the loudest discourse around vibe coding gets this wrong. A common claim is that AI agents have made engineering knowledge irrelevant -- that anyone can ship software now, regardless of background. That claim is wrong on its face, and the examples used to support it tend to be either toy projects or things that quietly fall apart under real use.

What actually stays the same: the irreducible core of engineering judgment. There are things the agent does not know and cannot acquire on its own -- things only you bring to the table.

Takeaway

Vibe coding does not eliminate the need for engineering knowledge. It shifts where that knowledge applies. The hours you used to spend writing code, you now spend reviewing code, deciding what to build, and noticing when the agent is wrong. If you have no engineering knowledge, you have no review skill, and the agent's output will silently destroy your codebase.

Taste. AI defaults are statistically median because they are trained on the median of human-written code. Every popular framework is overrepresented. Every clichéd pattern is overrepresented. Without taste, your output is generic, and generic software does not stand out. The market does not reward generic software. It rewards software that feels like someone made specific choices and meant them.

Domain knowledge. The agent does not know your users. It does not know your industry's regulations. It does not know which features matter and which are vanity. It does not know that the third-largest customer is about to churn unless you ship a particular integration by next quarter. Domain knowledge is the input that turns generic capability into specific value, and the agent has no way to acquire it on your behalf.

Product instinct. Knowing what to build next. Knowing when to ship and when to keep cooking. Knowing which feature is a real customer need and which is one loud user pretending to be the market. The agent has no instincts. It has no preferences. It has no skin in the game. The instinct is yours, and it is the most valuable thing you bring.

Review floor. The ability to tell when the agent is wrong. This deserves its own paragraph because it is the most underrated skill in the whole bundle. The agent will, occasionally, produce code that looks fine, runs fine, passes the tests, and is wrong. Wrong in a subtle way. Wrong because it solved an adjacent problem instead of the actual problem. Wrong because it picked the right pattern in the wrong context. Catching that requires understanding the code well enough to argue with it, and that understanding is built up over years of writing code by hand. There is no shortcut yet. The day there is one will be the day vibe coding stops requiring engineering background, and that day is not today.

The opinionated position: vibe coding rewards the practitioners with the most engineering knowledge, not the least. The marginal value of an experienced engineer using a coding agent is higher than the marginal value of an inexperienced one. The agent multiplies what you bring. If you bring nothing, you get nothing back. If you bring twenty years of system design instinct, you get twenty years of system design at the speed of typing. The shift is not democratizing in the trivial sense. It is amplifying in the meaningful sense.

The Economic Shift -- One Person Ships Products

The practical consequence of all this is that one person can now ship the kind of software that used to take a team. Not in every domain, not for every product, but for a meaningful and growing slice of what gets built. The economic implications are large and worth thinking through directly.

Consider a small SaaS product -- a hypothetical CRM for boutique fitness studios, a hypothetical scheduling tool for tutors, a hypothetical subscription analytics dashboard for indie developers. The kind of thing that, in 2020, would have been a four-person team for six months to ship a credible v1. Marketing site, auth, billing, core product, dashboard, admin tooling, basic analytics, deployment pipeline, monitoring. Each of those used to be a meaningful chunk of work. Now, with a competent vibe coder and a serious AI agent, that whole bundle is a one-person three-week effort. Sometimes faster.

The marginal cost of a feature drops to maybe a tenth of what it was. A feature that used to be a week of work for a senior engineer is now a day. A feature that used to be a day is now an hour. The proportions hold across the whole stack. The implication is that the surface area of what one person can produce expands dramatically, and the cost-benefit math on which projects to even attempt shifts in the same direction.

1 person -- team size to ship a credible v1 SaaS product, down from a typical 3-5 in 2020
1-3 weeks -- realistic time-to-launch for a small-to-medium greenfield SaaS, down from 3-6 months
~1/10 -- marginal cost of an additional feature, compared to traditional development

Three implications worth naming.

First, team sizes shrink for greenfield work. The four-person team that used to be the right shape for a small product now overshoots. You can ship faster with one person plus an agent than with four people coordinating. Coordination cost was always a tax, and that tax is now visible because the alternative is plausible.

Second, the threshold for what counts as "viable to build" drops sharply. A side project that would have taken six weekends now takes one. An internal tool that would not have justified a developer's time now does. A market that was too small to support a real product now can be served. You will see a proliferation of small, specific, well-built tools over the next few years, made by individuals serving niches that were previously below the floor of what a team could profitably address.

Third, the kinds of products that get made will shift. Software with high customization needs becomes more attractive because customization is cheap. Software with narrow audiences becomes more attractive because the development cost is recoverable. Software that competes on craft and taste becomes more attractive because craft is now affordable for indie shops. The shape of the software market follows the shape of what is cheap to make, and what is cheap to make is changing.

One important caveat. This shift is real for greenfield work in the small-to-medium scope range. It is partial for enterprise work, for legacy maintenance, for high-stakes systems. A solo developer with a coding agent cannot replace a fifty-person engineering org running a payments platform with a twenty-year codebase, dozens of integrations, and regulatory obligations measured in dollars per compliance violation. The economics there are different and the productivity multiplier is much smaller. The bigger and older and more constrained the work, the less of the curve applies. Stay calibrated.

Common Misconceptions

The discourse around vibe coding has produced a stable list of recurring misconceptions, each of which is worth addressing directly because each one steers people away from a useful practice. The pattern across all of them is the same: a partial truth gets generalized into a full claim, and the full claim is wrong.

"Vibe coding is lazy coding"

Already addressed above, but worth restating. The cognitive load is real, just shifted. The activity that vibe coding replaces is the typing. The activities it adds, or amplifies, are review, decision-making, prompting, and architectural thinking. Those are not less effortful than typing. They are arguably more so, because they require constant active engagement rather than the semi-automatic flow state that experienced coders enter when typing familiar patterns. Anyone who claims this is the lazy option has not done it for a full week.

"AI writes the code, you just press tab"

This was nearly true in 2023 with autocomplete-style tools. It is no longer the right description. In 2026, the workflow is conversational and iterative. You tell the agent what to build. It proposes a plan. You correct the plan. It writes the code. You read the diff. You catch issues. You ask for changes. It updates. You run tests. They fail. You debug together. Eventually you ship. The "just press tab" mental model is a holdover from an earlier generation of tools. The current workflow is closer to working with a fast junior engineer who never tires than to autocompleting your way through a function.
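The loop reads abstract until you see it, so here is a condensed, hypothetical session transcript -- the feature, file names, and agent responses are all invented for illustration:

```
> add CSV export to the invoices list
  agent: proposes a plan -- a new API route, a download button, a shared formatter
> good, but reuse the existing date formatter instead of adding a new one
  agent: edits the route and the component, runs the tests
  agent: one test fails on empty invoice lists
> handle the empty case by returning a file with headers only
  agent: fixes the edge case, tests pass
> ship it
```

Four turns, one course correction, one debugging exchange. Real sessions run dozens of these.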

"It only works for toy projects"

This is the misconception held by engineers who tried AI tools in 2023 and have not updated their priors. It was true then and is no longer true now. Real, in-production software is being shipped by individuals with coding agents in 2026. The kinds of software that work well: SaaS products with conventional CRUD shapes, content sites, internal tools, prototypes that grow into products, integrations with well-documented APIs, marketing sites, dashboards, mobile apps with established patterns. The kinds that work less well are addressed in the next section. But the claim that this is only good for toys does not survive contact with the current generation of tooling.

"AI-generated code is buggy and unmaintainable"

It can be. If review is sloppy, if the prompts are vague, if the human is just nodding through diffs, the resulting codebase will be a mess. This is not a property of AI-generated code. It is a property of unsupervised AI-generated code. The same mess shows up when junior engineers write code without senior review. The fix is the same: review with care, refactor when warranted, push back when the agent picks the wrong abstraction. The agent will accept your pushback and produce better code. The output quality is downstream of the review quality, not the agent quality.

"It will replace developers"

The framing is wrong. Vibe coding changes the role. It does not eliminate it. The developers who adapt -- who treat the agent as a productivity multiplier and develop the judgment skills the new workflow requires -- become more valuable, not less. The ones who do not adapt face the same fate as any practitioner whose primary skill becomes commoditized: a slow erosion of bargaining power. The honest take is that the role of "person who turns ideas into working software" remains and is more amplified than ever; the role of "person who types code from spec into editor" is shrinking and was never the highest-paid part of the job anyway.

The pattern across these misconceptions

Each misconception is a 2023 truth applied to 2026 capability. The tools have changed faster than the discourse, and a lot of confident takes from technical people are running on stale data. If your model of AI coding is more than eighteen months old, it is wrong about something important. Update before judging.

Who Vibe Coding Works For (And Who It Does Not)

The honest version of this practice acknowledges its boundaries. There are domains where vibe coding is the obvious right answer, domains where it is workable with care, and domains where it is the wrong tool entirely. Knowing which is which is the difference between someone who has internalized the discipline and someone who has read about it on a blog.

Where it works well

Solo founders building v1 of a product. The single best fit. Greenfield, small team (one), urgent timeline, conventional shape (CRUD app, marketing site, basic dashboard). Every advantage of vibe coding compounds. The agent's productivity multiplier is highest when there is no legacy code to navigate, no team to coordinate with, and no pre-existing architecture to respect against the agent's defaults.

Small teams shipping fast. Two-to-five-person startups, especially those in the build-iterate-launch loop, get most of the same benefit. The agent absorbs a meaningful share of the typing across the team and frees up the engineers for design, review, and harder problems. Coordination cost is still a tax, but the per-person multiplier holds.

Prototype-to-production paths. The classic "build a thing in a hackathon, validate with users, then ship a real version" arc benefits enormously. The prototype is fast because the agent handles the tedium. The transition to production is fast because the agent can refactor the prototype's shortcuts into real systems. The whole loop tightens.

Side projects and indie tools. The economic case for building a tool just because you want it changes when the cost is one weekend instead of six. A whole class of tools that previously did not get built now do. The quality of these tools also goes up, because the same person who would have shipped a hacky prototype now has time to make it nice.

Internal tools. Companies have always under-invested in internal tooling because the ROI calculation rarely supported a real engineer's time. With the cost of building dropping by an order of magnitude, the math works for a much wider set of internal needs. Expect a quiet productivity boom inside companies that figure this out.

Content and marketing site builds. The shape is well-understood, the patterns are conventional, the SEO requirements are well-documented, and the agent has seen ten thousand examples. This is one of the highest-payoff applications, and it is one place where the productivity multiplier can exceed the headline 50x because so much of the work is in well-trodden patterns.

Where it works less well

The fit spectrum runs from a clear fit zone, through mixed-but-workable territory and edge cases, to domains to avoid outright.

Deeply regulated systems. Medical devices, financial trading systems, aerospace control software. The cost of a wrong line of code is measured in lives or dollars at scale. Review processes are formalized and slow. Compliance requirements demand traceability that current AI tooling does not naturally produce. The shift here will come, but slowly, and it will look different from the indie-developer version of vibe coding. Until then, the right approach in these domains is heavy human review of every line, formal verification where possible, and a strong default toward writing the critical path by hand.

Performance-critical primitives. High-frequency trading engines, embedded real-time systems, kernel-level code, anything where nanoseconds matter or where memory layout has to be precise. The agent's defaults skew toward readable and conventional, not toward optimal. You can sometimes get the agent to produce performant code, but verifying that it is actually optimal requires the same expertise that would let you write it by hand, and the agent does not save you that work. For this domain, the agent is a useful research and review partner, not a primary producer.

Security-critical core. Cryptographic primitives, authentication and session management, anything where a subtle bug becomes a vulnerability. The agent will produce code that looks correct and may have a side-channel timing leak, an off-by-one in a buffer, or a subtle confused-deputy issue. These are bugs that experienced humans miss too, but the rate matters and the cost of missing them is high. Use established libraries for the primitives. Use the agent for the wiring between them. Do not vibe-code your own crypto.
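As an illustration of "established primitives, agent-assisted wiring," here is a minimal password-hashing sketch in Python that leans entirely on standard-library primitives. The function names and the work factor are illustrative choices, not a vetted security recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative PBKDF2 work factor; tune for your hardware


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with PBKDF2-HMAC-SHA256, an established primitive."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

The point is the division of labor: PBKDF2 and the constant-time comparison come from the library, and the agent can safely write the glue around them. What you do not let the agent do is invent the primitive itself.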

Novel research. The agent is trained on existing patterns. Truly novel work -- the kind where the right answer is not in any existing repository -- is where the agent's defaults work against you. It will keep proposing patterns from prior work that do not fit. You can sometimes use it to scaffold or to handle the boring parts, but the core insight cannot come from the model. It has to come from you.

Legacy enterprise codebases with deep tribal knowledge. Twenty-year codebases with undocumented invariants, weird historical decisions that turn out to encode real constraints, and small armies of engineers who know which file to never touch on a Friday. The agent does not know any of that. It will confidently propose changes that violate constraints no one wrote down. Working in these codebases with an agent requires an experienced human who knows the tribal knowledge and can override the agent's defaults constantly. The productivity multiplier is much lower here, sometimes negative if the human is inexperienced.

The boundaries are not fixed. Each year, the agents get better, the tooling gets sharper, and some things that were edge cases in 2026 will be in the fit zone by 2028. But the rate is uneven, and the responsible position right now is to understand the lines clearly rather than pretend they do not exist. The limits get a deeper treatment elsewhere in this curriculum, and they are worth taking seriously.

The Tool Landscape, Briefly

A page about vibe coding that did not name the tools would be incomplete, so a quick survey. The deeper comparison lives in another topic; here is the orientation.

Claude Code is the coding agent from Anthropic. It runs as a CLI in your terminal, opens a session in your project, and operates with full access to your file system and shell. It is the tool that most directly embodies the workflow described in this page -- a long session with the agent doing the typing, the human doing the directing, and the agent able to read its own output, run tests, and iterate without constant babysitting. Claude Code is built on the Claude family of models -- Opus for the heaviest reasoning, Sonnet as the workhorse, Haiku for the fast and cheap path. The 200K context window is the same one mentioned earlier in the timeline, and it is the floor of what makes a real coding agent possible.

# A typical session opener
$ claude
> build me a contact form component with email validation
> use the existing Form primitives from src/components/ui
> tests live in __tests__

Cursor is an editor (forked from VS Code) with an integrated agent. It works inside the editor rather than in a separate terminal session, which suits people who think in editor windows rather than terminal sessions. The agent has gotten significantly more capable through 2025 and 2026 and is a credible alternative for people who prefer that interaction model.

GitHub Copilot has agent mode now, in addition to its original autocomplete-style suggestions. It is integrated into VS Code and into the GitHub web interface, and the workflow leans heavily on the GitHub-native parts of your stack -- pull requests, issues, code review. If your team already lives in GitHub, the surface area is convenient.

OpenAI Codex CLI is the OpenAI entry in the terminal-agent space. The capability is real and the workflow is similar to Claude Code at a high level.

The honest take: pick the tool whose interaction model fits how you actually work, and pick the model whose capability you trust for the work you do. The differences between the top tools are smaller than the differences between using any of them and not. If you are starting fresh, Claude Code with a Claude model is the recommendation -- the agent quality is high, the context window is enormous, and the workflow assumes serious work rather than autocomplete. But Cursor users ship serious software too. Copilot users ship serious software too. The discipline is what carries the result; the tool is the lever.

A Working Mental Model

If you are coming to vibe coding from a traditional engineering background, the mental model that works best is "I am the senior engineer; the agent is the team." You do code review. You do architecture. You set the direction. The agent does the implementation, the boilerplate, the test scaffolding, the file edits, the refactors. You correct it when it picks a bad pattern. You override it when it suggests something that does not fit. You teach it about your codebase as the session progresses, by writing instruction files and by leaving good code for it to learn from.

Human direction → AI execution → Human review → Decision: ship or iterate

The loop is short and runs many times in a session. Direction goes in. Execution comes back. Review happens. Either you ship or you iterate. The skill is in the direction-and-review parts. The agent gets better at execution every six months. Your job is the part that does not get faster with model upgrades, and that part is judgment.

One sub-skill worth highlighting: the practice of writing for the agent. Most projects benefit from a project-specific instruction file -- a description of the codebase, the conventions, the architectural choices, the things to avoid. Future topics in this curriculum cover that practice in depth. For now, the point is that part of vibe coding is teaching the agent your context, and the teaching pays off over the lifetime of the project. An agent that knows your codebase produces better code than an agent that has to figure it out fresh every session.
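As a sketch of what such a file can contain -- every detail below is illustrative, invented for this example rather than taken from any real project -- an instruction file is usually a short markdown document at the project root:

```
# Project notes for the agent (illustrative example)

## Stack
- Next.js app router, TypeScript strict mode
- Postgres via Prisma; migrations live in prisma/migrations

## Conventions
- Components go in src/components, one component per file
- Tests sit next to the code as *.test.ts
- Named exports only, no default exports

## Avoid
- Do not add new dependencies without asking
- Do not touch the auth module without flagging it in your summary
```

The exact sections matter less than the habit: whatever the agent keeps getting wrong in your sessions is what belongs in the file.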

The shape of a productive session

If you have never sat through a serious vibe coding session, the rhythm is worth describing. A productive session usually opens with orientation. You tell the agent what you are about to do, point it at the relevant files, and either let it read on its own or feed it the specific context it needs. Skipping orientation is one of the most common mistakes. An agent that does not know where the auth lives will reinvent it. An agent that does not know your test conventions will scaffold tests in a style that does not match the rest of the suite. Five minutes of orientation saves an hour of cleanup.
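A sketch of what orientation can look like at the top of a session -- the task and file paths here are invented for illustration:

```
$ claude
> we're adding password reset to the existing auth flow
> read src/auth/ before proposing anything -- session handling lives there
> email goes through src/lib/mailer.ts; use it, don't add a new client
```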

The middle of the session is iterative. You give the agent a chunk of work, you read what comes back, you correct what is wrong, you accept what is right, you ask for the next chunk. Chunk sizes that work in practice are smaller than people expect. Asking the agent to "build the entire dashboard" in one turn produces output you cannot review carefully. Asking for "the table component, with sorting and filtering, no row selection yet" produces output you can actually grade. Smaller chunks compound into bigger wins because each chunk is reviewable.
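The difference in chunk size is easiest to see side by side; both prompts below are invented examples:

```
# Too big to review in one pass:
> build the entire admin dashboard with charts, tables, and exports

# Reviewable in one pass:
> build the table component for the admin dashboard: sorting and
> filtering, no row selection yet, follow the existing Table
> primitives in src/components/ui
```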

The end of a session is consolidation. You ask the agent to summarize what changed, run the tests, fix anything broken, and commit. You write a short note for your future self about where the work stopped and what is still pending. The agent can write that note for you, and it is worth a minute of the agent's time because it makes the next session start ten minutes faster. Good session hygiene is a quiet productivity multiplier on top of everything else.
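Consolidation can be as simple as a closing exchange like this one (the wording and the NOTES.md file name are illustrative):

```
> run the test suite and fix anything that broke
> summarize what changed this session and what's still pending
> write that summary to NOTES.md, then commit with a message
> that describes the actual changes
```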

Prompting style that holds up

Two common prompting failures kill productivity in this practice, and naming them both saves a lot of pain. The first is the under-specified prompt. "Add a login form" produces a generic login form that almost certainly does not match your existing patterns. The second is the over-specified prompt -- a wall of text that tries to anticipate every detail and ends up locking the agent into one solution before it has even seen the codebase. The middle path is a short, specific prompt that names the goal, points at the relevant files, and trusts the agent to ask if it needs more.
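The two failure modes and the middle path look roughly like this; all three prompts are invented for contrast:

```
# Under-specified -- the agent guesses your conventions:
> add a login form

# Over-specified -- locks in one solution before the agent reads the code:
> add a login form using react-hook-form with a zod schema, error
> toasts, a custom useLogin hook, ... [forty more lines of detail]

# The middle path -- goal, pointers, room to ask:
> add a login form that matches the signup form in src/app/signup,
> same validation approach; ask if the session handling is unclear
```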

The phrase that earns its keep more than almost any other in vibe coding is "follow the patterns already in the codebase." Said early in a session, it heads off most of the agent's worst defaults. The agent will read your existing code, infer the conventions, and produce work that fits in. Without that instruction, the agent picks defaults that reflect the median of its training data, and the median is rarely your codebase.

A Note on Quality

One concern that keeps coming up, especially from engineers who have not yet tried this seriously: does vibe coding produce worse software? The answer is "it depends, and not for the reason you think."

Vibe-coded software is not inherently worse than hand-written software. It is software written by an agent under human direction, and the quality tracks the quality of the direction. A careful vibe coder produces software that is at least as good as their hand-written work, and often better, because the agent is willing to add tests and documentation and small refactors that the human would have skipped under deadline pressure. A sloppy vibe coder produces sloppy software, the same way a sloppy hand-coder does, just faster.

The actual quality risk is more subtle. The agent's defaults bias toward conventional patterns. If you accept the defaults uncritically, your software will look like the median of what is on GitHub. That median is fine for most purposes and is genuinely good in many places. But for products that need to feel distinctive, the defaults are the enemy. You have to push back. You have to ask for the unconventional choice when the unconventional choice is right. You have to refuse the boilerplate component and ask for the custom one. The agent will do excellent custom work; it just does not default to it.

This is one of the places where taste matters most. If you cannot tell the difference between a generic implementation and a tasteful one, the agent will not tell you either, and your software will inherit the generic shape. If you can tell the difference, you can direct the agent toward the tasteful version, and the agent will produce it. The taste is yours; the typing is the agent's. That division of labor is the whole game.

The Honest Limitations

To close on a calibrated note rather than a hyped one: vibe coding is not magic. The agent makes mistakes. The session sometimes goes sideways and you have to start over. The model occasionally hallucinates an API that does not exist or remembers a deprecated pattern. The cost of these failures is real and adds up. Anyone selling a friction-free vision of this practice is either inexperienced or selling something.

The honest framing: vibe coding is dramatically faster than hand-coding for greenfield work in the conventional fit zone. It is meaningfully faster in adjacent zones. It is sometimes break-even or slower in zones it does not fit. The productivity numbers people quote -- the 10x and 50x figures -- are real and reproducible in the right conditions, but they are not the average across all conditions. Calibrate to your actual work, not to the headline number.

The discipline matters more than the speed. A serious vibe coder produces software that ships and works. A casual one produces software that demos well and falls over in production. The line between the two is the discipline of review, the standards for what you accept, and the willingness to push back on the agent when its first answer is wrong. That discipline is teachable, but it is not automatic, and it is the thing the rest of this curriculum is built to teach.

Vibe coding is a tool, not an ideology. The shift it represents is real but not infinite. The discipline matters more than ever -- it just looks different than it did in 2020. If you work in the fit zone, learn to do it well. If you work outside the fit zone, know where the boundaries are and respect them. And if you are trying to figure out which side you are on, the honest answer is usually "both, depending on the project," and the discipline is in knowing which mode you are in at any given moment.

The rest of the topics in this curriculum unpack the specifics. Prompt engineering as a real craft. Agent instruction files and how to write them well. Context engineering, which is the art of giving the model what it needs and nothing more. Tool comparisons, model selection, debugging an agent that has gone off the rails. The frame is set here. The depth lives in the topics that follow. Read on if you want to get good at this rather than just good at talking about it.