Over the last stretch, I have been building an Angular playground with GitHub Copilot next to me for most of the work.

The honest summary is this:

  • AI can absolutely write a lot of code for me.
  • AI can teach me APIs faster than most documentation journeys.
  • AI can remove a huge amount of implementation friction.

And yet the final quality still depends a lot on developer judgment. I used to think vibe coding meant “move fast and let the model figure it out.” Now I think vibe coding works best when it is paired with clear standards, strong product taste, and a willingness to intervene early.

The playground app is intentionally simple: a single index page that lists every demo case and links to each one. It is built with Angular 21.2.7, with Tailwind CSS and Angular Material used for layout and UI components. I use it as a sandbox to learn and experiment with new Angular features. The source code is here, and you can see the final project live here.

This post is my attempt to document what I learned from that process.

What AI Did Exceptionally Well

Copilot was a real force multiplier for speed. It helped me:

  • scaffold many demo pages quickly
  • compare old and new Angular APIs
  • translate rough ideas into runnable code
  • iterate design and UX without huge context switching
  • explain tradeoffs when I asked focused questions

For learning modern Angular features, this was especially useful. I could ask for examples, then immediately ask for caveats, then ask for a version that better fit my codebase and constraints. That loop is hard to beat.

Where AI Alone Was Not Enough

The code usually compiled. That was rarely the real challenge.

The challenge was shaping the codebase so it stayed readable, predictable, and maintainable after many iterations, then moving from “it works” to “this is how I actually want this repo to evolve.”

Example 1: Enforcing OnPush as a Team Standard

Generated components do not always reflect your architectural defaults unless you enforce them. I wanted ChangeDetectionStrategy.OnPush to be the standard approach, not as a one-off preference but as a repeatable rule.

I did not want this to live only in prompts, so I pushed that expectation into project-level mechanisms:

  • generation/config defaults where possible
  • linting and instruction-level guidance

This is one of the biggest lessons I took away: if a rule matters, encode it. Do not rely on memory, and do not rely on correcting the model over and over.
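
As a concrete sketch of the first mechanism, the Angular CLI lets you set schematic defaults in angular.json so every generated component starts with OnPush (the project name here is illustrative); on the linting side, @angular-eslint ships a prefer-on-push-component-change-detection rule that catches stragglers:

    {
      "projects": {
        "playground": {
          "schematics": {
            "@schematics/angular:component": {
              "changeDetection": "OnPush"
            }
          }
        }
      }
    }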

Example 2: Reducing API Noise

I also cared about reducing unnecessary verbosity in component APIs, including access modifier noise when it was not adding value.

AI happily generates valid code either way. But valid is not the same as clear.

When I explicitly asked for less ceremonial code and a cleaner public surface area, readability improved quickly. In a demo-heavy repository, that kind of small clarity compounds.
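
To illustrate the kind of cleanup I mean, here is a before/after sketch (the component is a made-up example, not code from the repo):

    import { ChangeDetectionStrategy, Component, signal } from '@angular/core';

    @Component({
      selector: 'app-counter',
      template: `<button (click)="increment()">{{ count() }}</button>`,
      changeDetection: ChangeDetectionStrategy.OnPush,
    })
    export class CounterComponent {
      // Before: `public readonly count: WritableSignal<number> = signal(0);`
      // `public` is already the default and the type is fully inferred, so the
      // extra modifiers and annotations add ceremony without adding information.
      readonly count = signal(0);

      increment() {
        this.count.update(v => v + 1);
      }
    }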

Example 3: Simplifying Routing So New Demos Stay Cheap

Initially, every new demo risked adding route boilerplate. So I asked Copilot to simplify the pattern and moved to a catalog-driven approach, where a generic demo route handles new examples by slug. The result:

  • fewer route touchpoints per new demo
  • lower onboarding cost for contributors
  • less chance of route drift over time

AI generated plenty of page code quickly. I still had to choose the architecture that would keep that speed sustainable instead of turning into maintenance debt.
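
Here is a minimal sketch of the pattern, with file and symbol names that are illustrative rather than the repo's actual code:

    // demo-catalog.ts
    import { Type } from '@angular/core';
    import { Routes } from '@angular/router';

    interface DemoEntry {
      slug: string;   // becomes the URL segment
      title: string;  // shown on the index page and used as the route title
      loadComponent: () => Promise<Type<unknown>>;
    }

    const DEMO_CATALOG: DemoEntry[] = [
      {
        slug: 'signals-basics',
        title: 'Signals basics',
        loadComponent: () =>
          import('./demos/signals-basics.component').then(m => m.SignalsBasicsComponent),
      },
      // Adding a demo means adding one entry here; no new route code.
    ];

    // The catalog is the single source of truth for routing.
    export const routes: Routes = DEMO_CATALOG.map(demo => ({
      path: demo.slug,
      loadComponent: demo.loadComponent,
      title: demo.title,
    }));

The index page renders its list by mapping over the same catalog, which is what keeps each new demo a one-entry change.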

How I Now Use Copilot More Effectively

Prompt quality and constraint quality matter much more than prompt length. These patterns gave me better outcomes consistently.

1. Ask for outcomes, not just outputs

Weak:

Add a router demo for currentNavigation.

Better:

Add a real workflow demo where currentNavigation and lastSuccessfulNavigation solve a practical UX problem. Keep URLs clean. Avoid toy interactions.

The second prompt gives the model product intent, not just API intent.
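
To make "product intent" concrete, the better prompt tends to produce something like the following sketch, where the previous navigation drives a real "back" affordance (the component and its UX are my illustration, not the repo's code):

    import { ChangeDetectionStrategy, Component, inject } from '@angular/core';
    import { Router, RouterLink } from '@angular/router';

    @Component({
      selector: 'app-back-link',
      imports: [RouterLink],
      template: `
        @if (previousUrl; as url) {
          <a [routerLink]="url">Back to previous page</a>
        }
      `,
      changeDetection: ChangeDetectionStrategy.OnPush,
    })
    export class BackLinkComponent {
      // This field initializes while the routed tree is being built, i.e. while
      // the navigation is still in flight, so getCurrentNavigation() is non-null
      // and previousNavigation points at the page the user came from.
      readonly previousUrl =
        inject(Router).getCurrentNavigation()?.previousNavigation?.finalUrl?.toString() ?? null;
    }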

2. Include non-functional expectations

I now state constraints like this:

  • readability over cleverness
  • avoid deprecated APIs
  • follow repo conventions
  • keep future maintenance cost low

Without this, AI can still generate technically correct code that slowly degrades consistency.

3. Ask for diagnosis before rewrite

When something breaks, I now ask for three things in order: the root cause, the smallest reliable fix, and why that fix is likely to hold up.

This prevents wide, noisy edits and keeps diffs understandable.

4. Turn repeated feedback into durable assets

If I repeat a preference twice, I promote it into tooling or repository guidance:

  • prompt files
  • lint/config rules
  • .github/copilot-instructions.md

This is one of my favorite workflow upgrades. I can even ask Copilot to summarize my expectations and non-negotiable rules, then generate those instruction and prompt files so future sessions start with much better alignment.

That shifts effort from repeated correction to reusable guardrails.
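
For example, a few lines of the kind I keep in .github/copilot-instructions.md (an illustrative excerpt, not my exact file):

    # Copilot instructions for this repo
    - New Angular components use ChangeDetectionStrategy.OnPush.
    - Prefer standalone components and the built-in control flow syntax.
    - Omit redundant `public` modifiers and inferable type annotations.
    - Register new demos in the demo catalog instead of adding routes by hand.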

What “Vibe Coding” Means to Me Now

My old definition was basically:

  • speed first
  • let AI drive

My current definition is closer to this:

  • move fast, but steer hard
  • let AI accelerate implementation, not replace judgment

AI is an amplifier. It amplifies whatever process you bring to it:

  • weak constraints -> faster inconsistency
  • clear standards -> faster quality

Practical Checklist I Use Before Calling a Change “Done”

  1. Is the architecture cleaner than before, not just larger?
  2. Did I encode important preferences into rules, not comments?
  3. Are naming, routing, and API surfaces easy for a new teammate to follow?
  4. Do editor workflows and terminal workflows both work?
  5. Did I optimize for maintainability, not only delivery speed?

If I cannot say yes to these, I keep iterating.

Cost and Token Discipline Is a Real Skill

Another lesson that surprised me: not all AI requests cost the same, either in tokens or in premium request usage. Some questions are cheap. Some are expensive. Deep context, long threads, repeated retries, and broad “rewrite everything” prompts can consume premium requests much faster than expected.

I noticed that experienced developers often spend fewer premium requests for the same outcome, because they:

  • define constraints early
  • ask for smaller scoped changes
  • diagnose first, then patch
  • encode repeated standards into repo instructions

Newer developers can unintentionally burn a lot more tokens by using trial-and-error prompting, switching goals mid-thread, or repeatedly asking the model to rediscover the same project preferences.

The good news is that this is mostly fixable: token discipline is really workflow discipline.

Here are the habits that helped me reduce usage while keeping quality high:

  1. Start with a precise outcome and constraints.
  2. Request one focused change per turn.
  3. Avoid broad rewrites unless they are truly necessary.
  4. Ask for root cause before asking for a full solution.
  5. Turn repeated preferences into prompt files and copilot-instructions.md.
  6. Start a fresh chat when changing topics significantly.
  7. Ask for concise responses when deep explanation is not needed.

The mindset shift is simple: treat premium requests like engineering budget. Spend them where they create leverage, not where better scoping would have solved the same problem.

Mistakes I Made

  • I occasionally accepted generated code too quickly because it “worked.”
  • I sometimes optimized for immediate output over long-term shape.
  • I underestimated how often environment and tooling details decide success.

If I were starting again, I would define stricter standards earlier and automate them sooner.

Final Takeaway

AI-assisted development is not hype. It is already practical, and in the right hands it offers very high leverage.

But the best results did not come from giving up control. They came from using AI aggressively while keeping engineering decisions intentional. The skill I value most now is not “getting code from AI.” It is steering AI with architecture, constraints, and product taste so the software gets better, not just bigger.

That is the version of vibe coding I want to keep.