Let’s be honest: how nice would it be, for anyone who writes software, to sit at a desk with a good cup of tea, give the right instructions, and watch the code write itself?
My answer: incredibly nice.
Recently we’ve been witnessing something close to that scenario, thanks to rapid improvements in AI and in LLMs capable of generating code.
Out of this wave a new phenomenon has emerged: “vibe coding”.
What is vibe coding? 🤟
By “vibe coding” we mean a development workflow where a large portion of the code is written by AI. The developer’s role shifts: instead of reading documentation, writing every line, and testing everything manually, the focus becomes crafting clear prompts, providing context, setting constraints, and verifying the output produced by a “virtual programmer”.
In short: less typing, more directing.
When did vibe coding start? 🗓️
The term surfaced recently, in February 2025, when Andrej Karpathy, former Director of AI at Tesla, used it to describe a new approach to software development. The idea is to be guided by high‑level “vibes” and direction, while the AI executes the implementation, iterating on results until a satisfactory solution emerges.
Does a self‑driving car make us unable to drive? 🚗
Let me start with an analogy. In recent years, cars have learned to brake, steer, and accelerate on their own. In software, similarly, we’ve gotten AI assistants and copilots of all kinds.
Before, you read the docs, tried things, failed, optimized, and only after a fair bit of time (and a few curses at the screen) your code was ready for production. Today, many steps can be accelerated.
Does that make us less capable? I don’t think so. A self‑driving car doesn’t make us unable to drive, just like calculators didn’t make us unable to do math by hand. The real risk is different: for those starting today, all these aids can dampen curiosity and critical thinking if used without discipline.
If you already know how to “drive,” autonomy is a valuable support. But if you learn directly on a self‑driving car, you might never truly build the fundamentals.
Why I turned off AI autocompletion 💡
I’ve tested GitHub Copilot extensively across multiple models. After months of use, here are my takeaways:
- 💔 Struggles on large codebases: it often misses relationships and contracts between distant modules.
- 💀 Invasive autocomplete: sometimes invents attribute or function names that sound plausible but don’t exist (dangerous if your editor doesn’t flag them immediately).
- 💊 The “tab” dependency: you accept suggestion after suggestion to “go faster,” but risk lowering quality, security, and architectural coherence.
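The second point is the sneakiest: hallucinated names often look exactly like the real API. A minimal, hypothetical sketch of the pattern (the broken suggestion is shown as a comment; `add_days` is an invented method, not a real one):

```python
from datetime import date, timedelta

# A plausible-sounding autocomplete suggestion that does NOT exist:
# next_week = date.today().add_days(7)   # AttributeError at runtime

# The real standard-library API uses timedelta arithmetic instead:
next_week = date.today() + timedelta(days=7)
print(next_week)
```

If your editor or linter doesn’t flag the invented attribute immediately, you only find out at runtime, which is exactly the danger described above.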
For these reasons, I decided to disable autocomplete for every file type in the projects I work on.
How I use AI in my workflow ⚙️
Where AI really shines for me is the integrated chat (Copilot Chat, Cursor, etc.). I use it as a genuine development companion:
- ✅ I write the core logic and main tests.
- ✅ Then I ask the AI for a focused code review: performance, edge cases, security, readability, naming, error handling.
- ✅ I iterate on the feedback, validate with tests and static analysis tools, and only integrate what actually passes verification.
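To make the workflow concrete, here’s a hypothetical example of what I’d write by hand before opening the chat: a small function, its core test, and the kind of focused review prompt I’d attach (`normalize_email` and the prompt wording are illustrative, not from any real project):

```python
# Core logic and main test, written by hand before asking the AI for a review.

def normalize_email(raw: str) -> str:
    """Lowercase and trim an e-mail address."""
    return raw.strip().lower()

def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

test_normalize_email()

# Example review prompt for the AI chat:
# "Review normalize_email for edge cases (empty string, Unicode domains,
#  plus-addressing), error handling, and naming. Suggest extra tests."
```

The AI’s suggestions then go through the same gate as any other change: tests and static analysis first, integration only after they pass.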
It’s like discussing code with a colleague who’s always available and very quick at proposing alternatives and refactors.
How far can AI go 🔮?
I see two plausible scenarios.
- 📈 The hype deflates a bit: companies remember the value of human developers. AI remains a powerful tool, but supportive. Those who can model problems, design architectures, and wield tools well will win.
- 📉 AI keeps improving: it will understand business requirements with increasing precision and propose solid implementations, reducing the amount of “typing” in favor of orchestration, validation, and governance.
In both cases, skills like API design, data modeling, threat modeling, observability, testing, and human review stay central. They’re what turns “code that works” into “reliable software”.
Conclusions 🎯
I lean toward scenario 1: AI won’t replace us, but it is one of the biggest revolutions in computing in the last 20 years. Those who combine solid fundamentals, critical thinking, and the ability to steer AI will have a major edge.
If scenario 2 comes true, it’ll be time to learn how to make pizza 🍕… unless Tesla’s robots already make it better than I do.
