Lessons from Vibe Coding Three Apps in Three Weeks


While taking some paternity leave in a small village in the middle of Bulgaria, I used my baby’s nap times to dive deeper into vibe coding and see just how fast these AI tools can get you to real, production-ready apps. It led to a series of articles, LinkedIn posts, and product experiments, all focused on understanding and sharing my insights on the state of programming and product design that leverages AI.

In my previous article, “ColabWithMe: A GPT Specialized in Google Colab for Data Analysis & ML,” I talked about how generative AI is redefining the programming landscape. As the Harvard Business Review noted in “We’re All Programmers Now,” this shift represents more than just enabling non-technical employees to code. The real opportunity lies in developing multi-skilled professionals who can operate across domains, compressing innovation cycles from weeks to days. (I explore this further in “The Shape of AI Training: How Skill Profiles Guide AI Learning Paths.“)

Which brings me to what I actually built during those nap times—three different applications using a variety of AI tools. Rather than focusing on polished user interfaces, I focused on backend functionality and core business logic. I discovered that debugging the frontend and getting it to look how I wanted consumed far more time than implementing core backend features. So, many of these vibe-coded apps work nicely on the backend, but need more polish on the frontend. Let’s dig in.

Tools reviewed

Vibe coding means building software by describing what you want in natural language and letting AI generate the code. I tested tools across three categories to see how they enable this new way of building.

  • IDE-integrated agents (agents built into an integrated development environment): GitHub Copilot Agent, Gemini Code Assist, Claude, Cursor.
  • Conversational interfaces: ChatGPT, ChatGPT Codex, Grok, Claude.
  • Prompt app builders: Replit, Lovable, Bolt, GitHub Spark.

Project 1: TickGoals.com, AI-powered goal setting (Approximately 7 hours)

The first application tackled a common productivity challenge: transforming vague aspirations into actionable SMART goals. The system implements a conversational AI interface that guides users through goal refinement, then automatically generates structured milestones and tasks.

Key features:

  • Chat with AI to transform vague goals into structured SMART goals
  • Auto-generate actionable milestones and tasks based on your refined goals
  • To-do list interface for tracking progress and completion
  • Persistent goal storage with progress visualization

Tech stack:

  • React frontend for conversational UI generated by GitHub Spark
  • Firebase Functions for serverless backend processing
  • OpenAI API for goal and task creation
  • Firebase Firestore for persistent goal and task storage

Initially I prototyped across Lovable, Replit, Bolt, and GitHub Spark to see what each would generate. I eventually went with GitHub Spark’s output for its cleaner React component structure. Check it out here: https://tickgoals.com
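As a rough illustration of the core loop, here’s a minimal Python sketch of the goal-refinement step. The prompt wording, JSON schema, and function names are my own assumptions for illustration, not TickGoals’ actual implementation (which runs on Firebase Functions):

```python
import json

# Hypothetical system prompt asking the model for a structured reply.
SYSTEM_PROMPT = (
    "You are a goal-setting coach. Rewrite the user's vague goal as a SMART "
    "goal, then propose milestones and tasks. Respond with JSON only: "
    '{"smart_goal": str, "milestones": [{"title": str, "tasks": [str]}]}'
)

def parse_goal_response(raw: str) -> dict:
    """Validate the model's JSON reply before persisting it (e.g. to Firestore)."""
    data = json.loads(raw)
    if "smart_goal" not in data or "milestones" not in data:
        raise ValueError("model reply missing required fields")
    return data
```

Validating the reply before storage matters because LLMs occasionally drop fields or wrap JSON in prose; failing loudly here is cheaper than debugging a half-written document later.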

Project 2: NewsVibe.AI, newsletter aggregation and summarization platform (Approximately 12 hours)

While catching up on email, I noticed my inbox was filled with newsletters that I’d often just skim or summarize, so I built a tool to handle this automatically. The app gives users personalized email addresses for newsletter subscriptions, then presents the content in a scrollable newsfeed with AI summarization.

Key features:

  • Personal @newsvibe.me email addresses for newsletter subscriptions.
  • Instagram-style scrollable feed displaying all your newsletters.
  • AI-powered summarization to get quick overviews of content.
  • Automatic extraction of links and key information from newsletters.
  • Subscription management dashboard with usage analytics.

Tech stack:

  • Cloudflare Pages for frontend hosting.
  • Maileroo for email processing and parsing.
  • Supabase for user management and content storage.
  • Python backend deployed on Render for newsletter and summarization processing.
  • OpenAI API for content summarization.
  • Stripe integration for subscription management.

I split this project into separate frontend and backend repos, and found it blazing fast to build out all the backend functionality first before tackling the frontend.
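To give a flavor of the backend work, here’s a stdlib-only sketch of the link-extraction step that runs before summarization. The class and function names are hypothetical; NewsVibe’s actual pipeline (and the shape of Maileroo’s parsed payload) may differ:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect outbound links from a newsletter's HTML body."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Keep only real web links; skip mailto:, unsubscribe anchors, etc.
        if tag == "a":
            href = dict(attrs).get("href")
            if href and href.startswith("http"):
                self.links.append(href)

def extract_links(html_body: str) -> list[str]:
    parser = LinkExtractor()
    parser.feed(html_body)
    return parser.links
```

In practice you’d feed the extracted links and stripped text to the OpenAI API for summarization, but the parsing layer is worth keeping model-free so it stays fast and deterministic.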

Project 3: Welcome.AI, newsletter editor agent (Approximately 10 hours)

Welcome AI has been my side project since 2017, initially focused on competitive analysis of AI tools. I’ve rebuilt it multiple times, with the latest iteration using retrieval augmented generation (RAG) for content. But content curation still required manual review, either by me or community contributors, so I built an agent to automate the entire process, identifying, categorizing, and synthesizing AI news into a publication-ready newsletter. View a generated newsletter here. Subscribe at https://newsletter.welcome.ai/

Key Features:

  • Automatically identifies and filters AI-related news from RSS feeds and newsletters
  • Categorizes stories by topic and summarizes key points
  • Writes complete newsletter copy with insights and summaries
  • Curates the top stories and case studies for featured content sections
  • Generates the HTML layout and a feature image for the top story

Tech Stack:

  • Python for news feed processing
  • OpenAI Agent SDK and APIs
  • GitHub Actions for automated workflow execution
  • Supabase for content management and curation state
  • Backblaze B2 for generated feature image storage

This was purely a backend project to test and experiment with the OpenAI Agent SDK, though I diverged from it toward more direct large language model (LLM) tasks by the end.
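One pattern that keeps LLM costs down in a pipeline like this is a cheap keyword pre-filter before any model call. This is an illustrative sketch; the keyword list and threshold are my assumptions, not the agent’s actual logic:

```python
# Hypothetical pre-filter: a cheap keyword screen so the agent only spends
# LLM calls categorizing items that plausibly relate to AI.
AI_KEYWORDS = {"ai", "llm", "machine learning", "gpt", "agent", "model"}

def looks_ai_related(title: str, summary: str, min_hits: int = 1) -> bool:
    """Return True when an RSS item mentions enough AI-related terms."""
    text = f"{title} {summary}".lower()
    return sum(kw in text for kw in AI_KEYWORDS) >= min_hits
```

Items that pass the screen would then go to the OpenAI Agent SDK (or a direct LLM call) for proper categorization and summarization; items that fail are dropped without costing a token.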

Lessons learned

At a high level, you can definitely see how these tools are going to dramatically speed up development, especially for getting to minimum viable product (MVP) or prototype. You should only need a day or two to get something up and test market traction, especially with prompt app builders.

I found Claude Opus/Claude Code worked best for backend code within the IDE, while Gemini Pro was particularly good at frontend landing page development. Coding agents that make multiple changes across multiple files simultaneously, like those in Cursor, Copilot Agent, or ChatGPT Codex, still felt a bit daunting. I experienced chunks of code being deleted a few times, so I spent considerable time reviewing changes or reverting them.

Prompt app builders like Lovable, Replit, GitHub Spark, and Bolt can get you pretty far, but you can eventually hit a wall where the AI starts breaking more than it fixes, or you need to integrate third-party services that require direct code access. With one project, I started in a prompt builder then moved to an IDE for refinement.

At a high level, here are some tips to help on your vibe coding journey.

Before starting: Set instructions and rules

Like custom instructions in ChatGPT, each tool benefits from coding guidelines: Claude Code uses CLAUDE.md, Copilot reads instructions from .github/copilot-instructions.md, and Cursor has rules (templates at https://cursor.directory/rules).
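For example, a CLAUDE.md might look something like this (an illustrative sample, not taken from these projects):

```markdown
# CLAUDE.md
## Project conventions
- Python 3.11 backend, React frontend; keep them in separate folders.
- Never rename existing functions when refactoring.
- Ask before creating new files; prefer extending existing modules.
- Add a file-path comment and short description at the top of every file.
- Log at the start and end of every external API call.
```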

A screenshot of provided context for generative code tools.

Both Claude Code and Cursor support MCP (Model Context Protocol) for enhanced integrations (Cursor MCP directory: https://cursor.directory/mcp). Some tools can also index documentation folders for deeper context. Set these up first for better code generation.
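For reference, a Cursor MCP configuration lives in .cursor/mcp.json and, assuming the common stdio command/args form, looks roughly like this (the GitHub server entry is just an example):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```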

Start with a complete product requirements document (PRD)

Before writing any code, spend time iterating with an LLM to generate a thorough PRD. This back-and-forth refinement process goes a long way in providing the context your AI coding tools need. Capture everything: user workflows, UI specifications, technical requirements, and success metrics. Save this in your README.md as your north star.

Prompt app builders like GitHub Spark generate PRDs first from your initial prompt, so the more complete and refined it is, the better.
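As a starting point, a PRD skeleton saved in README.md might look like this (the section names are my own suggestion, not a prescribed format):

```markdown
# Product Requirements: <app name>
## Problem & target user
## User workflows (step by step)
## UI specification (screens, states, key components)
## Technical requirements (stack, integrations, constraints)
## Out of scope for the MVP
## Success metrics
```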

Define your project structure upfront

Work with the LLM to create a structure that follows best practices but stays simple for what you’re building. An MVP doesn’t need enterprise architecture. Map out where components, services, and APIs belong, and include this in your initial prompt.

A screenshot of the project file structure for one of the vibe coding apps created by Jeronimo De Leon.

Monitor new file generation closely, as AI tools often create new files that aren’t needed. When this happens, correct it immediately. Keep the structure as simple as possible, and break up files that are doing too many things; oversized files are harder to read and update later.
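For a small MVP, the structure mapped out in the initial prompt might look something like this (an illustrative layout, not any of the projects above):

```
myapp/
├── frontend/          # React UI
├── backend/
│   ├── main.py        # API entry point
│   ├── services/      # one module per integration (llm.py, db.py)
│   └── models.py      # shared data models
└── README.md          # the PRD lives here
```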

Add context markers throughout your code

Include file paths and descriptions at the top of each file. This helps the AI maintain context when making changes. Add detailed logging at critical points so you can track what’s happening when things break. Watch for function renames: LLMs often change function names unnecessarily when updating code, breaking references elsewhere.
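Putting those ideas together, the top of a file might look like this. The module path, names, and placeholder body are hypothetical; the header-comment and logging pattern is the point:

```python
# backend/services/summarizer.py
# Summarizes newsletter content; called by the feed builder.
import logging

logger = logging.getLogger(__name__)

def summarize(text: str) -> str:
    logger.info("summarize: received %d characters", len(text))
    summary = text[:200]  # placeholder for the actual LLM call
    logger.info("summarize: produced %d characters", len(summary))
    return summary
```

When an agent later edits this file, the path comment keeps it oriented, and the log lines tell you exactly where a pipeline broke without re-running everything under a debugger.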

Always check current API documentation

LLMs can generate outdated code. OpenAI and Pinecone, for example, have both changed their import syntax, yet AI tools still produce the old versions. Have the LLM search for the latest docs, or check them yourself. Knowing how your services currently work helps you catch these mistakes immediately.
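One lightweight safeguard is to fail fast when an installed SDK’s major version has drifted from the one your code was written against. This is a sketch using only the standard library; the expected-version table is an assumption you’d maintain yourself:

```python
from importlib import metadata

# Hypothetical table: the major versions this codebase was written against.
EXPECTED_MAJOR = {"openai": 1}  # i.e. the post-1.0 client syntax

def major_of(version: str) -> int:
    """Extract the major version number from a string like '1.35.7'."""
    return int(version.split(".")[0])

def check_sdk_versions(expected=EXPECTED_MAJOR):
    """Return drift warnings for installed dependencies; empty list if clean."""
    problems = []
    for package, major in expected.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            continue  # not installed in this environment
        if major_of(installed) != major:
            problems.append(f"{package}: wrote against {major}.x, found {installed}")
    return problems
```

Running this at startup (or in CI) catches the "LLM generated code for the old SDK" class of bug before it surfaces as a confusing runtime error.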

One feature, one conversation

Multitasking with AI means juggling code review while it generates more changes. Keep each conversation focused on a single feature unless features are directly related. When the LLM offers to optimize unrelated areas, decline. If the AI gets stuck repeating failed solutions, start fresh rather than fighting it.

Wisdom of the crowds

When stuck, get code reviews from other LLMs since they can catch different issues. But always review their output carefully. LLMs can duplicate functions across files or, worse, delete essential code. In Agent mode especially, I’ve seen them remove core functionality unrelated to the current task. Give specific instructions about where functions belong and double-check nothing critical disappeared.

Vibe Coding = Product Management + Engineering

The most significant shift with AI-assisted development isn’t the speed; it’s the role change. You’re no longer just implementing; you’re defining what to build, how it should work, and why it matters.

This is the multi-skilled professional evolution I mentioned earlier. When “We’re All Programmers Now,” it means domain experts can build their own solutions, but it also means programmers must become domain experts in product thinking. Success with vibe coding requires clear product vision to articulate requirements, technical knowledge to guide the AI correctly, and relentless focus on user problems.

You become the conductor orchestrating AI capabilities while maintaining the judgment to build what people actually need. The future belongs to these blended roles: product managers who understand engineering deeply enough to guide AI tools, and engineers who think like product managers. These T-shaped and M-shaped professionals operate fluidly across domains. This is how we compress innovation cycles from weeks to days: by eliminating the translation layer between idea and implementation.

About Jeronimo De Leon

Jeronimo De Leon is a seasoned product management leader with over 10 years of experience driving AI innovation across enterprise and startup environments. Currently serving as Senior Product Manager, AI at Backblaze, he leads the development of AI/ML features, focuses on how Backblaze enhances the AI data lifecycle for customers' MLOps architectures, and implements AI tools and agents to optimize internal operations.