Product Manager’s Vibe Coding Experiment: From Errors to Delivery in 47 Minutes
A 47-minute AI programming experiment completely reshaped the traditional boundaries of product management. When a product manager with no coding experience built a runnable demo inspired by the game Onmyoji using Vibe Coding, it revealed not only the potential of AI tools but also the incredible value of product thinking during the debugging phase. This article will take you through the entire process from errors to delivery, breaking down the three core capabilities and precise methodologies that empower product managers to harness AI programming.

On a weekend afternoon, I opened Trae CN, aiming to create a mini-game in the style of Onmyoji. Forty-seven minutes later, after errors, debugging, and fixes, a runnable demo was born, with code my development colleagues confirmed could be run directly.
Recently, I published an article discussing how Vibe Coding transforms product managers from “requirement translators” to “AI commanders.” After the article was released, many friends asked, “It sounds great, but how does it actually work? Can AI-generated code really run? What if there are issues?”
As a boy who grew up playing games, I always wanted to create a simplified version of a game—nothing too complex, just something that showcases characters and simulates battles. Previously, this idea was just a thought because I didn’t write code. But now, with AI programming tools available, I decided to give it a try.
This weekend experiment gave me a new understanding of Vibe Coding: the biggest hurdle for product managers using AI programming is not “generation” but rather “debugging” and “delivery.” The key to overcoming this hurdle lies in our familiar product thinking and a well-crafted prompt.
Starting Point: A Product Manager’s “Unproductive” Weekend
On Saturday afternoon, I opened my computer and launched my “AI collaborative workflow”:
- DeepSeek: Helps refine vague ideas into clear, unambiguous AI prompts.
- Trae CN: Receives prompts, generates, and runs code.
My goal was clear: create a runnable mini-game in the style of Onmyoji, featuring login, character statuses, and simple battles—not for delivery, but for fun and to validate what AI could help me achieve.
Practical Application: 47 Minutes from Errors to a Runnable Demo
Below is a real record of my collaboration with AI. I want to showcase how a product manager without coding knowledge can solve problems using product thinking when AI-generated code encounters issues.
Minutes 1-10: Initial Generation
I described my idea in natural language to DeepSeek: “I want to create a mini-game in the style of Onmyoji, where users can log in to view their characters and engage in battles.” DeepSeek helped me organize this into a structured prompt, which I input into Trae CN.
Soon, a backend based on Flask and an HTML page were generated. I eagerly clicked to run it—only to see a red message pop up:
“Page load failed, please refresh and try again.”
If it were my first time using AI programming, I might have given up upon seeing this error. But as a product manager, I felt excited—this was a true moment of “human-machine collaboration.” It was like receiving user feedback about a bug, and now I needed to “reproduce, locate, and solve” it.
Minutes 11-30: Directing Debugging
Instead of diving into the code myself, I continued to issue commands to Trae CN in natural language—like directing an intern:
- Me: “Check the server’s running status and see if there are any error messages.”
- Trae CN: Checked the server logs and found no obvious errors.
- Me: “Look at the game.html file; which line of code triggered the ‘page load failed’ message?”
- Trae CN: Located the code near line 316, discovering that the checkLogin function had failed.
- Me: “Examine the implementation of the checkLogin function to see why it failed.”
- Trae CN: Traced it back to the apiRequest function in base.html, finding that the frontend call to /api/account/info might not be returning the expected format.
- Me: “Check the backend implementation of this interface to see what format it returns.”
- Trae CN: Reviewed account/routes.py and models.py, revealing that the interface returned JSON, but the frontend code did not handle exceptions properly.
Where I added value: I may not understand complex code, but I do understand the "data flow": the frontend requests an interface, the backend returns data, and a break anywhere in between causes failure. This basic grasp of system architecture, accumulated through daily work with development colleagues, let me direct the AI's troubleshooting precisely.
Minutes 31-40: Formulating a Fix
- Me: “Modify the checkLogin and apiRequest functions in base.html to enhance error handling. It should handle non-JSON responses, network timeouts, and other exceptions.”
- Trae CN: Generated the modified code and displayed the changes.
- Me: “Could other pages have similar issues? Check the initialization code in game.html and add existence checks for elements.”
- Trae CN: Checked and modified game.html to add existence checks for all DOM operations.
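To make the fix above concrete, here is a rough sketch of what the hardened helpers might look like. The names `apiRequest`, `checkLogin`, and `/api/account/info` come from the debugging log above; everything else is an illustrative guess written in TypeScript, not the actual generated code. The `fetchImpl` parameter is an assumption added here so the logic can be exercised without a live server.

```typescript
// Sketch of the hardened helpers described above. apiRequest, checkLogin,
// and /api/account/info are named in the debugging log; the bodies below
// are illustrative guesses, not the actual generated code.

interface ApiResult {
  ok: boolean;
  data?: unknown;
  error?: string;
}

// Minimal shape of fetch that apiRequest needs; injectable for testing.
type FetchLike = (url: string) => Promise<{
  headers: { get(name: string): string | null };
  text(): Promise<string>;
}>;

// Pure helper: turn a raw response into a safe result instead of throwing
// on a non-JSON body (the failure mode found in the debugging session).
function parseApiBody(contentType: string, body: string): ApiResult {
  if (!contentType.includes("application/json")) {
    return { ok: false, error: `unexpected content type: ${contentType}` };
  }
  try {
    return { ok: true, data: JSON.parse(body) };
  } catch {
    return { ok: false, error: "response body was not valid JSON" };
  }
}

// Wrap the request with a timeout so a hung call surfaces as an error
// rather than a blank "page load failed".
async function apiRequest(
  url: string,
  fetchImpl: FetchLike = (globalThis as any).fetch,
  timeoutMs = 5000,
): Promise<ApiResult> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("request timed out")), timeoutMs);
  });
  try {
    const res = await Promise.race([fetchImpl(url), timeout]);
    return parseApiBody(res.headers.get("content-type") ?? "", await res.text());
  } catch (err) {
    return { ok: false, error: `network error or timeout: ${String(err)}` };
  } finally {
    clearTimeout(timer);
  }
}

async function checkLogin(fetchImpl?: FetchLike): Promise<boolean> {
  const res = await apiRequest("/api/account/info", fetchImpl);
  return res.ok; // a failed or malformed response now reads as "not logged in"
}

// Existence check before any DOM write, mirroring the game.html fix.
function setText(id: string, text: string): void {
  const doc = (globalThis as any).document;
  const el = doc ? doc.getElementById(id) : null;
  if (el) el.textContent = text; // silently skip if the element is missing
}
```

The design choice worth noting: every failure path returns a structured result instead of throwing, so the page can show a specific message rather than dying on an unhandled exception.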
Minutes 41-47: Verification and Unexpected Gains
After completing the modifications, I refreshed the page again—success! User information loaded correctly, the game interface displayed fully, and all interactions worked as expected.
What surprised me even more was the feedback when I shared this demo with my development colleagues on Monday (just for fun): “The code structure is clear, and the tech stack is standard; we can directly continue development based on this.”
In 47 minutes, from 0 to 1, from errors to deliverable—what started as an “unproductive” weekend yielded tangible results.
Reflection: Three Core Capabilities for Product Managers in Vibe Coding
This experiment deeply impressed upon me that product managers using Vibe Coding are not “replacing programmers” but rather “leading an AI intern.” AI serves as my hands and magnifying glass, while I am the commander who knows where to look and what to do next.
1. Prompt Refinement Ability
My biggest takeaway from this practical exercise was that poorly written prompts lead to results that are worlds apart from expectations.
I had previously tried giving Trae CN a simple command like “create a game page,” only to receive something completely unusable. However, this time I used DeepSeek to refine the prompt first—transforming vague ideas into structured descriptions, clarifying the tech stack, interaction details, and error handling requirements—resulting in a significant increase in generation quality.
Now, I spend at least 30% of my time refining prompts. It’s akin to writing a PRD; the clearer the requirements, the fewer pitfalls arise later.
2. Problem Decomposition Ability
When the page threw an error, I remained calm and decomposed the vague issue of “page error” into an executable troubleshooting path:
- Is there a problem with the server?
- Which line of code triggered the error?
- What function triggered the error?
- What data does this function rely on?
- Where does the data come from?
This decomposition logic stems from our daily handling of user feedback and analyzing product issues. AI can execute commands, but only humans can decide what to check next.
3. Technical Understanding
I don’t write code, but I understand basic concepts like interfaces, data flow, cross-domain requests, and DOM operations. This enables me to communicate with AI on the same channel and accurately identify issues.
Going Further: How to Harness AI with “Precise Definitions”
In addition to debugging skills, Vibe Coding requires product managers to have “precise definitions”—the clearer you are about what you want, the closer AI’s output will be to your expectations.
In another dashboard project, instead of letting AI “draw a nice one randomly,” I provided it with a technical stack requirement:
Technical Stack Requirements
- Framework: React + TypeScript
- Styles: Tailwind CSS + shadcn/ui component library
- Charts: Recharts (all charts filled with mock data, no empty spaces)
- Tables: TanStack Table (to facilitate sorting and filtering later)
- Icons: Lucide React
- Interactions: Top navigation + collapsible sidebar; show a success message with Toast when clicking “Save”
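The mock-data requirement above can be sketched in a few lines. This is only an illustration of what “all charts filled with mock data, no empty spaces” implies; `MetricPoint` and `mockSeries` are hypothetical names, not from the generated dashboard, and the real project would pass such an array to a Recharts component.

```typescript
// Sketch of the "all charts filled with mock data, no empty spaces" rule.
// MetricPoint and mockSeries are hypothetical names for illustration.

interface MetricPoint {
  month: string;   // x-axis label
  revenue: number; // y-axis value: always a number, never null
}

const MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"];

// Deterministic mock data: a base trend plus a small wobble, so every
// point exists and the chart never renders with gaps.
function mockSeries(base: number, step: number): MetricPoint[] {
  return MONTHS.map((month, i) => ({
    month,
    revenue: base + step * i + (i % 2 === 0 ? 5 : -3),
  }));
}

// In the real dashboard this array would feed a Recharts chart,
// e.g. <LineChart data={revenueSeries}>.
const revenueSeries = mockSeries(100, 12);
```

Spelling this out in the prompt is what prevents the AI from generating charts bound to empty arrays.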
What was the result? The AI-generated page met expectations for both UI style and interaction details, and was ready to take straight into a discussion with developers.

More importantly, because of the standardized tech stack, it would be seamless for developers to take over and continue improving it later—this was precisely why my “weekend mini-game” was ultimately recognized by the development team.
Final Thoughts: To All Product Managers Who Want to Try
As I wrote this article, I wanted to convey not anxiety but a sense of possibility: Vibe Coding gives product managers a pair of hands to “realize their ideas”.
I still use Axure proficiently and believe prototyping skills are fundamental. But when I found I could turn a whimsical idea into a runnable, debuggable, and even deliverable demo in just 47 minutes over a weekend, my sense of what a product manager can achieve was completely unlocked.
Regardless of your current prototyping skills, I encourage you to find a weekend to walk through the complete process:
- Open Trae CN (the web version is sufficient, no environment configuration needed).
- Find a small idea that interests you (game, tool, dashboard, etc.), but don’t just say “make an XX”—first use DeepSeek or another AI assistant to refine your idea into a structured prompt.
- Propose at least three modification requests (change styles, add fields, adjust logic) and experience the AI’s response speed.
- Intentionally create an error to experience how to direct AI in debugging.
When you complete the cycle of “refining prompts → generating → encountering issues → analyzing → directing AI to fix,” you will understand: Vibe Coding is not about replacing product managers, but rather providing us with a new possibility: transforming “waiting for others to do it” into “trying it ourselves first.”
From today onward, try to be a “proactive creator” as a product manager.