We were genuinely excited about the potential of tools like Cursor to help everyone on our team, even those who aren’t full-time programmers, contribute to building better AI interactions. We thought, “Imagine if our Subject Matter Experts (SMEs), with their deep understanding of our users and the subject matter, could directly translate their expertise into effective AI prompts using these new AI coding assistants!”

So, we gave it a shot. We introduced Cursor to our SMEs, provided them with a solid template for creating structured prompts, and encouraged them to dive in and experiment. Now, these are incredibly smart, logical thinkers – the best in their fields. Yet, to our surprise, they quickly found themselves… well, hopelessly lost.

This got me thinking. I wanted to understand exactly why these intelligent individuals were struggling, even with a clear template to guide them. And honestly, I was also a little excited to roll up my sleeves and get back to some hands-on coding myself. It had been a while, and I was curious to see these AI tools in action.

What I quickly learned was that while Cursor’s AI capabilities are undeniably powerful and hold immense promise, they’re not a substitute for the nuanced understanding and hard-earned experience of a seasoned programmer. It became clear that “vibe coding,” the idea of intuitively coding with AI assistance, isn’t quite as straightforward as it’s sometimes portrayed.

Time and time again, I watched as Cursor, left to its own devices, would get itself into tangled webs of logic. It would suggest solutions that were overly complex, introduce inconsistencies, or simply miss crucial steps in the process. It was like having a very enthusiastic but inexperienced intern who needed constant guidance. That’s where I found myself stepping in – acting as a mentor, walking the AI through fundamental programming principles, the kind that senior engineers accumulate over years, often through the painful process of debugging code in the wee hours of the morning.

This experience really brought to mind a list of common mistakes that junior programmers often make when they’re just starting out. It felt like, in many ways, the AI was exhibiting some of these same tendencies:

  • Poor Naming Conventions: Suggesting names for variables or functions that were unclear or didn’t accurately reflect their purpose. Or worse, reusing the same ambiguous name for different things (see the sketch after this list)!
  • Writing Overly Complex Code: Proposing solutions that were far more intricate than necessary for the task at hand.
  • Ignoring DRY (Don’t Repeat Yourself) Principle: Suggesting the same block of code in multiple places instead of creating reusable components.
  • Premature Optimization: Often focusing on potential performance improvements before the core logic was even correctly implemented.
  • Not Handling Errors Properly: Offering basic or incomplete error handling suggestions that wouldn’t cover obvious potential issues.
  • Over-Engineering: Building solutions that were much more elaborate than the immediate problem required.
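
To make a couple of these tendencies concrete, here’s a small, purely hypothetical Python sketch. The function and file names are invented for illustration, not taken from our project or from actual Cursor output. The first function shows the kind of repetitive, vaguely named code the AI would often suggest; the two that follow show the sort of cleanup I’d nudge it toward: one reusable, descriptively named loader with basic error handling.

```python
import json

# Hypothetical "before" snippet: ambiguous names (`process`, `data`, `data2`)
# and the same load-a-JSON-file block pasted twice instead of reused.
def process(path, path2):
    f = open(path)
    data = json.load(f)
    f.close()
    f = open(path2)
    data2 = json.load(f)
    f.close()
    return data, data2


# Hypothetical "after" snippet: a single, clearly named loader with basic
# error handling, then a thin wrapper that reuses it (DRY).
def load_prompt_config(path: str) -> dict:
    """Load one JSON prompt configuration, failing loudly if the file is bad."""
    try:
        with open(path, encoding="utf-8") as handle:
            return json.load(handle)
    except (OSError, json.JSONDecodeError) as exc:
        raise ValueError(f"Could not load prompt config from {path}: {exc}") from exc


def load_prompt_configs(*paths: str) -> list[dict]:
    """Load several prompt configurations by reusing the single-file loader."""
    return [load_prompt_config(path) for path in paths]
```

Nothing in that cleanup is advanced; it’s the kind of refactor a senior engineer does almost reflexively, which is exactly why the AI needed a mentor to get there.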

Furthermore, I noticed that when the AI encountered an error, it would often fix the first instance it found but fail to recognize and address similar errors elsewhere in the code. And perhaps most tellingly, it rarely proactively asked for clarification or context when it seemed to be missing crucial information. It required me to constantly provide the necessary background and guide its thinking.

This whole experience resonated with something my friend Pete at The Gnar Company recently pointed out. He said that coding these days is a lot like flying an airplane. Modern airplanes are incredibly sophisticated pieces of technology, packed with automation and AI systems. But you wouldn’t dream of letting one take off without an experienced pilot in the cockpit. That pilot’s years of training, their ability to understand complex situations, and their intuition honed through countless hours of experience are still absolutely essential for a safe and successful flight.

Similarly, while AI-assisted coding tools like Cursor are incredibly powerful and can undoubtedly accelerate development, they are not a replacement for the expertise and judgment of an experienced programmer. These tools can be fantastic co-pilots, suggesting solutions and automating tedious tasks. But ultimately, a skilled human still needs to be in the cockpit to provide direction, keep the code well-structured and maintainable, and make sure it truly solves the underlying problem.

Our experiment with our SMEs highlighted this perfectly. While the AI could offer coding suggestions, it lacked the higher-level understanding of software design principles, the ability to anticipate potential pitfalls, and the critical thinking needed to navigate complex coding challenges effectively.

So, while the promise of “vibe coding” is exciting, the reality is that it’s not yet a simple, intuitive process for everyone. It requires a foundational understanding of programming concepts and the ability to guide and mentor the AI effectively. These tools are powerful, but they amplify the skills of an experienced programmer far more than they democratize coding for everyone. We’re still incredibly optimistic about the future of AI-assisted development, but it’s crucial to approach it with a realistic understanding of its current capabilities and the continued importance of human expertise.

