I built an enterprise platform almost entirely through AI-assisted coding sessions. Here is what broke, what I missed, and what I now do differently.

Over three months I shipped an AI-driven enterprise platform -- LLM content pipeline, analytics dashboard, containerized services, the works. My primary development partner was an AI coding agent. I would describe what I needed, the agent would write the code, and I would review, steer, and deploy. The velocity was unlike anything I had experienced before. So were the failure modes.

This is not an article about whether AI agents are useful for coding. They are. This is about the specific class of problems that emerge when you build a real platform this way -- and how I learned to stay ahead of them.

1. Every AI Session Is a Context Bubble. Your Codebase Is Not.

The single most important thing I learned: each conversation with an AI agent operates in isolation. Session 47 establishes that user roles are called "employee" and "manager". Session 112 introduces a new feature and names them "staff" and "leader". Both sessions produce correct code. Globally, I now had two conventions for the same concept across 30+ files.

I found the same enum value written four different ways. I found SQL queries using element IDs with an _exam suffix in one module while the storage layer used the base ID without it. Each written in a separate session, each locally valid, together forming a silent data mismatch that took days to surface.

What I do now: I maintain centralized constants files for every shared concept -- roles, statuses, content types, feature flags. I point the agent to these files at the start of every session. The cost of "just hardcode the string" compounds exponentially when each session introduces its own interpretation.
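
A minimal sketch of what I mean by a centralized constants file -- the enum names and values here are illustrative, not my real schema:

    # constants.py -- single source of truth for shared vocabulary.
    # Role and ContentStatus are illustrative names, not the real schema.
    from enum import Enum

    class Role(str, Enum):
        EMPLOYEE = "employee"   # never "staff"
        MANAGER = "manager"     # never "leader"

    class ContentStatus(str, Enum):
        DRAFT = "draft"
        PUBLISHED = "published"
        ARCHIVED = "archived"

At the start of each session I tell the agent: use the enums in constants.py, never hardcode a role or status string.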

2. The Bugs That Cost the Most Are the Ones That Look Like They Work

The 500 errors never worried me. They are loud and fixable. The expensive bugs were the silent ones -- green dashboards, successful deploys, data flowing -- while the numbers were quietly wrong.

A KPI showing 0% efficiency for every user? The query referenced content_id when the actual column was related_content_id. It ran perfectly, returned zero rows, and the formula calculated 0 / target = 0. It took weeks before anyone questioned it. The AI agent had generated plausible-looking SQL against a schema it had never directly inspected.

What I do now: I treat any metric returning zero or 100% with the same suspicion as a crash. I build sanity-check assertions into data pipelines. And before deploying any AI-generated SQL, I validate column names against the actual database schema -- not the agent's assumption of what it should be.
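
Here is roughly what those guards look like -- a sketch assuming a Postgres backend reached through psycopg2; the table and column names are hypothetical:

    # sanity_checks.py -- illustrative guards; table and column names are hypothetical.
    import psycopg2

    def assert_columns_exist(conn, table, columns):
        """Fail fast if a query references a column the schema does not have."""
        with conn.cursor() as cur:
            cur.execute(
                "SELECT column_name FROM information_schema.columns"
                " WHERE table_name = %s",
                (table,),
            )
            actual = {row[0] for row in cur.fetchall()}
        missing = set(columns) - actual
        if missing:
            raise RuntimeError(f"{table} is missing expected columns: {missing}")

    def assert_metric_plausible(name, value):
        """Treat a metric pinned at exactly 0% or 100% with the same suspicion as a crash."""
        if value in (0.0, 1.0):
            raise RuntimeError(f"Suspicious metric {name}={value}; verify the query first.")

    # Before deploying AI-generated SQL:
    # conn = psycopg2.connect(...)
    # assert_columns_exist(conn, "user_activity", {"related_content_id", "user_id"})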

3. AI Agents Build One Layer at a Time. Features Require All Layers at Once.

I lost weeks to what I call "invisible features." The pattern: I would ask the agent to build a backend endpoint. It would implement it perfectly -- correct data, clean API. Then in a separate session, I would work on the frontend. But I would forget to wire the UI to the new endpoint. Or the agent would build the component but not connect it to the API response.

This happened five separate times for the same feature. Each debugging session started with "Why aren't the KPIs showing?", spent an hour confirming the backend was fine, and eventually discovered the React component simply was not rendering the data already sitting in the API response.

What I do now: I implement features as vertical slices in a single session -- API endpoint, TypeScript types, and UI component together. When I split work across sessions, I keep a checklist of unpaired endpoints. A backend endpoint without a frontend consumer is dead code with a pulse.
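
The checklist can be partially automated. Below is a rough sketch that flags backend routes with no frontend consumer -- it assumes FastAPI-style route decorators and a frontend that references endpoints by their literal path strings; the directory names are placeholders:

    # unpaired_endpoints.py -- rough sketch, assumes FastAPI-style decorators and
    # a frontend that calls endpoints by literal path strings.
    import pathlib
    import re

    ROUTE_RE = re.compile(r"""@(?:app|router)\.(?:get|post|put|delete)\(\s*["']([^"']+)["']""")

    def backend_routes(backend_dir):
        routes = set()
        for path in pathlib.Path(backend_dir).rglob("*.py"):
            routes.update(ROUTE_RE.findall(path.read_text()))
        return routes

    def frontend_source(frontend_dir):
        return "\n".join(p.read_text() for p in pathlib.Path(frontend_dir).rglob("*.ts*"))

    if __name__ == "__main__":
        src = frontend_source("frontend/src")            # hypothetical layout
        for route in sorted(backend_routes("backend/app")):
            if route not in src:
                print(f"UNPAIRED: {route} has no frontend consumer")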

4. AI Agents Create Files. They Don't Commit Them.

This sounds trivial. It bit me repeatedly. The agent refactors a module into a package and creates three new files. Everything works locally. Imports resolve. I deploy.

The container crashes with ModuleNotFoundError. The files exist on my machine but were never committed to git. The Docker image was built from the repo. The repo does not have the files. Production enters a crash loop for 30 minutes.

What I do now: After every session where the agent creates new files, I run git status before doing anything else. I added a CI check that verifies all Python imports resolve against committed files. The gap between "works on my machine" and "works in the container" is wider when an AI is creating files on your behalf.
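
The CI check is a short script. This is a sketch of the idea: list the Python files git actually tracks, parse their imports, and fail if a first-party import points at a module that is not committed. The package name "app" is a placeholder for whatever your top-level package is called:

    # check_imports.py -- CI sketch; "app" is a placeholder top-level package name.
    import ast
    import subprocess
    import sys

    PACKAGE_ROOT = "app"

    committed = subprocess.run(
        ["git", "ls-files", "*.py"], capture_output=True, text=True, check=True
    ).stdout.splitlines()

    # "app/services/kpi.py" -> "app.services.kpi", "app/__init__.py" -> "app"
    committed_modules = {
        p[:-3].replace("/", ".").removesuffix(".__init__") for p in committed
    }

    unresolved = set()
    for path in committed:
        tree = ast.parse(open(path).read(), filename=path)
        for node in ast.walk(tree):
            names = []
            if isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            elif isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            for name in names:
                if name.split(".")[0] == PACKAGE_ROOT and name not in committed_modules:
                    unresolved.add(f"{path}: {name}")

    if unresolved:
        print("Imports that do not resolve to committed files:")
        print("\n".join(sorted(unresolved)))
        sys.exit(1)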

5. AI-Generated Content Has Biases Users Detect Instantly

I built an AI quiz generator. Users cracked the pattern within a week: "The longest answer is always correct." LLMs naturally produce more detailed text for correct answers and brief, dismissive text for wrong ones. Similarly, my AI-powered feedback evaluator defaulted to 3 out of 5 stars on nearly everything -- the path of least resistance for a language model.

What I do now: I add explicit anti-bias instructions to every generative prompt ("make all answer options similar in length"). I run post-generation validation that flags length imbalances. I treat AI content quality as an ongoing tuning problem with user feedback loops, not a one-time prompt.
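
The length check in particular is only a few lines. A sketch -- the field names and the 1.5x threshold are mine to illustrate, tune them to your content:

    # validate_quiz.py -- post-generation check; field names and threshold are illustrative.
    def correct_answer_stands_out(options, correct_index, ratio=1.5):
        """True if the correct option is conspicuously longer than every other option."""
        correct_len = len(options[correct_index])
        other_lens = [len(o) for i, o in enumerate(options) if i != correct_index]
        return correct_len > ratio * max(other_lens)

    quiz = {
        "options": ["Lyon", "Paris, the capital and largest city of France", "Nice", "Lille"],
        "correct_index": 1,
    }
    if correct_answer_stands_out(quiz["options"], quiz["correct_index"]):
        print("Regenerate: the correct answer is the longest by a wide margin.")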

6. Database Migrations in Auto-Deploy Pipelines Are Fragile
