Blog · February 13, 2025
It Cost Me $239.72 to Build My SaaS With Claude Code
I built MarginDash — a tool that tells you which AI customers are profitable — in several weeks, working alone. My total AI bill was $239.72.
Here's exactly how that money got spent, and what I didn't expect.
What I built
MarginDash is a Rails app with a pricing database covering 100+ AI models across every major vendor. It ingests usage events via SDK, calculates real costs using live pricing data, connects to Stripe for revenue, and shows you profit margin per customer. There's a cost simulator for modeling what happens when you swap models, and budget alerts when spending goes sideways.
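The core idea — join usage events against a pricing table, subtract from Stripe revenue — can be sketched in a few lines. This is an illustrative sketch only, not MarginDash's actual code; the model names, prices, and field names are all invented for the example:

```python
# Illustrative sketch of per-customer margin math; not MarginDash's real code.
# Prices are per 1M tokens (input, output) and are made up for this example.
PRICING = {
    "model-a": (2.50, 10.00),
    "model-b": (3.00, 15.00),
}

def event_cost(event):
    """Cost of a single usage event, from the pricing table."""
    in_price, out_price = PRICING[event["model"]]
    return (event["input_tokens"] * in_price +
            event["output_tokens"] * out_price) / 1_000_000

def margin_per_customer(events, revenue):
    """revenue maps customer id -> monthly revenue (e.g. pulled from Stripe)."""
    costs = {}
    for e in events:
        costs[e["customer"]] = costs.get(e["customer"], 0.0) + event_cost(e)
    return {c: revenue.get(c, 0.0) - cost for c, cost in costs.items()}
```

Everything else — the simulator, the alerts — is variations on this one join.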
Alongside the main app, I shipped SDKs for both TypeScript and Python — installable from npm and PyPI with a 6-line integration.
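Internally, a client like that mostly just buffers events and ships them as a batch. Here's a toy sketch of that shape — the class, method, and field names are invented for illustration, not the published SDKs' actual API:

```python
import json

class UsageClient:
    """Toy sketch of what a usage-tracking SDK does internally:
    buffer events, then serialize them for a batch POST to an
    ingestion API. Names here are illustrative, not the real SDK."""

    def __init__(self, api_key):
        self.api_key = api_key
        self.buffer = []

    def track(self, customer, model, input_tokens, output_tokens):
        """Record one usage event in the local buffer."""
        self.buffer.append({
            "customer": customer,
            "model": model,
            "input_tokens": input_tokens,
            "output_tokens": output_tokens,
        })

    def flush(self):
        """Return the JSON batch that would be POSTed, and clear the buffer."""
        payload = json.dumps({"api_key": self.api_key, "events": self.buffer})
        self.buffer = []
        return payload
```

The "6-line integration" claim is really about this surface area: initialize a client, call one tracking method per request, and the SDK handles the rest.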
The final codebase: 352 files, 23,476 lines of code, 422 commits.
The plan that didn't survive contact with reality
I started on the $20/month Claude Pro plan. I burned through it in a single day.
Day one was the core platform: database schema, event ingestion API, user authentication, the basics. By evening, I'd hit the usage ceiling and couldn't continue. I upgraded to Claude Max 5x at $100/month.
That lasted six days. By February 4th I was hitting limits again mid-session, right in the middle of building the cost simulator. I'd already burned through the $100 tier, so I bought $18.65 in extra usage tokens to keep going. Once I did the math, it was clear that buying tokens piecemeal would cost more than upgrading, so within the hour I moved to the Claude Max 20x plan at $200/month. In that moment, my credit card statement was my only dashboard.
I requested a downgrade back to the lower tier starting next billing cycle. The heavy construction was nearly done.
The full breakdown:
| Date | Amount | What happened |
|---|---|---|
| Jan 28 | $20.00 | Started on Pro plan |
| Jan 29 | $80.37 | Upgraded to $100 Max plan |
| Feb 4 | $5.00 | Extra usage tokens |
| Feb 4 | $13.65 | More extra usage tokens |
| Feb 4 | $120.70 | Upgraded to $200 Max plan |
| Total | $239.72 | |
What $239.72 bought me
Over several weeks, I ran 124 Claude Code sessions. Some days were heavier than others — my peak day had over 6,000 back-and-forth messages with the model and produced 66 commits.
The token numbers are large in the abstract: nearly 2 million output tokens, billions of cached input tokens. But what matters is the output. In several weeks I went from an empty directory to a deployed product with two published SDK packages, a documentation site, five SEO comparison pages, and a working Stripe integration.
That's roughly $0.57 per commit, or about a penny per line of code.
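Those per-unit figures fall straight out of the totals above:

```python
# Per-unit cost check, using the totals from this post.
total, commits, loc = 239.72, 422, 23_476

assert round(total / commits, 2) == 0.57  # ≈ $0.57 per commit
assert round(total / loc, 2) == 0.01      # ≈ a penny per line of code
```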
What surprised me
The cost escalation was fast. I genuinely thought the $20 plan would last a week. It didn't last a day. Each tier felt adequate for about half the time I expected. If you're doing serious application development — not just asking questions — budget for the higher tiers from the start.
Sessions got expensive as the codebase grew. Claude Code reads your project files for context. As the app grew from a few files to 352, each session consumed more input tokens just to understand the codebase. The first week was cheap. The second week was not.
Codex was a dead end. I tried OpenAI's Codex before committing to Claude Code. It didn't work for this kind of full-stack development. I'm not going to editorialize on why — I just ended up doing all the work in Claude Code and don't regret it.
The daily rhythm matters more than the tool. My most productive days weren't the days I spent the most on AI. They were the days I started with a clear list of what needed building and broke it into specific tasks. The worst days were when I opened Claude Code with a vague "make the dashboard better" and let it wander.
Would I do it again?
Without hesitation. $239.72 for several weeks of output that would have taken a small team months is not a number I need to think about. The harder question is whether this pace is sustainable — I downgraded my plan because the heavy building is done, but maintenance, bug fixes, and new features will keep the meter running.
I'm building a tool that tracks AI costs. It would be embarrassing if I didn't track my own.