Step 7 — The Launch
All tests green. The Release agent packages the game for iOS and Android, submits to the stores, then waits for review.
This is the most boring step of the whole pipeline, and that's good. It means the hard work is done.
The release checklist
# Now we wait.
Apple:  ~24 hours review
Google: ~48 hours review
/ship
Boring is good.
If the test step did its job, the launch is mechanical. Build, sign, upload, wait.
Most "launch problems" are bugs the test step should have caught earlier. We've never had Apple reject a build for behavior issues, only for metadata.
Store listing — translated by AI
The Release agent translates the store listing into all 5 languages. We never ship the same English text everywhere.
en: Tap. Bounce. Survive 100 levels.
zh: 點擊。彈跳。生存。
ja: タップ。バウンド。サバイブ。
ko: 탭. 바운스. 서바이브.
de: Tap. Bounce. Survive.
# Screenshots auto-localized:
- Each locale gets screenshots captured with that locale's UI
✓ 5 listings, ~5 minutes total
✓ All store-ready
/listings
Localized listings = more installs.
Players prefer apps with their language on the store page, even if the game itself is universal.
5 minutes of AI translation = doubled install rates from JP/KR/CN markets.
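The per-locale listing step can be sketched as a small script. The subtitle strings are the ones shown above; the `write_listings` helper, the `subtitle.txt` filename, and the output folder layout are illustrative assumptions, not our actual upload tooling.

```python
from pathlib import Path

# Store subtitle per locale (the strings from the post).
LISTINGS = {
    "en": "Tap. Bounce. Survive 100 levels.",
    "zh": "點擊。彈跳。生存。",
    "ja": "タップ。バウンド。サバイブ。",
    "ko": "탭. 바운스. 서바이브.",
    "de": "Tap. Bounce. Survive.",
}

def write_listings(out_dir: Path) -> list[Path]:
    """Write one subtitle.txt per locale, e.g. out_dir/ja/subtitle.txt."""
    paths = []
    for locale, subtitle in LISTINGS.items():
        path = out_dir / locale / "subtitle.txt"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(subtitle, encoding="utf-8")
        paths.append(path)
    return paths

# write_listings(Path("store_metadata"))  # hypothetical output folder
```

A folder-per-locale layout like this is what most store upload tools expect, which is why the whole step stays scriptable.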
After launch — we don't disappear
Once the game is live, we watch the first players for 48 hours. If anything looks wrong (crashes, bad reviews, weird behavior), we file a fix and start the cycle over.
Day 2:  500 downloads
        crash fixed and re-shipped → no new crashes
        4.4 → 4.6 stars

Day 7:  3,000 downloads
        4.6 star rating → stable

Day 14: 8,000 downloads
        2 player suggestions → added to roadmap
# Then we start the next game.
/iterate
Launch is the start, not the end.
Players find things tests didn't. We watch, file bug cards, fix in days, re-ship.
The whole pipeline runs again — same agents, same rules, just for the bug. Cycle takes hours, not weeks.
What we measure post-launch
Crash rate      ≤ 0.1%
Star rating     ≥ 4.0
Day-1 retention ≥ 30%
Avg session     ≥ 90s
Reviews/day     ≥ 5

# If any metric breaks its threshold:
✗ file BUG card
✗ agent team investigates
✗ fix or rollback
/metrics
Numbers, not feelings.
Each metric has a threshold. If one is crossed, the pipeline auto-files a bug card and assigns it.
This catches issues hours after launch, not days. Players appreciate quick fixes.
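The gate above is simple enough to sketch in a few lines. The thresholds are the ones listed; the metric key names and the `failing_metrics` helper are hypothetical stand-ins for whatever the monitoring agent actually reads.

```python
# Post-launch metric gate. Thresholds are from the post; metric names
# and the helper are illustrative assumptions.
THRESHOLDS = {
    "crash_rate":      (0.001, "max"),  # crash rate ≤ 0.1%
    "star_rating":     (4.0,   "min"),  # ≥ 4.0 stars
    "day1_retention":  (0.30,  "min"),  # ≥ 30%
    "avg_session_s":   (90,    "min"),  # ≥ 90 seconds
    "reviews_per_day": (5,     "min"),  # ≥ 5 reviews/day
}

def failing_metrics(snapshot: dict[str, float]) -> list[str]:
    """Return every metric in the snapshot that broke its threshold."""
    failed = []
    for name, (limit, kind) in THRESHOLDS.items():
        value = snapshot[name]
        if (kind == "max" and value > limit) or (kind == "min" and value < limit):
            failed.append(name)
    return failed

snapshot = {"crash_rate": 0.002, "star_rating": 4.6,
            "day1_retention": 0.35, "avg_session_s": 120,
            "reviews_per_day": 7}
for metric in failing_metrics(snapshot):
    print(f"BUG card: {metric} broke its threshold")
```

Running this on the sample snapshot flags only `crash_rate` (0.2% > 0.1%), which is exactly the signal that would auto-file a bug card.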
Real example — BOP launch
Tue 11:00  Apple: in review
    21:00  Apple: approved
    21:00  released to App Store

Wed 10:00  Google: approved
    10:00  released to Google Play

Wed 11:00  62 installs
    17:00  214 installs
    ✓ no crashes day 1

Sun        end of week 1
           2,143 installs
           4.5 ★ avg
    ✓ all metrics green

# Total time: idea → live = 4 days
# Active dev time: 1 day
# Apple/Google review: 3 days
/case-study
BOP — 4 days idea to live.
1 day of actual work. 3 days of waiting for store reviews.
The pipeline can't make Apple faster. But it can make sure we use the wait time on the next game.
End of the journey — game live
That's it. 7 steps. 1 to 7 days.
If you read every post in this series, you now know exactly how we make games. The pipeline isn't magic — it's just specific rules and specific roles.
You can copy any of it. The whole thing is on this site.
The real takeaway
You don't need a big team to ship. You need:
- Clear roles, even if they're all AI sessions.
- Strict written rules the AI must follow.
- Automated tests that block bad code.
- A producer (you) who says no.
Everything else is execution. The AI handles execution well. The bottleneck is good briefs.
Read the games we built this way
- BOP — Tap. Bounce. Survive.
- Mole Bash — Rhythm whack arcade
- Mirror Match — Two souls, one reflection
- Dodge or Die — Hypercasual survival
Thanks for reading
If this helped, share the post or get in touch — we love hearing from people building with AI.