/about

About GXAI Studio

/mission

Our Mission

We use AI to ship games faster, weirder, and more often. One developer, one AI, one game per month.

/studio

The Studio

GXAI Studio is a one-person indie studio experimenting with AI-assisted game development. Every game is a test of how far human + AI collaboration can go — from the first pitch to the live build.

/stats

By the numbers

0
Games shipped
0k
Lines of code
0
AI-paired commits
0
Languages live

/process

AI Development Process

Every game follows the same six-step pipeline. AI does the heavy lifting; humans pick the direction.

1. Idea

Pitch one mechanic in one sentence. If it doesn't fit, it doesn't ship.

2. Spec

AI drafts a design document. We refine it until it reads like a contract.

3. Contract

Lock the interface, constraints, and acceptance tests. No surprises later.

4. Implement

AI codes against the contract. Every diff is reviewed by a human.

5. Test

Automated playthroughs run the actual game loop — no LLM in the test path.

6. Ship

Tests pass → deploy. Failure → back to spec, never patch around it.

/pipeline-detail

The full pipeline

Six phases. Each one produces a concrete artifact. Nothing gets skipped, nothing gets rushed.

01 — Discovery

1–4 hours

We turn vague ideas into shippable mechanics.

Inputs: 1-paragraph pitch, optional reference games

Outputs: One-pager, mechanics tree, target audience

  • [01] 30-min call: explore your vision and constraints
  • [02] Reference research: 5–10 similar games analyzed
  • [03] Mechanics breakdown: identify the core hook
  • [04] Risk audit: trademark, platform fit, monetization
  • [05] Go/no-go decision with written rationale

02 — Design Spec

2–8 hours

AI drafts. We harden it until every line is testable.

Inputs: One-pager from Phase 01

Outputs: Design doc, art direction, audio direction

  • [01] AI generates first draft of GDD (Game Design Document)
  • [02] Pushback rounds: every fuzzy line gets sharpened
  • [03] Visual mockups: 3–5 key screens (HTML/SVG)
  • [04] Audio brief: procedural vs licensed, key SFX list
  • [05] Sign-off checklist before any code is written

03 — Contract Lock

1–4 hours

Interface frozen. Acceptance tests written. No surprises.

Inputs: Approved design spec

Outputs: contracts/*.md, acceptance tests stub

  • [01] Decompose features into atomic contracts (1 contract = 1 testable unit)
  • [02] Define interface signatures (TypeScript types or proto)
  • [03] Write acceptance tests as code stubs (T1, T2, T3...)
  • [04] Identify constraints and edge cases (C1, C2, C3...)
  • [05] Lock contract version — any change = new contract

04 — Implementation

1–3 days

AI codes. Human reviews every diff. Atomic commits only.

Inputs: Locked contracts

Outputs: Working build on dev URL

  • [01] AI implements one contract at a time
  • [02] Conventional commits: feat / fix / refactor scopes
  • [03] Daily code review: anything outside the contract gets reverted
  • [04] Phaser/React/Capacitor — stack picked to fit each game
  • [05] Build deployed to a staging URL daily for live preview

05 — Use-Case Tests

4–12 hours

Real playthroughs. No LLM in the test path.

Inputs: Implementation + acceptance test stubs

Outputs: UC pass report, video captures, perf graphs

  • [01] Automated playthroughs run actual physics, input, scoring
  • [02] Playwright + headless Phaser harness
  • [03] Performance gates: 60fps on iPhone SE, <100ms input latency
  • [04] Multi-locale snapshot testing (EN/ZH/JA/KO/DE)
  • [05] Failed test = blocked deploy, no exceptions

06 — Ship & Iterate

4–24 hours

Submission to stores + post-launch tuning.

Inputs: All UCs green

Outputs: Live on App Store + Google Play, dev log

  • [01] Capacitor wrap: iOS Xcode build, Android signed APK
  • [02] Store listings: 5 locales, screenshots, video preview
  • [03] Submission review handling (Apple ~24h, Google ~48h)
  • [04] Crashlytics + Firebase analytics live from day 1
  • [05] Post-launch tuning: A/B difficulty, daily dev log entries

From first message to App Store: typically 1–7 days.

/why-ai

Why AI changes the math

Traditional indie dev: 6 months, $50k, 1 game. Our pipeline: 1 week, $5k, 1 game.

| Aspect                 | Traditional indie | GXAI pipeline      |
| ---------------------- | ----------------- | ------------------ |
| Time to ship           | 3–6 months        | 1–7 days           |
| Team size              | 3–5 people        | 1 human + AI       |
| Cost                   | $30k–$80k         | $3k–$15k           |
| Iteration speed        | Weekly builds     | Hourly builds      |
| Languages out the door | 1 (English)       | 5 simultaneously   |
| Documentation          | Often missing     | Auto-generated     |
| Test coverage          | Sparse manual QA  | Automated UC suite |

/deliverables

What you get

Every project ships with a complete artifact bundle, not just a binary.

Design Document (GDD)

Markdown spec covering mechanics, economy, art direction, audio brief.

Locked Contracts

Per-feature interface + constraints + acceptance tests, version-controlled.

Source Code

Full repo (TypeScript/JS) + build scripts + CI/CD pipeline.

UC Test Suite

Automated playthroughs, perf benchmarks, regression harness.

Store-Ready Builds

iOS .ipa + Android .aab signed and ready to submit.

5-Language Bundle

All UI + store listings translated to EN/ZH/JA/KO/DE.

Analytics Setup

Firebase Analytics + Crashlytics + custom events live from day 1.

Dev Log + Postmortem

Day-by-day record of decisions, plus a launch retrospective.

/why-us

Why work with us

When you have a game idea, there are three ways to get it built. Here's the honest comparison.

vs Freelancer

Faster (parallel AI streams), cheaper (no agency markup), more transparent (every contract + test in your repo).

vs Studio

We ship in weeks, not quarters. You own everything end-to-end: code, assets, accounts, store listings.

vs No-Code

Real native apps via Capacitor. No platform lock-in. Full source code yours forever.

/faq

Frequently asked

Who owns the IP?

You do — 100%. Code, assets, store listings, brand. We hand over everything at the end.

What if I want changes after launch?

Contracts make changes cheap. Each new feature is a new contract; we estimate, lock, ship.

Can you work with my existing brand/IP?

Yes. Bring your style guide and we adapt the visual direction phase to match.

Will the game use AI to generate runtime content?

Only if you want it to. The game logic itself never depends on an LLM at runtime — only at build time.

What about app store rejection?

We handle review responses. Most games pass first review because we follow Apple and Google guidelines from the spec phase onward.

Do you do multiplayer / backend?

Yes. Firebase Realtime DB + Firestore for sub-100ms sync. See Tap War + Mole Bash for examples.

/contract-first

Every feature is a contract.

Before any code, we lock down purpose, interface, constraints, and acceptance tests. The AI implements against this contract — adding nothing, removing nothing.

When tests pass, the contract is satisfied. When they fail, we go back to the spec — never patch around it.

contracts/bounce-controller.md
## Contract: BounceController v1.0

### Purpose
Manage ball physics and tap-to-bounce input.

### Interface
- onTap(): apply downward force F = mass * 9.8 * 1.4
- update(dt): step physics, decay energy 15% per bounce

### Constraints
- C1: Ball energy must never go negative
- C2: Combo resets if tap is > 50ms from apex
- C3: No raster assets — SVG/procedural only

### Acceptance Tests
- T1: 30s no-input run ends in death animation
- T2: Perfect-tap × 10 awards combo bonus
$ npm run usecases
# Agent runs UC playthroughs (no LLM in test path)
$ npm run usecases -- --save-results

▶ UC-01  tap-bounce-survives-spike        PASS  (1.4s)
▶ UC-02  no-input-loses-energy            PASS  (2.1s)
▶ UC-03  combo-resets-on-late-tap         PASS  (0.8s)
▶ UC-04  boss-wall-defeats-ball           FAIL  ✗

  spec: ball must rebound off wall < 100ms
  actual: 142ms

✗ Test failed → reject deploy → back to spec

/tests-are-real

No LLM in the test path.

Automated playthroughs run the actual game loop — physics, input, scoring. If a test fails, the deploy is rejected. No exceptions.

The AI helped write the tests. It doesn't get to grade its own homework.
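What makes this enforceable is a fully deterministic loop: same seed, same inputs, same verdict. A minimal sketch, assuming the test build swaps Math.random() for a seeded PRNG (mulberry32 here; not necessarily the studio's actual choice):

```typescript
// Deterministic playthroughs: same seed → same run → reproducible pass/fail.
function mulberry32(seed: number): () => number {
  // Small, well-known PRNG; stands in for Math.random() in test builds.
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Two runs with the same seed must produce identical spawn sequences.
const runA = Array.from({ length: 5 }, mulberry32(42));
const runB = Array.from({ length: 5 }, mulberry32(42));
console.log(JSON.stringify(runA) === JSON.stringify(runB)); // true
```

A flaky test can't gate a deploy; a seeded one can.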

/day-in-the-life

A day in the AI-dev life

09:00

Coffee + roadmap

Pick the next contract from the backlog. One feature, one day.

10:00

Spec session

AI drafts the contract. I push back on every fuzzy line until it's testable.

11:30

Implement

AI codes against the contract. I review every diff. Anything outside scope gets reverted.

14:00

Run UCs

Automated playthroughs. If they pass, ship. If they fail, root-cause — no patches.

16:00

Polish + commit

Atomic conventional commits. PR description writes itself from the contract.

17:30

Ship + record

Deploy → screenshot → dev log entry → tomorrow's roadmap.

/stack

Tools we use

Claude
Phaser 3
React
Next.js
Firebase
Capacitor
Vite
TypeScript
Tailwind
Pixi.js
Web Audio
Playwright

Ready to ship together?

Pitch Your Idea